Metabolically active microbial communities are present in a wide range of subsurface environments. Techniques such as the enumeration of microbial cells, activity measurements with radiotracer assays, and the analysis of porewater constituents are currently used to explore the subsurface biosphere, alongside molecular biological analyses. However, many of these techniques reach their detection limits because of low microbial activity and abundance. Direct measurements of microbial turnover not only face issues of insufficient sensitivity; they also provide information about only a single specific process, whereas in sediments many different processes can occur simultaneously. Therefore, the development of a new technique to measure total microbial activity would be a major improvement. A new tritium-based hydrogenase-enzyme assay appeared to be a promising tool to quantify total living biomass, even in low-activity subsurface environments. In this PhD project, total microbial biomass and microbial activity were quantified in different subsurface sediments using established techniques (cell enumeration and porewater geochemistry) as well as the new tritium-based hydrogenase enzyme assay. Using a large database of our own cell enumeration data from equatorial Pacific and North Pacific sediments together with published data, it was shown that the global geographic distribution of subseafloor sedimentary microbes varies between sites by 5 to 6 orders of magnitude and correlates with sedimentation rate and distance from land. Based on these correlations, global subseafloor biomass was estimated to be 4.1 petagram C, or ~0.6 % of Earth's total living biomass, which is significantly lower than previous estimates. Despite this massive reduction in estimated biomass, the subseafloor biosphere remains an important player in global biogeochemical cycles. To understand the relationship between microbial activity, abundance, and organic matter flux into the sediment, an expedition to the equatorial Pacific upwelling area and the North Pacific Gyre was carried out. Oxygen respiration rates in subseafloor sediments from the North Pacific Gyre, which are deposited at sedimentation rates of 1 mm per 1000 years, showed that microbial communities can survive for millions of years without a fresh supply of organic carbon. In contrast to the North Pacific Gyre, oxygen was completely depleted within the upper few millimeters to centimeters of the sediments in the equatorial upwelling region, owing to a higher supply of organic matter and higher metabolic activity. The occurrence and variability of electron acceptors over depth and between sites thus make the subsurface a complex environment for the quantification of total microbial activity. Recent studies have shown that electron acceptor processes that were previously thought to exclude each other thermodynamically can occur simultaneously. In many cases, a simple measure of total microbial activity would therefore be a better and more robust solution than assays for several specific processes such as sulfate reduction or methanogenesis. Enzyme and molecular assays provide a more general approach, as they target key metabolic compounds. Since hydrogenase enzymes are ubiquitous in microbes, the recently developed tritium-based hydrogenase radiotracer assay was applied to quantify hydrogenase enzyme activity as a parameter of total living cell activity. Hydrogenase enzyme activity was measured in sediments from different locations (Lake Van, Barents Sea, equatorial Pacific, and Gulf of Mexico).
In sediment samples that contained nitrate, we found the lowest cell-specific enzyme activity, around 10⁻⁵ nmol H₂ cell⁻¹ d⁻¹. With decreasing energy yield of the electron acceptor used, cell-specific hydrogenase activity increased, and maximum values of up to 1 nmol H₂ cell⁻¹ d⁻¹ were found in samples with methane concentrations of >10 ppm. Although hydrogenase activity cannot be converted directly into a turnover rate of a specific process, cell-specific activity factors can be used to identify the dominant metabolism and to quantify the metabolically active microbial population. In another study, on sediments from the Nankai Trough, microbial abundance and hydrogenase activity data show that both the habitat and the activity of subseafloor sedimentary microbial communities have been impacted by seismic activity. An increase in hydrogenase activity near the fault zone revealed that the microbial community was supplied with hydrogen as an energy source and that the microbes were specialized for hydrogen metabolism.
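The cell-specific activities quoted above are simply bulk assay rates normalized by cell abundance. As a worked illustration (the input numbers below are invented for the arithmetic, not measured values from this work):

```latex
% Cell-specific hydrogenase activity = bulk rate / cell abundance
% (illustrative input values only):
a_{\mathrm{cell}} \;=\; \frac{A_{\mathrm{bulk}}}{N_{\mathrm{cells}}}
\;=\; \frac{10^{-1}\ \mathrm{nmol\ H_2\ cm^{-3}\ d^{-1}}}{10^{4}\ \mathrm{cells\ cm^{-3}}}
\;=\; 10^{-5}\ \mathrm{nmol\ H_2\ cell^{-1}\ d^{-1}}
```

which is the order of magnitude reported above for nitrate-bearing samples; comparing such factors across samples is what allows the assay to flag the dominant metabolism.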
Deepening understanding
(2013)
A survey has been carried out in the Computer Science (CS) department at the University of Baghdad to investigate the attitudes of CS students in a female dominant environment, showing the differences between male and female students in different academic years. We also compare the attitudes of the freshman students of two different cultures (University of Baghdad, Iraq, and the University of Potsdam).
We study a boundary value problem for an overdetermined elliptic system of nonlinear first order differential equations with linear boundary operators. Such a problem is solvable for a small set of data, and so we pass to its variational formulation which consists in minimising the discrepancy. The Euler-Lagrange equations for the variational problem are far-reaching analogues of the classical Laplace equation. Within the framework of Euler-Lagrange equations we specify an operator on the boundary whose zero set consists precisely of those boundary data for which the initial problem is solvable. The construction of such operator has much in common with that of the familiar Dirichlet to Neumann operator. In the case of linear problems we establish complete results.
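Schematically, for an overdetermined operator A with linear boundary operator B and data (f, u₀), the discrepancy functional to be minimised can be written as follows (a generic sketch, not the paper's exact formulation):

```latex
% Generic discrepancy functional for an overdetermined problem Au = f, Bu = u_0
% (a schematic sketch, not the paper's exact formulation):
J(u) \;=\; \| Au - f \|_{L^2(\Omega)}^2
\quad \text{over } \{\, u : Bu = u_0 \text{ on } \partial\Omega \,\}
```

whose Euler-Lagrange equation is of the form A*Au = A*f. This makes the "far-reaching analogue" concrete: for the simplest overdetermined operator A = ∇ one has A*A = −Δ, recovering the classical Laplace operator.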
The study of outcrop modeling is located at the interface between two fields of expertise, Sedimentology and Computing Geoscience, which respectively investigate and simulate the geological heterogeneity observed in the sedimentary record. In recent years, modeling tools and techniques have been constantly improved. In parallel, the study of Phanerozoic carbonate deposits has emphasized the common occurrence of a random facies distribution within a single depositional domain. Although both fields of expertise are intrinsically linked during outcrop simulation, their respective advances have not been combined in the literature to enhance carbonate modeling studies. The present study re-examines the modeling strategy adapted to the simulation of shallow-water carbonate systems, based on a close relationship between field sedimentology and modeling capabilities. Three commonly used algorithms, Truncated Gaussian Simulation (TGSim), Sequential Indicator Simulation (SISim), and Indicator Kriging (IK), were evaluated for the first time using visual and quantitative comparisons on an ideally suited carbonate outcrop. The results show that the heterogeneity of carbonate rocks cannot be fully simulated using one single algorithm: the operating mode of each algorithm involves capabilities as well as drawbacks that cannot match all field observations carried out across the modeling area. Two end members in the spectrum of carbonate depositional settings, a low-angle Jurassic ramp (High Atlas, Morocco) and a Triassic isolated platform (Dolomites, Italy), were investigated to obtain a complete overview of the geological heterogeneity in shallow-water carbonate systems. Field sedimentology and statistical analyses performed on the type, morphology, distribution, and association of carbonate bodies, combined with palaeodepositional reconstructions, emphasize similar results. At the basin scale (~1 km), facies associations, composed of facies recording similar depositional conditions, display linear and ordered transitions between depositional domains. In contrast, at the bedding scale (~0.1 km), individual lithofacies types show a mosaic-like distribution consisting of an arrangement of spatially independent lithofacies bodies along the depositional profile. The increase in spatial disorder from the basin to the bedding scale results from the influence of autocyclic factors on the transport and deposition of carbonate sediments. These scale-dependent types of carbonate heterogeneity are linked with the evaluation of the algorithms in order to establish a modeling strategy that considers both the sedimentary characteristics of the outcrop and the modeling capabilities. A surface-based modeling approach was used to model depositional sequences. Facies associations were populated using TGSim to preserve ordered trends between depositional domains. At the lithofacies scale, a fully stochastic approach with SISim was applied to simulate a mosaic-like lithofacies distribution. This new workflow is designed to improve the simulation of carbonate rocks by modeling each scale of heterogeneity individually. Contrary to simulation methods applied in the literature, the present study considers that the use of one single simulation technique is unlikely to correctly model the natural patterns and variability of carbonate rocks.
The implementation of different techniques customized for each level of the stratigraphic hierarchy provides the essential computing flexibility to model carbonate systems. Closer feedback between advances in the fields of Sedimentology and Computing Geoscience should be promoted during future outcrop simulations to enhance 3-D geological models.
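The division of labour between the two algorithms can be made concrete with the standard truncation rule behind TGSim (a textbook sketch, not the thesis's notation): a facies k is assigned wherever a single continuous Gaussian random field falls between fixed thresholds,

```latex
% Truncated Gaussian simulation: facies from thresholding one Gaussian field Z(x),
% with thresholds t_k set from the target facies proportions.
F(x) = k \quad\Longleftrightarrow\quad t_{k-1} \le Z(x) < t_k
```

Because Z is spatially continuous, simulated facies can only transition in a fixed order (e.g. lagoon to shoal to open marine), which is why TGSim preserves the ordered basin-scale trends, whereas SISim simulates each facies indicator independently and is therefore suited to the mosaic-like bedding-scale distributions.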
Relating to students
(2013)
Several mechanisms are proposed to be part of the earthquake triggering process, including static stress interactions and dynamic stress transfer. Significant differences between these mechanisms are particularly expected in the spatial distribution of aftershocks. However, testing the different hypotheses is challenging because it requires considering the large uncertainties involved in stress calculations as well as appropriately accounting for secondary aftershock triggering, which is related to stress changes induced by smaller pre- and aftershocks. In order to evaluate the forecast capability of the different mechanisms, I take the effect of smaller-magnitude earthquakes into account by using the epidemic-type aftershock sequence (ETAS) model, in which the spatial probability distribution of direct aftershocks, if available, is correlated to alternative source information and mechanisms. Surface shaking, rupture geometry, and slip distributions are tested. As an approximation of the shaking level, ShakeMaps are used, which are available in near real-time after a mainshock and could thus be used for first-order forecasts of the spatial aftershock distribution. Alternatively, the use of empirical decay laws related to minimum fault distance is tested, as are Coulomb stress change calculations based on published and random slip models. For comparison, the likelihood values of the different model combinations are analyzed for several well-known aftershock sequences (1992 Landers, 1999 Hector Mine, 2004 Parkfield). The tests show that the fault geometry is the most valuable information for improving aftershock forecasts. Furthermore, they reveal that static stress maps can additionally improve forecasts of off-fault aftershock locations, while the integration of ground shaking data did not improve the results significantly. In the second part of this work, I focus on a procedure to test the information content of inverted slip models, which allows quantifying the information gain when this kind of data is included in aftershock forecasts. For this purpose, the ETAS model based on static stress changes introduced in the first part is applied. The forecast ability of the models is systematically tested for several earthquake sequences and compared to models using random slip distributions; the influence of subfault resolution and of segment strike and dip is also tested. Some of the tested slip models perform very well, in which cases almost no random slip models are found to perform better. By contrast, for some of the published slip models, almost all random slip models perform better. Choosing a different subfault resolution hardly influences the results, as long as the general slip pattern is still reproducible, whereas different strike and dip values strongly influence the results, depending on the standard deviation chosen for randomly selecting the strike and dip values.
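For reference, the ETAS model describes seismicity as a background rate plus triggered aftershocks; in a common parameterisation (given here as a sketch, the thesis's exact variant may differ) the conditional intensity reads

```latex
% ETAS conditional intensity: background rate plus aftershock triggering
\lambda(t,x) \;=\; \mu(x) \;+\; \sum_{i\,:\,t_i < t}
K\, e^{\alpha (m_i - m_0)}\,
\Bigl(1 + \frac{t - t_i}{c}\Bigr)^{-p} f(x - x_i;\, m_i)
```

where f is the spatial probability density of direct aftershocks. It is precisely this spatial kernel f that is replaced here by the alternative source information under test: ShakeMap shaking levels, fault-distance decay laws, or Coulomb stress change maps.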
We consider systems of Euler-Lagrange equations with two degrees of freedom and with Lagrangian being quadratic in velocities. For this class of equations the generic case of the equivalence problem is solved with respect to point transformations. Using Lie's infinitesimal method we construct a basis of differential invariants and invariant differentiation operators for such systems. We describe certain types of Lagrangian systems in terms of their invariants. The results are illustrated by several examples.
Antarctic glacier forefields are extreme environments and pioneer sites for ecological succession. The Antarctic continent serves as a natural laboratory for studying microbial community development because of its special environment, geographic isolation, and little anthropogenic influence. Increasing temperatures due to global warming lead to enhanced deglaciation in cold-affected habitats, and new terrain is becoming exposed to soil formation and accessible for microbial colonisation. This study aims to understand the structure and development of glacier forefield bacterial communities, especially how soil parameters affect the microorganisms and how these are adapted to the extreme conditions of the habitat. To this end, a combination of cultivation experiments and molecular, geophysical, and geochemical analyses was applied to examine two glacier forefields of the Larsemann Hills, East Antarctica. Culture-independent molecular tools such as terminal restriction fragment length polymorphism (T-RFLP), clone libraries, and quantitative real-time PCR (qPCR) were used to determine bacterial diversity and distribution. Cultivation of as yet unknown species was carried out to gain insights into the physiology and adaptation of the microorganisms. Adaptation strategies were studied by determining changes in the cell membrane phospholipid fatty acid (PLFA) inventory of an isolated bacterium in response to temperature and pH fluctuations, and by measuring enzyme activity at low temperature in environmental soil samples. The two studied glacier forefields are extreme habitats characterised by low temperatures, low water availability, and small oligotrophic nutrient pools, and represent sites of different bacterial succession in relation to soil parameters. The investigated sites showed microbial succession at an early stage of soil formation near the ice tongue, in comparison to closely located but older and more developed soil from the forefield. At the early stage, succession is influenced by a deglaciation-dependent areal shift of soil parameters, followed by a variable and predominantly depth-related distribution of the soil parameters that is driven by the extreme Antarctic conditions. The dominant taxa in the glacier forefields are Actinobacteria, Acidobacteria, Proteobacteria, Bacteroidetes, Cyanobacteria, and Chloroflexi. Relating soil characteristics to bacterial community structure showed that soil parameters and soil formation along the glacier forefield influence the distribution of certain phyla. In the early stage of succession, the relatively undifferentiated bacterial diversity reflects the undifferentiated soil development and has a high potential to shift according to past and present environmental conditions. With progressing development, environmental constraints such as water or carbon limitation have a greater influence. By adapting the culturing conditions to the cold and oligotrophic environment, the number of culturable heterotrophic bacteria reached up to 10⁸ colony-forming units per gram of soil, and 148 isolates were obtained. Two new psychrotolerant bacteria, Herbaspirillum psychrotolerans PB1T and Chryseobacterium frigidisoli PB4T, were characterised in detail and described as novel species in the families Oxalobacteraceae and Flavobacteriaceae, respectively. The isolates are able to grow at low temperatures, tolerate temperature fluctuations, and are not specialised to a certain substrate; they are therefore well adapted to the cold and oligotrophic environment.
The adaptation strategies of the microorganisms were analysed in environmental samples and cultures, focussing on extracellular enzyme activity at low temperature and on PLFA analyses. Extracellular phosphatase (pH 11 and pH 6.5), β-glucosidase, invertase, and urease activity were detected in the glacier forefield soils at low temperature (14 °C), catalysing the conversion of various compounds and thereby providing necessary substrates; these enzymes may further play a role in the soil formation and total carbon turnover of the habitat. The PLFA analysis of the newly isolated species C. frigidisoli showed that the cold-adapted strain develops different strategies to maintain cell membrane function under changing environmental conditions by altering its PLFA inventory at different temperatures and pH values. A newly discovered fatty acid, which has not been found in any other microorganism so far, significantly increased at decreasing temperature and low pH and thus plays an important role in the adaptation of C. frigidisoli. This work gives insights into the diversity, distribution, and adaptation mechanisms of microbial communities in oligotrophic cold-affected soils and shows that Antarctic glacier forefields are suitable model systems to study bacterial colonisation in connection with soil formation.
The course timetabling problem can be generally defined as the task of assigning a number of lectures to a limited set of timeslots and rooms, subject to a given set of hard and soft constraints. A modeling language for course timetabling is required to be expressive enough to specify a wide variety of soft constraints and objective functions. Furthermore, the resulting encoding is required to be extensible, for capturing new constraints and for switching them between hard and soft, and flexible enough to deal with different formulations. In this paper, we propose to make effective use of ASP as a modeling language for course timetabling. We show that our ASP-based approach can naturally satisfy the above requirements, through an ASP encoding of the curriculum-based course timetabling problem proposed in the third track of the second International Timetabling Competition (ITC-2007). Our encoding is compact and human-readable, since each constraint is individually expressed by one or two rules. Each hard constraint is expressed using integrity constraints and aggregates of ASP. Each soft constraint S is expressed by rules whose head is of the form penalty(S, V, C), where a violation V and its penalty cost C are detected and calculated, respectively, in the body. We carried out experiments on four different benchmark sets with five different formulations. Compared with the previous best known bounds, we succeeded for many combinations of problem instances and formulations either in improving the bounds or in reproducing them.
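The encoding itself is in ASP and is not reproduced here; as a language-neutral illustration of what a penalty(S, V, C) rule computes, the following Python sketch scores the ITC-2007 MinimumWorkingDays soft constraint. The function name and data layout are hypothetical; only the constraint's standard ITC-2007 weight (5 points per missing day) is taken from the competition definition.

```python
# Illustrative Python analogue of an ASP rule with head penalty(S, V, C):
# scoring the ITC-2007 "MinimumWorkingDays" soft constraint.
from collections import defaultdict

def min_working_days_penalties(assignment, min_days, weight=5):
    """assignment: iterable of (course, day, period, room) tuples.
    Returns (S, V, C) triples: constraint name S, violating course V, cost C."""
    used_days = defaultdict(set)
    for course, day, _period, _room in assignment:
        used_days[course].add(day)
    triples = []
    for course, minimum in min_days.items():
        missing = minimum - len(used_days[course])
        if missing > 0:  # each missing working day costs `weight` points
            triples.append(("min_working_days", course, weight * missing))
    return triples

# A course required on 3 distinct days but scheduled on only 2 costs 5 points.
assert min_working_days_penalties(
    [("c1", "Mon", 1, "r1"), ("c1", "Mon", 2, "r1"), ("c1", "Wed", 1, "r1")],
    {"c1": 3},
) == [("min_working_days", "c1", 5)]
```

In the ASP encoding described above, the same computation is declarative: the rule body detects the violation V and derives its cost C, and the solver minimizes the sum of all derived penalty costs.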
The time-dependent approach to electronic spectroscopy, as popularized by Heller and coworkers in the 1980s, is applied here in conjunction with linear-response, time-dependent density functional theory to study vibronic absorption, emission, and resonance Raman spectra of several diamondoids. Two-state models and the harmonic and Condon approximations are used for the calculations, making them easily applicable to larger molecules. The method is applied to nine pristine lower and higher diamondoids: adamantane, diamantane, triamantane, and three isomers each of tetramantane and pentamantane. We also consider a hybrid species “Dia = Dia” – a shorthand notation for a recently synthesized molecule comprising two diamantane units connected by a C=C double bond. We resolve and interpret trends in the optical and vibrational properties of these molecules as a function of their size, shape, and symmetry, as well as the effects of “blending” with sp2-hybridized C atoms. Time-dependent correlation functions facilitate the computations and shed light on the vibrational dynamics following electronic transitions.
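In Heller's time-dependent picture, the vibronic absorption cross-section follows from the autocorrelation function of a propagated wavepacket; schematically (a standard textbook form, prefactors omitted):

```latex
% Time-dependent (Heller) expression for vibronic absorption:
% Fourier transform of the wavepacket autocorrelation function.
\sigma_{\mathrm{abs}}(\omega) \;\propto\; \omega
\int_{-\infty}^{\infty} dt\; e^{\,i(\omega + E_i/\hbar)\,t}\,
\langle \phi \,\vert\, \phi(t) \rangle
```

Here |φ⟩ is the initial vibrational state times the transition dipole, promoted to the excited-state potential surface, |φ(t)⟩ its time-evolved counterpart, and E_i the initial vibrational energy. Within the harmonic and Condon approximations used here, ⟨φ|φ(t)⟩ has a closed form, and emission and resonance Raman intensities follow from closely related correlation functions.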
Intracellular photoactivation of caged cGMP induces myosin II and actin responses in motile cells
(2013)
Cyclic GMP (cGMP) is a ubiquitous second messenger in eukaryotic cells. It is assumed to regulate the association of myosin II with the cytoskeleton of motile cells. When cells of the social amoeba Dictyostelium discoideum are exposed to chemoattractants or to increased osmotic stress, intracellular cGMP levels rise, preceding the accumulation of myosin II in the cell cortex. To directly investigate the impact of intracellular cGMP on cytoskeletal dynamics in a living cell, we released cGMP inside the cell by laser-induced photo-cleavage of a caged precursor. With this approach, we could directly show in a live cell experiment that an increase in intracellular cGMP indeed induces myosin II to accumulate in the cortex. Unexpectedly, we observed for the first time that the amount of filamentous actin in the cell cortex also increases upon a rise in the cGMP concentration, independently of cAMP receptor activation and signaling. We discuss our results in the light of recent work on the cGMP signaling pathway and suggest possible links between cGMP signaling and the actin system.
Data integration aims to combine data from different sources and to provide users with a unified view of these data. This task is as challenging as it is valuable. In this thesis we propose algorithms for dependency discovery that provide necessary information for data integration. We focus on inclusion dependencies (INDs) in general and on a special form named conditional inclusion dependencies (CINDs): (i) INDs enable the discovery of structure in a given schema. (ii) INDs and CINDs support the discovery of cross-references or links between schemas. An IND “A in B” simply states that all values of attribute A are included in the set of values of attribute B. We propose an algorithm that discovers all inclusion dependencies in a relational data source. The challenge of this task lies in the complexity of testing all attribute pairs and, further, of comparing all of each attribute pair's values. The complexity of existing approaches depends on the number of attribute pairs, while ours depends only on the number of attributes. Thus, our algorithm makes it possible to profile entirely unknown data sources with large schemas by discovering all INDs. Further, we provide an approach to extract foreign keys from the identified INDs. We extend our IND discovery algorithm to also find three special types of INDs: (i) composite INDs, such as “AB in CD”, (ii) approximate INDs that allow a certain number of values of A to not be included in B, and (iii) prefix and suffix INDs that represent special cross-references between schemas. Conditional inclusion dependencies are inclusion dependencies with a limited scope defined by conditions over several attributes; only the matching part of the instance must adhere to the dependency. We generalize the definition of CINDs, distinguishing covering and completeness conditions, and define quality measures for conditions. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. The challenge for this task is twofold: (i) Which (and how many) attributes should be used for the conditions? (ii) Which attribute values should be chosen for the conditions? Previous approaches rely on pre-selected condition attributes or can only discover conditions applying to quality thresholds of 100%. Our approaches were motivated by two application domains: data integration in the life sciences and link discovery for linked open data. We show the efficiency and the benefits of our approaches for use cases in these domains.
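To make the unary case concrete, the following is a minimal sketch of value-indexed IND discovery in the spirit of the complexity claim above (a single pass over an inverted value index rather than pairwise attribute tests); it is an illustration, not the thesis's actual algorithm:

```python
# A minimal sketch of unary IND discovery via a value -> attributes inverted
# index (not the thesis's actual algorithm): every attribute starts with all
# others as IND candidates, and each value narrows the candidates of the
# attributes it occurs in to the set of attributes sharing that value.
from collections import defaultdict

def discover_unary_inds(columns):
    """columns: dict mapping attribute name -> iterable of values.
    Returns dict: attribute A -> set of attributes B with IND 'A in B'."""
    index = defaultdict(set)               # value -> attributes containing it
    for attr, values in columns.items():
        for v in values:
            index[v].add(attr)
    candidates = {a: set(columns) - {a} for a in columns}
    for attrs_with_value in index.values():
        for attr in attrs_with_value:      # 'A in B' only if B also has v
            candidates[attr] &= attrs_with_value
    return candidates

# Example: every orders.customer_id value appears in customers.id.
inds = discover_unary_inds({
    "customers.id": [1, 2, 3, 4],
    "orders.customer_id": [2, 3, 3],
})
assert "customers.id" in inds["orders.customer_id"]
assert inds["customers.id"] == set()       # 1 and 4 never occur in orders
```

The per-value work here grows with the number of attributes sharing that value, not with the number of attribute pairs, which is the intuition behind the complexity argument above.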
Background: DNA fragments carrying internal recognition sites for the restriction endonucleases intended for cloning into a target plasmid pose a challenge for conventional cloning.
Results: A method for directional insertion of DNA fragments into plasmid vectors has been developed. The target sequence is amplified from a template DNA sample by PCR using two oligonucleotides each containing a single deoxyinosine base at the third position from the 5' end. Treatment of such PCR products with endonuclease V generates 3' protruding ends suitable for ligation with vector fragments created by conventional restriction endonuclease reactions.
Conclusions: The developed approach generates terminal cohesive ends without the use of Type II restriction endonucleases and is thus independent of the DNA sequence. Due to PCR amplification, minimal amounts of template DNA are required. Using the robust Taq enzyme or a proofreading Pfu DNA polymerase mutant, the method is applicable to a broad range of insert sequences. Appropriate primer design enables direct incorporation of terminal DNA sequence modifications, such as tag addition, insertions, deletions, and mutations, into the cloning strategy. Furthermore, the restriction sites of the target plasmid can be either retained or removed.
Black shales are sedimentary rocks with a high content of organic carbon, which gives them a dark greyish to black colour. Due to their potential to contain oil or gas, black shales are of great interest for supporting the worldwide energy supply. An integrated seismic investigation of the Lower Palaeozoic black shales was carried out on the Danish island of Bornholm to locate the shallow-lying Alum Shale layer and its surrounding formations and to characterize its potential as a source rock. To this end, two seismic experiments along a total of three crossing profiles were carried out in October 2010 and in June 2012 in the southern part of the island. Two different active measurements were conducted, with either a weight-drop source or a minivibrator. Additionally, the ambient noise field was recorded at the study location over a time interval of about one day, and a laboratory analysis of borehole samples was carried out. The seismic profiles were positioned as close as possible to two scientific boreholes, which were used for comparative purposes. The seismic field data were analyzed with traveltime tomography, surface wave inversion, and seismic interferometry to obtain P-wave and S-wave velocity models of the subsurface. The P-wave velocity models determined for all three profiles clearly locate the Alum Shale layer between the Komstad Limestone layer on top and the Læså Sandstone Formation at the base of the models. The black shale layer has P-wave velocities around 3 km/s, which are lower than those of the adjacent formations. The very good agreement between the sonic log and the vertical velocity profiles of the two seismic lines that directly cross the borehole where the sonic log was conducted demonstrates the reliability of the traveltime tomography. Correlating the seismic velocities with the content of organic carbon is an important task for characterizing the reservoir properties of a black shale formation. It is not possible without calibration, but in combination with a full 2D tomographic image of the subsurface it yields the subsurface distribution of the organic material. The S-wave model obtained by surface wave inversion of the vibroseis data of one of the profiles also images the Alum Shale layer very well, with S-wave velocities around 2 km/s. Although individual 1D velocity models were determined for each of the source positions, the subsurface S-wave velocity distribution is very uniform, with a good match between the single models. A genuinely new approach described here is the application of seismic interferometry to a very small study area and a rather short time interval. Also new is the selective procedure of only using time windows with the best cross-correlation signals to compute the final interferograms. Due to the small scale of the interferometry, even P-wave signals can be observed in the final cross-correlations. In the laboratory measurements, the seismic body waves were recorded at different pressure and temperature stages; for this purpose, samples from different depths of the Alum Shale were available from one of the scientific boreholes at the study location. The measured velocities vary strongly with changing pressure or temperature. Recordings with wave propagation both parallel and perpendicular to the bedding of the samples reveal a large degree of anisotropy for the P-wave velocity, whereas the S-wave velocity is almost independent of the wave direction.
The calculated velocity ratio is also highly anisotropic, with very low values for the perpendicular samples and very high values for the parallel ones. Interestingly, the laboratory velocities of the perpendicular samples are comparable to the velocities of the field experiments, indicating that the field measurements are sensitive to wave propagation in the vertical direction. The velocity ratio is also calculated from the P-wave and S-wave velocity models of the field experiments. Again, the Alum Shale can be clearly separated from the adjacent formations because it shows overall very low vP/vS ratios, around 1.4. This very low velocity ratio indicates the presence of gas in the black shale formation. With the combination of all the different methods described here, a comprehensive interpretation of the seismic response of the black shale layer can be made and the hydrocarbon source rock potential can be estimated.
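The diagnostic value of the vP/vS ratio can be made explicit through the standard isotropic-elasticity relation to Poisson's ratio:

```latex
% Poisson's ratio from the velocity ratio (isotropic elasticity):
\nu \;=\; \frac{(v_P/v_S)^2 - 2}{2\,\bigl[(v_P/v_S)^2 - 1\bigr]},
\qquad
\left.\nu\right|_{v_P/v_S = 1.4} \;=\; \frac{1.96 - 2}{2\,(1.96 - 1)} \;\approx\; -0.02
```

a value near zero, far below the roughly 0.25 to 0.35 common in water-saturated sedimentary rocks (where vP/vS is typically 1.7 to 2.0), which is why such low ratios are taken as an indicator of gas.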
Climatic variations and human activity now, and increasingly in the future, cause land cover changes and introduce perturbations into the terrestrial carbon reservoirs in vegetation, soil, and detritus. Optical remote sensing, and in particular Imaging Spectroscopy, has shown the potential to quantify land surface parameters over large areas by taking advantage of the characteristic interactions of incident radiation with the physico-chemical properties of a material. The objective of this thesis is to quantify key soil parameters, including soil organic carbon, using field and Imaging Spectroscopy. Organic carbon, iron oxides, and clay content are selected for analysis to provide indicators of ecosystem function in relation to land degradation and, additionally, to facilitate a quantification of carbon inventories in semiarid soils. The semiarid Albany Thicket Biome in the Eastern Cape Province of South Africa was chosen as the study site. It provides a regional example of a semiarid ecosystem that currently undergoes land changes due to unadapted management practices and that will furthermore have to face climate-change-induced land changes in the future. The thesis is divided into three methodical steps. Based on reflectance spectra measured in the field and chemically determined constituents of the upper topsoil, physically based models are developed to quantify soil organic carbon, iron oxides, and clay content. Taking account of the benefits and limitations of existing methods, the approach is based on the direct application of known diagnostic spectral features and their combination with multivariate statistical approaches. It benefits from the collinearity of several diagnostic features and a number of their properties to reduce signal disturbances caused by other spectral features. In the following step, the acquired hyperspectral image data are prepared for an analysis of soil constituents. The data show a large spatial heterogeneity caused by the patchiness of the natural vegetation in the study area, which is inherent to most semiarid landscapes. Spectral mixture analysis is performed and used to deconvolve non-homogeneous pixels into their constituent components. For soil-dominated pixels, the subpixel information is used to remove the spectral influence of vegetation and to approximate the pure spectral signature coming from the soil. This step is an integral part when working in natural, non-agricultural areas where pure bare-soil pixels are rare. It is identified as the largest benefit within the multi-stage methodology, providing the basis for a successful and unbiased prediction of soil constituents from hyperspectral imagery. With the proposed approach it is possible (1) to significantly increase the spatial extent of derived information on soil constituents to areas with about 40 % vegetation coverage and (2) to reduce the influence of materials such as vegetation on the quantification of soil constituents to a minimum. Subsequently, soil parameter quantities are predicted by applying the feature-based soil prediction models to the maps of locally approximated soil signatures. Thematic maps showing the spatial distribution of the three considered soil parameters in October 2009 are produced for the Albany Thicket Biome of South Africa. The maps are evaluated for their potential to detect erosion-affected areas as effects of land changes and to identify degradation hot spots in order to support local restoration efforts.
A regional validation, carried out using available ground truth sites, suggests remaining factors disturbing the correlation of spectral characteristics and chemical soil constituents. The approach is developed for semiarid areas in general and is not adapted to specific conditions in the study area. All processing steps of the developed methodology are implemented in software modules, and crucial steps of the workflow are fully automated. The transferability of the methodology is shown for simulated data of the future EnMAP hyperspectral satellite. Soil parameters are successfully predicted from these data despite intense spectral mixing within the lower-spatial-resolution EnMAP pixels. This study shows an innovative approach to using Imaging Spectroscopy for mapping key soil constituents, including soil organic carbon, over large areas in a non-agricultural ecosystem and under consideration of partial vegetation coverage. It can contribute to a better assessment of soil constituents that describe ecosystem processes relevant to detecting and monitoring land changes. The maps further provide an assessment of the current carbon inventory in soils, valuable for carbon balances and carbon mitigation products.
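The spectral mixture analysis underlying the vegetation-removal step is, in its standard linear form (given here as a generic sketch, not the thesis's exact implementation):

```latex
% Linear spectral mixture model: measured pixel reflectance as a fractional
% combination of endmember spectra (e.g. soil, green/dry vegetation, shade).
r_\lambda \;=\; \sum_{i=1}^{n} f_i\, e_{i,\lambda} + \varepsilon_\lambda ,
\qquad \sum_{i=1}^{n} f_i = 1, \quad f_i \ge 0
```

where r_λ is the measured reflectance in band λ, e_{i,λ} are the endmember spectra, and the cover fractions f_i are estimated per pixel by constrained least squares; for soil-dominated pixels the vegetation terms can then be subtracted and the residual renormalised by the soil fraction to approximate the pure soil signature.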
Deepening understanding
(2013)
1. Key concepts
2. What students should have done
3. What students did
4. Deepening understanding
5. General description of deepening understanding
6. Why is deepening understanding an important stage?
7. How does deepening understanding occur in the lessons and some examples
8. Possible difficulties
9. Conclusion
Systems of Systems (SoS) have received a lot of attention recently. In this thesis we focus on SoS that are built atop the techniques of Service-Oriented Architectures and thus combine the benefits and challenges of both paradigms. For this thesis we understand SoS as ensembles of single autonomous systems that are integrated into a larger system, the SoS. The interesting fact about these systems is that the previously isolated systems are still maintained, improved, and developed on their own. Structural dynamics is an issue in SoS, as systems can join and leave the ensemble at any point in time. This, together with the fact that the cooperation among the constituent systems is not necessarily observable, means that we consider these systems to be open systems. Of course, the system has a clear boundary at each point in time, but this boundary can only be identified by halting the complete SoS; halting a system of that size, however, is practically impossible. Often SoS are combinations of software systems and physical systems, so a failure in the software system can have a serious physical impact, which easily makes an SoS of this kind a safety-critical system. The contribution of this thesis is a modelling approach that extends OMG's SoaML and essentially relies on collaborations and roles as an abstraction layer above the components. This allows us to describe SoS at an architectural level. We also give a formal semantics for our modelling approach, which employs hybrid graph-transformation systems. The modelling approach is accompanied by a modular verification scheme that is able to cope with the complexity constraints implied by the SoS' structural dynamics and size. Building such autonomous systems as SoS without evolution at the architectural level, i.e. the adding and removing of components and services, would be inadequate. Therefore our approach directly supports the modelling and verification of evolution.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
A method is presented of acquiring the principles of three sorting algorithms through developing interactive applications in Excel.
The genetic code is degenerate; thus, protein evolution does not uniquely determine the coding sequence. One of the puzzles in evolutionary genetics is therefore to uncover evolutionary driving forces that result in specific codon choice. In many bacteria, the first 5-10 codons of protein-coding genes are often codons that are less frequently used in the rest of the genome, an effect that has been argued to arise from selection for slowed early elongation to reduce ribosome traffic jams. However, genome analysis across many species has demonstrated that the region shows reduced mRNA folding consistent with pressure for efficient translation initiation. This raises the possibility that unusual codon usage is a side effect of selection for reduced mRNA structure. Here we discriminate between these two competing hypotheses, and show that in bacteria selection favours codons that reduce mRNA folding around the translation start, regardless of whether these codons are frequent or rare. Experiments confirm that primarily mRNA structure, and not codon usage, at the beginning of genes determines the translation rate.
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why, can a software system be designed and realized which supports the stakeholders to do their work. To capture and structure requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remains complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure consistency of stakeholder scenarios, such methods introduce additional overhead since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes as they are used in other disciplines such as design, on the other hand, allow designers to feasibly validate and iterate concepts and requirements with stakeholders. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. By simulating and animating such specifications in a remote domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, through observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet this challenge. Teachers’ Tryscience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards, and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
The Arctic is considered a focal region in the ongoing climate change debate: the currently observed and predicted climate warming is particularly pronounced in the high northern latitudes. Rising temperatures in the Arctic cause progressively deeper and longer permafrost thawing during the Arctic summer, creating an ‘active layer’ with high bioavailability of nutrients and labile carbon for microbial consumption. The microbial mineralization of permafrost carbon creates large amounts of greenhouse gases, including carbon dioxide and methane, which can be released to the atmosphere, creating a positive feedback to global warming. However, to date, the microbial communities that drive the overall carbon cycle, and specifically methane production, in the Arctic are poorly constrained. To assess how these microbial communities will respond to the predicted climate changes, such as an increase in atmospheric and soil temperatures causing increased bioavailability of organic carbon, it is necessary to investigate the current status of this environment, but also how these microbial communities reacted to climate changes in the past. This PhD thesis investigated three records from two study sites in the Russian Arctic, comprising permafrost, lake shore, and lake deposits from Siberia and Chukotka. A combined stratigraphic approach of microbial and molecular organic geochemical techniques was used to identify and quantify characteristic microbial gene and lipid biomarkers. Based on these data, it was possible to characterize the microbial communities involved in past carbon cycling during the Middle Pleistocene and the Late Pleistocene to Holocene and to identify their climate response. It is shown that previous warmer periods were associated with an expansion of bacterial and archaeal communities throughout the Russian Arctic, similar to present-day conditions. In contrast, past glacial and stadial periods experienced a substantial decrease in the abundance of Bacteria and Archaea. This trend can also be confirmed for the community of methanogenic archaea, which were highly abundant and diverse during warm and particularly wet conditions. For the terrestrial permafrost, a direct effect of temperature on the microbial communities is likely. In contrast, it is suggested that the temperature rise in the course of the glacial-interglacial climate variations led to an increase in primary production in the Arctic lake setting, as can be seen in the corresponding biogenic silica distribution. The availability of this algae-derived carbon is suggested to be a driver of the observed pattern in microbial abundance. This work demonstrates the effect of climate changes on the community composition of methanogenic archaea: Methanosarcina-related species were abundant throughout the Russian Arctic and were able to adapt to changing environmental conditions, whereas members of the Methanocellales and Methanomicrobiales were not able to adapt to past climate changes. This PhD thesis provides first evidence that past climatic warming led to an increased abundance of microbial communities in the Arctic, closely linked to the cycling of carbon and methane production. With the predicted climate warming, it may therefore be anticipated that microbial communities will expand substantially.
Increasing temperatures in the Arctic will affect the temperature-sensitive parts of the current microbial communities, possibly leading to a suppression of cold-adapted species and the prevalence of methanogenic archaea that tolerate or adapt to increasing temperatures. These changes in the composition of methanogenic archaea will likely increase the methane production potential of high-latitude terrestrial regions, changing the Arctic from a carbon sink to a carbon source.
The problem under consideration in this thesis is a two-level atom in a photonic crystal, driven by a pumping laser. The photonic crystal provides an environment for the atom that modifies the decay of the excited state, especially if the atomic frequency is close to the band gap. The population inversion is investigated, as well as the emission spectrum. The dynamics is analysed in the context of open quantum systems. Due to the multiple reflections in the photonic crystal, the system has a finite memory that invalidates the Markovian approximation. In the Heisenberg picture, the equations of motion for the system variables form an infinite hierarchy of integro-differential equations; to obtain a closed system, approximations such as a weak-coupling approximation are needed. The thesis starts with a simple photonic crystal that is amenable to analytic calculations: a one-dimensional photonic crystal consisting of alternating layers. The Bloch modes inside and the vacuum modes outside a finite crystal are linked by a transformation matrix that is interpreted as a transfer matrix. Formulas for the band structure, the reflection from a semi-infinite crystal, and the local density of states in absorbing crystals are derived; defect modes and negative refraction are discussed. The quantum optics part of the work starts with the discussion of three problems that are related to the full resonance fluorescence problem: a pure dephasing model, the driven atom, and resonance fluorescence in free space. In the lowest order of the system-environment coupling, the one-time expectation values for the full problem are calculated analytically, and the stationary states are discussed for certain cases. For the calculation of the two-time correlation functions and spectra, the additional problem of correlations between the two times appears. In the Markovian case, the quantum regression theorem is valid; in the general case, the fluctuation-dissipation theorem can be used instead. The two-time correlation functions are calculated by both methods and, within the chosen approximations, both methods deliver the same result. Several plots show the dependence of the spectrum on the parameters, and some examples of squeezing spectra are shown for different approximations. A projection-operator method is used to establish two kinds of Markovian expansion, with and without time convolution. The lowest order is identical to the lowest order in the system-environment coupling, but higher orders give different results.
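The two expansions mentioned at the end correspond to the two standard projection-operator forms of the reduced dynamics; schematically (interaction picture, textbook notation):

```latex
% Projection-operator master equations for the reduced density operator:
% time-convolution (Nakajima-Zwanzig) vs. time-convolutionless form.
\frac{d}{dt}\rho_S(t) = \int_0^t \mathcal{K}(t-s)\,\rho_S(s)\,ds
\qquad \text{vs.} \qquad
\frac{d}{dt}\rho_S(t) = \mathcal{L}(t)\,\rho_S(t)
```

Here the memory kernel K encodes the finite memory created by multiple reflections in the crystal. At lowest order in the system-environment coupling both expansions coincide, consistent with the statement above; a memoryless kernel K(τ) ∝ δ(τ) recovers the Markovian master equation and with it the validity of the quantum regression theorem.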
Background: Recent studies have demonstrated a superior diagnostic accuracy of cardiovascular magnetic resonance (CMR) for the detection of coronary artery disease (CAD). We aimed to determine the comparative cost-effectiveness of CMR versus single-photon emission computed tomography (SPECT).
Methods: Based on Bayes' theorem, a mathematical model was developed to compare the cost-effectiveness and utility of CMR with SPECT in patients with suspected CAD. Invasive coronary angiography served as the standard of reference. Effectiveness was defined as the accurate detection of CAD, and utility as the number of quality-adjusted life-years (QALYs) gained. Model input parameters were derived from the literature, and the cost analysis was conducted from a German health care payer's perspective. Extensive sensitivity analyses were performed.
Results: Reimbursement fees represented only a minor fraction of the total costs incurred by a diagnostic strategy. Increases in the prevalence of CAD were generally associated with improved cost-effectiveness and decreased costs per utility unit (ΔQALY). By comparison, CMR was consistently more cost-effective than SPECT and showed lower costs per QALY gained. Given a CAD prevalence of 0.50, CMR was associated with total costs of €6,120 for one patient correctly diagnosed as having CAD and with €2,246 per ΔQALY gained, versus €7,065 and €2,931 for SPECT, respectively. Above a CAD prevalence threshold of 0.60, proceeding directly to invasive angiography was the most cost-effective approach.
Conclusions: In patients with low to intermediate CAD probabilities, CMR is more cost-effective than SPECT. Moreover, lower costs per utility unit indicate a superior clinical utility of CMR.
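The core of the Bayesian model in the Methods is the post-test probability of CAD given a positive test; with illustrative inputs (not the study's actual sensitivity, specificity, or prevalence values):

```latex
% Post-test probability of CAD after a positive test (Bayes' theorem);
% Se = 0.85, Sp = 0.85, prevalence p = 0.50 are illustrative values only.
P(\mathrm{CAD}\mid +) \;=\; \frac{\mathrm{Se}\cdot p}{\mathrm{Se}\cdot p + (1-\mathrm{Sp})(1-p)}
\;=\; \frac{0.85 \times 0.5}{0.85 \times 0.5 + 0.15 \times 0.5} \;=\; 0.85
```

Post-test probabilities of this kind feed the expected costs per correct diagnosis and per ΔQALY in the model, which is why the results depend so strongly on CAD prevalence.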
OCP-Place, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-Place is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited whenever two homorganic consonants conjoin at word boundaries, a problem that could only be solved with lexical feedback.
Here, we experimentally challenge the functional account by showing that OCP-Place can be used as a speech segmentation cue during pre-lexical processing without lexical feedback, and that the use relates to distributions in the input.
In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language. This indicates a use of OCP-Labial as a segmentation cue, implying a full perception of both labials. Experiment 2 shows that segmentation performance cannot solely be explained by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-Place depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-Coronal for segmentation.
For the first time, the transcriptional reprogramming of distinct root cortex cells during the arbuscular mycorrhizal (AM) symbiosis was investigated by combining Laser Capture Microdissection and Affymetrix GeneChip® Medicago genome array hybridization. The establishment of cryosections facilitated the isolation of high-quality RNA in sufficient amounts from three different cortical cell types. The transcript profiles of arbuscule-containing cells (arb cells) and non-arbuscule-containing cells (nac cells) of Rhizophagus irregularis-inoculated Medicago truncatula roots, and of cortex cells of non-inoculated roots (cor), were successfully explored. The data gave new insights into the symbiosis-related cellular reorganization processes and indicated that nac cells already seem to be prepared for the upcoming fungal colonization. The mycorrhiza- and phosphate-dependent transcription of a GRAS TF family member (MtGras8) was detected in arb cells and mycorrhizal roots. MtGRAS8 shares a high sequence similarity with a GRAS TF suggested to be involved in the fungal colonization process (MtRAM1). The function of MtGras8 was unraveled by RNA interference- (RNAi-) mediated gene silencing. An AM symbiosis-dependent expression of an RNAi construct (MtPt4pro::gras8-RNAi) revealed a successful gene silencing of MtGras8, leading to a reduced arbuscule abundance and a higher proportion of deformed arbuscules in roots with reduced transcript levels. Accordingly, MtGras8 might control arbuscule development and lifetime. The targeting of MtGras8 by the phosphate-dependently regulated miRNA5204* was discovered previously (Devers et al., 2011). Since miRNA5204* is known to be affected by phosphate, this posttranscriptional regulation might represent a link between phosphate signaling and arbuscule development. In this work, the posttranscriptional regulation was confirmed by mis-expression of miRNA5204* in M. truncatula roots. The miRNA-mediated gene silencing affects the MtGras8 transcript abundance only in the first two weeks of the AM symbiosis, and the mis-expression lines seem to mimic the phenotype of the MtGras8-RNAi lines. Additionally, MtGRAS8 seems to form heterodimers with NSP2 and RAM1, which are known to be key regulators of the fungal colonization process (Hirsch et al., 2009; Gobbato et al., 2012). These data indicate that MtGras8 and miRNA5204* are linked to the sym pathway and regulate arbuscule development in a phosphate-dependent manner.
The EVE curriculum framework
(2013)
Lakes are increasingly being recognized as an important component of the global carbon cycle, yet anthropogenic activities that alter their community structure may change the way they transport and process carbon. This research focuses on the relationship between carbon cycling and community structure of primary producers in small, shallow lakes, which are the most abundant lake type in the world, and furthermore subject to intense terrestrial-aquatic coupling due to their high perimeter:area ratio. Shifts between macrophyte and phytoplankton dominance are widespread and common in shallow lakes, with potentially large consequences to regional carbon cycling. I thus compared a lake with clear-water conditions and a submerged macrophyte community to a turbid, phytoplankton-dominated lake, describing differences in the availability, processing, and export of organic and inorganic carbon. I furthermore examined the effects of increasing terrestrial carbon inputs on internal carbon cycling processes. Pelagic diel (24-hour) oxygen curves and independent fluorometric approaches of individual primary producers together indicated that the presence of a submerged macrophyte community facilitated higher annual rates of gross primary production than could be supported in a phytoplankton-dominated lake at similar nutrient concentrations. A simple model constructed from the empirical data suggested that this difference between regime types could be common in moderately eutrophic lakes with mean depths under three to four meters, where benthic primary production is a potentially major contributor to the whole-lake primary production. It thus appears likely that a regime shift from macrophyte to phytoplankton dominance in shallow lakes would typically decrease the quantity of autochthonous organic carbon available to lake food webs. Sediment core analyses indicated that a regime shift from macrophyte to phytoplankton dominance was associated with a four-fold increase in carbon burial rates, signalling a major change in lake carbon cycling dynamics. Carbon mass balances suggested that increasing carbon burial rates were not due to an increase in primary production or allochthonous loading, but instead were due to a higher carbon burial efficiency (carbon burial / carbon deposition). This, in turn, was associated with diminished benthic mineralization rates and an increase in calcite precipitation, together resulting in lower surface carbon dioxide emissions. Finally, a period of unusually high precipitation led to rising water levels, resulting in a feedback loop linking increasing concentrations of dissolved organic carbon (DOC) to severely anoxic conditions in the phytoplankton-dominated system. High water levels and DOC concentrations diminished benthic primary production (via shading) and boosted pelagic respiration rates, diminishing the hypolimnetic oxygen supply. The resulting anoxia created redox conditions which led to a major release of nutrients, DOC, and iron from the sediments. This further transformed the lake metabolism, providing a prolonged summertime anoxia below a water depth of 1 m, and leading to the near-complete loss of fish and macroinvertebrates. Pelagic pH levels also decreased significantly, increasing surface carbon dioxide emissions by an order of magnitude compared to previous years. Altogether, this thesis adds an important body of knowledge to our understanding of the significance of the benthic zone to carbon cycling in shallow lakes. 
The contribution of the benthic zone towards whole-lake primary production was quantified, and was identified as an important but vulnerable site for primary production. Benthic mineralization rates were furthermore found to influence carbon burial and surface emission rates, and benthic primary productivity played an important role in determining hypolimnetic oxygen availability, thus controlling the internal sediment loading of nutrients and carbon. This thesis also uniquely demonstrates that the ecological community structure (i.e. stable regime) of a eutrophic, shallow lake can significantly influence carbon availability and processing. By changing carbon cycling pathways, regime shifts in shallow lakes may significantly alter the role of these ecosystems with respect to the global carbon cycle.
The Arctic tundra, covering approx. 5.5 % of the Earth’s land surface, is one of the last ecosystems remaining closest to its untouched condition. Remote sensing is able to provide information at regular time intervals and large spatial scales on the structure and function of Arctic ecosystems. However, almost all natural surfaces reveal individual anisotropic reflectance behaviors, which can be described by the bidirectional reflectance distribution function (BRDF). This effect can cause significant changes in the measured surface reflectance depending on solar illumination and sensor viewing geometries. The aim of this thesis is the hyperspectral and spectro-directional reflectance characterization of important Arctic tundra vegetation communities at representative Siberian and Alaskan tundra sites, as a basis for the extraction of vegetation parameters and the normalization of BRDF effects in off-nadir and multi-temporal remote sensing data. Moreover, in preparation for the upcoming German EnMAP (Environmental Mapping and Analysis Program) satellite mission, the understanding of BRDF effects in Arctic tundra is essential for the retrieval of high-quality, consistent and therefore comparable datasets. The research in this doctoral thesis is based on field spectroscopic and field spectro-goniometric investigations of representative Siberian and Alaskan measurement grids. The first objective of this thesis was the development of a lightweight, transportable, and easily managed field spectro-goniometer system which nevertheless provides reliable spectro-directional data. I developed the Manual Transportable Instrument platform for ground-based Spectro-directional observations (ManTIS). The results of the field spectro-radiometric measurements at the Low Arctic study sites along important environmental gradients (regional climate, soil pH, toposequence, and soil moisture) show that the different plant communities can be distinguished by their nadir-view reflectance spectra. The results especially reveal separation possibilities between the different tundra vegetation communities in the visible (VIS) blue and red wavelength regions. Additionally, the near-infrared (NIR) shoulder and NIR reflectance plateau, despite their relatively low values due to the low structure of tundra vegetation, are still valuable information sources and can separate communities according to their biomass and vegetation structure. In general, all the different tundra plant communities show: (i) low maximum NIR reflectance; (ii) a weak or nonexistent green reflectance peak in the VIS spectrum; (iii) a narrow “red-edge” region between the red and NIR wavelength regions; and (iv) no distinct NIR reflectance plateau. These common nadir-view reflectance characteristics are essential for understanding the variability of BRDF effects in Arctic tundra. None of the analyzed tundra communities showed even approximately isotropic reflectance behavior. In general, tundra vegetation communities: (i) usually show the highest BRDF effects in the solar principal plane; (ii) usually show the reflectance maximum in the backward viewing directions, and the reflectance minimum in the nadir to forward viewing directions; (iii) usually have a higher degree of reflectance anisotropy in the VIS wavelength region than in the NIR wavelength region; and (iv) show a more bowl-shaped reflectance distribution in longer wavelength bands (>700 nm).
The results of the analysis of the influence of high sun zenith angles on the reflectance anisotropy show that, with increasing sun zenith angles, the reflectance anisotropy changes to azimuthally symmetrical, bowl-shaped reflectance distributions with the lowest reflectance values in the nadir view position. The spectro-directional analyses also show that remote sensing products such as the NDVI or relative absorption depth products are strongly influenced by BRDF effects, and that the anisotropic characteristics of the remote sensing products can differ significantly from the observed BRDF effects in the original reflectance data. However, the results further show that the NDVI can minimize view angle effects, owing to the opposing spectro-directional effects in the red and NIR bands. For the tundra plant communities studied, the overall difference of the off-nadir NDVI values compared to the nadir value increases with increasing sensor viewing angles, but on average never exceeds 10 %. In conclusion, this study shows that changes in the illumination-target-viewing geometry directly alter the reflectance spectra of Arctic tundra communities according to their object-specific BRDFs. Since the different tundra communities show only small, but nonetheless significant, differences in surface reflectance, it is important to include spectro-directional reflectance characteristics in the algorithm development for remote sensing products.
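For reference, the NDVI discussed above is the normalized difference of near-infrared and red reflectance (the standard definition, not specific to this thesis):

\mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}

Because the red and NIR bands show opposing spectro-directional effects for tundra targets, as described above, part of the angular signal cancels in the ratio, which is consistent with the reported off-nadir deviations of less than 10 %.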
When playing violent video games, aggressive actions are performed against the background of an originally neutral environment, and associations are formed between cues related to violence and contextual features. This experiment examined the hypothesis that neutral contextual features of a virtual environment become associated with aggressive meaning and acquire the function of primes for aggressive cognitions. Seventy-six participants were assigned to one of two violent video game conditions that varied in context (ship vs. city environment) or a control condition. Afterwards, they completed a Lexical Decision Task to measure the accessibility of aggressive cognitions in which they were primed either with ship-related or city-related words. As predicted, participants who had played the violent game in the ship environment had shorter reaction times for aggressive words following the ship primes than the city primes, whereas participants in the city condition responded faster to the aggressive words following the city primes compared to the ship primes. No parallel effect was observed for the non-aggressive targets. The findings indicate that the associations between violent and neutral cognitions learned during violent game play facilitate the accessibility of aggressive cognitions.
This thesis presents novel ideas and research findings for the Web of Data – a global data space spanning many so-called Linked Open Data sources. Linked Open Data adheres to a set of simple principles to allow easy access to and reuse of data published on the Web. Linked Open Data is by now an established concept, and many (mostly academic) publishers have adopted the principles, building a powerful web of structured knowledge available to everybody. However, so far, Linked Open Data does not yet play a significant role among the common web technologies that currently facilitate a high-standard Web experience. In this work, we thoroughly discuss the state of the art for Linked Open Data and highlight several shortcomings, some of which we tackle in the main part of this work. First, we propose a novel type of data source meta-information, namely the topics of a dataset. This information could be published with dataset descriptions and support a variety of use cases, such as data source exploration and selection. For topic retrieval, we present an approach coined Annotated Pattern Percolation (APP), which we evaluate with respect to topics extracted from Wikipedia portals. Second, we contribute to entity linking research by presenting an optimization model for joint entity linking, showing its hardness, and proposing three heuristics implemented in the LINked Data Alignment (LINDA) system. Our first solution can exploit multi-core machines, whereas the second and third approaches are designed to run in a distributed shared-nothing environment. We discuss and evaluate the properties of our approaches, leading to recommendations on which algorithm to use in a specific scenario. The distributed algorithms are among the first of their kind, i.e., approaches for joint entity linking in a distributed fashion. We also illustrate that we can tackle the entity linking problem at very large scale, with data comprising more than 100 million entity representations from a large number of sources. Finally, we approach a sub-problem of entity linking, namely the alignment of concepts. We again target a method that looks at the data in its entirety and does not neglect existing relations. This concept alignment method must also execute very quickly, to serve as a preprocessing step for further computations. Our approach, called Holistic Concept Matching (HCM), achieves the required speed by grouping the input through comparisons of so-called knowledge representations. Within the groups, we perform complex similarity computations, draw conclusions about relations, and detect semantic contradictions. The quality of our results is again evaluated on a large and heterogeneous dataset from the real Web. In summary, this work contributes a set of techniques for enhancing the current state of the Web of Data. All approaches have been tested on large and heterogeneous real-world input.
Public debate about energy relations between the EU and Russia is distorted. These distortions present considerable obstacles to the development of a true partnership. At the core of the conflict is a struggle for resource rents between energy-producing, energy-consuming and transit countries. Supposedly secondary aspects, however, are also of great importance. They comprise geopolitics, market access, economic development and state sovereignty. The European Union, having engaged in energy market liberalisation, faces a widening gap between declining domestic resources and continuously growing energy demand. Diverse interests inside the EU prevent the definition of a coherent and respected energy policy. Russia, for its part, is no longer willing to subsidise its neighbouring economies through cheap energy exports. The Russian government engages in assertive policies pursuing Russian interests. In doing so, it opts for a different approach to globalisation, refusing the role of a mere energy exporter. In view of the intensifying struggle for global resources, Russia, with its large energy potential, appears to be a very favourable option for European energy supplies, if not the best one. However, several outcomes of the strategic game between the two partners can be imagined. Engaging in non-cooperative strategies will in the end leave all stakeholders worse off. The European Union should therefore concentrate on securing its partnership with Russia instead of damaging it. Stable cooperation would require accepting that the partner may pursue its own goals, which might differ from one’s own interests. The question is: how can a sustainable compromise be found? This thesis finds that a mix of continued dialogue, a tit-for-tat approach bolstered by an international institutional framework, and increased integration efforts appears to be a preferable solution.
Developing rich Web applications can be a complex job, especially when it comes to mobile device support. Web-based environments such as Lively Webwerkstatt can help developers implement such applications by making the development process more direct and interactive. Furthermore, developing software is a collaborative process, which means the development environment itself should offer collaboration facilities. This report describes extensions of the web-based development environment Lively Webwerkstatt that allow it to be used in a mobile environment. The extensions comprise collaboration mechanisms and user interface adaptations, as well as event processing and performance measurement on mobile devices.
In various biological systems and small scale technological applications particles transiently bind to a cylindrical surface. Upon unbinding the particles diffuse in the vicinal bulk before rebinding to the surface. Such bulk-mediated excursions give rise to an effective surface translation, for which we here derive and discuss the dynamic equations, including additional surface diffusion. We discuss the time evolution of the number of surface-bound particles, the effective surface mean squared displacement, and the surface propagator. In particular, we observe sub- and superdiffusive regimes. A plateau of the surface mean-squared displacement reflects a stalling of the surface diffusion at longer times. Finally, the corresponding first passage problem for the cylindrical geometry is analysed.
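As a notational reminder (standard usage, not quoted from the article), the sub- and superdiffusive regimes mentioned above refer to the scaling of the mean squared displacement with an anomalous diffusion exponent \alpha:

\langle x^2(t) \rangle \simeq K_\alpha t^{\alpha}, \qquad \alpha < 1 \ \text{(subdiffusion)}, \quad \alpha = 1 \ \text{(normal diffusion)}, \quad \alpha > 1 \ \text{(superdiffusion)},

with the reported plateau of the surface mean squared displacement corresponding to \alpha \to 0 over the stalling time window.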
1. Abstract
2. Introduction to the main monetary policy tools in China
2.1 Reserve requirements
2.2 Open market operations
2.3 Interest rate policy
2.4 Credit policy and window guidance
2.5 Real estate credit control
3. Loosening monetary policy and its effect on the banking sector
3.1 Loosening monetary policy measures
3.2 The effect of the expansionary monetary policy on the banking sector
4. Sound monetary policy with tight trend and its effect on the banking sector
4.1 Main measures of the sound monetary policy with tight trend
4.2 The effect of sound monetary policy with tight trend on the banking sector
5. Conclusion
We consider diffusion processes with a spatially varying diffusivity giving rise to anomalous diffusion. Such heterogeneous diffusion processes are analysed for the cases of exponential, power-law, and logarithmic dependencies of the diffusion coefficient on the particle position. Combining analytical approaches with stochastic simulations, we show that the functional form of the space-dependent diffusion coefficient and the initial conditions of the diffusing particles are vital for their statistical and ergodic properties. In all three cases a weak ergodicity breaking between the time and ensemble averaged mean squared displacements is observed. We also demonstrate a population splitting of the time averaged traces into fast and slow diffusers for the case of exponential variation of the diffusivity as well as a particle trapping in the case of the logarithmic diffusivity. Our analysis is complemented by the quantitative study of the space coverage, the diffusive spreading of the probability density, as well as the survival probability.
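In the standard formulation of heterogeneous diffusion processes (given here for orientation; the article's precise conventions may differ), the particle obeys an overdamped Langevin equation with multiplicative noise, and ergodicity is probed by comparing the ensemble averaged MSD with its time averaged analogue:

\frac{dx}{dt} = \sqrt{2 D(x)}\, \xi(t), \qquad \overline{\delta^2(\Delta)} = \frac{1}{T - \Delta} \int_0^{T-\Delta} \left[ x(t+\Delta) - x(t) \right]^2 \mathrm{d}t,

where \xi(t) is white Gaussian noise and D(x) takes, for example, exponential, power-law (D_0 |x|^{\beta}), or logarithmic forms. Weak ergodicity breaking then means that \overline{\delta^2(\Delta)} and \langle x^2(\Delta) \rangle differ even in the limit of long measurement times T.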
Relating to the students
(2013)
The aim of the present study was to examine how different types of tracking— between-school streaming, within-school streaming, and course-by-course tracking—shape students’ mathematics self-concept. This was done in an internationally comparative framework using data from the Programme for International Student Assessment (PISA). After controlling for individual and track mean achievement, results indicated that generally for students in course-by-course tracking, high-track students had higher mathematics self-concepts and low-track students had lower mathematics self-concepts. For students in between-school and within-school streaming, the reverse pattern was found. These findings suggest a solution to the ongoing debate about the effects of tracking on students’ academic self-concept and suggest that the reference groups to which students compare themselves differ according to the type of tracking.
The supercapacitor is one of the most important energy storage devices, as its construction addresses many of the drawbacks of batteries, but the low energy density of current systems is a major issue. In this doctoral dissertation, with a view to attaining high-energy-density supercapacitor systems comparable to batteries, new heteroatom-containing carbons in the form of particles and three-dimensional films were investigated. A nitrogen-containing material, acrodam, was chosen as the carbon precursor due to its low cost, high carbonization yield and oligomerizability, among other properties. The carbon particles were prepared from acrodam together with caesium acetate as a meltable flux agent, and displayed excellent properties in hydroquinone-loaded sulphuric acid electrolyte, with high energy densities (up to 133.0 Wh kg–1) and sufficient cycle stabilities. These properties are already comparable to those of batteries. In addition, conductive three-dimensional carbon films were fabricated from acrodam oligomer as the precursor by the inexpensive spin-coating method. The films were found to be homogeneous, flat, and void- and crack-free, and high conductivities (up to 334 S cm–1) could be obtained at a carbonization temperature of 1000 °C. Furthermore, a porous three-dimensional carbon film could be formed using an organic template on the first attempt. This finding demonstrates the film’s potential for various applications such as supercapacitor electrodes; the essential absence of contact resistance within the network should contribute to effective electron transport within the electrode. The progress made in this dissertation opens a new route to further enhancing the energy density of supercapacitors, as well as to other applications beyond current performance limits.
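To put such figures into perspective, the stored energy of an ideal double-layer capacitor follows the textbook relation E = \frac{1}{2} C V^2; expressed per unit mass,

E\,[\mathrm{Wh\,kg^{-1}}] = \frac{C_{sp}\,[\mathrm{F\,g^{-1}}] \cdot V^2}{7.2},

so that an assumed (illustrative, not from this dissertation) specific capacitance of 100 F g–1 over a 1 V window yields only about 14 Wh kg–1. This makes clear why redox-active additives such as hydroquinone, which contribute faradaic capacity on top of the double layer, are central to reaching values like the 133.0 Wh kg–1 reported here.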
We report findings from psycholinguistic experiments investigating the detailed timing of processing morphologically complex words by proficient adult second (L2) language learners of English in comparison to adult native (L1) speakers of English. The first study employed the masked priming technique to investigate -ed forms with a group of advanced Arabic-speaking learners of English. The results replicate previously found L1/L2 differences in morphological priming, even though in the present experiment an extra temporal delay was offered after the presentation of the prime words.
The second study examined the timing of constraints against inflected forms inside derived words in English using the eye-movement monitoring technique and an additional acceptability judgment task with highly advanced Dutch L2 learners of English in comparison to adult L1 English controls. Whilst offline the L2 learners performed native-like, the eye-movement data showed that their online processing was not affected by the morphological constraint against regular plurals inside derived words in the same way as in native speakers. Taken together, these findings indicate that L2 learners are not just slower than native speakers in processing morphologically complex words, but that the L2 comprehension system employs real-time grammatical analysis (in this case, morphological information) less than the L1 system.
Within a research project on future sustainable water management options in the Elbe River basin, quasi-natural discharge scenarios had to be provided. The semi-distributed eco-hydrological model SWIM was utilised for this task. According to scenario simulations driven by the stochastic climate model STAR, the region would become distinctly drier. However, this thesis focuses on the challenge of meeting the requirement of high model fidelity even for smaller sub-basins. Usually, the quality of the simulations is lower at inner points than at the outlet. Four research paper chapters and the discussion chapter deal with the reasons for local model deviations and the problem of optimal spatial calibration. Besides other assessments, the Markov Chain Monte Carlo method is applied to show whether evapotranspiration or precipitation should be corrected to minimise runoff deviations; principal component analysis is used in an unusual way to evaluate local precipitation alterations caused by land cover changes; and remotely sensed surface temperatures allow for an independent view of the evapotranspiration landscape. The overall insight is that spatially explicit hydrological modelling of such a large river basin requires a great deal of local knowledge. Obtaining such knowledge probably requires more time than is usually allotted to hydrological modelling studies.
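The Markov Chain Monte Carlo step mentioned above can be pictured with a generic Metropolis sampler over two correction factors (all names and numbers below are invented toy values, not the actual SWIM/STAR setup):

    import numpy as np

    rng = np.random.default_rng(42)
    q_obs = 3.0  # "observed" mean runoff, toy value

    def simulate_runoff(f_et, f_p):
        # Toy stand-in for a SWIM run: runoff rises with scaled precipitation
        # and falls with scaled evapotranspiration (coefficients invented).
        return 10.0 * f_p - 7.0 * f_et

    def log_post(theta):
        f_et, f_p = theta
        if not (0.5 <= f_et <= 1.5 and 0.5 <= f_p <= 1.5):
            return -np.inf                    # flat prior on plausible ranges
        resid = simulate_runoff(f_et, f_p) - q_obs
        return -0.5 * (resid / 0.2) ** 2      # Gaussian runoff misfit

    theta = np.array([1.0, 1.0])              # start: no correction
    lp, chain = log_post(theta), []
    for _ in range(20000):
        cand = theta + rng.normal(0.0, 0.02, size=2)  # random-walk proposal
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:       # Metropolis acceptance
            theta, lp = cand, lp_cand
        chain.append(theta.copy())
    print(np.asarray(chain)[5000:].mean(axis=0))      # posterior mean factors

The posterior spread of the two factors then indicates which forcing (evapotranspiration or precipitation) must be corrected, and by how much, to reconcile simulated and observed runoff.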
Despite the increased attention devoted to sexual aggression among young people in the international scientific literature, there is little research in Brazil on the subject among this group specifically. There is evidence that sexual aggression and victimization may start early. Identifying the magnitude of the problem and the factors that increase the chance of the onset and persistence of sexual victimization is the first step for prevention efforts among this group. Using both cross-sectional and prospective analyses, this study examined the prevalence of, and vulnerability factors for, sexual aggression and victimization in female and male college students (N = 742; M = 20.1 years) in Brazil, of whom a subgroup (n = 354) took part in two measurements six months apart. At Time 1, a Portuguese version of the Short Form of the Sexual Experiences Survey (Koss et al., 2007) was administered to collect information from men and women as both victims and perpetrators of sexual aggression since the age of 14. The students were also asked to provide information on their cognitive representations (sexual scripts) of a consensual sexual encounter, their actual sexual behavior, use of pornography, and experiences of child abuse. At Time 2, the same items from the SES were presented again to assess the incidence of sexual aggression in the 6-month period since T1. The overall prevalence rate of victimization was 27% among men and 29% among women. In contrast, perpetration rates were significantly higher among men (33.7%) than among women (3%). Confirming the hypotheses, cognitive (i.e., risky sexual scripts, normative beliefs), behavioral (i.e., pornography use, sexual behavior patterns) and biographical (i.e., childhood abuse) risk factors were linked to male sexual aggression and to male and female victimization, both cross-sectionally and longitudinally, with the path model analyses demonstrating good fit with the data. The results supported: a) the role of the sexual script for a first consensual sexual encounter as an underlying factor of actual sexual behavior and sexual victimization or perpetration; b) the role of pornography as “input” for sexual scripts, indirectly increasing the risk for victimization, and directly and indirectly the risk for perpetration; c) the direct and indirect link between childhood experiences of (sexual) abuse and male sexual aggression and victimization, mediated by sexual behavior; and d) the direct link between child sexual abuse and sexual victimization among women. Few gender differences were found in the victimization model. The findings challenge societal beliefs that sexual aggression is restricted to groups with low socio-economic status and that men are unlikely to be sexually coerced. The disparity between male victimization and female perpetration rates is discussed in the light of traditional gender roles in Brazil. This study is also the first prospective investigation of risk factors for sexual aggression and victimization in Brazil, demonstrating the role of behavioral, cognitive and biographical factors that increase vulnerability among college students.
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to, and experiences of, introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of teaching professionals and experts from the UK and Germany was carried out regarding the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should be moving forward.
A detailed description of the characteristics of antimicrobial peptides (AMPs) is in high demand, since resistance against traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system of every organism, and they are very efficient in the protection against bacteria, viruses, fungi and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to be more active and faster. The lack of an efficient therapy for some cancer types and the development of resistance against existing antitumor agents make AMPs promising in cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physical-chemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited contrasting behavior in biological experiments concerning their activity against bacterial cells, human cells and human cancer cells. LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly applied as model systems, in combination with IR spectroscopy and X-ray scattering methods. Circular dichroism spectroscopy in bulk systems completed the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to the structure of the peptides at the air/water interface. In bulk, both peptides are in an unstructured conformation. Adsorbed and confined at the air/water interface, the peptides differ drastically in their surface activity as well as in their secondary structure. While LL-32 transforms into an α-helix lying flat on the water surface, LL-20 stays partly unstructured. This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models for the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were adapted more closely to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. Physical-chemical properties of the lipid films were determined and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly increased for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the modelling of the peptide interaction with cancer cells was performed by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from an increased surface charge.
Furthermore, similarities to the physical-chemical properties of melittin, an AMP from bee venom, were demonstrated.
Cloud computing is a model for enabling on-demand access to a shared pool of computing resources. With virtually limitless on-demand resources, a cloud environment enables a hosted Internet application to cope quickly when there is an increase in the workload. However, the overhead of provisioning resources exposes the Internet application to periods of under-provisioning and performance degradation. Moreover, performance interference due to consolidation in the cloud environment complicates the performance management of Internet applications. In this dissertation, we propose two approaches to mitigate the impact of the resource provisioning overhead. The first approach employs control theory to scale resources vertically and cope quickly with workload changes. This approach assumes that the provider has knowledge of and control over the platform running in the virtual machines (VMs), which limits it to Platform as a Service (PaaS) and Software as a Service (SaaS) providers. The second approach is a customer-side approach that deals with horizontal scalability in an Infrastructure as a Service (IaaS) model. It addresses the trade-off between cost and performance with a multi-goal optimization solution. This approach finds the scaling thresholds that achieve the highest performance with the lowest increase in cost. Moreover, the second approach employs a proposed time series forecasting algorithm to scale the application proactively and avoid under-provisioning periods. Furthermore, to mitigate the impact of interference on Internet application performance, we developed a system which finds and eliminates the VMs suffering from performance interference. The developed system is a lightweight solution which does not require provider involvement. To evaluate our approaches and the designed algorithms at a large scale, we developed a simulator called ScaleSim. In the simulator, we implemented scalability components acting like the scalability components of Amazon EC2. The current scalability implementation in Amazon EC2 is used as a reference point for evaluating the improvement in scalable application performance. ScaleSim is fed with realistic models of the RUBiS benchmark extracted from a real environment. The workload is generated from the access logs of the 1998 World Cup website. The results show that optimizing the scalability thresholds and adopting proactive scalability can mitigate 88% of the impact of the resource provisioning overhead, with only a 9% increase in cost.
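The threshold-plus-forecast idea can be sketched as follows (illustrative names and numbers, not ScaleSim code): capacity is added when the forecast utilization crosses an upper threshold, and released below a lower one, so the scale-out happens before the surge rather than after it:

    # Illustrative proactive horizontal-scaling rule (hypothetical thresholds).
    def forecast_next(history, window=5):
        # Naive forecast: linear extrapolation over the last `window` samples.
        recent = history[-window:]
        slope = (recent[-1] - recent[0]) / max(len(recent) - 1, 1)
        return recent[-1] + slope

    def decide_scaling(history, n_vms, upper=0.75, lower=0.30,
                       min_vms=1, max_vms=20):
        # Return the new VM count given per-VM utilization history (0..1).
        predicted = forecast_next(history)
        if predicted > upper and n_vms < max_vms:
            return n_vms + 1      # scale out before the surge arrives
        if predicted < lower and n_vms > min_vms:
            return n_vms - 1      # scale in to cut cost
        return n_vms

    util = [0.40, 0.50, 0.60, 0.68, 0.74]   # rising load
    print(decide_scaling(util, n_vms=3))    # -> 4 (proactive scale-out)

Tuning `upper` and `lower` is exactly the cost-performance trade-off that the dissertation's multi-goal optimization addresses.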
Communicating location-specific information to pedestrians is a challenging task which can be aided by user-friendly digital technologies. In this paper, landmark visibility analysis, as a means for developing more usable pedestrian navigation systems, is discussed. Using an algorithmic framework for image-based 3D analysis, this method integrates a 3D city model with identified landmarks and produces raster visibility layers for each one. This output enables an Android phone prototype application to indicate the visibility of landmarks from the user's actual position. Tested in the field, the method achieves sufficient accuracy for the context of use and improves navigation efficiency and effectiveness.
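The core visibility computation can be conveyed by a simplified line-of-sight test over a height raster (a sketch under assumed data structures, not the authors' algorithmic framework):

    import numpy as np

    def visible(dem, observer, target, eye=1.7, n_samples=200):
        # True if no terrain sample along the ray from observer to target
        # rises above the interpolated sight line; dem holds heights in m,
        # observer/target are (row, col) cell indices, eye is in metres.
        r0, c0 = observer
        r1, c1 = target
        h0 = dem[r0, c0] + eye
        h1 = dem[r1, c1]
        for t in np.linspace(0.0, 1.0, n_samples)[1:-1]:
            r = int(round(r0 + t * (r1 - r0)))
            c = int(round(c0 + t * (c1 - c0)))
            if dem[r, c] > h0 + t * (h1 - h0):   # terrain blocks the ray
                return False
        return True

    # Toy example: a 30 m block between observer and landmark hides it.
    dem = np.array([[0.0, 0.0, 0.0],
                    [0.0, 30.0, 0.0],
                    [0.0, 0.0, 10.0]])
    print(visible(dem, (0, 0), (2, 2)))   # -> False

Repeating such a test from every raster cell towards a landmark yields the kind of per-landmark visibility layer that the prototype application queries at the user's position.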
1. Developing lesson plans and choosing strategies
2. The aims of the lesson plans in general
3. Strategies as a means to achieve the aims of the lesson plans
4. Evaluating the quality of lesson plans
5. Difficulties during lessons and adaptations afterwards
6. Student teachers’ overall feeling about their work
7. Using the strategies in future classes
8. Conclusion
Portal Wissen = Borders
(2013)
The new edition of the Potsdam Research Magazine “Portal Wissen” approaches the subject “Borders” from different perspectives.
This headline makes me, as a linguist, think of linguistic borders and the effects that might result from the contact of two languages at a particular border. There is, for instance, ample evidence of code-switching, i.e. the use of material from at least two languages in a single utterance. The reasons for code-switching can be manifold. On the one hand, code-switching may result from limited language competence, for example if a speaker lacks a particular word in a non-native language. On the other hand, code-switching may be a matter of prestige if the speaker wants to demonstrate his or her affiliation with a certain social group by switching languages. If code-switching occurs not only sporadically but involves whole language communities over a longer period of time, it can result in significant changes to the languages involved. Which language “gives” and which one “takes” is determined by sociolinguistic factors. It is, hence, quite easy to predict that German varieties spoken in language islands in Southern and Eastern Europe as well as in North and Latin America will absorb more and more language material from their neighbouring languages until they disappear, unless political will strives to preserve these language varieties. The increasing mobility of modern societies has multiplied the extent and intensity of language contact, which encompasses a large number of different contact situations besides the most commonly known one, i.e. the contact between German and English.
From a historical point of view, German shows a strong influence of various Romance languages such as Latin, French and Italian. In Potsdam, one cannot help being reminded of the French influence during the 18th century.
Overcoming language borders also becomes apparent in the everyday life of an international research university. In March this year, the Annual Conference of the German Linguistic Society took place in Potsdam, with more than 500 participants. The lingua franca of the conference was English, which, compared to previous conferences, further increased the number of international participants.
The articles in this edition illustrate various approaches to the topic “Borders”: On the trail of “Boundary Surveys”, we follow the Australian explorer Ludwig Leichhardt. “Travellers Across Borders” focuses on articles dealing with the literature of the colonial Caribbean or with the work of an Italian geologist deep beneath the earth’s surface, for example. Looking for the “Boundless”, our authors follow scientists who discuss questions like “Why love hurts?”. The present issue of “Portal Wissen” also takes into account “Drawing Up Borders” in an article concerned with the limits of work-related stress. Instances of successful “Border Crossing” are provided by the “Handkerchief Lab” as well as by new biotechnological applications.
I would like to wish you inspiring border experiences, hoping that you will get many impulses for crossing professional borders in your field of expertise.
Prof. Ulrike Demske
Professor of the History and the Varieties of the German Language
Vice President International Affairs, Alumni and Fundraising
Measuring the metabolite profile of plants can be a powerful phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbation of the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured with a mass spectrometer preceded by a gas chromatograph (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. To allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is performed to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette at different developmental stages, both metabolically and regarding carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably. In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different pattern of metabolite pool sizes across single leaves of one Arabidopsis rosette, compared to plants grown at normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
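The "simple relative calculation" could, for instance, resemble the following generic mass-isotopologue computation (invented intensities; natural-abundance correction and the exact formula used in this work are omitted):

    def mean_13c_enrichment(intensities):
        # intensities[i]: measured intensity of the M+i isotopologue of a
        # fragment with n = len(intensities) - 1 carbon atoms; returns the
        # mean fractional 13C enrichment of the fragment.
        total = sum(intensities)
        n_carbons = len(intensities) - 1
        labeled = sum(i * x for i, x in enumerate(intensities))
        return labeled / (n_carbons * total)

    # Invented example: 3-carbon fragment, M+0..M+3 intensities.
    print(mean_13c_enrichment([70.0, 15.0, 10.0, 5.0]))   # -> ~0.167

Comparing such enrichment values across leaves, relative to the fed 13C sucrose, separates carbon allocation from mere pool-size differences.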
We have used polarized confocal Raman microspectroscopy and scanning near-field optical microscopy with a resolution of 60 nm to characterize photoinscribed grating structures of azobenzene doped polymer films on a glass support. Polarized Raman microscopy allowed determining the reorientation of the chromophores as a function of the grating phase and penetration depth of the inscribing laser in three dimensions. We found periodic patterns, which are not restricted to the surface alone, but appear also well below the surface in the bulk of the material. Near-field optical microscopy with nanoscale resolution revealed lateral two-dimensional optical contrast, which is not observable by atomic force and Raman microscopy.
1. Porter strategic competitive analysis
2. A Porter analysis of the competitive advantage of banks in business lending and proprietary trading
3. Summary, competitive advantage of banks in business lending and proprietary trading
4. JPMorgan’s “London Whale” speculation
5. A common misapprehension about hedged positions in corporate debt
6. Conclusion
During an unusually massive filament eruption on 7 June 2011, SDO/AIA imaged for the first time significant EUV emission around a magnetic reconnection region in the solar corona. The reconnection occurred between magnetic fields of the laterally expanding CME and a neighbouring active region. A pre-existing quasi-separatrix layer was activated in the process. This scenario is supported by data-constrained numerical simulations of the eruption. Observations show that dense cool filament plasma was re-directed and heated in situ, producing coronal-temperature emission around the reconnection region. These results provide the first direct observational evidence, supported by MHD simulations and magnetic modelling, that a large-scale re-configuration of the coronal magnetic field takes place during solar eruptions via the process of magnetic reconnection.
This thesis rests on two large surveys of Active Galactic Nuclei (AGNs). The first survey deals with galaxies that host low-level AGNs (LLAGNs) and aims at identifying such galaxies by quantifying their variability. While numerous studies have shown that AGNs can be variable at all wavelengths, the nature of this variability is still not well understood. Studying the properties of LLAGNs may help to better understand galaxy evolution and how AGNs transition between active and inactive states. In this thesis, we develop a method to extract the variability properties of AGNs. Using multi-epoch deep photometric observations, we subtract the contribution of the host galaxy at each epoch to extract variability and estimate AGN accretion rates. This pipeline will be a powerful tool in connection with future deep surveys such as Pan-STARRS. The second study in this thesis describes a survey of X-ray selected AGN hosts at redshifts z > 1.5 and compares them to quiescent galaxies. This survey aims at studying the environments, sizes and morphologies of star-forming high-redshift AGN hosts in the COSMOS Survey at the epoch of peak AGN activity. Between redshifts 1.5 < z < 3.8, the COSMOS HST/ACS imaging probes the UV regime, where separating the AGN flux from its host galaxy is very challenging. Nevertheless, we successfully derived the structural properties of 249 AGN hosts using two-dimensional surface-brightness profile fitting with the GALFIT package. This is the largest sample of AGN hosts at redshift z > 1.5 to date. We analyzed the evolution of the structural parameters of AGN and non-AGN host galaxies with redshift, and compared their disturbance rates to identify the most probable AGN triggering mechanism in the luminosity range 43.5 < log10(L_X) < 45. We also conducted mock observations of AGN and quiescent galaxies to determine errors and corrections for the derived parameters. We find that the size-absolute magnitude relations of AGN hosts and non-AGN galaxies are very similar, with estimated mean sizes in both samples decreasing by ~50% between redshifts z = 1.5 and z = 3.5. Morphological classification of both active and quiescent galaxies shows that the majority of the AGN host galaxies are disc-dominated, with disturbance rates that are significantly lower than among the non-AGN galaxies. This finding suggests that major mergers are probably not responsible for triggering AGN accretion in most of these galaxies; other, secular mechanisms should therefore be responsible.
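For context, GALFIT-style two-dimensional surface-brightness fitting conventionally models each component with a Sérsic profile (the standard parametrization; the thesis-specific fitting choices are not reproduced here):

\Sigma(r) = \Sigma_e \exp\left\{ -\kappa_n \left[ \left( \frac{r}{r_e} \right)^{1/n} - 1 \right] \right\},

where r_e is the effective (half-light) radius, \Sigma_e the surface brightness at r_e, n the Sérsic index (n = 1 for an exponential disc, n = 4 for a de Vaucouleurs bulge), and \kappa_n a normalization fixed so that half of the total light falls within r_e.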
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant’s uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, knowledge about the mechanisms of nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both the plant and the fungal partner are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed here. The AM-specific regulation and cell-specific expression of potential transporter genes of M. truncatula that are specifically regulated in arbuscule-containing cells and in non-arbusculated cells of mycorrhizal roots were confirmed. A model for carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and provided symplastically to the arbuscule-containing cells. New insights into the mechanisms of carbohydrate allocation were gained by the analysis of the hexose/H+ symporter MtHxt1, which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it indeed has an impact on the carbohydrate balance throughout the whole plant in the course of the symbiosis, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. irregularis transcriptome in planta and in extraradical tissues gave new insights into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser capture microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown, specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula on the transcript and metabolite level, and provides a dataset of the R. irregularis transcriptome in planta, offering a wealth of new information for future work.
This book deals with the inner life of the capitalist firm. There we find numerous conflicts, the most important of which concerns the individual employment relationship, understood as a principal-agent problem between the manager, the principal, who issues orders, and the employee, the agent, who is expected to follow them. Whereas economic theory traditionally analyses this relationship from a (normative) perspective of the firm, in order to support the manager in finding ways to influence the behavior of the employees such that the latter – ideally – act on behalf of their superior, this book takes a neutral stance. It focusses on explaining individual behavioral patterns and the resulting interactions between the actors in the firm by taking sociological, institutional and, above all, psychological research into consideration. In doing so, it gains insights that challenge many assertions economists take for granted.
Background: Short lived, iteroparous animals in seasonal environments experience variable social and environmental conditions over their lifetime. Animals can be divided into those with a "young-of-the-year" life history (YY, reproducing and dying in the summer of birth) and an "overwinter" life history (OW, overwintering in a subadult state before reproducing next spring).
We investigated how behavioural patterns across the population were affected by season and sex, and whether variation in behaviour reflects the variation in life history patterns of each season. Applications of pace-of-life (POL) theory would suggest that long-lived OW animals are shyer in order to increase survival, and YY are bolder in order to increase reproduction. Therefore, we expected that in winter and spring samples, when only OW can be sampled, the animals should be shyer than in summer and autumn, when both OW and YY animals can be sampled. We studied common vole (Microtus arvalis) populations, which express typical, intra-annual density fluctuation. We captured a total of 492 voles at different months over 3 years and examined boldness and activity level with two standardised behavioural experiments.
Results: Behavioural variables of the two tests were correlated with each other. Boldness, measured as short latencies in both tests, was extremely high in spring compared to other seasons. Activity level was highest in spring and summer, and higher in males than in females.
Conclusion: Being bold in laboratory tests may translate into higher risk-taking in nature, for instance by being more mobile while seeking out partners or valuable territories. Possible explanations include asset protection, with OW animals being rather old, with low residual reproductive value, in spring. Therefore, OW animals may take higher risks during this season. Offspring born in spring encounter a lower population density and may have a higher reproductive value than offspring of later cohorts. A consistent connection between life history and animal personality, as suggested by POL theory, was, however, not found. Nevertheless, correlations of traits suggest the existence of animal personalities. In conclusion, complex patterns of population dynamics, seasonal variation in life histories, and variability of behaviour due to asset protection may cause complex seasonal behavioural dynamics in a population.
Filming illegals
(2013)
Introduction
(2013)
Challenging Khmer citizenship : minorities, the state, and the international community in Cambodia
(2013)
The idea of a distinctly ‘liberal’ form of multiculturalism has emerged in the theory and practice of Western democracies and the international community has become actively engaged in its global dissemination via international norms and organizations. This thesis investigates the internationalization of minority rights, by exploring state-minority relations in Cambodia, in light of Will Kymlicka’s theory of multicultural citizenship. Based on extensive empirical research, the analysis explores the situation and aspirations of Cambodia’s ethnic Vietnamese, highland peoples, Muslim Cham, ethnic Chinese and Lao and the relationships between these groups and the state. All Cambodian regimes since independence have defined citizenship with reference to the ethnicity of the Khmer majority and have - often violently - enforced this conception through the assimilation of highland peoples and the Cham and the exclusion of ethnic Vietnamese and Chinese. Cambodia’s current constitution, too, defines citizenship ethnically. State-sponsored Khmerization systematically privileges members of the majority culture and marginalizes minority members politically, economically and socially. The thesis investigates various international initiatives aimed at promoting application of minority rights norms in Cambodia. It demonstrates that these initiatives have largely failed to accomplish a greater degree of compliance with international norms in practice. This failure can be explained by a number of factors, among them Cambodia’s neo-patrimonial political system, the geo-political fears of a ‘minoritized’ Khmer majority, the absence of effective regional security institutions, the lack of minority access to political decision-making, the significant differences between international and Cambodian conceptions of modern statehood and citizenship and the emergence of China as Cambodia’s most important bilateral donor and investor. Based on this analysis, the dissertation develops recommendations for a sequenced approach to minority rights promotion, with pragmatic, less ambitious shorter-term measures that work progressively towards achievement of international norms in the longer-term.
In many biological and environmental applications, spatially resolved sensing of molecular oxygen is desirable. A powerful tool for distributed measurements is optical time domain reflectometry (OTDR), which is often used in the field of telecommunications. We combine this technique with a novel optical oxygen sensor dye, triangular-[4]phenylene (TP), immobilized in a polymer matrix. The TP luminescence decay time is 86 ns. This short decay time of the sensor dye makes it possible to achieve a spatial resolution of a few meters. In this paper we present the development and characterization of a reflectometer in the UV range of the electromagnetic spectrum as well as optical oxygen sensing with different fiber arrangements.
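The connection between decay time and spatial resolution follows from the usual OTDR two-way travel-time argument (a back-of-the-envelope estimate; the fibre group index of roughly 1.5 is an assumption, not stated in the abstract):

\Delta z \approx \frac{c\,\tau}{2 n_g} = \frac{3 \times 10^8\ \mathrm{m\,s^{-1}} \times 86 \times 10^{-9}\ \mathrm{s}}{2 \times 1.5} \approx 8.6\ \mathrm{m},

consistent with the quoted resolution of a few meters.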
Business processes are instrumental in managing work in organisations. To study the interdependencies between business processes, Business Process Architectures (BPAs) have been introduced. These express trigger and message flow relations between business processes. When we investigate real-world Business Process Architectures, we find complex interdependencies involving multiple process instances. These aspects have not been studied in detail so far, especially concerning correctness properties. In this paper, we propose a modular transformation of BPAs to open nets for the analysis of behavior involving multiple business processes with multiplicities. For this purpose we introduce intermediary nets to capture the semantics of multiplicity specifications. We evaluate our approach on a use case from the public sector.
The German Banking System
(2013)
Constraints allow developers to specify desired properties of systems in a number of domains, and have those properties be maintained automatically. This results in compact, declarative code, avoiding scattered code to check and imperatively re-satisfy invariants. Despite these advantages, constraint programming is not yet widespread, with standard imperative programming still the norm. There is a long history of research on integrating constraint programming with the imperative paradigm. However, this integration typically does not unify the constructs for encapsulation and abstraction from both paradigms. This impedes re-use of modules, as client code written in one paradigm can only use modules written to support that paradigm. Modules require redundant definitions if they are to be used in both paradigms. We present a language – Babelsberg – that unifies the constructs for encapsulation and abstraction by using only object-oriented method definitions for both declarative and imperative code. Our prototype – Babelsberg/R – is an extension to Ruby, and continues to support Ruby’s object-oriented semantics. It allows programmers to add constraints to existing Ruby programs in incremental steps by placing them on the results of normal object-oriented message sends. It is implemented by modifying a state-of-the-art Ruby virtual machine. The performance of standard object-oriented code without constraints is only modestly impacted, with typically less than 10% overhead compared with the unmodified virtual machine. Furthermore, our architecture for adding multiple constraint solvers allows Babelsberg to deal with constraints in a variety of domains. We argue that our approach provides a useful step toward making constraint solving a generic tool for object-oriented programmers. We also provide example applications, written in our Ruby-based implementation, which use constraints in a variety of application domains, including interactive graphics, circuit simulations, data streaming with both hard and soft constraints on performance, and configuration file management.
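The contrast between scattered imperative invariant-checks and declared constraints can be illustrated outside Ruby as well; the following toy re-satisfaction loop in Python is deliberately far simpler than Babelsberg's solver-backed semantics and uses invented names:

    class Constrained:
        # Tiny object that re-satisfies declared constraints on every write.
        def __init__(self, **fields):
            object.__setattr__(self, "_constraints", [])
            for k, v in fields.items():
                object.__setattr__(self, k, v)

        def always(self, solve):
            # Declare an invariant: run it now and after every assignment.
            self._constraints.append(solve)
            solve(self)

        def __setattr__(self, name, value):
            object.__setattr__(self, name, value)
            for solve in self._constraints:
                solve(self)

    # Keep fahrenheit derived from celsius, declaratively; the solver writes
    # via object.__setattr__ to avoid re-triggering itself.
    t = Constrained(celsius=0.0, fahrenheit=32.0)
    t.always(lambda o: object.__setattr__(o, "fahrenheit",
                                          o.celsius * 9 / 5 + 32))
    t.celsius = 100.0
    print(t.fahrenheit)   # -> 212.0

Babelsberg generalizes this idea by routing constraint expressions through ordinary message sends and delegating re-satisfaction to pluggable solvers.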
This article expands our current knowledge about ministerial selection in coalition governments and analyses why ministerial candidates succeed in acquiring a cabinet position after general elections. It argues that political parties bargain over potential office-holders during government-formation processes, selecting future cabinet ministers from an emerging 'bargaining pool'. The article draws upon a new dataset comprising all ministrable candidates discussed by political parties during eight government-formation processes in Germany between 1983 and 2009. The conditional logit regression analysis reveals that temporal dynamics, such as the day on which a candidate enters the pool, have a significant effect on the candidate's success in achieving a cabinet position. Other determinants of ministerial selection discussed in the existing literature, such as party and parliamentary expertise, are less relevant for achieving ministerial office. The article concludes that scholarship on ministerial selection requires a stronger emphasis on its endogenous nature within government formation as well as on the relevance of temporal dynamics in such processes.
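For readers less familiar with the method: in a conditional logit model (the standard form, not reproduced from the article), the probability that candidate i is selected from the bargaining pool C of a given formation process is

P(i \mid C) = \frac{\exp(x_i^{\prime} \beta)}{\sum_{j \in C} \exp(x_j^{\prime} \beta)},

where x_i collects candidate attributes, such as the day of entering the pool, and \beta the estimated coefficients; only differences between candidates within the same pool contribute to the estimates.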
An important strand of research has investigated the question of how children acquire a morphological system, using offline data from spontaneous or elicited child language. Most of these studies have found dissociations in how children apply regular and irregular inflection (Marcus et al. 1992, Weyerts & Clahsen 1994, Rothweiler & Clahsen 1993). These studies have considerably deepened our understanding of how linguistic knowledge is acquired and organised in the human mind. Their methodological procedures, however, do not involve measurements of how children process morphologically complex forms in real time. To date, little is known about how children process inflected word forms. The aim of this study is to investigate children’s processing of inflected words in a series of online reaction time experiments. We used a cross-modal priming experiment to test for decompositional effects on the central level. We used a speeded production task and a lexical decision task to test for frequency effects on the access level in production and recognition. Children’s behaviour was compared to adults’ behaviour with respect to three participle types (-t participles, e.g. getanzt ‘danced’, vs. -n participles with stem change, e.g. gebrochen ‘broken’, vs. -n participles without stem change, e.g. geschlafen ‘slept’). For the central level, results indicate that -t participles, but not -n participles, have decomposed representations. For the access level, results indicate that -t participles are represented according to their morphemes and additionally as full forms, at least from the age of nine years onwards (Pinker 1999 and Clahsen et al. 2004). Further evidence suggested that -n participles are represented as full-form entries on the access level and that -n participles without stem change may encode morphological structure (cf. Clahsen et al. 2003). Our data also suggest that processing strategies for -t participles are applied differently in recognition and production. These results provide evidence that children (within the age range tested) employ the same mechanisms for processing participles as adults. The child lexicon grows as children form additional full-form representations for -t participles on the access level and elaborate their full-form lexical representations of -n participles on the central level. These results are consistent with processing as explained by dual-system theories.
This document presents a formula selection system for classical first-order theorem proving based on the relevance of formulae for the proof of a conjecture. It is based on unifiability of predicates and is also able to use a linguistic approach for the selection. The scope of the technique is the reduction of the set of formulae and the increase in the number of provable conjectures in a given time. Since the technique generates a subset of the formula set, it can be used as a preprocessor for automated theorem proving. The document contains the conception, implementation and evaluation of both selection concepts. While one concept generates a search graph over the negation normal forms or Skolem normal forms of the given formulae, the linguistic concept analyses the formulae, determines frequencies of lexemes and uses a tf-idf weighting algorithm to determine the relevance of the formulae. Though the concept is built for first-order logic, it is not limited to it. The concept can be used for higher-order and modal logic, too, with minimal adaptations. The system was also evaluated at the world championship of automated theorem provers (CADE ATP Systems Competition, CASC-24) in combination with the leanCoP theorem prover, and the evaluation of the results of the CASC and the benchmarks with the problems of the CASC of the year 2012 (CASC-J6) show that the concept of the system has a positive impact on the performance of automated theorem provers. Also, the benchmarks with two different theorem provers which use different calculi have shown that the selection is independent of the calculus. Moreover, the concept of TEMPLAR has been shown to be competitive to some extent with the concept of SinE and even helped one of the theorem provers to solve problems in the CASC that were not solved (or were solved more slowly) with SinE selection. Finally, the evaluation implies that the combination of the unification-based and linguistic selection yields further improved results even though no optimisation was done for the problems.
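As a rough illustration of the linguistic selection idea, the following Python sketch ranks formulae by the tf-idf weight of the lexemes they share with the conjecture. This is a simplified stand-in, not the TEMPLAR implementation:

```python
import math
from collections import Counter

def tfidf_relevance(formula_lexemes, conjecture_lexemes):
    """Score each formula by the tf-idf weight of the lexemes it shares
    with the conjecture (simplified; the actual weighting may differ)."""
    docs = [Counter(lexemes) for lexemes in formula_lexemes]
    n = len(docs)
    df = Counter(lexeme for doc in docs for lexeme in doc)  # document frequencies
    scores = []
    for doc in docs:
        total = sum(doc.values())
        scores.append(sum((doc[lex] / total) * math.log(n / df[lex])
                          for lex in set(conjecture_lexemes) if lex in doc))
    return scores

axioms = [["subset", "member", "union"], ["group", "inverse"], ["member", "subset"]]
print(tfidf_relevance(axioms, ["member", "subset"]))  # higher score = more relevant
```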
Permafrost-affected ecosystems including peat wetlands are among the most obvious regions in which current microbial controls on organic matter decomposition are likely to change as a result of global warming. Wet tundra ecosystems in particular are ideal sites for increased methane production because of the waterlogged, anoxic conditions that prevail in seasonally increasing thawed layers. The following doctoral research project focused on investigating the abundance and distribution of the methane-cycling microbial communities in four different polygons on Herschel Island and the Yukon Coast. Despite the relevance of the Canadian Western Arctic in the global methane budget, the permafrost microbial communities there have thus far remained insufficiently characterized. Through the study of methanogenic and methanotrophic microbial communities involved in the decomposition of permafrost organic matter and their potential reaction to rising environmental temperatures, the overarching goal of this thesis is to fill the current gap in understanding the fate of the organic carbon currently stored in Arctic environments and its implications regarding the methane cycle in permafrost environments. To attain this goal, a multiproxy approach including community fingerprinting analysis, cloning, quantitative PCR and next-generation sequencing was used to describe the bacterial and archaeal community present in the active layer of four polygons and to scrutinize the diversity and distribution of methane-cycling microorganisms at different depths. These methods were combined with soil property analyses in order to identify the main physico-chemical variables shaping these communities. In addition, a climate-warming simulation experiment was carried out on intact active-layer cores retrieved from Herschel Island in order to investigate the changes in the methane-cycling communities associated with an increase in soil temperature and to help better predict future methane fluxes from polygonal wet tundra environments in the context of climate change. Results showed that the microbial community found in the water-saturated and carbon-rich polygons on Herschel Island and the Yukon Coast was diverse and showed a similar distribution with depth in all four polygons sampled. Specifically, the methanogenic community identified resembled the communities found in other similar Arctic study sites and showed comparable potential methane production rates, whereas the methane-oxidizing bacterial community differed from what has been found so far, being dominated by type-II rather than type-I methanotrophs. After being subjected to strong increases in soil temperature, the active-layer microbial community demonstrated the ability to adapt quickly, and as a result shifts in community composition could be observed. These results contribute to the understanding of carbon dynamics in Arctic permafrost regions and allow an assessment of the potential impact of climate change on methane-cycling microbial communities. This thesis constitutes the first in-depth study of methane-cycling communities in the Canadian Western Arctic, striving to advance our understanding of these communities in degrading permafrost environments by establishing an important new observatory in the Circum-Arctic.
In sedimentary basins, rock thermal conductivity can vary both laterally and vertically, thus altering the basin’s thermal structure locally and regionally. Knowledge of the thermal conductivity of geological formations and its spatial variations is essential, not only for quantifying basin evolution and hydrocarbon maturation processes, but also for understanding geothermal conditions in a geological setting. In conjunction with the temperature gradient, thermal conductivity represents the basic input parameter for the determination of the heat-flow density, which, in turn, is applied as a major input parameter in thermal modeling at different scales. Drill-core samples, which are necessary to determine thermal properties by laboratory measurements, are rarely available and often limited to previously explored reservoir formations. Thus, thermal conductivities of Mesozoic rocks in the North German Basin (NGB) are largely unknown. In contrast, geophysical borehole measurements are often available for the entire drilled sequence. Therefore, prediction equations to determine thermal conductivity based on well-log data are desirable. In this study rock thermal conductivity was investigated on different scales by (1) providing thermal-conductivity measurements on Mesozoic rocks, (2) evaluating and improving commonly applied mixing models which were used to estimate matrix and pore-filled rock thermal conductivities, and (3) developing new well-log based equations to predict thermal conductivity in boreholes without core control. Laboratory measurements are performed on sedimentary rock of major geothermal reservoirs in the Northeast German Basin (NEGB) (Aalenian, Rhaethian-Liassic, Stuttgart Fm., and Middle Buntsandstein). Samples are obtained from eight deep geothermal wells that approach depths of up to 2,500 m. Bulk thermal conductivities of Mesozoic sandstones range between 2.1 and 3.9 W/(m∙K), while matrix thermal conductivity ranges between 3.4 and 7.4 W/(m∙K). Local heat flow for the Stralsund location averages 76 mW/m², which is in good agreement with values reported previously for the NEGB. For the first time, in-situ bulk thermal conductivity is indirectly calculated for entire borehole profiles in the NEGB using the determined surface heat flow and measured temperature data. Average bulk thermal conductivity, derived for geological formations within the Mesozoic section, ranges between 1.5 and 3.1 W/(m∙K). The measurement of both dry- and water-saturated thermal conductivities allows further evaluation of different two-component mixing models which are often applied in geothermal calculations (e.g., arithmetic mean, geometric mean, harmonic mean, Hashin-Shtrikman mean, and effective-medium theory mean). It is found that the geometric-mean model shows the best correlation between calculated and measured bulk thermal conductivity. However, by applying new model-dependent correction equations, the quality of fit could be significantly improved and the error spread of each model reduced. The ‘corrected’ geometric mean provides the most satisfying results and constitutes a universally applicable model for sedimentary rocks. Furthermore, lithotype-specific and model-independent conversion equations are developed, permitting a calculation of water-saturated thermal conductivity from dry-measured thermal conductivity and porosity within an error range of 5 to 10%.
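For concreteness, the two-component geometric-mean model singled out above computes bulk conductivity as a porosity-weighted geometric average of matrix and pore-fluid conductivities. A minimal sketch with illustrative values (not data from this study):

```python
def geometric_mean_bulk(lam_matrix, lam_fluid, porosity):
    """Geometric-mean mixing model for a two-component rock:
    lambda_bulk = lambda_matrix**(1 - phi) * lambda_fluid**phi."""
    return lam_matrix ** (1.0 - porosity) * lam_fluid ** porosity

# e.g. a quartz-rich matrix of 6.5 W/(m K) saturated with water (0.6 W/(m K))
# at 20% porosity -- values are illustrative only
print(geometric_mean_bulk(6.5, 0.6, 0.2))  # ~4.0 W/(m K)
```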
The limited availability of core samples and the expensive core-based laboratory measurements make it worthwhile to use petrophysical well logs to determine thermal conductivity for sedimentary rocks. The approach followed in this study is based on detailed analyses of the relationships between the thermal conductivity of rock-forming minerals, which are most abundant in sedimentary rocks, and the properties measured by standard logging tools. By using multivariate statistics separately for clastic, carbonate and evaporite rocks, the findings from these analyses allow the development of prediction equations from large artificial data sets that predict matrix thermal conductivity within an error of 4 to 11%. These equations are validated successfully on a comprehensive subsurface data set from the NGB. In comparison to the application of earlier published approaches, developed formation-dependently for certain areas, the newly developed equations show a significant error reduction of up to 50%. These results are used to infer rock thermal conductivity for entire borehole profiles. By inversion of corrected in-situ thermal-conductivity profiles, temperature profiles are calculated and compared to measured high-precision temperature logs. The resulting uncertainty in temperature prediction averages < 5%, which demonstrates the excellent temperature-prediction capabilities of the presented approach. In conclusion, data and methods are provided to achieve a much more detailed parameterization of thermal models.
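Schematically, such log-based prediction equations can be fitted by multivariate linear regression on a large (here synthetic) data set. The log responses, coefficients and noise level below are hypothetical placeholders, not the study's actual regressors:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic "artificial data set": columns mimic standard log responses
# (illustrative choices: sonic slowness DT, bulk density RHOB, gamma ray GR)
X = rng.uniform([50.0, 2.0, 10.0], [120.0, 2.9, 150.0], size=(500, 3))
true_coef = np.array([-0.02, 1.5, -0.005])            # made-up sensitivities
lam_matrix = 3.0 + X @ true_coef + rng.normal(0.0, 0.1, 500)

# fit the prediction equation: lambda_matrix = b0 + b1*DT + b2*RHOB + b3*GR
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, lam_matrix, rcond=None)

rel_err = np.abs(A @ coef - lam_matrix) / lam_matrix
print(coef, rel_err.mean())  # mean relative prediction error on synthetic data
```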
We introduce the notion of coupling distances on the space of Lévy measures in order to quantify rates of convergence towards a limiting Lévy jump diffusion in terms of its characteristic triplet, in particular in terms of the tail of the Lévy measure. The main result yields an estimate of the Wasserstein-Kantorovich-Rubinstein distance on path space between two Lévy diffusions in terms of the coupling distances. We want to apply this to obtain precise rates of convergence for Markov chain approximations and a statistical goodness-of-fit test for low-dimensional conceptual climate models with paleoclimatic data.
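For orientation, the classical Wasserstein-Kantorovich-Rubinstein distance between probability measures μ and ν on a metric space (S, d), which the coupling distances adapt to (possibly infinite) Lévy measures, is

```latex
W_p(\mu,\nu) \;=\; \Bigl(\inf_{\pi \in \Pi(\mu,\nu)} \int_{S \times S} d(x,y)^p \,\pi(\mathrm{d}x,\mathrm{d}y)\Bigr)^{1/p},
```

where Π(μ,ν) is the set of couplings of μ and ν. For p = 1, Kantorovich-Rubinstein duality identifies W₁(μ,ν) with the supremum of |∫f dμ − ∫f dν| over all 1-Lipschitz functions f; this is the standard definition, stated here only as background to the paper's refined construction.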
African states are often called corrupt, indicating that the political system in Africa differs from the one prevalent in the economically advanced democracies. This, however, does not give us any insight into what makes corruption the ruling norm of African statehood. Thus we must turn to the much-neglected theoretical work on the political economy of Africa in order to determine how the poverty of governance in Africa is firmly anchored both in Africa’s domestic socioeconomic reality and in the region’s role in the international economic order. Instead of focusing on increased monitoring, enforcement and formal democratic procedures, this book integrates economic analysis with political theory in order to arrive at a better understanding of the political-economic roots of corruption in Sub-Saharan Africa.
Initiation and perpetuation of inflammatory bowel diseases (IBD) may result from an exaggerated mucosal immune response to the luminal microbiota in a susceptible host. We proposed that this may be caused either 1) by an abnormal microbial composition or 2) by weakening of the protective mucus layer due to excessive mucus degradation, which may give luminal antigens easy access to the host mucosa, triggering inflammation. We tested whether the probiotic Enterococcus faecium NCIMB 10415 (NCIMB) is capable of reducing chronic gut inflammation by changing the existing gut microbiota composition and aimed to identify mechanisms that are involved in possible beneficial effects of the probiotic. To identify health-promoting mechanisms of the strain, we used interleukin (IL)-10 deficient mice that spontaneously develop gut inflammation and fed these mice a diet containing NCIMB (10⁶ cells g⁻¹) for 3, 8 or 24 weeks. Control mice were fed an identically composed diet but without the probiotic strain. No clear-cut differences between the animals were observed in pro-inflammatory cytokine gene expression and in intestinal microbiota composition after probiotic supplementation. However, we observed a low abundance of the mucin-degrading bacterium Akkermansia muciniphila in the mice that were fed NCIMB for 8 weeks. These low cell numbers were associated with significantly lower interferon gamma (IFN-γ) and IFN-γ-inducible protein (IP-10) mRNA levels as compared to the NCIMB-treated mice that were killed after 3 and 24 weeks of intervention. In conclusion, NCIMB was not capable of reducing gut inflammation in the IL-10⁻/⁻ mouse model. To further identify the exact role of A. muciniphila and uncover a possible interaction between this bacterium, NCIMB and the host in relation to inflammation, we performed in vitro studies using HT-29 colon cancer cells. The HT-29 cells were treated with bacterial conditioned media obtained by growing either A. muciniphila (AM-CM) or NCIMB (NCIMB-CM) or both together (COMB-CM) in Dulbecco’s Modified Eagle Medium (DMEM) for 2 h at 37 °C, followed by bacterial cell removal. HT-29 cells treated with COMB-CM displayed reduced cell viability after 18 h (p<0.01), and no viable cells were detected after 24 h of treatment, in contrast to the other groups or heated COMB-CM. Detection of activated caspase-3 in COMB-CM-treated groups indicated that death of the HT-29 cells was brought about by apoptosis. It was concluded that either NCIMB or A. muciniphila produces a soluble and heat-sensitive factor during their concomitant presence that influences cell viability in an in vitro system. We currently hypothesize that this factor is a protein, which has not yet been identified. Based on the potential effect of A. muciniphila on inflammation (in vivo) and cell viability (in vitro) in the presence of NCIMB, we investigated how the presence of A. muciniphila affects the severity of an intestinal Salmonella enterica Typhimurium (STm)-induced gut inflammation using gnotobiotic C3H mice with a background microbiota of eight bacterial species (SIHUMI, referred to as simplified human intestinal microbiota). Presence of A. muciniphila in STm-infected SIHUMI (SIHUMI-AS) mice caused significantly increased histopathology scores and elevated mRNA levels of IFN-γ, IP-10, tumor necrosis factor alpha (TNF-α), IL-12, IL-17 and IL-6 in cecal and colonic tissue.
The number of mucin-filled goblet cells was 2- to 3-fold lower in cecal tissue of SIHUMI-AS mice compared to SIHUMI mice associated with STm (SIHUMI-S) or A. muciniphila (SIHUMI-A) or SIHUMI mice. Reduced goblet cell numbers significantly correlated with increased IFN-γ (r = -0.86, ***P<0.001) in all infected mice. In addition, loss of cecal mucin sulphation was observed in SIHUMI-AS mice. Concomitant presence of A. muciniphila and STm resulted in a drastic change in the microbiota composition of the SIHUMI consortium. Bacteroides thetaiotaomicron made up 80-90% of the community in SIHUMI, SIHUMI-A and SIHUMI-S mice, but it was completely displaced by STm in SIHUMI-AS mice, in which STm contributed 94% of total bacteria. These results suggest that A. muciniphila exacerbates STm-induced intestinal inflammation by its ability to disturb host mucus homeostasis. In conclusion, an abnormal microbiota composition together with excessive mucus degradation contributes to severe intestinal inflammation in a susceptible host.
1. Introduction of China’s bank reform
1.1 Stage 1 (1978–1993): Rebuilding the financial system
1.2 Stage 2 (1994–1997): Regulating the financial system
1.3 Stage 3 (1998–2002): Deepening reform of state-owned commercial banks
1.4 Stage 4 (2003–present): Public listing of state-owned banks
2. The roles of SWF in China’s bank reform
3. Future challenges
Interactive generation of effective discourse in situated context: a planning-based approach
(2013)
As our modern-built structures are becoming increasingly complex, carrying out basic tasks such as identifying points or objects of interest in our surroundings can consume considerable time and cognitive resources. In this thesis, we present a computational approach to converting contextual information about a person's physical environment into natural language, with the aim of helping this person identify given task-related entities in their environment. Using efficient methods from automated planning – the field of artificial intelligence concerned with finding courses of action that can achieve a goal – we generate discourse that interactively guides a hearer through completing their task. Our approach addresses the challenges of controlling, adapting to, and monitoring the situated context. To this end, we develop a natural language generation system that plans how to manipulate the non-linguistic context of a scene in order to make it more favorable for references to task-related objects. This strategy distributes a hearer's cognitive load of interpreting a reference over multiple utterances rather than one long referring expression. Further, to optimize the system's linguistic choices in a given context, we learn how to distinguish speaker behavior according to its helpfulness to hearers in a certain situation, and we model the behavior of human speakers that has been proven helpful. The resulting system combines symbolic with statistical reasoning, and tackles the problem of making non-trivial referential choices in rich context. Finally, we complement our approach with a mechanism for preventing potential misunderstandings after a reference has been generated. Employing remote eye-tracking technology, we monitor the hearer's gaze and find that it provides a reliable index of online referential understanding, even in dynamically changing scenes. We thus present a system that exploits hearer gaze to generate rapid feedback on a per-utterance basis, further enhancing its effectiveness. Though we evaluate our approach in virtual environments, the efficiency of our planning-based model suggests that this work could be a step towards effective conversational human-computer interaction situated in the real world.
This dissertation is about factors that contribute to the surface forms of tones in connected speech in Akan. Akan is an African tone language, which is spoken in Ghana. It has two level tones (low and high), automatic and non-automatic downstep. Downstep is the major factor that influences the surface forms of tones. The thesis shows that downstep is caused by declination. It is argued that declination is an intonational property of Akan, which serves to signal coherence. A phonological representation using a high and a low register tone, associating to the left and right edge of an intonational phrase (IP), respectively, is proposed. Declination/downstep is modelled using a (phonetic) pitch implementation algorithm (Liberman & Pierrehumbert, 1984). An innovative application of the algorithm is presented, which naturally captures the relation between declination and downstep in Akan. Another important factor is the prosodic manifestation of sentence-level pragmatic meanings, such as sentence mode and focus. Regarding the former, the thesis shows that a post-lexical low tone, which associates with the right edge of an IP, signals interrogativity. Additionally, lexical tones in Yes-No questions are realized in a higher pitch register, which does not lead to a reduction of declination. It is claimed that the higher register is not part of the phonological representation in Akan, but that it emerges at the phonetic level to compensate for the ‘unnatural’ form of the question morpheme and to satisfy the Frequency code (Gussenhoven, 2002; 2004). An extension of Rialland’s (2007) typology in terms of a new category called “low tense” question prosody is proposed. Concerning focus marking, it is argued that the use of the morpho-syntactic focus marking strategy is related to extra-grammatical factors, such as hearer expectation, discourse expectability (Zimmermann, 2007) and emphasis (Hartmann, 2008). If a speaker of Akan wants to highlight a particular element in a sentence in situ, i.e. by means of prosody, the default prosodic structure is modified in such a way that the focused element forms its own phonological phrase (pP). If it is already contained in a pP, the boundary delimiting the focused element is enhanced (Féry, 2012). This restructuring/enhancement is accompanied by an interruption of the otherwise continuous melody due to insertion of a pause and/or a glottal stop. Besides declination and intonation, raising of H tones applies in Akan. H raising is analyzed as a local anticipatory planning effect, employed at the phonetic level, which enhances the perceptual distance between low and high tones. Low tones are raised if they are wedged between two high tones. L raising is argued to be a local carryover effect (co-articulation). Further, it is demonstrated that global anticipatory raising takes place. It is shown that Akan speakers anticipate the length of an IP. Preplanning (anticipatory raising) is argued to be an important process at the level of pitch implementation. It serves to ensure that declination can be maintained throughout the IP, which prevents pitch resetting.
The melody of an Akan sentence is largely determined by the choice of words. The inventory of post-lexical tones is small. It consists of post-lexical register tones, which trigger declination, and post-lexical intonational tones, which signal sentence type. The overall melodic shape is falling. At the local level, H raising and L raising occur. At the global level, initial low and high tones are realized higher if they occur in a long and/or complex sentence. This dissertation shows that many factors, which emerge at different levels of the tone production process, contribute to the surface form of tones in Akan.
The service-oriented architecture supports the dynamic assembly and runtime reconfiguration of complex open IT landscapes by means of runtime binding of service contracts, launching of new components and termination of outdated ones. Furthermore, the evolution of these IT landscapes is not restricted to exchanging components with other ones using the same service contracts, as new service contracts can be added as well. However, current approaches for modeling and verification of service-oriented architectures do not support these important capabilities to their full extent. In this report we present an extension of the current OMG proposal for service modeling with UML – SoaML – which overcomes these limitations. It permits modeling services and their service contracts at different levels of abstraction, provides a formal semantics for all modeling concepts, and enables verifying critical properties. Our compositional and incremental verification approach allows for complex properties including communication parameters and time, and covers not only the dynamic binding of service contracts and the replacement of components but also the evolution of the systems by means of new service contracts. The modeling as well as the verification capabilities of the presented approach are demonstrated by means of a supply chain example, and the verification results of a first prototype are shown.
Cellulose is the most abundant biopolymer on earth. In this work it has been used, in various forms ranging from wood to fully processed laboratory-grade microcrystalline cellulose, to synthesise a variety of metal and metal carbide nanoparticles and to establish structuring and patterning methodologies that produce highly functional nano-hybrids. To achieve this, the mechanisms governing the catalytic processes that bring about graphitised carbons in the presence of iron have been investigated. It was found that, when infusing cellulose with an aqueous iron salt solution and heating this mixture under inert atmosphere to 640 °C and above, a liquid eutectic mixture of iron and carbon with an atom ratio of approximately 1:1 forms. The eutectic droplets were monitored with in-situ TEM at the reaction temperature, where they could be seen dissolving amorphous carbon and leaving behind a trail of graphitised carbon sheets and subsequently iron carbide nanoparticles. These transformations turned ordinary cellulose into a conductive and porous matrix that is well suited for catalytic applications. Despite these significant changes on the nanometre scale, the shape of the matrix as a whole was retained with remarkable precision. This was exemplified by folding a sheet of cellulose paper into origami cranes and converting them via the temperature treatment into magnetic facsimiles of those cranes. The study showed that the catalytic mechanisms derived from controlled systems and described in the literature can be transferred to synthetic concepts beyond the lab without loss of generality. Once the processes determining the transformation of cellulose into functional materials were understood, the concept could be extended to other metals and metal combinations. Firstly, the procedure was utilised to produce different ternary iron carbides of the form MxFeyC (M = W, Mn). None of those ternary carbides has thus far been produced in nanoparticle form. The next part of this work encompassed combinations of iron with cobalt, nickel, palladium and copper. All of those metals were also probed alone in combination with cellulose. This produced elemental metal and metal alloy particles of low polydispersity and high stability. Both features are typically not associated with high-temperature syntheses, and they make it possible to combine good size control with a scalable process. Each of the probed reactions resulted in phase-pure, single-crystalline, stable materials. After showing that cellulose is a good stabilising and separating agent for all the investigated types of nanoparticles, the focus of the work at hand is shifted towards probing the limits of the structuring and patterning capabilities of cellulose. Moreover, possible post-processing techniques to further broaden the applicability of the materials are evaluated. This showed that, by choosing an appropriate paper, products ranging from stiff, self-sustaining monoliths to ultra-thin and very flexible cloths can be obtained after high-temperature treatment. Furthermore, cellulose has been demonstrated to be a very good substrate for many structuring and patterning techniques, from origami folding to ink-jet printing. The resulting products have been employed as electrodes, as exemplified by electrodepositing copper onto them.
Via ink-jet printing they have additionally been patterned, and the resulting electrodes have also been post-functionalised by electro-deposition of copper onto the graphitised (printed) parts of the samples. Lastly, in a preliminary test, the possibility of printing several metals simultaneously, and thereby producing finely tuneable gradients from one metal to another, was successfully demonstrated. Starting from these concepts, future experiments were outlined. The last chapter of this thesis concerned itself with alternative synthesis methods for the iron-carbon composite, thereby testing the robustness of the developed reactions. By performing the synthesis with partly dissolved scrap metal and pieces of raw, dry wood, some progress towards further use of the general synthesis technique was made. For example, by using wood instead of processed cellulose, all the established shaping techniques available for wooden objects, such as CNC milling or 3D prototyping, become accessible for the synthesis path. Moreover, wood's intrinsic, well-defined porosity and the fact that large monoliths are obtained help expand the prospects of using the composite. It was also demonstrated in this chapter that the resulting material can be applied to the environmentally important issue of waste-water cleansing. In addition to being made from renewable resources and by a cheap and easy one-pot synthesis, the material is recyclable, since the pollutants can be recovered by washing with ethanol. Most importantly, this chapter covered experiments where the reaction was performed in a crude, home-built glass vessel, fuelled – with the help of a Fresnel lens – only by direct concentrated sunlight irradiation. This concept carries the thus far presented synthetic procedures from being common laboratory syntheses to a real-world application. Based on cellulose, transition metals and simple equipment, this work enabled the easy one-pot synthesis of nano-ceramic and metal nanoparticle composites otherwise not readily accessible. Furthermore, structuring and patterning techniques and synthesis routes involving only renewable resources and environmentally benign procedures were established. The work has thereby laid the foundation for a multitude of applications and pointed towards several future projects, ranging from fundamental research to application-focused research; an industry-relevant engineering project was also envisioned.
One of the central questions in psycholinguistics is understanding whether and how prosodic phrase boundaries are used to resolve syntactic ambiguities in sentence processing. The present work addresses two questions: first, the effects of φ- and ι-boundaries on syntactic ambiguity resolution, and second, how the prosodic correlates of the auditory input are used in the phonetics-phonology mapping in order to attain a meaningful sentence interpretation.
With regard to the first aim, we investigated local syntactic ambiguities involving either φ- or ι-phrase boundaries in German and the structural preference that listeners have, based on the prosodic content. The experiments described in this work show that German listeners exploit both types of prosodic phrase boundaries to resolve local syntactic ambiguities, but that their disambiguation is altered by the presence or absence of prosodic cues correlated with the corresponding boundary. Specifically, the perception data revealed that the phonetically measured prosodic correlates of each prosodic boundary, such as pitch accents, boundary tones, deaccentuation and durational properties, do not contribute to ambiguity resolution in equal measure. Rather, it is the case that listeners rely primarily on prefinal lengthening as a correlate of phrasing in the vicinity of φ-phrase boundaries, while at the level of the ι-phrase boundary, boundary tones serve as phrasal cues. In this way, the results of the present work supply the hitherto missing information on the individual contributions of prosodic correlates to listeners’ disambiguation of syntactically ambiguous sentences in German. They further imply that the question of how German listeners resolve syntactic ambiguities cannot simply be attributed to the presence or absence of prosodic correlates. The interpretation of the phrasal structure rather depends on a more general picture of cohesion between prosodic correlates and prosodic boundary sizes.
With respect to the second aim, the processing models proposed in the present work describe a specific phonetics-phonology mapping in the vicinity of both phrase boundaries. It is assumed that auditory sentence processing proceeds in several successively organized steps, during which listeners transform overt phonetic forms into language-specific abstract surface forms. This process is referred to as the phonetics-phonology mapping in the present work. Perceptual evidence resulting from the experiments of the present work suggests that the phonetics-phonology mapping is guided by the above-mentioned boundary-related prosodic correlates. The resulting abstract phonological structure is, in turn, subjected to the syntax-prosody mapping. The outcomes of the presented perception experiments are modelled in an Optimality-Theoretic framework. The proposed OT models are grounded in the assumption that single prosodic correlates are used by listeners as a signal to syntax in sentence processing. This is in line with studies arguing that the prosodic phrase structure determines the syntactic parse (Cutler et al., 1997; Warren et al., 1995; Pynte & Prieur, 1996; Snedeker & Trueswell, 2003; Kjelgaard & Speer, 1999), to name just a few.
We study the origin, parameter optimization, and thermodynamic efficiency of isothermal rocking ratchets based on fractional subdiffusion within a generalized non-Markovian Langevin equation approach. A corresponding multi-dimensional Markovian embedding dynamics is realized using a set of auxiliary Brownian particles elastically coupled to the central Brownian particle (see video on the journal web site). We show that anomalous subdiffusive transport emerges due to an interplay of nonlinear response and viscoelastic effects for fractional Brownian motion in periodic potentials with broken space-inversion symmetry, driven by a time-periodic field. The anomalous transport becomes optimal for a subthreshold driving when the driving period matches a characteristic time scale of interwell transitions. It can also be optimized by varying the temperature, the amplitude of the periodic potential and the driving strength. The useful work done against a load shows a parabolic dependence on the load strength. It grows sublinearly with time, and the corresponding thermodynamic efficiency decays algebraically in time because the energy supplied by the driving field scales linearly with time. However, it compares well with the efficiency of normal-diffusion rocking ratchets on an appreciably long time scale.
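A standard form of such a generalized non-Markovian Langevin equation with a power-law memory kernel (the paper's precise setup may differ in details) reads

```latex
m\,\ddot{x}(t) \;=\; -\int_0^{t} \eta(t-t')\,\dot{x}(t')\,\mathrm{d}t' \;-\; \partial_x V(x,t) \;+\; \xi(t),
\qquad \eta(t) \propto t^{-\alpha},\quad 0<\alpha<1,
```

with Gaussian noise obeying the fluctuation-dissipation relation ⟨ξ(t)ξ(t′)⟩ = k_B T η(|t−t′|); in the potential-free case this yields subdiffusion, ⟨x²(t)⟩ ∝ t^α.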
Understanding the magnetic configuration of the source regions of coronal mass ejections (CMEs) is vital in order to determine the trigger and driver of these events. Observations of four CME-productive active regions are presented here, which indicate that the pre-eruption magnetic configuration is that of a magnetic flux rope. The flux ropes are formed in the solar atmosphere by the process known as flux cancellation and are stable for several hours before the eruption. The observations also indicate that the magnetic structure that erupts is not the entire flux rope as initially formed, raising the question of whether the flux rope is able to undergo a partial eruption or whether it undergoes a transition in its specific flux rope configuration shortly before the CME.
We shall examine the Pedagogical Content Knowledge (PCK) of Computer Science (CS) teachers concerning students’ Computational Thinking (CT) problem-solving skills within the context of a CS course in Dutch secondary education, and thus obtain an operational definition of CT and ascertain appropriate teaching methodology. Next, we shall develop an instrument to assess students’ CT and design a curriculum intervention geared toward teaching and improving students’ CT problem-solving skills and competences. As a result, this research will yield an operational definition of CT, knowledge about CT PCK, a CT assessment instrument, and teaching materials with accompanying teacher instructions. It shall contribute to CS teacher education, to the development of CT education and to education in other (STEM) subjects where CT plays a supporting role, both nationally and internationally.
Developing critical thinking
(2013)
Research indicates that evidence-based policy making is most successful when public administrators refer to diversified information portfolios. With the rising prominence of social media in the last decade, this paper argues that governments can benefit from integrating this publicly available, user-generated data through the technique of social media analytics (SMA). Several initiatives have already been set up to predict future policy issues, e.g. in the policy fields of crisis mitigation or migrant integration. The authors analyse these endeavours and their potential for providing more efficient and effective public policies. Furthermore, they scrutinise the challenges to governmental SMA usage, in particular with regard to legal and ethical aspects. Reflecting the latter, this paper provides forward-looking recommendations on how these technologies can best be used for future policy making in a legally and ethically sound manner.
Background: With increasing age, neuromuscular deficits (e.g., sarcopenia) may result in impaired physical performance and an increased risk for falls. Prominent intrinsic fall-risk factors are age-related decreases in balance and strength / power performance as well as cognitive decline. Additional studies are needed to develop specifically tailored exercise programs for older adults that can easily be implemented in clinical practice. Thus, the objective of the present trial is to assess the effects of a fall prevention program that was developed by an interdisciplinary expert panel on measures of balance, strength / power, body composition, cognition, psychosocial well-being, and falls self-efficacy in healthy older adults. Additionally, the time-related effects of detraining are tested.
Methods/Design: Healthy old people (n = 54) between the ages of 65 and 80 years will participate in this trial. The testing protocol comprises tests for the assessment of static / dynamic steady-state balance (i.e., Sharpened Romberg Test, instrumented gait analysis), proactive balance (i.e., Functional Reach Test; Timed Up and Go Test), reactive balance (i.e., perturbation test during bipedal stance; Push and Release Test), strength (i.e., hand grip strength test; Chair Stand Test), and power (i.e., Stair Climb Power Test; countermovement jump). Further, body composition will be analysed using a bioelectrical impedance analysis system. In addition, questionnaires for the assessment of psychosocial (i.e., World Health Organisation Quality of Life Assessment-Bref), cognitive (i.e., Mini Mental State Examination), and fall risk determinants (i.e., Falls Efficacy Scale-International) will be included in the study protocol. Participants will be randomized into two intervention groups or the control / waiting group. After baseline measures, participants in the intervention groups will conduct a 12-week balance and strength / power exercise intervention 3 times per week, with each training session lasting 30 min. (actual training time). One intervention group will complete an extensive supervised training program, while the other intervention group will complete a short version ('3 times 3') that is home-based and controlled by weekly phone calls. Post-tests will be conducted right after the intervention period. Additionally, detraining effects will be measured 12 weeks after program cessation. The control group / waiting group will not participate in any specific intervention during the experimental period, but will receive the extensive supervised program after the experimental period.
Discussion: It is expected that particularly the supervised combination of balance and strength / power training will improve performance in variables of balance, strength / power, body composition, cognitive function, psychosocial well-being, and falls self-efficacy of older adults. In addition, information regarding fall risk assessment, dose-response-relations, detraining effects, and supervision of training will be provided. Further, training-induced health-relevant changes, such as improved performance in activities of daily living, cognitive function, and quality of life, as well as a reduced risk for falls may help to lower costs in the health care system. Finally, practitioners, therapists, and instructors will be provided with a scientifically evaluated feasible, safe, and easy-to-administer exercise program for fall prevention.
Passive plant actuators have fascinated many researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof) where stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling, cellular solids enclosing a high-swelling cellulosic phase. This is the case of the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue deforms up to four times in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start off, we consider a simple fluid pressure acting in the cells and try to assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but “follows” the structure as it deforms), it will result in the cellular structure acquiring a “spontaneous” shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure itself. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular or square lattices) deform isotropically upon pressurization, we show through finite element simulations that by introducing anisotropic and non-convex, re-entrant tilings, large expansions can be achieved in each individual cell. The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls’ Young’s modulus E and internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. Therefore the potential of these pressurized structures as soft actuators is outlined. This approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offer a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells are prone to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local cell connectivity. This has repercussions both at the macroscopic (lattice-level) and microscopic (cell-level) scales.
At the macroscopic scale, these non-convex lattices can experience large anisotropic (similarly to the diamond-shaped honeycomb) or perfectly isotropic principal expansions, large shearing deformations or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape. Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they can build up or mutually compensate and hence give rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes which may be useful in the fields of soft robotics or morphing structures.
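The reported scaling with p/E is what dimensional analysis suggests for a linear-elastic cell-wall material loaded only by an internal pressure: the dimensionless strains can depend on p and the Young's modulus E only through their ratio, so that to leading order

```latex
\varepsilon^{*}_{ij} \;\approx\; f_{ij}(\text{cell geometry})\,\frac{p}{E},
```

where the dimensionless functions f_ij encode the cell architecture (wall slenderness, anisotropy, convexity). This is a heuristic restatement of the simulation result, not a derivation from the thesis.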
The aim of our article is to collect and present information about contemporary programming environments that are suitable for primary education. We studied the ways they implement (or do not implement) some programming concepts, the ways programs are represented and built in order to support young and novice programmers, as well as their suitability for different forms of sharing the results of pupils’ work. We present not only a short description of each considered environment and a taxonomy in the form of a table, but also our understanding and opinions on how and why the environments implement the same concepts and ideas in different ways and which concepts and ideas seem to be important to the creators of such environments.
User-centered design processes are the first choice when new interactive systems or services are developed to address real customer needs and provide a good user experience. Common tools for collecting user research data, conducting brainstormings, or sketching ideas are whiteboards and sticky notes. They are ubiquitously available, and no technical or domain knowledge is necessary to use them. However, traditional pen-and-paper tools fall short when it comes to saving the content and sharing it with others who cannot be in the same location. They also lack further digital advantages such as searching or sorting content. Although research on digital whiteboard and sticky-note applications has been conducted for over 20 years, these tools are not widely adopted in company contexts. While many research prototypes exist, they have not been used for an extended period of time in a real-world context. The goal of this thesis is to investigate what the enablers of and obstacles to the adoption of digital whiteboard systems are. As an instrument for different studies, we developed the Tele-Board software system for collaborative creative work. Based on interviews, observations, and findings from former research, we tried to transfer the analog way of working to the digital world. Being a software system, Tele-Board can be used with a variety of hardware and does not depend on special devices. This feature became one of the main factors for adoption on a larger scale. In this thesis, I will present three studies on the use of Tele-Board with different user groups and foci. I will use a combination of research methods (laboratory case studies and data from field research) with the overall goal of finding out when a digital whiteboard system is used and in which cases not. Not surprisingly, the system is used and accepted if a user sees a main benefit that neither analog tools nor other applications can offer. However, I found that these perceived benefits are very different for each user and usage context. If a tool provides possibilities to use it in different ways and with different equipment, the chances of its adoption by a larger group increase. Tele-Board has now been in use for over 1.5 years in a global IT company in at least five countries with a constantly growing user base. Its use, advantages, and disadvantages will be described based on 42 interviews and usage statistics from server logs. Through these insights and findings from laboratory case studies, I will present a detailed analysis of digital whiteboard use in different contexts with design implications for future systems.
Imaginary Interfaces
(2013)
The size of a mobile device is primarily determined by the size of the touchscreen. As such, researchers have found that the way to achieve ultimate mobility is to abandon the screen altogether. These wearable devices are operated using hand gestures, voice commands or a small number of physical buttons. By abandoning the screen, these devices also abandon the currently dominant spatial interaction style (such as tapping on buttons) because, seemingly, there is nothing to tap on. Unfortunately, this design prevents users from transferring the interaction knowledge they have gained from traditional touchscreen-based devices. In this dissertation, I present Imaginary Interfaces, which return spatial interaction to screenless mobile devices. With these interfaces, users point and draw in the empty space in front of them or on the palm of their hands. While they cannot see the results of their interaction, they obtain some visual and tactile feedback by watching and feeling their hands interact. After introducing the concept of Imaginary Interfaces, I present two hardware prototypes that showcase two different forms of interaction with an imaginary interface, each with its own advantages: mid-air imaginary interfaces can be large and expressive, while palm-based imaginary interfaces offer an abundance of tactile features that encourage learning. Given that imaginary interfaces offer no visual output, one of the key challenges is to enable users to discover the interface's layout. This dissertation offers three main solutions: offline learning with coordinates, browsing with audio feedback and learning by transfer. The latter I demonstrate with the Imaginary Phone, a palm-based imaginary interface that mimics the layout of a physical mobile phone that users are already familiar with. Although these designs enable interaction with Imaginary Interfaces, they tell us little about why this interaction is possible. In the final part of this dissertation, I present an exploration into which human perceptual abilities are used when interacting with a palm-based imaginary interface and how much each accounts for performance with the interface. These findings deepen our understanding of Imaginary Interfaces and suggest that palm-based Imaginary Interfaces can enable stand-alone eyes-free use for many applications, including interfaces for visually impaired users.
This article is a summary of the work carried out by the Ministry of Education in Turkey, in terms of the development of a new ICT Curriculum, together with the e-Training of teachers who will play an important role in the forthcoming pilot study. Based on recent literature on the topic, the article starts by introducing the “F@tih Project”, a national project that aims to effectively integrate technology into schools. After assessing teachers’ and students’ ICT competencies, as defined internationally, the review continues with the proposed model for the e-training of teachers. Summarizing the process of development of the new ICT curriculum, researchers underline key points of the curriculum such as dimensions, levels and competencies. Then teachers’ e-training approaches, together with selected tools, are explained in line with the importance and stages of action research that will be used throughout the pilot implementation of the curriculum and e-training process.
Learning a model for the relationship between the attributes and the annotated labels of data examples serves two purposes. Firstly, it enables the prediction of the label for examples without annotation. Secondly, the parameters of the model can provide useful insights into the structure of the data. If the data has an inherent partitioned structure, it is natural to mirror this structure in the model. Such mixture models predict by combining the individual predictions generated by the mixture components which correspond to the partitions in the data. Often the partitioned structure is latent, and has to be inferred when learning the mixture model. Directly evaluating the accuracy of the inferred partition structure is, in many cases, impossible because the ground truth cannot be obtained for comparison. However, it can be assessed indirectly by measuring the prediction accuracy of the mixture model that arises from it. This thesis addresses the interplay between the improvement of predictive accuracy by uncovering latent cluster structure in data, and the validation of the estimated structure by measuring the accuracy of the resulting predictive model. In the application of filtering unsolicited emails, the emails in the training set are latently clustered into advertisement campaigns. Uncovering this latent structure allows filtering of future emails with very low false-positive rates. In order to model the cluster structure, a Bayesian clustering model for dependent binary features is developed in this thesis. Knowing the clustering of emails into campaigns can also aid in uncovering which emails have been sent on behalf of the same network of captured hosts, so-called botnets. This association of emails to networks is another layer of latent clustering. Uncovering this latent structure allows service providers to further increase the accuracy of email filtering and to effectively defend against distributed denial-of-service attacks. To this end, a discriminative clustering model is derived in this thesis that is based on the graph of observed emails. The partitionings inferred using this model are evaluated through their capacity to predict the campaigns of new emails. Furthermore, when classifying the content of emails, statistical information about the sending server can be valuable. Learning a model that is able to make use of it requires training data that includes server statistics. In order to also use training data where the server statistics are missing, a model that is a mixture over potentially all substitutions thereof is developed. Another application is to predict the navigation behavior of the users of a website. Here, there is no a priori partitioning of the users into clusters, but to understand different usage scenarios and design different layouts for them, imposing a partitioning is necessary. The presented approach simultaneously optimizes the discriminative as well as the predictive power of the clusters. Each model is evaluated on real-world data and compared to baseline methods. The results show that explicitly modeling the assumptions about the latent cluster structure leads to improved predictions compared to the baselines. It is beneficial to incorporate a small number of hyperparameters that can be tuned to yield the best predictions in cases where the prediction accuracy cannot be optimized directly.
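Schematically, the prediction step of such a mixture model combines per-component predictions weighted by the inferred cluster responsibilities. A generic Python illustration with hypothetical per-campaign classifiers, not the thesis's Bayesian model:

```python
import numpy as np

def mixture_predict(email, components, responsibilities):
    """Combine the predictions of the mixture components, weighted by the
    probability that the example belongs to each latent cluster."""
    preds = np.array([c(email) for c in components])  # one prediction per component
    return responsibilities @ preds                   # weighted combination

# two hypothetical per-campaign spam classifiers returning spam probabilities
campaign_a = lambda words: 0.9 if "pills" in words else 0.2
campaign_b = lambda words: 0.8 if "lottery" in words else 0.1

# responsibilities would normally come from the inferred cluster posterior
print(mixture_predict({"lottery", "win"}, [campaign_a, campaign_b],
                      np.array([0.3, 0.7])))  # 0.3*0.2 + 0.7*0.8 = 0.62
```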
Large areas in the humid tropics are currently undergoing land-use change. The decrease of tropical rainforest, which is felled for land clearing and timber production, is countered by increasing areas of tree plantations and secondary forests. These changes are known to affect the regional water cycle as a result of plant-specific water demand and by influencing key soil properties which determine hydrological flow paths. One of these key properties sensitive to land-use change is the saturated hydraulic conductivity (Ks) as it governs vertical percolation of water within the soil profile. Low values of Ks in a certain soil depth can form an impeding layer and lead to perched water tables and the development of predominantly lateral flow paths such as overland flow. These processes can induce nutrient redistribution, erosion and soil degradation and thus affect ecosystem services and human livelihoods. Due to its sensitivity to land-use change, Ks is commonly used to assess the associated changes in hydrological flow paths. The objective of this dissertation was to assess the effect of land-use change on hydrological flow paths by analysing Ks as indicator variable. Sources of Ks variability, their implications for Ks monitoring and the relationship between Ks and near-surface hydrological flow paths in the context of land-use change were studied. The research area was located in central Panama, a country widely experiencing the abovementioned changes in land use. Ks is dependent on both static, soil-inherent properties such as particle size and clay mineralogy and dynamic, land use-dependent properties such as organic carbon content. By conducting a pair of studies with one of these influences held constant in each, the importance of static and dynamic properties for Ks was assessed. Applying a space-for-time approach to sample Ks under secondary forests of different age classes on comparable soils, a recovery of Ks from the former pasture use was shown to require more than eight years. The process was limited to the 0−6 cm sampling depth and showed large variability among replicates. A wavelet analysis of a Ks transect crossing different soil map units under comparable land cover, old-growth tropical rainforest, showed large small-scale variability, which was attributed to biotic influences, as well as a possible but non-conclusive influence of soil types. The two results highlight the importance of dynamic, land use-dependent influences on Ks. Monitoring studies can help to quantify land use-induced change of Ks, but there is a variety of sampling designs which differ in efficiency of estimating mean Ks. A comparative study of four designs and their suitability for Ks monitoring is used to give recommendations about designing a Ks monitoring scheme. Quantifying changes in spatial means of Ks for small catchments with a rotational stratified sampling design did not prove to be more efficient than Simple Random Sampling. The lack of large-scale spatial structure prevented benefits of stratification, and large small-scale variability resulting from local biotic processes and artificial effects of destructive sampling caused a lack of temporal consistency in the re-sampling of locations, which is part of the rotational design. The relationship between Ks and near-surface hydrological flow paths is of critical importance when assessing the consequences of land-use change in the humid tropics. 
The last part of this dissertation aimed at disclosing spatial relationships between Ks and overland flow as influenced by different land cover types. The effects of Ks on overland-flow generation were spatially variable, different between planar plots and incised flowlines and strongly influenced by land-cover characteristics. A simple comparison of Ks values and rainfall intensities was insufficient to describe the observed pattern of overland flow. Likewise, event flow in the stream was apparently not directly related to overland flow response patterns within the catchments. The study emphasises the importance of combining pedological, hydrological, meteorological and botanical measurements to comprehensively understand the land use-driven change in hydrological flow paths. In summary, Ks proved to be a suitable parameter for assessing the influence of land-use change on soils and hydrological processes. The results illustrated the importance of land cover and spatial variability of Ks for decisions on sampling designs and for interpreting overland-flow generation. As relationships between Ks and overland flow were shown to be complex and dependent on land cover, an interdisciplinary approach is required to comprehensively understand the effects of land-use change on soils and near-surface hydrological flow paths in the humid tropics.
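The "simple comparison" referred to above is essentially the classical infiltration-excess (Hortonian) criterion, sketched below with illustrative numbers; the dissertation's point is that this naive check alone cannot explain the observed overland-flow patterns.

```python
def hortonian_overland_flow(rain_intensity_mm_h, ks_mm_h):
    """Naive infiltration-excess criterion: overland flow is predicted
    whenever rainfall intensity exceeds the saturated hydraulic
    conductivity Ks of the near-surface soil."""
    return rain_intensity_mm_h > ks_mm_h

# illustrative values only: a 40 mm/h storm on soils with different Ks
for land_cover, ks in [("old-growth forest", 60.0), ("former pasture", 15.0)]:
    print(land_cover, hortonian_overland_flow(40.0, ks))
```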