600 Technology
Document Type
- Article (84)
- Postprint (55)
- Doctoral Thesis (4)
- Conference Proceeding (1)
- Other (1)
Language
- English (145)
Is part of the Bibliography
- yes (145)
Keywords
- Environmental sciences (4)
- animal personality (4)
- deep reinforcement learning (4)
- exploratory-behavior (4)
- expression (4)
- human behaviour (4)
- production control (4)
- activation (3)
- disease (3)
- 2 Different Strains (2)
Institute
- Mathematisch-Naturwissenschaftliche Fakultät (26)
- Institut für Biochemie und Biologie (25)
- Institut für Physik und Astronomie (21)
- Institut für Geowissenschaften (11)
- Institut für Chemie (8)
- Fachgruppe Betriebswirtschaftslehre (7)
- Strukturbereich Kognitionswissenschaften (7)
- Wirtschaftswissenschaften (7)
- Hasso-Plattner-Institut für Digital Engineering GmbH (6)
- Institut für Informatik und Computational Science (5)
Cells and organelles are not homogeneous but include microcompartments that alter the spatiotemporal characteristics of cellular processes. The effects of microcompartmentation on metabolic pathways are however difficult to study experimentally. The pyrenoid is a microcompartment that is essential for a carbon concentrating mechanism (CCM) that improves the photosynthetic performance of eukaryotic algae. Using Chlamydomonas reinhardtii, we obtained experimental data on photosynthesis, metabolites, and proteins in CCM-induced and CCM-suppressed cells. We then employed a computational strategy to estimate how fluxes through the Calvin-Benson cycle are compartmented between the pyrenoid and the stroma. Our model predicts that ribulose-1,5-bisphosphate (RuBP), the substrate of Rubisco, and 3-phosphoglycerate (3PGA), its product, diffuse in and out of the pyrenoid, respectively, with higher fluxes in CCM-induced cells. It also indicates that there is no major diffusional barrier to metabolic flux between the pyrenoid and stroma. Our computational approach represents a stepping stone to understanding microcompartmentalized CCM in other organisms.
Peripersonal space is the space surrounding our body, where multisensory integration of stimuli and action execution take place. The size of peripersonal space is flexible and subject to change by various personal and situational factors. The dynamic representation of our peripersonal space modulates our spatial behaviors towards other individuals. During the COVID-19 pandemic, this spatial behavior was modified by two further factors: social distancing and wearing a face mask. Evidence from offline and online studies on the impact of a face mask on pro-social behavior is mixed. In an attempt to clarify the role of face masks as pro-social or anti-social signals, 235 observers participated in the present online study. They watched pictures of two models standing at three different distances from each other (50, 90 and 150 cm), who were either wearing a face mask or not and were either interacting by initiating a handshake or just standing still. The observers’ task was to classify the models by gender. Our results show that observers react fastest, and therefore show least avoidance, for the shortest distances (50 and 90 cm), but only when models wear a face mask and do not interact. Thus, our results document both pro- and anti-social consequences of face masks as a result of the complex interplay between social distancing and interactive behavior. Practical implications of these findings are discussed.
Droughts in São Paulo
(2023)
Literature has suggested that droughts and societies are mutually shaped and, therefore, both require a better understanding of their coevolution on risk reduction and water adaptation. Although the Sao Paulo Metropolitan Region drew attention because of the 2013-2015 drought, this was not the first event. This paper revisits this event and the 1985-1986 drought to compare the evolution of drought risk management aspects. Documents and hydrological records are analyzed to evaluate the hazard intensity, preparedness, exposure, vulnerability, responses, and mitigation aspects of both events. Although the hazard intensity and exposure of the latter event were larger than the former one, the policy implementation delay and the dependency of service areas in a single reservoir exposed the region to higher vulnerability. In addition to the structural and non-structural tools implemented just after the events, this work raises the possibility of rainwater reuse for reducing the stress in reservoirs.
The olfactomotor system is typically investigated by examining sniffing in reaction to olfactory stimuli. The motor output of respiration-independent muscles has seldom been considered with regard to possible influences of smells. The Adaptive Force (AF) characterizes the capability of the neuromuscular system to adapt to external forces in a holding manner and was suggested to be more vulnerable to possible interfering stimuli due to the underlying complex control processes. The aim of this pilot study was to measure the effects of olfactory inputs on the AF of the hip and elbow flexors, respectively. The AF of 10 subjects was examined manually by experienced testers while the subjects smelled sniffing sticks with neutral, pleasant or disgusting odours. The reaction force and the limb position were recorded by a handheld device. The results show, inter alia, a significantly lower maximal isometric AF and a significantly higher AF at the onset of oscillations when perceiving disgusting odours compared to pleasant or neutral odours (p < 0.001). The adaptive holding capacity seems to reflect the functionality of neuromuscular control, which can be impaired by disgusting olfactory inputs. An undisturbed, properly functioning neuromuscular system appears to be characterized by proper length-tension control and by an earlier onset of mutual oscillations during an external force increase. This highlights the strong connection between olfaction and motor control, also with regard to respiration-independent muscles.
The layered dichalcogenide MoS2 is relevant for electrochemical Li adsorption/intercalation, in the course of which the material undergoes a concomitant structural phase transition from semiconducting 2H-MoS2 to metallic 1T-LixMoS2. With the core hole clock approach at the S L1 X-ray absorption edge we quantify the ultrafast directional charge transfer of excited S 3p electrons in-plane (∥) and out-of-plane (⊥) for 2H-MoS2 as τ2H,∥ = 0.38 ± 0.08 fs and τ2H,⊥ = 0.33 ± 0.06 fs, and for 1T-LixMoS2 as τ1T,∥ = 0.32 ± 0.12 fs and τ1T,⊥ = 0.09 ± 0.07 fs. The isotropic charge delocalization of S 3p electrons in the semiconducting 2H phase within the S-Mo-S sheets is assigned to the specific symmetry of the Mo-S bonding arrangement. Formation of 1T-LixMoS2 by lithiation accelerates the in-plane charge transfer by a factor of ~1.2 due to electron injection into the Mo-S covalent bonds and concomitant structural repositioning of S atoms within the S-Mo-S sheets. For excitation into out-of-plane orbitals, an accelerated charge transfer by a factor of ~3.7 upon lithiation occurs due to S-Li coupling.
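The reported acceleration factors follow directly from the time constants above; a quick arithmetic check:

```python
# Acceleration of directional charge transfer upon lithiation, recomputed
# from the charge-transfer times reported in the abstract (all in fs).
tau_2H_par, tau_2H_perp = 0.38, 0.33   # semiconducting 2H-MoS2
tau_1T_par, tau_1T_perp = 0.32, 0.09   # metallic 1T-LixMoS2

accel_in_plane = tau_2H_par / tau_1T_par        # ~1.2
accel_out_of_plane = tau_2H_perp / tau_1T_perp  # ~3.7

print(f"in-plane: x{accel_in_plane:.1f}, out-of-plane: x{accel_out_of_plane:.1f}")
```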
Development Aid
(2017)
Development aid has been an important catalyst for economic development and international politics since the end of WWII. A critical analysis of the main political, social and economic advances in development aid traces the development agenda from the advent of the Bretton Woods agreement, the Truman Doctrine and the Marshall Plan to the Washington Consensus and its neoliberal manifesto. The failure of the Washington Consensus and the rise of the post-Washington Consensus are analysed, providing a backdrop for the critique of economic globalisation as a development aid cornerstone. Trump’s rejection of the neoliberal globalisation agenda and departure from post-WWII ideologies is discussed.
In this work, which is part of a larger research program, a framework called "virtual data fusion" was developed to provide an automated and consistent crack detection method that allows for the cross-comparison of results from large quantities of X-ray computed tomography (CT) data. A partial implementation of this method in a custom program was developed for use in research focused on crack quantification in alkali-silica reaction (ASR)-sensitive concrete aggregates. During the CT image processing, a series of image analyses tailored for detecting specific, individual crack-like characteristics were completed. The results of these analyses were then "fused" in order to identify crack-like objects within the images with much higher accuracy than that yielded by any individual image analysis procedure. The results of this strategy demonstrated the success of the program in effectively identifying crack-like structures and quantifying characteristics, such as surface area and volume. The results demonstrated that the source of aggregate has a very significant impact on the amount of internal cracking, even when the mineralogical characteristics remain very similar. River gravels, for instance, were found to contain significantly higher levels of internal cracking than quarried stone aggregates of the same mineralogical type.
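The "fusion" idea described above can be caricatured in a few lines: several independent crack-detection analyses each produce a binary candidate mask, and a voxel is accepted as crack-like only when enough analyses agree. This is a guess at the principle, not the actual program; names and the voting rule are illustrative.

```python
import numpy as np

def fuse_crack_masks(masks, min_votes=2):
    """Combine several binary crack-candidate masks by voting: a voxel
    counts as crack-like only if at least min_votes analyses flagged it."""
    votes = np.sum([m.astype(int) for m in masks], axis=0)
    return votes >= min_votes

def crack_volume(mask, voxel_size=1.0):
    """Crack volume from the voxel count (surface area omitted here)."""
    return mask.sum() * voxel_size ** 3
```

A voxel flagged by only one analysis (e.g., a pore edge that one filter mistakes for a crack) is discarded, which is how fusion raises accuracy over any single analysis.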
Dendritic hPG-amid-C18-mPEG core-multishell nanocarriers (CMS) represent a novel class of unimolecular micelles that hold great potential as drug transporters, e.g., to facilitate topical therapy in skin diseases. Atopic dermatitis is among the most common inflammatory skin disorders, with complex barrier alterations which may affect the efficacy of topical treatment.
Here, we tested the penetration behavior and identified target structures of unloaded CMS after topical administration in healthy mice and in mice with oxazolone-induced atopic dermatitis. We further examined whole body distribution and possible systemic side effects after simulating high dosage dermal penetration by subcutaneous injection.
Following topical administration, CMS accumulated in the stratum corneum without penetration into deeper viable epidermal layers. The same was observed in atopic dermatitis mice, indicating that barrier alterations in atopic dermatitis had no influence on the penetration of CMS. Following subcutaneous injection, CMS were deposited in the regional lymph nodes as well as in liver, spleen, lung, and kidney. However, in vitro toxicity tests, clinical data, and morphometry-assisted histopathological analyses yielded no evidence of any toxic or otherwise adverse local or systemic effects of CMS, nor did they affect the severity or course of atopic dermatitis.
Taken together, CMS accumulate in the stratum corneum in both healthy and inflammatory skin and appear to be highly biocompatible in the mouse even under conditions of atopic dermatitis and thus could potentially serve to create a depot for anti-inflammatory drugs in the skin.
Increasingly fast development cycles and individualized products pose major challenges for today's smart production systems in times of Industry 4.0. The systems must be flexible and continuously adapt to changing conditions while still guaranteeing high throughputs and robustness against external disruptions. Deep reinforcement learning (RL) algorithms, which have already achieved impressive success with Google DeepMind's AlphaGo, are increasingly transferred to production systems to meet related requirements. Unlike supervised and unsupervised machine learning techniques, deep RL algorithms learn from recently collected sensor and process data in direct interaction with the environment and are able to make decisions in real time. As such, deep RL algorithms seem promising given their potential to provide decision support in complex environments such as production systems, and to simultaneously adapt to changing circumstances. While different use cases for deep RL have emerged, a structured overview and integration of findings on their application are missing. To address this gap, this contribution provides a systematic literature review of existing deep RL applications in the field of production planning and control as well as production logistics. From a performance perspective, it became evident that deep RL can significantly outperform heuristics and provides superior solutions to various industrial use cases. Nevertheless, safety and reliability concerns must be overcome before widespread use of deep RL is possible, which presupposes more intensive testing of deep RL in real-world applications beyond the already ongoing intensive simulations.
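To make the learning loop concrete at its simplest: below is a tiny tabular Q-learning sketch of a dispatching-style decision (choose machine A or B depending on a buffer state). It is far simpler than the deep RL methods surveyed above, and the states, transitions and reward table are invented for illustration, not taken from any reviewed study.

```python
import random

def q_learning_dispatch(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for a toy dispatching decision: in each buffer
    state (0-2) choose machine A (action 0) or machine B (action 1)."""
    rng = random.Random(seed)
    reward = {(0, 0): 1.0, (0, 1): 0.2,   # short queue: machine A better
              (1, 0): 0.5, (1, 1): 0.5,   # medium queue: indifferent
              (2, 0): 0.1, (2, 1): 1.0}   # long queue: machine B better
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}
    s = 0
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda a: q[(s, a)])
        r = reward[(s, a)]
        s_next = rng.randrange(3)          # toy stochastic state transition
        # Q-learning update toward reward plus discounted best next value
        q[(s, a)] += alpha * (r + gamma * max(q[(s_next, 0)], q[(s_next, 1)]) - q[(s, a)])
        s = s_next
    # greedy policy learned from interaction alone
    return {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)}
```

Deep RL replaces the Q table with a neural network so that the same interaction-driven update scales to the high-dimensional sensor and process data mentioned above.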
The in-depth understanding of charge carrier photogeneration and recombination mechanisms in organic solar cells is still an ongoing effort. In donor:acceptor (bulk) heterojunction organic solar cells, charge photogeneration and recombination are inter-related via the kinetics of charge transfer states, be they singlet or triplet states. Although high charge-photogeneration quantum yields are achieved in many donor:acceptor systems, only very few systems show significantly reduced bimolecular recombination relative to the rate of free carrier encounters in low-mobility systems. This is a serious limitation for the industrialization of organic solar cells, in particular when aiming at thick active layers. Herein, a meta-analysis of the device performance of numerous bulk heterojunction organic solar cells is presented, for which field-dependent photogeneration, charge carrier mobility, and fill factor are determined. Furthermore, a "spin-related factor" is introduced that depends on the ratio of the back electron transfer rate of the triplet charge transfer (CT) states to the decay rate of the singlet CT states. It is shown that this factor links the recombination reduction factor to charge-generation efficiency. As a consequence, it is only in the systems with very efficient charge generation and very fast CT dissociation that free carrier recombination is strongly suppressed, regardless of the spin-related factor.
Many knowledge representation tasks involve trees or similar structures as abstract datatypes. However, devising compact and efficient declarative representations of such structural properties is non-obvious and can be challenging indeed. In this article, we take a number of acyclicity properties into consideration and investigate various logic-based approaches to encode them. We use answer set programming as the primary representation language but also consider mappings to related formalisms, such as propositional logic, difference logic and linear programming. We study the compactness of encodings and the resulting computational performance on benchmarks involving acyclic or tree structures.
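The acyclicity property at the heart of these encodings can be stated procedurally: a directed graph is acyclic iff all vertices can be eliminated by repeatedly removing sinks. The sketch below is an illustrative Python check of that property, not one of the article's declarative encodings (which are expressed in ASP, SAT, difference logic, or linear programming).

```python
from collections import defaultdict

def is_acyclic(edges, nodes):
    """Check a directed graph for acyclicity by iteratively eliminating
    sinks (vertices with no outgoing edges); every vertex can be removed
    this way iff the graph contains no directed cycle."""
    out_deg = {v: 0 for v in nodes}
    preds = defaultdict(list)
    for u, v in edges:
        out_deg[u] += 1
        preds[v].append(u)
    stack = [v for v in nodes if out_deg[v] == 0]
    removed = 0
    while stack:
        v = stack.pop()
        removed += 1
        for u in preds[v]:
            out_deg[u] -= 1
            if out_deg[u] == 0:
                stack.append(u)
    return removed == len(nodes)
```

A compact declarative encoding must capture exactly this inductive elimination order, which is what makes the representations studied in the article non-obvious to devise.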
Taxonomy plays a central role in biological sciences. It provides a communication system for scientists as it aims to enable correct identification of the studied organisms. As a consequence, species descriptions should seek to include as much available information as possible at species level to follow an integrative concept of 'taxonomics'. Here, we describe the cryptic species Epimeria frankei sp. nov. from the North Sea, and also redescribe its sister species, Epimeria cornigera. The morphological information obtained is substantiated by DNA barcodes and complete nuclear 18S rRNA gene sequences. In addition, we provide, for the first time, full mitochondrial genome data as part of a metazoan species description for a holotype, as well as the neotype. This study represents the first successful implementation of the recently proposed concept of taxonomics, using data from high-throughput technologies for integrative taxonomic studies, allowing the highest level of confidence for both biodiversity and ecological research.
Cooperation is — despite not being predicted by game theory — a widely documented aspect of human behaviour in Prisoner’s Dilemma (PD) situations. This article presents a comparison between subjects restricted to playing pure strategies and subjects allowed to play mixed strategies in a one-shot symmetric PD laboratory experiment. Subjects interact with 10 other subjects and take their decisions all at once. Because subjects in the mixed-strategy treatment group are allowed to condition their level of cooperation more precisely on their beliefs about their counterparts’ level of cooperation, we predicted the cooperation rate in the mixed-strategy treatment group to be higher than in the pure-strategy control group. The results of our experiment reject our prediction: even after controlling for beliefs about the other subjects’ level of cooperation, we find that cooperation in the mixed-strategy group is lower than in the pure-strategy group. We also find, however, that subjects in the mixed-strategy group condition their cooperative behaviour more closely on their beliefs than in the pure-strategy group. In the mixed-strategy group, most subjects choose intermediate levels of cooperation.
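To make the mixed-strategy space concrete: in a symmetric one-shot PD with payoffs T > R > P > S, a player's expected payoff is linear in their own cooperation probability, so for any belief about the counterpart, full defection maximizes expected material payoff. The payoff values below are the textbook convention, not the experiment's parameters.

```python
# Hypothetical symmetric PD payoffs (temptation, reward, punishment, sucker).
T, R, P, S = 5, 3, 1, 0

def expected_payoff(p_coop, belief):
    """Expected payoff of cooperating with probability p_coop against a
    counterpart believed to cooperate with probability `belief`."""
    return (p_coop * belief * R            # both cooperate
            + p_coop * (1 - belief) * S    # own cooperation exploited
            + (1 - p_coop) * belief * T    # exploit the counterpart
            + (1 - p_coop) * (1 - belief) * P)  # mutual defection
```

Intermediate cooperation levels, as most mixed-strategy subjects chose, are therefore not payoff-maximizing under standard game theory, which is what makes the observed behaviour noteworthy.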
With the spread of smartphones capable of taking high-resolution photos and the development of high-speed mobile data infrastructure, digital visual media is becoming one of the most important forms of modern communication. With this development, however, also comes a devaluation of images as a media form, with the focus shifting to the frequency at which visual content is generated rather than the quality of the content. In this work, an interactive system using image-abstraction techniques and an eye-tracking sensor is presented, which allows users to experience diverting and dynamic artworks that react to their eye movements. The underlying modular architecture enables a variety of different interaction techniques that share common design principles, making the interface as intuitive as possible. The result is a game-like interaction in which users aim for a reward, the artwork, while being held under constraints, e.g., not blinking. The conscious eye movements that are required by some interaction techniques hint at an interesting possible future extension of this work into the field of relaxation exercises and concentration training.
Emotions are a central element of human experience. They occur with high frequency in everyday life and play an important role in decision making. However, currently there is no consensus among researchers on what constitutes an emotion and on how emotions should be investigated. This dissertation identifies three problems of current emotion research: the problem of ground truth, the problem of incomplete constructs and the problem of optimal representation. I argue for a focus on the detailed measurement of emotion manifestations with computer-aided methods to solve these problems. This approach is demonstrated in three research projects, which describe the development of methods specific to these problems as well as their application to concrete research questions.
The problem of ground truth describes the practice of presupposing a certain structure of emotions as the a priori ground truth. This determines the range of emotion descriptions and sets a standard for the correct assignment of these descriptions. The first project illustrates how this problem can be circumvented with a multidimensional emotion perception paradigm, which stands in contrast to the emotion recognition paradigm typically employed in emotion research. This paradigm allows the calculation of an objective difficulty measure and the collection of subjective difficulty ratings for the perception of emotional stimuli. Moreover, it enables the use of an arbitrary number of emotion stimulus categories, compared to the commonly used six basic emotion categories. Accordingly, we collected data from 441 participants using dynamic facial expression stimuli from 40 emotion categories. Our findings suggest an increase in emotion perception difficulty with increasing actor age and provide evidence to suggest that young adults, the elderly and men underestimate their emotion perception difficulty. While these effects were predicted from the literature, we also found unexpected and novel results. In particular, the increased difficulty on the objective difficulty measure for female actors and observers stood in contrast to reported findings. Exploratory analyses revealed low relevance of person-specific variables for the prediction of emotion perception difficulty, but highlighted the importance of a general pleasure dimension for the ease of emotion perception.
The second project targets the problem of incomplete constructs, which relates to vaguely defined psychological constructs on emotion with insufficient ties to tangible manifestations. The project exemplifies how a modern data collection method such as face tracking can be used to sharpen these constructs, using the example of arousal, a long-standing but fuzzy construct in emotion research. It describes how measures of distance, speed and magnitude of acceleration can be computed from face tracking data and investigates their intercorrelations. We find moderate to strong correlations among all measures of static information on the one hand and all measures of dynamic information on the other. The project then investigates how self-rated arousal is tied to these measures in 401 neurotypical individuals and 19 individuals with autism. Distance to the neutral face was predictive of arousal ratings in both groups. Lower mean arousal ratings were found for the autistic group, but no difference in correlation of the measures and arousal ratings could be found between groups. Results were replicated in a high autistic traits group consisting of 41 participants. The findings suggest a qualitatively similar perception of arousal for individuals with and without autism. No correlations between valence ratings and any of the measures could be found, which emphasizes the specificity of our tested measures for the construct of arousal.
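The distance, speed and acceleration-magnitude measures described above can be sketched as follows. Array shapes, function names and the frame-rate default are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def motion_measures(landmarks, neutral, fps=30.0):
    """Compute the three measure families from a face-tracking sequence,
    assuming `landmarks` has shape (frames, points, 2) and `neutral`
    (the neutral-face landmarks) has shape (points, 2)."""
    # Static information: per-frame mean distance to the neutral face.
    dist = np.linalg.norm(landmarks - neutral, axis=-1).mean(axis=1)
    dt = 1.0 / fps
    # Dynamic information: per-frame mean speed of the landmarks ...
    vel = np.diff(landmarks, axis=0) / dt
    speed = np.linalg.norm(vel, axis=-1).mean(axis=1)
    # ... and per-frame mean magnitude of their acceleration.
    acc = np.diff(vel, axis=0) / dt
    acc_mag = np.linalg.norm(acc, axis=-1).mean(axis=1)
    return dist, speed, acc_mag
```

A frozen face held away from the neutral expression would score high on the static measure but zero on both dynamic ones, which is why the two families correlate within but not necessarily across each other.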
The problem of optimal representation refers to the search for the best representation of emotions and the assumption that there is a one-size-fits-all solution. In the third project we introduce partial least squares analysis as a general method to find an optimal representation relating two high-dimensional data sets to each other. The project demonstrates its applicability to emotion research on the question of emotion perception differences between men and women. The method was used with emotion rating data from 441 participants and face tracking data computed on 306 videos. We found quantitative as well as qualitative differences in the perception of emotional facial expressions between these groups. We showed that women’s emotional perception systematically captured more of the variance in facial expressions. Additionally, we could show that significant differences exist in the way that women and men perceive some facial expressions, which could be visualized as concrete facial expression sequences. These expressions suggest differing perceptions of masked and ambiguous facial expressions between the sexes. In order to facilitate use of the developed method by the research community, a package for the statistical environment R was written. Furthermore, to call attention to the method and its usefulness for emotion research, a website was designed that allows users to explore a model of emotion ratings and facial expression data in an interactive fashion.
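A minimal version of the partial least squares idea is to find paired directions in two data sets that have maximal covariance, obtainable from an SVD of the cross-covariance matrix. This PLS-SVD sketch is illustrative; the dissertation's R package may implement a different PLS variant.

```python
import numpy as np

def pls_components(X, Y, k=2):
    """PLS-SVD sketch: return k weight vectors for X and Y whose projected
    scores have maximal covariance, plus the singular values (covariances).
    X: (n, p) and Y: (n, q) observations on the same n samples."""
    Xc = X - X.mean(axis=0)                 # center both blocks
    Yc = Y - Y.mean(axis=0)
    C = Xc.T @ Yc / (len(X) - 1)            # (p, q) cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k], Vt[:k].T, s[:k]        # X-weights, Y-weights, covariances
```

Applied to the data above, X would hold face-tracking features and Y emotion ratings, so each component pairs a facial-movement pattern with the rating pattern it co-varies with most strongly.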
Cell-free protein synthesis as a novel tool for directed glycoengineering of active erythropoietin
(2018)
As one of the most complex post-translational modifications, glycosylation is widely involved in cell adhesion, cell proliferation and immune response. Nevertheless, glycoproteins with an identical polypeptide backbone mostly differ in their glycosylation patterns. Due to this heterogeneity, the mapping of different glycosylation patterns to their associated functions is nearly impossible. In recent years, glycoengineering tools including cell line engineering, chemoenzymatic remodeling and site-specific glycosylation have attracted increasing interest. The therapeutic hormone erythropoietin (EPO) in particular has been investigated by various groups to establish a production process resulting in a defined glycosylation pattern. However, commercially available recombinant human EPO shows batch-to-batch variations in its glycoforms. Therefore, we present an alternative method for the synthesis of active glycosylated EPO with an engineered O-glycosylation site, combining eukaryotic cell-free protein synthesis and site-directed incorporation of non-canonical amino acids with subsequent chemoselective modifications.
From an active labor market policy (ALMP) perspective, start-up subsidies for unemployed individuals are very effective in improving long-term labor market outcomes for participants. From a business perspective, however, the assessment of these public programs is less clear, since they might attract individuals with low entrepreneurial abilities and produce businesses with low survival rates and little contribution to job creation, economic growth, and innovation. In this paper, we use a rich data set to compare participants of a German start-up subsidy program for unemployed individuals to a group of regular founders who started from non-unemployment and did not receive the subsidy. The data allows us to analyze their business performance up to 40 months after business formation. We find that formerly subsidized founders lag behind not only in survival and job creation, but especially also in innovation activities. The gaps in these business outcomes are relatively constant or even widening over time. Hence, we do not see any indication of catching up in the longer run. While the gap in survival can be entirely explained by initial differences in observable start-up characteristics, the gap in business development remains and seems to be the result of restricted access to capital as well as differential business strategies and dynamics. Considering these conflicting results for the assessment of the subsidy program from an ALMP and business perspective, policy makers need to carefully weigh the costs and benefits of such a strategy to find the right policy mix.
Quorum-sensing bacteria in a growing colony of cells send out signalling molecules (so-called “autoinducers”) and themselves sense the autoinducer concentration in their vicinity. Once — due to increased local cell density inside a “cluster” of the growing colony — the concentration of autoinducers exceeds a threshold value, cells in these clusters are “induced” into a communal, multi-cell biofilm-forming mode in a cluster-wide burst event. We analyse quantitatively the influence of spatial disorder, the local heterogeneity of the spatial distribution of cells in the colony, and additional physical parameters such as the autoinducer signal range on the induction dynamics of the cell colony. Spatial inhomogeneity with higher local cell concentrations in clusters leads to earlier but more localised induction events, while homogeneous distributions lead to comparatively delayed but more concerted induction of the cell colony, and, thus, behaviour close to the mean-field dynamics. We quantify the induction dynamics with quantifiers such as the time series of induction events and burst sizes, the grouping into induction families, and the mean autoinducer concentration levels. Consequences for different scenarios of biofilm growth are discussed, providing possible cues for biofilm control in both health care and biotechnology.
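The threshold-induction mechanism described in this abstract can be illustrated with a minimal sketch. This is not the authors' model; the exponential signal kernel, the cell layouts, and all parameter values below are hypothetical stand-ins chosen only to show how clustering affects induction.

```python
import math

def autoinducer_level(cells, target, signal_range):
    """Autoinducer concentration at `target`: each cell contributes
    exp(-d / signal_range), where d is its distance to the target."""
    return sum(math.exp(-math.dist(c, target) / signal_range) for c in cells)

def induced_cells(cells, threshold, signal_range):
    """Cells whose local autoinducer level reaches the induction threshold."""
    return [c for c in cells
            if autoinducer_level(cells, c, signal_range) >= threshold]

# Same number of cells, different spatial disorder (hypothetical layouts):
clustered   = [(x, y) for x in range(3) for y in range(3)]          # dense 3x3 patch
homogeneous = [(5 * x, 5 * y) for x in range(3) for y in range(3)]  # spread out

n_clustered   = len(induced_cells(clustered, threshold=3.0, signal_range=2.0))
n_homogeneous = len(induced_cells(homogeneous, threshold=3.0, signal_range=2.0))
```

With these toy parameters the clustered layout crosses the threshold everywhere while the homogeneous one does not, mirroring the abstract's finding that clustering produces earlier, more localised induction while homogeneous colonies behave closer to the mean-field limit.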
Procrastination is a self-regulatory problem of voluntarily and destructively delaying intended and necessary or personally important tasks. Previous studies showed that procrastination is associated with executive dysfunctions that seem to be particularly strong in punishing contexts. In the present event-related potential (ERP) study, a monetary version of the parametric Go/No-Go task was performed by high and low academic procrastinators to verify the influence of motivational context (reward vs. punishment expectation) and task difficulty (easy vs. hard) on procrastination-related executive dysfunctions. The results revealed increased post-error slowing along with reduced P300 and error-related negativity (ERN) amplitudes in high (vs. low) procrastination participants, effects that indicate impaired attention and error-related processing in this group. This pattern of results did not differ as a function of task difficulty and motivation condition. However, when the task got more difficult, executive attention deficits became even more apparent at the behavioral level in high procrastinators, as indexed by increased reaction time variability. The findings substantiate prior preliminary evidence that procrastinators show difficulties in certain aspects of executive functioning (in attention and error processing) during execution of task-relevant behavior, which may be more apparent in highly demanding situations.
Materials based on biodegradable polyesters, such as poly(butylene terephthalate) (PBT) or poly(butylene terephthalate-co-poly(alkylene glycol) terephthalate) (PBTAT), have potential application as pro-regenerative scaffolds for bone tissue engineering. Herein, the preparation of films composed of PBT or PBTAT and an engineered spider silk protein, eADF4(C16), which displays multiple carboxylic acid moieties capable of binding calcium ions and facilitating their biomineralization with calcium carbonate or calcium phosphate, is reported. Human mesenchymal stem cells cultured on films mineralized with calcium phosphate show enhanced levels of alkaline phosphatase activity, suggesting that such composites have potential use for bone tissue engineering.
Energy system models are advancing rapidly. However, it is not clear whether models are becoming better, in the sense that they address the questions that decision-makers need answered to make well-informed decisions. Therefore, we investigate the gap between model improvements relevant from the perspective of modellers and what users of model results think models should address. Thus, we ask: What are the differences between energy model improvements as perceived by modellers, and the actual needs of users of model results? To answer this question, we conducted a literature review, 32 interviews, and an online survey. Our results show that user needs and ongoing improvements of energy system models align to a large degree, so that future models are indeed likely to be better than current models. We also find mismatches between the needs of modellers and users, especially in the modelling of social, behavioural and political aspects, the trade-off between model complexity and understandability, and the ways that model results should be communicated. Our findings suggest that a better understanding of user needs and closer cooperation between modellers and users is imperative to truly improve models and unlock their full potential to support the transition towards climate neutrality in Europe.
In daily life, we automatically form impressions of other individuals on the basis of subtle facial features that convey trustworthiness. Because these face-based judgements influence current and future social interactions, we investigated how perceived trustworthiness of faces affects long-term memory using event-related potentials (ERPs). In the current study, participants incidentally viewed 60 neutral faces differing in trustworthiness, and one week later, performed a surprise recognition memory task, in which the same old faces were presented intermixed with novel ones. We found that after one week untrustworthy faces were better recognized than trustworthy faces and that untrustworthy faces prompted early (350–550 ms) enhanced frontal ERP old/new differences (larger positivity for correctly remembered old faces, compared to novel ones) during recognition. Our findings point toward an enhanced long-lasting, likely familiarity-based, memory for untrustworthy faces. Even when trust judgments about a person do not necessarily need to be accurate, fast access to memories predicting potential harm may be important to guide social behaviour in daily life.
In a subset of patients, non-alcoholic fatty liver disease (NAFLD) is complicated by cell death and inflammation resulting in non-alcoholic steatohepatitis (NASH), which may progress to fibrosis and subsequent organ failure. Apart from cytokines, prostaglandins, in particular prostaglandin E-2 (PGE(2)), play a pivotal role during inflammatory processes. Expression of the key enzymes of PGE(2) synthesis, cyclooxygenase 2 and microsomal PGE synthase 1 (mPGES-1), was increased in human NASH livers in comparison to controls and correlated with the NASH activity score. Both enzymes were also induced in NASH-diet-fed wild-type mice, resulting in an increase in hepatic PGE(2) concentration that was completely abrogated in mPGES-1-deficient mice. PGE(2) is known to inhibit TNF-alpha synthesis in macrophages. A strong infiltration of monocyte-derived macrophages was observed in NASH-diet-fed mice, accompanied by an increase in hepatic TNF-alpha expression. Due to the impaired PGE(2) production, TNF-alpha expression increased much more in livers of mPGES-1-deficient mice and in the peritoneal macrophages of these mice. The increased levels of TNF-alpha resulted in enhanced IL-1 beta production, primarily in hepatocytes, and augmented hepatocyte apoptosis. In conclusion, attenuation of PGE(2) production by mPGES-1 ablation enhanced the TNF-alpha-triggered inflammatory response and hepatocyte apoptosis in diet-induced NASH.
Copolyesterurethanes (PDLCLs) based on oligo(epsilon-caprolactone) (OCL) and oligo(omega-pentadecalactone) (OPDL) segments are biodegradable thermoplastic temperature-memory polymers. The temperature-memory capability in these polymers with crystallizable control units is implemented by a thermomechanical programming process causing alterations in the crystallite arrangement and chain organization. These morphological changes can potentially affect degradation. Initial observations on the macroscopic level inspire the hypothesis that switching of the controlling units causes an accelerated degradation of the material, resulting in programmable degradation by sequential coupling of functions. Hence, detailed degradation studies on Langmuir films of a PDLCL with 40 wt% OPDL content are carried out under enzymatic catalysis. The temperature-memory creation procedure is mimicked by compression at different temperatures. The evolution of the chain organization and mechanical properties during the degradation process is investigated by means of polarization-modulated infrared reflection absorption spectroscopy, interfacial rheology and, to some extent, X-ray reflectivity. The experiments on PDLCL Langmuir films imply that degradability is not enhanced by thermal switching, as the former depends on the temperature during cold programming. Nevertheless, the thin film experiments show that the leaching of OCL segments does not induce further crystallization of the OPDL segments, which is beneficial for a controlled and predictable degradation.
alt'ai is an agent-based simulation inspired by the aesthetics, culture and environmental conditions of the Altai mountain region on the borders between Russia, Kazakhstan, China and Mongolia. It is set in a scenario of a remote automated landscape populated by sentient machines, where biological species, machines and environments autonomously interact to produce unforeseeable visual outputs. It poses the question of designing future machine-to-machine authentication protocols that are based on the use of images encoding agent behavior. The simulation also provides a rich visual perspective on this challenge. The project pleads for a heavily aestheticized approach to design practice and highlights the importance of productively inefficient and information-redundant systems.
In the last decades, there has been notable progress in solving the well-known Boolean satisfiability (Sat) problem, as witnessed by powerful Sat solvers. One of the reasons why these solvers are so fast is that they exploit structural properties of instances in their internals. This thesis deals with the well-studied structural property treewidth, which measures the closeness of an instance to being a tree. In fact, many problems are solvable in time polynomial in the instance size when parameterized by treewidth.
In this work, we study advanced treewidth-based methods and tools for problems in knowledge representation and reasoning (KR). Thereby, we provide means to establish precise runtime results (upper bounds) for canonical problems relevant to KR. Then, we present a new type of problem reduction, which we call decomposition-guided (DG), that allows us to precisely monitor the treewidth when reducing from one problem to another. This new reduction type is the basis for a long-open lower-bound result for quantified Boolean formulas and allows us to design a new methodology for establishing runtime lower bounds for problems parameterized by treewidth.
Finally, despite these lower bounds, we provide an efficient implementation of algorithms that adhere to treewidth. Our approach finds suitable abstractions of instances, which are subsequently refined in a recursive fashion, and it uses Sat solvers for solving subproblems. It turns out that our resulting solver is quite competitive for two canonical counting problems related to Sat.
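To make the central parameter concrete: treewidth of a Sat instance is usually measured on its primal graph, and cheap elimination heuristics give upper bounds on it. The sketch below is only an illustration of that notion, not the thesis's decomposition-guided reductions or its abstraction-based solver; the min-degree heuristic shown is a standard textbook bound.

```python
def primal_graph(clauses):
    """Primal graph of a CNF (clauses as lists of signed integer literals):
    variables are vertices; two variables are adjacent iff they occur
    together in some clause."""
    adj = {}
    for clause in clauses:
        vs = {abs(lit) for lit in clause}
        for v in vs:
            adj.setdefault(v, set()).update(vs - {v})
    return adj

def min_degree_width(adj):
    """Greedy min-degree elimination; returns an upper bound on treewidth."""
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # eliminate a lowest-degree vertex
        width = max(width, len(adj[v]))
        neigh = adj.pop(v)
        for u in neigh:                           # turn its neighbourhood into a clique
            adj[u].discard(v)
            adj[u] |= neigh - {u}
    return width

tree_like = [[1, 2], [2, 3], [3, 4]]   # primal graph is a path: treewidth 1
dense     = [[1, 2, 3, 4]]             # primal graph is K4: treewidth 3
```

A formula whose primal graph is (close to) a tree yields a small bound, matching the thesis's premise that "tree-like" instances admit polynomial-time algorithms once a decomposition is fixed.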
Fragmentation of peptides leaves characteristic patterns in mass spectrometry data, which can be used to identify protein sequences, but this method is challenging for mutated or modified sequences for which limited information exists. Altenburg et al. use an ad hoc learning approach to learn relevant patterns directly from unannotated fragmentation spectra.
Mass spectrometry-based proteomics provides a holistic snapshot of the entire protein set of living cells on a molecular level. Currently, only a few deep learning approaches exist that involve peptide fragmentation spectra, which represent partial sequence information of proteins.
Commonly, these approaches lack the ability to characterize less studied or even unknown patterns in spectra because of their use of explicit domain knowledge.
Here, to elevate unrestricted learning from spectra, we introduce 'ad hoc learning of fragmentation' (AHLF), a deep learning model that is end-to-end trained on 19.2 million spectra from several phosphoproteomic datasets. AHLF is interpretable, and we show that peak-level feature importance values and pairwise interactions between peaks are in line with corresponding peptide fragments.
We demonstrate our approach by detecting post-translational modifications, specifically protein phosphorylation based on only the fragmentation spectrum without a database search. AHLF increases the area under the receiver operating characteristic curve (AUC) by an average of 9.4% on recent phosphoproteomic data compared with the current state of the art on this task.
Furthermore, use of AHLF in rescoring search results increases the number of phosphopeptide identifications by a margin of up to 15.1% at a constant false discovery rate. To show the broad applicability of AHLF, we use transfer learning to also detect cross-linked peptides, as used in protein structure analysis, with an AUC of up to 94%.
A treemap is a visualization that has been specifically designed to facilitate the exploration of tree-structured data and, more generally, hierarchically structured data. The family of visualization techniques that use a visual metaphor for parent-child relationships based “on the property of containment” (Johnson, 1993) is commonly referred to as treemaps. However, as the number of variations of treemaps grows, it becomes increasingly important to distinguish clearly between techniques and their specific characteristics. This paper proposes to discern between Space-filling Treemap T_S, Containment Treemap T_C, Implicit Edge Representation Tree T_IE, and Mapped Tree T_MT for the classification of hierarchy visualization techniques and highlights their respective properties. This taxonomy is created as a hyponymy, i.e., its classes have an is-a relationship to one another: T_S ⊂ T_C ⊂ T_IE ⊂ T_MT. With this proposal, we intend to stimulate a discussion on a more unambiguous classification of treemaps and, furthermore, broaden what is understood by the concept of treemap itself.
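The space-filling, containment-based layout that characterises the narrowest class in this taxonomy can be sketched with the classic slice-and-dice algorithm. This is a minimal illustration of the layout principle only, not the paper's taxonomy machinery; the node format and names are ours.

```python
def size(node):
    """Leaf weight, or the sum of the children's weights."""
    return node["size"] if "size" in node else sum(size(c) for c in node["children"])

def slice_and_dice(node, x, y, w, h, vertical=True):
    """Recursively split the rectangle among children in proportion to their
    weights, alternating the split direction per level; returns a list of
    (leaf_name, (x, y, w, h)) pairs."""
    if "children" not in node:
        return [(node["name"], (x, y, w, h))]
    total, offset, rects = size(node), 0.0, []
    for child in node["children"]:
        frac = size(child) / total
        if vertical:
            rects += slice_and_dice(child, x + offset * w, y, frac * w, h, False)
        else:
            rects += slice_and_dice(child, x, y + offset * h, w, frac * h, True)
        offset += frac
    return rects

tree = {"name": "root", "children": [
    {"name": "a", "size": 1},
    {"name": "b", "children": [{"name": "c", "size": 1},
                               {"name": "d", "size": 2}]},
]}
layout = dict(slice_and_dice(tree, 0, 0, 1, 1))
```

Every leaf rectangle's area is proportional to its weight and the rectangles tile the parent exactly; nesting the child rectangles inside the parent's is precisely the containment metaphor the taxonomy builds on.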
A tale of shifting relations
(2021)
Understanding the dynamics between the East Asian summer monsoon (EASM) and winter monsoon (EAWM) is needed to predict their variability under future global warming scenarios. Here, we investigate the relationship between EASM and EAWM as well as the mechanisms driving their variability during the last 10,000 years by stacking marine and terrestrial (non-speleothem) proxy records from the East Asian realm. This provides a regional and proxy-independent signal for both monsoonal systems. The respective signal was subsequently analysed using a linear regression model. We find that the phase relationship between EASM and EAWM is not time-constant and significantly depends on orbital configuration changes. In addition, changes in the Atlantic Meridional Overturning Circulation, Arctic sea-ice coverage, El Niño-Southern Oscillation and sunspot numbers contributed to millennial-scale changes in the EASM and EAWM during the Holocene. We also argue that the bulk signal of monsoonal activity captured by the stacked non-speleothem proxy records supports the previously argued bias of speleothem climatic archives to moisture source changes and/or seasonality.
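The stacking-plus-regression workflow described above can be sketched in a few lines. This is a generic illustration with made-up numbers, not the authors' processing pipeline: records are normalised to z-scores, averaged into a stack, and a trend is estimated by ordinary least squares.

```python
def zscore(xs):
    """Normalise a record to zero mean and unit standard deviation
    (assumes a non-constant series)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return [(x - mean) / sd for x in xs]

def stack(records):
    """Stack equally long records: element-wise mean of the z-scored series."""
    zs = [zscore(r) for r in records]
    return [sum(col) / len(col) for col in zip(*zs)]

def ols_slope(x, y):
    """Closed-form slope of a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Toy proxy records on a common time axis (hypothetical values):
stacked = stack([[1, 2, 3, 4], [2, 4, 6, 8]])
trend = ols_slope([0, 1, 2, 3], [1, 3, 5, 7])
```

The z-scoring step is what makes the stack "proxy-independent": records measured in different units contribute equally, so the stacked signal reflects shared variability rather than any single archive's amplitude.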
Large-scale biochemical models are of increasing size due to the consideration of interacting organisms and tissues. Model reduction approaches that preserve the flux phenotypes can simplify the analysis and prediction of steady-state metabolic phenotypes. However, existing approaches either restrict the functionality of reduced models or do not lead to significant decreases in the number of modelled metabolites. Here, we introduce an approach for model reduction based on the structural property of balancing of complexes that preserves the steady-state fluxes supported by the network and can be efficiently determined at genome scale. Using two large-scale mass-action kinetic models of Escherichia coli, we show that our approach results in a substantial reduction of 99% of metabolites. Applications to genome-scale metabolic models across kingdoms of life result in up to 55% and 85% reduction in the number of metabolites when arbitrary and mass-action kinetics are assumed, respectively. We also show that predictions of the specific growth rate from the reduced models match those based on the original models. Since steady-state flux phenotypes from the original model are preserved in the reduced model, the approach paves the way for analysing other metabolic phenotypes in large-scale biochemical networks.
The use of monoclonal antibodies is ubiquitous in science and biomedicine, but the generation and validation of antibodies is nevertheless complicated and time-consuming. To address these issues, we developed a novel selection technology based on an artificial cell surface construct by which secreted antibodies are connected to the corresponding hybridoma cell when they possess the desired antigen specificity. Further, the system enables the selection of desired isotypes and the screening for potential cross-reactivities in the same context. For the design of the construct, we combined the transmembrane domain of the EGF receptor with a hemagglutinin epitope and a biotin acceptor peptide and performed a transposon-mediated transfection of myeloma cell lines. The stably transfected myeloma cell line was used for the generation of hybridoma cells, and an antigen- and isotype-specific screening method was established. The system has been validated for globular protein antigens as well as for haptens and enables a fast, early-stage selection and validation of monoclonal antibodies in one step.
We use the prolonged Greek crisis as a case study to understand how a lasting economic shock affects the innovation strategies of firms in economies with moderate innovation activities. Adopting the 3-stage CDM model, we explore the link between R&D, innovation, and productivity for different size groups of Greek manufacturing firms during the prolonged crisis. At the first stage, we find that the continuation of the crisis is harmful to the R&D engagement of smaller firms, while it increases the willingness for R&D activities among the larger ones. At the second stage, among smaller firms the knowledge production remains unaffected by R&D investments, while among larger firms the R&D decision is positively correlated with the probability of producing innovation, albeit the relationship weakens as the crisis continues. At the third stage, innovation output benefits only larger firms in terms of labor productivity, while the innovation-productivity nexus is insignificant for smaller firms during the lasting crisis.
In today's production, fluctuations in demand, shortening product life cycles, and highly configurable products require an adaptive and robust control approach to maintain competitiveness. This approach must not only optimise desired production objectives but also cope with unforeseen machine failures, rush orders, and changes in short-term demand. Previous control approaches were often implemented using a single operations layer and a standalone deep learning approach, which may not adequately address the complex organisational demands of modern manufacturing systems. To address this challenge, we propose a hyper-heuristics control model within a semi-heterarchical production system, in which multiple manufacturing and distribution agents are spread across pre-defined modules. The agents employ a deep reinforcement learning algorithm to learn a policy for selecting low-level heuristics in a situation-specific manner, thereby improving system performance and adaptability. We tested our approach in simulation and transferred it to a hybrid production environment. In doing so, we demonstrated its multi-objective optimisation capabilities compared to conventional approaches in terms of mean throughput time, tardiness, and processing of prioritised orders in a multi-layered production system. The modular design is promising in reducing the overall system complexity and facilitates a quick and seamless integration into other scenarios.
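The hyper-heuristic idea, a high-level policy choosing among low-level dispatching heuristics, can be sketched as follows. In the paper a deep reinforcement learning agent learns this mapping from the system state; here a hand-written rule stands in for the learned policy, and the job fields and state features are hypothetical.

```python
# Low-level dispatching heuristics over a machine's job queue.
def spt(queue):   # shortest processing time first
    return min(queue, key=lambda j: j["proc"])

def edd(queue):   # earliest due date first
    return min(queue, key=lambda j: j["due"])

def fifo(queue):  # first in, first out
    return min(queue, key=lambda j: j["arrival"])

def select_heuristic(state):
    """Stand-in for the learned policy: maps an observed system state to a
    low-level heuristic. A DRL agent would learn this mapping; this fixed
    rule only illustrates the situation-specific selection."""
    if state["prioritised_waiting"]:
        return fifo
    if state["tardy_risk"] > 0.5:   # hypothetical threshold
        return edd
    return spt

queue = [{"id": 1, "proc": 4, "due": 10, "arrival": 0},
         {"id": 2, "proc": 2, "due": 20, "arrival": 1},
         {"id": 3, "proc": 6, "due": 5,  "arrival": 2}]

# High tardiness risk: the policy switches to due-date-driven dispatching.
next_job = select_heuristic({"prioritised_waiting": False, "tardy_risk": 0.8})(queue)
```

Separating the two levels is the point of the design: the action space of the learning agent stays small (a handful of heuristics rather than raw job choices), which keeps the policy transferable across modules and scenarios.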
The "Lomonosov" space project is led by Lomonosov Moscow State University in collaboration with the following key partners: Joint Institute for Nuclear Research, Russia, University of California, Los Angeles (USA), University of Pueblo (Mexico), Sungkyunkwan University (Republic of Korea), and with Russian space industry organizations, to study some extreme phenomena in space related to astrophysics, astroparticle physics, space physics, and space biology. The primary goals of this experiment are to study:
-Ultra-high energy cosmic rays (UHECR) in the energy range of the Greisen-Zatsepin-Kuzmin (GZK) cutoff;
-Ultraviolet (UV) transient luminous events in the upper atmosphere;
-Multi-wavelength study of gamma-ray bursts in visible, UV, gamma, and X-rays;
-Energetic trapped and precipitated radiation (electrons and protons) at low-Earth orbit (LEO) in connection with global geomagnetic disturbances;
-Multicomponent radiation doses along the orbit of the spacecraft under different geomagnetic conditions and testing of space segments of optical observations of space debris and other space objects;
-Instrumental vestibular-sensor conflict of zero-gravity phenomena during space flight.
This paper is directed towards the general description of both scientific goals of the project and scientific equipment on board the satellite. The following papers of this issue are devoted to detailed descriptions of scientific instruments.