The European Values Education (EVE) project is a large-scale, cross-national, and longitudinal survey research programme on basic human values. The main topic of its second stage was family values in Europe. Student teachers from several European universities worked together in multicultural exchange groups. Their results are presented in this issue.
Chile's rapidly advancing economic growth, combined with a very liberal economic and urban development policy, has caused a profound social and urban restructuring of the Chilean capital. Recently, the beneficiaries of this development have notably included members of the lower middle class, for whom inexpensive guarded and fenced housing projects are being created specifically. The focus of the present study was to take a closer look at this newly forming social class and to examine the adaptive actions with which its members respond to the changed living and housing conditions within these so-called condominios. At first glance, the condominio concept appears to be the ideal answer to numerous practical problems: living in a gated neighbourhood conveys a feeling of security and control over one's immediate surroundings and, through its exclusivity, simultaneously serves as a welcome status symbol. Only at second glance does it become apparent what the condominio cannot deliver and which further problems arise from living in a gated neighbourhood. In the course of the analysis, however, the essential importance of the condominio for its residents became evident despite all these problems. The new housing form of the lower middle class is not merely a product of residents whose potentials, aspirations and values are changing. It is also actively used for the construction of social identities and is thus a central element in the formation and self-identification of this social class.
I. State and municipalities: one path or many in Europe? II. Starting points: varieties of local self-government in Europe III. Reform of intergovernmental relations between decentralisation and centralisation, nationalisation and local autonomy IV. Lines of development: reforms of intergovernmental relations in Germany, France and Great Britain V. European municipalities between convergence and distinct national logics. And finally: what role does the EU play?
I. Need for reform II. Consequences III. Functional reforms IV. Building blocks of a comprehensive administrative modernisation V. The guiding model "Municipality of the Future" VI. Voluntary phase VII. Future contract between the state and the municipalities VIII. On the 2012 amendment of the Municipal Fiscal Equalisation Act (FAG) IX. Budget consolidation and the Municipal Budget Consolidation Fund X. Co-financing fund XI. Conclusion
Background: DNA fragments that carry internal recognition sites for the restriction endonucleases to be used for cloning into a target plasmid pose a challenge for conventional cloning.
Results: A method for directional insertion of DNA fragments into plasmid vectors has been developed. The target sequence is amplified from a template DNA sample by PCR using two oligonucleotides each containing a single deoxyinosine base at the third position from the 5' end. Treatment of such PCR products with endonuclease V generates 3' protruding ends suitable for ligation with vector fragments created by conventional restriction endonuclease reactions.
Conclusions: The developed approach generates terminal cohesive ends without the use of Type II restriction endonucleases and is thus independent of the DNA sequence. Due to PCR amplification, minimal amounts of template DNA are required. Using the robust Taq enzyme or a proofreading Pfu DNA polymerase mutant, the method is applicable to a broad range of insert sequences. Appropriate primer design enables direct incorporation of terminal DNA sequence modifications such as tag addition, insertions, deletions and mutations into the cloning strategy. Further, the restriction sites of the target plasmid can be either retained or removed.
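A minimal Python sketch (hypothetical sequences) of the primer layout described above. It assumes, as commonly reported for Endonuclease V, cleavage at the second phosphodiester bond 3' of the inosine; under that assumption each end of the PCR product loses a four-base 5' fragment and retains a 3' overhang complementary to the primer's first four bases.

```python
# Sketch of the inosine/Endonuclease V strategy described above.
# Assumption: Endo V nicks the second phosphodiester bond 3' of the
# deoxyinosine ('I'); the sequences below are invented, not from the paper.

COMPLEMENT = str.maketrans("ACGTI", "TGCAC")  # dI is read here as pairing with C

def overhang_after_endo_v(primer_5to3: str) -> str:
    """3' overhang left on the opposite strand after Endo V nicking,
    for a primer carrying 'I' at the third position from the 5' end."""
    assert primer_5to3[2] == "I", "deoxyinosine must sit at position 3"
    lost = primer_5to3[:4]                     # 5'-terminal fragment released by the nick
    return lost.translate(COMPLEMENT)[::-1]    # the reverse complement protrudes

# a forward primer 5'-CAICGATGACCT...-3' would leave the 4-nt extension GCTG
print(overhang_after_endo_v("CAICGATGACCT"))
```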
Background: The linear noise approximation (LNA) is commonly used to predict how noise is regulated and exploited at the cellular level. These predictions are exact for reaction networks composed exclusively of first-order reactions or for networks involving bimolecular reactions and large numbers of molecules. It is, however, well known that gene regulation involves bimolecular interactions with molecule numbers as small as a single copy of a particular gene. It is therefore questionable how reliable the LNA predictions are for these systems.
Results: We implement in the software package intrinsic Noise Analyzer (iNA), a system size expansion based method which calculates the mean concentrations and the variances of the fluctuations to an order of accuracy higher than the LNA. We then use iNA to explore the parametric dependence of the Fano factors and of the coefficients of variation of the mRNA and protein fluctuations in models of genetic networks involving nonlinear protein degradation, post-transcriptional, post-translational and negative feedback regulation. We find that the LNA can significantly underestimate the amplitude and period of noise-induced oscillations in genetic oscillators. We also identify cases where the LNA predicts that noise levels can be optimized by tuning a bimolecular rate constant whereas our method shows that no such regulation is possible. All our results are confirmed by stochastic simulations.
Conclusion: The software iNA allows the investigation of parameter regimes where the LNA fares well and where it does not. We have shown that the parametric dependence of the coefficients of variation and Fano factors for common gene regulatory networks is better described by including terms of higher order than the LNA in the system size expansion. This analysis is considerably faster than stochastic simulations, which require extensive ensemble averaging to obtain statistically meaningful results. Hence iNA is well suited for performing computationally efficient and quantitative studies of intrinsic noise in gene regulatory networks.
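For orientation, the LNA that iNA goes beyond can be summarized in a few standard formulas; the sketch below uses the generic van Kampen form, not necessarily iNA's internal notation. iNA's corrections add terms of higher order in the inverse square root of the system size to both the mean and the covariance.

```latex
% Generic linear noise approximation (van Kampen ansatz), with system
% size $\Omega$, stoichiometric matrix $S$ and macroscopic rates $f$:
\begin{align}
  n_i &= \Omega\,\phi_i + \Omega^{1/2}\epsilon_i
      && \text{(macroscopic part plus fluctuations)}\\
  \frac{d\phi}{dt} &= S f(\phi)
      && \text{(deterministic rate equations)}\\
  \frac{dC}{dt} &= J C + C J^{\mathsf T} + \Omega^{-1} D,
      && J = \frac{\partial (Sf)}{\partial \phi},\quad
         D = S\,\operatorname{diag}(f)\,S^{\mathsf T}
\end{align}
% $C$ is the covariance of the concentration fluctuations; the LNA is
% exact when $J$ and $D$ are state-independent (first-order networks).
```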
In this paper, we determine necessary and sufficient conditions for Bruck-Reilly and generalized Bruck-Reilly ∗-extensions of arbitrary monoids to be regular, coregular and strongly π-inverse. These semigroup classes have applications in various fields of mathematics, such as matrix theory, discrete mathematics and p-adic analysis (especially in operator theory). In addition, while regularity and coregularity have many applications in the context of boundaries (again in operator theory), inverse monoids and Bruck-Reilly extensions combine fixed-point results from algebra, topology and geometry within the scope of this journal.
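For readers unfamiliar with the construction, the ordinary Bruck-Reilly extension can be stated compactly (this is the standard definition; the generalized ∗-extension considered in the paper refines it):

```latex
% Bruck-Reilly extension of a monoid $M$ with respect to an
% endomorphism $\theta\colon M \to M$:
\[
  \mathrm{BR}(M,\theta) = \mathbb{N}_0 \times M \times \mathbb{N}_0,
  \qquad
  (m,a,n)(p,b,q)
    = \bigl(m-n+t,\ \theta^{\,t-n}(a)\,\theta^{\,t-p}(b),\ q-p+t\bigr),
\]
\[
  \text{where } t = \max(n,p) \text{ and } \theta^{0} = \mathrm{id}_M.
\]
```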
Über die Autoren
(2013)
The correction of software failures tends to be very cost-intensive because their debugging is an often time-consuming development activity. During this activity, developers largely attempt to understand what causes failures: Starting with a test case that reproduces the observable failure they have to follow failure causes on the infection chain back to the root cause (defect). This idealized procedure requires deep knowledge of the system and its behavior because failures and defects can be far apart from each other. Unfortunately, common debugging tools are inadequate for systematically investigating such infection chains in detail. Thus, developers have to rely primarily on their intuition and the localization of failure causes is not time-efficient. To prevent debugging by disorganized trial and error, experienced developers apply the scientific method and its systematic hypothesis-testing. However, even when using the scientific method, the search for failure causes can still be a laborious task. First, lacking expertise about the system makes it hard to understand incorrect behavior and to create reasonable hypotheses. Second, contemporary debugging approaches provide no or only partial support for the scientific method. In this dissertation, we present test-driven fault navigation as a debugging guide for localizing reproducible failures with the scientific method. Based on the analysis of passing and failing test cases, we reveal anomalies and integrate them into a breadth-first search that leads developers to defects. This systematic search consists of four specific navigation techniques that together support the creation, evaluation, and refinement of failure cause hypotheses for the scientific method. First, structure navigation localizes suspicious system parts and restricts the initial search space. Second, team navigation recommends experienced developers for helping with failures. Third, behavior navigation allows developers to follow emphasized infection chains back to root causes. Fourth, state navigation identifies corrupted state and reveals parts of the infection chain automatically. We implement test-driven fault navigation in our Path Tools framework for the Squeak/Smalltalk development environment and limit its computation cost with the help of our incremental dynamic analysis. This lightweight dynamic analysis ensures an immediate debugging experience with our tools by splitting the run-time overhead over multiple test runs depending on developers’ needs. Hence, our test-driven fault navigation in combination with our incremental dynamic analysis answers important questions in a short time: where to start debugging, who understands failure causes best, what happened before failures, and which state properties are infected.
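The thesis's Path Tools are implemented in Squeak/Smalltalk; as a language-neutral illustration of the underlying idea of ranking suspicious system parts from passing and failing tests, here is a minimal Python sketch using the well-known Ochiai score. This is one example of such an anomaly metric, not necessarily the formula used in the dissertation.

```python
# Illustrative sketch (not the dissertation's Path Tools): ranking program
# entities by a spectrum-based anomaly score computed from passing and
# failing test coverage, in the spirit of "structure navigation".
from math import sqrt

def ochiai_ranking(coverage, outcomes):
    """coverage: {test: set(entities)}, outcomes: {test: 'pass'|'fail'}.
    Returns entities sorted by Ochiai suspiciousness (high = anomalous)."""
    total_failed = sum(1 for t in outcomes if outcomes[t] == "fail")
    entities = set().union(*coverage.values())
    scores = {}
    for e in entities:
        failed = sum(1 for t, cov in coverage.items()
                     if e in cov and outcomes[t] == "fail")
        passed = sum(1 for t, cov in coverage.items()
                     if e in cov and outcomes[t] == "pass")
        denom = sqrt(total_failed * (failed + passed))
        scores[e] = failed / denom if denom else 0.0
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

cov = {"t1": {"a", "b"}, "t2": {"b", "c"}, "t3": {"a", "c"}}
out = {"t1": "fail", "t2": "fail", "t3": "pass"}
print(ochiai_ranking(cov, out))  # 'b' (covered by both failing tests) ranks first
```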
Soft power has become an influential concept, in political science but also in politics itself. Yet it remains contested both theoretically and practically. In practice, the concept is instrumentalised to set foreign-policy action positively apart from military and economic pressure. It is unclear how soft power can function in military contexts in which hard power stands in the foreground. This relationship is analysed using the Bundeswehr's Afghanistan mission as a case, and a distinct definition of soft power is developed.
The nutrient exchange between plant and fungus is the key element of the arbuscular mycorrhizal (AM) symbiosis. The fungus improves the plant's uptake of mineral nutrients, mainly phosphate, and water, while the plant provides the fungus with photosynthetically assimilated carbohydrates. Still, knowledge about the mechanisms of the nutrient exchange between the symbiotic partners is very limited. Therefore, transport processes of both the plant and the fungal partner are investigated in this study. In order to enhance the understanding of the molecular basis underlying this tight interaction between the roots of Medicago truncatula and the AM fungus Rhizophagus irregularis, genes involved in transport processes of both symbiotic partners are analysed here. The AM-specific regulation and cell-specific expression of potential M. truncatula transporter genes, which were found to be specifically regulated in arbuscule-containing cells and in non-arbusculated cells of mycorrhizal roots, were confirmed. A model for the carbon allocation in mycorrhizal roots is suggested, in which carbohydrates are mobilized in non-arbusculated cells and symplastically provided to the arbuscule-containing cells. New insights into the mechanisms of carbohydrate allocation were gained by the analysis of the hexose/H+ symporter MtHxt1, which is regulated in distinct cells of mycorrhizal roots. Metabolite profiling of leaves and roots of a knock-out mutant, hxt1, showed that it does indeed have an impact on the carbohydrate balance of the whole plant in the course of the symbiosis, and on the interaction with the fungal partner. The primary metabolite profile of M. truncatula was shown to be altered significantly in response to mycorrhizal colonization. Additionally, molecular mechanisms determining the progress of the interaction in the fungal partner of the AM symbiosis were investigated. The R. irregularis transcriptome in planta and in extraradical tissues gave new insight into genes that are differentially expressed in these two fungal tissues. Over 3200 fungal transcripts with a significantly altered expression level in laser capture microdissection-collected arbuscules compared to extraradical tissues were identified. Among them, six previously unknown specifically regulated potential transporter genes were found. These are likely to play a role in the nutrient exchange between plant and fungus. While the substrates of three potential MFS transporters are as yet unknown, two potential sugar transporters might play a role in the carbohydrate flow towards the fungal partner. In summary, this study provides new insights into transport processes between plant and fungus in the course of the AM symbiosis, analysing M. truncatula at the transcript and metabolite level, and provides a dataset of the R. irregularis transcriptome in planta, offering a wealth of new information for future work.
A detailed characterization of antimicrobial peptides (AMPs) is in high demand, since resistance against traditional antibiotics is an emerging problem in medicine. AMPs are part of the innate immune system in every organism, and they are very efficient in the protection against bacteria, viruses, fungi and even cancer cells. Their advantage is that their target is the cell membrane, in contrast to antibiotics, which disturb the metabolism of the respective cell type. This allows AMPs to act faster and more effectively. The lack of an efficient therapy for some cancer types and the development of resistance against existing antitumor agents make AMPs promising for cancer therapy, besides being an alternative to traditional antibiotics. The aim of this work was the physical-chemical characterization of two fragments of LL-37, a human antimicrobial peptide from the cathelicidin family. The fragments LL-32 and LL-20 exhibited contrary behavior in biological experiments concerning their activity against bacterial cells, human cells and human cancer cells. LL-32 had an even higher activity than LL-37, while LL-20 had almost no effect. The interaction of the two fragments with model membranes was systematically studied in this work to understand their mode of action. Planar lipid films were mainly applied as model systems, in combination with IR spectroscopy and X-ray scattering methods. Circular dichroism spectroscopy in bulk systems complemented the results. In the first approach, the structure of the peptides was determined in aqueous solution and compared to the structure of the peptides at the air/water interface. In bulk, both peptides are in an unstructured conformation. Adsorbed and confined at the air-water interface, the peptides differ drastically in their surface activity as well as in their secondary structure. While LL-32 transforms into an α-helix lying flat at the water surface, LL-20 stays partly unstructured. This is in good agreement with the high antimicrobial activity of LL-32. In the second approach, experiments with lipid monolayers as biomimetic models for the cell membrane were performed. It could be shown that the peptides fluidize condensed monolayers of negatively charged DPPG, which can be related to the thinning of a bacterial cell membrane. An interaction of the peptides with zwitterionic PCs, as models for mammalian cells, was not clearly observed, even though LL-32 is haemolytic. In the third approach, the lipid monolayers were adapted more closely to the composition of human erythrocyte membranes by incorporating sphingomyelin (SM) into the PC monolayers. Physical-chemical properties of the lipid films were determined and the influence of the peptides on them was studied. It could be shown that the interaction of the more active LL-32 is strongly increased for heterogeneous lipid films containing both gel and fluid phases, while the interaction of LL-20 with the monolayers was unaffected. The results indicate an interaction of LL-32 with the membrane in a detergent-like way. Additionally, the peptide interaction with cancer cells was modelled by incorporating some negatively charged lipids into the PC/SM monolayers, but the increased charge had no effect on the interaction of LL-32. It was concluded that the high anti-cancer activity of the peptide originates from the changed fluidity of the cell membrane rather than from the increased surface charge. Furthermore, similarities to the physical-chemical properties of melittin, an AMP from bee venom, were demonstrated.
Kindliche Aphasie
(2013)
Strikt geregelt und bemessen
(2013)
Strafbarkeit des Bannbruchs
(2013)
Der Täter hinter dem Täter
(2013)
1. Introduction 2. Analysis of implementation of Basel III in China 2.1 Implementation of capital adequacy rules 2.2 Implementation of leverage ratio rules 2.3 Implementation of liquidity management rules 3. Suggestions for further development of China's banking industry 3.1 Promoting capital structure adjustment and broadening capital supplement channels 3.2 Transforming business models and developing intermediary and off-balance-sheet business 3.3 Increasing the intensity of risk management and refining its standards
1. Introduction 2. The growth of China’s SMBs and the changes of the banking market structure – a land of small- and medium-sized companies 2.1 The characteristics of China’s banking market structure 2.2 The growth of China’s SMBs 2.3 The changes of China’s banking market structure 3. The opportunities and challenges facing SMBs in China 3.1 Opportunities 3.2 Challenges 4. Conclusion
The idea for the workshop arose at the young researchers' conference for Judaic Studies/Jewish Studies of the Vereinigung für Jüdische Studien e. V., held in Bamberg in February 2012. There, a strong need for greater supra-regional networking became apparent. It was noted as highly desirable to establish, in addition to the young researchers' conference, regular meetings in smaller working groups. The workshop in Veitshöchheim was the first event to implement this idea promptly, eight months after the conference. The workshop was held in cooperation between the Vereinigung für Jüdische Studien and the Chair of Franconian Regional History at the University of Würzburg.
Reviewed works: von der Krone, Kerstin: Wissenschaft in Öffentlichkeit. Die Wissenschaft des Judentums und ihre Zeitschriften. Berlin: De Gruyter 2012. X, 539 pp. (= Studia Judaica, vol. 65); Thulin, Mirjam: Kaufmanns Nachrichtendienst. Ein jüdisches Gelehrtennetzwerk im 19. Jahrhundert. Göttingen: Vandenhoeck & Ruprecht 2012. 424 pp., 14 illustrations, 6 maps, 6 tables. (= Schriften des Simon-Dubnow-Instituts, vol. 16)
Reviewed work: Timm, Erika; Birnbaum, Eleazar; Birnbaum, David (eds.): Ein Leben für die Wissenschaft / A Lifetime of Achievement. Wissenschaftliche Aufsätze aus sechs Jahrzehnten von Salomo A. Birnbaum / Six Decades of Scholarly Articles by Solomon A. Birnbaum. 2 vols. Berlin/Boston: De Gruyter 2011. Vol. 1: 540 pp.; vol. 2: XXVII, 458 pp.
Diet is a major force influencing the intestinal microbiota. This is obvious from drastic changes in microbiota composition after a dietary alteration. Due to the complexity of the commensal microbiota and the high inter-individual variability, little is known about the bacterial response at the cellular level. The objective of this work was to identify mechanisms that enable gut bacteria to adapt to dietary factors. For this purpose, germ-free mice monoassociated with the commensal Escherichia coli K-12 strain MG1655 were fed three different diets over three weeks: a diet rich in starch, a diet rich in non-digestible lactose and a diet rich in casein. Two-dimensional gel electrophoresis and electrospray tandem mass spectrometry were applied to identify differentially expressed proteins of E. coli recovered from the small intestine and caecum of mice fed the lactose or casein diets in comparison with those of mice fed the starch diet. Selected differentially expressed bacterial proteins were characterised in vitro for their possible roles in bacterial adaptation to the various diets. Proteins belonging to the oxidative stress regulon oxyR, such as alkyl hydroperoxide reductase subunit F (AhpF), DNA protection during starvation protein (Dps) and ferric uptake regulatory protein (Fur), which are required for E. coli's oxidative stress response, were upregulated in E. coli of mice fed the lactose-rich diet. Reporter gene analysis revealed that not only oxidative stress but also carbohydrate-induced osmotic stress led to the OxyR-dependent expression of ahpCF and dps. Moreover, the growth of E. coli mutants lacking the ahpCF or oxyR genes was impaired in the presence of non-digestible sucrose. This indicates that some OxyR-dependent proteins are crucial for the adaptation of E. coli to osmotic stress conditions. In addition, the function of two so far poorly characterised E. coli proteins was analysed: 2-deoxy-D-gluconate 3-dehydrogenase (KduD) was upregulated in intestinal E. coli of mice fed the lactose-rich diet, and this enzyme and 5-keto-4-deoxyuronate isomerase (KduI) were downregulated on the casein-rich diet. Reporter gene analysis identified galacturonate and glucuronate as inducers of kduD and kduI gene expression. Moreover, KduI was shown to facilitate the breakdown of these hexuronates, which are normally degraded by uronate isomerase (UxaC), altronate oxidoreductase (UxaB), altronate dehydratase (UxaA), mannonate oxidoreductase (UxuB) and mannonate dehydratase (UxuA), whose expression was repressed by osmotic stress. The growth of kduID-deficient E. coli on galacturonate or glucuronate was impaired in the presence of osmotic stress, suggesting that KduI and KduD compensate for the function of the regular hexuronate-degrading enzymes under such conditions. This indicates a novel function of KduI and KduD in E. coli's hexuronate metabolism. Promotion of the intracellular formation of hexuronates by lactose connects these in vitro observations with the induction of KduD on the lactose-rich diet. Taken together, this study demonstrates the crucial influence of osmotic stress on the gene expression of E. coli enzymes involved in stress response and metabolic processes. Therefore, the adaptation to diet-induced osmotic stress is a possible key factor for bacterial colonisation of the intestinal environment.
A method is presented for acquiring the principles of three sorting algorithms by developing interactive applications in Excel.
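The abstract does not name the three algorithms; as one plausible example, the following Python sketch traces insertion sort step by step, exposing the same kind of intermediate states an interactive Excel application can visualize for learners.

```python
# Insertion sort with a trace of every intermediate state, mirroring the
# step-by-step view an interactive spreadsheet can offer to learners.
def insertion_sort_trace(values):
    a, states = list(values), []
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:   # shift larger elements one cell right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
        states.append(list(a))         # snapshot after inserting element i
    return a, states

sorted_a, trace = insertion_sort_trace([5, 2, 4, 1, 3])
for row in trace:
    print(row)   # each printed row corresponds to one visualized step
```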
Problem solving is one of the central activities performed by computer scientists as well as by computer science learners. Whereas the teaching of algorithms and programming languages is usually well structured within a curriculum, the development of learners' problem-solving skills is largely implicit and less structured. Students at all levels often face difficulties in problem analysis and solution construction. The basic assumption of the workshop is that without some formal instruction on effective strategies, even the most inventive learner may resort to unproductive trial-and-error problem-solving processes. Hence, it is important to teach problem-solving strategies and to guide teachers on how to teach their pupils this cognitive tool. Computer science educators should be aware of the difficulties and acquire appropriate pedagogical tools to help their learners gain and experience problem-solving skills.
.NET Gadgeteer Workshop
(2013)
The challenge is providing teachers with the resources they need to strengthen their instruction and better prepare students for the jobs of the 21st century. Technology can help meet the challenge. Teachers' Tryscience is a noncommercial offering, developed by the New York Hall of Science, TeachEngineering, the National Board for Professional Teaching Standards and IBM Citizenship, to provide teachers with such resources. The workshop provides deeper insight into this tool and a discussion of how to support the teaching of informatics in schools.
Reciprocal processes, whose concept can be traced back to E. Schrödinger, form a class of stochastic processes constructed as mixture of bridges, that satisfy a time Markov field property. We discuss here a new unifying approach to characterize several types of reciprocal processes via duality formulae on path spaces: The case of reciprocal processes with continuous paths associated to Brownian diffusions and the case of pure jump reciprocal processes associated to counting processes are treated. This presentation is based on joint works with M. Thieullen, R. Murr and C. Léonard.
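One classical instance of such a duality formula is the integration-by-parts identity on Wiener space, which (under suitable integrability assumptions) characterizes the Wiener process; the approach sketched in the talk extends characterizations of this type to reciprocal classes of diffusions and of counting processes:

```latex
% Duality formula on path space: for smooth cylindrical functionals $F$
% and suitable test integrands $u$, the law of the Wiener process $X$
% satisfies
\[
  \mathbb{E}\!\left[ F(X) \int_0^1 u_t \, dX_t \right]
  = \mathbb{E}\!\left[ \int_0^1 D_t F(X)\, u_t \, dt \right],
\]
% where $D_t$ denotes the Malliavin derivative on path space.
```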
We are interested in modeling the Darwinian evolution of a population described by two levels of biological parameters: individuals characterized by a heritable phenotypic trait submitted to mutation and natural selection, and cells in these individuals influencing their ability to consume resources and to reproduce. Our models are rooted in the microscopic description of a random (discrete) population of individuals characterized by one or several adaptive traits and cells characterized by their type. The population is modeled as a stochastic point process whose generator captures the probabilistic dynamics over continuous time of birth, mutation and death for individuals and birth and death for cells. The interaction between individuals (resp. between cells) is described by a competition between individual traits (resp. between cell types). We are looking for tractable large population approximations. By combining various scalings on population size, birth and death rates and mutation step, the single microscopic model is shown to lead to contrasting nonlinear macroscopic limits of different nature: deterministic approximations, in the form of ordinary, integro- or partial differential equations, or probabilistic ones, like stochastic partial differential equations or superprocesses.
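A representative single-trait version of such a generator, in the spirit of the models described (the talk's models add the cell level on top of this):

```latex
% Generator of a birth-death-mutation-competition point process acting
% on a finite population $\nu = \sum_i \delta_{x_i}$ of traits $x_i$:
\begin{align*}
  L\phi(\nu) ={} & \sum_i b(x_i)\Bigl[(1-p)\bigl(\phi(\nu+\delta_{x_i})-\phi(\nu)\bigr)
      + p\int \bigl(\phi(\nu+\delta_{x_i+h})-\phi(\nu)\bigr)\,m(x_i,dh)\Bigr] \\
  & + \sum_i \Bigl(d(x_i)+\sum_{j\neq i} c(x_i,x_j)\Bigr)
      \bigl(\phi(\nu-\delta_{x_i})-\phi(\nu)\bigr),
\end{align*}
% with birth rate $b$, mutation probability $p$ and mutation kernel $m$,
% natural death rate $d$, and competition kernel $c$. The scalings of
% population size, rates and mutation steps are applied to this object.
```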
The dissertation describes the synthesis of ring-shaped compounds (naphthalenophanes) by means of the dehydro-Diels-Alder reaction, in which pairs of enantiomers always occur. The diastereoselective construction of naphthalenophanes and the enantiomerically pure construction of biaryls are investigated. Furthermore, the physical properties of the obtained compounds are described, such as phosphorescence, separability of the resulting enantiomers, and ring strain.
This contribution presents the teaching-learning concept for fostering software engineering competencies in the mechatronics degree programme at Aschaffenburg University of Applied Sciences. The concept is multi-stage, comprising lecture, seminar and project sequences. Challenges and potentials for improvement are identified and presented. Finally, an overview is given of how teaching-learning concepts can be further developed within a recently started research project.
The traditional approach in computer science is to define competencies either normatively through a group of experts or by deriving them from an educational standard in an external field. This article presents a novel, alternative approach that employs the methodology of qualitative content analysis (QCA). The goal was to derive key competencies of informatics from established and proven approaches of informatics didactics. To this end, a list of candidate competencies was first generated from a number of textbooks on informatics didactics. This list was used as a QCA category system with which six different didactic approaches were analysed. A final refinement step checked which of the identified competencies apply in all four core areas of computer science (theoretical, technical, practical and applied computer science). This method was developed and implemented exemplarily for informatics education at school, but it is also a suitable procedure for identifying key competencies in other areas, such as informatics in higher education, and is therefore briefly presented here.
Informatik im Alltag
(2013)
Computer science as a discipline provides tools whose use is taken for granted by today's students. This fact must not obscure the reality that students generally lack a foundation in the sense of a general informatics education as defined by the educational standards of the Gesellschaft für Informatik. Informatics as a school subject still has no consistent place in the timetables of general-education schools. Future teachers must acquire sufficient media competence as part of the educational-science components of their studies. Given the paramount importance of digital media, this can only succeed on the basis of a sufficient basic informatics education. It is therefore appropriate to provide a course of study that offers all students an immersion into elements (subject areas) of computer science from an everyday perspective. These elements are used to exemplify various aspects of the discipline, providing insight into the diversity of questions and solution strategies in informatics and thus fostering basic informatics education.
The aim of our article is to collect and present information about contemporary programming environments that are suitable for primary education. We studied the ways they implement (or do not implement) some programming concepts, the ways programs are represented and built in order to support young and novice programmers, as well as their suitability to allow different forms of sharing the results of pupils’ work. We present not only a short description of each considered environment and the taxonomy in the form of a table, but also our understanding and opinions on how and why the environments implement the same concepts and ideas in different ways and which concepts and ideas seem to be important to the creators of such environments.
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to and experiences of introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey was carried out of a number of teaching professionals and experts from the UK and Germany with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci in the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which curricula in CS at school should be moving forward.
This article is a summary of the work carried out by the Ministry of Education in Turkey, in terms of the development of a new ICT Curriculum, together with the e-Training of teachers who will play an important role in the forthcoming pilot study. Based on recent literature on the topic, the article starts by introducing the “F@tih Project”, a national project that aims to effectively integrate technology into schools. After assessing teachers’ and students’ ICT competencies, as defined internationally, the review continues with the proposed model for the e-training of teachers. Summarizing the process of development of the new ICT curriculum, researchers underline key points of the curriculum such as dimensions, levels and competencies. Then teachers’ e-training approaches, together with selected tools, are explained in line with the importance and stages of action research that will be used throughout the pilot implementation of the curriculum and e-training process.
Japan launched the new Course of Study in April 2012, which has been carried out in elementary schools and junior high schools. It will also be implemented in senior high schools from April 2013. This article presents an overview of information studies education in the new Course of Study for K-12. In addition, the authors point out what role experts in informatics and information studies education should play in general education centered on information studies, which is meant to help the nation's people lead active, powerful, and flexible lives to a satisfying end.
The traditional purpose of algorithms in education is to prepare students for programming. In our effort to introduce the practically missing computing science into Czech general secondary education, we have revisited this purpose. We propose an approach which is in better accordance with the goals of general secondary education in Czechia. The importance of programming is diminishing, while recognition of algorithmic procedures and precise (yet concise) communication of algorithms is gaining importance. This includes expressing algorithms in natural language, which is more useful for most students than programming. We propose criteria to evaluate such descriptions. Finally, an idea about the limitations is required (inefficient algorithms, unsolvable problems, Turing's test). We describe these adjusted educational goals and an outline of the resulting course. Our experience with carrying out the proposed intentions is satisfactory, although we did not accomplish all the defined goals.
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observations for research. The first step is the implementation of an adaptive hypermedia based on very fine-grained epistemic modules for learning Python programming. We define the traces that need to be collected in order to study the navigation trajectories pupils draw across this hypermedia. It may be browsed by pupils either as course support or as extra help in solving the list of exercises (mainly for discovering algorithmics). By leaving the locus of control with the learner, we want to observe the different trajectories they draw through our system. These trajectories may be abstracted and interpreted as strategies and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method and the expected results.
We shall examine the Pedagogical Content Knowledge (PCK) of Computer Science (CS) teachers concerning students’ Computational Thinking (CT) problem solving skills within the context of a CS course in Dutch secondary education and thus obtain an operational definition of CT and ascertain appropriate teaching methodology. Next we shall develop an instrument to assess students’ CT and design a curriculum intervention geared toward teaching and improving students’ CT problem solving skills and competences. As a result, this research will yield an operational definition of CT, knowledge about CT PCK, a CT assessment instrument and teaching materials and accompanying teacher instructions. It shall contribute to CS teacher education, development of CT education and to education in other (STEM) subjects where CT plays a supporting role, both nationally and internationally.
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
In this paper we report on our experiments in teaching computer science concepts with a mix of tangible and abstract object manipulations. The goal we set ourselves was to let pupils discover the challenges one has to meet to automatically manipulate formatted text. We worked with a group of 25 secondary school pupils (9-10th grade), and they were actually able to “invent” the concept of mark-up language. From this experiment we distilled a set of activities which will be replicated in other classes (6th grade) under the guidance of maths teachers.
We present a concept for better integration of practical teaching in student teacher education in Computer Science. As an introduction to the workshop, different possible scenarios are discussed on the basis of examples. Afterwards, workshop participants will have the opportunity to discuss the application of the concepts in other settings.
Relating to students
(2013)
Deepening understanding
(2013)
Since the mid-1990s, language support in schools in the state of Berlin has relied primarily on the criterion of "non-German language of origin" (nichtdeutsche Herkunftssprache, ndH). By introducing this criterion, the state legislature decided no longer to tie the school-based language support of children and adolescents with a migration background, which it continued to regard as urgently necessary, to foreign citizenship, but rather, irrespective of the pupils' citizenship, to the predominance of a non-German language of communication in the family. Nothing fundamental about this approach was changed by the amendment of the Berlin School Act in 2004. Beyond its significance for individual language support in schools, however, the ndH criterion, together with the recently upgraded additional support criterion of "exemption from charges for learning materials" (Lernmittelbefreiung, LmB), now also plays a central role in the allocation of language-support funds and staff resources. In the past, the ndH criterion has repeatedly been criticised for its allegedly discriminatory and supposedly segregating effect, criticism that intensified after an incident at a primary school in Kreuzberg in 2012. Besides the fact that the ndH criterion is collected at all and used as the basis for language support, the practice of the Berlin Senate Department for Education, Youth and Science of publishing ndH quotas in the so-called school profiles on the internet has also been attacked, and the abolition of this practice demanded. The aim of the present work is to answer the question of whether this criticism is justified. Starting from an account of the introduction and development of the ndH criterion, taking into account the previously applicable legal situation, and a presentation of the current legal foundations of school language support in the state of Berlin, this criterion is examined more closely. After a definition of the ndH criterion, an explanation of the relevant regulations on ndH language support and a comparison with the additional support criterion LmB in the context of the current provisions, an overview is first given of essential aspects of school language support based on the ndH criterion in practice, again including the comparison criterion LmB. This is followed by an examination of the thesis that the ndH criterion, or at least its publication in the school profiles of the Senate school administration, has a discriminatory effect and leads to segregation of the student body. Finally, as an additional consideration, the question of the actual necessity of language support oriented towards the ndH criterion, and thus towards a family language of communication, is pursued, a necessity which, if it could be affirmed, would justify any discriminatory and segregating effects.
Information flows in EU policy-making are heavily dependent on personal networks, not only within the Brussels sphere but also reaching beyond the narrow limits of the Belgian capital. These networks develop, for example, in the course of formal and informal meetings or on the sidelines of such meetings. A plethora of committees at European, transnational and regional level provides the basis for the establishment of pan-European networks. By studying affiliation to those committees, basic network structures can be uncovered. These affiliation network structures can then be used to predict EU information flows, assuming that certain positions within the network are advantageous for tapping into streams of information while others are too remote and peripheral to provide access to information early enough. This study has tested those assumptions for the case of the reform of the Common Fisheries Policy for the time after 2012. Through the analysis of an affiliation network based on participation in 10 different fisheries policy committees over two years (2009 and 2010), network data for an EU-wide network of about 1300 fisheries interest group representatives and more than 200 events was collected. The structure of this network showed a number of interesting patterns, such as, not surprisingly, a rather central role of Brussels-based committees, but also close relations of very specific interests to the Brussels cluster and stronger relations between geographically closer maritime regions. The analysis of information flows then focused on access to draft EU Commission documents containing the upcoming proposal for a new basic regulation of the Common Fisheries Policy. It was first documented that it would have been impossible to officially obtain this document and that personal networks were thus the most likely sources for fisheries policy actors to obtain access to these "leaks" in early 2011. A survey of a sample of 65 actors from the initial network supported these findings: only a very small group had accessed the draft directly from the Commission. Most respondents who obtained access to the draft had received it from other actors, highlighting the networked flow of informal information in EU politics. Furthermore, the testing of the hypotheses connecting network positions and the level of informedness indicated that presence in or connections to the Brussels sphere had advantages both for overall access to the draft document and with regard to timing. Methodologically, challenges of both the network analysis and the analysis of information flows, but also their relevance for the study of EU politics, have been documented. In summary, this study has laid the foundation for a different way of studying EU policy-making by connecting topical and methodological elements, such as affiliation network analysis and EU committee governance, which so far have not been considered together, thereby contributing in various ways to political science and EU studies.
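A minimal sketch of the method's first step (actor and committee names invented): the two-mode affiliation network is built as a bipartite graph, projected onto actors, and actor centralities in the projection then serve as predictors of early access to informal information.

```python
# Hedged sketch: affiliation (bipartite) network of actors and committee
# meetings, its one-mode projection, and a centrality score of the kind
# used to predict access to informal information. Names are invented.
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
actors = ["lobbyist_A", "lobbyist_B", "official_C"]
events = ["committee_2009", "regional_meeting_2010", "brussels_hearing"]
B.add_nodes_from(actors, bipartite=0)
B.add_nodes_from(events, bipartite=1)
B.add_edges_from([("lobbyist_A", "committee_2009"),
                  ("lobbyist_A", "brussels_hearing"),
                  ("lobbyist_B", "regional_meeting_2010"),
                  ("official_C", "committee_2009"),
                  ("official_C", "regional_meeting_2010")])

# one-mode projection: actors tied by shared committee participation
G = bipartite.weighted_projected_graph(B, actors)
print(nx.closeness_centrality(G))   # proxy for early access to "leaks"
```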
The 5th Potsdam Latin Day took place in September 2009. It was part of the three-year Brandenburger Antike-Denkwerk (BrAnD) project, funded by the Robert Bosch Stiftung. The topic this time was "Macht und Ohnmacht der Worte – Gesellschaft und Rhetorik" (the power and powerlessness of words: society and rhetoric). Not only do the rhetorical theories and instructions still in use today originate from antiquity; the reciprocal relationship between society and rhetoric was also exemplified and conceived there. The aim was to familiarise participants with ancient theories of rhetoric, to examine ancient speeches with regard to the application of these theories and their effect, and to explore the possibility of applying them today. The volume collects the Latin Day lectures by Prof. Dr. P. Riemer and Prof. A. Fritsch, an account of the course of the whole project, and a selection of reports on the school projects.
This work deals with so-called non-canonical or unintegrated subordinate clauses. These clauses are characterised by the fact that they cannot be clearly classified as coordinated or subordinated using standard criteria (constituent status, verb-final order). The phenomenon of non-canonical subordinate clauses has been discussed in linguistics in general since the late seventies (Davison 1979) and arrived in German linguistics at the latest with Fabricius-Hansen (1992). Besides the mere identification of non-canonical clause complexes, a much-discussed issue is the construction of a classification covering at least some non-canonical complexes, as in Fabricius-Hansen (1992) and Reis (1997). The goal of this study is an exhaustive classification of the clause types in question. To this end, all potential subordination features are first examined in detail with the help of corpus data, since most previous studies on this topic take the same set of features for granted. It will turn out that only a small number of features are truly suited to provide unambiguous evidence about the quality of clause linkage. The taxonomy of German subordinate clauses subsequently established will manage with the postulation of a single non-canonical clause class. It is moreover able to accommodate the numerous exceptional cases; concretely, subordinate clauses that behave partly idiosyncratically due to certain properties can simply be incorporated into the proposed classification. In this context I will further show how a classification of subordinate clauses can also do justice to so-called secondary subordination features, although these do not behave uniformly across the individual clause classes. Finally, I will provide a theoretical modelling of the previously postulated taxonomy which, on the basis of HPSG and using feature inheritance, is capable of capturing all possible subordinate clause types.
Logging and large earthquakes are disturbances that may significantly affect hydrological and erosional processes and process rates, although in decisively different ways. Despite numerous studies that have documented the impacts of both deforestation and earthquakes on water and sediment fluxes, a number of details regarding the timing and type of de- and reforestation, seismic impacts on subsurface water fluxes, and the overall geomorphic work involved have remained unresolved. The main objective of this thesis is to address these shortcomings and to better understand and compare the hydrological and erosional process responses to such natural and man-made disturbances. To this end, south-central Chile provides an excellent natural laboratory owing to its high seismicity and the ongoing conversion of land into highly productive plantation forests. In this dissertation I combine paired catchment experiments, data analysis techniques, and physics-based modelling to investigate: 1) the effect of plantation forests on water resources, 2) the source and sink behavior of timber harvest areas in terms of overland flow generation and sediment fluxes, 3) geomorphic work and its efficiency as a function of seasonal logging, 4) possible hydrologic responses of the saturated zone to the 2010 Maule earthquake, and 5) responses of the vadose zone to this earthquake. Re 1) In order to quantify the hydrologic impact of plantation forests, it is fundamental to first establish their water balances. I show that tree species is not significant in this regard, i.e. Pinus radiata and Eucalyptus globulus do not trigger any decisively different hydrologic response. Instead, water consumption is more sensitive to soil-water supply under the local hydro-climatic conditions. Re 2) Contradictory opinions exist about whether timber harvest areas (THAs) generate or capture overland flow and sediment. Although THAs contribute significantly to hydrology and sediment transport because of their spatial extent, little is known about the hydrological and erosional processes occurring on them. I show that THAs may act as both sources and sinks for overland flow, which in turn intensifies surface erosion. Above a rainfall intensity of ~20 mm/h, which corresponds to <10% of all rainfall, THAs may generate runoff, whereas below that threshold they remain sinks. The overall contribution of Hortonian runoff is thus secondary considering the local rainfall regime. The bulk of both runoff and sediment is generated by Dunne (saturation-excess) overland flow. I also show that logging may increase infiltrability on THAs, which may cause an initial decrease in streamflow followed by an increase after the groundwater storage has been refilled. Re 3) I present changes in frequency-magnitude distributions following seasonal logging by applying Quantile Regression Forests at hitherto unprecedented detail. It is clearly the season that controls the hydro-geomorphic work efficiency of clear-cutting. Logging, particularly dry-season logging, caused a shift of work efficiency towards less flashy but more frequent moderate rainfall-runoff events. The sediment transport is dominated by Dunne overland flow, which is consistent with physics-based modelling using WASA-SED. Re 4) It is well accepted that earthquakes may affect hydrological processes in the saturated zone. Assuming such flow conditions, consolidation of saturated saprolitic material is one possible response. Consolidation raises the hydraulic gradients, which may explain the observed increase in discharge following earthquakes. In the process, squeezed-out water saturates the soil, which in turn increases the water accessible for plant transpiration. Post-seismic enhanced transpiration is reflected in the intensification of diurnal cycling. Re 5) Assuming unsaturated conditions, I present the first evidence that the vadose zone may also respond to seismic waves by releasing pore water, which in turn feeds groundwater reservoirs. Thereby, water tables along the valley bottoms are elevated, providing additional water resources to the riparian vegetation. By inverse modelling, the transient increase in transpiration is found to be 30-60%. Based on the data available, neither hypothesis can be directly tested. Finally, when comparing the hydrological and erosional effects of the Maule earthquake with the impact of planting exotic plantation forests, the overall observed earthquake effects are comparatively small and limited to short time scales.
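To make the threshold argument of Re 2) concrete, a toy calculation (invented numbers) of the share of rain falling at intensities above the ~20 mm/h threshold, i.e. under conditions where THAs can act as Hortonian runoff sources, might look like this:

```python
# Toy example with invented hourly rainfall intensities (mm/h); the study
# reports that intensities above ~20 mm/h account for <10% of all rainfall.
import numpy as np

intensity = np.array([2.0, 5.5, 12.0, 22.5, 3.1, 31.0, 8.4])  # mm/h, 1-h steps
depth = intensity * 1.0               # rainfall depth per 1-h step (mm)

threshold = 20.0                      # infiltration-excess threshold (mm/h)
source = intensity > threshold        # steps where a THA can generate runoff
share = depth[source].sum() / depth.sum()
print(f"share of rainfall above threshold: {share:.1%}")
```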
In the presence of a solid-liquid or liquid-air interface, bacteria can choose between a planktonic and a sessile lifestyle. Depending on environmental conditions, cells swimming in close proximity to the interface can irreversibly attach to the surface and grow into three-dimensional aggregates, where the majority of cells is sessile and embedded in an extracellular polymer matrix (biofilm). We used microfluidic tools and time-lapse microscopy to perform experiments with the polarly flagellated soil bacterium Pseudomonas putida (P. putida), a bacterial species that is able to form biofilms. We analyzed individual trajectories of swimming cells, both in the bulk fluid and in close proximity to a glass-liquid interface. Additionally, surface-related growth during the early phase of biofilm formation was investigated. In the bulk fluid, P. putida shows a typical bacterial swimming pattern of alternating periods of persistent displacement along a line (runs) and fast reorientation events (turns), and cells swim with an average speed of around 24 micrometers per second. We found that the distribution of turning angles is bimodal, with a dominating peak around 180 degrees. In approximately six out of ten turning events, the cell reverses its swimming direction. In addition, our analysis revealed that upon a reversal, the cell systematically changes its swimming speed by a factor of two on average. Based on the experimentally observed values of mean run time and rotational diffusion, we presented a model to describe the spreading of a population of cells by a run-reverse random walker with alternating speeds. We successfully recover the mean square displacement and, by an extended version of the model, also the negative dip in the directional autocorrelation function observed in the experiments. The analytical solution of the model demonstrates that alternating speeds enhance a cell's ability to explore its environment, as compared to a bacterium moving at a constant intermediate speed. Compared to the bulk fluid, for cells swimming near a solid boundary we observed an increase in swimming speed at distances below d = 5 micrometers and an increase in average angular velocity at distances below d = 4 micrometers. While the average speed was maximal, with an increase of around 15%, at a distance of d = 3 micrometers, the angular velocity was highest in closest proximity to the boundary at d = 1 micrometer, with an increase of around 90% as compared to the bulk fluid. To investigate the swimming behavior in a confinement between two solid boundaries, we developed an experimental setup to acquire three-dimensional trajectories using a piezo-driven objective mount coupled to a high-speed camera. Results on speed and angular velocity were consistent with motility statistics in the presence of a single boundary. Additionally, an analysis of the probability density revealed that the majority of cells accumulated near the upper and lower boundaries of the microchannel. The increase in angular velocity is consistent with previous studies, where bacteria near a solid boundary were shown to swim on circular trajectories, an effect which can be attributed to a wall-induced torque. The increase in speed at a distance of several times the size of the cell body, however, cannot be explained by existing theories, which either consider the drag increase on cell body and flagellum near a boundary (resistive force theory) or model the swimming microorganism by a multipole expansion to account for the flow-field interaction between cell and boundary. An accumulation of swimming bacteria near solid boundaries has been observed in similar experiments. Our results confirm that collisions with the surface play an important role and that hydrodynamic interactions alone cannot explain the steady-state accumulation of cells near the channel walls. Furthermore, we monitored the growth in cell numbers in the microchannel under nutrient-rich conditions. We observed that, after a lag time, initially isolated cells at the surface started to grow by division into colonies of increasing size, while coexisting with a comparably smaller number of swimming cells. After 5 h 50 min, we observed a sudden jump in the number of swimming cells, which was accompanied by a breakup of bigger clusters on the surface. After approximately 30 minutes, during which planktonic cells dominated in the microchannel, individual swimming cells reattached to the surface. We interpret this process as an emigration and recolonization event. A number of complementary experiments were performed to investigate the influence of collective effects or a depletion of the growth medium on the transition. Similar to earlier observations on another bacterium from the same family, we found that the release of cells to the swimming phase is most likely the result of an individual adaptation process, in which the synthesis of proteins for flagellar motility is upregulated after a number of division cycles at the surface.
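A minimal simulation sketch of the run-reverse random walker with alternating speeds described above. Parameters are loosely inspired by the reported statistics; for simplicity every turn is a reversal, whereas the experiments find roughly six in ten. The mean square displacement is estimated directly from the trajectory.

```python
# Run-reverse random walker in 2D with alternating speeds and rotational
# diffusion; a sketch of the model class described above, not its exact fit.
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=200.0, dt=0.01, tau_run=1.0, v=(16.0, 32.0), d_rot=0.1):
    pos, angle, mode = np.zeros(2), 0.0, 0
    traj = [pos.copy()]
    for _ in range(int(T / dt)):
        angle += np.sqrt(2 * d_rot * dt) * rng.normal()  # rotational diffusion
        if rng.random() < dt / tau_run:                  # Poissonian turn event
            angle += np.pi                               # reverse direction...
            mode ^= 1                                    # ...and switch speed
        pos = pos + v[mode] * dt * np.array([np.cos(angle), np.sin(angle)])
        traj.append(pos.copy())
    return np.array(traj)

traj = simulate()
for lag in (10, 100, 1000):                              # lag in time steps
    disp = traj[lag:] - traj[:-lag]
    print(lag * 0.01, "s  MSD =", (disp ** 2).sum(axis=1).mean(), "um^2")
```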
Requirements engineers have to elicit, document, and validate how stakeholders act and interact to achieve their common goals in collaborative scenarios. Only after gathering all information concerning who interacts with whom to do what and why can a software system be designed and realized that supports the stakeholders in doing their work. To capture and structure the requirements of different (groups of) stakeholders, scenario-based approaches have been widely used and investigated. Still, the elicitation and validation of requirements covering collaborative scenarios remain complicated, since the required information is highly intertwined, fragmented, and distributed over several stakeholders. Hence, it can only be elicited and validated collaboratively. In times of globally distributed companies, scheduling and conducting workshops with groups of stakeholders is usually not feasible due to budget and time constraints. Talking to individual stakeholders, on the other hand, is feasible but leads to fragmented and incomplete stakeholder scenarios. Going back and forth between different individual stakeholders to resolve this fragmentation and to explore uncovered alternatives is an error-prone, time-consuming, and expensive task for the requirements engineers. While formal modeling methods can be employed to automatically check and ensure the consistency of stakeholder scenarios, such methods introduce additional overhead, since their formal notations have to be explained in each interaction between stakeholders and requirements engineers. Tangible prototypes, as they are used in other disciplines such as design, on the other hand, allow designers to validate and iterate concepts and requirements with stakeholders at low cost. This thesis proposes a model-based approach for prototyping formal behavioral specifications of stakeholders who are involved in collaborative scenarios. By simulating and animating such specifications in a remote, domain-specific visualization, stakeholders can experience and validate the scenarios captured so far, i.e., how other stakeholders act and react. This interactive scenario simulation is referred to as a model-based virtual prototype. Moreover, by observing how stakeholders interact with a virtual prototype of their collaborative scenarios, formal behavioral specifications can be automatically derived which complete the otherwise fragmented scenarios. This, in turn, enables requirements engineers to elicit and validate collaborative scenarios in individual stakeholder sessions – decoupled, since stakeholders can participate remotely and are not forced to be available for a joint session at the same time. This thesis discusses and evaluates the feasibility, understandability, and modifiability of model-based virtual prototypes. Similarly to how physical prototypes are perceived, the presented approach brings behavioral models closer to being tangible for stakeholders and, moreover, combines the advantages of joint stakeholder sessions and decoupled sessions.
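To make the idea of an executable behavioral specification concrete, the following sketch simulates a toy collaborative scenario as message-passing between role specifications; the roles, messages, and transition rules are invented placeholders, not the formalism or tooling developed in the thesis.

```python
from collections import deque

# Hypothetical behavioral specifications: per role, a mapping from a
# received message to (next state, message to emit). Purely illustrative.
SPEC = {
    "clerk":   {"request":  ("processing", ("manager", "forward")),
                "approval": ("done",       None)},
    "manager": {"forward":  ("reviewing",  ("clerk", "approval"))},
}

def simulate(start_role, start_message):
    """Replay one collaborative scenario and print each role's reaction."""
    queue = deque([(start_role, start_message)])
    while queue:
        role, msg = queue.popleft()
        state, emit = SPEC[role].get(msg, ("idle", None))
        print(f"{role}: received {msg!r} -> state {state!r}")
        if emit is not None:
            queue.append(emit)

simulate("clerk", "request")
```

A virtual prototype in the sense of the thesis would render such a simulation in a domain-specific visualization, so that a stakeholder experiences how the other roles act and react rather than reading the specification itself.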
This thesis addresses the controversial discourse on the protection of copyright on the one hand and free access to resources via the internet on the other. On the basis of a corpus of spoken and written text data, three central goals are pursued: First, the identity constructions of the participants within the chosen segment of the discourse are analysed. Second, connections between the micro and macro levels are examined, i.e. between identity construction at the local level of conversation and identity construction at the more global level of the written data. Third, the analytical instruments drawn from different linguistic disciplines are evaluated with regard to their applicability in a study spanning both levels. In its methodology, the thesis thus moves between two research perspectives of communication linguistics, conversation analysis and discourse analysis, which have so far tended to follow separate paths in German-language research.
In this work, thermosensitive hydrogels with tunable thermo-mechanical properties were synthesized. Generally, the thermal transition of thermosensitive hydrogels is based on either a lower critical solution temperature (LCST) or a critical micelle concentration/temperature (CMC/CMT). The temperature-dependent transition from sol to gel with a large volume change may be seen in the former type of thermosensitive hydrogel and is negligible in CMC/CMT-dependent systems. The change in volume leads to exclusion of water molecules, resulting in shrinking and stiffening of the system above the transition temperature. The volume change can be undesirable when cells are to be incorporated into the system. The gelation in the latter case is mainly driven by micelle formation above the transition temperature and by further colloidal packing of micelles around the gelation temperature. As the gelation mainly depends on the concentration of polymer, such a system can undergo fast dissolution upon addition of solvent. Here, it was envisioned to realize a thermosensitive gel based on two components: one responsible for a change in mechanical properties through the formation of reversible netpoints upon heating without volume change, and a second component conferring degradability on demand. As the first component, an ABA triblock copolymer with thermosensitive properties (here: poly(ethylene glycol)-b-poly(propylene glycol)-b-poly(ethylene glycol), PEPE), whose sol-gel transition on the molecular level is based on micellization and colloidal jamming of the formed micelles, was chosen, while biopolymers were employed as the additional macromolecular component crosslinking the formed micelles. The synthesis of the hydrogels was performed in two ways: either by physical mixing of compounds showing electrostatic interactions, or by covalent coupling of the components. Biopolymers (here: the polysaccharides hyaluronic acid (HA), chondroitin sulphate, and pectin, as well as the protein gelatin) were employed as the additional macromolecular crosslinker to simultaneously incorporate an enzyme responsiveness into the systems. In order to obtain strong ionic/electrostatic interactions between PEPE and the polysaccharides, PEPE was aminated to yield predominantly mono- or di-substituted PEPEs. The systems based on aminated PEPE physically mixed with HA showed an enhancement of mechanical properties such as the elastic modulus (G′) and viscous modulus (G′′) and a decrease of the gelation temperature (Tgel) compared to PEPE alone at the same concentration. Furthermore, by varying the amount of aminated PEPE in the composition, the Tgel of the system could be tailored to 27-36 °C. The physical mixtures of HA with di-amino PEPE (HA·di-PEPE) showed higher elastic moduli G′ and greater stability towards dissolution compared to the physical mixtures of HA with mono-amino PEPE (HA·mono-PEPE). This indicates a strong influence of the electrostatic interaction between the –COOH groups of HA and the –NH2 groups of PEPE. The physical properties of the HA·di-PEPE mixtures compare favourably with those of the human vitreous body: the systems are highly transparent and have a comparable refractive index and viscosity. Therefore, this material was tested for a potential biological application and was shown to be non-cytotoxic in eluate and direct-contact tests. The material will be investigated in further studies as a vitreous body substitute.
In addition, enzymatic degradation of these hydrogels was performed using hyaluronidase to specifically degrade the HA. During the degradation of these hydrogels, an increase in Tgel was observed along with a decrease in the mechanical properties. The aminated PEPEs were further utilised for covalent coupling to pectin and chondroitin sulphate using EDC as a coupling agent. Here, it was possible to adjust the Tgel (28-33 °C) by varying the grafting density of PEPE on the biopolymer. The grafting of PEPE to pectin enhanced the thermal stability of the hydrogel. The Pec-g-PEPE hydrogels were degradable by enzymes, with a slight increase in Tgel and a decrease in G′ over the degradation time. The covalent coupling of aminated PEPE to HA was performed with DMTMM as a coupling agent. This method of coupling was observed to be more efficient than the EDC-mediated coupling. Moreover, the purification of the final product was performed by ultrafiltration, which efficiently removed the unreacted PEPE from the final product; this was not sufficiently achieved by dialysis. Interestingly, the final products of these reactions were in a gel state and showed an enhancement of the mechanical properties at very low concentrations (2.5 wt%) near body temperature. In these hydrogels, the resulting increase in mechanical properties was due to the combined effect of micelle packing (physical interactions) by PEPE and covalent netpoints between PEPE and HA. PEPE alone, or physical mixtures of the same components, did not show thermosensitive behavior at concentrations below 16 wt%. These thermosensitive hydrogels also showed on-demand solubilisation by enzymatic degradation. The concept of thermosensitivity was introduced into 3D-architectured porous hydrogels by covalently grafting PEPE to gelatin and crosslinking with LDI. Here, the grafted PEPE resulted in a decrease in helix formation in the gelatin chains, and after fixing the gelatin chains by crosslinking, the system showed an enhancement of the mechanical properties upon heating (34-42 °C) that was reversible upon cooling. A possible explanation of the reversible changes in mechanical properties is the strong physical interaction between micelles formed by PEPE covalently linked to gelatin. Above the transition temperature, the local properties were evaluated by AFM indentation of the pore walls, in which an increase in elastic modulus (E) at the higher temperature (37 °C) was observed. The water uptake of these thermosensitive architectured porous hydrogels was also influenced by PEPE and temperature (25 °C and 37 °C), showing lower water uptake at the higher temperature and vice versa. In addition, due to the lower water uptake at high temperature, the rate of hydrolytic degradation of these systems was found to be decreased compared to pure gelatin architectured porous hydrogels. Such temperature-sensitive architectured porous hydrogels could be important for, e.g., stem cell culturing, cell differentiation, and guided cell migration. Altogether, it was possible to demonstrate that the crosslinking of micelles by a macromolecular crosslinker increased the shear moduli, the viscosity, and the stability towards dissolution of CMC-based gels. This effect could likewise be realized by covalent or non-covalent mechanisms such as micelle interactions, physical interactions of gelatin chains, and physical interactions between gelatin chains and micelles.
Moreover, the covalent grafting of PEPE creates additional netpoints, which also influence the mechanical properties of thermosensitive architectured porous hydrogels. Overall, the chemical crosslinks and the reversible physical interactions in such thermosensitive architectured porous hydrogels gave control over the mechanical properties of these complex systems. Hydrogels showing a change of mechanical properties without a sol-gel transition or volume change are especially interesting for further studies of cell proliferation and differentiation.
The goal of this work was to explore two different synthesis pathways based on green chemistry. The first part of this thesis focuses on the use of the urea-glass route towards single-phase manganese nitride and manganese nitride/oxide nano-composites embedded in carbon, while the second part focuses on the use of the “saccharide route” (using cellulose, sucrose, glucose, and lignin as carbon sources) towards metal (Ni(0)), metal alloy (Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5, Cu0.5Ni0.5 and W0.15Ni0.85), and ternary carbide (Mn0.75Fe2.25C) nanoparticles embedded in carbon. With battery applications in mind, MnN0.43 nanoparticles surrounded by a graphitic shell and embedded in carbon with a high surface area (79 m²/g) were synthesized following a previously established route. The comparison of the material characteristics before and after discharge showed no remarkable difference in composition and only slight morphological differences, meaning the particles are stable but agglomerate. The graphitic shell contributes to the resistance of the material and leads to a good cyclic stability of 230 mAh/g over 140 cycles after the first charge/discharge, with coulombic efficiencies close to 100%. Due to the low voltage towards Li/Li+ and the low polarization, it might be an attractive anode material for lithium-ion batteries. However, the capacity is still noticeably lower than the theoretical value for MnN0.43. A mixture of MnN0.43 and MnO nanoparticles embedded in carbon (surface area 93 m²/g) improved the cyclic stability to over 160 cycles with a capacity of 811 mAh/g, which is considerably higher than the capacity of the conventional anode material graphite (372 mAh/g). This nano-composite seems to agglomerate less during discharge. Interestingly, although the capacity is much higher than that of the single-phase manganese nitride, the nano-composite seems to contain only MnN0.43 nanoparticles after discharge, with no oxide phase to be found. Concerning catalysis applications, different metal, metal alloy, and metal carbide nanoparticles were synthesized using the saccharide route. At first, systems that had already been investigated before, namely Pd0.9Ni0.1, Pd0.5Ni0.5, Fe0.5Ni0.5 and Mn0.75Fe2.25C with cellulose as the carbon source, were prepared and tested in an alkylation reaction of toluene with benzyl chloride. Unexpectedly, the metal alloys did not show any catalytic activity, but the ternary carbide Mn0.75Fe2.25C showed good catalytic activity, with 98% conversion after a reaction time of 9 hours at 110 °C. In a second step, the saccharide route was modified towards other carbon sources and carbon-to-metal ratios in order to improve the homogeneity of the samples and the accessibility of the particle surfaces. The carbon sources sucrose and glucose share the basic carbohydrate structure of cellulose but have a reduced (polymeric) chain length. Indeed, cellulose could be successfully replaced by sucrose and glucose. A lower carbon-to-metal ratio was found to influence the size, homogeneity, and accessibility (as evidenced by TEM) of the samples. Since sucrose is a foodstuff, glucose is the better choice as a carbon source. Using glucose, the synthesis of Cu0.5Ni0.5 and W0.15Ni0.85 nano-composites was also possible, although the latter was never obtained as a pure phase.
These alloy nano-composites, along with Ni(0) nanoparticles also prepared with glucose, were tested for their catalytic activity in the reduction of phenylacetylene. The results obtained suggest that any (poly)saccharide, including lignin, could be used as a carbon source. The Ni(0) nano-composites prepared with lignin as the carbon source were tested, along with those prepared with cellulose and sucrose, for their catalytic activity in the transfer hydrogenation of nitrobenzene (with the results compared to exposed nickel nanoparticles and nickel supported on carbon), leading to very promising results. Based on the urea-glass route and the saccharide route, simple equipment, and transition metals, it was possible to achieve a one-pot synthesis with scale-up possibilities towards new materials that can be applied in catalysis and battery systems.
The Prussian geologist Leopold von Buch was a lifelong friend of Alexander von Humboldt and had a significant influence on Humboldt's geological ideas. In a talk held in Berlin in 1831, which is published here for the first time, von Buch presented the Duria Antiquior of 1830 by the English geologist Henry De La Beche. The Duria Antiquior is widely regarded as the earliest depiction of a scene of prehistoric life from deep time. The print raised new questions about the processes of geohistorical change. The talk reveals that Leopold von Buch was a true scientist of the Romantic Age: his descriptions of geohistorical organismic transformations draw on pictorial examples of organismic transformation from classical literature. The talk also illustrates how influential English geologists were for geohistorical reconstructions in Germany.
This thesis gives formal definitions of discourse-givenness, coreference, and reference, and reports on experiments with computational models of the discourse-givenness of noun phrases for English and German. The definitions are based on Bach's (1987) work on reference, Kibble and van Deemter's (2000) work on coreference, and Kamp and Reyle's (1993) Discourse Representation Theory. For the experiments, the following corpora with coreference annotation were used: MUC-7, OntoNotes, and ARRAU for English, and TueBa-D/Z for German. The classification algorithms comprise J48 decision trees, the rule-based learner Ripper, and linear support vector machines. New features are suggested, representing the noun phrase's specificity as well as its context, which lead to a significant improvement in classification quality.
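As an illustration of the classification setup, the sketch below trains one of the named classifier types, a linear support vector machine, on hypothetical noun-phrase feature vectors; the feature names, values, and labels are invented placeholders, not the actual features or corpus data used in the thesis.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented toy feature dicts for noun phrases; the thesis's real features
# encode e.g. the NP's specificity and its context.
train_X = [
    {"head_pos": "PRP", "determiner": "none", "prev_mention": True},
    {"head_pos": "NN",  "determiner": "a",    "prev_mention": False},
    {"head_pos": "NN",  "determiner": "the",  "prev_mention": True},
    {"head_pos": "NNP", "determiner": "none", "prev_mention": False},
]
train_y = ["given", "new", "given", "new"]  # discourse-givenness labels

model = make_pipeline(DictVectorizer(), LinearSVC())
model.fit(train_X, train_y)
print(model.predict([{"head_pos": "PRP", "determiner": "none",
                      "prev_mention": True}]))
```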
Measuring the metabolite profile of plants can be a powerful phenotyping tool, but changes in metabolite pool sizes are often difficult to interpret, not least because metabolite pool sizes may stay constant while carbon flows are altered, and vice versa. Hence, measuring the carbon allocation of metabolites enables a better understanding of the metabolic phenotype. The main challenge of such measurements is the in vivo integration of a stable or radioactive label into a plant without perturbing the system. To follow the carbon flow of a precursor metabolite, a method is developed in this work that is based on metabolite profiling of primary metabolites measured by gas chromatography coupled to mass spectrometry (Wagner et al. 2003; Erban et al. 2007; Dethloff et al. submitted). This method generates stable isotope profiling data in addition to conventional metabolite profiling data. To allow the feeding of a 13C sucrose solution into the plant, a petiole and a hypocotyl feeding assay are developed. To enable the processing of large numbers of single-leaf samples, their preparation and extraction are simplified and optimised. The metabolite profiles of primary metabolites are measured, and a simple relative calculation is performed to gain information on carbon allocation from 13C sucrose. This method is tested by examining single leaves of one rosette at different developmental stages, both metabolically and with regard to carbon allocation from 13C sucrose. It is revealed that some metabolite pool sizes and 13C pools are tightly associated with relative leaf growth, i.e. with the developmental stage of the leaf. Fumaric acid turns out to be the most interesting candidate for further studies because its pool size and 13C pool diverge considerably. In addition, the analyses are also performed on plants grown in the cold, and the initial results show a different metabolite pool size pattern across single leaves of one Arabidopsis rosette compared to plants grown at normal temperatures. Lastly, in situ expression of REIL genes in the cold is examined using promoter-GUS plants. Initial results suggest that single-leaf metabolite profiles of reil2 differ from those of the WT.
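The "simple relative calculation" could, for instance, take the form of a mean 13C enrichment computed from the isotopologue intensities of each metabolite; the sketch below shows such a calculation with invented numbers and without natural-abundance correction, since the abstract does not spell out the exact formula used.

```python
# Minimal sketch: relative 13C enrichment of a metabolite from the GC-MS
# isotopologue envelope (intensities of M+0, M+1, ..., M+n). The values
# and the omission of natural-abundance correction are simplifications.
def relative_13c_enrichment(intensities):
    """Fraction of carbon atoms carrying label, averaged over the pool."""
    total = sum(intensities)
    n_carbons = len(intensities) - 1
    labelled = sum(i * x for i, x in enumerate(intensities))
    return labelled / (n_carbons * total)

# e.g. fumarate (4 carbons): M+0..M+4 intensities after 13C-sucrose feeding
print(relative_13c_enrichment([0.55, 0.20, 0.12, 0.08, 0.05]))  # ~0.22
```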
Intensive research over the past decades has led to a very detailed characterisation of the mammalian taste system. Nevertheless, important questions have remained unanswered with the methods employed so far. One of these questions concerns the discrimination of bitter compounds. The number of substances that taste bitter to humans and elicit innate aversive behaviour in animals runs into the thousands. These substances differ greatly both in their chemical structure and in their effect on the organism. While many bitter compounds are potent toxins, others are harmless in the amounts ingested with food or even have beneficial effects on the body. Being able to distinguish between these groups would be advantageous for an animal; however, no such mechanism is known in mammals. The aim of this work was to investigate the processing of taste information in the first relay station of the taste pathway in the mouse brain, the nucleus tractus solitarii (NTS), with particular attention to the question of the discrimination of different bitter compounds. To this end, a new method for studying the taste system was established that avoids the disadvantages of previously available methods and combines their advantages. The Arc catFISH method (cellular compartment analysis of temporal activity by fluorescent in situ hybridization), which allows the responses of large groups of neurons to two stimuli to be characterised, was applied to investigate taste-processing cells in the NTS. In the course of this project, stimulus-induced Arc expression in the NTS was demonstrated for the first time. The first results revealed that Arc expression in the NTS occurs specifically after stimulation with bitter compounds and that the Arc-expressing neurons are located predominantly in the gustatory part of the NTS. This indicates that Arc expression is a marker for bitter-processing gustatory neurons in the NTS. After two successive stimulations with bitter substances, overlapping but distinct populations of neurons were observed that responded differently to the three bitter substances used, cycloheximide, quinine hydrochloride, and cucurbitacin I. These neurons are presumably involved in the control of protective reflexes and could thus form the basis for divergent behaviour towards different bitter compounds.
An important strand of research has investigated the question of how children acquire a morphological system, using offline data from spontaneous or elicited child language. Most of these studies have found dissociations in how children apply regular and irregular inflection (Marcus et al. 1992, Weyerts & Clahsen 1994, Rothweiler & Clahsen 1993). These studies have considerably deepened our understanding of how linguistic knowledge is acquired and organised in the human mind. Their methodological procedures, however, do not involve measurements of how children process morphologically complex forms in real time. To date, little is known about how children process inflected word forms. The aim of this study is to investigate children's processing of inflected words in a series of online reaction-time experiments. We used a cross-modal priming experiment to test for decompositional effects on the central level, and a speeded production task and a lexical decision task to test for frequency effects on the access level in production and recognition. Children's behaviour was compared to adults' behaviour for three participle types (-t participles, e.g. getanzt 'danced', vs. -n participles with stem change, e.g. gebrochen 'broken', vs. -n participles without stem change, e.g. geschlafen 'slept'). For the central level, the results indicate that -t participles, but not -n participles, have decomposed representations. For the access level, the results indicate that -t participles are represented according to their morphemes and additionally as full forms, at least from the age of nine years onwards (Pinker 1999, Clahsen et al. 2004). Further evidence suggested that -n participles are represented as full-form entries on the access level and that -n participles without stem change may encode morphological structure (cf. Clahsen et al. 2003). Our data also suggest that processing strategies for -t participles are applied differently in recognition and production. These results provide evidence that children (within the age range tested) employ the same mechanisms for processing participles as adults. The child lexicon grows as children form additional full-form representations for -t participles on the access level and elaborate their full-form lexical representations of -n participles on the central level. These results are consistent with processing as explained by dual-system theories.
In this work, the development of temperature- and protein-responsive sensor materials based on biocompatible inverse hydrogel opals (IHOs) is presented. With these materials, large biomolecules can be specifically recognised and the binding event visualised. The preparation of the IHOs followed a template process, in which monodisperse silica particles were first vertically deposited onto glass slides. The obtained colloidal crystals, with a thickness of 5 μm, displayed opalescent reflections because of the uniform alignment of the colloids. As a second step, the template was embedded in a matrix consisting of a biocompatible, thermoresponsive hydrogel. The comonomers were selected from the family of oligo(ethylene glycol) methacrylates. The monomer solution was injected into a polymerisation mould containing the colloidal crystal as a template. The space between the template particles was filled with the monomer solution, and the hydrogel was cured via UV polymerisation. The particles were then chemically etched, which resulted in a porous inner structure. The uniform alignment of the pores, and therefore the opalescent reflection, was maintained, so these systems are denoted inverse hydrogel opals. A pore diameter of several hundred nanometres, as well as interconnections between the pores, facilitates the diffusion of larger (bio)molecules, which has been a challenge in previously presented systems. The copolymer composition was chosen to result in a hydrogel collapse above 35 °C. All hydrogels showed pronounced swelling in water below the critical temperature. The incorporation of a reactive monomer with hydroxyl groups provided a potential coupling group for the introduction of recognition units for analytes, e.g. proteins. As a test system, biotin as a recognition unit for avidin was coupled to the IHO via polymer-analogous Steglich esterification. The amount of accessible biotin was quantified with a colorimetric binding assay. When avidin was added to the biotinylated IHO, the wavelength of the opalescent reflection shifted significantly, and the binding event was thereby visualised. This effect is based on the change in the swelling behaviour of the hydrogel after binding of the hydrophilic avidin, which is amplified by the thermoresponsive nature of the hydrogel. A swelling or shrinking of the pores induces a change in the distance between the crystal planes, which are responsible for the colour of the reflection. These findings open up the possibility of creating sensor materials for other biomolecules in the size range of avidin.
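The link between pore spacing and reflection colour can be made explicit with the Bragg-Snell relation commonly applied to opal films (a textbook relation, not quoted in the abstract itself):

\[ \lambda_{\max} = 2\, d_{111} \sqrt{n_{\mathrm{eff}}^{2} - \sin^{2}\theta}, \]

where \( d_{111} \) is the spacing of the (111) planes of the colloidal crystal, \( n_{\mathrm{eff}} \) the effective refractive index of hydrogel and pore content, and \( \theta \) the angle of incidence; swelling of the hydrogel increases \( d_{111} \) and thus red-shifts the reflection, while shrinking blue-shifts it.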
Landslides are one of the biggest natural hazards in Georgia, a mountainous country in the Caucasus. So far, no systematic monitoring and analysis of the dynamics of landslides in Georgia has been carried out. Especially because landslides are triggered by extrinsic processes, analysing landslides together with precipitation and earthquakes is challenging. In this thesis I describe the advantages and limits of remote sensing for detecting and better understanding the nature of landslides in Georgia. The thesis is written in cumulative form, comprising a general introduction, three manuscripts, and a summary and outlook chapter. In the present work, I measure the surface displacement due to active landslides with different interferometric synthetic aperture radar (InSAR) methods. Slow landslides (several cm per year) are well detectable with two-pass interferometry. At the same time, extremely slow landslides (several mm per year) could be detected only with time-series InSAR techniques. I exemplify the success of InSAR techniques by presenting hitherto unknown landslides located in the central part of Georgia. Both the landslide extent and the displacement rate are quantified. Further, to determine a possible depth and position of potential sliding planes, inverse models were developed. Inverse modelling searches for source parameters that can reproduce the observed displacement distribution. I also empirically estimate the volume of the investigated landslide using displacement distributions derived from InSAR combined with morphology from aerial photography. I adapted a volume formula to our case, and also combined available seismicity and precipitation data to analyse potential triggering factors. A governing question was: what causes the landslide acceleration observed in the InSAR data? The investigated area (central Georgia) is seismically highly active. As an additional product of the InSAR data analysis, a deformation area associated with the 7th September Mw = 6.0 earthquake was found. Evidence of surface ruptures directly associated with the earthquake could not be found in the field; however, during and after the earthquake new landslides were observed. The thesis highlights that deformation measured by InSAR may help to map areas prone to earthquake-triggered landslides, potentially providing a technique that is of relevance for country-wide landslide monitoring, especially as new satellite sensors will emerge in the coming years.
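For reference, in two-pass interferometry the line-of-sight displacement follows from the unwrapped interferometric phase via the standard relation (a textbook formula, stated here up to sign convention):

\[ d_{\mathrm{LOS}} = -\frac{\lambda}{4\pi}\, \Delta\varphi, \]

where \( \lambda \) is the radar wavelength and \( \Delta\varphi \) the unwrapped phase difference after removal of topographic and orbital contributions; one full fringe therefore corresponds to \( \lambda/2 \) of line-of-sight motion, which is what makes centimetre-per-year landslide rates detectable.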
Passive plant actuators have fascinated researchers in the fields of botany and structural biology for at least a century. To date, the most investigated tissue types in plant and artificial passive actuators are fibre-reinforced composites (and multilayered assemblies thereof), in which stiff, almost inextensible cellulose microfibrils direct the otherwise isotropic swelling of a matrix. In addition, Nature provides examples of actuating systems based on lignified, low-swelling cellular solids enclosing a high-swelling cellulosic phase. This is the case in the Delosperma nakurense seed capsule, in which a specialized tissue promotes the reversible opening of the capsule upon wetting. This tissue has a diamond-shaped honeycomb microstructure characterized by high geometrical anisotropy: when the cellulosic phase swells inside this constraining structure, the tissue expands up to four times in one principal direction while maintaining its original dimension in the other. Inspired by the example of Delosperma nakurense, in this thesis we analyze the role of the architecture of 2D cellular solids as models for natural hygromorphs. To start off, we consider a simple fluid pressure acting in the cells and assess the influence of several architectural parameters on their mechanical actuation. Since internal pressurization is a configurational type of load (that is, the load direction is not fixed but "follows" the structure as it deforms), it results in the cellular structure acquiring a "spontaneous" shape. This shape is independent of the load and depends only on the architectural characteristics of the cells making up the structure itself. Whereas regular convex-tiled cellular solids (such as hexagonal, triangular, or square lattices) deform isotropically upon pressurization, we show through finite element simulations that large expansions can be achieved in each individual cell by introducing anisotropic, non-convex, re-entrant tilings. The influence of geometrical anisotropy on the expansion behaviour of a diamond-shaped honeycomb is assessed by FEM calculations and a Born lattice approximation. We found that anisotropic expansions (eigenstrains) comparable to those observed in the keel tissue of Delosperma nakurense are possible. In particular, these depend on the relative contributions of bending and stretching of the beams building up the honeycomb. Moreover, by varying the walls' Young's modulus E and the internal pressure p, we found that both the eigenstrains and the 2D elastic moduli scale with the ratio p/E. This outlines the potential of such pressurized structures as soft actuators. This approach was extended by considering several 2D cellular solids based on two types of non-convex cells. Each honeycomb is built as a lattice made of only one non-convex cell. Compared to usual honeycombs, these lattices have kinked walls between neighbouring cells, which offers a hidden length scale allowing large directed deformations. By comparing the area expansion in all lattices, we were able to show that less convex cells are prone to achieve larger area expansions, but the direction in which the material expands is variable and depends on the local cell connectivity. This has repercussions both at the macroscopic (lattice) and microscopic (cell) scales.
At the macroscopic scale, these non-convex lattices can show large anisotropic principal expansions (similar to the diamond-shaped honeycomb), perfectly isotropic expansions, large shearing deformations, or a mixed behaviour. Moreover, lattices that expand similarly at the macroscopic scale can show quite different microscopic deformation patterns, including zig-zag motions and radical changes of the initial cell shape. Depending on the lattice architecture, the microscopic deformations of the individual cells can be equal or not, so that they can add up or mutually compensate and hence give rise to the aforementioned variety of macroscopic behaviours. Interestingly, simple geometrical arguments involving the undeformed cell shape and its local connectivity make it possible to predict the results of the FE simulations. Motivated by the results of the simulations, we also created experimental 3D-printed models of such actuating structures. When swollen, the models undergo substantial deformation, with deformation patterns qualitatively following those predicted by the simulations. This work highlights how the internal architecture of a swellable cellular solid can lead to complex shape changes, which may be useful in the fields of soft robotics and morphing structures.
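The reported p/E scaling can be motivated by a simple dimensional argument (a sketch, not reproduced from the thesis): for a linearly elastic lattice whose only load is the internal pressure p, the only dimensionless load parameter is p/E, so the pressure-induced eigenstrain of a cell must take the form

\[ \varepsilon^{*}_{ij} = f_{ij}(\text{cell geometry}) \, \frac{p}{E}, \]

with a dimensionless, tensor-valued function \( f_{ij} \) of the architecture alone, and analogously for the pressure-dependent part of the 2D elastic moduli.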