Geometric generalization is a fundamental concept in the digital mapping process. An increasing amount of spatial data is provided on the web, along with a growing range of tools to process it. This jABC workflow automatically tests web-based generalization services such as mapshaper.org: it executes the service's functionality, overlays the datasets from before and after the transformation, and renders the result visually as a .tif file. Web services and command-line tools are combined into an environment in which ESRI shapefiles can be uploaded, processed by a chosen generalization service, and finally displayed in IrfanView.
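The simplification at the heart of such generalization services can be illustrated with the classic Douglas–Peucker algorithm. This is a minimal, self-contained sketch of one common line-simplification method; it makes no claim about the internals of mapshaper.org, which also offers other algorithms such as Visvalingam:

```python
import math

def _perp_dist(p, a, b):
    # Distance from point p to the segment a-b (clamped projection).
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def douglas_peucker(points, tolerance):
    # Keep the point farthest from the chord if it exceeds the tolerance,
    # then recurse on both halves; otherwise collapse to the chord.
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _perp_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= tolerance:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:idx + 1], tolerance)
    right = douglas_peucker(points[idx:], tolerance)
    return left[:-1] + right

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
simplified = douglas_peucker(line, 1.0)
print(len(line), "->", len(simplified))
```

Comparing vertex counts (and overlaying both geometries) before and after the transformation is exactly the kind of check the workflow automates.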
In the geoinformatics field, remote sensing data are often used to analyze the characteristics of the investigation area. This includes digital elevation models (DEMs), simple raster grids whose grey values represent the respective elevations. The project CREADED presented in this paper aims at making these monochrome raster images more expressive and more intuitively interpretable. For this purpose, an executable interactive model for creating a colored and relief-shaded DEM has been designed using the jABC framework. The process is based on standard jABC SIBs and on SIBs that provide specific GIS functions, which are available as web services, command-line tools and scripts.
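The relief-shading step can be illustrated with a minimal hillshade computation using Horn's finite-difference slope estimate and a light source from the northwest. This is a sketch under those assumptions, not the actual implementation behind the GIS SIBs:

```python
import math

def hillshade(z, cell, azimuth_deg=315.0, altitude_deg=45.0):
    """Relief shading (0-255) for the interior cells of a small DEM grid.
    Horn's 3x3 slope/aspect estimate; a sketch, not a full GIS routine."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # compass -> math convention
    alt = math.radians(altitude_deg)
    rows, cols = len(z), len(z[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            a, b_, cc = z[r-1][c-1], z[r-1][c], z[r-1][c+1]
            d, f = z[r][c-1], z[r][c+1]
            g, h, i = z[r+1][c-1], z[r+1][c], z[r+1][c+1]
            dzdx = ((cc + 2*f + i) - (a + 2*d + g)) / (8 * cell)
            dzdy = ((g + 2*h + i) - (a + 2*b_ + cc)) / (8 * cell)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            shade = (math.sin(alt) * math.cos(slope) +
                     math.cos(alt) * math.sin(slope) * math.cos(az - aspect))
            out[r][c] = int(max(0.0, shade) * 255)
    return out

dem = [[10, 10, 10], [20, 20, 20], [30, 30, 30]]  # a simple elevation ramp
print(hillshade(dem, cell=10.0)[1][1])
```

Colorizing the grid by elevation class and multiplying in the shade values then yields the colored, relief-shaded DEM the workflow produces.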
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. By implementing a re-executable model, the manual effort of a multi-criteria site analysis can be reduced. The aim is to assess how typical geoprocessing tools of geographic information systems (GIS) are shifting from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the "Center for Spatial Information Science and Systems" (CSISS). This paper discusses the effort, benefits and problems associated with the use of these web services.
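At its core, a multi-criteria site analysis is a boolean overlay of criteria. The following sketch uses made-up criteria and thresholds (wind speed, distance to settlements, protected-area status) purely for illustration; the paper's actual analysis runs on vector data via CSISS web services:

```python
# Hypothetical criteria and thresholds, for illustration only.
def suitable(cell, min_wind=6.0, min_dist_settlement=800.0):
    # A cell qualifies only if every criterion is met (boolean overlay).
    return (cell["wind_ms"] >= min_wind
            and cell["dist_settlement_m"] >= min_dist_settlement
            and not cell["protected_area"])

cells = [
    {"id": 1, "wind_ms": 7.2, "dist_settlement_m": 1200.0, "protected_area": False},
    {"id": 2, "wind_ms": 5.1, "dist_settlement_m": 2000.0, "protected_area": False},
    {"id": 3, "wind_ms": 8.0, "dist_settlement_m": 500.0,  "protected_area": False},
    {"id": 4, "wind_ms": 7.5, "dist_settlement_m": 1500.0, "protected_area": True},
]
candidates = [c["id"] for c in cells if suitable(c)]
print(candidates)  # [1]
```

In the workflow, each criterion corresponds to a geoprocessing service call (buffering, clipping, intersection) rather than a local predicate.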
Location analyses are among the most common tasks when working with spatial data and geographic information systems. Automating the most frequently used procedures is therefore an important aspect of improving their usability. In this context, this project aims to design and implement a workflow providing some basic tools for a location analysis. For the implementation with jABC, the workflow was applied to the problem of finding a suitable location for placing an artificial reef. For this analysis three parameters (bathymetry, slope and grain size of the ground material) were taken into account, processed, and visualized with the Generic Mapping Tools (GMT), which were integrated into the workflow as jETI-SIBs. The implemented workflow thereby showed that combining jABC with GMT yields a user-centric, user-friendly tool with high-quality cartographic output.
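The thresholding of the three parameters can be sketched as follows. The depth range, slope limit and grain-size classes below are illustrative assumptions, not the values used in the project:

```python
import math

def slope_deg(depth_a, depth_b, spacing_m):
    # Slope along a profile from two neighbouring bathymetry samples.
    return math.degrees(math.atan(abs(depth_b - depth_a) / spacing_m))

def reef_ok(depth_m, slope, grain,
            depth_range=(-30.0, -10.0), max_slope=5.0, grains=("sand", "gravel")):
    # All three criteria (depth, slope, substrate) must hold at once.
    lo, hi = depth_range
    return lo <= depth_m <= hi and slope <= max_slope and grain in grains

s = slope_deg(-20.0, -21.0, spacing_m=100.0)   # gentle gradient
print(round(s, 2), reef_ok(-20.0, s, "sand"))
```

In the actual workflow the gridding, slope computation and map rendering are delegated to GMT modules wrapped as jETI-SIBs.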
Creation of topographic maps
(2014)
GraffDok is an application that helps maintain an overview of graffiti sprayed across a city. At the time of writing it targets vandalism tags rather than elaborate photographic graffiti in an underpass. Faced with hundreds of tags and scribbles on monuments, house walls, and the like, it is worthwhile not only to record them in writing but also to make them electronically accessible, including images.
GraffDok’s workflow is simple and only requires an EXIF-GPS-tagged photograph of a graffito. The location is determined automatically by reverse geocoding the embedded GPS coordinates with the Gisgraphy web service. While the user is asked for additional metadata, GraffDok analyses the image in parallel: it separates foreground from background and then extracts the drawing lines so that they stand alone. The command-line tool ImageMagick is used for this as well as for reading the EXIF data.
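The ImageMagick calls involved might look as follows. The commands are only assembled here, not executed, and the file names are placeholders; the report does not specify the exact options GraffDok uses:

```python
# Sketch of ImageMagick invocations for GraffDok-style processing.
# Run via subprocess.run(cmd) with ImageMagick installed.
def exif_gps_cmd(photo):
    # Read the GPS tags embedded in the photo's EXIF block.
    return ["identify", "-format", "%[EXIF:GPSLatitude] %[EXIF:GPSLongitude]", photo]

def extract_lines_cmd(photo, out_png):
    # Grey-scale the image, run edge detection, then invert so the
    # drawing lines stand alone on a light background.
    return ["convert", photo, "-colorspace", "Gray", "-edge", "1", "-negate", out_png]

print(" ".join(exif_gps_cmd("graffito.jpg")))
print(" ".join(extract_lines_cmd("graffito.jpg", "pattern.png")))
```

Keeping the commands as argument lists (rather than shell strings) avoids quoting problems when file names contain spaces.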
All metadata are written to CSV files, which remain easily accessible and can also be integrated into TeX files. The latter are converted to PDF at the end of the workflow and contain a table of all graffiti as well as a summary for each, including the generated characteristic graffiti pattern image.
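Writing the metadata to CSV is straightforward with Python's standard library. The field names below are hypothetical, since the report does not list them:

```python
import csv
import io

# Hypothetical metadata schema for one recorded graffito.
fields = ["id", "latitude", "longitude", "address", "date"]
rows = [{"id": "g001", "latitude": "52.40", "longitude": "13.06",
         "address": "Am Neuen Palais 10", "date": "2014-05-01"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fields)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue().strip())
```

A CSV row maps one-to-one onto a row of a TeX tabular, which is why the format slots neatly into the PDF-generation step.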
The protein classification workflow described in this report enables users to obtain information about a novel protein sequence automatically. The information is derived by different bioinformatics analysis tools that calculate or predict features of the protein sequence. In addition, databases are used to compare the novel sequence with known proteins.
Analyses of metagenomes in life sciences present new opportunities as well as challenges to the scientific community and call for advanced computational methods and workflows. The large amount of data collected from samples via next-generation sequencing (NGS) technologies renders manual approaches to sequence comparison and annotation unsuitable. Rather, fast and efficient computational pipelines are needed to provide comprehensive statistics and summaries and enable the researcher to choose appropriate tools for more specific analyses. The workflow presented here builds upon previous pipelines designed for automated clustering and annotation of raw sequence reads obtained from next-generation sequencing technologies such as 454 and Illumina. Employing specialized algorithms, the sequence reads are processed at three different levels. First, raw reads are clustered at a high similarity cutoff to yield clusters which can be exported as multifasta files for further analyses. Independently, open reading frames (ORFs) are predicted from raw reads and clustered at two strictness levels to yield sets of non-redundant sequences and ORF families. Furthermore, single ORFs are annotated by performing searches against the Pfam database.
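The ORF-prediction step can be illustrated with a minimal forward-strand ORF finder. This is a sketch only: real pipelines scan all six reading frames and use dedicated prediction tools:

```python
# Minimal ORF finder on the forward strand: an ORF here is an in-frame
# ATG ... stop-codon stretch of at least min_len nucleotides.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq, min_len=6):
    orfs = []
    for frame in range(3):
        i = frame
        while i + 3 <= len(seq):
            if seq[i:i+3] == "ATG":
                j = i + 3
                while j + 3 <= len(seq) and seq[j:j+3] not in STOPS:
                    j += 3
                if j + 3 <= len(seq):        # found an in-frame stop codon
                    orf = seq[i:j+3]
                    if len(orf) >= min_len:
                        orfs.append(orf)
                    i = j                    # resume after this ORF
            i += 3
    return orfs

print(find_orfs("CCATGAAATGACGGATGTTTTAAG"))  # ['ATGAAATGA', 'ATGTTTTAA']
```

Clustering such predicted ORFs at two strictness levels then yields the non-redundant sets and ORF families described above.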
Exploratory Data Analysis
(2014)
In bioinformatics, the term exploratory data analysis refers to a range of methods for getting an overview of large biological data sets; it thus provides a framework for further analysis and hypothesis testing. The workflow presented here facilitates this important first step in analysing data produced by high-throughput technologies. Its results are various plots showing the structure of the measurements. The goal of the workflow is to automate the exploratory data analysis while preserving its flexibility. The basic tool is the free software R.
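Although the workflow itself is built on R, the kind of first-pass summary it produces can be sketched language-neutrally. Here is a five-number summary, the numeric counterpart of a box plot, over made-up measurement values:

```python
import statistics

def five_number_summary(values):
    # Min, quartiles and max: the numbers a box plot visualizes.
    v = sorted(values)
    q1, med, q3 = statistics.quantiles(v, n=4)
    return {"min": v[0], "q1": q1, "median": med, "q3": q3, "max": v[-1]}

expr = [2.1, 3.5, 3.6, 4.0, 4.2, 4.8, 5.1, 9.9]  # illustrative measurements
print(five_number_summary(expr))
```

A value far outside the quartile range (here 9.9) is exactly the kind of structure such an overview is meant to surface before formal testing.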
With the jABC it is possible to realize workflows for numerous questions in different fields. The goal of this project was to create a workflow for the identification of differentially expressed genes. This is of special interest in biology because it offers better insight into cellular changes caused by exogenous stress, diseases and the like. The knowledge derived from differentially expressed genes in diseased tissues makes it possible to find new targets for treatment.
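A minimal per-gene effect-size computation sketches the kind of statistic such a workflow produces; real analyses add moderated test statistics and multiple-testing correction, and the expression values below are made up:

```python
import math
import statistics

def log2_fold_change(control, treated, eps=1e-9):
    # log2 ratio of mean expression; eps guards against division by zero.
    return math.log2((statistics.mean(treated) + eps) /
                     (statistics.mean(control) + eps))

control = [10.0, 12.0, 11.0]   # illustrative replicate measurements
treated = [40.0, 44.0, 48.0]
lfc = log2_fold_change(control, treated)
print(round(lfc, 2))  # 2.0
```

A log2 fold change of 2 means four-fold up-regulation; ranking genes by such values is the usual first filter before statistical testing.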
A workflow for visualizing server connections using the Google Maps API was built in the jABC. It makes use of three basic services: an XML-based IP address geolocation web service, a command-line tool and the Static Maps API. The result of the workflow is a URL leading to an image file of a map showing server connections between a client and a target host.
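Composing such a Static Maps URL is plain query-string construction. The coordinates below are illustrative, and newer versions of the API additionally require a key parameter:

```python
from urllib.parse import urlencode

def static_map_url(client, server, size="600x400"):
    # Draw a blue line between the two geolocated endpoints and mark the client.
    path = "color:0x0000ff|weight:3|{:.4f},{:.4f}|{:.4f},{:.4f}".format(
        *client, *server)
    params = {"size": size, "path": path,
              "markers": "{:.4f},{:.4f}".format(*client)}
    return "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)

url = static_map_url((52.3906, 13.0645), (37.4221, -122.0841))
print(url)
```

The geolocation service supplies the two coordinate pairs; this final step merely serializes them into the map request.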
Spotlocator is a game wherein people have to guess the spots where photos were taken. The photos for each game come from panoramio.com and cover a defined area. They are published with an ID at http://spotlocator.drupalgardens.com. Everyone can guess a photo's spot by sending a special tweet via Twitter that contains the hashtag #spotlocator, the guessed coordinates and the ID of the photo. An evaluation of all tweets is published: the players are informed about their distance to the real photo spots, and the positions are shown on a map.
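The distance evaluation amounts to a great-circle computation between guessed and actual coordinates, for example with the haversine formula. The coordinates below are made up:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    # Great-circle distance between guessed and actual photo spot.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Guess in central Potsdam vs. an actual spot about 1 km to the north.
print(round(haversine_km(52.4000, 13.0600, 52.4090, 13.0600), 2))  # 1.0
```

Each #spotlocator tweet yields one such distance, which is then reported back to the player alongside the map positions.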
Through the use of next-generation sequencing (NGS) technology, many newly sequenced organisms are now available. Annotating their genes is one of the most challenging tasks in sequence biology. Here, we present an automated workflow that finds homologous proteins, annotates sequences according to function, and creates a three-dimensional model.
In this project I constructed a workflow that takes a DNA sequence as input and produces a phylogenetic tree comprising the input sequence and other sequences found during a database search. In this phylogenetic tree the sequences are arranged according to their similarities. In bioinformatics, phylogenetic trees are commonly constructed to explore the evolutionary relationships of genes or organisms and to understand the mechanisms of evolution itself.
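The similarity arrangement starts from pairwise distances. A minimal p-distance computation over toy aligned sequences is sketched below; tree construction (e.g. neighbour joining) would then operate on this matrix:

```python
def p_distance(a, b):
    # Proportion of mismatching sites between two aligned sequences.
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy aligned sequences; real workflows align database hits first.
seqs = {"query": "ACGTACGT", "hitA": "ACGTACGA", "hitB": "ACGAACTA"}
names = list(seqs)
dist = {(i, j): p_distance(seqs[i], seqs[j])
        for i in names for j in names if i < j}
for pair, d in sorted(dist.items(), key=lambda kv: kv[1]):
    print(pair, round(d, 3))
```

The smallest distance (here query–hitA) identifies the pair a distance-based tree method would join first.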
We study the diffusion of a tracer particle, which moves in continuum space between a lattice of excluded-volume, immobile non-inert obstacles. In particular, we analyse how the strength of the tracer–obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of partitioning of the tracer diffusion modes between trapping states when bound to obstacles and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble and time averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer–obstacle adsorption and binding triggers a transient anomalous diffusion. From a very narrow spread of recorded individual time averaged trajectories we exclude continuous-time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer–crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble and time averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
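The time-averaged mean squared displacement used in such analyses is an average along a single trajectory. A minimal sketch on a deterministic toy trajectory follows; with unit steps the displacement over lag Δ is exactly Δ, so the MSD is Δ², whereas for an ergodic diffusive process the time average would converge to the ensemble MSD:

```python
def time_averaged_msd(traj, lag):
    # delta^2(lag): average of (x(t+lag) - x(t))^2 along one trajectory.
    n = len(traj)
    return sum((traj[t + lag] - traj[t]) ** 2 for t in range(n - lag)) / (n - lag)

# Deterministic toy trajectory with unit steps: MSD(lag) = lag^2 exactly.
traj = list(range(100))
print(time_averaged_msd(traj, 1), time_averaged_msd(traj, 5))  # 1.0 25.0
```

Comparing this quantity with the ensemble-averaged MSD, and examining the spread of individual time averages, is precisely how ergodicity and candidate models such as CTRW are tested in the paper.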
The looping of polymers such as DNA is a fundamental process in the molecular biology of living cells, whose interior is characterised by a high degree of molecular crowding. We here investigate in detail the looping dynamics of flexible polymer chains in the presence of different degrees of crowding. From the analysis of the looping–unlooping rates and the looping probabilities of the chain ends we show that the presence of small crowders typically slows down the chain dynamics but larger crowders may in fact facilitate the looping. We rationalise these non-trivial and often counterintuitive effects of the crowder size on the looping kinetics in terms of an effective solution viscosity and standard excluded volume. It is shown that for small crowders the effect of an increased viscosity dominates, while for big crowders we argue that confinement effects (caging) prevail. The tradeoff between both trends can thus result in the impediment or facilitation of polymer looping, depending on the crowder size. We also examine how the crowding volume fraction, chain length, and the attraction strength of the contact groups of the polymer chain affect the looping kinetics and hairpin formation dynamics. Our results are relevant for DNA looping in the absence and presence of protein mediation, DNA hairpin formation, RNA folding, and the folding of polypeptide chains under biologically relevant high-crowding conditions.