The rise of open source models for software and hardware development has catalyzed the debate regarding sustainable business models. Open Source Software has already become a dominant part of the software industry, whereas Open Source Hardware is still a little-researched phenomenon, yet it has the potential to do the same for manufacturing across a wide range of products. This article addresses this potential by introducing a research design to analyze the prototyping phase of six different Open Source Hardware projects tackling ecological, social, and economic challenges. Using a design science research methodology, a process model is developed to concretise the prototype development steps. The prototyping phase is important because it is where fundamental decisions are made that affect the openness of the final product. This paper aims to advance the discourse on open production as a concept that enables companies to apply openness in collaboration-oriented and sustainable business models.
Looking for participation
(2022)
A stronger learner orientation through participatory learning increases learning motivation and results. But what does participatory learning mean? Where do learning factories and fabrication laboratories (FabLabs) stand in this context, and how can didactic implementation be improved in this respect? Using a newly developed analytical framework, which contains elements of the stage model of participation and general media didactics, we compare a FabLab and a learning factory example concerning the degree of participation. From this, we derive guidelines for designing participative teaching and learning processes in learning factories. We explain how FabLabs can be an inspiration for the didactic design of learning factories.
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the T_P operator; the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules, questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is somewhat more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
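To make the kind of restriction at stake concrete, consider an illustrative example of ours (not taken from the paper): in the rule $p(X) \leftarrow q(X) \wedge (r(X) \vee X > 5)$, the disjunct $X > 5$ alone cannot bind $X$, and the built-in comparison may be executed only once $X$ is ground; the rule is nevertheless safe because the conjunct $q(X)$ binds $X$ before the disjunction is evaluated. By contrast, a rule such as $p(X) \leftarrow \neg q(X)$ is not range-restricted and could produce infinitely many new tuples.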
Halide perovskites
(2021)
Power relations within the area of blockchain governance are complex by definition, and a comprehensive analysis that links technological and institutional elements is missing to date. The research presented in this article focuses on the visualization of the power relations that shift with the introduction of blockchain. For this purpose, the analysis leverages an adjusted version of the multi-stakeholder influence mapping tool. The analysis considers the various stakeholders within the multi-layered blockchain technology stack and compares three fundamental blockchain scenarios, including public and private blockchain settings. The findings show that public administrations do indeed lose power with the introduction of blockchain, while new stakeholders come into play whose influence is largely unchecked. Nonetheless, public administrations are not powerless overall and remain influential stakeholders. This paper concludes that blockchain governance is not as democratic as blockchain enthusiasts tend to argue and derives corresponding opportunities for further research.
Because software development is increasingly expensive and time-consuming, software reuse gains importance. Aspect-oriented software development modularizes crosscutting concerns, which enables their systematic reuse. The literature provides a number of AOP patterns and best practices for developing reusable aspects, based on compelling examples for concerns like tracing, transactions, and persistence. However, such best practices are lacking for systematically reusing invasive aspects. In this paper, we present the ‘callback mismatch problem’. This problem arises in the context of abstraction mismatch, in which the aspect is required to issue a callback to the base application. As a consequence, the composition of invasive aspects is cumbersome to implement, difficult to maintain, and impossible to reuse. We motivate this problem with a real-world example, show that it persists in the current state of the art, and outline the need for advanced aspectual composition mechanisms to deal with it.
We introduce and discuss a number of issues that arise in the process of building a finite-state morphological analyzer for Urdu, in particular issues with potential ambiguity and non-concatenative morphology. Our approach allows for an underlyingly similar treatment of both Urdu and Hindi via a cascade of finite-state transducers that transliterates the very different scripts into a common ASCII transcription system. As the transliteration system is based on the same XFST tools in which the common Urdu/Hindi morphological analyzer is implemented, no compatibility problems arise.
Enterprise Resource Planning (ERP) systems are critical to the success of enterprises, facilitating business operations through standardized digital processes. However, existing ERP systems are unsuitable for startups and small and medium-sized enterprises that grow quickly and require adaptable solutions with low barriers to entry. Drawing upon 15 explorative interviews with industry experts, we examine the challenges of current ERP systems using the task technology fit theory across companies of varying sizes. We describe high entry barriers, high costs of implementing implicit processes, and insufficient interoperability of already employed tools. We present a vision of a future business process platform based on three enablers: Business processes as first-class entities, semantic data and processes, and cloud-native elasticity and high availability. We discuss how these enablers address current ERP systems' challenges and how they may be used for research on the next generation of business software for tomorrow's enterprises.
In this paper we consider a simple syntactic extension of Answer Set Programming (ASP) for dealing with (nested) existential quantifiers and double negation in rule bodies, closely following the recent proposal RASPL-1. The semantics for this extension resorts directly to Equilibrium Logic (or, equivalently, to the General Theory of Stable Models), which provides a logic-programming interpretation for any arbitrary theory in the syntax of Predicate Calculus. We present a translation of this syntactic class into standard logic programs with variables (either disjunctive or normal, depending on the input rule heads), as allowed by current ASP solvers. The translation relies on the introduction of auxiliary predicates, and the main result shows that it preserves strong equivalence modulo the original signature.
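To convey the flavor of such a translation with an illustrative instance of ours (simplified, not the paper's general construction): a rule with an existentially quantified body such as $p(X) \leftarrow q(X) \wedge \exists Y\,(r(X,Y) \wedge \neg s(Y))$ can be unfolded, with an auxiliary predicate $aux$, into the standard rules $aux(X) \leftarrow r(X,Y) \wedge \neg s(Y)$ and $p(X) \leftarrow q(X) \wedge aux(X)$, which behave like the original rule once the auxiliary symbol is projected away.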
The paper aims to bring the experience of playing videogames closer to objective knowledge, where the experience can be assessed and falsified via an operational concept. The theory focuses on explaining the basic elements at the core of the process of the experience. The name ‘puppetry’ is introduced after discussing the similarities in the importance of experience for both videogames and theatrical puppetry. Puppetry, then, operationalizes the gaming experience into a concept that can be assessed.
We summarize Chandra observations of the emission line profiles from 17 OB stars. The lines tend to be broad and unshifted. The forbidden/intercombination line ratios arising from helium-like ions provide radial distance information for the X-ray emission sources, while the H-like to He-like line ratios provide X-ray temperatures, and thus also source temperature versus radius distributions. OB stars usually show power-law differential emission measure distributions versus temperature. In models of bow shocks, we find a power-law differential emission measure and a wide range of ion stages, and the bow shock flow around the clumps provides transverse velocities comparable to the HWHM values. We find that the bow shock results account for the line profile properties, consistent with the observations of X-ray line emission for a broad range of OB star properties.
We study the time variability of emission lines in three WNE stars: WR 2 (WN2), WR 3 (WN3ha), and WR 152 (WN3). While WR 2 shows no variability above the noise level, the other two stars do show variations, which are similar to those of other WR stars in WR 152 but very fast in WR 3. From these motions, we deduce a value of β ∼ 1 for WR 3, which is like that seen in O stars, and β ∼ 2–3 for WR 152, which is intermediate between other WR stars and WR 3.
Artificial intelligence (AI)-based technologies can increasingly perform knowledge work tasks, such as medical diagnosis. It is expected that humans will not be replaced by AI but will work closely with AI-based technology (“augmentation”). Augmentation has ethical implications for humans (e.g., impact on autonomy, opportunities to flourish through work); thus, developers and managers of AI-based technology have a responsibility to anticipate and mitigate risks to human workers. However, doing so can be difficult, as AI encompasses a wide range of technologies, some of which enable fundamentally new forms of interaction. In this research-in-progress paper, we propose the development of a taxonomy to categorize the unique characteristics of AI-based technology that influence this interaction and have ethical implications for human workers. The completed taxonomy will support researchers in forming cumulative knowledge on the ethical implications of augmentation and assist practitioners in the ethical design and management of AI-based technology in knowledge work.
By quantitatively fitting simple emission line profile models that include both atomic opacity and porosity to the Chandra X-ray spectrum of ζ Pup, we are able to explore the trade-offs between reduced mass-loss rates and wind porosity. We find that reducing the mass-loss rate of ζ Pup by roughly a factor of four, to 1.5 × 10−6 M⊙ yr−1, enables simple non-porous wind models to provide good fits to the data. If, on the other hand, we take the literature mass-loss rate of 6×10−6 M⊙ yr−1, then to produce X-ray line profiles that fit the data, extreme porosity lengths – of h∞ ≈ 3 R∗ – are required. Moreover, these porous models do not provide better fits to the data than the non-porous, low optical depth models. Additionally, such huge porosity lengths do not seem realistic in light of 2-D numerical simulations of the wind instability.
KEYCIT 2014
(2015)
In our rapidly changing world it is increasingly important not only to be an expert in a chosen field of study but also to be able to respond to developments, master new approaches to solving problems, and fulfil changing requirements in the modern world and in the job market. In response to these needs key competencies in understanding, developing and using new digital technologies are being brought into focus in school and university programmes. The IFIP TC3 conference "KEYCIT – Key Competences in Informatics and ICT (KEYCIT 2014)" was held at the University of Potsdam in Germany from July 1st to 4th, 2014 and addressed the combination of key competencies, Informatics and ICT in detail. The conference was organized into strands focusing on secondary education, university education and teacher education (organized by IFIP WGs 3.1 and 3.3) and provided a forum to present and to discuss research, case studies, positions, and national perspectives in this field.
We present an algorithm that computes a function assigning consecutive integers to the trees recognized by a deterministic, acyclic, finite-state, bottom-up tree automaton. Such a function is called a minimal perfect hashing. It can be used to identify trees recognized by the automaton, and its value may serve as an index into other data structures. We also present an algorithm for inverted hashing.
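The paper's construction is for tree automata; as a hedged sketch of the underlying counting idea, the following Python fragment shows the simpler string-automaton analogue (the toy automaton, symbol ordering, and ranking convention are our assumptions, not the paper's):

from functools import lru_cache

# Toy acyclic deterministic automaton: transitions[state] maps an input
# symbol to the successor state; `final` is the set of accepting states.
transitions = {0: {"a": 1, "b": 2}, 1: {"a": 2}, 2: {}}
final = {1, 2}
START = 0

@lru_cache(maxsize=None)
def count(state):
    """Number of accepted words readable from `state` (finite: the automaton is acyclic)."""
    n = 1 if state in final else 0
    return n + sum(count(s) for s in transitions[state].values())

def hash_word(word):
    """Rank of `word` among all accepted words (a minimal perfect hash), or None if rejected."""
    state, rank = START, 0
    for ch in word:
        if state in final:                      # shorter words ending here rank first
            rank += 1
        for sym in sorted(transitions[state]):  # fixed symbol order
            if sym == ch:
                state = transitions[state][sym]
                break
            rank += count(transitions[state][sym])
        else:
            return None                         # no transition on `ch`
    return rank if state in final else None

# hash_word maps the accepted words "a", "aa", "b" to 0, 1, 2, respectively;
# inverted hashing walks the same counts in reverse to recover the word.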
Many hot stars exhibit stochastic polarimetric variability, thought to arise from clumping low in the wind. Here we investigate the wind properties required to reproduce this variability using analytic models, with particular emphasis on Luminous Blue Variables. We find that the winds must be highly structured, consisting of a large number of optically-thin clumps; while we find that the overall level of polarization should scale with mass-loss rate – consistent with observations of LBVs. The models also predict variability on very short timescales, which is supported by the results of a recent polarimetric monitoring campaign.
We present the latest results on the observational dependence of the mass-loss rate in stellar winds of O and early-B stars on the metal content of their atmospheres, and compare these with predictions. Absolute empirical rates for the mass loss of stars brighter than 10$^{5.2} L_{\odot}$, based on H$\alpha$ and ultraviolet (UV) wind lines, are found to be about a factor of two higher than predictions. If this difference is attributed to inhomogeneities in the wind this would imply that luminous O and early-B stars have clumping factors in their H$\alpha$ and UV line forming regime of about a factor of 3--5. The investigated stars cover a metallicity range $Z$ from 0.2 to 1 $Z_{\odot}$. We find a hint towards smaller clumping factors for lower $Z$. The derived clumping factors, however, presuppose that clumping does not impact the predictions of the mass-loss rate. We discuss this assumption and explain how we intend to investigate its validity in more detail.
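The link between the factor-of-two discrepancy and the quoted clumping factors can be made explicit with the standard argument (our paraphrase, not a claim from the abstract): H$\alpha$ emission is a density-squared diagnostic, so a smooth-wind analysis of a clumped wind with clumping factor $f_{\rm cl} = \langle \rho^2 \rangle / \langle \rho \rangle^2$ overestimates the mass-loss rate by a factor $\sqrt{f_{\rm cl}}$; a measured excess of about two therefore corresponds to $f_{\rm cl} \approx 4$, within the quoted range of 3--5.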
Due to changing customer behavior in the course of digitalization, banks are urged to change their traditional value creation in order to improve interaction with customers. New digital technologies such as core banking solutions change organizational structures to provide organizational and individual affordances in IT-supported personal advisory services. Based on adaptive structuration theory and on qualitative data from 24 German banks, we identify first-, second-, and third-order issues of organizational change in value creation, which are connected with a set of affordances and constraints as the outcomes for customer interaction.
The usage of data to improve or create business models has become vital for companies in the 21st century. However, to extract value from data it is important to understand the business model. Taxonomies for data-driven business models (DDBM) aim to provide guidance for the development and ideation of new business models relying on data. In IS research, however, different taxonomies have emerged in recent years, partly redundant and partly contradictory. Thus, there is a need to synthesize the common ground of these taxonomies within IS research. Based on 26 IS-related taxonomies and 30 cases, we derive and define 14 generic building blocks of DDBM to develop a consolidated taxonomy that represents the current state of the art. We thus integrate existing research on DDBM and provide avenues for the further exploration of data-induced potentials for business models as well as for the development and analysis of general or industry-specific DDBM.
A key problem in the automatic annotation of historical corpora is inconsistent spelling. Because the spelling of some word forms can differ between texts, a language model trained on already annotated treebanks may fail to recognize known word forms due to differences in spelling. In the present work, we explore the feasibility of an unsupervised method of spelling adjustment for the purpose of improved part-of-speech (POS) tagging. To this end, we present a method for spelling normalization based on weighted edit distances, which exploits within-text spelling variation. We then evaluate the improvement in tagging accuracy resulting from between-text spelling normalization in two tagging experiments on several Early New High German (ENHG) texts.
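As a minimal sketch of the normalization idea in Python (the paper derives its weights from within-text spelling variation; the cheap variant pairs and costs below are illustrative assumptions, loosely inspired by ENHG v/u and y/i alternations):

def weighted_edit_distance(a, b, sub_cost):
    """Levenshtein distance with per-pair substitution costs."""
    d = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        d[i][0] = i * 1.0                                  # deletions
    for j in range(1, len(b) + 1):
        d[0][j] = j * 1.0                                  # insertions
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost.get((a[i - 1], b[j - 1]), 1.0)
            d[i][j] = min(d[i - 1][j] + 1.0,               # delete
                          d[i][j - 1] + 1.0,               # insert
                          d[i - 1][j - 1] + sub)           # substitute
    return d[-1][-1]

# Frequent historical alternations get cheap substitution costs:
costs = {("v", "u"): 0.1, ("u", "v"): 0.1, ("y", "i"): 0.1, ("i", "y"): 0.1}

def normalize(word, lexicon):
    """Map a variant spelling to its cheapest known form in `lexicon`."""
    return min(lexicon, key=lambda w: weighted_edit_distance(word, w, costs))

print(normalize("vnd", ["und", "ende", "vater"]))          # -> "und"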
Physiological and genomic variation among cryptic species of a marsh snail (Melampus bidentatus)
(2021)
Received views of utterance context in pragmatic theory characterize the occurrent subjective states of interlocutors using notions like common knowledge or mutual belief. We argue that these views are not compatible with the uncertainty and robustness of context-dependence in human-human dialogue. We present an alternative characterization of utterance context as objective and normative. This view reconciles the need for uncertainty with received intuitions about coordination and meaning in context, and can directly inform computational approaches to dialogue.
During the outbreak of the COVID-19 pandemic, many people shared their symptoms across Online Social Networks (OSNs) like Twitter, hoping for others’ advice or moral support. Prior studies have shown that those who disclose health-related information across OSNs often tend to regret it and delete their publications afterwards. Hence, deleted posts containing sensitive data can be seen as manifestations of online regrets. In this work, we present an analysis of deleted content on Twitter during the outbreak of the COVID-19 pandemic. For this, we collected more than 3.67 million tweets describing COVID-19 symptoms (e.g., fever, cough, and fatigue) posted between January and April 2020. We observed that around 24% of the tweets containing personal pronouns were deleted either by their authors or by the platform after one year.
As a practical application of the resulting dataset, we explored its suitability for the automatic classification of regrettable content on Twitter.
This paper describes the key aspects of the system SynCoP (Syntactic Constraint Parser) developed at the Berlin-Brandenburgische Akademie der Wissenschaften. The parser allows syntactic tagging and chunking to be combined by means of constraint grammar using weighted finite-state transducers (WFSTs). Chunks are interpreted as local dependency structures within syntactic tagging. The linguistic theories are formulated as criteria that are formalized via a semiring; these criteria allow structural preferences and gradual grammaticality. The parser is essentially a cascade of WFSTs. To find the most likely syntactic readings, a best-path search is used.
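As a hedged illustration of the best-path step over a weighted lattice (the parser itself operates on cascaded WFSTs; the toy lattice, labels, and costs below are our assumptions), in the tropical semiring (min, +) the most likely reading is the cheapest path:

import heapq

# Toy lattice: edges[state] = [(next_state, label, cost), ...];
# lower cost means a more preferred (more grammatical) reading.
edges = {
    0: [(1, "DET", 0.0)],
    1: [(2, "NOUN", 0.2), (2, "VERB", 1.5)],   # ambiguous token
    2: [(3, "VERB", 0.3)],
}
FINAL = 3

def best_path(start=0):
    """Dijkstra-style search for the cheapest label sequence through the lattice."""
    heap = [(0.0, start, [])]
    seen = set()
    while heap:
        cost, state, labels = heapq.heappop(heap)
        if state == FINAL:
            return cost, labels
        if state in seen:
            continue
        seen.add(state)
        for nxt, label, w in edges.get(state, []):
            heapq.heappush(heap, (cost + w, nxt, labels + [label]))
    return None

print(best_path())   # -> (0.5, ['DET', 'NOUN', 'VERB'])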
A new method is used in an eye-tracking pilot experiment which shows that it is possible to detect differences in common ground associated with the use of minimally different types of indefinite anaphora. Following Richardson and Dale (2005), cross recurrence quantification analysis (CRQA) was used to show that the tandem eye movements of two Swedish-speaking interlocutors are slightly more coupled when they are using fully anaphoric indefinite expressions than when they are using less anaphoric indefinites. This shows the potential of CRQA to detect even subtle processing differences in ongoing discourse.
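A hedged sketch of the core CRQA quantity, the cross-recurrence rate between two gaze streams (the radius, sampling, and coupling model below are illustrative assumptions, not the study's parameters):

import numpy as np

def cross_recurrence_rate(gaze_a, gaze_b, radius):
    """Fraction of time-point pairs (i, j) at which the two gaze positions lie within `radius`."""
    a = np.asarray(gaze_a)[:, None, :]          # shape (T, 1, 2)
    b = np.asarray(gaze_b)[None, :, :]          # shape (1, T, 2)
    return (np.linalg.norm(a - b, axis=-1) < radius).mean()

rng = np.random.default_rng(0)
speaker = rng.random((100, 2))                                   # 2-D gaze coordinates
listener = speaker + rng.normal(scale=0.05, size=(100, 2))       # loosely coupled partner
print(cross_recurrence_rate(speaker, listener, radius=0.1))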
Interdisziplinäres Zentrum für Musterdynamik und Angewandte Fernerkundung: Workshop, February 9–10, 2006
Spectral detection enables multi-color fluorescence fluctuation spectroscopy studies in living cells
(2021)
We present an analysis of student language input in a corpus of tutoring dialogue in the domain of symbolic differentiation. Our focus on procedural tutoring makes the dialogue comparable to collaborative problem-solving (CPS). Existing CPS models describe the process of negotiating plans and goals, which also fits procedural tutoring. However, we provide a classification of student utterances and a corpus annotation which show that approximately 28% of non-trivial student language in this corpus is not accounted for by existing models and instead addresses other functions, such as evaluating past actions or correcting mistakes. Our analysis can be used as a foundation for improving models of tutoring dialogue.
Background:
Anti-TNFα monoclonal antibodies (mAbs) are a well-established treatment for patients with Crohn’s disease (CD). However, subtherapeutic concentrations of mAbs have been related to a loss of response during the first year of therapy [1]. Therefore, an appropriate dosing strategy is crucial to prevent the underexposure of mAbs in these patients. The aim of our study was to assess the impact of different dosing strategies (fixed dose or adapted to a body size descriptor) on drug exposure and target concentration attainment for two different anti-TNFα mAbs: infliximab (IFX, body weight (BW)-based dosing) and certolizumab pegol (CZP, fixed dosing). For this purpose, a comprehensive pharmacokinetic (PK) simulation study was performed.
Methods:
A virtual population of 1000 clinically representative CD patients was generated based on the distribution of CD patient characteristics from an in-house clinical database (n = 116). Seven dosing regimens were investigated: fixed dose and per BW, lean BW (LBW), body surface area, height, body mass index, and fat-free mass. The individual body size-adjusted doses were calculated from the virtual patients’ body size descriptor values. Then, using published PK models for IFX and CZP in CD patients [2,3], 1000 concentration–time profiles were simulated for each patient to capture the typical profile of a specific patient as well as the range of possible individual profiles due to unexplained PK variability across patients. For each dosing strategy, the variability in maximum and minimum mAb concentrations (Cmax and Cmin, respectively) and in the area under the concentration–time curve (AUC), as well as the per cent of patients reaching the target concentration, were assessed during maintenance therapy.
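A minimal sketch of the simulation logic in Python (the published IFX and CZP population PK models are more elaborate; the one-compartment bolus structure, parameter values, and allometric clearance scaling below are illustrative assumptions):

import numpy as np

def trough(dose_mg, cl_l_day, v_l, tau_days, n_doses=10):
    """Concentration (mg/L) just before the next dose, after n_doses repeated boluses."""
    k = cl_l_day / v_l
    return sum(dose_mg / v_l * np.exp(-k * tau_days * (n_doses - i))
               for i in range(n_doses))

rng = np.random.default_rng(1)
bw = rng.normal(75, 15, 1000).clip(40, 140)                      # virtual body weights (kg)
cl = 0.3 * (bw / 75) ** 0.75 * np.exp(rng.normal(0, 0.3, 1000))  # clearance with unexplained variability
for label, doses in [("fixed 400 mg", np.full(1000, 400.0)),
                     ("5 mg/kg BW", 5.0 * bw)]:
    c = np.array([trough(d, c_i, v_l=5.0, tau_days=56) for d, c_i in zip(doses, cl)])
    print(label, "median trough =", round(float(np.median(c)), 2), "mg/L,",
          "CV% =", round(100 * float(c.std() / c.mean()), 1))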
Results:
For IFX and CZP, Cmin showed the highest variability between patients (CV ≈110% and CV ≈80%, respectively), with a similar extent across all dosing strategies. For IFX, the per cent of patients reaching the target (Cmin = 5 µg/ml) was similar across all dosing strategies (~15%). For CZP, the per cent of patients reaching the target average concentration of 17 µg/ml varied substantially (52–71%) and was highest for LBW-adjusted dosing.
Conclusion:
Using a PK simulation approach, different dosing regimens of IFX and CZP revealed the highest variability for Cmin, the most commonly used PK parameter guiding treatment decisions, independent of the dosing regimen. Our results demonstrate similar target attainment with fixed dosing of IFX compared with the currently recommended BW-based dosing. For CZP, the current fixed dosing strategy leads to a percentage of patients reaching the target comparable to that of the best-performing body size-adjusted dosing (66% vs. 71%, respectively).
Public blockchain
(2020)
Blockchain has the potential to change business transactions to a major extent. Underlying consensus algorithms are the core mechanism for achieving consistency in distributed infrastructures, and their application aims for transparency and accountability in societal transactions. Since existing reviews do not cover consensus algorithms holistically, we aim to (1) identify prevalent consensus algorithms for public blockchains and (2) address the resource perspective with a sustainability consideration (whereby we address the three spheres of sustainability). Our systematic literature review identified 33 different consensus algorithms for public blockchains. Our contribution is twofold: first, we provide a systematic summary of consensus algorithms for public blockchains derived from the scientific literature as well as real-world applications and systemize them according to their research focus; second, we assess the sustainability of consensus algorithms using a representative sample and thereby highlight the gaps in the literature with respect to the holistic sustainability of consensus algorithms.