Green Open-Access
Scrum2kanban
(2018)
Using university capstone courses to teach agile software development methodologies has become commonplace, as agile methods have gained support in professional software development. This usually means students are introduced to and work with the currently most popular agile methodology: Scrum. However, as the agile methods employed in industry change and are adapted to different contexts, university courses must follow suit. A prime example of this is the Kanban method, which has recently gathered attention in industry. In this paper, we describe a capstone course design that adds hands-on learning of the lean principles advocated by Kanban to a capstone project run with Scrum. This ensures both that students are aware of recent process frameworks and ideas and that they gain a more thorough overview of how agile methods can be employed in practice. We describe the details of the course and analyze the participating students' perceptions as well as our own observations. We analyze the development artifacts created by students during the course with respect to the two different development methodologies. We further present a summary of the lessons learned as well as recommendations for future similar courses. The survey conducted at the end of the course revealed an overwhelmingly positive attitude of students towards the integration of Kanban into the course.
Nanocarriers
(2017)
The electromagnetic coupling of molecular excitations to plasmonic nanoparticles offers a promising way to manipulate the light-matter interaction at the nanoscale. Plasmonic nanoparticles foster exceptionally high coupling strengths due to their capacity to strongly concentrate the light field into sub-wavelength mode volumes. A particularly interesting regime occurs when the coupling strength surpasses all damping rates in the system. In this so-called strong-coupling regime, hybrid light-matter states emerge that can no longer be divided into separate light and matter components. These hybrids unite the features of the original components and possess new resonances whose positions are separated by the Rabi splitting energy ℏΩ. Detuning the resonance of one of the components leads to an anticrossing of the two arising branches of the new resonances, ω+ and ω−, with a minimal separation of Ω = ω+ − ω−.
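The anticrossing of the hybrid branches can be illustrated by diagonalizing a 2×2 coupled-oscillator Hamiltonian; the resonance frequencies and coupling strength below are illustrative values, not measured ones:

```python
import numpy as np

def hybrid_modes(omega_pl, omega_mol, g):
    """Eigenfrequencies of a 2x2 coupled-oscillator Hamiltonian.

    omega_pl : plasmon resonance frequency (illustrative, eV)
    omega_mol: molecular exciton frequency (illustrative, eV)
    g        : coupling strength (illustrative, eV)
    Returns (omega_minus, omega_plus), sorted ascending.
    """
    H = np.array([[omega_pl, g],
                  [g, omega_mol]])
    return tuple(np.linalg.eigvalsh(H))

# On resonance, the branch separation equals the Rabi splitting 2g:
lo, hi = hybrid_modes(2.0, 2.0, 0.1)
splitting = hi - lo  # 2g = 0.2 eV

# Detuning one component produces an anticrossing: the gap
# sqrt(d^2 + 4g^2) is minimal (= 2g) at zero detuning and never closes.
detunings = np.linspace(-0.5, 0.5, 11)
gaps = [hybrid_modes(2.0 + d, 2.0, 0.1)[1] -
        hybrid_modes(2.0 + d, 2.0, 0.1)[0] for d in detunings]
```

This is a minimal sketch of the anticrossing behaviour described above, not a model of any specific nanoparticle-molecule system.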
For the last ten years, almost every theoretical result concerning the expected run time of a randomized search heuristic has used drift theory, making it arguably the most important tool in this domain. Its success is due to its ease of use and its powerful result: drift theory allows the user to derive bounds on the expected first-hitting time of a random process by bounding the expected local changes of the process - the drift. This is usually far easier than bounding the expected first-hitting time directly. Due to the widespread use of drift theory, it is of utmost importance to have the best drift theorems possible. We improve the fundamental additive, multiplicative, and variable drift theorems by stating them in a form as general as possible and providing examples of why the restrictions we keep are still necessary. Our additive drift theorem for upper bounds only requires the process to be nonnegative; that is, we remove unnecessary restrictions like a finite, discrete, or bounded search space. As corollaries, the same is true for our upper bounds in the case of variable and multiplicative drift.
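The additive drift theorem for upper bounds says that a nonnegative process whose expected one-step decrease is at least δ reaches 0 in expected time at most X₀/δ. A toy simulation, with an assumed step distribution whose drift is exactly δ (so the bound is met with near equality):

```python
import random

def hitting_time(x0, delta, rng):
    """First-hitting time of 0 for a walk with expected step -delta.

    Each step is -1 with probability (1 + delta) / 2 and +1 otherwise,
    so the expected one-step change (the drift) is exactly -delta.
    """
    x, t = x0, 0
    p_down = (1 + delta) / 2
    while x > 0:
        x += -1 if rng.random() < p_down else 1
        t += 1
    return t

# Additive drift theorem (upper bound): E[T] <= X_0 / delta.
rng = random.Random(0)
x0, delta, trials = 20, 0.5, 200
avg = sum(hitting_time(x0, delta, rng) for _ in range(trials)) / trials
bound = x0 / delta  # = 40; the empirical average is close to this bound
```

The walk and parameter values are illustrative; the theorem itself makes no assumption beyond nonnegativity and the drift condition.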
For theoretical analyses, there are two specifics distinguishing GP from many other areas of evolutionary computation. First, the variable-size representations, which in particular can yield bloat (i.e. the growth of individuals with redundant parts). Second, the role and realization of crossover, which is particularly central in GP due to the tree-based representation. Whereas some theoretical work on GP has studied the effects of bloat, crossover has had a surprisingly small share in this work. We analyze a simple crossover operator in combination with local search, where a preference for small solutions minimizes bloat (lexicographic parsimony pressure); the resulting algorithm is denoted Concatenation Crossover GP. For this purpose, three variants of the well-studied Majority test function with large plateaus are considered. We show that Concatenation Crossover GP can efficiently optimize these test functions, while local search cannot be efficient for all three variants, independent of whether bloat control is employed.
We analyze the problem of response suggestion in a closed domain, based on a real-world scenario of a digital library. We present a text-processing pipeline to generate question-answer pairs from chat transcripts. On this limited amount of training data, we compare retrieval-based, conditioned-generation, and dedicated representation learning approaches for response suggestion. Our results show that retrieval-based methods, which strive to find similar, known contexts, are preferable to parametric approaches from the conditioned-generation family when training data is limited. We do, however, identify a specific representation learning approach that is competitive with the retrieval-based approaches despite the training data limitation.
Comparative text mining extends from genre analysis and political bias detection to the revelation of cultural and geographic differences, through to the search for prior art across patents and scientific papers. These applications use cross-collection topic modeling for the exploration, clustering, and comparison of large sets of documents, such as digital libraries. However, topic modeling on documents from different collections is challenging because of domain-specific vocabulary. We present a cross-collection topic model combined with automatic domain term extraction and phrase segmentation. This model distinguishes collection-specific and collection-independent words based on information entropy and reveals commonalities and differences of multiple text collections. We evaluate our model on patents, scientific papers, newspaper articles, forum posts, and Wikipedia articles. In comparison to state-of-the-art cross-collection topic modeling, our model achieves up to 13% higher topic coherence, up to 4% lower perplexity, and up to 31% higher document classification accuracy. More importantly, our approach is the first topic model that ensures disjoint general and specific word distributions, resulting in clear-cut topic representations.
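The entropy-based distinction between collection-specific and collection-independent words can be sketched as follows. The word counts and collection labels are hypothetical, and this is a simplified illustration of the idea, not the paper's full model:

```python
import math

def collection_entropy(counts):
    """Shannon entropy (bits) of a word's distribution over collections.

    High entropy  -> the word is spread evenly across collections
                     (collection-independent, a general word);
    low entropy   -> it concentrates in one collection
                     (collection-specific, a domain term).
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts of a word in three collections
# (e.g. patents, scientific papers, newspaper articles):
general  = collection_entropy([40, 38, 42])  # near log2(3) ≈ 1.58 bits
specific = collection_entropy([115, 3, 2])   # close to 0 bits
```

Thresholding such a score is one simple way to split vocabulary into general and specific distributions; the paper combines this signal with domain term extraction and phrase segmentation.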
Speech scientists have long noted that the qualities of naturally produced vowels do not remain constant over their durations, regardless of whether they are nominally "monophthongs" or "diphthongs". Recent acoustic corpora show that there are consistent patterns of first (F1) and second (F2) formant frequency change across different vowel categories. The three Australian English (AusE) close front vowels /iː, ɪ, ɪə/ provide a striking example: while their midpoint or mean F1 and F2 frequencies are virtually identical, their spectral change patterns differ distinctly. The results indicate that, despite the distinct patterns of spectral change of AusE /iː, ɪ, ɪə/ in production, their perceptual relevance is not uniform, but rather vowel-category dependent.
We investigated online electrophysiological components of distributional learning, specifically of tones by listeners of a non-tonal language. German listeners were presented with a bimodal distribution of syllables with lexical tones from a synthesized continuum based on Cantonese level tones. Tones were presented in sets of four standards (within-category tokens) followed by a deviant (across-category token). Mismatch negativity (MMN) was measured. Earlier behavioral data showed that exposure to this bimodal distribution improved both categorical perception and perceptual acuity for level tones [1]. In the present study, we present analyses of the electrophysiological response recorded during this exposure, i.e., the development of the MMN response during distributional learning. This development over time is analyzed using Generalized Additive Mixed Models, and the results showed that the MMN amplitude increased for both within- and across-category tokens, reflecting higher perceptual acuity accompanying category formation. This is evidence that learners zooming in on phonological categories undergo neural changes associated with more accurate phonetic perception.
Voice onset time (VOT), a primary cue for voicing in many languages including English and German, is known to vary greatly between speakers, but also displays robust within-speaker consistencies, at least in English. The current analysis extends these findings to German. VOT measures were investigated from voiceless alveolar and velar stops in CV syllables cued by a visual prompt in a cue-distractor task. Comparably to English, a considerable portion of German VOT variability can be attributed to the syllable’s vowel length and the stop’s place of articulation. Individual differences in VOT still remain irrespective of speech rate. However, significant correlations across places of articulation and between speaker-specific mean VOTs and standard deviations indicate that talkers employ a relatively unified VOT profile across places of articulation. This could allow listeners to more efficiently adapt to speaker-specific realisations.
In an effort to explain the formation of a narrow third radiation belt at ultra-relativistic energies detected during a solar storm in September 2012 (ref. 1), Mann et al. (ref. 2) present simulations from which they conclude that it arises from a process of outward radial diffusion alone, without the need for additional loss processes from higher-frequency waves. The comparison of observations with the model in Figs 2 and 3 of their Article clearly shows that even with strong radial diffusion rates, the model predicts a third belt near L* = 3 that is twice as wide as observed and approximately an order of magnitude more intense. We therefore disagree with their interpretation that “the agreement between the absolute fluxes from the model and those observed by REPT [the Relativistic Electron Proton Telescope] shown on Figs 2 and 3 is excellent.”
Previous studies (ref. 3) have shown that outward radial diffusion plays a very important role in the dynamics of the outer belt and is capable of explaining rapid reductions in the electron flux. It has also been shown that it can produce remnant belts (Fig. 2 of a long-term simulation study; ref. 4). However, radial diffusion alone cannot explain the formation of the narrow third belt at multi-MeV energies during September 2012. An additional loss mechanism is required.
Higher radial diffusion rates cannot improve the comparison of the model presented by Mann et al. with observations. A further increase in the radial diffusion rates (reported in Fig. 4 of the Supplementary Information of ref. 2) results in the overestimation of the outer belt fluxes by up to three orders of magnitude at an energy of 3.4 MeV.
Observations at 2 MeV, where the belts show only a two-zone structure, were not presented by Mann et al. Moreover, simulations of electrons with energies below 2 MeV with the same diffusion rates and boundary conditions used by the authors would probably produce very strong depletions down to L = 3–3.5, where L is the radial distance from the centre of the Earth to the given field line in the equatorial plane. Observations do not show non-adiabatic loss below L ∼ 4.5 for 2 MeV. Such different dynamics between 2 MeV and above 4 MeV at around L = 3.5 are another indication that particles are scattered by electromagnetic ion cyclotron (EMIC) waves, which affect only energies above a certain threshold.
Observations of the phase space density (PSD) provide additional evidence for the local loss of electrons. Around L* = 3.5–4, PSD shows a significant decrease by an order of magnitude starting in the afternoon of 3 September (Fig. 1a), while PSD above L* = 4 is increasing. The minimum in PSD between L* = 3.5–4 continues to decrease until 4 September. This evolution demonstrates that the loss is not produced by outward diffusion. Radial diffusion cannot produce deepening minima, as it works to smooth gradients. Just as growing peaks in PSD show the presence of localized acceleration (ref. 5), deepening minima show the presence of localized loss.
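The argument that diffusion smooths gradients and therefore fills, rather than deepens, a local minimum can be illustrated with a minimal 1-D diffusion sketch. This is an illustrative toy with made-up profile values, not the actual radial diffusion model, whose coefficients depend strongly on L*:

```python
import numpy as np

def diffuse(profile, d=0.2, steps=200):
    """Explicit finite-difference evolution of 1-D diffusion du/dt = D d2u/dx2.

    Boundary values are held fixed; d = D*dt/dx^2 must be <= 0.5 for stability.
    """
    u = np.asarray(profile, dtype=float).copy()
    for _ in range(steps):
        u[1:-1] += d * (u[2:] - 2 * u[1:-1] + u[:-2])
    return u

# A PSD-like profile with a local minimum in the middle (illustrative values):
u0 = np.array([5.0, 4.0, 3.0, 0.5, 3.0, 4.0, 5.0])
u1 = diffuse(u0)
# Diffusion raises the minimum toward its neighbours; it can never deepen it.
# An observed deepening minimum therefore requires a localized loss term.
```

The same conclusion holds for radial diffusion in L*: a minimum in PSD that becomes lower with time cannot be produced by diffusion alone.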
Figure 1: Time evolution of radiation profiles in electron PSD at relativistic and ultra-relativistic energies.
a, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model (ref. 10) and for μ = 2,500 MeV G^−1, K = 0.05 RE G^0.5 (where RE is the radius of the Earth). b, Similar to Supplementary Fig. 3 of ref. 2, but using the TS07D model and for μ = 700 MeV G^−1, corresponding to MeV energies in the heart of the belt. The minimum in PSD in the heart of the multi-MeV electron radiation belt between 3.5 and 4 RE, deepening between the afternoon of 3 September and 5 September, clearly shows that the narrow remnant belt at multi-MeV energies below 3.5 RE is produced by local loss.
The minimum in the outer boundary is reached on the evening of 2 September. After that, the outer boundary moves up, while the minimum decreases by approximately an order of magnitude, clearly showing that this main decrease cannot be explained by outward diffusion and requires additional loss processes. The analysis of profiles of PSD is a standard tool, used for example in the study of electron acceleration (ref. 5), and routinely used by the entire Van Allen Probes team. In the Supplementary Information, we show that this analysis is validated by using different magnetic field models. The Supplementary Information also shows that the measurements are above background noise.
Deepening minima at multi-MeV energies during times when the boundary flux increases are clearly seen in Fig. 1a. They show that there must be localized loss, as radial diffusion cannot produce a minimum that becomes lower with time. At lower energies of 1–2 MeV, which correspond to lower values of the first adiabatic invariant μ (Fig. 1b), the profiles are monotonic between L* = 3–3.5, consistent with the absence of scattering by EMIC waves, which affect only electrons above a certain energy threshold (refs 6,7,8,9).
In summary, the results of the modelling and observations presented by Mann et al. do not support the claim that the dynamics of the ultra-relativistic third Van Allen radiation belt can be explained by an outward radial diffusion process alone. While outward radial diffusion driven by loss to the magnetopause (ref. 2) is certainly operating during this storm, there is compelling observational and modelling evidence (refs 2,6) showing that very efficient localized electron loss operates during this storm at multi-MeV energies, consistent with localized loss produced by EMIC waves.
Industry 4.0 and the Internet of Things are recent developments that have led to the creation of new kinds of manufacturing data. Linking this new kind of sensor data to traditional business information is crucial for enterprises to take advantage of the data's full potential. In this paper, we present a demo that allows experiencing this data integration, both vertically between technical and business contexts and horizontally along the value chain. The tool simulates a manufacturing company, continuously producing both business and sensor data, and supports issuing ad-hoc queries that answer specific questions related to the business. In order to adapt to different environments, users can configure sensor characteristics to their needs.
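The vertical integration idea, linking machine sensor readings to business orders via an ad-hoc query, can be sketched with an in-memory SQL database. The schema, table names, and values below are hypothetical and are not the demo's actual data model:

```python
import sqlite3

# Hypothetical schema: machine-level sensor readings and business-level orders.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sensor_readings (machine_id INT, order_id INT, temperature REAL);
CREATE TABLE production_orders (order_id INT, customer TEXT, due_date TEXT);
INSERT INTO sensor_readings VALUES (1, 100, 71.5), (1, 100, 88.2), (2, 101, 65.0);
INSERT INTO production_orders VALUES (100, 'ACME', '2018-06-01'),
                                     (101, 'Globex', '2018-06-02');
""")

# Ad-hoc query spanning the technical (sensor) and business (order) contexts:
# the peak temperature recorded while producing each customer's order.
rows = con.execute("""
    SELECT o.customer, MAX(r.temperature) AS max_temp
    FROM sensor_readings r JOIN production_orders o USING (order_id)
    GROUP BY o.order_id
    ORDER BY o.order_id
""").fetchall()
# rows -> [('ACME', 88.2), ('Globex', 65.0)]
```

The join key (`order_id`) stands in for whatever linkage a real deployment provides between shop-floor events and business records.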
LoANs
(2019)
Recently, deep neural networks have achieved remarkable performance on the task of object detection and recognition. The reason for this success is mainly grounded in the availability of large scale, fully annotated datasets, but the creation of such a dataset is a complicated and costly task. In this paper, we propose a novel method for weakly supervised object detection that simplifies the process of gathering data for training an object detector. We train an ensemble of two models that work together in a student-teacher fashion. Our student (localizer) is a model that learns to localize an object, the teacher (assessor) assesses the quality of the localization and provides feedback to the student. The student uses this feedback to learn how to localize objects and is thus entirely supervised by the teacher, as we are using no labels for training the localizer. In our experiments, we show that our model is very robust to noise and reaches competitive performance compared to a state-of-the-art fully supervised approach. We also show the simplicity of creating a new dataset, based on a few videos (e.g. downloaded from YouTube) and artificially generated data.
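The student-teacher idea can be sketched in a 1-D toy: a localizer (student) improves its predicted box using only a quality score from an assessor (teacher). Here the assessor is a hand-coded stand-in, whereas in the paper it is itself a learned model; all names, parameters, and the hidden interval are illustrative:

```python
import random

OBJECT = (0.4, 0.7)  # hidden object interval (illustrative stand-in)

def assessor(box):
    """Stand-in quality score: IoU of a predicted 1-D box with the object.
    (In the paper the assessor is a learned model, not a ground-truth oracle.)"""
    inter = max(0.0, min(box[1], OBJECT[1]) - max(box[0], OBJECT[0]))
    union = (box[1] - box[0]) + (OBJECT[1] - OBJECT[0]) - inter
    return inter / union if union > 0 else 0.0

def train_localizer(steps=500, sigma=0.05, seed=1):
    """The student hill-climbs its box using only the assessor's feedback;
    no labels are consumed by the student itself."""
    rng = random.Random(seed)
    box = (0.0, 1.0)
    score = assessor(box)
    for _ in range(steps):
        cand = tuple(sorted((box[0] + rng.gauss(0, sigma),
                             box[1] + rng.gauss(0, sigma))))
        if assessor(cand) > score:  # accept only improvements
            box, score = cand, assessor(cand)
    return box, score

box, score = train_localizer()  # score rises above the initial IoU of 0.3
```

This toy replaces gradient-based training with hill climbing for brevity; the point it illustrates is only that the teacher's score alone suffices as a supervision signal for the student.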
The German Enlightenment
(2017)
The term Enlightenment (or Aufklärung) remains heavily contested. Even when historians delimit the remit of the concept, assigning it to a particular historical period rather than to an intellectual or moral programme, the public resonance of the Enlightenment remains high and problematic—especially when equated in an essentialist manner with modernity or some core values of ‘the West’. This Forum has been convened to discuss recent research on the Enlightenment in Germany, different views of the term and its ideological use in public discourse outside academia (and sometimes within it).