004 Data processing; Computer science
Year of publication
Document Type
- Article (79)
Language
- English (79)
Keywords
- Computer Science Education (4)
- Competence Measurement (3)
- Secondary Education (3)
- Big Data (2)
- Competence Modelling (2)
- Computational thinking (2)
- Informatics Education (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Machine learning (2)
- computational thinking (2)
- (FPGA) (1)
- 21st century skills (1)
- ABRACADABRA (1)
- Achievement (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Advanced Video Codec (AVC) (1)
- Animal building (1)
- Arduino (1)
- Assessment (1)
- Augmented and virtual reality (1)
- Austria (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- Automated theorem proving (Automatisches Beweisen) (1)
- Bean (1)
- Bloom’s Taxonomy (1)
- CS concepts (1)
- Capability approach (1)
- Challenges (1)
- Clause Learning (1)
- Cognitive Skills (1)
- Competences (1)
- Competencies (1)
- Computational Thinking (1)
- Computer Science (1)
- Computer Science in Context (1)
- Computing (1)
- Contest (1)
- Contextualisation (1)
- Contradictions (1)
- Convolution (1)
- Curriculum (1)
- Curriculum Development (1)
- Customer ownership (1)
- DPLL (1)
- Data Analysis (1)
- Data Management (1)
- Data Privacy (1)
- Databases (1)
- Defining characteristics of physical computing (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Revolution (1)
- Digital image analysis (1)
- Digitalization (1)
- Dynamic assessment (1)
- Early Literacy (1)
- Educational Standards (1)
- Educational software (1)
- Embedded Systems (1)
- Euclid’s algorithm (1)
- FPGA (1)
- Facebook (1)
- Feature extraction (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Function (1)
- Fundamental Ideas (1)
- Graph search (Graphensuche) (1)
- H.264 (1)
- Hardware accelerator (1)
- Histograms (1)
- ICT Competence (1)
- ICT competencies (1)
- ICT skills (1)
- Image resolution (1)
- Imperative calculi (1)
- Improving classroom (1)
- Inference (1)
- Informatics (1)
- Inquiry-based Learning (1)
- Insurance industry (1)
- Interface design (1)
- Kernel (1)
- Key Competencies (1)
- Clause learning (Klausellernen) (1)
- Learners (1)
- Learning Fields (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lindenmayer systems (1)
- Logarithm (1)
- Loss (1)
- Low Latency (1)
- Lower Secondary Level (1)
- MOOCs (1)
- Machine Learning (1)
- Massive Open Online Courses (1)
- Measurement (1)
- Media in education (1)
- Multi-sided platforms (1)
- Music Technology (1)
- NUI (1)
- Natural Science Education (1)
- Natural ventilation (1)
- NoSQL (1)
- Norway (1)
- Novice programmers (1)
- Optimization (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Physical Science (1)
- Plant identification (1)
- Preprocessing (1)
- Problem Solving (1)
- Random access memory (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Region of Interest (1)
- Relevance (Relevanz) (1)
- Reversibility (1)
- SAT (1)
- Scale-invariant feature transform (SIFT) (1)
- Sensors (1)
- Sharing (1)
- Signal processing (1)
- Simulations (1)
- Single event upsets (1)
- Small Private Online Courses (1)
- Social (1)
- Systems of parallel communicating (1)
- Tasks (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Technology proficiency (1)
- Terminology (1)
- Tests (1)
- Theorem proving (Theorembeweisen) (1)
- Theory (1)
- Type and effect systems (1)
- UX (1)
- Unification (Unifikation) (1)
- VGG16 (1)
- Value network (1)
- Vocational Education (1)
- Young People (1)
- abstraction (1)
- action and change (1)
- algorithms (1)
- analogical thinking (1)
- answer set programming (1)
- architecture (1)
- automata (1)
- automated planning (1)
- bibliometric analysis (1)
- binary representation (1)
- binary search (1)
- citation analysis (1)
- classroom language (1)
- co-citation analysis (1)
- co-occurrence analysis (1)
- cognitive modifiability (1)
- combined task and motion planning (1)
- competence (1)
- competencies (1)
- competency (1)
- complexity (1)
- comprehension (1)
- computer science education (1)
- computer science teachers (1)
- computer vision (1)
- cs4fn (1)
- curriculum theory (1)
- determinism (1)
- developmental systems (1)
- digitally-enabled pedagogies (1)
- divide and conquer (1)
- e-mentoring (1)
- education (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- edutainment (1)
- environments (1)
- exponentiation (1)
- field-programmable gate array (1)
- formal languages (1)
- fun (1)
- functions (1)
- graph-search (1)
- hardware accelerator (1)
- hardware architecture (1)
- high school (1)
- higher (1)
- image processing (1)
- informal and formal learning (1)
- informatics education (1)
- innovation (1)
- interactive course (1)
- interactive workshop (1)
- key competences in physical computing (1)
- key competencies (1)
- kinaesthetic teaching (1)
- knowledge representation and nonmonotonic reasoning (1)
- learning (1)
- machine learning (1)
- machine learning algorithms (1)
- manipulation planning (1)
- mediated learning experience (1)
- mobile learning (1)
- mobile technologies and apps (1)
- monitoring (1)
- networks (1)
- online learning (1)
- operating system (1)
- organisational evolution (1)
- paper prototyping (1)
- parallel processing (1)
- parallel rewriting (1)
- parameter (1)
- pedagogy (1)
- personal (1)
- personal response systems (1)
- philosophical foundation of informatics pedagogy (1)
- physical computing tools (1)
- pre-primary level (1)
- predictive models (1)
- preprocessing (1)
- primary education (1)
- primary level (1)
- problem-solving (1)
- professional development (1)
- programming (1)
- programming in context (1)
- real-time (1)
- relevance (1)
- reliability (1)
- restricted parallelism (1)
- secondary computer science education (1)
- secondary education (1)
- self-adaptive multiprocessing system (1)
- self-efficacy (1)
- single event upset (1)
- social media (1)
- solar particle event (1)
- space missions (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- students’ conceptions (1)
- students’ knowledge (1)
- teacher competencies (1)
- teaching (1)
- teaching informatics in general education (1)
- technical notes and rapid communications (1)
- theorem (1)
- tools (1)
- tracing (1)
- unification (1)
- user experience (1)
- user-centred (1)
- virtual reality (1)
- ‘unplugged’ computing (1)
Institute
- Institut für Informatik und Computational Science (79)
Multi-sided platforms (MSP) strongly affect markets and play a crucial role in the digital and networked economy. Although empirical evidence indicates their occurrence in many industries, research has not sufficiently investigated the game-changing impact of MSP on traditional markets. More specifically, we have little knowledge of how MSP affect value creation and customer interaction in entire markets, exploiting the potential of digital technologies to offer new value propositions. Our paper addresses this research gap and provides an initial systematic approach to analyzing the impact of MSP on the insurance industry. For this purpose, we analyze the state of the art in research and practice in order to develop a reference model of the value network for the insurance industry. On this basis, we conduct a case-study analysis to discover and analyze roles that are occupied, or even newly created, by MSP. As a final step, we categorize MSP with regard to their relation to traditional insurance companies, resulting in a classification scheme with four MSP standard types: Competition, Coordination, Cooperation, Collaboration.
The growing impact of globalisation and the development of a ‘knowledge society’ have led many to argue that 21st century skills are essential for life in twenty-first-century society and that ICT is central to their development. This paper describes how 21st century skills, in particular digital literacy, critical thinking, creativity, communication and collaboration skills, have been conceptualised and embedded in the resources developed for teachers in iTEC, a four-year European project. The effectiveness of this approach is considered in light of the data collected through the evaluation of the pilots, which considers both the potential benefits of using technology to support the development of 21st century skills and the challenges of doing so. Finally, the paper discusses the learning support systems required to transform pedagogies and embed 21st century skills. It is argued that support is required in standards and assessment; curriculum and instruction; professional development; and learning environments.
Social networks are currently at the forefront of tools that lend themselves to Personal Learning Environments (PLEs). This study aimed to observe how students perceived PLEs, what they believed were the integral components of social presence when using Facebook as part of a PLE, and to describe students’ preferences for types of interactions when using Facebook as part of their PLE. The study used mixed methods to analyze the perceptions of graduate and undergraduate students on the use of social networks, more specifically Facebook, as a learning tool. Fifty surveys were returned, representing a 65 % response rate. Survey questions included both closed and open-ended questions. Findings suggested that even though students rated themselves relatively well in having the requisite technology skills, and 94 % of students used Facebook primarily for social purposes, they were hesitant to transfer these skills to academic use because of privacy concerns, a belief that other platforms could fulfil the same purpose, and not seeing the validity of using Facebook to establish social presence. At odds with these beliefs is that, when asked to identify strategies in Facebook that enabled social presence to occur in academic work, the majority of students identified strategies in five categories that led to the establishment of social presence on Facebook during their coursework.
Spotlocator is a game in which players have to guess the spots where photos were taken. The photos for each game’s defined area come from panoramio.com. They are published at http://spotlocator.drupalgardens.com with an ID. Everyone can guess the photo spots by sending a special tweet via Twitter that contains the hashtag #spotlocator, the guessed coordinates and the ID of the photo. An evaluation is published for all tweets: the players are informed about their distance to the real photo spots, and the positions are shown on a map.
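The evaluation step reports each player’s distance to the real photo spot. A common way to compute such a distance from two latitude/longitude pairs is the haversine formula; the sketch below is our own illustration of that computation, not the game’s actual code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km: mean Earth radius
```

A guess at (52.40, 13.06) for a photo actually taken at (52.52, 13.41) would, for instance, be reported as roughly 27 km off.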
The intensity of cosmic radiation may vary over five orders of magnitude within a few hours or days during Solar Particle Events (SPEs), increasing the probability of Single Event Upsets (SEUs) in space-borne electronic systems by several orders of magnitude. It is therefore vital to enable early detection of SEU rate changes in order to ensure timely activation of dynamic radiation hardening measures. In this paper, an embedded approach for the prediction of SPEs and of the SRAM SEU rate is presented. The proposed solution combines a real-time SRAM-based SEU monitor, an offline-trained machine learning model and an online learning algorithm for the prediction. With respect to the state of the art, our solution brings the following benefits: (1) use of existing on-chip data-storage SRAM as a particle detector, minimizing the hardware and power overhead; (2) prediction of the SRAM SEU rate one hour in advance, with fine-grained hourly tracking of SEU variations during SPEs as well as under normal conditions; (3) online optimization of the prediction model to enhance prediction accuracy at run-time; (4) negligible hardware cost for the accelerator implementing the selected machine learning model and online learning algorithm. The proposed design is intended for a highly dependable and self-adaptive multiprocessing system employed in space applications, allowing the radiation mitigation mechanisms to be triggered before the onset of high radiation levels.
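The online-learning part of such a predictor can be illustrated with a deliberately simple sketch: a linear model over the last few hourly SEU counts, refined by one normalized-gradient (NLMS) step per observed hour. The window length, feature choice and NLMS update are our assumptions for illustration only; the paper’s actual solution combines an offline-trained model with online refinement in a hardware accelerator.

```python
import numpy as np

def online_seu_forecast(counts, window=3, lr=0.5):
    """One-hour-ahead SEU-rate forecasts from a linear model over the
    last `window` hourly counts, updated online after every observation."""
    w = np.zeros(window + 1)                      # weights + bias, untrained
    preds = []
    for t in range(window, len(counts)):
        x = np.append(counts[t - window:t], 1.0)  # recent counts + bias term
        preds.append(float(w @ x))                # predict hour t before seeing it
        err = float(w @ x) - counts[t]            # compare with the observation
        w -= lr * err * x / (x @ x)               # normalized LMS step (stable)
    return preds, w
```

On a stationary series the forecasts converge towards the observed rate; during an SPE onset the per-hour updates pull the model towards the rising rate, which is the behaviour the run-time optimization in the paper is after.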
A major part of the scientific experiments carried out today requires thorough computational support. While database and algorithm providers face the problem of bundling resources to create and sustain powerful computation nodes, users have to deal with combining sets of (remote) services into specific data analysis and transformation processes. Today’s attention to “big data” amplifies the issues of size, heterogeneity, and process-level diversity/integration. In the last decade, workflow-based approaches to dealing with these processes in particular have enjoyed great popularity. This book concerns a particularly agile and model-driven approach to managing scientific workflows that is based on the XMDD paradigm. In this chapter we explain the scope and purpose of the book, briefly describe the concepts and technologies of the XMDD paradigm, explain the principal differences to related approaches, and outline the structure of the book.
We study the concept of reversibility in connection with parallel communicating systems of finite automata (PCFA for short). We define the notion of reversibility for PCFA (also covering the non-deterministic case) and discuss the relationship between the reversibility of a system and the reversibility of its components. We show that a system can be reversible with non-reversible components and, the other way around, that the reversibility of the components does not necessarily imply the reversibility of the system as a whole. We also investigate the computational power of deterministic centralized reversible PCFA. We show that these very simple types of PCFA (returning or non-returning) can recognize regular languages that cannot be accepted by reversible (deterministic) finite automata, and that they can even accept languages that are not context-free. We also separate the deterministic and non-deterministic variants in the case of systems with non-returning communication: there are languages accepted by non-deterministic centralized PCFA that cannot be recognized by any deterministic variant of the same type.
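For a single deterministic finite automaton, reversibility can be phrased operationally: reading any symbol must be undoable, i.e. for every symbol the transition function is injective on states. A minimal check of this component-level property (the system-level notion for PCFA defined in the paper is more involved) might look like:

```python
def is_reversible_dfa(states, alphabet, delta):
    """True iff the reversed transition relation is again deterministic:
    for every symbol, no two states share the same successor state."""
    for a in alphabet:
        targets = [delta[(q, a)] for q in states if (q, a) in delta]
        if len(targets) != len(set(targets)):  # two states collide on `a`
            return False
    return True
```

A two-state automaton that toggles between its states on input `a` is reversible; if both states instead moved to the same state on `a`, that step could not be undone.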
ProtoSense
(2015)
The protein classification workflow described in this report enables users to obtain information about a novel protein sequence automatically. The information is derived by different bioinformatics analysis tools that calculate or predict features of a protein sequence. In addition, databases are used to compare the novel sequence with known proteins.
The study reported in this paper involved the use of specific in-class exercises with a Personal Response System (PRS). These exercises were designed with two goals: to enhance students’ capabilities of tracing a given code and of explaining a given code in natural language with some abstraction. The paper presents evidence from the actual use of the PRS along with students’ subjective impressions regarding both the use of the PRS and the special exercises. The conclusions from the findings are followed by a short discussion of the benefits of PRS-based mental processing exercises for learning programming and beyond.
plasp 3
(2019)
We describe the new version of the Planning Domain Definition Language (PDDL)-to-Answer Set Programming (ASP) translator plasp. First, it widens the range of accepted PDDL features. Second, it contains novel planning encodings, some inspired by Satisfiability Testing (SAT) planning and others exploiting ASP features such as well-foundedness. All of them are designed for handling multivalued fluents in order to capture both the PDDL and SAS planning formats. Third, enabled by multishot ASP solving, it offers advanced planning algorithms also borrowed from SAT planning. As a result, plasp provides us with an ASP-based framework for studying a variety of planning techniques in a uniform setting. Finally, we demonstrate in an empirical analysis that these techniques have a significant impact on the performance of ASP planning.
We introduce a new measure of descriptional complexity on finite automata, called the number of active states. Roughly speaking, the number of active states of an automaton A on input w counts the number of different states visited during the most economic computation of the automaton A for the word w. This concept generalizes to finite automata and regular languages in a straightforward way. We show that the number of active states of both finite automata and regular languages is computable, even with respect to nondeterministic finite automata. We further compare the number of active states to related measures for regular languages. In particular, we show incomparability to the radius of regular languages and that the difference between the number of active states and the total number of states needed in finite automata for a regular language can be of exponential order.
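For the deterministic case the measure is easy to state operationally: run the automaton on the word and count the distinct states visited (for nondeterministic automata the paper takes the most economic computation instead). A sketch of that deterministic special case:

```python
def active_states(delta, start, word):
    """Number of distinct states a deterministic automaton visits
    (including the start state) while reading `word`."""
    q, visited = start, {start}
    for a in word:
        q = delta[(q, a)]   # deterministic step on symbol `a`
        visited.add(q)
    return len(visited)
```

A two-state parity automaton, for instance, never activates more than two states regardless of the input length, which is far below its reachable-state count on longer automata and illustrates why the measure can differ sharply from the total number of states.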
In control theory, to solve a finite-horizon sequential decision problem (SDP) commonly means to find a list of decision rules that result in an optimal expected total reward (or cost) when taking a given number of decision steps. SDPs are routinely solved using Bellman's backward induction. Textbook authors (e.g. Bertsekas or Puterman) typically give more or less formal proofs to show that the backward induction algorithm is correct as a solution method for deterministic and stochastic SDPs. Botta, Jansson and Ionescu propose a generic framework for finite-horizon, monadic SDPs together with a monadic version of backward induction for solving such SDPs. In monadic SDPs, the monad captures a generic notion of uncertainty, while a generic measure function aggregates rewards. In the present paper, we define a notion of correctness for monadic SDPs and identify three conditions that allow us to prove a correctness result for monadic backward induction that is comparable to textbook correctness proofs for ordinary backward induction. The conditions that we impose are fairly general and can be cast in category-theoretical terms using the notion of Eilenberg-Moore algebra. They hold in familiar settings like those of deterministic or stochastic SDPs, but we also give examples in which they fail. Our results show that backward induction can safely be employed for a broader class of SDPs than usually treated in textbooks. However, they also rule out certain instances that were considered admissible in the context of Botta et al.'s generic framework. Our development is formalised in Idris as an extension of the Botta et al. framework, and the sources are available as supplementary material.
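For the special case of a deterministic, finite-horizon SDP (the identity monad, with rewards aggregated by summation), ordinary Bellman backward induction takes the following shape. This sketch is ours and does not reproduce the generic monadic framework of Botta et al.; all names are illustrative:

```python
def backward_induction(horizon, states, actions, step, reward):
    """Bellman backward induction for a deterministic finite-horizon SDP:
    returns the time-0 value function and one decision rule per step."""
    value = {s: 0.0 for s in states}          # terminal value: zero everywhere
    policy = []
    for t in reversed(range(horizon)):        # walk backwards in time
        rule, new_value = {}, {}
        for s in states:
            best = max(actions(s),
                       key=lambda a: reward(t, s, a) + value[step(s, a)])
            rule[s] = best
            new_value[s] = reward(t, s, best) + value[step(s, best)]
        value, policy = new_value, [rule] + policy
    return value, policy
```

The paper's contribution concerns precisely when this familiar scheme remains correct once the identity monad is replaced by a generic uncertainty monad and the sum by a generic measure function.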
A project involving the composition of a number of pieces of music by public participants revealed levels of engagement with and mastery of complex music technologies by a number of secondary student volunteers. This paper reports briefly on some initial findings of that project and seeks to illuminate an understanding of computational thinking across the curriculum.
As a result of the Bologna reform of educational systems in Europe, outcome orientation of learning processes, competence-oriented descriptions of curricula and competence-oriented assessment procedures have also become standard in Computer Science Education (CSE). The following keynote addresses important issues in shaping a CSE competence model, especially in the area of informatics system comprehension and object-oriented modelling. The objectives and research methodology of the project MoKoM (Modelling and Measurement of Competences in CSE) are explained. First, the CSE competence model was derived from theoretical concepts; second, the model was empirically examined and refined using expert interviews. Furthermore, the paper depicts the development and examination of a competence measurement instrument, which was derived from the competence model. The instrument was then applied to a large sample of students at the upper secondary (Gymnasium) level. Subsequently, efforts to develop a competence level model, based on the retrieved empirical results and on expert ratings, are presented. Finally, further demands on research on competence modelling in CSE are outlined.
We summarize here the main characteristics and features of the jABC framework, used in the case studies as a graphical tool for modeling scientific processes and workflows. As a comprehensive environment for service-oriented modeling and design according to the XMDD (eXtreme Model-Driven Design) paradigm, the jABC offers much more than pure modeling capability. Associated technologies and plugins provide means for a rich variety of supporting functionality, such as remote service integration, taxonomical service classification, model execution, model verification, model synthesis, and model compilation. We briefly describe both the essential jABC features and the service integration philosophy followed in the environment. In our work over the last years we have seen that this kind of service definition and provisioning platform has the potential to become a core technology in interdisciplinary service orchestration and technology transfer: domain experts, such as scientists not specially trained in computer science, directly define complex service orchestrations as process models and use efficient and complex domain-specific tools in a simple and intuitive way.