004 Datenverarbeitung; Informatik
Filter
Year of publication
Document type
- Scientific article (326)
Keywords
- Informatik (16)
- Didaktik (13)
- Ausbildung (12)
- Hochschuldidaktik (12)
- Big Data (4)
- Computer Science Education (4)
- Competence Measurement (3)
- Data profiling (3)
- Informatics Education (3)
- Secondary Education (3)
- computational thinking (3)
- machine learning (3)
- social media (3)
- Blended Learning (2)
- Blockchains (2)
- Competence Modelling (2)
- Computational thinking (2)
- Deep learning (2)
- Diversity (2)
- E-Learning (2)
- General Earth and Planetary Sciences (2)
- Geography, Planning and Development (2)
- Informatics (2)
- Informatics Modelling (2)
- Informatics System Application (2)
- Informatics System Comprehension (2)
- Informatikstudium (2)
- JSP (2)
- Kompetenzen (2)
- Machine learning (2)
- Runtime analysis (2)
- Social (2)
- Teamarbeit (2)
- Theory (2)
- Twitter (2)
- Water Science and Technology (2)
- answer set programming (2)
- bibliometric analysis (2)
- citation analysis (2)
- collaboration (2)
- competence (2)
- comprehension (2)
- computer science (2)
- data integration (2)
- duplicate detection (2)
- education (2)
- identity theory (2)
- informatics education (2)
- learning (2)
- networks (2)
- perception of robots (2)
- programming (2)
- software engineering (2)
- virtual reality (2)
- (FPGA) (1)
- 21st century skills (1)
- 3D point clouds (1)
- ABRACADABRA (1)
- APX-hardness (1)
- ARCS Modell (1)
- Achievement (1)
- Activity Theory (1)
- Activity-orientated Learning (1)
- Activity-oriented Optimization (1)
- Actor (1)
- Actor model (1)
- Adaption (1)
- Adaptive hypermedia (1)
- Advanced Video Codec (AVC) (1)
- Algebraic methods (1)
- Analyse (1)
- Andere Fachrichtungen (1)
- Anerkennung (1)
- Animal building (1)
- Anrechnung (1)
- Application (1)
- Arduino (1)
- Argument Mining (1)
- Artificial neural networks (1)
- Assessment (1)
- Attribute aggregation (1)
- Audience Response Systeme (1)
- Augmented and virtual reality (1)
- Augmented reality (1)
- Austria (1)
- Authentication (1)
- Autismus (1)
- Automated Theorem Proving (1)
- Automatically controlled windows (1)
- Automatisches Beweisen (1)
- BPMN (1)
- Bachelor (1)
- Bachelorstudium (1)
- Barrierefreiheit (1)
- Berufsausbildung (1)
- Beweisaufgaben (1)
- Bibliometrics (1)
- Bidirectional order dependencies (1)
- Big Five model (1)
- Bitcoin (1)
- Blended learning (1)
- Bloom’s Taxonomy (1)
- Brownian motion with discontinuous drift (1)
- Business Process Management (1)
- Business process modeling (1)
- Bystander (1)
- C-Test (1)
- CCS Concepts (1)
- COVID-19 (1)
- CS Ed Research (1)
- CS at school (1)
- CS concepts (1)
- CS curriculum (1)
- Calibration (1)
- Canvas (1)
- Capability approach (1)
- Case management (1)
- Challenges (1)
- Clause Learning (1)
- Clinical predictive modeling (1)
- Codeverständnis (1)
- Cognitive Skills (1)
- Cographs (1)
- Coherent partition (1)
- Commonsense reasoning (1)
- Comparing programming environments (1)
- Competences (1)
- Competencies (1)
- Compliance checking (1)
- Computational Thinking (1)
- Computational photography (1)
- Computer Science (1)
- Computer Science in Context (1)
- Computer crime (1)
- Computergestützes Training (1)
- Computing (1)
- Conceptual modeling (1)
- Condition number (1)
- Consistency (1)
- Contest (1)
- Contextualisation (1)
- Contradictions (1)
- Convolution (1)
- Course development (1)
- Course marketing (1)
- Course of Study (1)
- Courses for female students (1)
- Covid (1)
- Critical pairs (1)
- Crowd-sourcing (1)
- Cryptography (1)
- Currencies (1)
- Curricula Development (1)
- Curriculum (1)
- Curriculum Development (1)
- Curriculum analysis (1)
- Customer ownership (1)
- DPLL (1)
- Data Analysis (1)
- Data Literacy (1)
- Data Management (1)
- Data Privacy (1)
- Data Science (1)
- Data dependencies (1)
- Data mining (1)
- Data modeling (1)
- Data warehouse (1)
- Data-centric (1)
- Databases (1)
- Datenanalyse (1)
- Datenbanken (1)
- Datenschutz (1)
- Decision support (1)
- Defining characteristics of physical computing (1)
- Degenerationsprozesse (1)
- Delphi study (1)
- Delta preservation (1)
- Dependency discovery (1)
- Digital Competence (1)
- Digital Education (1)
- Digital Game Based Learning (1)
- Digital Revolution (1)
- Digitalisierung von Produktionsprozessen (1)
- Digitalization (1)
- Diskussionskultur (1)
- Dispositional learning analytics (1)
- Distanzlehre (1)
- Distributed computing (1)
- Distributed programming (1)
- Durchlässigkeit (1)
- Dynamic assessment (1)
- Early Literacy (1)
- Economics (1)
- Ecosystems (1)
- Educational Standards (1)
- Educational software (1)
- Electronic and spintronic devices (1)
- Embedded Systems (1)
- Empirische Untersuchung (1)
- Entity resolution (1)
- Enumeration algorithm (1)
- Erfolgsmessung (1)
- Estimation-of-distribution algorithm (1)
- Ethics (1)
- Euclid’s algorithm (1)
- Evolutionary algorithms (1)
- Explorative Datenanalyse (1)
- FPGA (1)
- Facebook (1)
- Fachinformatik (1)
- Fachinformatiker (1)
- Feature extraction (1)
- Feature selection (1)
- Federated learning (1)
- Feedback (1)
- Fertigkeiten (1)
- Fibonacci numbers (1)
- Field programmable gate arrays (1)
- Finite automata (1)
- Fitness-distance correlation (1)
- Formal modelling (1)
- Formale Sprachen und Automaten (1)
- Formative assessment (1)
- Forschung (1)
- Function (1)
- Functional dependencies (1)
- Fundamental Ideas (1)
- Gender (1)
- Gene expression (1)
- General subject “Information” (1)
- Geschäftsmodell (1)
- Graph databases (1)
- Graph homomorphisms (1)
- Graph logic (1)
- Graph partitions (1)
- Graph repair (1)
- Graph transformation (1)
- Graphensuche (1)
- H.264 (1)
- HEI (1)
- Hardware accelerator (1)
- Helmholtz problem (1)
- HiGHmed (1)
- Histograms (1)
- Hochschule (1)
- Hochschulkurse (1)
- Hochschullehre (1)
- Human (1)
- Human-robot interaction (1)
- ICT (1)
- ICT Competence (1)
- ICT competencies (1)
- ICT curriculum (1)
- ICT skills (1)
- IHL (1)
- IHRL (1)
- IOPS (1)
- ISSEP (1)
- Identity management systems (1)
- Identität (1)
- Ill-conditioning (1)
- Image (1)
- Image resolution (1)
- Image-based rendering (1)
- Imperative calculi (1)
- Improving classroom (1)
- Inclusion dependencies (1)
- Indefinite (1)
- Industries (1)
- Inference (1)
- Informatik B. Sc. (1)
- Informatik für alle (1)
- Information Ethics (1)
- Informationskompetenz (1)
- Inhalte (1)
- Inhaltsanalyse (1)
- Initial conflicts (1)
- Inquiry-based Learning (1)
- Instagram (1)
- Insurance industry (1)
- Interface design (1)
- Interpretability (1)
- Intersectionality (1)
- Interventionen (1)
- Inverted Classroom (1)
- KI (1)
- Kernel (1)
- Key Competencies (1)
- Klausellernen (1)
- Kollaboration (1)
- Kompetenz (1)
- Kompetenzentwicklung (1)
- Kompetenzerwerb (1)
- Kompetenzmessung (1)
- LIDAR (1)
- LMS (1)
- LSTM (1)
- Learners (1)
- Learning Analytics (1)
- Learning Fields (1)
- Learning analytics (1)
- Learning dispositions (1)
- Learning ecology (1)
- Learning interfaces development (1)
- Learning with ICT (1)
- Lehr- und Lernformate (1)
- Lehramtsstudium (1)
- Lehre (1)
- Lehrer*innenbildung (1)
- Lehrevaluation (1)
- Lern-App (1)
- Lernerfolg (1)
- Lernmotivation (1)
- Lernzentrum (1)
- Licenses (1)
- Lindenmayer systems (1)
- Logarithm (1)
- Low Latency (1)
- Lower Secondary Level (1)
- Lückentext (1)
- MOOCs (1)
- Machine Learning (1)
- Marketing (1)
- Massive Open Online Courses (1)
- Matroids (1)
- Measurement (1)
- Media in education (1)
- Mensch-Computer-Interaktion (1)
- Metaverse (1)
- Minimal hitting set (1)
- Model repair (1)
- Model verification (1)
- Model-driven (1)
- Multi-objective optimization (1)
- Multi-sided platforms (1)
- Multimodal behavior (1)
- Music Technology (1)
- Mutation operators (1)
- N-of-1 trial (1)
- NUI (1)
- Natural Science Education (1)
- Natural ventilation (1)
- Navigation (1)
- Nephrology (1)
- Nested graph conditions (1)
- Network clustering (1)
- NoSQL (1)
- Norway (1)
- Novice programmers (1)
- O (1)
- Onlinekurse (1)
- OpenOLAT (1)
- Opinion mining (1)
- Optimization (1)
- OptoGait (1)
- Order dependencies (1)
- Ordinances (1)
- Parallelization (1)
- Pedagogical content knowledge (1)
- Pedagogical issues (1)
- Peer-Review (1)
- Personas (1)
- Physical Science (1)
- Point-based rendering (1)
- Preprocessing (1)
- Primary informatics (1)
- Prime graphs (1)
- Prior knowledge (1)
- Privacy (1)
- Problem Solving (1)
- Problem solving (1)
- Problem solving strategies (1)
- Process Execution (1)
- Programmierausbildung (1)
- Programming environments for children (1)
- Programming learning (1)
- Projekte (1)
- Protocols (1)
- Python (1)
- Query execution (1)
- Query optimization (1)
- Rainfall-runoff (1)
- Random access memory (1)
- Re-Engineering (1)
- Recommendations for CS-Curricula in Higher Education (1)
- Region of Interest (1)
- Relational data (1)
- Relevanz (1)
- Reproducible benchmarking (1)
- Resource Allocation (1)
- Resource Management (1)
- Reversibility (1)
- Robot personality (1)
- Run time analysis (1)
- SAT (1)
- SCED (1)
- SQL (1)
- STEM (1)
- Satisfiability (1)
- Scale-invariant feature transform (SIFT) (1)
- Scientific understanding of Information (1)
- Second Life (1)
- Security (1)
- Semantic Web (1)
- Semiconductors (1)
- Seminarkonzept (1)
- Sensors (1)
- Sequential anomaly (1)
- Sharing (1)
- Signal processing (1)
- Simulations (1)
- Single event upsets (1)
- Small Private Online Courses (1)
- Smart cities (1)
- Social impact (1)
- Sociotechnical Design (1)
- Software Engineering (1)
- Softwareentwicklung (1)
- Specification (1)
- Stance Detection (1)
- Strategie (1)
- Strukturverbesserung (1)
- Student Engagement (1)
- Studentenjobs (1)
- Studienabbrecher (1)
- Studienabbruch (1)
- Studienanfänger*innen (1)
- Studiendauer (1)
- Studieneingangsphase (1)
- Studiengestaltung (1)
- Studiengänge (1)
- Studienverläufe (1)
- Studierendenperformance (1)
- Studium (1)
- Submodular function (1)
- Submodular functions (1)
- Subset selection (1)
- Systematics (1)
- Systems of parallel communicating (1)
- Tasks (1)
- Taxonomy (1)
- Teacher perceptions (1)
- Teachers (1)
- Teaching information security (1)
- Teaching problem solving strategies (1)
- Technology proficiency (1)
- Terminology (1)
- Tests (1)
- Text mining (1)
- Theorembeweisen (1)
- Theoretische Informatik (1)
- Time series (1)
- Trajectories (1)
- Transversal hypergraph (1)
- Type and effect systems (1)
- UX (1)
- Umfrage (1)
- Uncanny valley (1)
- Unifikation (1)
- Unique column combination (1)
- Unique column combinations (1)
- User Experience (1)
- VR (1)
- Validation (1)
- Value network (1)
- Visualization (1)
- Vocabulary (1)
- Vocational Education (1)
- Vorkenntnisse (1)
- Vorwissen (1)
- W[3]-Completeness (1)
- Wartung von Lehrveranstaltungen (1)
- Weiterbildung (1)
- Werbung (1)
- WhatsApp (1)
- Wissenschaftliches Arbeiten (1)
- Women and IT (1)
- X-ray imaging (1)
- YouTube (1)
- Young People (1)
- Zebris (1)
- abstraction (1)
- action and change (1)
- adaptive (1)
- algorithms (1)
- analogical thinking (1)
- anxiety (1)
- app (1)
- approximation (1)
- architecture (1)
- architecture recovery (1)
- argumentation research (1)
- attacks (1)
- attribute assurance (1)
- authorship attribution (1)
- automata (1)
- automated planning (1)
- betriebliche Weiterbildungspraxis (1)
- big data (1)
- binary representation (1)
- binary search (1)
- biomarker detection (1)
- blockchain (1)
- brand personality (1)
- business process management (1)
- business processes (1)
- cancer therapy (1)
- Computing (1)
- classroom language (1)
- cloud security (1)
- co-citation analysis (1)
- co-occurrence analysis (1)
- coding and information theory (1)
- cognition (1)
- cognitive load (1)
- cognitive modifiability (1)
- combined task and motion planning (1)
- competence development (1)
- competencies (1)
- competency (1)
- complexity (1)
- complexity dichotomy (1)
- computed tomography (1)
- computer science education (1)
- computer science teachers (1)
- computer vision (1)
- computing science education (1)
- concept of algorithm (1)
- concurrent graph rewriting (1)
- conditions (1)
- conflicts and dependencies in (1)
- constructionism (1)
- contract (1)
- conversational agents (1)
- corporate nomadism (1)
- corporate takeovers (1)
- cryptocurrency exchanges (1)
- cryptology (1)
- cs4fn (1)
- curriculum theory (1)
- cyber (1)
- cyber humanistic (1)
- cyber threat intelligence (1)
- cyber-attack (1)
- cyberbullying (1)
- cyberwar (1)
- data analytics (1)
- data assimilation (1)
- data migration (1)
- data pipeline (1)
- data preparation (1)
- data quality (1)
- data requirements (1)
- data structures and information theory (1)
- data transformation (1)
- data wrangling (1)
- database systems (1)
- deferred choice (1)
- depression (1)
- determinism (1)
- deterministic properties (1)
- developmental systems (1)
- didactics (1)
- didaktisches Konzept (1)
- digital health (1)
- digital identity (1)
- digital interventions (1)
- digital nomadism (1)
- digital transformation (1)
- digital workplace transformation (1)
- digitale Bildung (1)
- digitally-enabled pedagogies (1)
- digitization of production processes (1)
- distributed systems (1)
- divide and conquer (1)
- drift theory (1)
- duale IT-Ausbildung (1)
- e-Assessment (1)
- e-Learning (1)
- e-mentoring (1)
- education and public policy (1)
- educational programming (1)
- educational systems (1)
- edutainment (1)
- elections (1)
- emotional design (1)
- empathy (1)
- engaged computing (1)
- engagement (1)
- engine (1)
- engineering (1)
- environments (1)
- ethics (1)
- evaluation (1)
- evolutionary computation (1)
- exact simulation methods (1)
- experiment (1)
- explainability (1)
- explainability-accuracy trade-off (1)
- explainable AI (1)
- exploratory programming (1)
- exponentiation (1)
- expression (1)
- external knowledge bases (1)
- failure model (1)
- field-programmable gate array (1)
- forensics (1)
- formal languages (1)
- formal semantics (1)
- forschendes Lernen (1)
- forschungsorientiertes Lernen (1)
- fun (1)
- fächerverbindend (1)
- gait analysis algorithm (1)
- gender (1)
- gene (1)
- gene selection (1)
- general (1)
- general secondary education (1)
- gewerkschaftlich unterstützte Weiterbildungspraxis (1)
- graph languages (1)
- graph pattern matching (1)
- graph transformation (1)
- graph-search (1)
- hardware accelerator (1)
- hardware architecture (1)
- health care (1)
- healthcare (1)
- heterogeneity (1)
- high school (1)
- higher (1)
- higher education (1)
- home office (1)
- human–computer interaction (1)
- identity broker (1)
- image captioning (1)
- image processing (1)
- individual effects (1)
- individuelle Lernwege (1)
- inertial measurement unit (1)
- informal and formal learning (1)
- informatics (1)
- informatics curricula (1)
- informatics in upper secondary education (1)
- information diffusion (1)
- informatische Grundkompetenzen (1)
- innovation (1)
- instruction (1)
- interactive course (1)
- interactive technologies (1)
- interactive workshop (1)
- international comparison (1)
- international human rights (1)
- international humanitarian law (1)
- international study (1)
- interpretable machine learning (1)
- iteration method (1)
- job shop scheduling (1)
- job-shop scheduling (1)
- key competences in physical computing (1)
- key competencies (1)
- kinaesthetic teaching (1)
- klinisch-praktischer Unterricht (1)
- knowledge building (1)
- knowledge management (1)
- knowledge representation and nonmonotonic reasoning (1)
- knowledge work (1)
- labour union education (1)
- law (1)
- law and technology (1)
- learning factory (1)
- lesson (1)
- literature review (1)
- localization (1)
- logic programming (1)
- long-term interaction (1)
- longitudinal (1)
- machine (1)
- machine learning algorithms (1)
- mandatory computer science foundations (1)
- manipulation planning (1)
- maschinelles Lernen (1)
- matrices (1)
- media (1)
- mediated learning experience (1)
- medical malpractice (1)
- memory (1)
- method comparison (1)
- methodologie (1)
- methods (1)
- metric learning (1)
- migration (1)
- misconceptions (1)
- mobile application (1)
- mobile learning (1)
- mobile technologies and apps (1)
- mobiles Lernen (1)
- modelling (1)
- modular counting (1)
- modularity (1)
- molecular tumor board (1)
- monitoring (1)
- mood (1)
- multimedia learning (1)
- multimodal representations (1)
- multi-task learning (1)
- mutual gaze (1)
- neural (1)
- neural networks (1)
- new technologies (1)
- news media (1)
- notation (1)
- online learning (1)
- open learning (1)
- operating system (1)
- optimal transport (1)
- oracles (1)
- organisational evolution (1)
- paper prototyping (1)
- parallel processing (1)
- parallel rewriting (1)
- parameter (1)
- pedagogy (1)
- performance (1)
- personal (1)
- personal response systems (1)
- personality prediction (1)
- personalization principle (1)
- personalized medicine (1)
- philosophical foundation of informatics pedagogy (1)
- phone (1)
- physical computing tools (1)
- planning (1)
- power-law (1)
- pre-primary level (1)
- predictive models (1)
- preprocessing (1)
- primary education (1)
- primary level (1)
- primary school (1)
- prior knowledge (1)
- problem-solving (1)
- process scheduling (1)
- processes (1)
- processing (1)
- production planning and control (1)
- professional development (1)
- program (1)
- programming in context (1)
- programming skills (1)
- public dataset (1)
- quantified logics (1)
- random I (1)
- random graphs (1)
- randomized control trial (1)
- real-time (1)
- record linkage (1)
- recursive tuning (1)
- relevance (1)
- reliability (1)
- remodularization (1)
- remote-first (1)
- resilient architectures (1)
- restoration (1)
- restricted parallelism (1)
- review (1)
- robustness (1)
- science (1)
- search plan generation (1)
- secondary computer science education (1)
- secondary education (1)
- security (1)
- security chaos engineering (1)
- security risk assessment (1)
- self-adaptive multiprocessing system (1)
- self-driving (1)
- self-efficacy (1)
- self-healing (1)
- self-sovereign identity (1)
- sentiment (1)
- signal processing (1)
- similarity learning (1)
- similarity measures (1)
- simulation (1)
- single event upset (1)
- single-case experimental design (1)
- situated learning (1)
- skew Brownian motion (1)
- skew diffusions (1)
- small files (1)
- smart contracts (1)
- smoother (1)
- social media analysis (1)
- social network analysis (1)
- social networking (1)
- social networking sites (1)
- societal effects (1)
- software selection (1)
- solar particle event (1)
- sorting (1)
- space missions (1)
- spread correction (1)
- spreadsheets (1)
- standardization (1)
- stochastic process (1)
- student activation (1)
- student experience (1)
- student perceptions (1)
- studentische Forschung (1)
- students’ conceptions (1)
- students’ knowledge (1)
- survey mode (1)
- systematic literature review (1)
- taxonomy (1)
- teacher (1)
- teacher competencies (1)
- teacher education (1)
- teacher training (1)
- teaching (1)
- teaching informatics in general education (1)
- teaching material (1)
- teamwork (1)
- technical notes and rapid communications (1)
- technische Rahmenbedingungen (1)
- terminology (1)
- test items (1)
- text based classification methods (1)
- theorem (1)
- tools (1)
- topics (1)
- tort law (1)
- tracing (1)
- training (1)
- transfer learning (1)
- trust (1)
- trust model (1)
- uncanny valley (1)
- unification (1)
- usability (1)
- user experience (1)
- user-centred (1)
- virtual collaboration (1)
- virtual groups (1)
- virtual teams (1)
- vocational training (1)
- vulnerabilities (1)
- web application (1)
- weight (1)
- well-being (1)
- wissenschaftliches Arbeiten (1)
- wissenschaftliches Schreiben (1)
- workflow patterns (1)
- workload prediction (1)
- ‘unplugged’ computing (1)
Institute
- Institut für Informatik und Computational Science (148)
- Hasso-Plattner-Institut für Digital Engineering GmbH (42)
- Hasso-Plattner-Institut für Digital Engineering gGmbH (40)
- Extern (34)
- Institut für Mathematik (16)
- Fachgruppe Betriebswirtschaftslehre (12)
- Bürgerliches Recht (10)
- Wirtschaftswissenschaften (8)
- Institut für Physik und Astronomie (4)
- Department Erziehungswissenschaft (3)
Working from home and mobile work have, as is well known, become established in many companies as a result of the Covid-19 pandemic. However, instructing or "tolerating" work from home was usually based more on factual practice than on a legal foundation. The latter, though, could arise from established company practice ("betriebliche Übung"). This article examines the legal framework for that possibility.
.NET Gadgeteer Workshop
(2013)
A comparison of current trends within computer science teaching in school in Germany and the UK
(2013)
In the last two years, CS as a school subject has gained a lot of attention worldwide, although different countries have differing approaches to and experiences of introducing CS in schools. This paper reports on a study comparing current trends in CS at school, with a major focus on two countries, Germany and the UK. A survey of teaching professionals and experts from the UK and Germany was carried out with regard to the content and delivery of CS in school. An analysis of the quantitative data reveals a difference in foci between the two countries; putting this into the context of curricular developments, we are able to offer interpretations of these trends and suggest ways in which school CS curricula should move forward.
There is an increasing interest in fusing data from heterogeneous sources. Combining data sources increases the utility of existing datasets, generating new information and creating services of higher quality. A central issue in working with heterogeneous sources is data migration: in order to share and process data in different engines, resource-intensive and complex movements and transformations between computing engines, services, and stores are necessary.
Muses is a distributed, high-performance data migration engine that is able to interconnect distributed data stores by forwarding, transforming, repartitioning, or broadcasting data among distributed engines' instances in a resource-, cost-, and performance-adaptive manner. As such, it performs seamless information sharing across all participating resources in a standard, modular manner. We show an overall improvement of 30 % for pipelining jobs across multiple engines, even when we count the overhead of Muses in the execution time. This performance gain implies that Muses can be used to optimise large pipelines that leverage multiple engines.
As resources are valuable assets, organizations have to decide which resources to allocate to business process tasks in a way that the process is executed not only effectively but also efficiently. Traditional role-based resource allocation leads to effective process executions, since each task is performed by a resource that has the required skills and competencies. However, the resulting allocations are typically not as efficient as they could be, since optimization techniques have yet to find their way into traditional business process management scenarios. On the other hand, operations research provides a rich set of analytical methods for supporting problem-specific decisions on resource allocation. This paper provides a novel framework for creating transparency on existing tasks and resources, supporting individualized allocations for each activity in a process, and offering the possibility to integrate problem-specific analytical methods from the operations research domain. To validate the framework, the paper reports on the design and prototypical implementation of a software architecture that extends a traditional process engine with a dedicated resource management component. This component allows specific resource allocation problems to be defined at design time, and it also facilitates optimized resource allocation at run time. The framework is evaluated using a real-world parcel delivery process. The evaluation shows that the quality of the allocation results increases significantly with a technique from operations research, in contrast to the traditionally applied rule-based approach.
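The contrast this abstract draws between rule-based and optimized allocation can be illustrated with a minimal sketch. The cost matrix and function names below are hypothetical; the paper's framework integrates real operations-research methods into a process engine rather than the brute-force search shown here.

```python
from itertools import permutations

# Hypothetical cost matrix: cost[r][t] = cost of resource r performing task t.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def rule_based(cost):
    """Naive rule-based allocation: resource i simply takes task i
    (a role match only, with no cost optimization)."""
    n = len(cost)
    return list(range(n)), sum(cost[i][i] for i in range(n))

def optimized(cost):
    """Exhaustive optimal one-to-one allocation (fine for toy instances;
    a real deployment would use an OR method such as the Hungarian algorithm)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[r][p[r]] for r in range(n)))
    return list(best), sum(cost[r][best[r]] for r in range(n))
```

On this toy instance the rule-based assignment costs 6, while the optimal one costs 5, mirroring the efficiency gap the evaluation reports.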
Image feature detection is a key task in computer vision. Scale-Invariant Feature Transform (SIFT) is a prevalent and well-known algorithm for robust feature detection. However, it is computationally demanding, and software implementations do not achieve real-time performance. In this paper, a versatile and pipelined hardware implementation is proposed that is capable of computing keypoints and rotation-invariant descriptors on-chip. All computations are performed in single-precision floating-point format, which makes it possible to implement the original algorithm with little alteration. Various rotation resolutions and filter kernel sizes are supported for images of any resolution up to ultra-high definition. Full-high-definition images can be processed at 84 fps, and ultra-high-definition images at 21 fps.
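The scale-space stage of SIFT rests on repeated Gaussian filtering. As a toy illustration of the kind of convolution such a hardware pipeline accelerates, here is a 1-D separable Gaussian filter in plain Python; this is a didactic sketch, not the paper's FPGA design.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of width 2*radius + 1."""
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve1d(signal, kernel):
    """Convolve a 1-D signal with a kernel, clamping at the borders."""
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out
```

A 2-D Gaussian blur is separable, so an image pipeline applies this 1-D pass once per row and once per column, which is one reason the filtering stage maps well to pipelined hardware.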
This paper describes the proof calculus LD for clausal propositional logic, which is a linearized form of the well-known DPLL calculus extended by clause learning. It is motivated by the demand to model how current SAT solvers built on clause learning work, while abstracting from decision heuristics and implementation details. The calculus is proved sound and terminating. Further, it is shown that both the original DPLL calculus and the conflict-directed backtracking calculus with clause learning, as implemented in many current SAT solvers, are complete and proof-confluent instances of the LD calculus.
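The plain DPLL core that the calculus linearizes can be sketched in a few lines. This minimal version has only unit propagation and splitting; it omits the clause learning and conflict-directed backtracking the paper models on top. Literals are DIMACS-style nonzero integers, and the code is a hypothetical toy, not a production solver.

```python
def simplify(clauses, lit):
    # Drop clauses satisfied by lit; delete the falsified literal elsewhere.
    return [[l for l in c if l != -lit] for c in clauses if lit not in c]

def dpll(clauses, assignment=None):
    """Minimal DPLL: unit propagation, then split on a literal.
    Returns a satisfying assignment {var: bool} or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    while True:
        if any(not c for c in clauses):
            return None  # empty clause reached: conflict
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        clauses = simplify(clauses, lit)
    if not clauses:
        return assignment  # all clauses satisfied
    lit = clauses[0][0]  # decision: try the literal, then its negation
    for choice in (lit, -lit):
        result = dpll(simplify(clauses, choice),
                      dict(assignment, **{abs(choice): choice > 0}))
        if result is not None:
            return result
    return None
```

Clause-learning solvers replace the naive second branch with a learned conflict clause and a non-chronological backjump, which is exactly the step the LD calculus makes explicit.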
We introduce a logic-based incremental approach to graph repair, generating a sound and complete (upon termination) overview of least-changing graph repairs from which a user may select a repair based on non-formalized further requirements. This incremental approach features delta preservation, as it allows restricting the generation of graph repairs to delta-preserving ones, which do not revert the additions and deletions of the most recent consistency-violating graph update. We specify consistency of graphs using the logic of nested graph conditions, which is equivalent to first-order logic on graphs. Technically, the incremental approach encodes whether and how the graph under repair satisfies a graph condition using the novel data structure of satisfaction trees, which are adapted incrementally according to the graph updates applied. In addition to the incremental approach, we also present two state-based graph repair algorithms, which restore consistency of a graph independently of the most recent graph update and which generate additional graph repairs using a global perspective on the graph under repair. We evaluate the developed algorithms using our prototypical implementation in the tool AutoGraph and illustrate our incremental approach using a case study from the graph database domain.
Author summary: The use of orally inhaled drugs for treating lung diseases is appealing since they have the potential for lung selectivity, i.e. high exposure at the site of action (the lung) without excessive side effects. However, the degree of lung selectivity depends on a large number of factors, including physicochemical properties of the drug molecules, patient disease state, and inhalation devices. To predict the impact of these factors on drug exposure, and thereby to understand the characteristics of an optimal drug for inhalation, we develop a predictive mathematical framework (a "pharmacokinetic model"). In contrast to previous approaches, our model allows knowledge from different sources to be combined appropriately, and it adequately predicted several distinct sets of clinical data. Finally, we compare the impact of different factors and find that the most important ones are the size of the inhaled particles, the affinity of the drug to the lung tissue, and the rate of drug dissolution in the lung. Contrary to common belief, the solubility of a drug in the lining fluids is not found to be relevant. These findings are important for understanding how inhaled drugs should be designed to achieve the best treatment results in patients.

The fate of orally inhaled drugs is determined by pulmonary pharmacokinetic processes such as particle deposition, pulmonary drug dissolution, and mucociliary clearance. Even though each single process has been systematically investigated, a quantitative understanding of the interaction of these processes remains limited, and identifying optimal drug and formulation characteristics for orally inhaled drugs is therefore still challenging. To investigate this complex interplay, the pulmonary processes can be integrated into mathematical models. However, existing modeling attempts considerably simplify these processes or are not systematically evaluated against (clinical) data.
In this work, we developed a mathematical framework based on physiologically-structured population equations to integrate all relevant pulmonary processes mechanistically. A tailored numerical resolution strategy was chosen and the mechanistic model was evaluated systematically against data from different clinical studies. Without adapting the mechanistic model or estimating kinetic parameters based on individual study data, the developed model was able to predict simultaneously (i) lung retention profiles of inhaled insoluble particles, (ii) particle size-dependent pharmacokinetics of inhaled monodisperse particles, (iii) pharmacokinetic differences between inhaled fluticasone propionate and budesonide, as well as (iv) pharmacokinetic differences between healthy volunteers and asthmatic patients. Finally, to identify the most impactful optimization criteria for orally inhaled drugs, the developed mechanistic model was applied to investigate the impact of input parameters on both the pulmonary and systemic exposure. Interestingly, the solubility of the inhaled drug did not have any relevant impact on the local and systemic pharmacokinetics. Instead, the pulmonary dissolution rate, the particle size, the tissue affinity, and the systemic clearance were the most impactful potential optimization parameters. In the future, the developed prediction framework should be considered a powerful tool for identifying optimal drug and formulation characteristics.
Informatics as a school subject has been virtually absent from bilingual education programs in German secondary schools. Most bilingual programs in German secondary education started out by focusing on subjects from the field of social sciences. Teachers and bilingual curriculum experts alike have been regarding those as the most suitable subjects for bilingual instruction – largely due to the intercultural perspective that a bilingual approach provides. And though one cannot deny the gain that ensues from an intercultural perspective on subjects such as history or geography, this benefit is certainly not limited to social science subjects. In consequence, bilingual curriculum designers have already begun to include other subjects such as physics or chemistry in bilingual school programs. It only seems a small step to extend this to informatics. This paper will start out by addressing potential benefits of adding informatics to the range of subjects taught as part of English-language bilingual programs in German secondary education. In a second step it will sketch out a methodological (= didactical) model for teaching informatics to German learners through English. It will then provide two items of hands-on and tested teaching material in accordance with this model. The discussion will conclude with a brief outlook on the chances and prerequisites of firmly establishing informatics as part of bilingual school curricula in Germany.
In an attempt to pave the way for more extensive Computer Science Education (CSE) coverage in K-12, this research developed and made a preliminary evaluation of a blended-learning Introduction to CS program based on an academic MOOC. Using an academic MOOC that is pedagogically effective and engaging, such a program may provide teachers with disciplinary scaffolds and allow them to focus their attention on enhancing students’ learning experience and nurturing critical 21st-century skills such as self-regulated learning. As we demonstrate, this enabled us to introduce an academic-level course to middle-school students. In this research, we developed the principles and initial version of such a program, targeting ninth-graders in science-track classes who learn CS as part of their standard curriculum. We found that the middle-schoolers who participated in the program achieved academic results on par with undergraduate students taking this MOOC for academic credit. Participating students also developed a more accurate perception of the essence of CS as a scientific discipline. The unplanned school closure due to the COVID-19 pandemic outbreak challenged the research but underlined the advantages of such a MOOC-based blended-learning program over classic pedagogy in times of global or local crises that lead to school closure. While most of the science-track classes seemed to stop learning CS almost entirely, and the end-of-year MoE exam was discarded, the program’s classes smoothly moved to remote learning mode, and students continued to study at a pace similar to that experienced before the school shut down.
Graphs play an important role in many areas of Computer Science. In particular, our work is motivated by model-driven software development and by graph databases. For this reason, it is very important to have the means to express and to reason about the properties that a given graph may satisfy. With this aim, in this paper we present a visual logic that allows us to describe graph properties, including navigational properties, i.e., properties about the paths in a graph. The logic is equipped with a deductive tableau method that we have proved to be sound and complete.
Compared to their inorganic counterparts, organic semiconductors suffer from relatively low charge carrier mobilities. Therefore, expressions derived for inorganic solar cells to correlate characteristic performance parameters to material properties are prone to fail when applied to organic devices. This is especially true for the classical Shockley-equation commonly used to describe current-voltage (JV)-curves, as it assumes a high electrical conductivity of the charge transporting material. Here, an analytical expression for the JV-curves of organic solar cells is derived based on a previously published analytical model. This expression, bearing a similar functional dependence as the Shockley-equation, delivers a new figure of merit α to express the balance between free charge recombination and extraction in low mobility photoactive materials. This figure of merit is shown to determine critical device parameters such as the apparent series resistance and the fill factor.
A simplified run time analysis of the univariate marginal distribution algorithm on LeadingOnes
(2021)
With elementary means, we prove a stronger run time guarantee for the univariate marginal distribution algorithm (UMDA) optimizing the LEADINGONES benchmark function in the desirable regime with low genetic drift. If the population size is at least quasilinear, then, with high probability, the UMDA samples the optimum in a number of iterations that is linear in the problem size divided by the logarithm of the UMDA's selection rate. This improves over the previous guarantee, obtained by Dang and Lehre (2015) via the deep level-based population method, both in terms of the run time and by demonstrating further run time gains from small selection rates. Under similar assumptions, we prove a lower bound that matches our upper bound up to constant factors.
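The UMDA analysed above can be sketched in a few lines. The parameter values below (problem size, population sizes, margins) are illustrative choices for a toy run, not those used in the paper's analysis:

```python
import random

def leading_ones(x):
    """Number of consecutive ones at the start of the bit string."""
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def umda(n, lam, mu, max_iters, rng):
    """Univariate marginal distribution algorithm on LeadingOnes.

    Keeps one frequency per bit position, samples lam offspring,
    selects the mu best, and sets each frequency to the marginal
    of the selected population, clamped to [1/n, 1 - 1/n]
    (the usual margins that keep genetic drift in check).
    """
    p = [0.5] * n
    best = None
    for _ in range(max_iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=leading_ones, reverse=True)
        if best is None or leading_ones(pop[0]) > leading_ones(best):
            best = pop[0]
        if leading_ones(best) == n:
            return best
        selected = pop[:mu]
        for i in range(n):
            freq = sum(x[i] for x in selected) / mu
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return best

rng = random.Random(1)
result = umda(n=10, lam=100, mu=25, max_iters=300, rng=rng)
```

The selection rate in the paper's bound corresponds to mu/lam here; lam is chosen quasilinear in n to stay in the low-genetic-drift regime the abstract refers to.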
This paper describes the implementation of a workflow model for service-oriented computing of potential areas for wind turbines in jABC. Implementing a re-executable model reduces the manual effort of a multi-criteria site analysis. The aim is to examine the shift of typical geoprocessing tools of geographic information systems (GIS) from the desktop to the web. The analysis is based on a vector data set and mainly uses web services of the “Center for Spatial Information Science and Systems” (CSISS). This paper discusses the effort, benefits and problems associated with the use of these web services.
Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established hypothesis is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with considerable heterogeneity among existing studies on the hypothesis and causal evidence still limited, a final verdict on its robustness is still pending. To contribute to this ongoing debate, we conducted a week-long randomized controlled trial with N = 381 adult Instagram users recruited via Prolific. Specifically, we tested how active SNS use, operationalized as picture postings on Instagram, affects different dimensions of well-being. The results depicted a positive effect on users' positive affect but null findings for other well-being outcomes. The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs.
Lay Summary: Active use of social networking sites (SNSs) has long been assumed to benefit users' well-being. However, this established assumption is increasingly being challenged, with scholars criticizing its lack of empirical support and the imprecise conceptualization of active use. Nevertheless, with great diversity among conducted studies on the hypothesis and a lack of causal evidence, a final verdict on its viability is still pending. To contribute to this ongoing debate, we conducted a week-long experimental investigation with 381 adult Instagram users. Specifically, we tested how posting pictures on Instagram affects different aspects of well-being. The results of this study depicted a positive effect of posting Instagram pictures on users' experienced positive emotions but no effects on other aspects of well-being. The findings broadly align with the recent criticism against the active use hypothesis and support the call for a more nuanced view on the impact of SNSs on users.
Adaption von Lernwegen in adaptierten Lehrmaterialien für Studierende mit Berufsausbildungsabschluss
(2023)
Although more and more people do not enter university directly but first complete vocational training, the competencies acquired in that training are usually ignored by universities, both in content and in didactics. One approach to honoring these competencies is the formal recognition of prior competencies as credit points (required for the degree). Another option is the use of teaching and learning material specifically adapted to the target group of students with prior knowledge. To additionally account for individual differences, a further adaptation of individual learning paths allows learners to acquire exactly the competencies they still lack. In this paper, we present the exemplary development of such material based on the course „Datenbanken“ (Databases) for the target group of students who have completed vocational training as Fachinformatiker (IT specialists).
Algorithmic management
(2022)
We launched an original large-scale experiment concerning informatics learning in French high schools. We are using the France-IOI platform to federate resources and share observation for research. The first step is the implementation of an adaptive hypermedia based on very fine grain epistemic modules for Python programming learning. We define the necessary traces to be built in order to study the trajectories of navigation the pupils will draw across this hypermedia. It may be browsed by pupils either as a course support, or an extra help to solve the list of exercises (mainly for algorithmics discovery). By leaving the locus of control to the learner, we want to observe the different trajectories they finally draw through our system. These trajectories may be abstracted and interpreted as strategies and then compared for their relative efficiency. Our hypothesis is that learners have different profiles and may use the appropriate strategy accordingly. This paper presents the research questions, the method and the expected results.
In recent years, the number of students successfully passing the examinations in the first-year informatics course for various degree programs at Óbuda University has dropped sharply. This concerns examinations in the subfields of computer architecture, operation of peripheral devices, binary coding and logical operations, computer viruses, computer networks and the Internet, steganography and cryptography, and operating systems. More than half of the students were unable to pass the examinations of the first semesters. The analysis of study performance presented here aims to identify reasons for this development, to reduce the number of dropouts, and to improve student performance. The analysis shows that students download the required teaching materials from the server only one or two days before, or even on the day of, the examinations, leaving them insufficient time to learn. This tendency appears in all subfields of the degree program. A lack of continuous engagement seems to be one of the reasons for early failure. Furthermore, informatics courses evidently need to maintain continuous communication with students and provide feedback on current course content. This can be achieved by measures that motivate participation in the exercises or by short weekly written tests.
This paper contains a comprehensive analysis of how competence acquisition proceeds in a one-semester software project course. Besides the question of which competencies were acquired particularly well, the focus is on the influence of prior knowledge and competence. On this basis, several fundamental and concrete suggestions for improvement are developed on how broad competence acquisition can be fostered, i.e., how as many students as possible can develop across a broad spectrum of competencies.
Students enter computer science studies with very different competencies, experience and knowledge. We exploited 145 datasets of freshman computer science students, collected by learning management systems in relation to exam outcomes together with learning dispositions data (e.g., student dispositions, previous experiences and attitudes measured through self-reported surveys), to identify indicators that predict academic success and hence to make effective interventions for an extremely heterogeneous group of students.
Analysis of protrusion dynamics in amoeboid cell motility by means of regularized contour flows
(2021)
Amoeboid cell motility is essential for a wide range of biological processes including wound healing, embryonic morphogenesis, and cancer metastasis. It relies on complex dynamical patterns of cell shape changes that pose long-standing challenges to mathematical modeling and raise a need for automated and reproducible approaches to extract quantitative morphological features from image sequences. Here, we introduce a theoretical framework and a computational method for obtaining smooth representations of the spatiotemporal contour dynamics from stacks of segmented microscopy images. Based on a Gaussian process regression we propose a one-parameter family of regularized contour flows that allows us to continuously track reference points (virtual markers) between successive cell contours. We use this approach to define a coordinate system on the moving cell boundary and to represent different local geometric quantities in this frame of reference. In particular, we introduce the local marker dispersion as a measure to identify localized membrane expansions and provide a fully automated way to extract the properties of such expansions, including their area and growth time. The methods are available as an open-source software package called AmoePy, a Python-based toolbox for analyzing amoeboid cell motility (based on time-lapse microscopy data), including a graphical user interface and detailed documentation. Due to the mathematical rigor of our framework, we envision it to be of use for the development of novel cell motility models. We mainly use experimental data of the social amoeba Dictyostelium discoideum to illustrate and validate our approach.
Author summary: Amoeboid motion is a crawling-like cell migration that plays a key role in multiple biological processes such as wound healing and cancer metastasis. This type of cell motility results from expanding and simultaneously contracting parts of the cell membrane.
From fluorescence images, we obtain a sequence of points, representing the cell membrane, for each time step. By using regression analysis on these sequences, we derive smooth representations, so-called contours, of the membrane. Since the number of measurements is discrete and often limited, the question is raised of how to link consecutive contours with each other. In this work, we present a novel mathematical framework in which these links are described by regularized flows allowing a certain degree of concentration or stretching of neighboring reference points on the same contour. This stretching rate, the so-called local dispersion, is used to identify expansions and contractions of the cell membrane providing a fully automated way of extracting properties of these cell shape changes. We applied our methods to time-lapse microscopy data of the social amoeba Dictyostelium discoideum.
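The smoothing step can be illustrated with a simple periodic kernel regression on a closed contour. This is a plain Nadaraya-Watson smoother, not the Gaussian process regression used in AmoePy itself, and the circle data and all parameter values are invented for the illustration:

```python
import math
import random

def smooth_closed_contour(points, bandwidth=0.03):
    """Smooth a closed 2-D contour with a periodic kernel regression.

    Each point is parameterized by its index position t in [0, 1);
    the Gaussian kernel is wrapped around so the contour stays closed.
    """
    m = len(points)
    smoothed = []
    for j in range(m):
        wsum = wx = wy = 0.0
        for k in range(m):
            d = abs(j - k) / m
            d = min(d, 1 - d)              # wrap around the closed contour
            w = math.exp(-0.5 * (d / bandwidth) ** 2)
            wsum += w
            wx += w * points[k][0]
            wy += w * points[k][1]
        smoothed.append((wx / wsum, wy / wsum))
    return smoothed

# Noisy samples of a unit circle as a stand-in for a segmented cell contour.
rng = random.Random(0)
noisy = [(math.cos(2 * math.pi * t / 100) + rng.gauss(0, 0.08),
          math.sin(2 * math.pi * t / 100) + rng.gauss(0, 0.08))
         for t in range(100)]
smoothed = smooth_closed_contour(noisy)
```

The bandwidth plays the role of the regularization parameter: larger values give smoother contours at the price of shrinking sharp protrusions.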
The introductory phase of study is a key phase of tertiary education for students. Subject knowledge is taught with little relation to practice, and students fail to see the connections between the topics of the various lectures. To improve this situation, a workshop was developed that deepens the connection between programming and data structures. In it, the students independently develop the game Go-Moku as an Android app. The combination of software (Java, Android SDK) and hardware (tablet computers) in a small realistic software project is a new experience for the students.
First-semester students are not yet familiar with the demands of the teaching and learning process at a university or university of applied sciences. Their expectations are instead shaped by their previous learning history (Abitur, Fachabitur, or similar). In addition to the subject-related demands of the first semester, students must therefore also recognize and cope with changes in the teaching and learning process. Using an output-oriented informatics course as an example, it is shown that its strict requirements of measurability yield clear competence descriptions, which particularly accommodate first-semester students' need for orientation.
Answer Set Programming is enjoying increasing popularity for problem solving in various domains. While its modeling language allows us to express many complex problems in an easy way, its solving technology enables their effective resolution. In what follows, we detail some of the key factors of its success. Answer Set Programming [ASP; Brewka et al. Commun ACM 54(12):92–103, (2011)] is seeing a rapid proliferation in academia and industry due to its easy and flexible way to model and solve knowledge-intense combinatorial (optimization) problems. To this end, ASP offers a high-level modeling language paired with high-performance solving technology. As a result, ASP systems provide out-of-the-box, general-purpose search engines that allow for enumerating (optimal) solutions. These are represented as answer sets, each being a set of atoms representing a solution. The declarative approach of ASP allows a user to concentrate on a problem’s specification rather than the computational means to solve it. This makes ASP a prime candidate for rapid prototyping and an attractive tool for teaching key AI techniques, since complex problems can be expressed in a succinct and elaboration-tolerant way. This is eased by the tuning of ASP’s modeling language to knowledge representation and reasoning (KRR). The resulting impact is nicely reflected by a growing range of successful applications of ASP [Erdem et al. AI Mag 37(3):53–68, 2016; Falkner et al. Industrial applications of answer set programming. Künstliche Intelligenz (2018)]
To support students in the introductory phase of their studies, a novel and motivating introduction to the informatics pre-course was developed at RWTH Aachen and piloted in the winter semester 2011/12. Graphical programming with App Inventor was introduced and used to implement application-oriented projects. This paper describes the motivation for the redesign, the concept, and the evaluation of the trial run. These serve as the basis for a complete redesign of the pre-course for the winter semester 2012/2013.
Companies across a wide range of industries regard and demand project management competencies with growing priority. As a contribution to competence-oriented education, this paper presents interdisciplinary study modules as part of the business informatics degree program. The modules aim to enable students to plan and carry out concrete projects using standardized tools and methods according to the IPMA standard.
Arbeitsschutz bei Corona
(2020)
The use of neural networks is considered the state of the art in the field of image classification. A large number of different networks are available for this purpose which, appropriately trained, permit a high level of classification accuracy. Typically, these networks are applied to uncompressed image data, since the corresponding training was also carried out on image data of similarly high quality. However, if the image data contains image errors, classification accuracy deteriorates drastically. This applies in particular to coding artifacts which occur due to image and video compression. Typical application scenarios for video compression are narrowband transmission channels for which video coding is required but a subsequent classification is to be carried out on the receiver side. In this paper we present a special H.264/Advanced Video Codec (AVC) based video codec that allows certain regions of a picture to be coded with near-constant picture quality in order to allow reliable classification using neural networks, whereas the remaining image is coded at a constant bit rate. We have combined this feature with very low latency operation, which is usually also required in remote-control application scenarios. The codec has been implemented as a fully hardwired hardware architecture capable of High Definition video and suitable for Field Programmable Gate Arrays.
Argument mining on twitter
(2021)
In the last decade, the field of argument mining has grown notably. However, only relatively few studies have investigated argumentation in social media and specifically on Twitter. Here, we provide the, to our knowledge, first critical in-depth survey of the state of the art in tweet-based argument mining. We discuss approaches to modelling the structure of arguments in the context of tweet corpus annotation, and we review current progress in the task of detecting argument components and their relations in tweets. We also survey the intersection of argument mining and stance detection, before we conclude with an outlook.
In this paper we describe the recent state of our research project concerning computer science teachers’ knowledge of students’ cognition. We carried out a comprehensive analysis of textbooks, curricula and other resources that give teachers guidance in formulating assignments. In comparison to other subjects, only a few concepts and strategies are taught to prospective computer science teachers at university. We summarize them and give an overview of our empirical approach to measuring this knowledge.
ATIB
(2021)
Identity management is a principal component of securing online services. In the advancement of traditional identity management patterns, the identity provider has remained a Trusted Third Party (TTP). The service provider and the user need to trust a particular identity provider for correct attributes, amongst other demands. This paradigm changed with the invention of blockchain-based Self-Sovereign Identity (SSI) solutions that primarily focus on the users. SSI reduces the functional scope of the identity provider to an attribute provider while enabling attribute aggregation. Beyond that, the development of new protocols that disregard established ones, together with a significantly fragmented landscape of SSI solutions, poses considerable challenges for adoption by service providers. We propose an Attribute Trust-enhancing Identity Broker (ATIB) to leverage the potential of SSI for trust-enhancing attribute aggregation. Furthermore, ATIB abstracts from a dedicated SSI solution and offers standard protocols, thus facilitating adoption by service providers. Despite the brokered integration approach, we show that ATIB provides a high security posture. Additionally, ATIB does not compromise the ten foundational SSI principles for the users.
A degree course in IT and business administration solely for women (FIW) has been offered since 2009 at the HTW Berlin – University of Applied Sciences. This contribution discusses student motivations for enrolling in such a women only degree course and gives details of our experience over recent years. In particular, the approach to attracting new female students is described and the composition of the intake is discussed. It is shown that the women-only setting together with other factors can attract a new clientele for computer science.
Successfully delivering a lecture „Informatik I – Einführung in die Programmierung“ (Informatics I – Introduction to Programming) is difficult, despite a variety of existing materials and proven didactic methods. Precisely because of this wide choice, no robust concept has yet prevailed that guarantees a high success rate independently of the lecturers. At the Universities of Tübingen and Freiburg, Informatik I was taught from the same teaching materials and under similar conditions in order to test the robustness of the concept. The lecture is based on a systematic approach to learning programming developed by the PLT group in the USA. It is complemented by new approaches to student support, in particular supervised programming („Betreutes Programmieren“), in which students develop a solid basis for their programming skills. This report describes the experiences gathered, explains the development of the teaching methodology and the selection of content in comparison with previous lectures, and presents data on the success of the lecture.
Teachers of all subjects need informatics competencies in order to do justice to the growing everyday relevance of informatics and to current curricula. In Saxony, for example, the Gymnasium curriculum for the subject Gemeinschaftskunde, Rechtserziehung und Wirtschaft (civics, legal education and economics) refers, with the topic „Digitalisierung und sozialer Wandel“ (digitalization and social change) planned for grade 11, to artificial intelligence (AI) and explicitly to the importance of informatics education. To convey the necessary informatics foundations, a workshop was developed for student teachers of politics that teaches the basics of how AI works using supervised machine learning in neural networks. The workshop addresses societal implications such as data protection in training data and algorithmic bias, in order to enable an informed discourse on political topics. The workshop's goals for student teachers of politics are: (1) building informatics competencies related to AI, (2) strengthening the students' discussion skills through suitable informatics competencies, and (3) encouraging the students to transfer the content to suitable topics in politics lessons. The evaluation concept comprises a pre-post survey on confidence in one's competence to teach machine learning in neural networks in the classroom, as well as the analysis of a concluding discussion. The pre-post survey showed an increase in confidence in teaching competence. The analysis of the discussion showed the participants' awareness of the everyday relevance of AI, but not yet any application of the workshop's informatics content to support arguments in the discussion.
This document presents an axiom selection technique for classical first-order theorem proving based on the relevance of axioms for the proof of a conjecture. It is based on unifiability of predicates and does not need statistical information like symbol frequency. The scope of the technique is the reduction of the axiom set and the increase in the number of provable conjectures in a given time. Since the technique generates a subset of the axiom set, it can be used as a preprocessor for automated theorem proving. This technical report describes the conception, implementation and evaluation of ARDE. The selection method, which is based on a breadth-first graph search by unifiability of predicates, is a weakened form of the connection calculus and uses specialised variants of unifiability to speed up the selection. The implementation of the concept is evaluated by comparison with the results of the world championship of theorem provers of the year 2012 (CASC-J6). It is shown that both the theorem prover leanCoP, which uses the connection calculus, and E, which uses equality reasoning, can benefit from the selection approach. The evaluation also shows that the concept is applicable to theorem-proving problems with thousands of formulae and that the selection is independent of the calculus used by the theorem prover.
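The selection idea can be sketched as a breadth-first search that grows the axiom set outward from the conjecture. The sketch below simplifies the relevance criterion to shared predicate symbols (ARDE's actual criterion is unifiability of predicates), and the tiny axiom set is invented for the illustration:

```python
from collections import deque

def select_axioms(conjecture_preds, axioms, max_depth):
    """Breadth-first relevance selection.

    axioms: mapping from axiom name to the set of predicate symbols
    occurring in it. An axiom is selected when one of its predicates
    is reachable within max_depth steps from the conjecture.
    """
    reached = set(conjecture_preds)
    selected = set()
    frontier = deque((p, 0) for p in conjecture_preds)
    while frontier:
        pred, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for name, preds in axioms.items():
            if name not in selected and pred in preds:
                selected.add(name)
                for q in preds - reached:
                    reached.add(q)
                    frontier.append((q, depth + 1))
    return selected

# Toy axiom set: a chain p-q-r-s plus one unrelated axiom.
axioms = {
    "a1": {"p", "q"},
    "a2": {"q", "r"},
    "a3": {"r", "s"},
    "a4": {"t"},          # shares nothing with the conjecture
}
chosen = select_axioms({"p"}, axioms, max_depth=2)
```

The depth bound is what makes the method usable as a preprocessor: raising it trades a larger axiom subset for a higher chance of including all proof-relevant axioms.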
An informatics competition for upper secondary school students is described that introduces the working world of a computer scientist as realistically as possible over several weeks. In the competition, the student teams develop an Android app and organize its development using project management methods oriented towards professional agile processes. The paper presents the theoretical background on competitions, the organizational and didactic decisions, a first evaluation, as well as reflection and outlook.
Biographical learning emphasizes in particular the role of individual biographical experiences and their effects on self-image, world view, and patterns of behavior. In a nutshell, this perspective can be described as the difference between 'learning informatics' and 'becoming a computer scientist'. The article sketches the perspective of biographical learning using examples from informatics. In informatics, biographical learning matters first of all for purely pragmatic reasons. The rapid change of information technologies in everyday life alters the experiential backgrounds of university students (and pupils). Accordingly, expectations, interests, prior knowledge, general attitudes, or quite simply the learners' 'IT equipment' change as well.
A method is presented for acquiring the principles of three sorting algorithms through the development of interactive applications in Excel.
Based on the performance requirements of modern spatio-temporal data mining applications, in-memory database systems are often used to store and process the data. To efficiently utilize the scarce DRAM capacities, modern database systems support various tuning possibilities to reduce the memory footprint (e.g., data compression) or increase performance (e.g., additional indexes). However, the selection of cost- and performance-balancing configurations is challenging due to the vast number of possible setups consisting of mutually dependent individual decisions. In this paper, we introduce a novel approach to jointly optimize the compression, sorting, indexing, and tiering configuration for spatio-temporal workloads. Further, we consider horizontal data partitioning, which enables the independent application of different tuning options on a fine-grained level. We propose different linear programming (LP) models addressing cost dependencies at different levels of accuracy to compute optimized tuning configurations for a given workload and memory budget. To yield maintainable and robust configurations, we extend our LP-based approach to incorporate reconfiguration costs as well as a worst-case optimization for potential workload scenarios. Further, we demonstrate on a real-world dataset that our models allow us to significantly reduce the memory footprint with equal performance, or to increase the performance with equal memory size, compared to existing tuning heuristics.
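The shape of the joint selection problem can be illustrated with a tiny brute-force variant: pick one tuning option per data partition so that total scan cost is minimal under a memory budget. The partitions, memory footprints, and costs below are invented numbers; the paper's actual approach uses LP models over far larger configuration spaces, where enumeration is infeasible:

```python
from itertools import product

def best_configuration(chunks, budget):
    """Exhaustively pick one (memory, cost) option per chunk,
    minimizing total cost subject to the memory budget."""
    names = list(chunks)
    options = [chunks[name] for name in names]
    best_cost, best_pick = None, None
    for pick in product(*options):
        mem = sum(m for m, _ in pick)
        cost = sum(c for _, c in pick)
        if mem <= budget and (best_cost is None or cost < best_cost):
            best_cost, best_pick = cost, dict(zip(names, pick))
    return best_cost, best_pick

# (memory footprint, scan cost) per tuning option of each partition,
# e.g. uncompressed vs. dictionary-compressed vs. compressed + index.
chunks = {
    "trips_2020": [(10, 1.0), (4, 1.8), (6, 0.6)],
    "trips_2021": [(12, 1.2), (5, 2.0), (7, 0.7)],
}
cost, pick = best_configuration(chunks, budget=14)
```

Mutual dependencies between decisions (e.g. an index only paying off on sorted, uncompressed data) are what make the real problem harder than this independent-choice toy.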
BugHunt
(2015)
Competencies related to operating systems and computer security are usually taught systematically. In this paper we present a different approach, in which students have to remove virus-like behaviour from their respective computers, induced by software developed for this purpose. They have to develop appropriate problem-solving strategies and thereby explore essential elements of the operating system. The approach was implemented exemplarily in two computer science courses at a regional general upper secondary school and elicited great motivation and interest among the participating students.
Student teachers often struggle to keep track of everything that is happening in the classroom, and particularly to notice and respond when students cause disruptions. The complexity of the classroom environment is a potential contributing factor that has not been empirically tested. In this experimental study, we utilized a virtual reality (VR) classroom to examine whether classroom complexity affects the likelihood of student teachers noticing disruptions and how they react after noticing. Classroom complexity was operationalized as the number of disruptions and the existence of overlapping disruptions (multidimensionality) as well as the existence of parallel teaching tasks (simultaneity). Results showed that student teachers (n = 50) were less likely to notice the scripted disruptions, and also less likely to respond to the disruptions in a comprehensive and effortful manner, when facing greater complexity. These results may have implications for both teacher training and the design of VR for training or research purposes. This study contributes to the field in two respects: 1) it reveals how features of the classroom environment can affect student teachers' noticing of and reaction to disruptions; and 2) it extends the functionality of the VR environment from a teacher training tool to a testbed of fundamental classroom processes that are difficult to manipulate in real life.
CloudStrike
(2020)
Most cyber-attacks and data breaches in cloud infrastructure are due to human errors and misconfiguration vulnerabilities. Cloud customer-centric tools are imperative for mitigating these issues; however, existing cloud security models are largely unable to tackle these security challenges. Novel security mechanisms are therefore imperative, and we propose Risk-driven Fault Injection (RDFI) techniques to address them. RDFI applies the principles of chaos engineering to cloud security and leverages feedback loops to execute, monitor, analyze and plan security fault injection campaigns, based on a knowledge base. The knowledge base consists of fault models designed from secure baselines, cloud security best practices and observations derived during iterative fault injection campaigns. These observations are helpful for identifying vulnerabilities while verifying the correctness of security attributes (integrity, confidentiality and availability). Furthermore, RDFI proactively supports risk analysis and security hardening efforts by sharing security information with security mechanisms. We have designed and implemented the RDFI strategies, including various chaos engineering algorithms, as a software tool: CloudStrike. Several evaluations have been conducted with CloudStrike against infrastructure deployed on two major public cloud platforms: Amazon Web Services and Google Cloud Platform. Time performance increases linearly, in proportion to increasing attack rates. Also, the analysis of vulnerabilities detected via security fault injection has been used to harden the security of cloud resources, demonstrating the effectiveness of the security information provided by CloudStrike. We therefore believe that our approaches are suitable for overcoming contemporary cloud security issues.
CoFeeMOOC-v.2
(2021)
Providing adequate support to MOOC participants is often a challenging task due to the massiveness of the learner population and the asynchronous communication among peers and MOOC practitioners. This workshop aims to discuss common learner problems reported in the literature and to reflect on designing adequate feedback interventions with the use of learning data. Our aim is three-fold: a) to pinpoint MOOC aspects that impact the planning of feedback, b) to explore the use of learning data in designing feedback strategies, and c) to propose design guidelines for developing and delivering scaffolding interventions for personalized feedback in MOOCs. To do so, we will carry out hands-on activities that involve participants in interpreting learning data and using them to design adaptive feedback. This workshop appeals to researchers, practitioners and MOOC stakeholders who aim to provide contextualized scaffolding. We envision that this workshop will provide insights for bridging the gap between pedagogical theory and practice when it comes to feedback interventions in MOOCs.
Coherent network partitions
(2021)
We continue to study coherent partitions of graphs, in which the vertex set is partitioned into subsets that induce biclique spanned subgraphs. The problem of identifying the minimum number of edges to obtain biclique spanned connected components (CNP), called the coherence number, is NP-hard even on bipartite graphs. Here, we propose a graph transformation geared towards obtaining an O(log n)-approximation algorithm for the CNP on a bipartite graph with n vertices. The transformation is inspired by a new characterization of biclique spanned subgraphs. In addition, we study coherent partitions on prime graphs and show that finding a coherent partition of a graph reduces to finding coherent partitions in a prime graph. These results thus provide future directions for approximation algorithms for the coherence number of a given graph.
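The building block of a coherent partition is a subgraph spanned by a biclique (a complete bipartite graph). As a small illustration of that notion, and not of the paper's transformation or approximation algorithm, the following sketch checks whether a connected bipartite graph, given as an adjacency dict, is itself a biclique: it 2-colors the vertices by BFS and then verifies that every cross pair is an edge.

```python
from collections import deque

def is_biclique(adj):
    """Return True iff the connected graph given by the adjacency dict
    `adj` (vertex -> set of neighbors) is a complete bipartite graph.
    Illustrative helper only, not the paper's algorithm."""
    start = next(iter(adj))
    side = {start: 0}
    queue = deque([start])
    while queue:  # BFS 2-coloring
        u = queue.popleft()
        for v in adj[u]:
            if v not in side:
                side[v] = 1 - side[u]
                queue.append(v)
            elif side[v] == side[u]:
                return False  # odd cycle: not even bipartite
    left = [v for v in adj if side[v] == 0]
    right = [v for v in adj if side[v] == 1]
    # A biclique requires every left-right pair to be an edge.
    return all(w in adj[v] for v in left for w in right)
```

For example, the 4-cycle is the biclique K(2,2), while the 4-vertex path is bipartite but misses a cross edge, so it would need edge additions to become biclique spanned.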
Solving problems combining task and motion planning requires searching across a symbolic search space and a geometric search space. Because of the semantic gap between symbolic and geometric representations, symbolic sequences of actions are not guaranteed to be geometrically feasible. This compels us to search in the combined search space, where frequent backtracking between the symbolic and geometric levels makes the search inefficient. We address this problem by guiding the symbolic search with rich information extracted from the geometric level through culprit detection mechanisms.
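The idea of feeding geometric failures back to the symbolic level can be sketched as follows. This is a deliberately simplified, hypothetical illustration: real culprit detection in task and motion planning is richer, but the sketch shows the core mechanism of recording a geometrically infeasible action and pruning every later symbolic plan that contains it, instead of backtracking through each one.

```python
def plan_with_culprits(symbolic_plans, geometrically_feasible):
    """Try candidate symbolic plans in order. When the geometric level
    rejects an action, record it as a culprit and skip any later plan
    containing it. Returns the first fully feasible plan, or None."""
    culprits = set()
    for plan in symbolic_plans:
        if culprits & set(plan):
            continue  # pruned by culprit information, no geometric calls
        for action in plan:
            if not geometrically_feasible(action):
                culprits.add(action)  # remember why this plan failed
                break
        else:
            return plan  # every action passed the geometric check
    return None
```

Without the culprit set, every plan sharing the infeasible action would trigger its own round of geometric checking and backtracking; with it, those plans are discarded at the symbolic level.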