This paper examines the geopolitical confrontation between Iran and the USA and its effect on a change in the existing world order. It becomes clear that the US sanctions policy targets not only the nuclear program but fundamentally attempts to break the economic power of a regional power acting independently of US interests. Yet the sanctions policy has neither brought about a resolution of the nuclear dossier nor contained Iran's economic strength. It is time for a fundamental change of course.
From the contents:
Editors' preface
Preliminary remark
Introduction
Section I: The purpose of the "System of Philosophy"
Section II: Transition to the foundations of culture
Section III: The motif of humanity as a fundamental systematic problem
Section IV: The connection between morality and scientific experience as the guiding problem of Cohen's philosophical development
Section V: The task of reason as the foundation for the development of culture
Conclusion: Cohen's profession of reason
Afterword
Extract-Transform-Load (ETL) tools are used for the creation, maintenance, and evolution of data warehouses, data marts, and operational data stores. ETL workflows populate those systems with data from various data sources by specifying and executing a DAG of transformations. Over time, hundreds of individual workflows evolve as new sources and new requirements are integrated into the system. The maintenance and evolution of large-scale ETL systems requires much time and manual effort. A key problem is to understand the meaning of unfamiliar attribute labels in source and target databases and in ETL transformations. Hard-to-understand attribute labels lead to frustration and to time spent developing and understanding ETL workflows. We present a schema decryption technique to support ETL developers in understanding cryptic schemata of sources, targets, and ETL transformations. For a given ETL system, our recommender-like approach leverages the large number of mapped attribute labels in existing ETL workflows to produce good and meaningful decryptions. In this way we are able to decrypt attribute labels consisting of a number of unfamiliar few-letter abbreviations, such as UNP_PEN_INT, which we can decrypt to UNPAID_PENALTY_INTEREST. We evaluate our schema decryption approach on three real-world repositories of ETL workflows and show that it is able to suggest high-quality decryptions for cryptic attribute labels in a given schema.
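The core idea — expanding few-letter abbreviations in attribute labels using expansions mined from already-mapped labels in existing workflows — can be sketched as follows. This is a minimal illustration, not the paper's algorithm; the abbreviation table below is invented for the example (only UNP_PEN_INT appears in the abstract).

```python
# Hypothetical sketch of the schema-decryption idea: expand each
# few-letter abbreviation token of an attribute label using a
# dictionary that, in the real approach, would be mined from mapped
# attribute-label pairs in existing ETL workflows.

ABBREVIATIONS = {
    "UNP": "UNPAID",
    "PEN": "PENALTY",
    "INT": "INTEREST",
    "CUST": "CUSTOMER",  # illustrative extra entries
    "NO": "NUMBER",
}

def decrypt_label(label: str) -> str:
    """Expand every known abbreviation token in an attribute label."""
    tokens = label.split("_")
    # unknown tokens are kept as-is
    return "_".join(ABBREVIATIONS.get(t, t) for t in tokens)

print(decrypt_label("UNP_PEN_INT"))  # UNPAID_PENALTY_INTEREST
```

The real approach additionally has to rank competing expansions; a simple dictionary lookup like this ignores that ambiguity.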
Program behavior that relies on contextual information, such as physical location or network accessibility, is common in today's applications, yet its representation is not sufficiently supported by programming languages. With context-oriented programming (COP), such context-dependent behavioral variations can be explicitly modularized and dynamically activated. In general, COP could be used to manage any context-specific behavior. However, its contemporary realizations limit the control of dynamic adaptation. This, in turn, limits the interaction of COP's adaptation mechanisms with widely used architectures, such as event-based, mobile, and distributed programming. The JCop programming language extends Java with language constructs for context-oriented programming and additionally provides a domain-specific aspect language for declarative control over runtime adaptations. As a result, implementations redesigned with JCop are more concise and better modularized than their counterparts using plain COP. JCop's main features have been described in our previous publications. However, a complete language specification has not been presented so far. This report presents the entire JCop language, including the syntax and semantics of its new language constructs.
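To make the COP idea concrete, here is a minimal sketch in plain Python rather than JCop: a "layer" is activated dynamically for a scoped block and changes which behavior variant of a method runs. All names are illustrative; this is not the JCop syntax or API.

```python
# Minimal context-oriented-programming sketch: behavioral variations
# ("layers") are activated dynamically and scoped, changing which
# variant of a method executes. Illustrative only, not JCop.
import contextlib

_active_layers = []

@contextlib.contextmanager
def with_layer(layer: str):
    """Dynamically activate a layer for the duration of a with-block."""
    _active_layers.append(layer)
    try:
        yield
    finally:
        _active_layers.pop()

def greeting() -> str:
    # Context-dependent behavior: the "mobile" layer refines the base method.
    if "mobile" in _active_layers:
        return "hi"          # compact variant for a mobile context
    return "hello, world"    # base behavior

print(greeting())            # hello, world
with with_layer("mobile"):
    print(greeting())        # hi
```

JCop's contribution, per the abstract, is to move this activation logic out of the methods themselves into declarative, aspect-like constructs; the explicit `if` check above is exactly the scattering COP languages aim to avoid.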
Experimental and quantitative research in the field of human language processing and production strongly depends on the quality of the underlying language material: besides its size, representativeness, variety, and balance have been discussed as important factors that influence the design, analysis, and interpretation of experiments and their results. This volume brings together creators and users of both general-purpose and specialized lexical resources which are used in psychology, psycholinguistics, neurolinguistics, and cognitive research. It aims to be a forum to report experiences and results, review problems, and discuss perspectives of any linguistic data used in the field.
Praxismodelle im Studium: Chancen und Probleme aus der Perspektive von Potsdamer Studierenden (Practice models during university studies: opportunities and problems from the perspective of Potsdam students)
(2012)
Data dependencies, or integrity constraints, are used to improve the quality of a database schema, to optimize queries, and to ensure consistency in a database. In recent years, conditional dependencies have been introduced to analyze and improve data quality. In short, a conditional dependency is a dependency with a limited scope, defined by conditions over one or more attributes: only the matching part of the instance must adhere to the dependency. In this paper we focus on conditional inclusion dependencies (CINDs). We generalize the definition of CINDs, distinguishing covering and completeness conditions. We present a new use case for such CINDs, showing their value for solving complex data quality tasks. Further, we define quality measures for conditions, inspired by precision and recall. We propose efficient algorithms that identify covering and completeness conditions conforming to given quality thresholds. Our algorithms choose not only the condition values but also the condition attributes automatically. Finally, we show that our approach efficiently provides meaningful and helpful results for our use case.
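The precision- and recall-inspired measures can be illustrated on a toy inclusion dependency R[A] ⊆ S[B]: a covering-style score asks how many tuples matching the condition actually satisfy the inclusion, while a completeness-style score asks how many included tuples the condition captures. The data, attribute names, and exact scoring below are invented for illustration and are not the paper's definitions.

```python
# Illustrative quality measures for a conditional inclusion dependency
# (CIND) over a toy instance. R holds pairs (A value, condition value);
# S_B is the set of values present in S[B]. All data is made up.

R = [
    ("a1", "de"), ("a2", "de"), ("a3", "en"),
    ("a4", "en"), ("a5", "de"), ("a6", "de"),
]
S_B = {"a1", "a2", "a5"}  # a6 violates the inclusion despite matching "de"

def covering(cond_value: str) -> float:
    """Precision-like: fraction of condition-matching tuples whose A value is in S[B]."""
    matching = [a for a, c in R if c == cond_value]
    return sum(a in S_B for a in matching) / len(matching)

def completeness(cond_value: str) -> float:
    """Recall-like: fraction of included tuples that the condition captures."""
    included = [a for a, c in R if a in S_B]
    captured = [a for a in included
                if any(c == cond_value for b, c in R if b == a)]
    return len(captured) / len(included)

print(covering("de"), completeness("de"))  # 0.75 1.0
```

An algorithm in the spirit of the abstract would enumerate candidate condition attributes and values and keep those whose scores meet the given thresholds.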
More private, less state! Until recently, many regarded this battle cry as the key to successfully relieving strained municipal budgets, and more and more municipalities took this supposedly royal road. The forms of privatization observed in administrative practice are as varied as its objects: privatization of assets, of organization, and of tasks, with multifaceted hybrid forms, notably public-private partnerships. Selling off the family silver did bring municipalities a short-lived windfall. Yet by no means all privatization measures have met the expectations placed in them, and the insight is increasingly taking hold that the private sector does not necessarily work better, more efficiently, or more cheaply than the public sector. A clear trend reversal towards remunicipalization is now emerging at the municipal level. The 17th conference of the Institute for Local Government Studies (Kommunalwissenschaftliches Institut, KWI) of the University of Potsdam intervenes in this incipient fundamental debate and takes up current efforts to remunicipalize local tasks. The focus is on first practical experiences, implementation problems, and, not least, the legal framework and normative directives for remunicipalization, notably in segments of public services of general interest.
Cyber-physical systems achieve sophisticated system behavior by exploiting the tight interconnection of the physical coupling present in classical engineering systems and coupling based on information technology. A particularly challenging case are systems in which such cyber-physical systems are formed ad hoc according to the specific local topology, the available networking capabilities, and the goals and constraints of the subsystems captured by the information-processing part. In this paper we present a formalism that permits modeling the sketched class of cyber-physical systems. The ad hoc formation of tightly coupled subsystems of arbitrary size is specified using a UML-based graph transformation system approach. Differential equations are employed to define the resulting tightly coupled behavior. Together, both form hybrid graph transformation systems, where the graph transformation rules define the discrete steps in which the topology or modes may change, while the differential equations capture the continuous behavior between such discrete changes. In addition, we demonstrate that automated analysis techniques for inductive invariants, known for timed graph transformation systems, can be extended to also cover the hybrid case, for an expressive class of hybrid models in which the formed tightly coupled subsystems are restricted to smaller local networks.
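The interleaving of discrete steps and continuous behavior that defines a hybrid system can be sketched numerically: continuous phases governed by a differential equation alternate with discrete steps that change the mode (standing in for a graph transformation rule firing). The concrete dynamics, guard, and modes below are invented for illustration; the paper's formalism additionally changes topology, not just a mode flag.

```python
# Highly simplified hybrid-system illustration: continuous phases
# (integrated here with a plain Euler scheme) alternate with a discrete
# mode switch guarded by the current state. Everything is invented.

def euler(x, dxdt, dt, steps):
    """Integrate x' = dxdt(x) for `steps` Euler steps of size dt."""
    for _ in range(steps):
        x += dxdt(x) * dt
    return x

# Mode A: pure decay; mode B: constant inflow minus decay.
modes = {"A": lambda x: -0.5 * x, "B": lambda x: 1.0 - 0.5 * x}

x, mode = 10.0, "A"
x = euler(x, modes[mode], dt=0.01, steps=100)  # continuous phase in mode A
if x < 8.0:                                    # guard of a discrete "rule"
    mode = "B"                                 # discrete step: mode change
x = euler(x, modes[mode], dt=0.01, steps=100)  # continuous phase in mode B
print(round(x, 3), mode)
```

In the hybrid graph transformation setting, the guard and the mode change would be expressed as a rule over the current graph, and the active set of differential equations would be derived from the resulting topology.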
Räume der Mode (Spaces of Fashion)
(2012)
In response to the challenges of modernity's crisis of meaning and values, Paul Valéry turns to the intellectual capacity of the human being and develops a radically subjective philosophy close to life. For him, the philosopher becomes a poet who, by means of metaphors, casts his thinking into images. Philosophy is for him an art of thinking that depicts the possibilities of life, stimulates thought, and ...
Duplicate detection is the task of identifying all groups of records within a data set that each represent the same real-world entity. This task is difficult, because (i) representations might differ slightly, so some similarity measure must be defined to compare pairs of records, and (ii) data sets might have a high volume, making a pair-wise comparison of all records infeasible. To tackle the second problem, many algorithms have been suggested that partition the data set and compare all record pairs only within each partition. One well-known such approach is the Sorted Neighborhood Method (SNM), which sorts the data according to some key and then advances a window over the data, comparing only records that appear within the same window. We propose several variations of SNM that have in common a varying window size and advancement. The general intuition behind such adaptive windows is that there might be regions of high similarity, suggesting a larger window size, and regions of lower similarity, suggesting a smaller window size. We propose and thoroughly evaluate several adaptation strategies, some of which are provably better than the original SNM in terms of efficiency (same results with fewer comparisons).
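The adaptive-window intuition can be sketched as follows: after sorting by a key, the window around a record grows as long as the boundary pair is still similar (a high-similarity region) and stays small otherwise. This is one simple strategy under assumed parameters, not any of the paper's evaluated strategies; the similarity function and threshold are illustrative.

```python
# Sketch of the Sorted Neighborhood idea with one simple adaptive
# window strategy: sort records by a key, then extend the window while
# the boundary record is still similar to the window's first record.
from difflib import SequenceMatcher

def sim(a: str, b: str) -> float:
    """Illustrative string similarity in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def adaptive_snm(records, key=lambda r: r, threshold=0.8):
    """Return candidate duplicate pairs using an adaptively sized window."""
    data = sorted(records, key=key)
    pairs = []
    for i in range(len(data) - 1):
        j = i + 1
        # grow the window while neighboring records remain similar
        while j < len(data) and sim(key(data[i]), key(data[j])) >= threshold:
            j += 1
        pairs.extend((data[i], data[k]) for k in range(i + 1, j))
    return pairs

names = ["john smith", "jon smith", "mary jones", "marie jones"]
print(adaptive_snm(names))
```

On this toy input the window expands only inside the two similar regions, so exactly the two plausible duplicate pairs are compared; a fixed small window could miss pairs in larger clusters, and a fixed large one would waste comparisons in dissimilar regions.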
Lebenswissenschaft (Life Science)
(2012)
During the overall development of complex engineering systems, different modeling notations are employed. For example, in the domain of automotive systems, system engineering models are employed quite early to capture the requirements and the basic structuring of the entire system, while software engineering models are used later on to describe the concrete software architecture. Each model helps in addressing the specific design issue with appropriate notations and at a suitable level of abstraction. However, when stepping forward from system design to software design, the engineers have to ensure that all decisions captured in the system design model are correctly transferred to the software engineering model. Even worse, when changes occur later on in either model, the consistency today has to be reestablished in a cumbersome manual step. In this report, we present, in an extended version of [Holger Giese, Stefan Neumann, and Stephan Hildebrandt. Model Synchronization at Work: Keeping SysML and AUTOSAR Models Consistent. In Gregor Engels, Claus Lewerentz, Wilhelm Schäfer, Andy Schürr, and B. Westfechtel, editors, Graph Transformations and Model-Driven Engineering - Essays Dedicated to Manfred Nagl on the Occasion of his 65th Birthday, volume 5765 of Lecture Notes in Computer Science, pages 555-579. Springer Berlin / Heidelberg, 2010.], how model synchronization and consistency rules can be applied to automate this task and to ensure that the different models are kept consistent. We also introduce a general approach to model synchronization. Besides synchronization, the approach comprises tool adapters as well as consistency rules covering the overlap between the synchronized parts of a model and the rest.
We present the model synchronization algorithm, which is based on triple graph grammars, in detail and further exemplify the general approach by means of a model synchronization solution between system engineering models in SysML and software engineering models in AUTOSAR that has been developed for an industrial partner. In the appendix, as an extension to [19], the meta-models and all TGG rules for the SysML-to-AUTOSAR model synchronization are documented.