The Covid-19 pandemic and the war in Ukraine have imposed substantial additional financial burdens on municipalities in the form of extra expenses and revenue shortfalls. The state of North Rhine-Westphalia therefore decided, with the "Gesetz zur Isolierung der aus der Covid-19-Pandemie folgenden Belastungen der kommunalen Haushalte im Land Nordrhein-Westfalen (NKF-COVID-19-Isolierungsgesetz – NKF-CIG)" of 29 September 2020, to temporarily ease the preparation of municipal budgets and to "isolate" the additional financial burdens on the balance sheet. The first amending act of 1 December 2021 revised these rules and extended their period of application. The second amending act of 9 December 2022 broadened the law's substantive and temporal scope; at the same time, the law was renamed (NKF-CUIG) to reflect its substantive extension to the additional financial burdens arising from the war in Ukraine. Our position paper first critically examines the balance-sheet "isolation" of these additional financial burdens by means of an accounting convenience item (Bilanzierungshilfe) and identifies both the challenges of precisely determining these burdens and the practical problems of recognising, presenting, and measuring such an item in municipal annual financial statements. Second, it examines and critically discusses the consequences of recognising such an item for the audit of the annual financial statements. Third, it offers a legal-policy assessment of the NKF-CUIG. In summary, no "help" of the kind the term Bilanzierungshilfe colloquially suggests can be discerned. Situations comparable to the Covid pandemic and the war in Ukraine must be expected in the future as well.
In such cases, additional financial burdens could again jeopardise municipalities' legal capacity to act. To preserve it, however, the state legislature should consider measures other than capitalising a Bilanzierungshilfe. Alternative measures should, on the one hand, do justice to the particularities of the historical situation and to the goal of preserving municipalities' legal capacity to act. At the same time, they should avoid breaks in the double-entry accounting system (Doppik) and in budgetary law as well as unnecessary bureaucratic burdens.
The United Nations Sustainable Development Goals (SDGs), adopted in 2015, have become the frame of reference for sustainability strategies at the federal, state, and municipal levels. With the 2030 Agenda, cities have moved to the centre of attention. Their administrations face a challenging tension: on the one hand, the SDGs make the holistic claim to be fully integrated into municipal action; on the other hand, effective implementation requires a strong adaptation of the SDGs to the local context. Using a case study, this thesis examines how municipalities translate the United Nations' sustainability goals into their programmes of action and sustainability strategies, and which factors influence this process. It applies a translation-theoretical approach that understands the transfer of an idea into a local context as an active transfer in which the actions of the actors involved, and their construction of the idea being adopted, take centre stage. The translation is traced and analysed by means of qualitative interviews. The results show that while the SDGs are filtered by their relevance to the municipality, their normative claim is preserved and, given the municipality's progress being judged as limited, gains particular weight. Central factors influencing the translation are the available human and financial resources, the acceptance of the SDGs in the administration, politics, and society, and, not least, the personal commitment of individual administrative staff members.
Algorithmic management
(2022)
The organisation of legislative chambers and the consequences of parliamentary procedures have been among the most prominent research questions in legislative studies. Even though democratic elections not only lead to the formation of a government but also result in an opposition, the literature has mostly neglected oppositions and their role in legislative chambers. This paper proposes to fill this gap by looking at legislative organisation from the perspective of opposition players. It focuses on the potential influence of opposition players in the policy-making process and presents data on more than 50 legislative chambers. The paper shows considerable variance in the formal power granted to opposition players. Furthermore, the degree of institutionalisation of opposition rights is connected to electoral systems and not necessarily correlated with other institutional characteristics such as regime type or the size of legislative chambers.
This article merges the theoretical literature on non-controlling minority shareholdings (NCMS) into a coherent model to study the effects of NCMS on competition and collusion. The model encompasses both the case of a common owner holding shares of rival firms and the case of cross-ownership among rivals. We find that, by softening competition, NCMS weaken the sustainability of collusion in a greater variety of situations than earlier literature indicated. Such effects exist, in particular, in the presence of an effective competition authority.
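The mechanism can be illustrated with the textbook incentive condition for sustaining collusion in an infinitely repeated game (a generic sketch with standard symbols, not the article's exact model): colluding forever must beat deviating once and being punished thereafter,

```latex
\frac{\pi^{C}}{1-\delta} \;\ge\; \pi^{D} + \frac{\delta\,\pi^{N}}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \delta^{*} \;=\; \frac{\pi^{D}-\pi^{C}}{\pi^{D}-\pi^{N}},
```

where \(\pi^{C}\), \(\pi^{D}\), \(\pi^{N}\) denote the per-period collusive, deviation, and punishment payoffs and \(\delta\) the discount factor. In the simplest channel, if NCMS soften competition, the punishment payoff \(\pi^{N}\) rises, which raises the critical threshold \(\delta^{*}\) and thus makes collusion harder to sustain, in line with the weakening effect described above.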
Volatile supply and sales markets, coupled with increasing product individualization and complex production processes, present significant challenges for manufacturing companies. They must navigate and adapt to ever-shifting external and internal factors while ensuring robustness against process variabilities and unforeseen events. This has a pronounced impact on production control, which serves as the operational interface between production planning and the shop-floor resources and requires the capability to manage intricate process interdependencies effectively. Considering the increasing dynamics and product diversification, alongside the need to maintain constant production performance, the implementation of innovative control strategies becomes crucial.
In recent years, the integration of Industry 4.0 technologies and machine learning methods has gained prominence in addressing emerging challenges in production applications. Within this context, this cumulative thesis analyzes deep-learning-based production systems on the basis of five publications. Particular attention is paid to applications of deep reinforcement learning, aiming to explore its potential in dynamic control contexts. The analysis reveals that deep reinforcement learning excels in various applications, especially in dynamic production control tasks. Its efficacy can be attributed to its interactive learning and real-time operational model. However, despite its evident utility, there are notable structural, organizational, and algorithmic gaps in the prevailing research. A predominant portion of deep-reinforcement-learning-based approaches is limited to specific job shop scenarios and often overlooks the potential synergies of combined resources. Furthermore, the analysis highlights that multi-agent systems and semi-heterarchical systems are rarely implemented in practical settings. A notable gap remains in the integration of deep reinforcement learning into a hyper-heuristic.
To bridge these research gaps, this thesis introduces a deep-reinforcement-learning-based hyper-heuristic for the control of modular production systems, developed in accordance with the design science research methodology. Implemented within a semi-heterarchical multi-agent framework, this approach achieves a threefold reduction in control and optimisation complexity while ensuring high scalability, adaptability, and robustness of the system. In comparative benchmarks, this control methodology outperforms rule-based heuristics, reducing throughput times and tardiness, and effectively incorporates customer- and order-centric metrics. The control artifact facilitates rapid scenario generation, motivating further research and bridging the gap to real-world applications. The overarching goal is to foster a synergy between theoretical insights and practical solutions, thereby enriching scientific discourse and addressing current industrial challenges.
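The core idea of a learning-based hyper-heuristic, an agent that selects which dispatching rule to apply rather than scheduling jobs directly, can be sketched in a few lines. The rules, the toy job model, and the epsilon-greedy bandit learner below are illustrative placeholders for the thesis's deep-reinforcement-learning design, not a reproduction of it:

```python
import random

# Candidate dispatching rules: each orders the job list differently.
RULES = {
    "FIFO": lambda jobs: sorted(jobs, key=lambda j: j["arrival"]),
    "SPT":  lambda jobs: sorted(jobs, key=lambda j: j["proc"]),   # shortest processing time
    "EDD":  lambda jobs: sorted(jobs, key=lambda j: j["due"]),    # earliest due date
}

def total_tardiness(sequence):
    """Sum of lateness over a processing sequence on a single machine."""
    t, tardy = 0, 0
    for job in sequence:
        t += job["proc"]
        tardy += max(0, t - job["due"])
    return tardy

def learn_rule(jobs, episodes=300, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn which rule minimizes total tardiness."""
    rng = random.Random(seed)
    value = {name: 0.0 for name in RULES}   # running mean cost per rule
    count = {name: 0 for name in RULES}
    for _ in range(episodes):
        name = (rng.choice(list(RULES)) if rng.random() < eps
                else min(value, key=value.get))       # lower cost is better
        cost = total_tardiness(RULES[name](jobs))
        count[name] += 1
        value[name] += (cost - value[name]) / count[name]  # incremental mean
    return min(value, key=value.get)

jobs = [{"arrival": i, "proc": p, "due": d}
        for i, (p, d) in enumerate([(4, 5), (1, 3), (6, 20), (2, 4)])]
best = learn_rule(jobs)
```

In a realistic setting the state would capture shop-floor features (queue lengths, due-date slack) and the learner would be a deep RL agent, but the separation of concerns is the same: the agent picks a rule, the rule picks a job.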
Advancing digitalization is changing society and has far-reaching effects on people and companies. Fundamental to these changes are the new technological possibilities for processing data on an ever-increasing scale and for various purposes. The availability of large and high-quality data sets, especially those based on personal data, is crucial. They are used either to improve the productivity, quality, and individuality of products and services or to develop new types of services. Today, user behavior is tracked more actively and comprehensively than ever, despite increasing legal requirements for protecting personal data worldwide. This increasingly raises ethical, moral, and social questions, which have moved to the forefront of the political debate, not least due to prominent cases of data misuse. Given this discourse and the legal requirements, today's data management must fulfill three conditions: legality, i.e. legal conformity of use; ethical legitimacy; and, thirdly, value creation from a business perspective. Within the framework of these conditions, this cumulative dissertation pursues four research objectives with a focus on gaining a better understanding of
(1) the challenges of implementing privacy laws,
(2) the factors that influence customers' willingness to share personal data,
(3) the role of data protection for digital entrepreneurship, and
(4) the topic's interdisciplinary scientific significance, its development, and its interrelationships.
Creative intensive processes
(2023)
Creativity – developing something new and useful – is a constant challenge in the working world. Work processes, services, and products must be sensibly adapted to changing times. To be able to analyze and, if necessary, adapt creativity in work processes, a precise understanding of these creative activities is necessary. Process modeling techniques are often used to capture business processes, represent them graphically, and analyze them for adaptation possibilities. For creative work, this has so far been possible only to a very limited extent. An accurate understanding of creative work faces the challenge that such work is, on the one hand, usually very complex and iterative and, on the other hand, at least partially unpredictable, as something new emerges. How can the complexity of creative business processes be adequately captured and at the same time kept manageable? This dissertation attempts to answer this question by first developing a precise process understanding of creative work. In an interdisciplinary approach, the literature on the process description of creativity-intensive work is analyzed from the perspectives of psychology, organizational studies, and business informatics. In addition, a digital ethnographic study in the context of software development is used to analyze creative work. On this basis, a model is developed with which four elementary process components can be analyzed: the Intention of the creative activity, Creation to develop the new, Evaluation to assess its meaningfulness, and Planning of the activities arising in the process – in short, the ICEP model. These four process elements are then translated into the Knowledge Modeling and Description Language (KMDL), which was developed to capture and represent knowledge-intensive business processes. The modeling extension based on the ICEP model enables creative business processes to be identified and specified without the need for extensive modeling of all process details.
The modeling extension proposed here was developed using ethnographic data and then applied to other organizational process contexts. The modeling method was applied to other business contexts and evaluated by external parties as part of two expert studies. The developed ICEP model provides an analytical framework for complex creative work processes. It can be comprehensively integrated into process models by transforming it into a modeling method, thus expanding the understanding of existing creative work in as-is process analyses.
Accurately solving classification problems is nowadays likely the most relevant machine learning task. Binary classification, which separates two classes only, is algorithmically simpler but has fewer potential applications, as many real-world problems are multi-class. Conversely, separating only a subset of classes simplifies the classification task. Even though existing multi-class machine learning algorithms are very flexible regarding the number of classes, they assume that the target set Y is fixed and cannot be restricted once training is finished. On the other hand, existing state-of-the-art production environments are becoming increasingly interconnected with the advance of Industry 4.0 and related technologies, such that additional information can simplify the respective classification problems. In light of this, the main aim of this thesis is to introduce dynamic classification, which generalizes multi-class classification such that the target class set can be restricted arbitrarily to a non-empty class subset M of Y at any time between two consecutive predictions.
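The restriction operation at the heart of this setting can be sketched in a few lines (my illustration, assuming calibrated class probabilities are available; `dynamic_predict` is a hypothetical helper, not the thesis's model): mask the posterior estimate to the allowed subset M and renormalize.

```python
import numpy as np

def dynamic_predict(probs, classes, allowed):
    """Restrict a posterior estimate to the allowed class subset M and renormalize.

    probs:   1-D array of class probabilities over `classes`
    classes: list of class labels (the full target set Y)
    allowed: non-empty subset M of Y, known only at prediction time
    """
    mask = np.array([c in allowed for c in classes])
    if not mask.any():
        raise ValueError("M must be a non-empty subset of Y")
    restricted = np.where(mask, probs, 0.0)
    restricted = restricted / restricted.sum()   # renormalize over M
    return classes[int(np.argmax(restricted))], restricted

classes = ["A", "B", "C", "D"]
probs = np.array([0.40, 0.35, 0.15, 0.10])
# The most probable class overall is "A", but if the process rules it out,
# the prediction shifts to the best class within M = {"B", "C"}.
label, p = dynamic_predict(probs, classes, allowed={"B", "C"})
```

The thesis goes well beyond this naive renormalization (calibration quality and the fusion of decomposed predictions both matter), but the sketch shows why the information must arrive between two consecutive predictions: it changes the argmax.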
This task is solved by a combination of two algorithmic approaches. The first is classifier calibration, which transforms predictions into posterior probability estimates that are intended to be well calibrated. The analysis provided focuses on monotonic calibration and, in particular, corrects wrong statements that have appeared in the literature. It also reveals that bin-based evaluation metrics, which became popular in recent years, are unjustified and should not be used at all. Next, the validity of Platt scaling, the most relevant parametric calibration approach, is analyzed in depth. In particular, its optimality for classifier predictions distributed according to four different families of probability distributions, as well as its equivalence with Beta calibration up to a sigmoidal preprocessing, are proven. For non-monotonic calibration, extended variants based on kernel density estimation and the ensemble method EKDE are introduced. Finally, the calibration techniques are evaluated in a simulation study with complete information as well as on a selection of 46 real-world data sets.
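Platt scaling itself is simple to sketch: fit a sigmoid of the form sigmoid(a*s + b) to held-out classifier scores s by minimizing the log loss. The following is a minimal gradient-descent version on toy data (Platt's original procedure uses a Newton-type optimizer and smoothed targets, both omitted here for brevity):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def platt_fit(scores, labels, lr=0.1, steps=2000):
    """Fit p(y=1|s) = sigmoid(a*s + b) by gradient descent on the log loss."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        p = sigmoid(a * scores + b)
        grad = p - labels                 # d(log loss)/d(logit)
        a -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return a, b

# Toy held-out set: positives tend to receive larger raw scores.
scores = np.array([-2.0, -1.0, -0.5, 0.3, 1.0, 2.0])
labels = np.array([0, 0, 0, 1, 1, 1])
a, b = platt_fit(scores, labels)
calibrated = sigmoid(a * scores + b)      # monotone map from scores to [0, 1]
```

Because the fitted map is a monotone transformation of the score, Platt scaling never changes the ranking of predictions, only their probabilistic interpretation; this monotonicity is exactly the property the monotonic-calibration analysis above is concerned with.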
Building on this, classifier calibration is applied as part of decomposition-based classification, which aims to reduce multi-class problems to simpler (usually binary) prediction tasks. For the fusing step performed at prediction time, a new approach based on evidence theory is presented that uses classifier calibration to model mass functions. This allows decomposition-based classification to be analyzed within a strictly formal framework and closed-form equations for the overall combinations to be proven. Furthermore, the same formalism leads to a consistent integration of dynamic class information, yielding a theoretically justified and computationally tractable dynamic classification model. The insights gained from this modeling are combined with pairwise coupling, one of the most relevant reduction-based classification approaches, such that all individual predictions are combined with a weight. This not only generalizes existing works on pairwise coupling but also enables the integration of dynamic class information.
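How dynamic class information enters a pairwise (one-vs-one) reduction can be sketched with a deliberately simple fusion rule that averages each class's pairwise probabilities. The thesis's evidence-theoretic, weighted combination is far more elaborate; `couple` and the numbers below are purely illustrative:

```python
# r[i][j] estimates P(class i | sample belongs to class i or j), as produced
# by the binary classifier trained on the pair (i, j).
def couple(r, classes, allowed=None):
    """Fuse pairwise estimates into class probabilities, optionally
    restricted to the subset `allowed`: pairs involving classes outside
    the subset are simply dropped before fusing."""
    allowed = set(classes if allowed is None else allowed)
    idx = [k for k, c in enumerate(classes) if c in allowed]
    scores = {}
    for i in idx:
        others = [j for j in idx if j != i]
        scores[classes[i]] = (sum(r[i][j] for j in others) / len(others)
                              if others else 1.0)
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

classes = ["A", "B", "C"]
r = {0: {1: 0.9, 2: 0.6},   # A vs B, A vs C
     1: {0: 0.1, 2: 0.2},   # B vs A, B vs C
     2: {0: 0.4, 1: 0.8}}   # C vs A, C vs B
full = couple(r, classes)                    # fuses all three pairwise classifiers
restricted = couple(r, classes, {"B", "C"})  # only the B-vs-C classifier remains
```

The restricted call illustrates the computational appeal of dynamic class information in this setting: ruling classes out shrinks the set of pairwise predictions that need to be combined at all.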
Lastly, a thorough empirical study is performed that compares all newly introduced approaches to existing state-of-the-art techniques. For this, evaluation metrics for dynamic classification are introduced that depend on corresponding sampling strategies. Thereafter, these are applied during a three-part evaluation. First, support vector machines and random forests are applied on 26 data sets from the UCI Machine Learning Repository. Second, two state-of-the-art deep neural networks are evaluated on five benchmark data sets from a relatively recent reference work. Here, computationally feasible strategies to apply the presented algorithms in combination with large-scale models are particularly relevant because a naive application is computationally intractable. Finally, reference data from a real-world process allowing the inclusion of dynamic class information are collected and evaluated. The results show that in combination with support vector machines and random forests, pairwise coupling approaches yield the best results, while in combination with deep neural networks, differences between the different approaches are mostly small to negligible. Most importantly, all results empirically confirm that dynamic classification succeeds in improving the respective prediction accuracies. Therefore, it is crucial to pass dynamic class information in respective applications, which requires an appropriate digital infrastructure.
Teaching and learning, as well as administrative processes, are undergoing intensive change with the rise of artificial intelligence (AI) technologies and their diverse application opportunities in higher education. Accordingly, scientific interest in the topic in general, but also in specific focal points, has grown. However, there is no structured overview of AI in teaching and administration processes in higher education institutions that identifies major research topics and trends, specifies peculiarities, and develops recommendations for further action. To close this gap, this study systematizes the current scientific discourse on AI in teaching and administration in higher education institutions. It identifies (1) an imbalance in research between educational and administrative contexts, (2) an imbalance across disciplines and a lack of interdisciplinary research, (3) inequalities in cross-national research activities, and (4) neglected research topics and paths. In this way, the study contributes a comparative analysis of AI usage in administration versus teaching and learning processes, a systematization of the state of research, an identification of research gaps, and further research paths on AI in higher education institutions.