Paradoxical leadership behaviour (PLB) represents an emerging leadership construct that can help leaders deal with conflicting demands. In this paper, we report three studies that add to this nascent literature theoretically, methodologically, and empirically. In Study 1, we validate an effective short-form measure of global PLB using three different samples. In Studies 2 and 3, we draw on the job demands-resources model to propose that paradoxical leaders promote followers' work engagement by simultaneously fostering follower goal clarity and work autonomy. The results of survey data from Studies 2 and 3 largely confirm our model. Specifically, our findings show that PLB is positively associated with follower goal clarity and work autonomy, and that PLB exerts an indirect effect on work engagement via these variables. Moreover, our results support a hypothesized interaction effect of goal clarity and work autonomy to predict followers' work engagement, as well as a conditional indirect effect of PLB on work engagement via the interactive effect. We discuss the practical implications for leaders and organizations.
Practitioner points
- To effectively engage followers in their work, leaders should create work environments in which followers know exactly what to do (i.e., have high goal clarity) but at the same time can determine on their own how to do their work (i.e., have high work autonomy).
- To foster both goal clarity and work autonomy, leaders should combine communal aspects of leadership (e.g., other-centred, flexibility-providing) with agentic aspects (e.g., maintaining decision control and enforcing performance standards).
- HR departments should design leadership training programmes that help leaders combine seemingly opposing, yet ultimately synergistic, behaviours.
While previous research underscores the role of leaders in stimulating employee voice behaviour, comparatively little is known about what affects leaders' support for such constructive but potentially threatening employee behaviours. We introduce leader-member exchange (LMX) quality as a central predictor of leaders' support for employees' ideas for constructive change. Beyond a general benefit of high LMX for leaders' idea support, we propose that high LMX is particularly critical to leaders' idea support when the idea voiced by an employee constitutes a power threat to the leader. We investigate leaders' attribution of prosocial and egoistic employee intentions as mediators of these effects. Hypotheses were tested in a quasi-experimental vignette study (N = 160), in which leaders evaluated a simulated employee idea, and a field study (N = 133), in which leaders evaluated an idea that had been voiced to them at work. Results show an indirect effect of LMX on leaders' idea support via attributed prosocial intentions but not via attributed egoistic intentions, and a buffering effect of high LMX on the negative effect of power threat on leaders' idea support. Results differed across studies with regard to the main effect of LMX on idea support.
Experiments in research on memory, language, and other areas of cognitive science are increasingly being analyzed using Bayesian methods. This has been facilitated by the development of probabilistic programming languages such as Stan, and by easily accessible front-end packages such as brms. The utility of Bayesian methods, however, ultimately depends on the relevance of the Bayesian model: in particular, whether it accurately captures the structure of the data and the data analyst's domain expertise. Even with powerful software, the analyst is responsible for verifying the utility of their model. To demonstrate this point, we introduce a principled Bayesian workflow (Betancourt, 2018) to cognitive science. Using a concrete working example, we describe the basic questions one should ask about the model: prior predictive checks, computational faithfulness, model sensitivity, and posterior predictive checks. The running example for demonstrating the workflow is reading-time data with a linguistic manipulation of object versus subject relative clause sentences. This principled Bayesian workflow also demonstrates how to use domain knowledge to inform prior distributions. It provides guidelines and checks for valid data analysis, avoiding overfitting complex models to noise, and capturing relevant data structure in a probabilistic model. Given the increasing use of Bayesian methods, we aim to discuss how these methods can be properly employed to obtain robust answers to scientific questions.
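To illustrate the first step of such a workflow, the following is a minimal sketch of a prior predictive check, written in Python with NumPy rather than brms/Stan. The model and all priors here are assumptions for illustration only (a lognormal model of reading times with a sum-coded relative-clause effect), not the specific model or priors used in the paper: one draws parameters from the priors, simulates data, and asks whether the simulated values are a priori plausible reading times.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical priors on the log-millisecond scale (illustrative, not the paper's):
n_sims = 1000
alpha = rng.normal(6.0, 0.6, n_sims)          # intercept: grand mean log reading time
beta = rng.normal(0.0, 0.1, n_sims)           # effect of object vs subject relative clause
sigma = np.abs(rng.normal(0.0, 0.5, n_sims))  # residual sd (half-normal via abs)

# Simulate one reading time per prior draw for each condition (+/- 0.5 coding)
rt_object = rng.lognormal(alpha + 0.5 * beta, sigma)
rt_subject = rng.lognormal(alpha - 0.5 * beta, sigma)

# The prior predictive check: do these simulated reading times look plausible
# (e.g., mostly hundreds of milliseconds, not microseconds or hours)?
print("median simulated object-RC reading time (ms):",
      round(float(np.median(rt_object))))
print("95% interval (ms):",
      np.percentile(rt_object, [2.5, 97.5]).round())
```

If the simulated interval covers wildly implausible values, the priors are revised before any real data are touched; the same simulate-and-inspect logic, applied after fitting, becomes a posterior predictive check.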