TY - GEN
A1 - Husain, Samar
A1 - Yadav, Himanshu
T1 - Target Complexity Modulates Syntactic Priming During Comprehension
T2 - Postprints der Universität Potsdam : Humanwissenschaftliche Reihe
N2 - Syntactic priming is known to facilitate comprehension of the target sentence if the syntactic structure of the target sentence aligns with the structure of the prime (Branigan et al., 2005; Tooley and Traxler, 2010). Such processing facilitation is understood to be constrained by factors such as lexical overlap between the prime and the target and the frequency of the prime structure. Syntactic priming in SOV languages is also understood to be influenced by similar constraints (Arai, 2012). Sentence comprehension in SOV languages is known to be incremental and predictive. Such a top-down parsing process involves establishing various syntactic relations based on the linguistic cues of a sentence, and the role of preverbal case-markers in achieving this is known to be critical. Given the evidence of syntactic priming during comprehension in these languages, this aspect of the comprehension process and its effect on syntactic priming becomes important. In this work, we show that syntactic priming during comprehension is affected by the probability of using the prime structure while parsing the target sentence. If the prime structure has a low probability given the sentential cues (e.g., nominal case-markers) in the target sentence, then the chances of persisting with the prime structure in the target are reduced. Our work demonstrates the role of structural complexity of the target with regard to syntactic priming during comprehension and highlights that syntactic priming is modulated by an overarching preference of the parser to avoid rare structures.
T3 - Zweitveröffentlichungen der Universität Potsdam : Humanwissenschaftliche Reihe - 619
KW - syntactic priming
KW - top-down parsing
KW - sentence comprehension
KW - SOV language
KW - Hindi
Y1 - 2020
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-460394
SN - 1866-8364
IS - 619
ER -
TY - JOUR
A1 - Husain, Samar
A1 - Yadav, Himanshu
T1 - Target Complexity Modulates Syntactic Priming During Comprehension
JF - Frontiers in Psychology
N2 - Syntactic priming is known to facilitate comprehension of the target sentence if the syntactic structure of the target sentence aligns with the structure of the prime (Branigan et al., 2005; Tooley and Traxler, 2010). Such processing facilitation is understood to be constrained by factors such as lexical overlap between the prime and the target and the frequency of the prime structure. Syntactic priming in SOV languages is also understood to be influenced by similar constraints (Arai, 2012). Sentence comprehension in SOV languages is known to be incremental and predictive. Such a top-down parsing process involves establishing various syntactic relations based on the linguistic cues of a sentence, and the role of preverbal case-markers in achieving this is known to be critical. Given the evidence of syntactic priming during comprehension in these languages, this aspect of the comprehension process and its effect on syntactic priming becomes important. In this work, we show that syntactic priming during comprehension is affected by the probability of using the prime structure while parsing the target sentence. If the prime structure has a low probability given the sentential cues (e.g., nominal case-markers) in the target sentence, then the chances of persisting with the prime structure in the target are reduced. Our work demonstrates the role of structural complexity of the target with regard to syntactic priming during comprehension and highlights that syntactic priming is modulated by an overarching preference of the parser to avoid rare structures.
KW - syntactic priming
KW - top-down parsing
KW - sentence comprehension
KW - SOV language
KW - Hindi
Y1 - 2019
U6 - https://doi.org/10.3389/fpsyg.2020.00454
SN - 1664-1078
VL - 11
PB - Frontiers Research Foundation
CY - Lausanne
ER -
TY - JOUR
A1 - Laurinavichyute, Anna
A1 - Yadav, Himanshu
A1 - Vasishth, Shravan
T1 - Share the code, not just the data
BT - a case study of the reproducibility of articles published in the Journal of Memory and Language under the open data policy
JF - Journal of memory and language
N2 - In 2019 the Journal of Memory and Language instituted an open data and code policy; this policy requires that, as a rule, code and data be released at the latest upon publication. How effective is this policy? We compared 59 papers published before, and 59 papers published after, the policy took effect. After the policy was in place, the rate of data sharing increased by more than 50%. We further looked at whether papers published under the open data policy were reproducible, in the sense that it should be possible to regenerate the published results given the data and, when provided, the code. For 8 out of the 59 papers, data sets were inaccessible. The reproducibility rate ranged from 34% to 56%, depending on the reproducibility criteria. The strongest predictor of whether an attempt to reproduce would be successful is the presence of the analysis code: it increases the probability of reproducing reported results by almost 40%. We propose two simple steps that can increase the reproducibility of published papers: share the analysis code, and attempt to reproduce one's own analysis using only the shared materials.
KW - Open data
KW - Reproducible statistical analyses
KW - Reproducibility
KW - Open science
KW - Meta-research
KW - Journal policy
Y1 - 2022
U6 - https://doi.org/10.1016/j.jml.2022.104332
SN - 0749-596X
SN - 1096-0821
VL - 125
PB - Elsevier
CY - San Diego
ER -
TY - THES
A1 - Yadav, Himanshu
T1 - A computational evaluation of feature distortion and cue weighting in sentence comprehension
T1 - Eine komputationale Evaluation von Feature-Verfälschung und Cue-Gewichtung in der Satzverarbeitung
N2 - Successful sentence comprehension requires the comprehender to correctly figure out who did what to whom. For example, in the sentence John kicked the ball, the comprehender has to figure out who did the action of kicking and what was being kicked. This process of identifying and connecting the syntactically related words in a sentence is called dependency completion. What are the cognitive constraints that determine dependency completion? A widely accepted theory is cue-based retrieval. The theory maintains that dependency completion is driven by a content-addressable search for the co-dependents in memory. Cue-based retrieval explains a wide range of empirical data from several constructions, including subject-verb agreement, subject-verb non-agreement, plausibility mismatch configurations, and negative polarity items.
However, there are two major empirical challenges to the theory: (i) data from grammatical sentences with subject-verb number agreement dependencies, where the theory predicts a slowdown at the verb in sentences like the key to the cabinet was rusty compared to the key to the cabinets was rusty, but the data are inconsistent with this prediction; and (ii) data from antecedent-reflexive dependencies, where a facilitation in reading times is predicted at the reflexive in the bodybuilder who worked with the trainers injured themselves vs. the bodybuilder who worked with the trainer injured themselves, but the data do not show a facilitatory effect. The work presented in this dissertation is dedicated to building a more general theory of dependency completion that can account for the above two datasets without losing the original empirical coverage of the cue-based retrieval assumption. In two journal articles, I present computational modeling work that addresses the above two empirical challenges. To explain the data from grammatical sentences with subject-verb number agreement dependencies, I propose a new model that assumes that cue-based retrieval operates on a probabilistically distorted representation of nouns in memory (Article I). This hybrid distortion-plus-retrieval model was compared against the existing candidate models using data from 17 studies on subject-verb number agreement in 4 languages. I find that the hybrid model outperforms the existing models of number agreement processing, suggesting that the cue-based retrieval theory must incorporate a feature distortion assumption. To account for the absence of a facilitatory effect in antecedent-reflexive dependencies, I propose an individual differences model, built within the cue-based retrieval framework (Article II). The model assumes that individuals may differ in how strongly they weigh a syntactic cue over a number cue. The model was fitted to data from two studies on antecedent-reflexive dependencies, and the participant-level cue weighting was estimated. We find that one-fourth of the participants, in both studies, weigh the syntactic cue higher than the number cue in processing reflexive dependencies, and the remaining participants weigh the two cues equally. The result indicates that the absence of the predicted facilitatory effect at the level of grouped data is driven by some, not all, participants who weigh the syntactic cue higher than the number cue. More generally, the result demonstrates that the assumption of differential cue weighting is important for a theory of dependency completion processes. This differential cue weighting idea was independently supported by a modeling study on subject-verb non-agreement dependencies (Article III). Overall, cue-based retrieval, which is a general theory of dependency completion, needs to incorporate two new assumptions: (i) the nouns stored in memory can undergo probabilistic feature distortion, and (ii) the linguistic cues used for retrieval can be weighted differentially. This is the cumulative result of the modeling work presented in this dissertation. The dissertation makes an important theoretical contribution: sentence comprehension in humans is driven by a mechanism that assumes cue-based retrieval, probabilistic feature distortion, and differential cue weighting. This insight is theoretically important because there is some independent support for these three assumptions in sentence processing and the broader memory literature.
The modeling work presented here is also methodologically important because, for the first time, it demonstrates (i) how complex models of sentence processing can be evaluated using data from multiple studies simultaneously, without oversimplifying the models, and (ii) how inferences drawn from individual-level behavior can be used in theory development.
N2 - Bei der Satzverarbeitung muss der Leser richtig herausfinden, wer wem was angetan hat. Zum Beispiel muss der Leser in dem Satz „John hat den Ball getreten“ herausfinden, wer die Aktion des Tretens ausgeführt hat und was getreten wurde. Dieser Prozess des Identifizierens und Verbindens der syntaktisch verwandten Wörter in einem Satz nennt man Dependency-Completion. Was sind die kognitiven Mechanismen, die die Dependency-Completion bestimmen? Eine weithin akzeptierte Theorie ist der Cue-based Retrieval. Die Theorie besagt, dass die Dependency-Completion durch eine inhaltsadressierbare Suche nach den Co-Dependenten im Gedächtnis vorangetrieben wird. Der Cue-basierte Abruf erklärt ein breites Spektrum an empirischen Daten aus mehreren Konstruktionen, darunter Subjekt-Verb-Übereinstimmung, Subjekt-Verb-Nichtübereinstimmung, Plausibilitäts-Mismatch-Konfigurationen und Elemente mit negativer Polarität. Es gibt jedoch zwei große empirische Herausforderungen für die Theorie: (i) Daten aus grammatischen Sätzen mit Subjekt-Verb-Numerus-Dependenzen, bei denen die Theorie eine Verlangsamung am Verb in Sätzen wie „the key to the cabinet was rusty“ im Vergleich zu „the key to the cabinets was rusty“ vorhersagt, die Daten aber nicht mit dieser Vorhersage übereinstimmen; und (ii) Daten aus Antezedens-Reflexiv-Dependenzen, bei denen eine Leseerleichterung am Reflexivum in „the bodybuilder who worked with the trainers injured themselves“ vs. „the bodybuilder who worked with the trainer injured themselves“ vorhergesagt wird, die Daten aber keine erleichternde Wirkung zeigen. Die in dieser Dissertation vorgestellte Arbeit widmet sich dem Aufbau einer allgemeineren Theorie der Dependency-Completion, die die beiden oben genannten Datensätze berücksichtigen kann, ohne die ursprüngliche empirische Abdeckung der Cue-based-Retrieval-Annahme zu verlieren. In zwei Zeitschriftenartikeln stelle ich Arbeiten zur Computermodellierung vor, die sich mit den beiden oben genannten empirischen Herausforderungen befassen. Um die Daten grammatischer Sätze mit Subjekt-Verb-Numerus-Übereinstimmung zu erklären, schlage ich ein neues Modell vor, das davon ausgeht, dass der Cue-basierte Abruf auf einer probabilistisch verzerrten Darstellung von Substantiven im Gedächtnis operiert (Artikel I). Dieses hybride Distortion-plus-Retrieval-Modell wurde anhand von Daten aus 17 Studien zur Subjekt-Verb-Numerus-Übereinstimmung in 4 Sprachen mit den bestehenden Kandidatenmodellen verglichen. Ich finde, dass das Hybridmodell die bestehenden Modelle der Numerus-Übereinstimmungsverarbeitung übertrifft, was darauf hindeutet, dass die Cue-based-Retrieval-Theorie eine Annahme der Feature-Verfälschung umfassen muss. Um das Fehlen eines erleichternden Effekts in Antezedens-Reflexiv-Dependenzen zu berücksichtigen, schlage ich ein Modell individueller Unterschiede vor, das innerhalb des Cue-based-Retrieval-Frameworks erstellt wurde (Artikel II). Das Modell geht davon aus, dass Individuen sich darin unterscheiden können, wie stark sie einen syntaktischen Cue gegenüber einem Numerus-Cue gewichten. Das Modell wurde an Daten aus zwei Studien zu Antezedens-Reflexiv-Dependenzen angepasst, und die Cue-Gewichtung auf Teilnehmerebene wurde geschätzt. Wir finden, dass in beiden Studien ein Viertel der Teilnehmer bei der Verarbeitung reflexiver Dependenzen den syntaktischen Cue höher gewichtet als den Numerus-Cue, während die verbleibenden Teilnehmer die beiden Cues gleich gewichten. Das Ergebnis weist darauf hin, dass das Fehlen des prognostizierten Erleichterungseffekts auf der Ebene der gruppierten Daten von einigen, aber nicht allen Teilnehmern getrieben wird, die den syntaktischen Cue höher gewichten als den Numerus-Cue. Allgemeiner gesagt zeigt das Ergebnis, dass die Annahme einer differentiellen Cue-Gewichtung wichtig für eine Theorie der Dependency-Completion ist. Diese Idee der differentiellen Cue-Gewichtung wurde unabhängig durch eine Modellierungsstudie zu Subjekt-Verb-Nichtübereinstimmungs-Dependenzen unterstützt (Artikel III). Insgesamt muss der Cue-basierte Abruf, der eine allgemeine Theorie der Dependency-Completion ist, zwei neue Annahmen aufnehmen: (i) die im Gedächtnis gespeicherten Substantive können einer probabilistischen Feature-Verfälschung unterliegen, und (ii) die für den Abruf verwendeten sprachlichen Cues können unterschiedlich gewichtet werden. Das ist das kumulative Ergebnis der in dieser Dissertation vorgestellten Modellierungsarbeit. Die Dissertation leistet einen wichtigen theoretischen Beitrag: Das Satzverständnis beim Menschen wird von einem Mechanismus getrieben, der einen Cue-basierten Abruf, eine probabilistische Feature-Verfälschung und eine differentielle Cue-Gewichtung annimmt. Diese Einsicht ist theoretisch wichtig, weil es für diese drei Annahmen einige unabhängige Unterstützung in der Satzverarbeitungs- und der weiteren Gedächtnisliteratur gibt. Die hier vorgestellten Modellierungsarbeiten sind auch methodisch wichtig, weil zum ersten Mal gezeigt wird, (i) wie komplexe Modelle der Satzverarbeitung anhand von Daten aus mehreren Studien gleichzeitig evaluiert werden können, ohne die Modelle zu stark zu vereinfachen, und (ii) wie Schlussfolgerungen aus dem Verhalten auf individueller Ebene in der Theorieentwicklung verwendet werden können.
KW - sentence comprehension
KW - individual differences
KW - cue-based retrieval
KW - memory distortion
KW - Approximate Bayesian Computation
KW - cue reliability
KW - ungefähre Bayessche Komputation
KW - Cue-Gewichtung
KW - Cue-basierter Retrieval
KW - individuelle Unterschiede
KW - Darstellung Verfälschung
KW - Satzverarbeitung
Y1 - 2023
U6 - http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:kobv:517-opus4-585055
ER -
TY - JOUR
A1 - Yadav, Himanshu
A1 - Husain, Samar
A1 - Futrell, Richard
T1 - Do dependency lengths explain constraints on crossing dependencies?
JF - Linguistics vanguard : multimodal online journal
N2 - In syntactic dependency trees, when arcs are drawn from syntactic heads to dependents, they rarely cross. Constraints on these crossing dependencies are critical for determining the syntactic properties of human language, because they define the position of natural language in formal language hierarchies. We study whether the apparent constraints on crossing syntactic dependencies in natural language might be explained by constraints on dependency lengths (the linear distance between heads and dependents).
We compare real dependency trees from treebanks of 52 languages against baselines of random trees which are matched with the real trees in terms of their dependency lengths. We find that these baseline trees have many more crossing dependencies than real trees, indicating that a constraint on dependency lengths alone cannot explain the empirical rarity of crossing dependencies. However, we find evidence that a combined constraint on dependency length and the rate of crossing dependencies might be able to explain two of the most-studied formal restrictions on dependency trees: gap degree and well-nestedness.
KW - crossing dependencies
KW - dependency length
KW - dependency treebanks
KW - efficiency
KW - language processing
KW - syntax
Y1 - 2021
U6 - https://doi.org/10.1515/lingvan-2019-0070
SN - 2199-174X
VL - 7
PB - De Gruyter Mouton
CY - Berlin ; New York, NY
ER -