Events
Upcoming Events
Alessandra Rampinini (Université de Genève) on Open Data in Neurolinguistics
On 17 January, Alessandra Rampinini, postdoctoral fellow at the Brain and Language Lab, University of Geneva, will give the talk "Leveraging Open Data in Neurolinguistics: Challenges in the Age of Reproducible Science".
Time: 10.15-12.00
On-site: SOL:L303b
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract
In the rapidly evolving landscape of neuroscience and linguistics, open data practices are becoming pivotal for advancing reproducible and transparent science. This talk explores the opportunities and obstacles associated with implementing open data principles in neurolinguistics research. Open data can accelerate discovery, foster interdisciplinary collaboration, and enhance the reliability of findings by enabling independent validation. However, navigating ethical considerations, data standardization, and cultural and technical barriers remains a significant challenge, particularly when dealing with sensitive information such as brain imaging and language performance data.
In the second part of the talk, I will introduce the Nebula101 dataset, a rich multimodal resource designed to advance research in neurolinguistics. Nebula101 integrates behavioral, neuroimaging, and phenotypic data, annotated using standardized ontologies to ensure transparency, interoperability, and reusability. By adhering to open science principles such as the Brain Imaging Data Structure (BIDS), this dataset exemplifies how accessible, well-documented resources can facilitate interdisciplinary collaboration and reproducibility in neuroscience research. Through the lens of Nebula101, I will explore the ethical and practical challenges of sharing sensitive data while maintaining participant confidentiality and data integrity. Additionally, I will discuss how this dataset serves as a benchmark for validating neurolinguistic theories, fostering equitable access to data and empowering researchers to address critical questions about language and cognition. This section will demonstrate how resources such as Nebula101 represent a step forward in leveraging open data to enhance the impact of neurolinguistic research.
EEG analysis Workshop led by Olaf Hauk (University of Cambridge)
On 2-4 April, Olaf Hauk from the University of Cambridge will hold a workshop on EEG analysis (more information TBA).
This is an on-site event and registration is required. Please send an email to NLS@sol.lu.se to sign up.
Past Events
Ekaterina Kopaeva on Mood modulations of affective word processing: a predictive coding perspective
On 3 December, Ekaterina Kopaeva, PhD candidate at Lund University, will talk about "Mood modulations of affective word processing: a predictive coding perspective".
Time: 13.15-15.00
On-site: SOL:H402
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract
An individual's emotional state, or mood, has been shown to influence perception, attention, decision-making and other cognitive processes. Its effects extend to language, where it is seen as a context for information processing. If a linguistic expression is non-neutral in itself, mood might augment or attenuate its perceived valence. Motivated by a lack of clarity regarding the nature and temporal dynamics of the mood-valence interaction, we conducted an exploratory EEG study to examine whether an individual's mood might change the temporal profile of emotional word processing. We looked at the interaction of mood and valence in a control and two mood-induced conditions over three consecutive time windows. Results revealed an interaction in a happy but not a sad mood. High valence words elicited greater N1 amplitudes in the control condition, signalling greater attention allocation, but showed facilitation in a happy mood. In the subsequent time window (200–300 ms), congruence effects persisted: low valence words were attended to in the happy mood, as seen in increased P2 amplitudes, and high valence words were facilitated, as shown by less negative EPN slopes.
The talk will explore the potential of regarding mood as a hyperprior in the predictive coding framework, modulating predictions and prediction errors, and the extent to which such a view is supported by the current study.
Prof. Dr. Johanna Kissler on Emotion in word processing: neuroscientific evidence
On 26 November, Prof. Dr. Johanna Kissler from Bielefeld University, Germany, will talk about "Emotion in word processing: neuroscientific evidence".
Time: 13.15-15.00
On-site: SOL:A158
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract
Emotions can be elicited and described by words, apparently linking linguistic cognitive systems with older, more rudimentary emotional brain systems, thereby altering word processing. Processing of emotional words has been shown to differ from that of neutral ones: Emotional words elicit higher amplitude event-related potentials from the earliest processing stages and hemodynamic responses in deep "emotional brain" nuclei such as the amygdala. They are responded to faster, induce stronger pre-response activation of the motor systems, and are better remembered than neutral words. Moreover, they can also serve as efficient learning contexts, modulating the processing of other information such as faces. Yet, recent evidence has shown that what sets emotional words apart from neutral ones is itself malleable by context. Moreover, it has also become apparent that several of the physiological markers thought to index activation of emotional brain regions during emotional word processing persist even in the absence of those regions, and that subjective appraisal and emotional memory need not be altered by loss of medial temporal lobe structures. Thus, these data establish several facts about the processing of emotional words but also paint a complex picture of their cerebral representation: on the one hand, a large body of data shows that the processing of these words often draws on body-related brain structures. On the other hand, this embodiment is itself contextualized, and the processing system seems quite robust against considerable damage to emotion-related brain structures. Therefore, the data suggest that representation and processing of emotional words are realized in multiple, interactive and partly redundant systems that, while typically embodied and socially situated, can show a considerable degree of autonomy.
Oleksandra Osypenko on Measuring brain potentials in Ukrainian-Russian bilinguals: investigating the effects of two contrasting grammatical gender systems
On 29 October, Oleksandra Osypenko, visiting PhD student from Lancaster University, will present her work on "Measuring brain potentials in Ukrainian-Russian bilinguals: investigating the effects of two contrasting grammatical gender systems".
Time: 13.15-15.00
On-site: SOL:A158
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract
Our study explores the principle of linguistic relativity (Whorf, 1956), focusing on how the languages people speak influence cognition, specifically through the lens of grammatical gender. We examine how this linguistic feature in the native languages of Ukrainian-Russian simultaneous bilinguals impacts cognitive processes. Using an adapted nonverbal categorization task (Sato, Casaponsa, & Athanasopoulos, 2020), participants were asked to judge the similarity between pictures of objects and gendered faces. To investigate whether grammatical gender primes conceptual gender, we combine behavioral measures (reaction times and accuracy) with electrophysiological recordings (ERPs). Previous research has shown moderate behavioral effects and modulations of brain potentials such as N300, P2/VPP (Sato et al., 2020), and LAN (Boutonnet et al., 2012) in sequential bilinguals (with a gendered L1 and a genderless L2), supporting the idea that grammatical gender can unconsciously shape cognition.
Our study builds on this by examining simultaneous bilinguals with two gendered languages (Ukrainian and Russian), each of which has a three-gender system, while conducting the task in the context of genderless English (L2). Moreover, addressing critiques in the field, we investigate whether speakers of three-gendered languages show less cognitive influence from gender compared to speakers of binary-gender languages, where gender may be more salient (Sera et al., 2002). We utilize stimuli with both matching (e.g., "strawberry," feminine in both languages) and mismatching grammatical gender (e.g., "moon," masculine in Ukrainian, feminine in Russian). Based on previous research, we predict ERP modulations, including the N300 and P2/VPP waves, further illustrating the influence of grammatical gender on perceptual and semantic processing.
Bayesian Statistics Workshop by Joost van de Weijer
On 17 September, Joost van de Weijer from Lund University will lead a workshop on Bayesian Statistics.
Time: 13.15-15.00
Location: SOL:H402
Jinhee Kwon on Neural semantic effects of tone accents
On 10 September, Jinhee Kwon, PhD candidate at Lund University, will talk about "Neural semantic effects of tone accents".
Time: 13.15-15.00
On-site: SOL:A158
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract:
This study investigated whether the brain utilizes morphologically induced tones for semantic processing during online speech perception. An auditory comprehension task was conducted while measuring event-related potentials (ERPs). The study tested whether a discrepancy between contextual expectations and the tonal realizations of the target word would yield an N400 effect, indicative of semantic processing difficulty. An N400 effect was observed, reflecting integration difficulty due to semantic anomalies caused by incongruent tones. Additionally, the ERPs in the congruent conditions were modulated by the cohort entropy of the target word, indicating lexical competition. The late negativity observed in this study encompasses both the N400 and the pre-activation negativity. This overlap underscores the brain's potential for rapidly connecting form and meaning from different sources within the word, relying on statistically based prediction in semantic processing.
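Cohort entropy, as used in the abstract above, quantifies the uncertainty over the set of words still compatible with the speech input heard so far. As a rough illustration only (a toy lexicon with made-up frequencies, not the study's actual materials or computation), it can be sketched like this:

```python
import math

# Hypothetical toy lexicon with made-up relative frequencies (illustration only).
lexicon = {"candle": 40, "candy": 30, "candid": 20, "camera": 10}

def cohort_entropy(prefix, lexicon):
    """Shannon entropy over the cohort of words compatible with `prefix`."""
    cohort = {w: f for w, f in lexicon.items() if w.startswith(prefix)}
    total = sum(cohort.values())
    probs = [f / total for f in cohort.values()]
    return -sum(p * math.log2(p) for p in probs)

# Uncertainty shrinks as the unfolding word narrows the cohort:
print(cohort_entropy("ca", lexicon))     # four candidates remain
print(cohort_entropy("cand", lexicon))   # three candidates remain
print(cohort_entropy("candl", lexicon))  # a single candidate remains
```

Higher entropy at a given point in the word means more lexical competition among the remaining candidates, which is the quantity the congruent-condition ERPs were found to track.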
International Symposium on Speech Processing, organised by Renata Kochančikaitė and Tugba Lulaci
The symposium takes place on 30 May 2024, 13:00-19:00, at Lund University (Room: LUX C121 lecture hall)
About the symposium:
The latest developments in auditory and neurobiological aspects of speech perception will be explored and discussed during this one-day event which gathers together young and advanced researchers, students and professors.
Speech processing comprises how we perceive the sounds that make up spoken language, how we comprehend the meaning, and how we respond to it verbally. With the recent advancements of neuroimaging techniques, the field of speech processing has taken a turn towards broader cognitive science and has growing implications for scientific and societal development. This international symposium will bring prominent scholars from different language and neuroscience labs to one place for the exchange of the most recent findings. The brain mechanisms of speech processing will be discussed in an interdisciplinary group, providing fresh perspectives.
Speakers:
Prof. Yale E. Cohen, Penn Auditory Research Laboratory (University of Pennsylvania)
Dr. M. Florencia Assaneo, Institute for Neurobiology, UNAM (Universidad Nacional Autónoma de México)
Prof. Mikael Roll, Lund Neurolinguistics group (Lund University)
Dr. Pelle Söderström, MARCS Institute for Brain, Behaviour and Development (Western Sydney University) & Lund Neurolinguistics group (Lund University)
Program:
13:15-13:30 Opening remarks
13:30-14:30 Keynote 1: M. Florencia Assaneo "Individual Differences in Perceptual-Motor Synchronization Performance"
14:30-15:30 Mikael Roll "Predictive Speech Processing in the Brain"
15:30-16:00 Coffee break
16:00-17:00 Pelle Söderström "Rapid Brain Responses to Spoken Language"
17:00-18:00 Keynote 2: Yale E. Cohen "Neural Correlates of Auditory Perception"
18:00-18:45 General Discussion: Interdisciplinary perspectives on auditory perception
18:45-19:00 Closing notes
Hanna Lindfors on Similar ERP effects for verbal and pictorial sequences with hierarchical structure
On 28 May, Hanna Lindfors, PhD candidate at Linnaeus University, will talk about "Similar ERP effects for verbal and pictorial sequences with hierarchical structure".
Time: 13.15-15.00
On-site: SOL:A158
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract:
I will present a study in preparation for submission (see preliminary abstract below) and would greatly appreciate NLS members' insights.
ERP studies have suggested that adults' processing of hierarchical structure in picture sequences is similar to processing of syntactic structure in sentences. To extend these findings to children and to a within-subjects design, we developed a verbal paradigm that corresponded to the pictorial paradigm. In a first step, we showed that adults' ERPs for the novel verbal paradigm mainly replicated previous findings for the pictorial paradigm. That is, anterior negativities were elicited to disruptions within sentence-initial and sentence-final constituents, and posterior positivities were elicited to disruptions between constituents and within sentence-final constituents. Unlike adults, children did not show anterior negativities that patterned with the hierarchical structure in either the verbal or the pictorial domain. However, children did show the adult pattern of posterior positivities in both domains. These results reveal developmental differences in processing hierarchical structures and, as predicted, similarities in processing verbal and pictorial sequences, supporting domain-general views of language.
Co-authors: Kristina Hansson, John E. Drury, Neil Cohn, Eric Pakulak, and Annika Andersson
Carolin Dudschig (Universität Tübingen) on The N400 and Large Language Models: Prediction- vs. count-based methods
On 23 April, Carolin Dudschig from Universität Tübingen will talk about "The N400 and Large Language Models: Prediction- vs. count-based methods".
Time: 13:15
On site: SOL A158
Zoom Link: https://lu-se.zoom.us/j/62491331134
Abstract:
The N400 is a well-established event-related potential (ERP) component that is widely studied in the field of cognitive neuroscience. It is typically larger following the detection of a semantic violation or other incongruities within language processing tasks. The N400 provides valuable insights into the cognitive processes underlying language comprehension. For example, it has been used to investigate whether linguistic and world-knowledge violations are integrated in parallel during comprehension (e.g., Dudschig, Maienborn & Kaup, 2016; Hagoort, Hald, Bastiaansen & Petersson, 2004). Nevertheless, to date, it is still under debate what processes or information are reflected in the N400 and whether the integration of basic operators such as negation is reflected in the N400 (e.g., Dudschig et al., 2019). The accounts range from integration views - suggesting the N400 reflects integration processes - to the lexical view - suggesting that the N400 is non-combinatorial in nature - and dominant prediction-based accounts that focus on the predictability of the critical word (for a review, see Kutas & Federmeier, 2011). Recent developments in large language models (LLMs) have opened new avenues to investigate what processes are reflected and what insights can be gained from examining the N400.
This presentation aims to investigate the extent to which the N400 amplitude can be better explained by pre-determined discrete condition labels (e.g., correct vs. world-knowledge vs. semantic violation) versus continuous word-pair embedding measures derived from multiple LLMs. Overall, this presentation aims to bridge the gap between traditional N400 research and the emerging field of natural language modeling.
Dudschig, C., Mackenzie, I. G., Maienborn, C., Kaup, B., & Leuthold, H. (2019). Negation and the N400: Investigating temporal aspects of negation integration using semantic and world-knowledge violations. Language, Cognition and Neuroscience, 34(3), 309-319.
Dudschig, C., Maienborn, C., & Kaup, B. (2016). Is there a difference between stripy journeys and stripy ladybirds? The N400 response to semantic and world-knowledge violations during sentence processing. Brain and Cognition, 103, 38-49.
Hagoort, P., Hald, L., Bastiaansen, M., & Petersson, K. M. (2004). Integration of word meaning and world knowledge in language comprehension. Science, 304(5669), 438-441.
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621-647.
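The count-based versus prediction-based contrast in the talk title can be made concrete: a count-based measure derives word-pair relatedness from co-occurrence statistics in a corpus, whereas a prediction-based measure uses a language model's probability or embedding for the critical word. The sketch below illustrates only the count-based side, with a made-up five-sentence corpus (not the talk's materials or models): build co-occurrence vectors and compare cosine similarity for a plausible versus an odd word pair.

```python
from collections import defaultdict, Counter
import math

# Made-up toy corpus (illustration only; real count-based models use large corpora).
corpus = [
    "the ladybird has black spots",
    "the ladybird sat on a leaf",
    "the journey was a long trip",
    "the journey began at dawn",
    "black spots covered the leaf",
]

# Count-based step: a co-occurrence vector per word from within-sentence counts.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for c in words:
            if c != w:
                vectors[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = lambda vec: math.sqrt(sum(n * n for n in vec.values()))
    return dot / (norm(u) * norm(v))

# A semantically plausible pair scores higher than an implausible one:
print(cosine(vectors["ladybird"], vectors["spots"]))
print(cosine(vectors["ladybird"], vectors["journey"]))
```

In an N400 analysis, such continuous relatedness scores (or, on the prediction-based side, LLM-derived probabilities) can be regressed against single-trial amplitudes instead of using discrete condition labels.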
Heming Strømholt Bremnes (Norwegian University of Science and Technology, NTNU) on the computational complexity of quantifier verification and its neural consequences
The computational complexity of quantifier verification and its neural consequences
Date: 16 April
Time: 14.15
On-site: SOL:L303b
Zoom: https://lu-se.zoom.us/j/62491331134
Because of their mathematical nature, quantifiers are one of the few types of expressions in natural language for which a purely non-linguistic semantics can be given. This feature has prompted extensive study of quantificational expressions in formal semantics and has resulted in several theoretical results. Among these results is the fact that quantifiers fall into different classes depending on the complexity of their verification. Of interest to neuro- and psycholinguistics is the corollary that Aristotelian and numerical quantifiers (e.g. 'all' and 'three') can be verified with minimal working memory resources, whereas proportional quantifiers (e.g. 'most') require an algorithm with a dedicated memory component. In a series of three EEG experiments, we demonstrated that this theoretically derived difference is reflected in the evoked potential during sentence processing with verification. In this talk, I will present these experiments and attempt to draw some conclusions about the impact of verification complexity on models of sentence processing.
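The complexity difference at issue can be illustrated with toy verification procedures (a sketch of the theoretical distinction, not the experimental stimuli): 'all' and 'three' can be checked in a single pass with constant or bounded memory, while 'most' forces the verifier to keep a full running tally and compare counts at the end.

```python
def verify_all(items, pred):
    # Aristotelian 'all': one pass, constant memory -- fail on first counterexample.
    for x in items:
        if not pred(x):
            return False
    return True

def verify_at_least(items, pred, n):
    # Numerical 'three' (n=3): a counter bounded by n; stop as soon as it is reached.
    count = 0
    for x in items:
        if pred(x):
            count += 1
            if count >= n:
                return True
    return False

def verify_most(items, pred):
    # Proportional 'most': needs a running tally of BOTH outcomes, and the
    # comparison cannot be decided before the full count is known.
    yes = no = 0
    for x in items:
        if pred(x):
            yes += 1
        else:
            no += 1
    return yes > no

dots = ["red", "red", "blue", "red", "blue"]
is_red = lambda d: d == "red"
print(verify_all(dots, is_red))          # False
print(verify_at_least(dots, is_red, 3))  # True
print(verify_most(dots, is_red))         # True
```

The unbounded tally in `verify_most` is the "dedicated memory component" the abstract refers to, and the prediction is that this extra demand shows up in the evoked potential.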
Victor Almeida, LIM-27, Neuroscience lab, Institute of Psychiatry, Faculty of Medicine, University of São Paulo (USP)
Date: 9 April
Time: 13.15-15
On site: SOL L303b
Zoom: https://lu-se.zoom.us/j/62491331134
Victor Almeida from LIM-27, Neuroscience lab, Institute of Psychiatry, Faculty of Medicine, University of São Paulo (USP) combines insights from animal studies, linguistics and neurophysiology to explain electrical potentials associated with prediction and prediction error.
The Predictive Coding (PC) framework was popularised by renowned neuroscientist Karl Friston following Rao and Ballard's 1999 computational model of extra-classical receptive-field effects in the visual cortex. Since then, canonical tenets of PC theory have infiltrated various subfields of cognitive neuroscience (whether ipsis litteris or under adaptations). Arguably, an epitome of this phenomenon is none other than psycholinguistics, given its multitude of generative models of prediction and prediction error. Regrettably though, a critical aspect of PC theory has been largely overlooked in our field. Friston was originally inspired by a neurophysiological model which demonstrated how cortical feedback signals (predictions) and feedforward signals (residual, unpredicted error) could shape contextual interactions associated with peripheral receptive fields of lower visual cortex's pyramidal neurons - namely, in such a way that it mimicked in vivo recordings. Hence, such mesoscopic neural operations constitute the cardinal pillar of the entire cognitive dimension of the PC framework - and much of its appeal in neuroscience. Yet, save for a few exceptions (to my knowledge), the same preoccupation with neural constraints of this nature appears to be lacking in language studies, which, in turn, might be problematic for a few reasons. Firstly, for example, associative and sensory cortices differ quite significantly in ways pertaining to microstructure, neurophysiology, and neural populations that behave differently, insofar as these differences should ideally be accounted for whenever one conjectures about prediction/prediction error in language, rather than perception. While it is infeasible to observe them via recordings of language processing in animals (for obvious reasons), they can still be safely inferred from neural behaviour during more basic cognitive processes in higher-order regions (e.g., working memory, selective attention, categorisation).
Secondly, cognitive models can be extremely appealing even in spite of biological implausibility (as history itself teaches), and this poses a very real danger to the field - that is, it runs the risk of being misled into adopting questionable premises for empirical research, as well as non sequitur conclusions on the resulting data. In this seminar, I will thereby attempt to draw attention to these caveats. Namely, I will cover some of the transdisciplinary literature on the neural basis of prediction and prediction error - viz. as derived from in vivo studies and computational modelling of event-related potentials - and, by the end, I will make a case for a shift towards a more neurocentric approach in the study of language.
Efthymia Kapnoula on Individual differences in speech perception gradiency: Current insights and future directions
Efthymia Kapnoula, The Basque Center on Cognition, Brain and Language
Date: 12 March
Time: 10.15-12.00
On site: SOL A158
Zoom: https://lu-se.zoom.us/j/62491331134
Abstract: Listeners discriminate speech sounds from different phoneme categories better than equivalent acoustic differences within the same category - an empirical phenomenon widely known as Categorical Perception. Based on this phenomenon, it has been hypothesized that listeners perceive speech categorically, i.e., ignoring within-category differences. Despite the long prevalence of this idea in the field, there is now mounting evidence that listeners perceive speech sounds in a gradient manner and that they use subphonemic information to flexibly process speech. In addition, recent work on individual differences in speech perception has shed light onto the sources of this speech perception gradiency, as well as its functional role in spoken language processing. In this talk, I will present some key findings from this literature and briefly discuss some on-going work and future directions.
Sahel Azizpourlindy on What MEG can tell us about predictive processing during language comprehension
On 5 March, Sahel Azizpourlindy, PhD candidate at the Donders Centre for Cognition in Nijmegen, the Netherlands, will talk about what MEG can tell us about predictive processing during language comprehension.
Time: 13.15
On-site: SOL L123
Link to the Zoom room: https://lu-se.zoom.us/j/62491331134
Azizpourlindy combines MEG and large language models to study the neural indices of predictive processing.
The brain uses contextual information and prior knowledge to predict future content during language comprehension. Previously, it has been demonstrated that contextual word embeddings, derived from Large Language Models, can be linearly mapped to brain data. Recently, this method has been used to study neural signatures of predictive processing. One study found that in a naturalistic listening setting, predictive signatures of an upcoming word can be observed in its pre-onset signal, measured with ECoG. In the fMRI domain, another study has shown that including embeddings of multiple upcoming words improves the model's fit to brain data. This has been interpreted as an indication that the brain encodes long-range predictions. In this study, we examine whether the same predictive information can be found in MEG data, a signal with a lower signal-to-noise ratio than ECoG and higher temporal resolution than fMRI. We show that: 1) the signatures of pre-onset predictions are also detectable in MEG data, similarly to ECoG, and 2) contrary to what has been observed in the fMRI data, including future embeddings does not improve brain mapping in MEG signals. These findings provide a novel avenue for studying predictive processing during language comprehension with naturalistic stimuli.
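The linear-mapping step described above - regressing brain signals on contextual embeddings and scoring held-out correlation per sensor - follows the generic encoding-model recipe. A minimal sketch with simulated data (hypothetical dimensions and noise level; not the study's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 200 words, 50-dim embeddings, 30 MEG sensors.
n_words, n_dims, n_sensors = 200, 50, 30
X = rng.standard_normal((n_words, n_dims))          # contextual word embeddings
true_W = rng.standard_normal((n_dims, n_sensors))
Y = X @ true_W + 0.1 * rng.standard_normal((n_words, n_sensors))  # simulated MEG

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Fit on one half of the words, then score per-sensor correlation between
# predicted and observed signals on the held-out half (the usual "brain score").
W = ridge_fit(X[:100], Y[:100])
pred = X[100:] @ W
scores = [np.corrcoef(pred[:, s], Y[100:, s])[0, 1] for s in range(n_sensors)]
print(float(np.mean(scores)))
```

Testing "long-range prediction" then amounts to asking whether concatenating embeddings of upcoming words onto X improves these held-out scores, which is the comparison the abstract reports for fMRI versus MEG.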
Lars Meyer, Max Planck Institute, Leipzig, on How brain electrophysiology shapes language
Date: 13 February
Time: 13.15-15.00
On site: SOL:H402
Zoom: https://lu-se.zoom.us/j/62491331134
Current research into the neurobiology of language puts strong focus on the role of periodic electrophysiological activity—so-called neural oscillations—for auditory and linguistic processing. Electrophysiological cycles are thought to provide processing time windows for acoustic and abstract linguistic units (e.g., prosodic and syntactic phrases, respectively). Most work has studied such functions in response to speech, that is, driven by acoustic or abstract cues available from the stimulus. My presentation turns this perspective around. I am presenting evidence that oscillations shape the comprehension and acquisition of language, as well as language as such, from the inside out. First, I discuss evidence that slow-frequency oscillations time-constrain our ability to form multi-word units during auditory comprehension and reading. Second, I show that the underlying neural rhythm may be reflected in the temporal architecture of prosody and syntax across the world’s languages. Third, I present cross-sectional electrophysiological results that suggest a tight relationship between the ontogenetic acceleration of brain rhythms—from slow to fast—and the gradual refinement of the temporal resolution of acoustic–phonological processing. In sum, I suggest that the built-in pace of brain electrophysiology poses an electrophysiological bottleneck for language acquisition, comprehension, and language as a cultural system.
Joint NLS and English Linguistics Seminar: Sara Farshchi (Lund University) on "ERP responses to confirmed and disconfirmed predictions in negated contexts"
On 6 December, 13:15-15:00, Sara Farshchi (Lund University) will talk about ERP responses to confirmed and disconfirmed predictions in negated contexts.
On-site: SOL:A158
NLS Symposium on Tone and prediction in language, organised by Sabine Gosselke Berthelsen and Mikael Roll
The symposium takes place on 17 November, 9:00-12:30, at Lund University:
On-site: SOL:H402
Zoom: https://lu-se.zoom.us/j/63486401613
In a series of six talks, the general process of prediction in different aspects of language will be discussed. Below is an overview of the program:
=========================================
09.00-09.30 Mikael Roll, Lund University
Lexical tone accents and prediction in the brain
09.30-10.00 Pelle Söderström, Western Sydney University
Within-word prediction: from tones to segments
10.00-10.30 Sabine Gosselke Berthelsen, University of Copenhagen
Morphophonological prediction in second language learners
Coffee break
11.00-11.30 Pei-Ju Chien, Lund University
Neural correlates of lexical tone and intonation perception in Mandarin Chinese
11.30-12.00 Wing Yee Chow, University College London
Incremental prediction in real-time language comprehension:
From meaning to pitch contour
12.00-12.30 Yiling Huo, University College London
Organisers: Sabine Gosselke Berthelsen and Mikael Roll
=========================================
For more information about the talks and abstracts, see the link below:
https://www.sol.lu.se/en/the-department/calendar/event/symposium-tone-and-prediction-language/
Panos Athanasopoulos (Lund University) on Language modulations of pre-attentive categorical perception
On 17 October at 13:15, Panos Athanasopoulos from Lund University will give a talk about "Language modulations of pre-attentive categorical perception".
Location: SOL:H402
Zoom Link: https://lu-se.zoom.us/j/63963142026
Abstract
Modern approaches to the Sapir-Whorf linguistic relativity hypothesis have reframed it from one of whether language shapes our thinking or not, to one that tries to understand the extent and nature of any observable influence of language on perception. One important dimension of this strand of research asks whether language modulates our perception only at a conscious level, or whether such modulations can also be observed outside of conscious awareness, at early pre-attentive stages of visual integration. The current talk will review Event Related Brain Potential (ERP) evidence from three research domains (colour, objects, grammatical gender) that sheds light on these questions. The data shows that it is possible to observe language effects very early in the visual processing stream, thus supporting one of the basic tenets of the linguistic relativity hypothesis, namely that “the 'real world' is to a large extent unconsciously built up on the language habits of the group” (Sapir 1958 [1929], p. 69).
Conference: NLS 2023, 1-2 June
The first NLS conference will take place on 1-2 June, 2023, in Lund.
For more information see:
Lia Călinescu (NTNU) on Verb-Noun and Adjective-Noun composition in the brain
Title: In search for composition in the brain: ERP and oscillatory effects of Verb-Noun and Adjective-Noun composition
Date: 9 May
Time: 13.15-15
Room: SOL:A158
Zoom https://lu-se.zoom.us/j/62491331134
Francesca Carota on "A neurobiologically informed theory of language production"
On 2 May, 13.15-15.00, Francesca Carota from the Max Planck Institute for Psycholinguistics & Donders Center for Cognitive Neuroimaging will give the talk "Towards a neurobiologically informed theory of language production".
Location: SOL A158
Link to the Zoom room: https://lu-se.zoom.us/j/62491331134
Yury Shtyrov, Aarhus University, on morphosyntactic interactions through the lens of brain dynamics
Are complex words real mental objects represented in the lexicon as such, or are they learnt, stored and processed as mere combinations of individual morphemes bound together by morphosyntactic rules? Do these mechanisms differ depending on the type of morphology under investigation? Such questions debated in (psycho)linguistic literature can be straightforwardly addressed using neurophysiology. Using MEG and EEG, we have established a distinct double dissociation pattern in neurophysiological responses to spoken language, which can reflect lexical («representational») vs. (morpho)syntactic («combinatorial») processes in the brain. These are manifest as: (1) a larger passive (i.e. obtained without any stimulus-related task) brain response to meaningful words relative to matched meaningless pseudowords, reflecting stronger activation of pre-existing lexical memory traces for monomorphemic words (= lexical ERP/ERF pattern), (2) a smaller brain response amplitude for congruous word combinations (reflecting priming via syntactic links), relative to incongruous combinations where no priming is possible (=combinatorial pattern). This double dissociation – larger response for auditorily presented simple holistic representations vs. smaller response for well-formed combinatorial sequences – allows, in turn, for clear experimental predictions. Such experiments could test the nature of morphosyntactic processing by presenting the subjects with real complex words and incongruous morpheme combinations in passive auditory event-related designs, and comparing the relative dynamics of their brain responses.
We have used this neurophysiological approach to address a range of morphosyntactic questions: neural processing of compound words, past tense inflections, particle verbs as well as differences between inflectional and derivational morphology and processes of complex word acquisition in L1 and L2. This body of results generally supports a flexible dual-route account of complex-word processing, with a range of strategies involved dynamically, depending on exact psycholinguistic stimulus properties. Furthermore, as these experiments indicate, comprehension of spoken complex words is a largely automatized process underpinned by a very rapid (starting from ~50 ms) neural activation in bilateral perisylvian areas.
Date: 28 February
Time: 13.15-15
Room: SOL A158 or on zoom https://lu-se.zoom.us/j/62491331134
Mikkel Wallentin on "Sex/gender in language. Large differences with small effects and small differences with large effects"
Tuesday, November 8, 13:15-15, Mikkel Wallentin (Aarhus University) will present his work on "Sex/gender in language" in Lund.
Location: SOL: H402
Zoom link: https://lu-se.zoom.us/s/62491331134
Pei-Ju Chien on "The neural bases of speech intonation and lexical tone in Mandarin Chinese"
Tuesday, October 25, 10:15-12, Pei-Ju Chien will talk about "The neural bases of speech intonation and lexical tone in Mandarin Chinese".
Location: SOL: A158
Link to the Zoom room: https://lu-se.zoom.us/s/62491331134
Pelle Söderström on "Spoken-word recognition in the brain—A case for ubiquitous predictive processing"
Tuesday, October 18, 13:15-15, Pelle Söderström (Lund University & MARCS Institute, Sydney) will present his work on "Spoken-word recognition in the brain"
Location: SOL: H402
Link to the Zoom room: https://lu-se.zoom.us/s/62491331134
Rosario Tomasello on The neuropragmatics of speech acts, Tuesday 14 June
Elliot Murphy on his ECoG/iEEG work on syntactic composition, 1 June 14.15
Title: A cortical mosaic for linguistic structure: Insights from intracranial recordings
Elliot Murphy (University of Texas Health Science Center)
Wed., June 1, 14:15-15:30 CET
zoom link: https://NTNU.zoom.us/j/94287253224
This is a talk organized by NTNU.
Katharina Rufener on using tACS to modulate auditory gamma oscillations
Title: Modulating auditory gamma oscillations by means of transcranial alternating current stimulation (tACS) – first evidence on the efficacy and feasibility in individuals diagnosed with developmental dyslexia
Katharina Rufener (Otto-von-Guericke University, Magdeburg, Germany)
Wed., May 25, 13:15-14:15 CET
zoom link: https://lu-se.zoom.us/j/63263453894
Prediction in Brain Potentials 29 April 2022
Program
13:15-13:30 Introduction
13:30-14:15 Stronger expectations, larger negativities: slow negativities associated with semantic prediction in sentence comprehension. Patricia León-Cabrera
14:30-15:15 Information sampling during self-regulated language learning: Evidence using slow event-related brain components (ERPs). Antoni Rodriguez-Fornells
15:15-15:45 Coffee and snacks
15:45-16:30 The pre-activation negativity. Sabine Gosselke Berthelsen, Anna Hjortdal & Mikael Roll
16:30-17:00 General discussion