So on Tuesday we had a guest talk by one Karen Emmorey, described by one of our faculty as "the foremost researcher on sign language neurolinguistics". And it was a great talk. I will now do her talk approximately zero justice by attempting to summarise the coolest parts several days after the fact.
- If you're bilingual (or n-lingual for any n > 1), both languages are 'always on'. The main way this has been shown is through various kinds of Stroop tasks and eye-gaze tasks, where the word you're interested in (let's say 'marker') is similar in some way to a word in your other language ('marka', which I think means 'stamp' in Russian), and your eye gaze or response time reflects the fact that you've been distracted by the word in your other language.
- The other way we know that both languages are always on is from people who are bilingual in a sign language and a spoken language, because when they talk (in either modality) bits of the other language leak through simultaneously. So if you give bimodal bilinguals a task where they watch a cartoon and then narrate it to someone else, regardless of whether they choose sign or spoken language as the narrative language, they'll spontaneously provide translations for some individual words in the other modality, completely unconsciously.
- In addition, in ASL certain grammatical structures, like questions, are marked on the face via things like raised or furrowed eyebrows. When speaking English, English-ASL bilinguals will often produce those facial markings unconsciously, which leads to a phenomenon where people who aren't fluent in sign often think the signer is expressing various emotions when in fact they're just marking grammatical structure. So even when there's an incentive to suppress the other language, they're not always entirely successful.
- However! Clearly there is some suppression going on: bimodal bilinguals don't sign every word they verbalise (or vice versa), and verbal bilinguals aren't constantly coming out with weird gibberish from trying to speak both languages simultaneously.
- This leads us to the hypothesis that bilinguals have better cognitive control than monolinguals, because even just speaking is an exercise in choosing one language and suppressing input from the other. It turns out this hypothesis is correct: when you give bilinguals various tasks that involve ignoring extraneous information, they tend to be both faster and more accurate than monolinguals.
- But it's not entirely clear from the literature whether the advantage comes from the perception side (needing to be able to categorise and comprehend multiple different kinds of input) or the production side (since you only have a single tongue you can't speak multiple languages simultaneously)
- Bimodal bilinguals give us a way to separate out which bits of bilingualism are responsible for which aspects of increased cognitive control. Verbal bilinguals can only speak one language at a time, which requires suppressing the other language. Bimodal bilinguals can produce both languages simultaneously (up to a point - the syntaxes are different, but we'll ignore that for now), so while they may gain some cognitive advantages from bilingualism, they don't need to put as much effort into suppressing their other language.
- This hypothesis is borne out! There are some tasks on which bimodal bilinguals pattern with monolinguals instead of unimodal bilinguals, and other tasks on which any kind of bilingualism is an advantage over monolingualism. Unfortunately my memory fails me here as to which tasks were which.
- Sidenote: you may be curious about what would happen if you tested people who are bilingual in two different signed languages. Unfortunately this isn't really testable, because essentially all signers are also bilingual in the spoken language of the place where they grew up. So someone who knows two signed languages would actually be trilingual, and therefore not a fair comparison to the bilinguals.
- Sidenote the second: there are intriguing results showing Italian signers simultaneously producing syntactically correct sentences in both Italian Sign Language and spoken Italian, even though the two have very different syntax. One theory is that they could do this because the participants weren't choosing a dominant syntactic structure but a dominant morphological structure, and the morphological marking is similar enough in both languages to support simultaneous production. But that's mostly speculative at the moment.