erratio: (Default)
So on Tuesday we had a guest talk by one Karen Emmorey, described by one of our faculty as "the foremost researcher on Sign neurolinguistics". And it was a great talk. I will now do her talk approximately zero justice by attempting to summarise the coolest parts several days after the fact.
  • If you're bilingual (or n-lingual for any n>1), both languages are 'always on'. The main way this has been shown is through various kinds of Stroop tasks and eye-gaze tasks, where the word you're interested in (let's say 'marker') is similar in some way to a word in your other language ('marka', which means 'stamp' in Russian, I think), and your eye gaze or response time reflects the fact that you've been distracted by the word in your other language.
  • The other way we know that both languages are always on is from people who are bilingual in a sign language and a spoken language, because when they talk (in either modality) bits of the other language leak through simultaneously. So if you give bimodal bilinguals a task where they watch a cartoon and then narrate it to someone else, regardless of whether they choose sign or spoken language as the narrative language, they'll spontaneously provide translations for some individual words in the other modality, completely unconsciously.
  • In addition, in ASL certain grammatical structures like questions are marked on the face, via things like raised or furrowed eyebrows. When speaking English, English-ASL bilinguals will often produce those facial markings unconsciously, which leads to a phenomenon where people who aren't fluent in sign often think that the signer is expressing various emotions when in fact they're just marking grammatical structure. So even when there's an incentive to suppress the other language, they're not always entirely successful.
  • However! Clearly there is some suppression going on: bimodal bilinguals don't sign every word they verbalise (or vice versa), and verbal bilinguals aren't constantly coming out with weird gibberish from trying to speak both languages simultaneously.
  • This leads us to the hypothesis that bilinguals have better cognitive control than monolinguals, because even just speaking is an exercise in choosing one language and suppressing input from the other. Turns out this hypothesis is correct - when you give bilinguals various tasks that involve ignoring extraneous information, they tend to be both faster and more accurate than monolinguals.
  • But it's not entirely clear from the literature whether the advantage comes from the perception side (needing to be able to categorise and comprehend multiple different kinds of input) or the production side (since you only have a single tongue you can't speak multiple languages simultaneously)
  • Bimodal bilinguals give us a way to separate out which bits of bilingualism are responsible for which aspects of increased cognitive control. Verbal bilinguals can only speak one language at a time, which requires suppressing the other language. Bimodal bilinguals can produce both languages simultaneously (up to a point - the syntaxes differ, but we'll ignore that for now), so while they may get some cognitive advantages from bilingualism, they don't need to put as much effort into suppressing their other language.
  • This hypothesis is borne out! There are some tasks for which bimodal bilinguals pattern with monolinguals instead of unimodal bilinguals, and other tasks where any kind of bilinguality is an advantage over monolingualism. Unfortunately my memory fails me here as to which tasks were which.
  • Sidenote: You may be curious about what would happen if you tested people who are bilingual in two different signed languages. Unfortunately this isn't really a testable question, because essentially all signers are also bilingual in the spoken language of the area where they grew up. So someone who knows two signed languages would actually be trilingual, and therefore not a fair comparison to the bilinguals.
  • Sidenote the second: there are intriguing results with Italian signers simultaneously producing syntactically correct sentences in both Italian Sign Language and Italian even though they have very different syntax. There is a theory that they could do this because the participants weren't choosing a dominant syntactic structure, but a dominant morphological structure, and the morphological marking is similar enough in both languages to support simultaneous production. But that's mostly speculative at the moment.
Some background on Nicaraguan Sign Language (NSL):
NSL is about 30 years old now. It began when a special school for the education of deaf children was established in Managua, allowing deaf children from age 4, who'd previously only had very ad hoc systems of home sign with their families, to interact with 25-30 other deaf kids, each of whom brought their own idiosyncratic home sign system/language with them. When those kids interacted they created a completely new, richer sign language. And since it's a school, every year another cohort of 25 or so new kids would come in, and those kids in turn expanded on the system created by the first few cohorts, and so on until NSL reached its current status as basically a full-fledged language, created within the last 30 years from virtually nothing and without major contamination from other languages. Basically, it's a linguist's dream language: we have detailed records of what the language looked like at each stage of growth (more on this in a moment), so we can literally see the grammar unfolding over time rather than having to guess, like we do for pretty much every other language. And it turns out we can also see which cognitive functions language is and isn't necessary for, which is pretty cool. More on this in a moment too.
The way the school works is that kids attend from age 4 until age 14, at which point they graduate and are henceforth allowed to hang out at the deaf club, but are (obviously) no longer at school. There are also often older kids just starting out who couldn't attend previously; their language development is notably worse, since they haven't had access to a decent source of language until that point. The early cohorts didn't really hang out with each other outside of the school context, but the later ones, having grown up in the age of cell phones, do. At school the kids get several hours a day to hang out with each other - during food/play times, on the school bus (some of them live as much as 2 hours away, so that's a lot of time to socialise with the other kids), behind the teacher's back... The schooling is pretty much all done in Spanish, mostly with non-signing teachers. As you might expect, not a whole lot of regular school learning actually goes on, although more recently they've started hiring adults from the first generation of signers as teachers so they can actually communicate. Plus, texting via cell phone means the kids are way more incentivised to learn to read than the first generations were.

On sign languages in general: when signs are coined they are often iconic in some way or other. For example, the sign for a king may be the action of putting on a crown, or the sign for a cat might be drawing imaginary whiskers on your face. But there's nothing principled about which iconic aspect of a thing or action will become encoded as a sign, and signs tend to get less iconic over time.

So, Ann Senghas. She's been going down to this school for the deaf every summer for many years now, documenting their language, getting them to complete various linguistics tasks, and so on. And now, onto the pithy details of the talk, listed in bullet point form as usual because I'm lazy and can't be bothered with trivialities like "good writing".

* The NSL signers can be split into roughly three generations, descriptively called first, second, and third. The first generation started school in the '70s, the second in the '80s, and the third in the '90s.
* If you look at a video of each generation signing, there aren't any obvious differences at first, except in speed - the first generation signs slowly compared to the second, which signs slowly compared to the third. But they're all clearly using language, not pantomiming or gesturing.
* However, if you look more closely, there are bigger differences. Two that we saw today were the expression of number and the expression of space. Others that were mentioned include the expression of path/manner of movement, syntax, theory-of-mind stuff, and general 'with it'-ness.
* On path/manner of movement: where the first and second generation would express a ball rolling down a hill by more or less pantomiming an object rolling down, the third generation would express a ball rolling down a hill by first indicating a rolling thing and then indicating a descent.
* On syntax: for the earlier generations, verbs could only take a single argument each, so "the boy fed the woman" would be expressed as "woman sit; boy feed"
* On the expression of number: the first generation would express number the same way we non-signers generally would: 15 would be 5 on both hands followed by 5 on one hand. The second generation developed a more efficient (one-handed, faster) system that builds on the first generation's: a girl counting to 10 counted the first 5 normally on one hand, then counted 1-5 again on the same hand but accompanied by a slight twist. Another girl, asked to express the number 15, did so by first indicating a 1, then moving her hand slightly to one side, then indicating a 5 (so basically a 1 in the tens column and a 5 in the units). Kids in the third generation came up with a new system altogether that loses a lot of the transparency but is even faster and more compact: 15 is expressed by holding the middle finger towards the palm with the thumb (imagine you're trying to form a ring with your thumb and middle finger - this represents 10) and then flicking it outwards to show 5 fingers. Apparently the older generations understand these kinds of signs but are disdainful of them - "they don't even look like numbers, it's just a flick!". This pattern exemplifies the generational differences: the first generation is functional but not particularly efficient, the second generation makes some kind of systematic improvement that lets them express themselves more efficiently, and the third generation as often as not comes up with something far more abstract that bears very little iconic resemblance to its meaning.
* On the expression of space: there's a task linguists sometimes use that goes as follows: person A describes a simple picture to person B, who then picks the matching picture on their side of the test area. In this case the pictures were of a tree and a man, where the man would be standing either to the left or right of the tree and could be facing towards the tree, away from it, towards the audience, or away from the audience. Ann Senghas gave this task to her signers to find out how they expressed spatial concepts. Instead what she found was that the first generation failed the task - they couldn't encode spatial relations and performed at chance. In the later generations everyone could do it just fine. We were shown a video of the task being done by a first-generation signer and her third-generation nephew, where during a break in the task she asked him to explain how to get it right. The kid does a pretty good job of explaining something that must have seemed ridiculously obvious to him - if the person is on this side of the tree then you put them like so, otherwise you put them on the other side like so. This isn't something you can practise, you just look and then do it. Easy! (very rough paraphrase from memory). She did not get it.
* On theory of mind and 'with it'-ness: the first generation fails at second-order theory of mind, i.e. situations where you have to represent what you know that I know. They're also a lot less 'with it' in general - when Senghas is trying to coordinate meetings and the like with them, they're just a lot worse at it. They're also much worse at metalinguistic tasks - being aware of how you express things.

There's been some interesting talks lately, but today was the first one in a while that made me think "I should blog about that". But since I also would like records of the other talks, I'm going to start trying to summarise the ones I found interesting.

Julie Van Dyke - on language processing using cue-based retrieval
* Language processing is really heavily dependent on working memory.

* But we don't actually know much about working memory (eg. how much of it we have), so to be safe let's assume that a hypothetical person can only remember the last item they heard/read. This isn't as insane as it sounds - computer models have indicated that processing can do pretty well even with such an impoverished working memory. Everything that isn't in active working memory is absorbed passively and can be called upon (albeit not as easily)

* So let's consider a few hypothetical sentences: 1) the book ripped; 2) the book recommended by the editor ripped; 3) the book from Michigan by Anne Rice that was recommended by the editor ripped. How does a listener tell whether 'ripped' forms a grammatical sentence with 'the book'? There are a few possibilities: they could search forwards or backwards through the sentence, in which case you would expect processing times to reflect the amount of material between "the book" and "ripped". Or they could do cue-based retrieval, where they filter the sentence for words that have the features they're looking for, in which case you wouldn't expect any significant difference in retrieval time. As the name of the talk might suggest, people use cue-based retrieval.
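As a toy illustration of the contrast (my own sketch, not Van Dyke's actual model - the feature bundles and function names are invented), serial search costs more the further back the target sits, while cue-based retrieval checks everything in one go:

```python
# Toy contrast between backward serial search and cue-based retrieval.
# Each word is stored as a bundle of features; we want the noun that
# can serve as the subject of "ripped".

sentence = [
    {"word": "the", "features": set()},
    {"word": "book", "features": {"noun", "subject-capable"}},
    {"word": "recommended", "features": {"verb"}},
    {"word": "by", "features": set()},
    {"word": "the", "features": set()},
    {"word": "editor", "features": {"noun"}},
]

def serial_search(sentence, cues):
    """Walk backwards from the end; cost grows with intervening material."""
    steps = 0
    for item in reversed(sentence):
        steps += 1
        if cues <= item["features"]:  # cues is a subset of the features
            return item["word"], steps
    return None, steps

def cue_based_retrieval(sentence, cues):
    """All items are checked at once; cost doesn't depend on distance."""
    matches = [item["word"] for item in sentence if cues <= item["features"]]
    return matches, 1  # one parallel access, wherever the match sits

print(serial_search(sentence, {"noun", "subject-capable"}))       # ('book', 5)
print(cue_based_retrieval(sentence, {"noun", "subject-capable"})) # (['book'], 1)
```

Under serial search, sentences 2 and 3 above should take progressively longer than sentence 1; under cue-based retrieval they shouldn't.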

* So now we have a model where we store words as bundles of semantic/phonological/etc features and then retrieve them by using those features. But what if the sentence has several possible words that have the features you're looking for? In that case, retrieval might get blocked due to interference from the other items. This, according to Julie Van Dyke, is why people forget. (I don't know whether she meant in general or when processing sentences. Hopefully the latter)
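The interference idea can be sketched the same way (again my own illustration, not Van Dyke's implementation): when more than one stored item matches the retrieval cues, retrieval is ambiguous, and richer feature bundles are what resolve it.

```python
# Toy illustration of retrieval interference: two stored items share
# the cued features, so cue-based retrieval can't single one out.

memory = [
    {"word": "book",   "features": {"noun", "singular"}},
    {"word": "editor", "features": {"noun", "singular", "animate"}},
]

def retrieve(memory, cues):
    """Return every item whose feature bundle contains all the cues."""
    return [item["word"] for item in memory if cues <= item["features"]]

print(retrieve(memory, {"noun", "singular"}))             # ['book', 'editor'] - interference
print(retrieve(memory, {"noun", "singular", "animate"}))  # ['editor'] - unique match
```

This is also the intuition behind the last point below: the more detailed the feature bundles, the more likely a cue set picks out exactly one item.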

* And the main difference between people who are good at processing (eg. fast readers) and those who aren't is almost entirely in how detailed their representations are. If your word representations are super detailed, with lots of features, it's easier to zero in on them. And, good news, the main factor in how good your representations are (after controlling for IQ and a bunch of other bothersome details) is practice. So if you suck at reading, all you need to do to fix it is read more.

