erratio: (Default)
So on Tuesday we had a guest talk by one Karen Emmorey, described by one of our faculty as "the foremost researcher on Sign neurolinguistics". And it was a great talk. I will now do her talk approximately zero justice by attempting to summarise the coolest parts several days after the fact.
  •  If you're bilingual (or n-lingual for any n>1) both languages are 'always on'. This has mainly been shown through various kinds of Stroop tasks and eye gaze tasks, where the word you're interested in (let's say 'marker') is similar in some way to a word in your other language ('marka', which I think means 'stamp' in Russian), and your eye gaze or response time reflects the fact that you've been distracted by the word in your other language
  • The other way we know that both languages are always on is from people who are bilingual in a sign language and a spoken language, because when they talk (in either modality) bits of the other language leak through simultaneously. So if you give bimodal bilinguals a task where they watch a cartoon and then narrate it to someone else, regardless of whether they choose sign or spoken language as the narrative language, they'll spontaneously provide translations for some individual words in the other modality, completely unconsciously.
  • In addition, in ASL certain grammatical structures like questions are marked on the face, via things like raised or furrowed eyebrows. When speaking in English, English-ASL bilinguals will often do those facial markings unconsciously, which leads to a phenomenon where people who aren't fluent in sign often think that the ASL speaker is expressing various emotions when in fact they're just marking grammatical structures. So even when there's incentive for them to suppress the other language, they're not always entirely successful
  • However! Clearly there is some suppression going on. The bimodal bilinguals don't sign every word they verbalise (or vice versa), and verbal bilinguals aren't constantly coming out with weird gibberish from trying to speak both languages simultaneously
  • This leads us to the hypothesis that bilinguals have better cognitive control than monolinguals, because even just speaking is this exercise in choosing one language and suppressing the input from the other. Turns out this hypothesis is correct  - when you give bilinguals various tasks that involve ignoring extraneous information, they tend to be both faster and more accurate than monolinguals
  • But it's not entirely clear from the literature whether the advantage comes from the perception side (needing to be able to categorise and comprehend multiple different kinds of input) or the production side (since you only have a single tongue you can't speak multiple languages simultaneously)
  • Bimodal bilinguals give us a way to separate out which bits of bilingualism are responsible for which aspects of increased cognitive control. For verbal bilinguals, they can only speak one language at a time, which requires suppressing the other language. For bimodal bilinguals, you can speak both languages simultaneously (up to a point - the syntax is different but we'll ignore that for now) so while they may have some cognitive advantages from bilingualism, they don't need to put as much effort into suppressing their other language
  • This hypothesis is borne out! There are some tasks for which bimodal bilinguals pattern with monolinguals instead of unimodal bilinguals, and other tasks where any kind of bilinguality is an advantage over monolingualism. Unfortunately my memory fails me here as to which tasks were which.
  • Sidenote: You may be curious about what would happen if you tested people who are bilingual in two different signed languages. Unfortunately this isn't really testable, because all signers are bilingual in their native sign language and the spoken language of the place where they grew up. So someone who knows two signed languages would actually be trilingual and therefore not a fair comparison to the bilinguals.
  • Sidenote the second: there are intriguing results with Italian signers simultaneously producing syntactically correct sentences in both Italian Sign Language and Italian even though they have very different syntax. There is a theory that they could do this because the participants weren't choosing a dominant syntactic structure, but a dominant morphological structure, and the morphological marking is similar enough in both languages to support simultaneous production. But that's mostly speculative at the moment.
erratio: (Default)
So as part of phonetics class we watched a documentary about deafness (no, I'm not entirely sure why this was in phonetics class either) called Sound and Fury, following two related families with deaf children and their discussions about whether or not to get the kids cochlear implants.

Anyway, in the end the deaf parents of the deaf 5-year-old girl decided not to get her a cochlear implant, while the hearing parents (themselves from deaf families) of the deaf 11-month-old boy decided to get him the implant. I can't help feeling that in the end all the justifications and reasoning given came down to wanting their kid to be like them. And I've moved from thinking that it's their kid and they have the right to raise them in their culture, to thinking that raising them in your culture is fine but they should probably also get the implants.

Reasons given by the deaf parents (and other people in the deaf community) for not wanting the implant:
* it's unnatural, it will make them like a robot
* it's their body, they should be allowed to make the decision themselves when they're older (except that if you don't learn how to speak relatively young it's basically impossible to learn to speak fluently later)
* you must be ashamed of Deaf culture/you think we're not good enough/the child won't have a Deaf identity/I'm deaf and I did just fine
* it doesn't even work that well (I think the figure was 20% getting good usage out of it, and the kid has to spend a lot of time in hearing environments or they'll fall back on signing too much)
* Deaf culture will go extinct if everyone gets implants

Reasons given by the hearing parents and grandparents for wanting the implant
* The kids should not have to grow up isolated and made fun of or stared at for being deaf
* It's a disability; if your whole family were crippled you would jump on a treatment that fixed it. Why should deafness be any different?
* It will give them access to both the Deaf and hearing world/it will open up more potential choices
* A lot of deaf kids get a terrible education; the average reading level of a deaf high schooler is 4th grade (might be partly cos deaf kids are effectively forced to learn a foreign language in order to read, since sign languages have wildly different syntax and morphology to English, and the phonology is completely untranslatable)

The family who opted to keep their kid implant-free ended up moving to Maryland to live in a much bigger deaf community next to an awesome school for the deaf, where random people in the supermarket and restaurants would often know a bit of sign. One thing the father said as this move was in progress really stood out for me as hammering home just how much identity politics plays a role in deafness in the US: "[at our old home] I felt caged in, like they wanted to jail me. Here, I feel comfortable and safe". The grandmother accuses them of trying to escape and of trying to put up a fence, both of which felt apt to me. If the hearing world wants to cage deaf people, then by moving into a deaf community the family is basically just choosing a big enough cage for themselves that they can pretend the walls don't exist.

Also of interest: some of the deaf community did have managerial jobs in hearing workplaces. They relied heavily on email, writing and interpreters, but they got by okay. And the hearing mother, whose parents are both deaf, had a crappy time growing up - she had to get years of speech therapy because her deaf family couldn't provide a speech environment for her to learn from, she had to put up with the other kids making fun of her parents and deaf people constantly, and she spent a lot of her time playing interpreter for her parents.

Phonetics

Sep. 19th, 2012 09:46 pm
erratio: (Default)
In case you still had the misconception that you're able to perceive reality as it really is, try taking a phonetics course. Specifically, try to learn how to transcribe speech.

As a student of linguistics, I of course already knew that the brain does a ton of work converting sound streams into recognisable speech. I already knew that speech sounds are affected by their surrounding environment, and that my brain does cool compensatory stuff and predictive work to let me easily pick out individual words and meanings with remarkably few mistakes.

But a couple of days ago we did our first real-time transcription exercise in class, where someone reads out a bunch of words in isolation, and those things are *hard*. That sound at the end of the word - was it an unreleased [t]? A glottal stop? Nothing at all? Something else entirely? That word pronounced in an American accent that sounded kind of like 'cut' to me was actually 'cot', and if there'd been any context I would have easily gotten the right word and wouldn't even have noticed that the vowel sounds more like [u] than [o] to me.

Basically what I'm saying is that I now have a new visceral appreciation for how hard this stuff is.



Other classes I'm taking this semester include:
How to use Google and Twitter searches as totally legitimate sources of linguistic research (aka Experimental Syntax)
Assigning stress to words and phrases is way harder than you would naively expect (aka Foot Structure)
Fancy mathematical formalisms you can use as a framework to understand syntax instead of making up plausible sounding crap that doesn't really engage with most of the previous literature (aka Compositional Syntax)
erratio: (Default)
Some background on Nicaraguan Sign Language (NSL):
NSL is about 30 years old now. It began when a special school for the education of deaf children was established in Managua, allowing deaf children from age 4, who'd previously only had very ad hoc systems of home sign with their families, to interact with 25-30 other deaf kids, each of whom brought their own idiosyncratic home sign system/language with them. When those kids interacted they created a completely new, richer sign language. And then since it's a school, every year another cohort of 25 or so new kids would come in, and those kids in turn expanded on the original system created by the first few cohorts, and so on until NSL reached its current status where it's basically a full-fledged language, created within the last 30 years from virtually nothing and without major contamination from other languages. Basically, it's a linguist's dream language, because we have detailed records of what the language looked like at each stage of growth (more on this in a moment) and so we can literally see the grammar unfolding over time rather than having to guess, like we do for pretty much every other language. And it turns out that we can also see what cognitive functions language is and isn't necessary for, which is pretty cool. More on this in a moment too.
The way this school works is that kids go from age 4 until age 14, at which stage they graduate and henceforth are allowed to hang out at the deaf club, but are (obviously) no longer at school. There are also often older kids just starting who couldn't come previously, and their language development is obviously not as good since they haven't had access to a decent source of language until that point. The early cohorts didn't really hang out with each other outside of the school context, but the later ones, having grown up in the age of cell phones, do. At school, the kids get several hours a day to hang out with each other - during food/play times, on the school bus (some of them live as much as 2 hours away, so that's a lot of time to socialise with the other kids), behind the teacher's back... The schooling is pretty much all done in Spanish and mostly with non-signing teachers. As you might expect, not a whole lot of regular school learning actually goes on, although more recently they've started hiring adults from the first generation of signers as teachers so they can actually communicate. Plus, texting via cell phones means the kids are way more incentivised to learn to read than the first generations were.

On sign languages in general: when signs are coined they are often iconic in some way or other. For example, the sign for a king may be the action of putting a crown on, or the sign for a cat might be drawing imaginary whiskers on your face. But there's nothing principled about what iconic aspect of a thing or action will become encoded as a sign, and signs tend to get less iconic over time.

So, Ann Senghas. She's been going down to this school for the deaf every summer for many years now, documenting their language, getting them to complete various linguistics tasks, and so on. And now, onto the pithy details of the talk, listed in bullet point form as usual because I'm lazy and can't be bothered with trivialities like "good writing".

* The NSL signers can be split into roughly 3 generations, descriptively called first, second, and third. The first generation started school in the '70s, the second in the '80s, the third in the '90s.
* If you look at a video of each generation signing, there aren't any obvious differences at first, except in speed - the first generation is slow compared to the second, which is slow compared to the third. But they're all clearly using language, not pantomiming or gesturing.
* However if you look more closely, there are bigger differences. Two ways that we saw today included the expression of number and expression of space. Others that were mentioned include expression of path/manner of movement, syntax, theory of mind stuff, and general 'with it'-ness
* On path/manner of movement: where the first and second generation would express a ball rolling down a hill by more or less pantomiming an object rolling down, the third generation would express a ball rolling down a hill by first indicating a rolling thing and then indicating a descent.
* On syntax: for the earlier generations, verbs could only take a single argument each, so "the boy fed the woman" would be expressed as "woman sit; boy feed"
* On expression of number: the first generation would express number the same way we non-signers generally would: 15 would be 5 on both hands followed by 5 on one hand. The second generation developed a more efficient (one-handed, faster) system that builds on that of the first generation: a girl counting to 10 counted the first 5 normally on one hand, then counted 1-5 again on the same hand but accompanied by a slight twist. Another girl, asked to express the number 15, did so by first indicating a 1, then moving her hand slightly to one side, then indicating a 5 (so basically a 1 in the tens column and a 5 in the units). Kids in the third generation came up with a new system altogether that loses a lot of the transparency but is even faster and more compact: 15 is expressed by holding the middle finger towards the palm with the thumb (imagine you're trying to form a ring with your thumb and middle finger - this represents 10) and then flicking it outwards to show 5 fingers. Apparently the older generations understand these kinds of signs but are disdainful of them - "they don't even look like numbers, it's just a flick!". This kind of pattern exemplifies the different generations: the first generation is functional but not particularly efficient, the second generation has some kind of systematic improvement that lets them express themselves more efficiently, and the third generation as often as not will come up with something way more abstract that bears very little iconic resemblance to its meaning.
* On expression of space: there's a task linguists sometimes get people to do that goes as follows: person A has to describe a simple picture to person B, who then picks the matching picture on their side of the test area. In this case the pictures were of a tree and a man, where the man would be standing either to the left or right of the tree and could be facing towards or away from the tree, or out to the audience or away from it. Ann Senghas gave this task to her signers to find out how they expressed spatial concepts. Instead what she found was that the first generation failed the task - they couldn't encode spatial relations and performed at chance. In the later generations everyone could do it just fine. We were shown a video of the task being done by a first generation speaker and her third generation nephew, where during a break in the task she asked him to explain how to get it right. The kid does a pretty good job of explaining something that must have seemed ridiculously obvious to him - if the person is on this side of the tree then you put them like so, otherwise you put them on the other like so. This isn't something you can practise, you just look and then do it. Easy! (very rough paraphrase from memory). She did not get it.
* On theory of mind and 'with it'-ness: the first generation fails at second-order theory of mind, aka situations where you have to express what you know that I know. They're also a lot less 'with it' in general - like when Senghas is trying to coordinate with them for meetings and such, they're just a lot less good at it. They're also way less good at metalinguistic stuff - being aware of how you express things.


erratio: (Default)
I'm currently running a mini experiment on how native speakers of various dialects of English pronounce 'a' in some nonsense words. It only takes a few minutes and would help me out greatly. Results can be either left as a comment below this post or emailed to evil dot jen at gmail dot com. Thanks!

PS: If anyone's interested in the results or context for this, I'd be happy to post them up in a day or three when I've got all the results aggregated.

erratio: (Default)
There have been some interesting talks lately, but today was the first one in a while that made me think "I should blog about that". But since I'd also like records of the other talks, I'm going to start trying to summarise the ones I found interesting.

Julie Van Dyke - on language processing using cue retrieval
* Language processing is really heavily dependent on working memory.

* But we don't actually know much about working memory (eg. how much of it we have), so to be safe let's assume that a hypothetical person can only remember the last item they heard/read. This isn't as insane as it sounds - computer models have indicated that processing can do pretty well even with such an impoverished working memory. Everything that isn't in active working memory is absorbed passively and can be called upon (albeit not as easily)

* So let's consider a few hypothetical sentences: 1) the book ripped 2) the book recommended by the editor ripped 3) the book from Michigan by Anne Rice that was recommended by the editor ripped. How does a listener tell if 'ripped' forms a grammatical sentence with 'the book'? There are a few ways: they could search forwards or backwards through the sentence, in which case you would expect processing times to reflect the amount of material between "the book" and "ripped". Or you could do cue-based retrieval, where you filter the sentence for words that have the features you're looking for, in which case you wouldn't expect a significant time difference in retrieval. As the name of the talk might suggest, people use cue-based retrieval. (There's a toy sketch of the two strategies at the end of this post.)

* So now we have a model where we store words as bundles of semantic/phonological/etc features and then retrieve them by using those features. But what if the sentence has several possible words that have the features you're looking for? In that case, retrieval might get blocked due to interference from the other items. This, according to Julie Van Dyke, is why people forget. (I don't know whether she meant in general or when processing sentences. Hopefully the latter)

* And the main difference between people who are good at processing (eg. fast readers) vs those who aren't is almost entirely how detailed their representations are. Because if your word representations are super detailed with lots of features, it's easier to zero in on them. And, good news, the main factor in how good your representations are (after controlling for IQ and a bunch of other bothersome details) is practice. So if you suck at reading, all you need to do to fix it is read more.
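
Here's the toy sketch mentioned above. It's entirely my own illustration, not anything from the talk: the feature bundles and the step-counting are invented, and the point is just that serial search costs grow with the amount of intervening material while cue-based (content-addressed) retrieval doesn't.

```python
# A toy contrast between serial backward search and cue-based retrieval.
# Feature sets and the cost accounting are made up for illustration only.

def backward_search(memory, cues):
    """Scan from the most recent item backwards; cost grows with distance."""
    steps = 0
    for item in reversed(memory):
        steps += 1
        if cues <= item["features"]:          # all cue features present?
            return item["form"], steps
    return None, steps

def cue_based_retrieval(memory, cues):
    """Match every stored item against the cues at once; cost is flat."""
    matches = [item["form"] for item in memory if cues <= item["features"]]
    return matches, 1

# "The book recommended by the editor ripped"
memory = [
    {"form": "the book",    "features": {"noun", "subject", "inanimate"}},
    {"form": "the editor",  "features": {"noun", "animate"}},
    {"form": "recommended", "features": {"verb"}},
]
cues = {"noun", "subject"}                    # what 'ripped' needs for its subject

print(backward_search(memory, cues))          # ('the book', 3) - cost reflects distance
print(cue_based_retrieval(memory, cues))      # (['the book'], 1) - distance-insensitive
```

Note that if 'the editor' also carried the subject-relevant features, the cue-based version would return two matches - which is exactly the interference story in the forgetting bullet above.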
erratio: (Default)
This is a midpoint-rooted phoneme tree produced using a Bayesian approach, from a paper whose sole purpose is to debunk a recent-ish high profile paper that claimed a language's phoneme inventory is correlated with how far away it is from Africa, thus providing support for the out-of-Africa story of human origins.

What this picture shows: something to do with phonemic relatedness and geographic clustering and how they don't correlate particularly well. I'm not really sure, to be honest, I've never used that technique or read a paper that used it before, so my being a linguist isn't particularly helpful here. Although I will assume that the different colours correspond to different language families, since that's pretty standard.



Source: Language Log, which also goes into lots more detail about the paper and has lots of other colourful graphs, although none as cool as this one.
erratio: (Default)
Last week we had a presentation by Keith Chen, who you may remember as the economist/management guy who caused a bit of a furor by claiming that the way any given language marks the future tense is a good predictor of future-oriented behaviours like saving money, taking care of health, and that sort of thing.

Some background: If you know about hyperbolic discounting it's probably a no-brainer to hear that if you give people a 401k/superannuation form with a picture of themselves now they'll put down lower payments than if you present them with a photo of themselves photoshopped to look old. The reason for this is that as a species we like to enjoy the good times now and make our future selves pay for it, but if you make people identify more with their future selves then they don't feel quite so great about putting stuff off that way.

With that out of the way, Chen's presentation can be summarised more or less as follows: Languages vary in the way they mark the future, with some languages like Hebrew forcing you to express the future explicitly whenever you're talking about future events and other languages like Chinese letting you talk about the future without using overt tense markers. For languages like German or English which have both overt (I will eat breakfast tomorrow) and covert (I am going to eat breakfast tomorrow) ways of talking about the future that can be used more or less interchangeably, the linguists who did these surveys looked at the frequency of each type and categorised the language as weak or strong future-marking based on the most frequent forms.
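
To make that classification step concrete, here is a minimal sketch of the kind of frequency-based decision being described. This is my own toy version, not the actual survey methodology: the marker list, the string-matching heuristic, and the majority threshold are all assumptions made for the example.

```python
# Toy classifier: given sentences already known to refer to future events,
# call the language strong future-marking if overt marking is the majority
# pattern. Marker list and 0.5 threshold are invented for illustration.
OVERT_FUTURE_MARKERS = ["will", "shall", "going to"]

def is_overtly_marked(sentence: str) -> bool:
    s = sentence.lower()
    return any(marker in s for marker in OVERT_FUTURE_MARKERS)

def classify_future_marking(future_sentences: list[str]) -> str:
    overt = sum(is_overtly_marked(s) for s in future_sentences)
    share = overt / len(future_sentences)
    return "strong future-marking" if share > 0.5 else "weak future-marking"

sample = [
    "I will eat breakfast tomorrow",
    "I am going to eat breakfast tomorrow",
    "I am eating breakfast tomorrow",
]
print(classify_future_marking(sample))   # 'strong future-marking' for this toy sample
```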

The modern version of the Sapir-Whorf hypothesis says that language structure has weak priming effects on speakers. For example, in languages where the grammatical gender of "bridge" is feminine, people are more likely to describe bridges as "graceful" or "soaring", whereas speakers of languages that have masculine bridges are more likely to describe them as "strong" or "durable". (See Lera Boroditsky's publications page for lots more experiments of this type.)

Chen used a bunch of census type data that included the language spoken at home to find that even after you control for a fairly impressive array of confounding factors, there was a strong correlation between savings behaviour and type of future marking in a language. In fact, he claimed, language type is a better predictor of savings behaviour than a bunch of other factors that economists usually consider to be pretty important, including trifling considerations like the country's economy.
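
Roughly, the shape of that claim is a regression of savings behaviour on a language-type indicator plus controls. The sketch below is not Chen's analysis: the data are synthetic, the effect is built in by construction, and every column name is invented. It only shows the kind of model the correlation claim refers to, assuming pandas and statsmodels are available.

```python
# Synthetic illustration of "savings ~ future-marking type + controls".
# All variable names, effect sizes, and the data itself are made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "strong_ftr": rng.integers(0, 2, n),        # 1 = language obligatorily marks the future
    "age": rng.integers(20, 70, n),
    "log_income": rng.normal(10.0, 1.0, n),
    "country": rng.choice(["A", "B", "C"], n),  # stand-in for country-level controls
})
# Build a fake outcome in which strong-FTR speakers save less, by construction.
logit_p = 0.5 - 0.4 * df["strong_ftr"] + 0.01 * (df["age"] - 45)
df["saved_this_year"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression with country fixed effects; the coefficient on strong_ftr
# is the quantity the correlation claim is about.
fit = smf.logit("saved_this_year ~ strong_ftr + age + log_income + C(country)",
                data=df).fit()
print(fit.summary())
```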

Overall story: Every time you use an overt future marker you are priming yourself to think of future-you as a different person to present-you, so you're more likely to do what makes present-you happy. Or alternatively, there is a common cause for the savings behaviour and speakers' choice of how they talk about the future.
erratio: (Default)
So, I just attended a talk by Carol Fowler, an academic who works in articulatory/gestural phonology. She is awesome and her talk was awesome - I was more impressed by it than I've been by most other talks, even though it's only barely related to my usual interests. I'm now going to try to capture as much of it as I can remember, and since it's about cogsci-ish stuff I'm putting it up here where other people with similar interests can read it, if they want. It's quite rambly though and I don't know enough about phonetics to really explain most of it clearly, so if you want the cool/easier to understand bits, skip down to the embodied cognition/mirror neuron section.

Relevant jargon: 
gestures: when applied to speech production refer to movements of the tongue, jaw, lips, etc.
formants: the frequencies of speech sounds. More specifically, most speech sounds show up on a spectrogram/graph/whatever as 2 or more lines at various heights, which our brain combines to produce a sound.
VOT - voice onset time: I'm not really sure what this is since it's outside my area, but it's one of the qualities of the speech signal that lets you distinguish between different sounds. VOTs vary across demographics in much the way you would expect, which is to say they vary between individuals but there are also broad gender/cultural trends.
TMS: transcranial magnetic stimulation - it's gotten popular recently as a way to shut off areas of the brain but apparently it's also what they use for stimulating specific muscles directly.
Stop - a consonant that involves full stoppage of the vocal tract. The most consonanty consonants that exist. Letters like p,t,g are stops.

Overall argument: a really big part of language perception relies on embodied cognition type stuff, because trying to reconstruct actual sounds from a continuous stream is really really hard.

Phonetics stuff: Speech signals are not consistent. For example, in /gi/ versus /gu/ the formants are what you would expect for the vowel parts (/i/ is high, /u/ is lower), but the onsets that make people hear the /g/ look like a little downtick in the first one and a little uptick in the second. The short noisy burst that is a stop looks almost exactly the same regardless of whether it's a /p/, /t/ or whatever, so Liberman guessed that listeners must be using articulatory information to disambiguate them. The proof of this can be seen in the McGurk effect, where you listen to one syllable while seeing another being mouthed, and what you perceive is usually somewhere between the two but closer to what you see. The McGurk effect has been replicated in multiple modalities, including one Carol recounted where the listener put their hand over her face while she mouthed various syllables, and another where people got a puff of air on their necks to mimic the aspiration of the /p/ in /pa/ versus the non-aspiration in /ba/. Writing is one of the few modalities that doesn't show the effect. When you put other syllables such as /ar/ or /al/ in front of an ambiguous /pa/ or /ga/, people hear it in a way that indicates they're overcompensating for the effects of coarticulation of the two consonants. (The other theory had to do with formants, but a Tamil linguist finally found a minimal pair to test both theories, and it came out in favour of overcompensation.)

The perception-by-synthesis argument: Someone (Liberman?) thinks people understand speech by modelling possible gestures until they find the correct one. Carol disagrees with this because in perception you only have a very limited time to work out what they said, and your brain doesn't enjoy being wrong because that means more work, so trial and error seems unlikely. Also, no one actually speaks identically so there's no way I can model what you said accurately in any case (although this one's kind of a weak argument, because it's a matter of getting close enough)

Mirror neurons and embodied cognition stuff: People primed with thoughts about old people moved more slowly on their way to the lift afterwards. Subjects made to hold a pen between their teeth, forcing their mouth into a 'smile', were more likely to perceive other faces as smiling. In speech perception, people who had a machine pulling their mouths up or down to mimic the mouth shape used when forming various vowels were more likely to hear ambiguous vowels as the one corresponding to their mouth shape, even though they weren't making that mouth shape deliberately. Similarly, when TMS was used to stimulate subjects' lips or tongue while they heard various consonants, they were more likely to perceive the consonant made with that part of the mouth. People watching other people walk have short bursts of neural activity corresponding to leg muscle movement; this kind of thing doesn't happen when watching a movement that isn't humanly possible, like wagging a tail. Similar effects show up in speech perception. When TMS was used to knock out part of the articulatory apparatus, people's speech perception suffered. When subjects had their jaw moved in a specific way by a machine such that it didn't actually affect how they produced a specific vowel, it still had an effect on how that vowel was perceived by those subjects later.

The chinchilla experiment: chinchillas kept in a US lab were successfully taught to distinguish between  /pa/ and /ba/. More interestingly, the acoustic properties they were picking up were specific to English - apparently the way English speakers distinguish between the two sounds (something to do with VOT's) is fairly unusual, most languages put the boundary somewhere slightly different. So that's evidence against there being a special human phonetic module for speech perception. Other animals can do it too, chinchillas are just the silliest and therefore one of the strongest examples of it not being anything special. But Carol mentioned that she's skeptical of this experiment and would like to see it replicated.

Questions: Do blind people have correspondingly worse speech perception since they lack a lot of the cross-modal information which is apparently so important? Studies of the mirror neuron/embodied cognition stuff in signed languages.
erratio: (Default)
 So it turns out I misremembered the study I saw about children not being able to lay down memories without language. The study I meant to refer to actually shows that children could only describe events using the vocabulary they had at the time the memory was encoded. Which in younger children meant that they couldn't describe it verbally at all. However, they were able to remember it, as evidenced by their ability to re-enact it and recognise photos of the activity involved.

And studies of young children involving conditioning show that even very young children are perfectly capable of remembering things (see this paper, which includes a description of tying a baby's foot to a mobile with string so that it could entertain itself, and then checking for how long the string or the mobile would still elicit the learnt kicking motions), so it's not that children are incapable of remembering events per se (although that paper does note that their memory of the mobile/string thing only lasted for a few weeks at the outer limit).

There's a bunch of other research out there, but most of it isn't solid and/or is hidden behind paywalls, so it's difficult to really check. But it looks like the 'context-specific' explanation is the leading one so far. I'd be really interested in trying to find other people who've undergone relatively severe paradigm shifts in the way they think, and see whether their memory from before the paradigm shift is worse than you would normally expect for older memories. Or is the preverbal -> verbal shift the only one big enough to potentially make all the previous states of mind completely inaccessible?
erratio: (Default)
 Apparently, thought isn't as dependent on language as we might naively think

Excerpt:

"The man she would call, ‘Ildefonso,’ had figured out how to survive, in part by simply copying those around him, but he had no idea what language was. Schaller found that he observed people’s lips and mouth moving, unaware that they were making sound, unaware that there was sound, trying to figure out what was happening from the movements of the mouths. She felt that he was frustrated because he thought everyone else could figur e things out from looking at each others’ moving mouths.
In contrast to the absolute inability Ildefonso had getting the idea of 'idea,' or his struggles with points in time, he clearly was capable of all sorts of tasks that suggest he was not mentally inert or completely vacant. He had survived into adulthood, crossed into the US, kept himself from being mowed down in traffic or starving to death. Moreover, he and other languageless individuals had apparently figured out ways to communicate without a shared language, which I find both phenomenally intriguing and difficult to even imagine (putting aside the definitional problem of distinguishing human communication from 'language' broadly construed).

Schaller highlights that learning language isolated Ildefonso from other languageless individuals. Schaller explains:

The only thing he said, which I think is fascinating and raises more questions than answers, is that he used to be able to talk to his other languageless friends. They found each other over the years. He said to me, “I think differently. I can’t remember how I thought.” I think that’s phenomenal!
"

That last part about not being able to remember how to think or talk to his languageless friends echoes other research that language is important for encoding memories (I don't have the link handy, but in short: young children's ability to remember stuff was shown to be strongly correlated with their progress in language acquisition). But the way Ildefonso is described above makes me think that the lack of ability to remember pre-language events might not be due to an inability to encode memories in the absence of linguistic symbols, but a result of not sharing enough mental context with your prelinguistic self to be able to retrieve them.
erratio: (Default)
[Poll #1166612]

My results prior to this:
Number of people asked: 6. 3 of them technically proficient, one proficient but not technical, 2 computer illiterates.

So far they have all said "mice". Only one of them says he's even heard of anyone else refer to them as "mouses".

I'm mostly hoping to gather enough data points to prove (for a certain value of prove) that the use of "mice" is the common usage these days and isn't just industry jargon, as my Ling lecturers seem to think. Of course, I could also be proven wrong, in which case I'll still email my lecturer with the results so that she can be amused at my expense. Either way, if you have any additional data points to add, go for it.

PS: I am aware of why the 'correct' plural is "mouses". And I still think that because it's an extension of an existing noun rather than the introduction of a new one it should follow the plural of the original.

PPS: I also think it should be "dynamic systems" rather than "dynamical systems". Any explanation as to why would be enlightening.
erratio: (Default)
So university session has started, giving me millions of papers that need to be read in advance and lots of interesting lecturers to listen to. And I noticed a strange trend which I hadn't noticed previously, namely this tendency to express adjectives that used to end in '-ic', like dynamic and problematic, with an '-al' suffix tacked on. So the AI lecturer has been telling us about dynamical systems, and just as I was thinking that I could float it past one of my linguistics lecturers I noticed that the textbook about grammar, co-written by them and set as the prescribed text for one of my courses, features the construction 'unproblematical'. Am I the only one who finds the construction odd-sounding and unnecessary? Not that it matters of course since it looks like it's the new accepted way to write and say them.
erratio: (Default)

The other day I was wondering: The language reflects the culture of a people, right? Like the Japanese have a million and one honorifics because social status is extremely important to them. So, how does gendering of a language reflect culture/shape people's thoughts? By gendering I mean languages like Hebrew which not only require verbs to agree with gender when talking about people but assign gender to all nouns based mostly on the way they sound.

The distribution of gendered nouns isn't quite what you would expect it to be; not all objects stereotypically used by females are of the feminine gender and vice versa. And when coining neologisms, how much consideration does gender get?

Also, one might think that a non-gendered language would belong to a culture that doesn't have strong gender roles, except English shows this to be a lie straight away. It started losing its genders in the Middle English period, which doesn't correlate at all with feminism etc.

erratio: (Default)
Yesterday I got permission from one of my Linguistics lecturers to do my final essay on the topic of leetspeak. Namely whether it's a register, dialect, or whatever. Or to put it more generally, what are the features of leetspeak and what pigeonhole can I put it in as a result :p
-------------------------------------


Today's XKCD comic is incredibly sad, especially the alt-text. So, being bored and all, I went to the forums to see what other people thought of it (it's been a looong time since he did a completely serious comic). And I found this:


You fit into me
like a hook into an eye

a fish hook
an open eye

It's a poem by Margaret Atwood, by the way.
erratio: (Default)
Recently I had a problem with a friend, wherein I told them something and they then went and told other friends of mine in a public place. Now technically this person wasn't in the wrong, as I hadn't specifically told them that I was uncomfortable having this information spread around without my permission. But I was unhappy about it because it was in a public place where others were close by, and one of the people they assumed it would be alright to tell hadn't heard about it from me. But I hadn't said so to them; I just assumed that they wouldn't talk about it, and it turned out I should in fact have told them directly.
And this got me thinking about the difference between discretion and tact.
The friend in question and I both agree that neither of us have much tact. If I want someone to do something I might as well tell them straight out, because I'm incapable of being indirect and any attempt to do so will come across as stupidly transparent. I've improved these days to the point where I can say things somewhat less hurtfully rather than just blurting it out any which way, but if I feel like the point needs to be made then virtually no force on earth will prevent me from saying it. My friend is much the same.
However I do have a lot of discretion. For me the default setting on any information I get is private rather than public. Unless the information is posted in some publicly accessible place, or is exceptionally good or important, I won't discuss it with other people unless I know for sure that they've been given access to it as well. I'll also be as careful as possible about where and when I discuss things, to minimise other people getting access to information that might be privileged for all I know.

I guess I'm still socially immature in a lot of ways to assume that what I say will never be taken as gossip fodder. After all, talking about mutual acquaintances and events is what makes up 90% of social interactions. But knowing this, my gut instinct is to react by holding the information I do possess even closer to my chest, lest it travel outside my sphere of control. This is ridiculous of course, and so I try to let go. But.. it really isn't easy, especially when things like this happen. To me, everything anyone says has an invisible Private tag attached, and so I find it somewhat frustrating that other people don't always feel the same way. If you keep asking them to keep things to themselves you come across as not trusting the other person and/or paranoid, but if you don't then they might spread it more than you feel comfortable with.. *sigh*


Anyway, this all brings me to a related point which I just found randomly interesting from a personal perspective. As usual it concerns Linguistics :P In tute the other day one of the questions concerned politeness strategies. It presented a couple of scenarios where you had to ask a favour from your neighbour/friend. In the first scenario you had to convince them to let your young daughter play with their young daughter for the day while you went shopping. For the second you had to convince them to mind your few-months-old baby (called Howler) for the night while you go to stay with a friend who hates babies.
My own answers were that for the first scenario you could get away with just asking straight out as long as you were somewhat polite about it, ie "I don't suppose you could do me a favour and mind my daughter just for today", and for the second scenario, same deal except with *much* more politeness, ie "I know this is a huge imposition and I wouldn't be asking you if I could possibly avoid it, but could you pleeeeaase take care of Howler just for tonight?"
The tutor on the other hand discussed them in these terms:
Scenario 1: "That's not a huge favour to ask... in fact you probably wouldn't even need to lie!... You could even possibly get away without even asking, say something like "you know, our daughters get along sooo well and they haven't seen eachother in ages..." and then just wait for the other person to make the offer!"
Scenario 2: "You would lie, definitely. Your uncle died, you have a wedding to go to, anything! Just not the truth!"
My reactions to the above discussion?
Scenario 1: Eww.. Not only could I never do that sort of indirect request where you hint and then wait for the other person to offer, but I really really hate it when people do that to me.
Scenario 2: Just wow. It never even occurred to me to lie outright. I would have been as polite as possible and skirted around the reasons why I needed the favour, but if pushed I would admit it was because I just wanted to spend one night with my friend who (and I would probably embroider the truth somewhat here, since hating babies is socially unacceptable) had trouble dealing with babies.

Apparently this ability to lie and hint around the favour you want so that you save face (both for yourself and for the person you're communicating with) is part of negative politeness; obviously it's something I'll have to work on if I want to be socially acceptable.. I'm not sure if I do though given what's involved.
erratio: (Default)
After a year and a half or so of taking Linguistics courses I've noticed that Linguistics lecturers are the only ones who uniformly seem to be having fun during their lectures.
erratio: (Default)
This was inspired by a column in the SMH where they had a short paragraph complaining about how these days everyone's peppering their speech with "like", "you know", "actually", "basically", and "literally".

Does anyone have any theories on this? And why those words in particular?

My own theory centres around the way that none of these words actually (there I go again) add much meaningful content to a sentence. Add to that the way that socially it's become much less acceptable to stop and think before speaking - you're expected to reply RIGHT NOW even if you have nothing meaningful thought up yet - and so you start inserting all sorts of qualifiers and extra words just to give yourself time to think of something relevant to add. I'm still not sure why those particular words, although I think it may have something to do with the way it's easy to put a rising intonation on them..
erratio: (Default)
http://www.orwell.ru/library/essays/politics/english/e_polit

For the lazy, Orwell is arguing that English has become degraded by practices such as stale and/or misused metaphors and using long complicated words to try to make yourself sound more profound. And of course since this is Orwell he then links it to politics. But the political aspects aren't quite so interesting to me.
At the end he posts this short guide to expressing yourself:

1. Never use a metaphor, simile, or other figure of speech which you are used to seeing in print.
2. Never use a long word where a short one will do.
3. If it is possible to cut a word out, always cut it out.
4. Never use the passive where you can use the active.
5. Never use a foreign phrase, a scientific word, or a jargon word if you can think of an everyday English equivalent.
6. Break any of these rules sooner than say anything outright barbarous.

Let's see; I'm guilty of (1) sometimes (the other day I said "she's an oasis of logic in a sea of Arts." And then I wanted to smack myself for talking about an oasis in a sea).
I'm good for (2) and (3), since both my writing and speaking style have always been as succinct as possible (which does kinda make it hard when I'm trying to make wordcounts for essays. I suspect that for my last Linguistics essay I broke a lot of the above rules, not surprising since I was running on a tight schedule and making up my argument as I went while at the same time heading for that wordcount target).
(4) is harder. I don't think I do it much but it's much harder to be aware of.
(5) same as for (2) and (3), I've always gone for the 'less is more' approach.

And for the record, yes I did change my writing in the last paragraph to try to stick to the rules more effectively :p


In other Linguistics-related news, lately I've been considering angles to write a Computing thesis from that could somehow incorporate Linguistics into it. Ideas so far include:
* Sign language to text converter - I don't think it's feasible; from what I remember, Anu tried to make some kind of program for his thesis that let you move a mouse pointer by pointing your hand, and it wasn't too crash hot. The technology really isn't there for something as complicated as sign language.
* Something to do with natural language parsing - I really don't want to though, because the problem with the whole field is very simple to understand and hellishly difficult to fix. Basically the problem is that language is never going to adhere to any kind of fixed patterns that a computer could have hardcoded into it. The reason language works for humans is that our brains are amazing at receiving huge amounts of data from the environment, discarding the parts that aren't important and then sifting the rest for meaning. Until a computer can emulate that process, natural language parsing is screwed. Quick example: I originally wrote the first line as "something to do with a natural language parsing". I didn't even notice the typo until just now; however, that inclusion of an article where there shouldn't have been one would have been enough to break a lot of parsers (there's a toy illustration of this after the list).
* Something to do with how computing is a field dominated by English and how this has affected programming and design - There's a couple of problems with this idea. The first is that it sounds a bit wanky* to me, which means that I'm probably not going to be able to get really motivated about it. The second is that the faculty member most likely to be interested in a topic like this is John Plaice. For those who don't know John Plaice, he's an angry angry man and the idea of working with/under him does not fill me with joy.
* This isn't even computing related, but interesting nonetheless: Something to do with translating maths notation into English and vice versa. Brought about by Alex's constant maths-ing, where I noted that mathematicians can say a surprisingly large amount using their incomprehensible symbols, and maths notation is supposedly a universal language between mathematicians and so forth.
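
Here's the toy illustration promised in the parsing bullet above. It assumes NLTK is installed and uses a deliberately tiny made-up grammar; the point is just that a hand-written grammar with no rule for the stray article produces zero parses for the typo'd phrase, rather than degrading gracefully the way a human reader does.

```python
# Toy demonstration of parser brittleness: the made-up grammar below covers
# "natural language parsing" but has no rule that lets a determiner attach,
# so "a natural language parsing" gets no analysis at all.
import nltk

grammar = nltk.CFG.fromstring("""
    NP  -> Det N | Adj N | Adj NP
    Det -> 'a'
    Adj -> 'natural' | 'language'
    N   -> 'parsing'
""")
parser = nltk.ChartParser(grammar)

for phrase in (["natural", "language", "parsing"],
               ["a", "natural", "language", "parsing"]):
    trees = list(parser.parse(phrase))
    print(" ".join(phrase), "->", len(trees), "parse(s)")
# natural language parsing -> 1 parse(s)
# a natural language parsing -> 0 parse(s)
```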


*Wanky: A term I first heard used by my HSC English teacher. From the context she used it in, I think it means something you write for the specific purpose of getting the marker to give you more points, but from that I've always mentally expanded it to anything you write that gives off a vibe of "ooh look at me I'm so intelligent because I can use literary techniques and big words and overly complicated ideas". In fact, George Orwell's idea of bad language fits my concept of "wanky" quite nicely.
As for the etymology of "wanky".. think about it :p.
erratio: (Default)
So I managed to survive my linguistics presentation. I was really nervous while just speaking to the class, but strangely enough once I hit the discussion questions I calmed right down - this despite the fact that for the first part I was just reading off a sheet, while for the second I was actually making stuff up on the spot. Also, I don't think I did the best job ever at summarising, but I figure my essay mark should balance out the crap presentation mark.

Also, this came up at work today: If I were to pretend that my particular brand of English was in fact a dialect of its own (in line with Heinese), it would be called Jenglish :D

And now I try to find something constructive to do and tell my adrenaline-fired nerves to shut the hell up.
