erratio: (Default)
Why is it so frustrating/annoying when someone gives you advice (especially if it's advice you've already thought of, regardless of how clever it is) when all you really wanted was a sympathetic ear?


erratio: (Default)
(conversation in the grad room)

S: My wife always says "you're not *really* sorry, you're only sorry you got caught".

Me: I think you can be genuinely sorry if you did something by accident, but not on purpose.

E: ..or if you did something deliberate that had unforeseen consequences.

Me: Right. But if you intended it.. by the time you've made the decision and acted on it, you're not sorry anymore.

We missed the case where someone has to prioritise tasks such that someone is unavoidably harmed in pursuit of a more important goal. In that situation you can easily be sorry that there wasn't an otherwise-identical action that didn't cause that harm. Did we miss anything else?
erratio: (Default)
(Long time, no post, etc. Maybe one of these days I'll go into detail about what I've been up to for the last semester or so. Also, the key insight that led to this post is due to my friend N)

A friend of mine, B, used to suffer from terrible road rage. His girlfriend, L, felt so uncomfortable driving with him when he was like this that she put considerable time and effort into working out what was going on, since B is not typically an angry guy. Eventually, she realised that what was going on was that B wasn't seeing the other cars as vehicles containing living people with plans and emotions of their own, but as potential obstacles that sometimes moved in unpredictable ways to block his path.

One of my favorite bloggers had a very well-received post about a certain type of guy who approaches women like they're vending machines for sex, where he just needs to perform the right moves and say the right things, and lo and behold he'll get laid. When this doesn't happen he gets angry and bitter and talks about how he's such a Nice Guy but girls still aren't interested in him.

On hearing B's road rage story, it occurred to me that I have a similar failure mode when I'm socially anxious, where I treat the people around me as mysterious black boxes that require that I perform esoteric nonsensical social rituals in order to appease and become accepted by them, where any deviation from the rituals will be punished with immediate scorn and/or rejection*. Unsurprisingly, this way of thinking does not particularly aid me in my efforts to be liked and accepted.

In a post about abusive partners, one of the comments highlighted the way that the abused blame themselves, searching for the thing they did to deserve the punishment. When they think they've found it, they tell themselves that if they just stop doing that particular thing, their partner will stop abusing them. Inevitably the abuse happens again, because the thing they did that first time was at best a convenient excuse, at worst completely uncorrelated with the abusive behavior.

In all of these stories, a person with otherwise completely functional theory of mind is put in a stressful situation, and in response they lose their ability to think about what the other people in the interaction believe and desire. I'm not sure that calling it a failure in theory of mind is quite correct though, since the classical failure mode for theory of mind is to assume that everyone shares the same information/desires/beliefs that you do, as opposed to the situations here where the failure seems to involve denying/forgetting that the other people in the interaction have meaningful internal states at all. I could call it objectification, except that the connotations of the term have drifted so far away from the strict meaning that it's now completely useless for trying to describe anything else.

Does anyone know if there's a better name for this phenomenon? Or if there's any literature on it? So far I'm drawing a blank, but it seems like an area that ought to have been studied. If people lose their ability to model others when they're under stress, it seems like this would have huge implications for a lot of subfields.


* Yes, I know that's an exaggeration of what would actually happen, but you're welcome to try convincing my brain of this when I'm feeling socially anxious

Huh

Dec. 11th, 2012 09:01 pm
erratio: (Default)
So, a few months back I started going to the gym semi-regularly for the first time ever, in part because I'm ridiculously unfit to the point where if I'm not careful when I'm playing sport or whatever I end up desperately hyperventilating. And around the same time I went back to ballet, where I discovered exactly the same problem there - I'm ok for the slow stuff*, but when we get to the jumps I usually end up gasping by the end.

But then yesterday I went to the gym and used the cross trainer for 20 minutes, felt like I was working hard (lots of sweat, etc) but had the machine telling me that my heart rate was 56 or below for most of the time, and when I got off the machine my heart rate felt barely higher than resting. And at ballet today the jumps didn't tire me out, despite the fact that we did a lot of them and at one point I was goofing off with a friend and trying to do super quick jumps in double time just to see if I could. Then after noticing that I wasn't tired after class, I did a bunch more jumping around in my apartment with a similar lack of effect.

I don't see any way my aerobic fitness could have improved so much so abruptly. I didn't get a particularly good amount of sleep this morning (got up early to work on a presentation) so I doubt the factor is well-restedness. Two factors that were definitely different yesterday and today were that I'm feeling less stressed than I've been for quite a long time**, and I was significantly happier today than is usual for me. But I've never heard of stress having a direct effect on fitness.

In other words, I am confused.
 
* Funnily enough the slow stuff is actually much more work than the jumping. Jumping is straight cardio, the slow stuff is lots of fine muscle control and remembering to keep breathing
** even though I kind of have a mountain of finals to work on and study for
erratio: (Default)
So there's this idea, backed up by fairly solid studies, that a very large part of what most people think is 'talent' or 'genius' or whatever your preferred term for innate skill is, is actually just the result of a very long time spent practicing those skills. Around ten thousand hours or so, to be more specific - that's the amount of time most people need to spend practicing a skill in order to master it.

The other day the obvious-in-hindsight realisation struck me that I spent most of my childhood mastering skills that are completely useless to me in the grand scheme of things, and very little time mastering skills that are useful to my current work. As a result, I often feel incompetent because the skills I'm using aren't the ones that I'm especially practiced at.

Things I am almost certain I've spent at least 10,000 hours doing:
  • Dancing, most of which was ballet: I eventually failed at this because of hard physical constraints (eg. my hips are extremely stiff/inflexible) and because I hated performing on stage, but I did pick up a couple of superpowers along the way, like my near-eidetic memory for sequences of steps* and my ability to ignore the cold in the depths of winter as long as I'm within a 3 minute window of doing ballet (before or after)
  • Reading, especially fantasy fiction: Very large vocabulary, fast reading speed
  • Sitting and paying attention to teachers: yup, I am pro at this.
Things I have spent less than 10k hours on, but a heck of a lot more time than many others in the same class:
  • Programming - I would put my mastery level somewhere in the hundreds of hours, at most 1k. I appear superpowered to most of my grad colleagues, but I'm achingly aware of just how many gaps I still have and how long it sometimes takes me to do what I feel ought to be very trivial tasks
  • Video games/boardgames: Probably around the 2k hour mark, maybe higher (My childhood was basically evenly split between dancing, videogames, and school). Gave me the ability to quickly orient myself in new internally-consistent systems, to make decent choices in those systems without a complete explicit understanding of what constitutes a good choice, and to locate loopholes/synergies/imbalances in said systems. Also, encyclopedic knowledge of the standard fantasy depictions of medieval weapons, warfare, life and philosophy, some of which bears a decent resemblance to the real thing. But again, I'm achingly aware of how bad I am at all these things relative to the real experts
  • Being on the Interwebs: 'nuff said
Things I have not spent anywhere near enough time on to feel even close to being an expert, but wish I had:
  • Working on stuff in a consistent and/or timely manner regardless of motivation levels
  • Talking to people, both in the sense of small talk/hanging out and of being able to persuade/argue coherently in real time
  • Fashion/personal appearance: Although I'm led to believe that there's a ton of low-hanging fruit here that would require less than 20 hours to learn
  • Academic/nonfiction writing: I'm a lot better at this than a random person off the street, but not relative to my peers
  • Critical thinking, with emphasis on the critical: For some reason as a kid it virtually never occurred to me that I could/should question the way things were; I treated everything as hard constraints that needed to be routed around if I didn't like them rather than challenged directly. I still have some of that mentality, where I take things as given that I have the power to change or should at least be more critical of

Audience participation: What areas have you spent your 10k hours on? Are they directly related to what you're doing now?
erratio: (Default)
So as part of phonetics class we watched a documentary about deafness (no, I'm not entirely sure why this was in phonetics class either) called Sound and Fury, following two related families with deaf children and their discussions about whether or not to get the kids cochlear implants.

Anyway, in the end the deaf parents with the deaf 5-year-old girl decided not to get her a cochlear implant, while the hearing parents of the deaf 11-month-old boy (the parents themselves coming from deaf families) decided to get him the implant. I can't help feeling that in the end all the justifications and reasoning given came down to wanting their kid to be like them. And I've moved from thinking that it's their kid and they have the right to raise them in their culture to thinking that raising them in your culture is fine but they should probably also get the implants.

Reasons given by the deaf parents (and other people in the deaf community) for not wanting the implant:
* it's unnatural, it will make them like a robot
* it's their body, they should be allowed to make the decision themselves when they're older (except that if you don't learn how to speak relatively young it's basically impossible to learn to speak fluently later)
* you must be ashamed of Deaf culture/you think we're not good enough/the child won't have a Deaf identity/I'm deaf and I did just fine
* it doesn't even work that well (I think the figure was 20% getting good usage out of it, and the kid has to spend a lot of time in hearing environments or they'll fall back on signing too much)
* Deaf culture will go extinct if everyone gets implants

Reasons given by the hearing parents and grandparents for wanting the implant
* The kids should not have to grow up isolated and made fun of or stared at for being deaf
* It's a disability, if your whole family was crippled you would jump on a treatment that fixed it. Why should deafness be any different?
* It will give them access to both the Deaf and hearing world/it will open up more potential choices
* A lot of deaf kids get terrible education; the average reading level of a deaf highschooler is 4th grade (might be partly cos deaf kids are effectively forced to learn a foreign language to read, since sign languages have wildly different syntax and morphology to English, and the phonology is completely untranslatable)

The family who opted to keep their kid implant-free ended up moving to Maryland to live in a much bigger deaf community next to an awesome school for the deaf, where random people in the supermarket and restaurants would often know a bit of sign. One thing the father said as this move was in process really stood out for me as hammering home just how much identity politics plays a role in deafness in the US: "[at our old home] I felt caged in, like they wanted to jail me. Here, I feel comfortable and safe". The grandmother accuses them of trying to escape and of trying to put up a fence, both of which felt apt to me. If the hearing world wants to cage deaf people, then by moving into a deaf community the family is basically just choosing a big enough cage for themselves that they can pretend the walls don't exist.

Also of interest: some of the deaf community did have managerial jobs in hearing workplaces. They relied heavily on email, writing and interpreters, but they got by okay. And the hearing mother, whose parents are both deaf, had a crappy time growing up - she had to get years of speech therapy because her deaf family couldn't provide a speech environment for her to learn from, she had to put up with the other kids making fun of her parents and deaf people constantly, and she spent a lot of her time playing interpreter for her parents.

Phonetics

Sep. 19th, 2012 09:46 pm
erratio: (Default)
In case you still had the misconception that you're able to perceive reality as it really is, try taking a phonetics course. Specifically, try to learn how to transcribe speech.

As a student of linguistics, I of course already knew that the brain does a ton of work converting sound streams into recognisable speech. I already knew that speech sounds are affected by their surrounding environment, and that my brain does cool compensatory stuff and predictive work to let me easily pick out individual words and meanings with remarkably few mistakes.

But a couple of days ago we did our first real-time transcription exercise in class, where someone reads out a bunch of words in isolation, and those things are *hard*. That sound at the end of the word, was it an unreleased [t]? A glottal stop? Nothing at all? Something else entirely? That word pronounced in an American accent, which sounded kind of like 'cut' to me, was actually 'cot', and if there'd been any context I would have easily gotten the right word and wouldn't have even noticed that the vowel sounds more like [u] than [o] to me.

Basically what I'm saying is that I now have a new visceral appreciation for how hard this stuff is.



Other classes I'm taking this semester include:
How to use Google and Twitter searches as totally legitimate sources of linguistic research (aka Experimental Syntax)
Assigning stress to words and phrases is way harder than you would naively expect (aka Foot Structure)
Fancy mathematical formalisms you can use as a framework to understand syntax instead of making up plausible sounding crap that doesn't really engage with most of the previous literature (aka Compositional Syntax)
erratio: (Default)
Inspired by a question my friend asked: if you were in Jaime Lannister's position, would you have killed Aerys? It spiralled out from a discussion of personal honour vs utilitarianism to a discussion of whether Jaime killed Aerys for the right reasons and then onto Jaime's personality in general.

My defense of Jaime Lannister is below. No explicit spoilers past the first book, but I talk about his personality in a way that probably shows more insight than you can get from the first book or two.

Jaime is at heart a good man. Yes he dreamt of personal glory, but no more than you would expect from a young knight in his culture. My reading of him is that he killed Aerys partly because of his loyalty to his father and to Cersei, and partly because Aerys was a monster. And then later he became angry/cynical because rather than being praised for killing a monster he got the entire kingdom hating on him for it. Similarly when he pushes Bran off the tower, it's not because he's amoral/evil but because he's loyal to Cersei. In fact, I would go so far as to say that Jaime's greatest flaw by far is his loyalty to his family above everything else, and even that wouldn't really be a flaw if the rest of his family weren't such assholes or if he was intelligent/farsighted enough to try to argue them out of some of their stupid decisions rather than doing what they want (or what he thinks they want in the case of Aerys).
erratio: (Default)
ok, so this isn't actually liveblogging, I read the first third of the book a couple of weeks ago. Anyway, on to the interesting bits, in no particular order.

* Supposedly, laughter/amusement is an exaptation of the flight response (panting to get more oxygen), frisson (enjoyable chills) comes from the fight response (erecting all your hairs, cat-style), and a gasp/sense of awe comes from the freeze response (one large breath to get as much oxygen as possible, followed by holding your breath so as not to attract attention).

* The book started out as being about the psychology of music and ended up being mostly about the psychology of expectation, albeit with mostly musical examples.

* we have models of how music ought to go based on our previous music knowledge/experiences, but also based on on-the-fly modelling and possibly a general idea of melody and closure and so forth - a betting game where people had to predict the next note of a traditional tune of some non-Western variety showed that the group who were familiar with that music did way better than a group of Western music students, but the students as a group still did well above chance

* There are lots of ways to try to test people's expectations of music. None of them are even close to ideal. They include ERP studies, asking people to improvise the next note, getting people to predict or bet on next possible notes, playing them a bunch of probe tones and getting them to pick the best, head-turn studies in babies...

* the main findings of these studies have been that most people do have expectations of how music ought to go: that the melody line should on the whole descend during the second half of the piece, that a large jump up or down the scale should be followed by movement in the opposite direction, that intervals between adjacent notes should be smaller rather than larger, and that small intervals should generally all move in the same direction. Most of these were tested cross-culturally too. However, these expectations don't actually hold for music (again, cross-culturally). Rather than a large interval being followed by movement in the opposite direction, we actually see regression to the mean. Rather than small intervals generally all moving in the same direction, the tendency is for small intervals to move down the scale. And musical phrases follow an arch, not only descending in the second half. So this shows that listeners are using heuristics to form their expectations rather than building totally accurate models.
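As a toy illustration of how this kind of corpus test works: given a melody as a list of pitches, you can tabulate what actually follows each large interval. The melody, the 5-semitone threshold for counting a jump as "large", and the MIDI-number encoding below are all my own inventions for the sketch, not anything from the book:

```python
# Toy sketch of testing the "large jumps are followed by a reversal"
# expectation against a melody. All numbers here are invented for
# illustration, not real corpus data.

LARGE = 5  # semitones; any interval at least this big counts as a jump


def interval_stats(melody):
    """Count how often a large interval is followed by a direction
    reversal vs a continuation in the same direction."""
    reversals = continuations = 0
    for prev, cur, nxt in zip(melody, melody[1:], melody[2:]):
        step = cur - prev
        if abs(step) >= LARGE:
            follow = nxt - cur
            if follow == 0:
                continue  # repeated note: neither reversal nor continuation
            if (step > 0) != (follow > 0):
                reversals += 1
            else:
                continuations += 1
    return reversals, continuations


# A made-up melody in MIDI note numbers
melody = [60, 67, 69, 65, 64, 62, 69, 67, 65, 60, 62, 64, 57, 59, 60]
print(interval_stats(melody))  # → (3, 1)
```

A real study would run this over a large cross-cultural corpus and then ask whether the reversal rate is better explained by a dedicated "post-skip reversal" rule or simply by regression to the mean, which is the book's point.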
erratio: (Default)
ok, semester is over and obviously the humour series sort of got away from me. So instead of trying to continue it in detail, I'll offer a summary of the key points

The theory offered in Inside Jokes is this: we are constantly constructing new mental models of our environment (mental and physical) on the fly, based on knowledge and guesses. It is important that these models be as accurate as possible - that tiger hidden in the bushes better be recognised as a tiger or you won't survive long. Amusement is your brain's method of getting you to troubleshoot your models by acting as a reward mechanism, kicking in when you discover that a part of your model that you'd accepted as accurate turns out to be wrong. The urge to laugh is the reward/we know we are amused because we want to laugh. Jokes and deliberate humour are a super-stimulus for the amusement system.

To that I would add: the reason amusement is signalled by laughter, as opposed to some other mechanism, is that laughter was co-opted from an earlier fear response. Huron (from Sweet Anticipation, which I'm reading now) suggests that we can trace laughter from the play-panting common to primates and some mammals, which in turn used to just be hyperventilation in preparation for a flight response. The panting/hyperventilation came to be a signal of low status in response to being confronted with a higher status peer (you can tell I'm afraid of you because I'm panting), which then got co-opted for play, and in humans became more obvious (vocalisation) and efficient (only using the out-breath) in the form of laughter.

The previous paragraph is an ev-psych just so story, but I like the way it ties in all the status/signalling aspects of laughter. It suggests that laughter has two distinct purposes: displaying status, and troubleshooting mental models. It explains why we feel as if higher-status people come across as funnier - we laugh as a signal of low status and then misinterpret the laughter as a sign of being amused, particularly since the environment suggests that being amused is a plausible response (unlike the guy who suffered from involuntary laughter, who did not feel amused). This also explains why it's so hard to account for all kinds of humour/laughter using a single theory.

Not adequately covered by either half of this model: laughing at other people, particularly people who are lower status than you, although it might be as simple as 'my model which previously suggested that this person was intelligent/competent was incorrect'. Also in-jokes, which I'm going to explain as a form of self-anchoring: I found this funny in this context in the past, and so now I'm the kind of person who finds this joke funny when said in a similar enough context.

Next: Sweet Anticipation, by David Huron. It's a book about the cognitive science of music and also of expectation. Instead of reading the whole thing and then attempting to summarise it I might try something more like live-blogging, where I stop every now and then to summarise the interesting points so far.
erratio: (Default)
Some background on Nicaraguan Sign Language (NSL):
NSL is about 30 years old now. It began when a special school for the education of deaf children was established in Managua, allowing deaf children starting from age 4, who'd previously only had very ad hoc systems of home sign with their family, to interact with 25-30 other deaf kids, each of whom brought their own idiosyncratic home sign system/language with them. When those kids interacted they created a completely new, richer sign language. And then since it's a school, every year another cohort of 25 or so new kids would come in, and those kids in turn expanded on the original system created by the first few cohorts, and so on until NSL reached its current status where it's basically a full-fledged language, created within the last 30 years from virtually nothing and without major contamination from other languages. Basically, it's a linguist's dream language, because we have detailed records of what the language looked like at each stage of growth (more on this in a moment) and so we can literally see the grammar unfolding over time rather than having to guess, like we do for pretty much every other language. And it turns out that we can also see what cognitive functions language is and isn't necessary for, which is pretty cool. More on this in a moment too.
The way this school works is that kids go from age 4 until age 14, at which stage they graduate and henceforth are allowed to hang out at the deaf club, but are (obviously) no longer at school. There are also often older kids just starting who couldn't come previously, and their language development is obviously not as good since they haven't had access to a decent source of language until that point. The early cohorts didn't really hang out with each other outside of the school context, but the later ones, having grown up in the age of cell phones, do. At school, the kids get several hours a day to hang out with each other - during food/play times, on the school bus (some of them live as much as 2 hours away, so that's a lot of time to socialise with the other kids), behind the teacher's back... The schooling is pretty much all done in Spanish and mostly with non-signing teachers. As you might expect, not a whole lot of regular school learning actually goes on, although more recently they've started hiring adults from the first generation of signers as teachers so they can actually communicate. Plus, texting via cell phones means the kids are way more incentivised to learn to read than the first generations were.

On sign languages in general: when signs are coined they are often iconic in some way or other. For example, the sign for a king may be the action of putting a crown on, or the sign for a cat might be drawing imaginary whiskers on your face. But there's nothing principled about what iconic aspect of a thing or action will become encoded as a sign, and signs tend to get less iconic over time.

So, Ann Senghas. She's been going down to this school for the deaf every summer for many years now, documenting their language, getting them to complete various linguistics tasks, and so on. And now, onto the pithy details of the talk, listed in bullet point form as usual because I'm lazy and can't be bothered with trivialities like "good writing".

* The NSL signers can be split into roughly 3 generations, descriptively called first, second, and third. The first generation started school in the 70's, the second in the 80's, the third in the 90's
* If you look at a video of each generation signing, there aren't any obvious differences at first, except in speed - the first generation is slow compared to the second, which is slow compared to the third. But they're all clearly using language, not pantomiming or gesturing.
* However if you look more closely, there are bigger differences. Two ways that we saw today included the expression of number and expression of space. Others that were mentioned include expression of path/manner of movement, syntax, theory of mind stuff, and general 'with it'-ness
* On path/manner of movement: where the first and second generation would express a ball rolling down a hill by more or less pantomiming an object rolling down, the third generation would express a ball rolling down a hill by first indicating a rolling thing and then indicating a descent.
* On syntax: for the earlier generations, verbs could only take a single argument each, so "the boy fed the woman" would be expressed as "woman sit; boy feed"
* On expression of number: the first generation would express number the same way we non-signers generally would: 15 would be 5 on both hands followed by 5 on one hand. The second generation developed a more efficient (one-handed, faster) system that builds on that of the first generation: a girl counting to 10 counted the first 5 normally on one hand, followed by counting 1-5 again on the same hand but accompanied by a slight twist. Another girl asked to express the number 15 did so by first indicating a 1 and then moving her hand slightly to one side and then indicating a 5 (so basically a 1 in the 10's column and a 5 in the units). Kids in the third generation came up with a new system altogether that loses a lot of the transparency but is even faster and more compact: 15 is expressed by holding the middle finger towards the palm with the thumb (imagine you're trying to form a ring with your thumb and middle finger - this represents 10) and then flicking it outwards to show 5 fingers. Apparently the older generations understand these kinds of signs but are disdainful of them - "they don't even look like numbers, it's just a flick!". This kind of pattern exemplifies the different generations: the first generation is functional but not particularly efficient, the second generation makes some kind of systematic improvement that allows them to express themselves more efficiently, and the third generation as often as not will come up with something way more abstract that bears very little iconic resemblance to its meaning.
* On expression of space: there's a task linguists sometimes get people to do that goes as follows: person A has to describe a simple picture to person B, who then picks the matching picture on their side of the test area. In this case the pictures were of a tree and a man, where the man would be standing either to the left or right of the tree and could be facing towards or away from the tree, or out to the audience or away from it. Ann Senghas gave this task to her signers to find out how they expressed spatial concepts. Instead what she found was that the first generation failed the task - they couldn't encode spatial relations and performed at chance. In the later generations everyone could do it just fine. We were shown a video of the task being done by a first generation signer and her third generation nephew, where during a break in the task she asked him to explain how to get it right. The kid does a pretty good job of explaining something that must have seemed ridiculously obvious to him - if the person is on this side of the tree then you put them like so, otherwise you put them on the other side like so. This isn't something you can practise, you just look and then do it. Easy! (very rough paraphrase from memory). She did not get it.
* On theory of mind and 'with it'-ness: the first generation fails at second-order theory of mind, aka situations where you have to express what you know that I know. They're also a lot less 'with it' in general - like when Senghas is trying to coordinate with them for meetings and such, they're just a lot less good at it. They're also way less good at metalinguistic stuff - being aware of how you express things.


erratio: (Default)
I'm currently running a mini experiment on how native speakers of various dialects of English pronounce 'a' in some nonsense words. It only takes a few minutes and would help me out greatly. Results can be either left as a comment below this post or emailed to evil dot jen at gmail dot com. Thanks!

PS: If anyone's interested in the results or context for this, I'd be happy to post them up in a day or three when I've got all the results aggregated.

Instructions and words )
erratio: (Default)
There's been some interesting talks lately, but today was the first one in a while that made me think "I should blog about that". But since I also would like records of the other talks, I'm going to start trying to summarise the ones I found interesting.

Julie Van Dyke - on language processing using cue retrieval
* Language processing is really heavily dependent on working memory.

* But we don't actually know much about working memory (eg. how much of it we have), so to be safe let's assume that a hypothetical person can only remember the last item they heard/read. This isn't as insane as it sounds - computer models have indicated that processing can do pretty well even with such an impoverished working memory. Everything that isn't in active working memory is absorbed passively and can be called upon (albeit not as easily)

* So let's consider a few hypothetical sentences: 1) the book ripped 2) the book recommended by the editor ripped 3) the book from Michigan by Anne Rice that was recommended by the editor ripped. How does a listener tell if 'ripped' forms a grammatical sentence with 'the book'? There are a few ways: they could search forwards or backwards through the sentence, in which case you would expect processing times to reflect the amount of material between "the book" and "ripped". Or you could do cue-based retrieval, where you filter the sentence for words that have the features you're looking for, in which case you wouldn't expect there to be significant time difference in retrieval. As the name of the talk might suggest, people use cue-based retrieval.

* So now we have a model where we store words as bundles of semantic/phonological/etc features and then retrieve them by using those features. But what if the sentence has several possible words that have the features you're looking for? In that case, retrieval might get blocked due to interference from the other items. This, according to Julie Van Dyke, is why people forget. (I don't know whether she meant in general or when processing sentences. Hopefully the latter)

* And the main difference between people who are good at processing (eg. fast readers) and those who aren't comes down almost entirely to how detailed their representations are: if your word representations are super detailed, with lots of features, it's easier to zero in on them. And, good news, the main factor in how good your representations are (after controlling for IQ and a bunch of other bothersome details) is practice. So if you suck at reading, all you need to do to fix it is read more.
erratio: (Default)
(preamble: anyone who's a regular reader of LW can safely skip this post, it's nothing that hasn't been covered there a few hundred times)

There's a folk psychology idea that emotions and wisdom are opposed traits. There are lots of people who make really short-sighted impulsive decisions based on their emotions, who would obviously benefit from stopping once in a while to think through the consequences of their actions. And on the other end of the spectrum is the Spock stereotype that most nerds are haunted by at some point or another. Well, good news everyone! Turns out there's no dichotomy between the two! In fact, you need both!

Let's pick on Spock for a moment, and take the kind of scenario he might be faced with in a typical episode of Star Trek: there's a couple of crew members down on a planet who've been captured by the local bad guys. Those crew members will die if they're not rescued. Only problem is that they're being held in the middle of the bad guys' Fortress of Doom, and according to Spock's calculations a typical rescue attempt only has a 5% chance of succeeding and has a 50% chance of resulting in the deaths of the entire rescue team. What's the rational thing to do here? *

What if one of the crew members being held is Scotty, who they need to keep the ship running? What if it's Captain Kirk, who they need to seduce alien queens**? Is it more rational to mount a rescue then? Why? It's not like any of the numbers of the original estimate have changed.  Dig into Spock's 'rationality' and it pretty clearly comes down to number of lives saved. A rescue attempt with a 5% chance of success and a 50% chance of more deaths is a lousy gamble. The perceived odds shift (even though the bare numbers haven't changed) when taking into account more important crew members because those people are essential to preventing more deaths further down the line. But why is it rational to save lives?
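To make the lousiness of the gamble concrete, here's a back-of-the-envelope expected-lives calculation. The team size and the assumption that the captives die if nobody tries are mine, not anything from the show:

```python
# Back-of-the-envelope expected value of Spock's rescue dilemma.
# Team size and outcomes are made-up assumptions for illustration.

p_success = 0.05   # chance the rescue succeeds
p_disaster = 0.50  # chance the entire rescue team dies
team_size = 6
captives = 2       # crew members held in the Fortress of Doom

# Expected change in lives from attempting the rescue, relative to
# doing nothing (assume the captives die either way if we don't try):
expected_saved = p_success * captives
expected_lost = p_disaster * team_size
net = expected_saved - expected_lost

print(f"expected lives saved: {expected_saved:.2f}")
print(f"expected lives lost:  {expected_lost:.2f}")
print(f"net: {net:.2f}")  # negative: a lousy gamble on raw lives
```

On raw lives the numbers never favour the rescue; what changes when the captive is Scotty or Kirk is the downstream value attached to success, not the probabilities themselves.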

The real answer here is that Spock isn't actually ignoring his emotions at all. The only reason anyone would be interested in saving lives is if they value life over death. To unpack that further, we like it when people are alive and we don't like it when people die. Or maybe you do like it when people die but don't like it when everyone shuns you because you're a creepy death-loving weirdo, so you pretend to dislike death. The point here is that ultimately you act according to your values, and your values consist of emotional valencies towards certain concepts, eg. +10 life, -10 death, -20 being alone forever, +5 having a prestigious career, and so on. Without values, you have no mechanism to decide that thing A is a better decision than things B-Z. Some of these values are more common and deep-rooted than others, mostly because we only really have a small number of things we like and dislike, and so a value like "having a prestigious career" (which can change when you re-evaluate your life) is just a fancier version of "being liked by others" (which is much harder to shift and can be satisfied in lots of different ways).
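A toy illustration of what deciding by valency might look like. The valency numbers are the ones above; the options and the features I've attached to them are made up:

```python
# Decision-by-valency sketch: an option's appeal is the sum of the
# emotional valencies of the concepts it touches. Valency numbers are
# from the post; the options and their features are invented.

valences = {"life": 10, "death": -10, "alone_forever": -20, "prestige": 5}

def score(features):
    """Sum the valencies of every concept an option involves."""
    return sum(valences.get(f, 0) for f in features)

options = {
    "mount the rescue": ["life", "prestige"],
    "abandon the crew": ["death"],
}

# Without these numbers there is no mechanism for preferring A over B:
best = max(options, key=lambda name: score(options[name]))
print(best, score(options[best]))
```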

Transient emotions can also affect our values. Dan Ariely, in his book Predictably Irrational, describes some experiments on how arousal affects decision making. A bunch of young men were asked questions like "would you have sex without using protection?" and "would you enjoy being spanked?" while in a normal baseline state. Unsurprisingly, they all said they would always use protection, wouldn't engage in taboo or kink, would always get consent, and so forth. Then they were given a stack of porn and asked similar questions while aroused, and lo and behold, suddenly things like consent and protection were less important. Not because they were originally lying***, but because arousal causes a temporary rearrangement of your values, to encourage you to procreate.


This is getting longish, and I have a roleplaying game to go to, so I'll stop here. Next post will be about curiosity, humour, and the evolutionary importance of having good mental models.


* I should probably mention that I've watched very little of the original series, and it's been a long time since I watched any of The Next Generation, so really I'm just making stuff up here.

** ok fine, and also to get into punch-ups. And I suppose to command the ship occasionally

*** even if their original answers were just signalling, I would argue that that's still a strong indication of their values: namely that their actual values around sex were getting outranked by their values around appearing virtuous, and then arousal changes that ranking****

**** One of my current classes is all about analysing phonology using a ranking system called OT. I feel a bit like I have rankings on the brain as a result
erratio: (Default)
This is a midpoint-rooted phoneme tree produced using a Bayesian approach, from a paper whose sole purpose is to debunk a recent-ish high profile paper that claimed a language's phoneme inventory is correlated with how far away it is from Africa, thus providing support for the out-of-Africa story of human origins.

What this picture shows: something to do with phonemic relatedness and geographic clustering and how they don't correlate particularly well. I'm not really sure, to be honest, I've never used that technique or read a paper that used it before, so my being a linguist isn't particularly helpful here. Although I will assume that the different colours correspond to different language families, since that's pretty standard.



Source: Language Log, which goes into lots more detail about the paper and has lots of other colourful graphs, although none as cool as this one.
erratio: (Default)
Last week we had a presentation by Keith Chen, who you may remember as the economist/management guy who caused a bit of a furor by claiming that the way any given language marks the future tense is a good predictor of future-oriented behaviours like saving money, taking care of health, and that sort of thing.

Some background: If you know about hyperbolic discounting it's probably a no-brainer to hear that if you give people a 401k/superannuation form with a picture of themselves now they'll put down lower payments than if you present them with a photo of themselves photoshopped to look old. The reason for this is that as a species we like to enjoy the good times now and make our future selves pay for it, but if you make people identify more with their future selves then they don't feel quite so great about putting stuff off that way.
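For the curious, the standard formula for hyperbolic discounting is V = A/(1 + kD): the subjective value V of an amount A falls off with delay D. A quick sketch (the discount rate k here is an arbitrary illustrative value, not an empirical estimate):

```python
# Hyperbolic discounting: the subjective value of an amount A delayed
# by D time units is V = A / (1 + k*D). The rate k is illustrative.

def discounted_value(amount, delay, k=0.1):
    """Subjective present value of `amount` received after `delay`."""
    return amount / (1 + k * delay)

# $100 valued now vs. valued from 40 years in the future:
print(discounted_value(100, 0))    # full value with no delay
print(discounted_value(100, 40))   # steeply discounted
```

The photo manipulation works on the same lever: making future-you feel like present-you effectively shrinks the delay your brain applies to retirement money.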

With that out of the way, Chen's presentation can be summarised more or less as follows: languages vary in the way they mark the future, with some languages, like Hebrew, forcing you to express the future explicitly whenever you're talking about future events, and other languages, like Chinese, letting you talk about the future without using overt tense markers. For languages like German or English, which have both overt (I will eat breakfast tomorrow) and covert (I am going to eat breakfast tomorrow) ways of talking about the future that can be used more or less interchangeably, the linguists who did these surveys looked at the frequency of each type and categorised the language as weak or strong future-marking based on the most frequent forms.

The modern version of the Sapir-Whorf hypothesis says that language structure has weak priming effects on speakers. For example, speakers of languages where the grammatical gender of "bridge" is feminine are more likely to describe bridges as "graceful" or "soaring", whereas speakers of languages with masculine bridges are more likely to describe them as "strong" or "durable". (See Lera Boroditsky's publications page for lots more experiments of this type.)

Chen used a bunch of census type data that included the language spoken at home to find that even after you control for a fairly impressive array of confounding factors, there was a strong correlation between savings behaviour and type of future marking in a language. In fact, he claimed, language type is a better predictor of savings behaviour than a bunch of other factors that economists usually consider to be pretty important, including trifling considerations like the country's economy.

Overall story: Every time you use an overt future marker you are priming yourself to think of future-you as a different person to present-you, so you're more likely to do what makes present-you happy. Or alternatively, there is a common cause for the savings behaviour and speakers' choice of how they talk about the future.
erratio: (Default)
Benign violation (BV) theory
 
There isn't actually that much to say here that doesn't properly fit into either IR theory or status/superiority theory, but here goes.
 
The central premise of benign violation theory is that humour exists in the violation of social norms as long as there's no real harm in it. The main point in favour of it is that it describes accurately why things like slapstick and verbal sparring can be funny but attempted murder and arguments aren't, even though they mostly involve the same actions. It also captures why so much humour revolves around sex, excrement, and death, all of which are things you don't talk about in polite company.
 
Minsky (1981) proposed a sort of Freudian account of humour: your brain has a bunch of cognitive censors designed to taboo certain kinds of words/thoughts, such as those related to sex or excrement, a la Freud, but also censors for faulty reasoning. Cheating these censors is 'naughty', and that is what you find funny: successfully carrying out taboo acts or thoughts. This sort of provides an explanation for wordplay humour, since the joke usually lies in an ambiguity between a normal serious reading and an incorrect nonsensical one.
 
Another point in favour of BV theory is the evolutionary psychology explanation of laughter. Some types of primates have a 'false alarm' signal to go along with the 'snake', 'jaguar' and other assorted predator signals. And on top of that, apparently when chimps play they make a special 'play face' and engage in a kind of panting, which both help to signal that they're playing and the situation isn't serious. Hurley, Dennett, and Adams' explanation of laughter is as a sort of combination of these things: a way to signal that there's no real danger and I'm just playing with you. And so since humour is accompanied by the 'not serious' signal, the logic is that humour can pretty much be characterised as potentially-harmful things done in a non-harmful manner and that's why we all laugh at it.
 
A final point in favour of BV theory is that it accurately captures the intuition that it's difficult to be in a negative emotional state and find something funny at the same time. But when that negative state isn't caused by the would-be humour itself, BV theory says nothing about why I should find the thing less funny than I would starting from a neutral or positive emotional state.
 
What BV theory can't capture
 
1. Humour isn't always harmless. See: pejorative and bullying humour, mean humour, satire, humour based on inferiority of others (eg. Irish jokes)
2. All the subtleties of humourcraft: if humour is just being non-serious or cheating an internal censor, it should be much easier to craft hilarious jokes than it is. All I would need to do is go out to a public space and say 'poo' a lot, or do something obviously nonsensical to trip the 'faulty reasoning' censor. Or for that matter just lie in bed and think of nonsensical or scatological scenarios. Jokes shouldn't really get more or less funny depending on whether you've been exposed to them before, since a norm violation isn't going to be less of one over time.
 
Overall, benign-violation theory makes a decent attempt to provide an explanation of humour but misses the mark on many many levels.
erratio: (Default)
 
Incongruity resolution (IR) theory
 
The central idea of incongruity resolution theory is that we find it funny when a setup creates an expectation that is suddenly violated and then resolved. IR theory comes in many different flavours: Kant claimed humour arises from 'strained expectations that come to nothing'; other modern researchers have claimed that it's when we develop two competing frames/expectations from a setup, which is then resolved in favour of one by the punchline; or that we have one frame for the setup and another for the punchline and the humour comes from reconciling the two; or that it's when our perceptions and our abstract representations clash; or any number of other variations that involve unexpectedness. And not just any old unexpectedness - pretty much everything that happens to us isn't anything we actively expect. The kind of unexpectedness IR theory calls for involves things that we expected *not* to happen, as opposed to things that we merely weren't expecting. I didn't expect to see the particular guy at the library who checked my books out for me today, but if he'd been dressed up as Death I probably would have found it amusing.
 
What incongruity resolution theory gets right:
It accounts for why watching people fall down is widely considered hilarious. It explains most wordplay (where the incongruity comes from ambiguity in meaning). It somewhat explains parodies and obscure humour, where the requirement of being able to draw on your previous knowledge is likely to bring a set of expectations with it to be shattered. It explains unhelpful humour, since we have expectations of people saying things to us that are relevant and truthful (see Grice's maxims for more detail), and to a certain extent mean humour (by the same maxims I expect people not to be unnecessarily mean).
 
What incongruity resolution theory gets wrong:
There are lots of examples of incongruity that aren't funny: a patient with baffling symptoms, lies, mysteries and puzzles, snow out of season, an instrument out of tune at a concert.
There are plenty of jokes that remain funny even when you already know the punchline, including ingroup humour and really good comedy movies and shows like Monty Python or the earlier seasons of The Simpsons. In these cases there aren't any expectations being proven false or resolved in an unexpected way, since I already know what's going to happen.
IR theory also does a bad job of explaining the social aspects of humour - why other people's laughter makes things funnier - although we could stretch the theory to cover this by guessing that other people laughing makes you more likely to reach the same interpretation as them and therefore also find it funny. Finally, there's still a lot of vagueness in the theory: what is incongruity, exactly? Most of the proposed definitions contradict each other. IR theory is more of a description than an explanation.
erratio: (Default)
It occurs to me that I may have misrepresented part of the status/signalling/superiority theory of humour, in that I focussed on why you would tell jokes but only very briefly mentioned why we might find things funny even when they have no obvious author. So just to make things clear: another way to state the status/superiority theory of humour is that people find it funny when they recognise their superiority over someone else, and that someone can include your past self. But all the other points still hold: not all humour can be explained in terms of status (eg. non sequitur, some forms of wordplay), and it doesn't do a good job of explaining why we find funny the things we do.
erratio: (Default)
So I've been reading Inside Jokes the past few days, and one thing that I find both interesting and alarming in their overview of cognitive theories is just how much each theory relies on contemporary technology, not just as a framework but in a way that suggests the authors of each theory were mentally constrained by what was available. For example, the release theory of humour, the idea that humour is a release of nervous energy/tension, relies heavily on a gasoline model of cognition, where cognitive energy can 'build up' over time in the hypothetical pipes of our brains. And the frame/script model of cognition, where we have a bunch of scripts pre-built from common features of our previous experiences, is considered an example of 'just-in-case' processing, a kind of processing model that was widely used at the time. The alternative that Dennett et al are proposing? Just-in-time spreading activation, a modelling process borrowed from current economics, where it's used extensively in inventory management at large companies. And of course our current models of cognition involve computation and neural nets and so forth.

All of this, of course, points to the idea that part of the reason we've been having so much trouble with computational modelling of cognition is that computation might be the wrong metaphor. More wrong than the pipes-and-fuel model, or the gears-and-cogs models of the past? Probably not, considering how much more we can do with computation in general. But I definitely wouldn't rule out another paradigm shift or two before we hit on a model of cognition accurate enough that we can actually do stuff with it.
