erratio: (Default)
2014-04-08 07:38 pm

When do people look for emotional congruence?

(emotional congruency being the thing I talked about in my last substantive post, for anyone reading this in the future who didn't also see the previous one)

In one of those strange 'the universe is paying attention to me' moments, two of the people I read recently posted on topics that touch on the need for emotional congruency. One wrote about how, when someone close to them died unexpectedly, it might have been a comfort if there had been the kind of mass grief and hysteria that surrounded the death of Princess Diana. The other wrote about how a lot of fans have trouble wrapping their heads around the idea that the thing they love may not be loved by others, and may in fact have problematic aspects, and that none of this diminishes their capacity to love it anyway; instead they feel the need to start a flamewar every time someone implies that they don't like the thing enough.

A third related phenomenon is outrage-blogging, where the point (besides generating traffic) is to get validation by writing inflammatory posts about things that bother you. Once you get sufficiently good at this, your audience will consist of 90% people who violently agree with you and enjoy the shared outrage, and 10% people who violently disagree with you and get their own outrage fix by fighting the 90% in the comments section. Yet another related phenomenon is the 'release the winged monkeys' effect: sufficiently popular bloggers have to be careful about how they share outrage, because if they're not, a certain portion of their audience will swoop over to the source of the outrage and attempt to bury them in hate/counterarguments/etc. And if the source of the outrage has their own sufficiently large audience... well. The relevant point here is that even if the blogger is not habitually an outrage-blogger and their audience is fairly reasonable, there's something about someone you respect expressing outrage that seems to incite other people to start feeling the same way and to jump to their defense.

I'm still mulling this over way too much to have a coherent mini-essay here about the common points and what it all means and all that, so instead here are some disjointed thoughts on the topic:
  1. There seem to be a couple of magic ingredients here: strength of emotion, and level of status/respect
  2. The stronger the emotion, the more unacceptable it feels for other people not to share that emotion
  3. And when witnessing strong emotion in someone else, the higher their perceived status (to you, the witness) the more likely you'll hop on their emotional bandwagon
  4. It follows from points 2 and 3 that if you feel a strong emotion and other people don't validate it, you must have low status. Now you have two problems: you're upset *and* you're not important enough to have your emotions respected, which is going to feed into your upsetness
  5. I'm tempted to draw some kind of line from low self-esteem or relatively weak personal identity to the desire for emotional congruency, because feeling temporarily disrespected is only a major problem if you feel it implies certain things about you in the grand scheme of things
  6. I'm also tempted to draw a line from extroversion to the desire for emotional congruency, because my experience suggests that the more extroverted you are, the more passionate you tend to get about things in general, which would correlate with the 'strength of emotion' part (NB: my subject pool has a major confound in that almost all the introverts I know are NT types on the MBTI)
  7. There's a psych concept called 'locus of control': if you perceive control as being outside yourself, you're going to feel helpless, like you have no say over your life, whereas if you perceive it as internal, you feel like you have agency, and so forth. I'm going to guess that there's a similar sort of 'locus of identity' concept, where if your sense of self is anchored on a small number of external things like 'is a good parent' or 'Star Trek fan', then you're going to feel massively threatened if one of those things is challenged in some way, such as getting into a fight with your adult child or hearing someone talk about why Star Trek kind of sucks in some ways. Whereas if your identity is more diffuse (parent + fan + athlete + writer +...), or you happen to be one of those lucky people who don't need any kind of external validation at all, then a threat to one of the things you like isn't going to faze you so much.
  8. And obviously the more threatened you feel the stronger your emotional reaction to the threat and the more important it becomes to you that other people at the very least acknowledge your emotions
  9. But none of this fully explains to me why there's the split between belief-congruency and emotional-congruency verbal fight styles. I'm fairly neurotic so it's not like I haven't had my share of strong negative emotions. So why haven't I ever had the urge to start saying hurtful things to get a rise out of the other person?
erratio: (Default)
2014-04-06 11:53 am

Belief congruency vs emotional congruency

A while back I read this post. Short version: when people get into verbal fights, their styles can be roughly split into truth-shouters, who say the truths that they would normally hold back or find themselves unable to express, versus cutlery-loaders, who say all kinds of things that may or may not be true in order to blow off steam and get a reaction out of the other person* - in effect they load whatever's handy into their cannons and fire it off, hence the term. I think there might be a more useful way to reframe the concepts: truth-shouters are aiming to make people know/believe the same things as them, which is why, when things get stressful, the uncomfortable facts start coming out. Cutlery-loaders, on the other hand, are aiming to make people feel the same things as them, which is why, when they're angry/upset, they'll say whatever they think will cause the other person to feel a similar emotion, and often get even more upset if the other person doesn't take the bait, because it makes them feel like their emotions are being treated as invalid or an overreaction.**

I can't help wondering whether these are more general interaction styles and are just a lot more obvious during arguments because those tendencies get blown up to several times their usual size.

* Obviously this is a bit of a simplification. Lots of people are not purely one or the other, there are probably styles that don't allow neat categorisations, etc. But I think it's still a useful abstraction

** It's probably obvious from my explanations that I'm a truth-shouter, hence my less-than-charitable description of cutlery-loaders.

The insight for this post came from a Facebook argument where I ended up being accused of acting as if the other person wasn't entitled to their emotions (which was my own fault really - I didn't share their outrage and instead jumped to objecting to part of the factual content of their post). During the ensuing exchange they expressed a view that can be summarised as '[bad thing] happened to me, and I hope it starts happening to others so that the situation will be addressed before [worse thing] happens to me'. After applying the principle of charity, this reads to me as '[bad thing] has caused me to worry about [worse thing], and I wish other people felt the same way as me, because then they would take action to help avert the chances of [worse thing] happening'. But on first reading, boy did that sentiment get my hackles up.

erratio: (Default)
2013-06-17 12:04 pm

Failures in theory of mind

(Long time, no post, etc. Maybe one of these days I'll go into detail about what I've been up to for the last semester or so. Also, the key insight that led to this post is due to my friend N)

A friend of mine, B, used to suffer from terrible road rage. His girlfriend, L, felt so uncomfortable driving with him when he was like this that she put considerable time and effort into working out what was going on, since B is not typically an angry guy. Eventually, she realised that what was going on was that B wasn't seeing the other cars as vehicles containing living people with plans and emotions of their own, but as potential obstacles that sometimes moved in unpredictable ways to block his path.

One of my favorite bloggers had a very well-received post about a certain type of guy who approaches women like they're vending machines for sex, where he just needs to perform the right moves and say the right things, and lo and behold he'll get laid. When this doesn't happen he gets angry and bitter and talks about how he's such a Nice Guy but girls still aren't interested in him.

On hearing B's road rage story, it occurred to me that I have a similar failure mode when I'm socially anxious, where I treat the people around me as mysterious black boxes that require that I perform esoteric nonsensical social rituals in order to appease and become accepted by them, where any deviation from the rituals will be punished with immediate scorn and/or rejection*. Unsurprisingly, this way of thinking does not particularly aid me in my efforts to be liked and accepted.

In a post about abusive partners, one of the comments highlighted the way that the abused blame themselves, searching for the thing they did to deserve the punishment. When they think they've found it, they tell themselves that if they just stop doing that particular thing, their partner will stop abusing them. Inevitably the abuse happens again, because the thing they did that first time was at best a convenient excuse, at worst completely uncorrelated with the abusive behavior.

In all of these stories, a person with otherwise completely functional theory of mind is put in a stressful situation and, in response, loses the ability to think about what the other people in the interaction believe and desire. I'm not sure that calling it a failure in theory of mind is quite correct, though, since the classical failure mode for theory of mind is to assume that everyone shares the same information/desires/beliefs that you do, whereas in the situations here the failure seems to involve denying/forgetting that the other people in the interaction have meaningful internal states at all. I could call it objectification, except that the connotations of that term have drifted so far from its strict meaning that it's now completely useless for describing anything else.

Does anyone know if there's a better name for this phenomenon? Or if there's any literature on it? So far I'm drawing a blank, but it seems like an area that ought to have been studied. If people lose their ability to model others when they're under stress, it seems like this would have huge implications for a lot of subfields.

* Yes, I know that's an exaggeration of what would actually happen, but you're welcome to try convincing my brain of this when I'm feeling socially anxious
erratio: (Default)
2011-05-07 09:47 pm

Transactional analysis, revisited

Some time back I read the seminal book on transactional analysis (TA), Games People Play by Eric Berne, on the recommendation of someone on Livejournal. While it was hobbled by a 1960s understanding of mental illness, there were a couple of ideas in the book that I thought were brilliant:

* Its description of 'strokes', as in attention that other people pay to you. So if you're used to having a long conversation with someone at work and one day they ignore you, it's normal to feel upset because you're not getting your customary strokes from them. And a large part of social interaction is trying to get people to give you strokes. These days I would describe 'strokes' as being a subset of status signalling, since status is all about being able to get people to pay attention to you at will.

* Its description of the games that people play. Even just looking at the names of the games on the Wikipedia page, it's easy to recognise several common patterns of behaviour that are frustrating to deal with, such as Why Don't You/Yes But, in which someone poses as though seeking advice for a problem, when in reality they 'win' the game by exhausting everyone's ideas of how to help them, finding a reason to reject every suggestion. Or If It Weren't For You, where they blame someone else for obstructing their (otherwise assured) success in some task.

Reading the Wiki article today, I'm inclined to think that the current evolution of TA thinking can be found in Schema Therapy, which makes the same assertion that people can pick up maladaptive ways of thinking and acting when they're children that influence them as adults and make their lives miserable. And the descriptions of the games are just too obviously accurate to be discarded, I think, since after having read the descriptions and recognised some of them in my own behaviour, I now wince and try to behave more sanely whenever I find myself playing any of them. But these points are largely tangential to my real reason for bringing TA up.

So today I was participating in a virtual meetup with some fellow Less Wrong members, and we were talking about belief in belief, specifically Sagan's dragon in the garage and how, if someone seems to have a glib answer to every test you raise that would prove or disprove their assertion, then you should be wary. And it occurred to me that the structure of the dragon in the garage parable bears a striking resemblance to Why Don't You/Yes But, where every piece of advice is met with immediate glib rejection and no attempt to take it on board. Mind you, I'm not actually sure what this means in terms of how you should deal with conversations of this type, other than to recognise it as fast as possible and stop playing; I just thought it was interesting how much the two dialogues resemble each other. Oh, and there's always the cheap shot to be made that part of why many theists go around announcing their belief in God is that they're using it as an elaborate method of getting attention.
erratio: (Default)
2008-11-09 07:10 am

Things I have learnt while studying for this exam

This exam being Perception and Cognition.

* If you have brain damage, and especially if you have amnesia, you'll spend your days having neuroscientists perform all kinds of experiments on you. Then again, if you had amnesia it would be a lot less tiresome being prodded by sciencey people all day.

* The consolidation theory of memory states that new memory traces are fragile and can be easily interfered with by subsequent memory formation. Therefore, the best thing you can do if you really want to retain a memory is to either go to sleep or get absolutely blind drunk right after you learn, since either way you won't be forming any new memories during the consolidation period. That's right, students now have an excuse to go get drunk after class.

* If you're a psychologist you get to do all kinds of really wacky things in the name of science. Like the context-sensitive memory retrieval experiment, where one group learnt the word lists while scuba diving 20 metres underwater, and the other group learnt in a regular classroom. By this theory, my own recall will be best if I do the exam either a) sitting on a moving bus, or b) playing a PS2 game (I was listening to lecture recordings).

* "superior colliculus" sounds funny when pronounced with a foreign (read: non-English) accent. On the plus side, it's not a word I'm likely to forget in the exam. Apparently it does multi-sensory integration and directs attention to new information and stuff.
erratio: (Default)
2008-04-01 11:22 am

Online habits

So many a year ago, a bunch of psychologists were doing experiments on operant learning in animals. At one point they performed the following experiment. The basic setup is a rat or bird in a cage, with a lever sticking out of one of the walls. In some versions of the experiment, the lever causes food to be released every time it's pressed. The second version involves releasing food every X presses, where X is some number bigger than one. The final version involves giving the food out at random intervals, so that the lever pressing has essentially no correlation with the food being given. In each case the animal learnt relatively quickly that pressing the lever correlated somehow with food. The interesting part comes when the experimenters turned the food off, so that pressing the lever had no effect. The first group, which had received food with every press, tried pressing for a short while and then gave up when no food was forthcoming. The second group persisted a bit longer, but also gave up eventually. But the last group continued pressing the lever forever, always hoping that maybe this time the food would come out.
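The extinction pattern above can be sketched with a toy simulation. This is my own illustration, not a model from the actual experiments: the schedules and the giving-up rule (the animal quits once it has gone longer without food than the longest dry streak it ever saw during training) are assumptions made purely to show why the random schedule is the hardest to extinguish.

```python
import random

def train_streaks(schedule, n_presses=500):
    """Longest run of unrewarded presses seen during training.
    schedule(i) -> True if press number i delivers food."""
    longest = streak = 0
    for i in range(n_presses):
        if schedule(i):
            streak = 0
        else:
            streak += 1
            longest = max(longest, streak)
    return longest

def presses_before_giving_up(longest_training_streak, patience=1):
    """Toy giving-up rule: keep pressing until the dry streak
    exceeds the longest one ever seen during training."""
    return longest_training_streak + patience

rng = random.Random(0)
schedules = [
    ("every press", lambda i: True),                   # continuous reinforcement
    ("every 5th press", lambda i: i % 5 == 4),         # fixed ratio
    ("random (p=0.2)", lambda i: rng.random() < 0.2),  # random/variable schedule
]

for name, sched in schedules:
    n = presses_before_giving_up(train_streaks(sched))
    print(f"{name:>15}: gives up after {n} unrewarded presses")
```

Under this rule the continuously rewarded animal quits almost immediately (it has never seen a dry streak), the fixed-ratio animal tolerates a handful of misses, and the randomly rewarded animal keeps going far longer, because long dry streaks were already normal during training.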

And this, my friends, is the model of email and online community checking that many people follow. A new message could arrive at any moment, thus even during periods when the rate of emails slows down we can't help checking regularly 'just in case'. It's all very insidious, and makes me think of just how much money is being made by those advertisers that advertise in email and online communities. Quite a lot, judging by the way that most people I know follow this operant behaviour.
erratio: (Default)
2007-12-28 11:52 am

Self-awareness, roleplaying, and doublethink

The other day I had my friend over, who also happens to be the GM for our roleplaying group. While he was over we discussed various people and their behaviours, and why it was they behaved that way. Since most of the people in our social groups are quite intelligent and rational (more T than F, in Type-speak), there isn't much room for justifying irrational behaviour. In almost all cases of irrational behaviour, it seems like the cause is a lack of self-awareness: if a person is rational, it's difficult for them to be acting irrationally on purpose, so their own motives must be unclear to them.

From there it seems like a short step to doublethink. I know that I have self-esteem issues, but a lot of the time I act as if I don't. I'm not precisely pretending that I don't have any issues (which would be denial); I'm just ignoring them enough that I can behave more normally. And I realised that I'm deliberately exercising a form of doublethink when I do this. I also make use of it for keeping other people's secrets (it's hard to explain how - I don't actually forget the secret, but I can sort of partition it off), for playing more than one player in a card game (same sort of thing: partition the knowledge off and only access it when necessary, with the drawback that I tend to be a worse player when I'm doing this, because I'm working so hard at not knowing what the other player knows that I often don't allow myself to make intelligent predictions), and in roleplaying, where I'm trying to take on a different personality altogether and ignore any out-of-game knowledge I may have, but only for a few hours at a time. I also know that I'm not the world's best roleplayer. I can have a complete understanding of my character and what he would know, but I tend to direct my character from a distance rather than actually become the character. So at the moment my doublethink extends to keeping two sets of knowledge side by side, but not to the point of being able to replace one with the other temporarily.

I now have a theory that to be very good at roleplaying you have to be very good at doublethink, in order to completely change your personality for a period of time and then revert back to your real personality at the end. I also have a less-reliable theory that to be really accomplished at doublethink you have to be quite self-aware, or else you end up either failing entirely at acting as though the new set of beliefs are true, or you end up believing your own inventions with no realisation of what you've done to yourself. Also, I find it quite interesting that if my theory holds up then it means that the better your self-awareness is, the more interesting tricks you can do with/against your own mind.