
This blog won’t often feature math, but in this case I think it’s for a good cause. It’s hard to quantify levels of understanding between people; you end up with squishy language like “a lot” or “pretty well”. But I’m interested in how people understand each other in multilingual situations, and I think there’s a basic equation that can describe this kind of interaction.

Let me explain. I spent the last week in Switzerland and Italy, in settings where my functional fluency ranged from pretty good (business-related high German) to moderate (social Swiss German) to minimal (social French) to basically nothing (academic Italian). So I was thinking a bit about how to describe the level of understanding between people as a product of their individual levels of fluency. Bear with me for a second.

Let level of fluency (f) be a number between 0 and 1, with 0 being no ability and 1 being perfect fluency (whatever that is – maybe the ability to express and understand any thought).

Understanding u between a number of persons a, b, …, n can be approximated by:

u = f_a × f_b × … × f_n

This is a very basic formula, of course, but I think it gets at a number of realities that will be clear to anyone who has spent time in a multilingual context:

  • Communication between two people of middling ability (say 0.5) is much worse than either of them alone (in this example, it ends up at 0.25)
  • If you’re a beginner talking to a native speaker of the language, they can help you understand to the best of your ability, but it will be far less than theirs (e.g. 0.3 and 0.95 come out to 0.285)
  • Groups make things harder. The level of understanding for the group will be lower than that of any individual in it. Throw in one or two people who don’t speak the language well, and group understanding falls apart completely. For example, if I have dinner with a French-speaking family, the group’s level of understanding drops to roughly my level.
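As a quick sketch, the product model and the examples above can be written in a few lines of Python (the function name `understanding` is my own invention, not anything standard):

```python
from math import prod

def understanding(*fluencies):
    """Approximate mutual understanding as the product of the
    individual fluency scores, each between 0 and 1."""
    return prod(fluencies)

# Two middling speakers: understanding is worse than either alone.
print(understanding(0.5, 0.5))          # → 0.25

# A beginner (0.3) with a near-native speaker (0.95).
print(understanding(0.3, 0.95))         # ≈ 0.285

# A group: one weak speaker drags the whole table down.
print(understanding(0.95, 0.9, 0.9, 0.3))  # ≈ 0.231
```

Because every fluency is at most 1, each extra person can only lower the product, which is exactly the “groups make things harder” effect.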

Anecdotally, this model seems to be robust and predictive for n < 5, but I have a feeling that odd things start happening with larger values. It also breaks down if the fluency values aren’t calibrated carefully (i.e. if you just throw in a bunch of 1s, the product never drops). But I think it has some descriptive power for small groups, and in any case it’s a different way of trying to quantify something that’s inherently subjective.

So, does anyone have an improvement on this model? Further thoughts? Disagreements?