Sunday Reads #122: The Universe beyond your Filter Bubble.
What the song "Despacito" taught me about communicating better.
Welcome to the latest edition of Sunday Reads, where we look at a topic (or two) in business, strategy, or society, and use it to build our cognitive toolkit for business.
If you’re new here, don’t forget to check out the compilation of my best articles: The best of Jitha.me. I’m sure you’ll find something you like.
And here’s the last edition of my newsletter, in case you missed it: Sunday Reads #121: How about "Move fast but don't break things"?.
This week, let’s talk about peeking outside your filter bubble. We all know about filter bubbles, but we just don’t understand how strong they are. You can almost never see what’s outside your filter bubble, by definition. But there are ways to take a peek, and in doing so, become better communicators.
Here's the deal - Dive as deep as you want. Read my thoughts first. If you find them intriguing, read the main articles. If you want to learn more, check out the related articles and books.
[PS. As a special treat this week, I’ll offer you personalized book recommendations. Just hit reply to this email, and tell me what you do, your interest areas, and the last 5 books you loved. Some great book recos coming your way soon!]
1. The universe beyond your filter bubble.
I first grokked what a filter bubble really was when I watched the music video for "Despacito".
I already knew how filter bubbles form. How social media creates echo chambers. But it was only then that I understood it viscerally.
The video has 7.3 billion views today. When I came across it in late 2017, it had 3.5 billion views. I was completely surprised! Such a large proportion of the world had seen it, yet neither my wife nor I had heard of it from anyone we knew!
I told myself then, "Remember this surprise. This is what peeking outside your filter bubble feels like".
It reminded me of the Brexit referendum in 2016. A majority voted to leave, but the outcome shocked large numbers of people. I remember speaking to friends living in the UK at the time. They all said they knew exactly zero people who had voted in favor of Brexit.
The truth, of course, was that a filter bubble was in play. The people who voted for Brexit skewed far older than the population average. They were less active online, and their views were less visible.
I thought of filter bubbles again this week, when I saw the tweet below.
Oh yes, the filter bubble is real. And it's more than a novelty.
The filter bubble is pervasive, strong, AND invisible.
You don't have any idea what exists outside your filter bubble. And you can't see where your filter bubble ends, and where the adjacent one starts.
Unknown unknowns are manageable in simple models. But not when it's the rest of the universe!
Scott Alexander shows how pervasive filter bubbles are, in I can tolerate anything except the outgroup:
There are certain theories of dark matter where it barely interacts with the regular world at all, such that we could have a dark matter planet exactly co-incident with Earth and never know. Maybe dark matter people are walking all around us and through us, maybe my house is in the Times Square of a great dark matter city, maybe a few meters away from me a dark matter blogger is writing on his dark matter computer about how weird it would be if there was a light matter person he couldn’t see right next to him.
This is sort of how I feel about conservatives.
I don’t mean the sort of light-matter conservatives who go around complaining about Big Government and occasionally voting for Romney. I see those guys all the time. What I mean is – well, take creationists. According to Gallup polls, about 46% of Americans are creationists. Not just in the sense of believing God helped guide evolution. I mean they think evolution is a vile atheist lie and God created humans exactly as they exist right now. That’s half the country.
And I don’t have a single one of those people in my social circle. It’s not because I’m deliberately avoiding them; I’m pretty live-and-let-live politically, I wouldn’t ostracize someone just for some weird beliefs. And yet, even though I probably know about a hundred fifty people, I am pretty confident that not one of them is creationist.
Odds of this happening by chance? 1/2^150 = 1/10^45 = approximately the chance of picking a particular atom if you are randomly selecting among all the atoms on Earth.
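Scott Alexander's back-of-envelope figure is easy to verify. Here's a quick sketch, using his simplification that 46% is roughly one-half and assuming each of the 150 acquaintances is an independent draw from the population:

```python
import math

# If ~half the population is creationist, the chance that none of
# 150 independently-drawn acquaintances is one:
p_all_non_creationist = 0.5 ** 150
print(p_all_non_creationist)  # ~7e-46, on the order of 1/10^45

# Sanity check via logarithms: log10(2^150) = 150 * log10(2) ~ 45.15,
# so 2^150 ~ 1.4 * 10^45, matching his "1/10^45" figure.
print(150 * math.log10(2))
```

Of course, the independence assumption is exactly what the filter bubble breaks: your acquaintances are anything but a random sample, which is the whole point.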
Happens to me too all the time. Even today, I learn so many new things about India. The country where I’ve spent most of my life.
OK, but why is this important?
It's important because our expectations are molded by our experiences.
What we expect from others in a situation, might be very different from what their experiences tell them to do.
I'll give you a simple example. On Twitter, any discussion of capitalism vs. socialism provokes extreme emotions. On both sides. But as someone who grew up in a mixed-economy country during an era of liberalization, I don't get it. Sure, I understand the logic. But I don't understand the strength of emotion. I grew up in a bubble, and this wasn't a big topic in that bubble 🤷♂.
This is big picture stuff, yes. But what happens in the macro, happens in the micro as well.
It's why people from big companies struggle in startups. It's why MNCs transplant people into new countries, rather than hiring local talent.
The impact of filter bubbles gets more and more powerful as the world gets larger and more diverse.
More stuff happens outside our own bubbles, and we know less and less of it.
This warps our understanding of the world. You can’t trust your intuition anymore. As Eliezer Yudkowsky says in Availability:
The Availability Bias is judging the frequency or probability of an event by the ease with which examples of the event come to mind…
Compared to our ancestors, we live in a larger world, in which far more happens, and far less of it reaches us—a much stronger Selection bias, which can create much larger availability biases…
Using availability seems to give rise to an absurdity bias; events that have never happened are not recalled, and hence deemed to have probability zero…
The wise would extrapolate from a memory of small hazards to the possibility of large hazards. Instead, past experience of small hazards seems to set a perceived upper bound on risk.
There is always some "Inferential distance".
If you and I don't have the same exact information about a topic, there's always some "inferential distance" between us.
What I "assume as a given" for my assertion might not be obvious. Unless I spell it out, I can't expect you to come to the same conclusion.
Yudkowsky again, in Illusion of Transparency and Expecting Short Inferential Distances:
Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think…
In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else.
In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises…
When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain…
Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.
A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.
Let's read that again:
A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.
This happens to me all the time. Argue with vehemence for 30 min, only to realize we’ve been talking past each other the whole time. The words are the same, but the meanings are completely different.
We often jump to question people's values, rather than first considering their information. Why don't we think, "Maybe, just maybe, she and I aren't speaking from the same context"?
It's like the question, "If a tree falls in a forest and no one is around to hear it, does it make a sound?"
It depends on what you mean by "sound".
If you mean "acoustic vibrations in the air", the answer is yes.
If you mean "auditory processing in brains", the answer is no.
There, I just saved you 3 hours of philosophical debate.
Something similar often happens in the workplace.
The company where I work has offices in 15 countries. When working with people from multiple countries and cultures, even simple idioms don't translate well. So you need to be specific, clear, and precise. Explain your ask in a little more detail than you would otherwise.
It's not because the other person needs the explanation. It's to make sure you're both on the same page of the same book.
OK, how do we solve this?
I've found it helpful to keep four principles in mind.
#1: First, recognize that the filter bubble exists. And plan for it in your communication.
Specificity is a superpower, as Liron says in his Specificity Sequence.
Specificity is not about asking for a definition. Specificity means using examples.
If someone says Uber exploits drivers, ask how exactly it exploits its drivers.
When you hear a claim that sounds meaningful, but isn’t 100% concrete and specific, the first thing you want to do is zoom into its specifics.
To activate the power of specificity, all you have to do is ask yourself the question, “What’s an example of that?” Or more bluntly, “Can I be more specific?” And then you unleash a ton of power in a surprisingly broad variety of domains.
To be specific, to really explain something, go down the ladder of abstraction. Go towards concrete, and away from abstract concepts. Dive all the way down to the bottom of the ladder.
#2: Be less cocksure of your opinions.
I was a big proponent of “Strong opinions, weakly held”. Not anymore. Because it puts the onus on the other person to help you correct yourself. Why should they help you, when you're being insufferable?
Instead, better to have a Confidence Level for your opinions. The moment you do that, you become less sure.
If the probability of the sun rising tomorrow is not 100%, then how can you be so sure that the person you’re speaking with has bad intentions?
Cedric Chin talks about this in ‘Strong Opinions, Weakly Held’ Doesn't Work That Well.
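To make "a Confidence Level for your opinions" concrete, here's a minimal Bayesian sketch. The numbers are invented purely for illustration:

```python
def bayes_update(prior: float, p_evidence_if_right: float,
                 p_evidence_if_wrong: float) -> float:
    """Posterior P(my opinion is right | new evidence) via Bayes' rule."""
    numerator = p_evidence_if_right * prior
    denominator = numerator + p_evidence_if_wrong * (1 - prior)
    return numerator / denominator

confidence = 0.9  # a "strong opinion": 90% sure I'm right

# A colleague who's right about two-thirds of the time disagrees with me.
# P(they disagree | I'm right) = 1/3; P(they disagree | I'm wrong) = 2/3.
confidence = bayes_update(confidence, 1/3, 2/3)
print(round(confidence, 2))  # 0.82 -- still confident, but less sure
```

Notice that one piece of disagreeing evidence doesn't flip the opinion; it just nudges the confidence down. That's the appeal: an opinion with an explicit confidence level degrades gracefully, instead of being "strongly held" right up until it shatters.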
#3: Build diversity in your circles and networks.
Diversity is valuable for its own sake.
It makes you question your implicit assumptions about people, and learn new things. It expands your bubble. It brings in more of reality.
#4: Be kind. Always take the Most Respectful Interpretation (MRI).
People have access to different information. So, when someone has a different opinion than you do, your first instinct should be to question their information (“Why would you think that?”), rather than their values ("You're a bad person who thinks racism is good").
It's also important to recognize other people's filter bubbles. Be generous in your interpretation of their values, and educate them.
Always take the Most Respectful Interpretation of the other person's statement, and work from there.
I spoke about "inferential distance" earlier. Try to cultivate "inferential range".
Be willing to entertain and explore ideas before deciding they are wrong.
2. The World of AI.
I came across this article from Ted Chiang: Why Computers Won’t Make Themselves Smarter.
It's an article about why fears of a "singularity" - i.e., of what will happen when machines develop the capability to improve themselves - are misplaced.
It's well-written, like all New Yorker and Ted Chiang prose. But I thought the content was OK, not great.
I was turned off a little bit because he uses two strawman arguments to make his case.
He uses the model of IQ to explain the logic of superintelligence researchers. And then shows that IQ is a flawed model for intelligence. But these researchers don't use IQ as a model for AI anyway.
He uses compiler bootstrapping as a model for how an AI could make itself smarter. And then shows why bootstrapping can't produce ever-better compilers ad infinitum. Again, not the same thing at all.
But he does write great science fiction. Check out Stories of Your Life and Others. The title story was adapted into the Hollywood movie Arrival, which I loved.
3. Fortnightly Funnies - "What gets measured gets managed" edition.
Regular readers of this newsletter know how much I love to find weird examples of "what gets measured gets managed."
In Sunday Reads #112: Oops I did it again…, I shared how Hong Kong's HKD 5000 incentive for positive COVID tests got misused.
A few more examples this week.
The origin stories of the Classics.
Do you know why Charles Dickens wrote such long-winded prose? Was it because of his deep imagination and keen sense of observation of everyday life?
Nope. It was because he was paid by the word.
And why did Alexandre Dumas write such choppy, fast-paced action?
Because he was paid by the line.
And when the papers wised up and stopped counting short lines, Dumas killed off a character who specialized in snappy dialogue!
What would you do for free food?
This was hilarious.
When a new sushi restaurant in Taiwan announced a free all-you-can-eat promotion for people with the characters "gui yu" (the Chinese characters for "salmon") in their names... the registrar's office was besieged by people requesting a name change.
One particularly enterprising person added 36 new characters to his name, including the characters for “abalone”, “crab” and “lobster”.
Full marks for long-term thinking 👏👏.
When you allow employees to expense only stationery purchases...
Everything becomes stationery.
Good Fortune Burger of Toronto has named its menu items after office supplies, so that customers can include them on expense reports:
Not making this up. Link.
That's it for this week! Hope you liked the articles. Drop me a line (just hit reply or leave a comment through the button below) and let me know what you think.
PS. I’m slowing down to a fortnightly newsletter for the next few weeks, as I’ve been busy on a couple of things, and haven’t been able to read (or write!) as much. Will be back to weekly soon!
As always, hope you’re staying safe, healthy, and sane.
Until next time,
Jitha