Nuance In Numbers

In her refreshingly nuanced TED Talk, data journalist Mona Chalabi argues that we “have to move beyond either blindly accepting or blindly rejecting statistics.” Instead, we must “be able to tell which numbers are reliable and which ones aren’t.”

Chalabi “worked in a statistical department that’s part of the United Nations,” on a project “to find out how many Iraqis had been forced from their homes as a result of the war, and what they needed.” The work was “incredibly difficult,” and every day the team was “making decisions that affected the accuracy of the numbers — decisions like which parts of the country to go to, who to speak to, which questions to ask.” When she “started to feel really disillusioned,” she became “determined that the one way to make numbers more accurate is to have as many people as possible be able to question them.” That’s how Chalabi “became a data journalist.” Her job is “finding the data sets” that underlie statistical claims “and sharing them with the public.”

Chalabi explains that a lot of data sets are unreliable. Thus, if statistics make you “immediately feel a little bit wary, that’s OK, that doesn’t make you some kind of crazy conspiracy theorist.” Rather, it “makes you skeptical,” and “when it comes to numbers, especially now, you should be skeptical.” (For example, it makes sense for us to “roll our eyes” at “claims like, ‘9 out of 10 women recommend this anti-aging cream.’”)

However, “what’s different now,” Chalabi contends, “is people are questioning statistics” that don’t come “from a private company,” but come “from the government.” Approximately “4 out of 10 Americans distrust the economic data that gets reported by government. Among supporters of President Trump it’s even higher; it’s about 7 out of 10.” These people “don’t find statistics a kind of common ground, a starting point for debate.” Instead, they feel that “statistics are elitist, maybe even rigged; they don’t make sense and they don’t really reflect what’s happening in people’s everyday lives.” Indeed, “there are actually moves in the US right now to get rid of some government statistics altogether.”

Chalabi warns that this is highly problematic. “Statistics come from the state; that’s where they got their name.” There are many “reasons why government statistics are often better than private statistics.” First and foremost, “private companies don’t have a huge interest in getting the numbers right, they just need the right numbers.” On the other hand, “government statisticians aren’t like that;” they are “impartial, not least because most of them do their jobs regardless of who’s in power. They’re civil servants.”

Critically, government statistics are intended “to better measure the population in order to better serve it.” It makes sense: “How can a government create fair policies if they can’t measure current levels of unfairness? … How can we legislate on health care if we don’t have good data on health or poverty? How can we have public debate about immigration if we can’t at least agree on how many people are entering and leaving the country?” When it comes to these types of questions, if we “give up on the numbers altogether, … we’ll be making public policy decisions in the dark, using nothing but private interests to guide us.”

So how, then, do we “question government statistics,” as Chalabi urges us to do? “You just keep checking everything. Find out how they collected the numbers. Find out if you’re seeing everything on the chart you need to see.” How do we do this? Chalabi gives us three particular questions we should ask.

First: “Can you see uncertainty?” Statistics and data visualizations often “overstate certainty,” which “can numb our brains to criticism.” For example, Chalabi “used actual data from an academic study” to show that “using polls to predict electoral outcomes is about as accurate as using the moon to predict hospital admissions.” That is, electoral predictions from polls aren’t very accurate. “But you wouldn’t necessarily know that to look at” the way these predictions are presented in the media. Chalabi offers several ways to reimagine data in order to convey uncertainty and extract meaningful conclusions and insights, with colorful and amusing examples.
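Chalabi doesn’t prescribe a formula, but one common way to make a poll’s uncertainty visible is to publish its margin of error next to the headline number. Here is a minimal sketch with made-up figures; note that it captures sampling error only, which is often the smallest part of a poll’s real uncertainty.

```python
import math

# Hypothetical poll numbers, made up purely for illustration:
# 52% of 800 respondents favor one candidate.
p_hat = 0.52   # sample proportion
n = 800        # sample size

# Rough 95% margin of error for a proportion (normal approximation).
# This reflects only sampling error; it says nothing about
# non-sampling problems such as self-selection or biased wording.
se = math.sqrt(p_hat * (1 - p_hat) / n)
margin = 1.96 * se

print(f"Estimate: {p_hat:.0%} ± {margin:.1%}")
print(f"Plausible range: {p_hat - margin:.1%} to {p_hat + margin:.1%}")
# Estimate: 52% ± 3.5%
# Plausible range: 48.5% to 55.5% -- i.e., possibly a toss-up.
```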

The second question we “should be asking ourselves to spot bad numbers is: Can I see myself in the data?” Too often, statistics “don’t really tell the story of who’s winning and who’s losing from national policy.” We become “frustrated with global averages when they don’t match up with our personal experiences.” When you ask “where you fit in,” the point is “to get as much context as possible.” It’s “about zooming out from one data point, like the unemployment rate is five percent, and seeing how it changes over time, or seeing how it changes by educational status, … or seeing how it varies by gender.” Chalabi summarizes: “The axes are everything; once you change the scale, you can change the story.”
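To make the “can I see myself in the data?” idea concrete, here is a small sketch showing how a single national average can mask very different rates across groups. The group names and figures are invented for illustration, not taken from Chalabi’s talk.

```python
# Invented figures: one overall unemployment rate hides very
# different rates for different groups.
groups = {
    "less than high school": {"labor_force": 10_000, "unemployed": 800},
    "high school diploma":   {"labor_force": 30_000, "unemployed": 1_700},
    "bachelor's or higher":  {"labor_force": 20_000, "unemployed": 500},
}

total_labor_force = sum(g["labor_force"] for g in groups.values())
total_unemployed = sum(g["unemployed"] for g in groups.values())
print(f"Overall rate: {total_unemployed / total_labor_force:.1%}")  # 5.0%

for name, g in groups.items():
    print(f"  {name}: {g['unemployed'] / g['labor_force']:.1%}")
# less than high school: 8.0%
# high school diploma:   5.7%
# bachelor's or higher:  2.5%
```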

The third and final question is: “How was the data collected?” Chalabi uses a powerful example to illustrate this point. In 2015, a widely reported poll found that “41 percent of Muslims in this country support jihad.” It was “obviously pretty scary.” But it was “an opt-in poll,” meaning “anyone could have found it on the internet and completed it,” and there is “no way of knowing if those people even identified as Muslim.” Moreover, “there were 600 respondents in that poll,” while “there are roughly three million Muslims in this country,” meaning that “the poll spoke to roughly one in every 5,000 Muslims.”
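The arithmetic behind that “one in every 5,000” figure is easy to check from the numbers quoted above:

```python
# Figures quoted above: 600 respondents, roughly 3 million Muslims.
respondents = 600
population = 3_000_000

print(f"Roughly 1 in every {population // respondents:,} Muslims")       # 1 in every 5,000
print(f"Share of the population surveyed: {respondents / population:.2%}")  # 0.02%
```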

Far more troubling, “journalists who reported on that statistic ignored a question … on the survey that asked respondents how they defined ‘jihad.’” Most respondents “defined it as ‘Muslims’ personal, peaceful struggle to be more religious,’” and “only 16 percent defined it as ‘violent holy war against unbelievers.’” Chalabi stresses “the really important point: based on those numbers, it’s totally possible that no one in the survey who defined jihad as violent holy war also said they support it. Those two groups might not overlap at all.” Yet the poll’s conclusions were represented as supporting a very different proposition—one that could neither be proved nor refuted by the data reported.
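Chalabi’s overlap point can be verified with back-of-the-envelope arithmetic on the figures quoted above. The sketch below assumes both percentages apply to the same 600 respondents, which the article does not state explicitly.

```python
# Figures quoted above: 600 respondents, 41% "support jihad",
# 16% defined jihad as "violent holy war against unbelievers".
n = 600
support = round(0.41 * n)              # about 246 respondents
violent_definition = round(0.16 * n)   # about 96 respondents

# By inclusion-exclusion, the smallest possible overlap of two groups
# drawn from the same n respondents is max(0, |A| + |B| - n).
min_overlap = max(0, support + violent_definition - n)
print(min_overlap)   # 0 -- the two groups need not overlap at all
```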

Where does this leave us? Where we began. In Chalabi’s words, we “have to move beyond either blindly accepting or blindly rejecting statistics.” Instead, we must “be able to tell which numbers are reliable and which ones aren’t.”

What do you think? What’s your favorite TED Talk? Join the discussion and let me know. I want to hear from you!

