Don't Let Facebook's Emotional Manipulation Study Make You So Mad


Last week, Facebook announced it had conducted an experiment on some of its users without their knowledge or permission. 

Here’s how it worked. For one week, one group of users saw fewer positive posts from their friends in their news feed, while another group saw fewer negative posts. The idea was to test whether moods can be contagious across networks. The researchers believe the answer is yes: if suddenly everyone on your Facebook feed seems to be melancholy, you’re likely to write a slightly sadder post as well. 

News of the study spread over the weekend, and now people are very angry at Facebook, enough so that the lead researcher on the study has publicly apologized. But why?

The main thrust of the objection is that people say they don’t like to be emotionally manipulated. The problem is, that’s just not true.

Here’s one example. We watch TV just so that professional liars can pretend to be in love, or sad, or scared, because we want to trick our hearts into feeling those things too. And even during that lie that we’re enjoying, there are breaks where advertisers tell us shorter lies, not because they want to entertain us but because they want to manipulate us into buying dumb stuff.

Of course, consent and transparency in these manipulations are important, and Facebook’s experiment contained neither. On the other hand, the manipulations we receive from TV, film, or even a decent web series are so much more effective than what Facebook can do. In his apology, the lead researcher admitted as much:  “…the actual impact on people in the experiment was the minimal amount to statistically detect it -- the result was that people produced an average of one fewer emotional word, per thousand words, over the following week.” Contrast that with Friday Night Lights, which will make the most emotionally resilient person cry at least once per season. 

All that said, the more interesting problems with the Facebook study are about academic ethics. 

First, there’s the consent problem. You don’t do studies on humans who haven’t given informed consent. (Facebook’s Terms of Service say they might anonymously use your information for “research,” which is a far cry from telling people you might try to make them feel sad for science.) 

And over at Forbes, Kashmir Hill noted that Facebook didn’t submit the study to a university ethics review board for approval — the company just internally decided the experiment was ethically OK. Bad oversight! 

Lastly, it’s not clear the study was even well designed. The Atlantic’s Robinson Meyer talked to an expert who explained that Facebook’s methodology for telling happy posts from sad ones is fairly crude: 

Here are two sample tweets (or status updates) that are not uncommon:

“I am not happy.”
“I am not having a great day.”
An independent rater or judge would rate these two tweets as negative — they’re clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale.

But the LIWC 2007 tool doesn’t see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words “great” and “happy”) and +2 for negative (because of the word “not” in both texts).
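To see why that’s a problem, here’s a rough sketch in Python of the word-counting approach the expert is describing. It is not LIWC 2007 itself, and the word lists are made-up stand-ins rather than LIWC’s actual dictionaries, but it reproduces the failure mode on those two sample posts:

```python
import re

# Hypothetical stand-in word lists, not LIWC 2007's real dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "good"}
NEGATIVE_WORDS = {"not", "sad", "hate", "bad"}  # "not" counted as negative, as described above

def crude_sentiment(text: str) -> dict:
    """Count positive and negative word hits, ignoring negation and context."""
    tokens = re.findall(r"[a-z']+", text.lower())
    pos = sum(1 for t in tokens if t in POSITIVE_WORDS)
    neg = sum(1 for t in tokens if t in NEGATIVE_WORDS)
    return {"positive": pos, "negative": neg}

posts = ["I am not happy.", "I am not having a great day."]
totals = {"positive": 0, "negative": 0}
for post in posts:
    score = crude_sentiment(post)
    totals["positive"] += score["positive"]
    totals["negative"] += score["negative"]

print(totals)  # {'positive': 2, 'negative': 2}
```

Both posts come out looking “mixed” rather than plainly negative, because the scorer sees “happy” and “great” without noticing the “not” in front of them.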

It’s hard not to like this last idea: that Facebook may’ve angered the entire internet over a study that could be fundamentally bunk.