Emotional Contagion on Facebook *OR* Facebook Did What?

Matthew Davis Jun 29, 2014

The Twitternets are ablaze with news that Facebook conducted an experiment on 689,003 of its users to investigate the hypothesis that emotions are contagious via their social network.

First, the experiment:

Posts to users' news feeds were evaluated for positive or negative emotional content using Linguistic Inquiry and Word Count (LIWC), developed by James Pennebaker and colleagues (full disclosure: Dr. Pennebaker was my undergraduate thesis advisor and I've authored papers and presentations with him). LIWC is a very well-validated method for evaluating the psychological dimensions of a communication sample, although it can be something of a dull razor (i.e., effect sizes from LIWC observations tend to be very conservative). The Facebook team used LIWC to quantify posts in terms of their positive and negative emotional word content.
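If you're curious what LIWC-style scoring looks like mechanically, here's a minimal sketch in Python. The tiny word lists are invented stand-ins for illustration; the real LIWC dictionaries are proprietary and contain thousands of entries and word-stem patterns.

```python
import re

# Invented mini-lexicons; real LIWC categories are far larger and
# include stem patterns like "happi*".
POSITIVE = {"happy", "love", "great", "good", "fun"}
NEGATIVE = {"sad", "angry", "hate", "awful", "hurt"}

def score_post(text):
    """Return the percent of words falling in each emotion category."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # guard against empty posts
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return {"pos_pct": 100 * pos / total, "neg_pct": 100 * neg / total}

print(score_post("Had a great day, love you all!"))
# {'pos_pct': 28.57..., 'neg_pct': 0.0}
```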

Then came the manipulation: users' news feeds were filtered to contain either less positive emotional content or less negative emotional content.

And the observation: users responded with more or less positive emotional content in their own posts. A little. And by a little, I mean effect sizes in the hundredths-to-tenths-of-a-percent range. Still, with nearly 700,000 subjects, the team could claim a statistically significant effect, and thus evidence that emotional content can act contagiously via social media.
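To see how an effect that tiny can still clear the significance bar, here's a toy simulation in Python. The numbers are invented for illustration (they are not the study's actual figures), but the scale is the point: with hundreds of thousands of subjects per group, a hundredth-of-a-standard-deviation shift comes out "significant."

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 350_000  # subjects per group, roughly the study's scale

# Percent-positive-words per user: invented means and spread.
control = rng.normal(loc=5.25, scale=5.0, size=n)
treated = rng.normal(loc=5.20, scale=5.0, size=n)  # a 0.05-point drop

t, p = stats.ttest_ind(control, treated)
d = (control.mean() - treated.mean()) / 5.0  # Cohen's d
print(f"t = {t:.2f}, p = {p:.2g}, d = {d:.3f}")
# Prints something like: t = 4.2, p = 2.7e-05, d = 0.010
```

A Cohen's d of 0.01 is, by any conventional standard, a trivially small effect; the headline here is the sample size, not the psychology.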

So, considering the following three things:

1) knowing that LIWC can be somewhat conservative in scoring content, and

2) the design contained a couple of convolutions of the root effects (e.g., the outcome was measured as the percent of positive/negative emotion words, even though total word count itself changes when the news feeds are manipulated; see the sketch after this list), and

3) my personal prior expectation going into the experiment would be, "yeah, sure, I bet my friend's mood predicts my mood, especially when I'm exposed to their mood,"

I'm really not surprised at all that they found this effect, albeit teeeeeny in size, and the true effect is probably bigger than what this design measured.
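And to make point (2) above concrete, here's the denominator issue in miniature. The figures are made up, but the arithmetic is the point: a user who produces exactly the same number of positive words looks "more positive" in percentage terms simply because the manipulation reduced how much they wrote overall.

```python
def pos_pct(pos_words, total_words):
    """Percent of a user's words that are positive-emotion words."""
    return 100 * pos_words / total_words

# Same positive-word output (4 words), but the manipulated user
# posts less overall, so the percentage shifts with no change in
# positive-word production. All figures invented for illustration.
print(pos_pct(4, 80))  # 5.0   -- baseline: 80 words written
print(pos_pct(4, 70))  # ~5.71 -- manipulated: 10 fewer words written
```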

So, that's what Facebook did. We can now turn to the Intarwebs and find out what Facebook did.

The response was overwhelmingly negative, including your standard-issue jokes about effect size and suicide.

We got your loaded article titles; for example, Kashmir Hill, writing for Forbes, went with the headline "Facebook Manipulated 689,003 Users' Emotions For Science." Okay, sounds technically accurate, but the author does seem a bit upset that Facebook may "play mind games with us for science."

Katy Waldman is more direct in her column at Slate, leading with "Facebook's Unethical Experiment."

Law professor James Grimmelmann blogs in a bit more detail about the idea of informed consent, and how that's different from regular ol' consent. And how the Facebook Data Use Policy mentions research as an objective, but does not come close to describing the procedures, risks to participants, and so forth that constitute proper informed consent for human subjects.

And one of my favorite Twitters, Ian Holmes, highlighted the difference between "science" and "data obtained unethically but still useful," positing that if one wants a "pat on the head for being a scientist," then one "gets judged by scientific ethics."

Meanwhile, Tal Yarkoni offers a rare presentation of the other perspective in a blog post called "In Defense of Facebook," arguing that Facebook undoubtedly conducts many experiments like this, and that many of them likely manipulate users' emotions, intentionally or not.

My summary here is thus: Facebook did something scientific, as evidenced by publishing the results in a scientific journal. Their usage policy covers the use of user data, for sure. But in this case, they made manipulations, not just observations. And yes, marketing researchers manipulate users all the time. But here, they were explicitly trying to manipulate emotions, which is at the least... uhmmm... emotionally... charged. And, to top it off, they passed their marketing research off as science. Which it looks like to me, except for not upholding the ethical standards expected of scientific research on human subjects.

In this case, I'm inclined to suggest a response of "no harm, no foul." But as scientific endeavors privatize, and as the ability to conduct all domains of research on humans grows (does psychology without informed consent scare you? what about the coming deluge of genomic and healthcare data sure to amass in the private sector's hands?), what standards will be asked of institutions that are not asking federally funded questions?

Article: http://www.pnas.org/content/111/24/8788.full