Articles

Did Facebook and PNAS violate human research protections in an unethical experiment?

Ed. Note: See the addendum at the end of this post.

I daresay that I’m like a lot of you in that I spend a fair bit of time on Facebook. This blog has a Facebook page (which, by the way, you should head on over and Like immediately). I have a Facebook page, and several of our bloggers, such as Harriet Hall, Steve Novella, Mark Crislip, Scott Gavura, Paul Ingraham, Jann Bellamy, Kimball Atwood, John Snyder, and Clay Jones, have Facebook pages. It’s a ubiquitous part of life, and arguably part of the reason for our large increase in traffic over the last year. There are many great things about Facebook, although there are a fair number of issues as well, mostly having to do with privacy and with Facebook’s reliance on automated reporting scripts that cranks like antivaccine activists can easily abuse to silence skeptics refuting their pseudoscience. Also, of course, every Facebook user has to realize that Facebook makes most of its money through targeted advertising directed at its users; the more its users reveal, the better it is for Facebook, which can then target its advertising more precisely.

Whatever good and bad things about Facebook there are, however, there’s one thing that I never expected the company to be engaging in, and that’s unethical human subjects research, but if stories and blog posts appearing over the weekend are to be believed, that’s exactly what it did, and, worse, it’s not very good research. The study, entitled “Experimental evidence of massive-scale emotional contagion through social networks“, was published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS), and its corresponding (and first) author is Adam D. I. Kramer, who is listed as being part of the Core Data Science Team at Facebook. Co-authors include Jamie E. Guillory at the Center for Tobacco Control Research and Education, University of California, San Francisco and Jeffrey T. Hancock from the Departments of Communication and Information Science, Cornell University, Ithaca, NY.

“IRB? Facebook ain’t got no IRB. Facebook don’t need no IRB! Facebook don’t have to show you any stinkin’ IRB approval!” Sort of.

There’s been a lot written over a short period of time (as in a couple of days) about this study. Therefore, some of what I write will be my take on issues others have already covered. However, I’ve also delved into some issues that, as far as I’ve been able to tell, no one has covered, such as why the structure of PNAS might have facilitated a study like this “slipping through” despite its ethical lapses.

Before I get into the study itself, let me just discuss a bit where I come from in this discussion. I am trained as a basic scientist and a surgeon, but these days I mostly engage in translational and clinical research in breast cancer. The reason is simple. It’s always been difficult for all but a few surgeons to be both a basic researcher and a clinician and at the same time do both well. However, with changes in the economics of even academic health care, particularly the ever-tightening drive for clinicians to see more patients and generate more RVUs, it’s become darned near impossible. So clinicians who are still driven (and masochistic) enough to want to do research have to go where their strengths are, and make sure there’s a strong clinical bent to their research. That involves clinical trials, in my case cancer clinical trials. That’s how I became so familiar with how institutional review boards (IRBs) work and with the requirements for informed consent. I’ve also experienced what most clinical researchers have experienced, both personally and through interactions with colleagues, and that’s the seemingly never-ending tightening of the requirements for what constitutes true informed consent. For the most part, this is a good thing. However, at times IRBs do go a bit too far, particularly in the social sciences. This PNAS study, however, is not one of those cases.

The mile-high view of the study is that Facebook intentionally manipulated the feeds of 689,003 English-speaking Facebook users between January 11 and 18, 2012 in order to determine whether showing more “positive” posts in a user’s Facebook feed would act as an “emotional contagion” and inspire the user to post more “positive” posts himself or herself. Not surprisingly, Kramer et al. found that showing more “positive” posts did exactly that, at least within the parameters as defined, and that showing more “negative” posts resulted in more “negative” posting. I’ll discuss the results in more detail and problems with the study methodology in a moment. First, here’s the rather massive problem: Where is the informed consent? This is what the study says about informed consent and the ethics of the experiment:

Posts were determined to be positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count software (LIWC2007) (9) word counting system, which correlates with self-reported and physiological measures of well-being, and has been used in prior research on emotional expression (7, 8, 10). LIWC was adapted to run on the Hadoop Map/Reduce system (11) and in the News Feed filtering system, such that no text was seen by the researchers. As such, it was consistent with Facebook’s Data Use Policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research. Both experiments had a control condition, in which a similar proportion of posts in their News Feed were omitted entirely at random (i.e., without respect to emotional content).

Does anyone else notice anything? I noticed right away, both from news stories and when I finally got around to reading the study itself in PNAS. That’s right. There’s no mention of IRB approval. None at all. I had to go to a story in The Atlantic to find out that apparently the IRB of at least one of the universities involved did approve this study:

Did an institutional review board—an independent ethics committee that vets research that involves humans—approve the experiment?

Yes, according to Susan Fiske, the Princeton University psychology professor who edited the study for publication.

“I was concerned,” Fiske told The Atlantic, “until I queried the authors and they said their local institutional review board had approved it—and apparently on the grounds that Facebook apparently manipulates people’s News Feeds all the time.”

Fiske added that she didn’t want the “the originality of the research” to be lost, but called the experiment “an open ethical question.”

This is not how one should find out whether a study was approved by an IRB. Moreover, news coming out since the story broke suggests that there was no IRB approval before publication.

Also, what is meant by saying that Susan Fiske is the professor who “edited the study” is that she is the member of the National Academy of Sciences who served as editor for the paper. What that means depends on the type of submission the manuscript was. PNAS is a different sort of journal, as I’ve discussed in previous posts regarding Linus Pauling’s vitamin C quackery, about which he published papers in PNAS back in the 1970s. Back then, members of the Academy could contribute papers to PNAS as they saw fit and in essence hand-pick their reviewers. Indeed, until recently, the only way that non-members could have papers published in PNAS was if a member of the Academy agreed to submit their manuscript for them, then known as “communicating” it—apparently these days known as “editing” it—and, in fact, members were supposed to take responsibility for having such papers reviewed before “communicating” them to PNAS. Thus, in essence, a member of the Academy could get nearly anything he or she wished published in PNAS, whether written by the member or by a friend. Normally, this ability wasn’t such a big problem for quality, because getting into the NAS was (and still is) so incredibly difficult and only the most prestigious scientists are invited to join. Consequently, PNAS is still a prestigious journal with a high impact factor, and most of its papers are of high quality. Scientists know, however, that Academy members sometimes used it as a “dumping ground” to publish some of their leftover findings. They also know that, on occasion, when rare members fell for dubious science, as Pauling did, they could “communicate” their questionable findings and get them published in PNAS unless those findings were so outrageously ridiculous that even the deferential editorial board couldn’t stomach publishing them.

These days, submission requirements for PNAS are more rigorous. The standard mode is now called Direct Submission, which is still unlike that of other journals in that authors “must recommend three appropriate Editorial Board members, three NAS members who are expert in the paper’s scientific area, and five qualified reviewers.” Not very many authors are likely to be able to manage this. I doubt I could, and I know an NAS member who’s even a cancer researcher; I’m just not sure I would want to impose on him to handle one of my manuscripts. Now, apparently, what used to be “contributed by” submissions are referred to as submissions through “prearranged editors” (PE). A prearranged editor must be a member of the NAS:

Prior to submission to PNAS, an author may ask an NAS member to oversee the review process of a Direct Submission. PEs should be used only when an article falls into an area without broad representation in the Academy, or for research that may be considered counter to a prevailing view or too far ahead of its time to receive a fair hearing, and in which the member is expert. If the NAS member agrees, the author should coordinate submission to ensure that the member is available, and should alert the member that he or she will be contacted by the PNAS Office within 3 days of submission to confirm his or her willingness to serve as a PE and to comment on the importance of the work.

According to Dr. Fiske, there are only “a half dozen or so social psychologists” in the NAS out of over 2,000 members. Assuming Dr. Fiske’s estimate is accurate, my first guess was that this manuscript was submitted with Dr. Fiske as a prearranged editor because there is not broad representation in the NAS of the relevant specialty. Why that is so is a question for another day. Oddly enough, however, my first guess was wrong. This paper was a Direct Submission, as stated at the bottom of the article itself. Be that as it may, I remain quite shocked that PNAS doesn’t, as virtually all journals that publish human subjects research do, explicitly require authors to state that they had IRB approval for their research. Some even require proof of IRB approval before they will publish. Actually, in this case, PNAS clearly failed to enforce its own requirements, which call for authors of human subjects research to document both approval by an institutional review board and that informed consent was obtained from participants.

The authors did neither.

While it is true that Facebook itself is not bound by the federal Common Rule requiring IRB approval for human subjects research, because it is a private company that does not receive federal grant funding and was not, as pharmaceutical and device companies do, performing the research to support an application for FDA approval, the other two coauthors were faculty at universities that do receive a lot of federal funding and are therefore bound by the Common Rule. So it’s a really glaring issue that not only is there no statement of approval from Cornell’s IRB for Jeffrey T. Hancock, but there is also no statement that Jamie Guillory had approval from UCSF’s IRB. If there’s one thing I’ve learned in human subject ethics training at every university where I’ve had to take it, it’s that there must be IRB approval from every university with faculty involved in a study.

IRB approval or no IRB approval, the federal Common Rule lays out a checklist of requirements for informed consent. Some key requirements include:

  • A statement that the study involves research
  • An explanation of the purposes of the research
  • The expected duration of the subject’s participation
  • A description of the procedures to be followed
  • Identification of any procedures which are experimental
  • A description of any reasonably foreseeable risks or discomforts to the subject
  • A description of any benefits to the subject or to others which may reasonably be expected from the research
  • A statement describing the extent, if any, to which confidentiality of records identifying the subject will be maintained
  • A statement that participation is voluntary, that refusal to participate will involve no penalty or loss of benefits to which the subject is otherwise entitled, and that the subject may discontinue participation at any time without penalty or loss of benefits to which the subject is otherwise entitled

The Facebook Terms of Use and Data Use Policy contain none of these elements. None. Zero. Nada. Zip. They do not constitute proper informed consent. Period. The Cornell and/or UCSF IRB clearly screwed up here. Specifically, it screwed up by concluding that the researchers had proper informed consent. The concept that clicking on an “Accept” button for Facebook’s Data Use Policy constitutes anything resembling informed consent for a psychology experiment is risible—yes, risible. Here’s the relevant part of the Facebook Data Use Policy. It lists the usual uses of your personal data that you might expect, such as measuring the effectiveness of ads, delivering relevant ads to you, making friend suggestions, and protecting Facebook’s or others’ rights and property. However, at the very end of the list is this little bit, where Facebook asserts its right to use your personal data “for internal operations, including troubleshooting, data analysis, testing, research and service improvement.”

Now, I realize that in the social sciences, depending on the intervention being tested, “informed consent” standards might be less rigorous, and there are examples where it is thought that IRBs have overreached in asserting their hegemony over the social sciences, a concern that dates back several years. What Facebook has, however, is not “informed consent” by any definition. As has been pointed out, this is not how even social scientists define informed consent, and it’s certainly nowhere near how medical researchers define it. Rather, as Will over at Skepchick correctly points out, the Facebook Data Use Policy is more like a general consent than informed consent, similar to the sort of consent form a patient signs before being admitted to the hospital:

Or, if we want to look at biomedical research, which is the kind of research that inspired the Belmont Report, Facebook’s policy is analogous to going to a hospital, signing a form that says any data collected about your stay could be used to help improve hospital services, and then unknowingly participating in a research project where psychiatrists are intentionally pissing off everyone around you to see if you also get pissed off, and then publishing their findings in a scientific journal rather than using it to improve services. Do you feel that you were informed about being experimented on by signing that form?

That’s exactly why “consents to treat” or “consents for admission to the hospital” are not consents for biomedical research. There are minor exceptions. For instance, some consents for surgery include consent to use any tissue removed for research.

In fairness, it must be acknowledged that there are criteria under which certain elements of informed consent can be waived by an IRB. The relevant standard comes from §46.116 and requires that:

  • D1: The research involves no more than minimal risk to the subjects;
  • D2: The waiver or alteration will not adversely affect the rights and welfare of the subjects;
  • D3: The research could not practicably be carried out without the waiver or alteration; and
  • D4: Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

Here’s where we get into gray areas. Remember, all of these conditions have to apply before a waiver of informed consent can occur, and clearly they do not all apply here. D1, for instance, is likely true of this research, although from my perspective D2 is arguable at best, particularly if you believe that users of a commercial company’s service should have the right to know what is being done with their information. D3 is arguable either way. For example, it’s not hard to imagine sending out a consent request to all Facebook users, and, given that Facebook has over a billion users, it’s not unlikely that hundreds of thousands would say yes. In contrast, D4 appears not to have been honored. There’s no reason Facebook couldn’t have informed the users whose feeds were manipulated, after the study was over, about what had been done. Even if one could agree that conditions D1-3 were met, the IRB should have insisted on D4, because a debriefing after the fact could not have altered the outcome of the experiment. No matter how you slice it, there was a serious problem with the informed consent for this study on multiple levels.

Even Susan Fiske admits that she was “creeped out” by the study. To me, that’s a pretty good indication that something’s not ethically right; yet she edited it and facilitated its publication in PNAS anyway. She also doesn’t understand the Common Rule:

“A lot of the regulation of research ethics hinges on government supported research, and of course Facebook’s research is not government supported, so they’re not obligated by any laws or regulations to abide by the standards,” she said. “But I have to say that many universities and research institutions and even for-profit companies use the Common Rule as a guideline anyway. It’s voluntary. You could imagine if you were a drug company, you’d want to be able to say you’d done the research ethically because the backlash would be just huge otherwise.”

No, no, no, no! The reason drug companies follow the Common Rule is that the FDA requires them to. Data from research not done according to the Common Rule can’t be used to support an application to the FDA to approve a drug. It’s also not voluntary for faculty at universities that receive federal research grant funding if those universities have signed on to the Federalwide Assurance (FWA) for the Protection of Human Subjects (as Cornell has done and UCSF appears to have done), as I pointed out when I criticized Dr. Mehmet Oz’s made-for-TV green coffee bean extract study. Nor should it be voluntary, and I am unaware of a major university that has refused to “check the box” in the FWA promising that all of its human subjects research will be subject to the Common Rule, which makes me wonder how truly “voluntary” it is to agree to be bound by the Common Rule. Moreover, as I was finishing this post, I learned that the study actually did receive some federal funding through the Army Research Office, as described in the Cornell University press release. [NOTE ADDENDUM ADDED 6/30/2014.] IRB approval was definitely required. I note that I couldn’t find any mention of such funding in the manuscript itself.

The study itself

The basic finding of the study, namely that people alter their emotions and moods based upon the presence or absence of other people’s positive (and negative) moods, as expressed in Facebook status updates, is nothing revolutionary. It’s about as much of a “Well, duh!” finding as I can imagine. The researchers themselves referred to this effect as an “emotional contagion,” because their conclusion was that Facebook friends’ words that users see in their Facebook News Feed directly affected the users’ moods. But is the effect significant, and does the research support the conclusion? The researchers described their methods thusly in the study:

The experiment manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed. This tested whether exposure to emotions led people to change their own posting behaviors, in particular whether exposure to emotional content led people to post content that was consistent with the exposure—thereby testing whether exposure to verbal affective expressions leads to similar verbal expressions, a form of emotional contagion. People who viewed Facebook in English were qualified for selection into the experiment. Two parallel experiments were conducted for positive and negative emotion: One in which exposure to friends’ positive emotional content in their News Feed was reduced, and one in which exposure to negative emotional content in their News Feed was reduced. In these conditions, when a person loaded their News Feed, posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing. It is important to note that this content was always available by viewing a friend’s content directly by going to that friend’s “wall” or “timeline,” rather than via the News Feed. Further, the omitted content may have appeared on prior or subsequent views of the News Feed. Finally, the experiment did not affect any direct messages sent from one user to another.

And:

For each experiment, two dependent variables were examined pertaining to emotionality expressed in people’s own status updates: the percentage of all words produced by a given person that was either positive or negative during the experimental period (as in ref. 7). In total, over 3 million posts were analyzed, containing over 122 million words, 4 million of which were positive (3.6%) and 1.8 million negative (1.6%).
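
To make the design concrete, here is a minimal sketch in Python of the two mechanics described above: a per-user omission probability derived from the User ID, and the word-percentage dependent variables. The word lists, the hashing scheme, and the text handling are all hypothetical stand-ins of my own; the actual study used the LIWC dictionaries running on Facebook’s Hadoop infrastructure, not anything like this toy code.

```python
import hashlib
import random

# Hypothetical stand-ins for the LIWC positive/negative dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "angry", "terrible", "hate"}

def tokenize(text):
    """Crude tokenizer: lowercase words stripped of trailing punctuation."""
    return [w.strip(".,!?").lower() for w in text.split()]

def omission_probability(user_id):
    """A stable per-user omission chance between 10% and 90%.

    The paper says each emotional post had "between a 10% and 90% chance
    (based on their User ID)" of being omitted; hashing the ID is just one
    plausible way to make that assignment deterministic per user.
    """
    digest = int(hashlib.sha256(str(user_id).encode()).hexdigest(), 16)
    return 0.10 + 0.80 * (digest % 1000) / 999.0

def filter_feed(user_id, posts, target_words):
    """Omit posts containing any target-valence word with the user's probability."""
    p_omit = omission_probability(user_id)
    kept = []
    for post in posts:
        if set(tokenize(post)) & target_words and random.random() < p_omit:
            continue  # this emotional post is dropped from this feed load
        kept.append(post)
    return kept

def word_percentages(status_updates):
    """Dependent variables: percentage of a user's words that are positive/negative."""
    words = [w for post in status_updates for w in tokenize(post)]
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return 100.0 * pos / len(words), 100.0 * neg / len(words)

# Example: one hypothetical user in the "positivity reduced" condition.
feed = ["I love this wonderful weather!", "Meeting ran long today.", "So happy for you!"]
print(filter_feed(user_id=42, posts=feed, target_words=POSITIVE_WORDS))
print(word_percentages(["Feeling great today", "Traffic was terrible"]))
```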

The results were summed up in a single deceptive chart. Why do I call the chart deceptive? Easy: the graphs were drawn in such a way as to make the effect look much larger than it is, by starting the y-axis at 5.0 in the panel that shows a difference between around 5.3 and 5.25 and starting it at 1.5 in the panel that shows a difference between around 1.75 and maybe 1.73. See what I mean by looking at Figure 1:

[Figure 1 from the study: percentage of positive and negative words by condition, plotted on truncated y-axes.]

This is another thing the authors did that I can’t believe Dr. Fiske and PNAS let them get away with, as messing with where the y-axis of a graph starts in order to make a tiny effect look bigger is one of the most obvious tricks there is. In this case, given how tiny the effect is, even if it was a statistically significant effect, it’s highly unlikely to be what we call a clinically significant effect.
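
To see how much work those truncated axes are doing, here is a small matplotlib sketch that plots the two approximate values from the positive-words panel (roughly 5.3% vs. 5.25%, as read off the published figure) once with the y-axis starting at 5.0, as in the paper, and once on a zero-based axis. The numbers and bar labels are my approximations for illustration, not the study’s underlying data.

```python
import matplotlib.pyplot as plt

# Approximate values eyeballed from Figure 1's positive-words panel;
# the condition labels are placeholders, since only the bar heights matter here.
conditions = ["Condition A", "Condition B"]
pct_positive_words = [5.30, 5.25]

fig, (ax_truncated, ax_zero) = plt.subplots(1, 2, figsize=(8, 3.5))

# Left panel: y-axis starts at 5.0, the way the published figure was drawn,
# which makes a 0.05-percentage-point gap fill most of the plot.
ax_truncated.bar(conditions, pct_positive_words, color="steelblue")
ax_truncated.set_ylim(5.0, 5.35)
ax_truncated.set_ylabel("% positive words")
ax_truncated.set_title("Truncated y-axis")

# Right panel: the same two numbers on a zero-based axis; the bars look identical.
ax_zero.bar(conditions, pct_positive_words, color="steelblue")
ax_zero.set_ylim(0, 6)
ax_zero.set_title("Zero-based y-axis")

plt.tight_layout()
plt.show()
```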

But are these valid measures? John M. Grohol of PsychCentral was unimpressed, pointing out that the tool that the researchers used to analyze the text was not designed for short snippets of text, asking snarkily, “Why would researchers use a tool not designed for short snippets of text to, well… analyze short snippets of text?” and concluding that it was because the tool chosen, the LIWC, is one of the few tools that can process large amounts of text rapidly. He then went on to describe why it’s a poor tool to apply to a Tweet or a brief Facebook status update:

Length matters because the tool actually isn’t very good at analyzing text in the manner that Twitter and Facebook researchers have tasked it with. When you ask it to analyze positive or negative sentiment of a text, it simply counts negative and positive words within the text under study. For an article, essay or blog entry, this is fine — it’s going to give you a pretty accurate overall summary analysis of the article since most articles are more than 400 or 500 words long.

For a tweet or status update, however, this is a horrible analysis tool to use. That’s because it wasn’t designed to differentiate — and in fact, can’t differentiate — a negation word in a sentence.

Let’s look at two hypothetical examples of why this is important. Here are two sample tweets (or status updates) that are not uncommon:

“I am not happy.”
“I am not having a great day.”

An independent rater or judge would rate these two tweets as negative — they’re clearly expressing a negative emotion. That would be +2 on the negative scale, and 0 on the positive scale.

But the LIWC 2007 tool doesn’t see it that way. Instead, it would rate these two tweets as scoring +2 for positive (because of the words “great” and “happy”) and +2 for negative (because of the word “not” in both texts).

That’s a huge difference if you’re interested in unbiased and accurate data collection and analysis.

Indeed it is. So not only was this research of questionable ethics, it wasn’t even particularly good research. I tend to agree with Dr. Grohol that most likely these results represent nothing more than “statistical blips.” The authors even admit that the effects are tiny. Yet none of that stops them from concluding that their “results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks.” Never mind that they never measured a single person’s emotions or mood states.
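
Grohol’s point is easy to reproduce. Below is a toy word-counting scorer in Python (not the real LIWC, just an illustration with made-up word lists that treats “not” as a negative term, following his description) applied to his two example updates:

```python
# Hypothetical word lists for illustration only; the real LIWC dictionaries are
# far larger. "not" is counted as negative here to mirror Grohol's description
# of how the tool scored his example updates.
POSITIVE_WORDS = {"happy", "great", "good", "love"}
NEGATIVE_WORDS = {"not", "sad", "bad", "hate"}

def naive_scores(text):
    """Bag-of-words scoring: count hits in each list, with no negation handling."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return pos, neg

for update in ["I am not happy.", "I am not having a great day."]:
    print(update, "->", naive_scores(update))
# Output:
# I am not happy. -> (1, 1)
# I am not having a great day. -> (1, 1)
# Two clearly negative statements each score a positive hit, because the
# counter cannot see that "happy" and "great" are negated.
```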

“Research” to sell you stuff

Even after the firestorm that erupted this weekend, Facebook unfortunately still doesn’t seem to “get it,” as is evident from its response to the media yesterday:

“This research was conducted for a single week in 2012 and none of the data used was associated with a specific person’s Facebook account,” says a Facebook spokesperson. “We do research to improve our services and to make the content people see on Facebook as relevant and engaging as possible. A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow. We carefully consider what research we do and have a strong internal review process. There is no unnecessary collection of people’s data in connection with these research initiatives and all data is stored securely.”

This is about as tone-deaf and clueless a response as I could have expected. Clearly, it was not a researcher but a corporate drone who wrote it. Even so, he or she might have had a point if the study were strictly observational. But it wasn’t. It was an experimental study; i.e., an interventional study. An intervention was directed at one group, compared with a control group, and the effects were measured. That the investigators used a poor tool to measure those effects doesn’t change the fact that this was an experimental study, and, quite rightly, the bar for consent and ethical approval is higher for experimental studies. Facebook failed in that and still doesn’t get it. As Kashmir Hill put it, while many users may already expect and be willing to have their behavior studied, they don’t expect that Facebook will actively manipulate their environment in order to see how they react. On the other hand, Facebook has clearly been doing just that for years. Remember, its primary goal is to get you to pay attention to the ads it sells, so that it can make money.

There is one final “out” that Facebook might claim by lumping its “research” and “service improvement” together. In medicine, quality improvement initiatives are not considered “research” per se and do not require IRB approval. I’m referring to initiatives, for instance, to measure surgical site infections and look for situations or markers that predict them, the purpose being to reduce the rate of such infections by intervening to correct the issues discovered. I myself am heavily involved in just such a collaborative, examining various factors, most critically adherence to evidence-based guidelines, as indicators of quality.

Facebook might try to claim that its “research” was in reality for “service improvement,” but if so, that would be just as disturbing. Think about it. What “service” is Facebook “improving” through this research? The answer is obvious: its ability to manipulate the emotions of its users in order to better sell them stuff. Don’t believe me? Here’s what Sheryl Sandberg, Facebook’s chief operating officer, said recently:

Our goal is that every time you open News Feed, every time you look at Facebook, you see something, whether it’s from consumers or whether it’s from marketers, that really delights you, that you are genuinely happy to see.

As if that’s not enough, here it is, from the horse’s mouth, that of Adam Kramer, corresponding author:

Q: Why did you join Facebook?
A: Facebook data constitutes the largest field study in the history of the world. Being able to ask–and answer–questions about the world in general is very, very exciting to me. At Facebook, my research is also immediately useful: When I discover something, we can use this to make improvements to the product. In an academic position, I would have to have a paper accepted, wait for publication, and then hope someone with the means to usefully implement my work takes notice. At Facebook, I just message someone on the right team and my research has an impact within weeks if not days.

I don’t think it can be said much more clearly than that.

As tempting a resource as Facebook’s huge trove of data might be to social scientists interested in studying online social networks, social scientists need to remember that Facebook’s primary goal is to sell advertising, and therefore any collaboration they strike up with Facebook data scientists will be designed to help Facebook accomplish that goal. That might make it legal for Facebook to dodge human subjects protections, but it certainly doesn’t make it ethical. That’s why social scientists must take extra care to make sure any research using Facebook data is more than above board in terms of ethical approval and oversight, because Facebook has no incentive to do so and doesn’t even seem to understand why its research failed from an ethical standpoint. Jamie Guillory, Jeffrey Hancock, and Susan Fiske failed to realize this and have reaped the whirlwind.

ADDENDUM: Whoa. Now Cornell University is claiming that the study received no external funding and has tacked an addendum onto its original press release about the study.

Also, as reported here and elsewhere, Susan Fiske is now saying that the investigators had Cornell IRB approval for using a “preexisting data set.”

Also, yesterday after I had finished writing this post, Adam D. I. Kramer published a statement, which basically confirms that Facebook was trying to manipulate emotions and in which he apologized, although it sure sounds like a “notpology” to me.

Finally, we really do appear to be dealing with a culture clash between tech and people who do human subjects research, as described well here:

Academic researchers are brought up in an academic culture with certain practices and values. Early on they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academic researchers, the study falls in the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.

Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.

This, I think, accounts for the conflict between the techies, who shrug their shoulders and say, NBD, and medical researchers like me who are appalled by this experiment.

Posted in: Clinical Trials, Computers & Internet, Neuroscience/Mental Health, Science and the Media

128 thoughts on “Did Facebook and PNAS violate human research protections in an unethical experiment?”

  1. X says:

    This is much ado about nothing. As Facebook rightly points out, they already use whatever algorithm they feel like to display your news feed. All of the algorithms used in the study fall within the ordinary parameter range of “whatever they feel like”. Merely observing, recording and correlating the effects of using different algorithms cannot constitute an ethical violation.

    A good analogy for this would be if the barista at your coffee shop scowled at you for a week and then checked whether her tips went up or down. Getting scowled at is part of the ordinary range of customer-barista interactions. It does not require special ethical consideration.

    1. David Gorski says:

      Merely observing, recording and correlating the effects of using different algorithms cannot constitute an ethical violation.

      That statement leads me to speculate that maybe you work in the tech/computer industry, because it sounds like the sort of thing I’ve seen tech industry people say about the study, mainly because they don’t understand human subjects research ethics. Are you really sure that “merely observing, recording and correlating the effects of using different algorithms cannot constitute an ethical violation”? Ever? Nonsense! In any case, what Facebook did in this particular case is precisely an example of why you are wrong to conclude that, as I explained in my usual detailed fashion. There’s good reason why physicians and people who do actual human subjects research have mostly been appalled by this experiment.

      If what Facebook did was simply to alter algorithms and see if people clicked on more ads or spent more time on the page or whatever, you might have a point. However, this research purported to demonstrate “emotional contagion” by explicitly trying to manipulate emotions through changes in algorithms. That “takes it to another level,” as we say. There is a potential to do harm, even though it was small. Moreover, claiming that the Facebook ToS is adequate “informed consent” is similarly nonsense.

      Finally, this was an experiment that explicitly sought to produce generalizable knowledge, as Adam Kramer has admitted since I finished this post. He has apologized on behalf of himself and his coauthors, although it reads more than anything else like a notpology:

      https://www.facebook.com/akramer/posts/10152987150867796

      1. Jack says:

        It seems to me, to boil this down to an oversimplification, that the real issue is the appropriateness of publishing this in PNAS as a valid research study. This study clearly was meant to see if Facebook could manipulate the data to make Facebook more attractive to customers. If not for the PNAS publication, this would be a normal marketing experiment. But for the reasons you have mentioned, this seems not to have been a terribly valid social research study. I find it interesting that the ethics of writing a social research paper and doing a marketing study… which this clearly was really a marketing study… are so different.

      2. Pete says:

        What would be your opinion if the study had the exact same intervention (filtering out posts shown to you based on their emotional content) but different measurements – instead of measuring the effect on the emotional content of your posts, they’d measure how long you stay on the site and how many ads are clicked?

        I’m fairly sure that they do that – everything that’s published about FB newsfeed filters (and every other serious online company) indicates that they are doing things like that.

        I believe that it’s their freedom to make their newsfeed show whatever they want. If they want to show all posts, they can. If they want to remove [whatever they think is] spam, they can. If they want to filter posts based on complex criteria about their content, they can. If they want the criteria to depend on users’ specified gender, they can. If they want to remove all posts, useless, but they can. If they want to show autogenerated posts with random profanity, stupid, but they can. If they want to show autogenerated posts with random profanity to people with the surname ‘Gorski’, then it’s a dick move, but again – they definitely are allowed to do that, it’s their service.

        1. David Gorski says:

          Ah, I sense another person in the tech industry who doesn’t understand human subjects research and the difference between research and service improvement. Granted, there can be gray areas and debates about appropriateness of human subjects research norms to social networks, but one notes that you don’t even acknowledge these things, as I did in my post.

          1. Windriven says:

            I think perhaps Pete’s point was that much of what Facebook does internally involves human research and some of that research may or may not be more disturbing than this.

            But it is Facebook’s bat and ball and therefore Facebook’s rules – which, it should be noted, they can change more or less at will.

            Facebook’s users are its products. Of course they are going to do product research, every company does. If one doesn’t wish to be a lab rat all one need do is stop using the service.

            The funny thing to me is that it probably wouldn’t have made a lick of difference if Facebook said that it would randomly stick a hot soldering iron in selected users’ eyes. Users would click ‘ACCEPT’ and rush off to see whose relationship status had changed.

            1. Andrey Pavlov says:

              It seems to me that what many of the commenters here who think FB did nothing wrong are missing is that this was published in a peer-reviewed journal, with authors who are under higher-than-normal standards and requirements, and for use beyond merely internal business changes (improvement or otherwise).

              There is a distinct difference between having a barista scowl and see how that affects tips and then blog about it and having the same barista do different things to different people, record the results, collaborate with a researcher bound by Common Rule requirements, and then submit and publish in a peer-reviewed journal.

              1. Windriven says:

                Andrey, I for one have not missed that distinction, though it is in the larger sense academic. There may be higher expectations because of the publishing venue or the academic affiliations of some of the researchers but the basic ethics themselves don’t change. Absent informed consent, effing people around for $hits and grins is wrong whether one keeps it a secret or publishes it in Nature. It is a reflection of our sappiness as a society that we maintain these Jesuitical distinctions as if the commercial component somehow makes it OK.

                It isn’t that I think FB did nothing wrong. I think their entire business model is corrupt from the git-go. The notion that there is anything approaching informed consent in their byzantine terms of use is a legal conceit and nothing more. But I struggle to decide who I have more contempt for, Zuckerberg and his glib minions or the slack-jawed half-wits who still claim shock and dismay when Facebook humps them like a rented goat.*

                http://www.southparkstudios.com/clips/382783/i-agreed-by-accident

                *Actually, I do know. I save my greatest contempt for those who understand the deal but do it anyway ’cause Facebook is so kewl.

              2. Andrey Pavlov says:

                Absent informed consent, effing people around for $hits and grins is wrong whether one keeps it a secret or publishes it in Nature.

                No doubt and I absolutely agree. The thing is that in the balance of freedoms and regulations, I think it is likely overly onerous and unreasonable to police private entities doing internal market research the same way in which we should and do police actual researchers. Obviously there are instances where the private company can end up doing such internal research that violates laws and protections for individuals, but in cases like these where there are admittedly significant gray areas, I’m not currently convinced that anything more than calling them out for shady practices is appropriate. However, once you mix in peer reviewed publications and actual researchers, that changes the game. It takes it from poor business practice to an actionable transgression of research ethics and requirements.

                It was merely that point that I was attempting to address in regard to comments like those from X and Pete.

            2. Josh says:

              By that logic, my local university shouldn’t have to inform participants that they are in experiments because it’s already generally understood that the school does research on people and they agreed to it when they paid for their classes.

              Informed consent is a very important, very narrow, agreement on what is going to take place in an experiment. I’m alarmed to see so many people not understand why direct consent is important–it’s starting to make me think we should be teaching intro to psychology in high schools to protect people from themselves.

        2. delphi_ote says:

          Your arguments apply equally well to Facebook posting comments impersonating your family to measure your response. Or making your entire profile public to see your reaction. Are you sure there’s no ethical line to cross here?

          1. David Gorski says:

            Thanks for the examples. I was waiting to see if “X” was going to reply before I provided mine, but actually yours are better than the ones I thought of. :-)

          2. Windriven says:

            @delphi_ote

            Are you suggesting ethical equivalence of selective filtration of information and the generation of false information from whole cloth or purposely overriding one’s personal security settings? That strikes me as quite a reach.

            1. Josh says:

              Intentionally hiding positive or negative comments is a form of gaslighting, which is the very definition of deception. So yeah, it’s pretty unethical to do so without informed consent.

              There’s also the important issue of directly altering people’s moods while neither observing them during the experiment nor giving them a debriefing afterward. By virtue of probability, some people are sure to walk away from this experiment thinking that everything in their social network is negative and none of their friends or family have anything nice to say.

              That’s pretty serious stuff to neither monitor, nor provide an informative debriefing for.

              1. Windriven says:

                Josh, I am not arguing that anything that Facebook did is right. But I strongly challenge any assertion that selective deletion of true stories and active creation of false stories are ethically equivalent. While both are wrong, the latter is clearly wronger.

          3. X says:

            Sorry I was absent; I think I’m in a different time zone from you guys. Obviously, falsifying posts does not meet my test of “within the ordinary parameter space of algorithms already in use”. If you consider how Facebook’s news feed currently works, there must be an algorithm with a list of coefficients describing how high to score posts with dog pictures or baby pictures or religious rants or a hundred other things. I cannot see how tweaking the “happy words” and “sad words” parameters can carry ethical considerations when all the other things they’re always doing don’t.

            It’s also odd that the original article criticizes the researchers for making a measurement of a very small effect, even though it’s clearly statistically significant. The fact that the effect is so small shows pretty effectively how the parameters must fall within the range of ordinary usage.

            This doesn’t directly follow up to the post I’m replying to, but I also caution against using a “can this conceivably do harm?” test. Any sufficiently intelligent person can construct a just-so story showing that harm is conceivable for virtually any set of actions taken by anyone. (Consider the government’s Let’s Move campaign. It is clearly intended to induce changes of behavior in the subject population, and maybe it makes fat and disabled kids feel bad that they don’t move as well as the other kids. Do we need ethical controls for that? (Sigh, I made up this argument to be as ridiculous as possible, but Google tells me somebody has already made the same argument in earnest.)) At the very least, we have to use a test showing that harm is significantly probable.

            I think we need to reflect on how this incident compares to a number of other science-based concerns where we would ordinarily populate the other side. Are you responding on an emotional level to an experiment that sounds scary rather than soberly reflecting on probable likelihood of harm? How does that response differ from anti-vaccination or anti-fluoridation hysteria?

            1. David Gorski says:

              It’s also odd that the original article criticizes the researchers for making a measurement of a very small effect, even though it’s clearly statistically significant. The fact that the effect is so small shows pretty effectively how the parameters must fall within the range of ordinary usage.

              Point one: “Statistically significant” does not necessarily mean “significant.”

              Point two: An unethical study does not become ethical simply because no one was hurt. Or, as Kimball Atwood so nicely quoted Henry K. Beecher:

              “An experiment is ethical or not at its inception; it does not become ethical post hoc — ends do not justify means.”

              N Engl J Med. 1966 Jun 16;274(24):1354-60.

              1. X says:

                What other definition of “significant” could there be? Or you mean that the demonstrated effect of Facebook posts on mood is insignificant compared to the effect of environment noise on mood? I would use the term “unimportant” or “negligible” to describe that, since they do not have the same precise technical meaning as “significant”.

                I believe Beecher is thinking of a situation in which substantial probability of harm exists but does not come to pass merely by chance. I only want to consider cases where substantial probability of harm does not exist in the first place.

              2. Andrey Pavlov says:

                X:

                “statistical significance” is a technical term that means “it is unlikely that the difference between groups is by random chance alone.” It has no implication, absolutely zero, as to whether the results are important, unimportant, useful, meaningful, negligible, or anything else. It only means that the difference between the two groups is considered unlikely enough (by pre-set criteria, in most sciences an alpha of <0.05 but in physics that threshold is set much, much, much higher) that said difference is not due to random chance alone.

              3. CC says:

                In studies of this size, ‘small likelihood of harm’ over the entire population can mask the possibility of harm to a small subset of the population. I believe this is one of the purposes of informed consent – to allow individuals the option to decline participation in a study they feel might be harmful to them.

                For example, in this study, you may have a subset of English-speaking Facebook users who are struggling with a life-threatening illness, such as major depression. For such a person, manipulating their emotional state with artificial negativity could cause serious problems. People in this subset of the population might like the option to decline to participate in a study like this.

            2. JD says:

              I think we need to reflect on how this incident compares to a number of other science-based concerns where we would ordinarily populate the other side. Are you responding on an emotional level to an experiment that sounds scary rather than soberly reflecting on probable likelihood of harm? How does that response differ from anti-vaccination or anti-fluoridation hysteria?

              I consider this much different. The arguments posited here are not the same as the “hysteria” that is likely to result from this in the lay media. If the response was to rail against big brother and technology, or the medical research establishment, all because of this one study with which we have issue, then that would be more analogous to the anti-vaccine movement.

              I view this post and the comments in defense of ethical research practices as a reasonable response to what this study represents, an experimental trial that did not achieve adequate informed consent for participation, that was ultimately shepherded through peer review due to the way in which PNAS publishes. I think it brings up an important aspect of these types of studies, and I would hope that the consent process would be improved as a result.

      3. Josh says:

        We’re also forgetting that these studies manipulate mood, which could mean issues of participants being made to feel depressed. Something like this is normally frowned upon by the IRB even during in-person laboratory studies, where the participant is being constantly observed and given a debriefing. To do such a thing with no oversight at all is highly unethical.

        This whole predicament makes it ridiculously clear that we need to expand the direct authority of IRBs over private institutions. The idea that only government-funded human research should be regulated is just full-retard, especially with the depth of power and reach that corporations have these days.

    2. Jaime A Pretell says:

      The Terms of Service say they can look at information to analyze it, not that they can manipulate information that doesn’t violate the terms of service in order to alter the user. Each user chooses whom they sign up as a friend and what to follow. They expect those feeds to be based on their choices, not Facebook’s. Furthermore, there were no safeguards in place. Who is to say there wasn’t a person who might have benefited from a positive post from one of his friends, and instead was bombarded with negative news that pushed him over the brink in a bad situation? And it wouldn’t be an accident, because Facebook intentionally tried to give him negative feelings.
      Ever heard of the eggshell rule? This rule holds one liable for all consequences resulting from his or her tortious (usually negligent) activities leading to an injury to another person, even if the victim suffers an unusually high level of damage (e.g. due to a pre-existing vulnerability or medical condition). The term implies that if a person had a skull as delicate as that of the shell of an egg, and a tortfeasor who was unaware of the condition injured that person’s head, causing the skull unexpectedly to break, the defendant would be held liable for all damages resulting from the wrongful contact, even if the tortfeasor did not intend to cause such a severe injury.
      If a facebook user already had a fragile emotional state, and Facebook purposely manipulated his feed to make him even more depressed, and a subpoena can show he was one of those who was manipulated (or the family, if the person commits, say, suicide), they would have a valid lawsuit at hand.

      1. Windriven says:

        From Facebook’s Data Use Policy -> How we use the information we receive:

        *for internal operations, including troubleshooting, data analysis, testing, research and service improvement.

        “They expect those feeds to be based on their choices, not Facebook’s.”

        One’s expectations are not necessarily reflected in a company’s business practices. I agree that it sucks. But suckage and illegality are different.

        1. Mark Crislip says:

          Now I can’t find the source, but yesterday a site noted that research was NOT in the terms of service at the time the study was done; it was added later.

          1. Windriven says:

            “research was NOT in the terms of service”

            I wonder if testing was? In a land where the meaning of the word ‘is’ is parsable, then: testing, research, some say toe-may-toe, some say toe-mah-toe.

            I feel as if I need a shower just for thinking about it. ;-)

        2. Eli Bishop says:

          I don’t know what point you’re trying to make by quoting that phrase from the FB data use policy. It is unresponsive to Jaime Pretell’s point, which was also based on the wording of that policy: FB says it can “use the information [they] receive” for testing, research, etc., but that’s data collection– it means they can look at what you’re doing and draw conclusions about it. That is not an intervention. If you tell me I have the right to use any photos that I take of you, that doesn’t mean I have the right to reach out and knock your hat off of your head because I think you’ll look better without it in my photos.

          “suckage and illegality are different”– this also isn’t particularly responsive. I have no idea whether Facebook could be subject to a legal judgment in JP’s scenario, but JP clearly said that if so, it wouldn’t be because of a general sense that their actions “sucked”; it would be specifically because they set out to cause emotional harm to a subset of their users for reasons that had nothing to do with the stated purpose of the site nor any commonly accepted business practice.

    3. JD says:

      I just don’t understand how there can be widespread condemnation of studies performed as part of the Resuscitation Outcomes Consortium (ROC), but it is okay to do an experimental trial on unsuspecting Facebook users, without any real consent. The criticism of ROC’s exemption from informed consent was so great that study sites must offer bracelets saying “No Study” to prevent someone from being enrolled in a future out-of-hospital cardiac arrest trial. (REFERENCE). If you can believe it, this goober even likened ROC to Nazi medical research. (REFERENCE). And keep in mind that the interventions performed in ROC are generally options already readily available on the ambulance, so in some ways, this would not be much different from your barista analogy. Although, I do recognize that a trial involving paramedics pushing lidocaine would obviously need more regulation than changing a Facebook feed.

      1. JD says:

        As I was writing about the bracelets, maybe that could be a way of handling informed consent for these types of Facebook studies. Instead of a bracelet, it could be some kind of avatar, statement, or profile option. As has been discussed, the way this study was performed, this was not an option. As ROC taught us, there has to be some means of opting out.

  2. k_inca says:

    I wasn’t so surprised about the results, but I too was concerned about the methods. I have almost 10 years experience as a clinical research coordinator, and the methods that I could glean from the paper (there was no methods section) seemed very much against how we carefully run research studies. Also keep in mind that one of the researchers was in-house, which introduces a huge bias to the results.

    1. David Gorski says:

      I was thinking about this, and something occurred to me that I wish had occurred to me while I was writing the post. I alluded to something similar near the end of the post but didn’t quite go all the way there. This reminds me very much of the culture clash that often occurs when academic physicians and researchers collaborate with industry physicians and researchers. True, the issues are somewhat different. When drug company scientists interact with academic scientists, usually the issues are secrecy and when the academic scientist can publish, with industry scientists being used to a very locked down, goal-driven environment compared to academic scientists, who prize openness and being able to go where the evidence leads.

      In this case, though, we see the tech industry people, whose attitude is basically “What’s the big deal?” because they tweak algorithms all the time and see how people respond to them as a matter of product improvement or repurposing. They don’t understand where product improvement or development crosses the line into human subjects research. This represents a very different culture and attitude than academia, particularly medicine, where the ethics of human subjects research are critical and the rules are, quite properly, very tight. This leads to comments like those by X and Pete, who don’t understand that what Facebook did very much appears to cross the line into human subjects research rather than being just quality improvement/product development:

      As Twitter user @ZLeeily stated, the “key issue is they moved beyond tests for product design & into psych research without adapting ethical procedures to suit context.”

      Correct. I also found a more detailed discussion of this culture clash here, which can explain why people like “X” and Pete don’t understand the concerns of researchers with respect to human subjects research:

      Academic researchers are brought up in an academic culture with certain practices and values. Early on they learn about the ugliness of unchecked human experimentation. They are socialized into caring deeply for the well-being of their research participants. They learn that a “scientific experiment” must involve an IRB review and informed consent. So when the Facebook study was published by academic researchers in an academic journal (the PNAS) and named an “experiment”, for academic researchers, the study falls in the “scientific experiment” bucket, and is therefore to be evaluated by the ethical standards they learned in academia.

      Not so for everyday Internet users and Internet company employees without an academic research background. To them, the bucket of situations the Facebook study falls into is “online social networks”, specifically “targeted advertising” and/or “interface A/B testing”. These practices come with their own expectations and norms in their respective communities of practice and the public at large, which are different from those of the “scientific experiment” frame in academic communities. Presumably, because they are so young, they also come with much less clearly defined and institutionalized norms. Tweaking the algorithm of what your news feed shows is an accepted standard operating procedure in targeted advertising and A/B testing.
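      To make concrete what “interface A/B testing” means in practice, here is a minimal sketch of the kind of deterministic bucketing such tests typically rely on. The hashing scheme, function name, and experiment name are illustrative assumptions on my part, not Facebook’s actual code:

          import hashlib

          def ab_bucket(user_id: str, experiment: str, treatment_fraction: float = 0.5) -> str:
              """Deterministically assign a user to 'treatment' or 'control'.

              Hashing the user id together with the experiment name yields a stable,
              pseudo-random value in [0, 1]; users below the cutoff see the modified
              interface, everyone else sees the default one.
              """
              digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
              fraction = int(digest[:8], 16) / 0xFFFFFFFF
              return "treatment" if fraction < treatment_fraction else "control"

          # The same user always lands in the same arm of a given test:
          print(ab_bucket("user_12345", "new_feed_ranking"))
          print(ab_bucket("user_12345", "new_feed_ranking"))  # same answer every time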

      1. Roberto says:

        Technology professional of 30 years’ experience here, with undergrad & grad social sciences degrees including original human subject research, so I know both worlds pretty well.

        I’m with you 100% on the ethics issues. Subjecting nearly 700,000 people to a psychological intervention without fully informed consent is right off the charts in terms of unethical conduct. All the more so when it is as intimately personal as attempting to alter their emotions.

        David, you may not be aware of the extent of the underlying problematic attitudes in the high-tech industry, but the situation is difficult to describe in language that does not come across as hyperbole. In brief there is an incredible amount of sheer arrogance and hubris, and a no-holds-barred outlook toward the manipulation of other people, as if humans themselves are nothing more than algorithms embodied in biology rather than silicon. These attitudes often overlap with beliefs in “strong AI” that should properly be considered as pseudoscience. And they are usually justified via some kind of Ayn Randian ideology.

        It is almost a certainty that a sample of this size included people who were emotionally fragile and thus susceptible to serious adverse reactions: people who should have been excluded from any study of this kind. Facebook’s callousness toward that risk is both appalling and revealing. It is also almost a certainty that the study produced additional results that were useful to Facebook but were not published, including rank-ordering of demographic categories of Facebook users by responsiveness to the intervention (in other words, ascertaining what kinds of people can most effectively be emotionally manipulated).

        Given the egregiousness of the conduct, some kind of harsh penalty is appropriate. I would suggest that it include the requirement to make the entire data set (anonymized of course) publicly available for download in a widely accessible format, so independent researchers can make use of it to draw their own conclusions, and so journalists can make use of it to inform the public.

        I would also suggest that those who find Facebook’s behavior unethical and unacceptable, should stop using Facebook entirely, and post a protest message stating their reasons, where it can be seen by others. Whatever the “benefits” of social networking might be, there are certain principles that must necessarily come first. Giving up a convenient publicity venue in order to protest a large-scale flagrant violation of informed consent, is a reasonable tradeoff and hardly a sacrifice.

        1. SB says:

          I agree with your point about emotionally fragile people. While it seems unlikely that they harmed any of the participants it is impossible to know for certain. Because they did not get informed consent they really experimented at random on an unknown population, they did not monitor their subjects beyond recording their responses and they made no follow up to check their wellbeing. Seems quite concerning to me.

  3. fxh says:

    Not to mention the software can’t pick up irony or snark.

    Like:
    I love Facebook. What’s good for business is good for Facebook. Lucky Facebook users.
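    A toy word-count scorer, roughly in the spirit of the LIWC-style analysis the study relied on (the word lists below are invented purely for illustration), would happily rate that post as glowingly positive:

        POSITIVE = {"love", "good", "lucky", "great", "happy"}
        NEGATIVE = {"hate", "bad", "sad", "awful", "terrible"}

        def naive_sentiment(post: str) -> int:
            # Count positive words minus negative words; no notion of irony at all.
            words = post.lower().replace(".", " ").split()
            return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

        sarcastic = "I love Facebook. What's good for business is good for Facebook. Lucky Facebook users."
        print(naive_sentiment(sarcastic))  # prints a solidly positive score; the snark is invisible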

    Anyway, complaining about Facebook’s research ethics is like complaining that Rebekah Brooks looked over your shoulder at an email you were writing.

  4. Mark Heil says:

    I wonder if Facebook realizes that it was playing with fire on this one. It’s likely a mathematical certainty that among the 689,003 users involved in this some of them suffered some extreme emotional distress during the period of study (hopefully nothing as bad as assault, murder or suicide). Regardless of whether this manipulation contributed to it, savvy attorneys could build cases against FB. I can see the daytime TV commercials asking for potential class action clients now.

  5. Keating Willcox says:

    Excellent article. Very instructive about research design and ethics…FB exists to make money.

  6. Excellent article and IRB discussion from someone clearly knowledgeable. As for those saying “what’s the big deal?”, here’s a thought experiment. Let’s say the test subject is a kid, or someone emotionally unstable, or clinically depressed. Let’s say they have legal access to a gun. Do you want to go ahead and make them sad(der) just to see if you could? What are the unintended consequences? And, no, you don’t have to show this actually happened. That’s why there’s an IRB (and ethics) to discuss it before it happens, not after.

  7. Sven says:

    I very much feel inclined to send a letter to PNAS and say that I did not explicitly consent to this study and was neither informed nor debriefed, and ask them to retract the paper based on violations of the Declaration of Helsinki on human subjects. However, maybe it would be better to start a letter/petition and have other people sign it. Not sure how best to start this. Any recommendations?
    Thanks in advance,
    sven

  8. BillyJoe says:

    I know it’s irrelevant, but that short video had me in absolute stitches – even after a third run through!

  9. Windriven says:

    This only has gripped attention because the Facebook brain trust chose to publish in a scientific journal. More interesting to me is the glimpse this gives into the actions behind the terms of use verbiage and the motivations that drive them.

    The final upshot is likely to be some minimal mea culpa and a vague assurance that Facebook will be more diligent in the future. And then this will drop into the abyss of lessons unlearned.

    It amazes (and amuses) me how cheaply some will sell their dignity and their privacy.

    1. SB says:

      Facebook did this under a pretext of it being “science” so yeah, we should get mad when they do their science unethically and badly.

    2. Serge says:

      Hear hear!

  10. Graymalk says:

    I’ve really been noticing how this is sorting out the tech versus the health sciences people. Definitely a culture clash. I actually come from both backgrounds and can see it from both angles, but I believe the health sciences people are correct in this one. Tech people receive little to no training in how to conduct experimental studies in general, much less with human subjects (I certainly only received my training in the health sciences areas, not tech, but I don’t purport to know what every single school does). You would think the social scientists involved would have known better, but they are operating completely within a tech realm.

    I’ve stopped using Facebook because of this. And I worry that there could be a precedent set that tells other companies that they can do whatever they want and even publish it. (I also worry that they’ll think they can still do whatever they want as long as they don’t publish it)

    1. David Gorski says:

      You would think the social scientists involved would have known better, but they are operating completely within a tech realm

      You would, but obviously they got sucked into the mentality of the tech world. It reminds me of biomedical scientists in academia who work with pharma and how they can sometimes imbibe the culture of pharma.

    2. Kathy says:

      “they’ll think they can still do whatever they want as long as they don’t publish it”. Maybe they have and we don’t know about it.

      1. Windriven says:

        If you work your way through Facebook’s heavily nested Terms of Use and Data Use Policies you’ll find that they can do almost anything not specifically proscribed by law, all while smirking that “your privacy is very important to us.”

  11. Grendel says:

    There are problems with the way that the study was written up, which you highlight. But your post shows that there is a big difference between medical research and social science research. In the former, as found in the Declaration of Helsinki, the guiding principle is that informed consent should always be obtained before any medical experimentation.

    In social science that is also desirable, but there are very many instances where the exceptions you point to are used. You write that “Now, I realize that in the social sciences, depending on the intervention being tested, ‘informed consent’ standards might be less rigorous.” In my experience as a social scientist, this is a major understatement.

    The issue is that when studying people’s behaviour (the whole point of social science) there are huge problems with experimenter effects. At the very least people become self-conscious when they know they are being observed and so alter their behaviour. So it’s not unusual for some social science research to involve not telling the subjects that they are part of an experiment so their ‘natural’ behaviour can be observed. This is acceptable as, unlike doctors, social scientists can’t give people drugs, surgery or other things that might be very harmful. Some of the classic experiments in psychology have used ‘undisclosed observation’ in which the subjects were not aware that they were part of an experiment. In other cases, subjects were aware that they were part of an experiment, but were deliberately deceived as to what was being measured (to take a hypothetical example, a group of people might be invited to take an intelligence test, and be unaware that what was being observed was their behaviour toward the person leading the test depending on their ethnicity or gender). Obviously there are ethical issues and a clear need for safeguards to prevent abuse, which is why such experiments would need to be considered by an ethical review board before they could take place.

    Moreover, as mentioned in the comment above, such experiments are commonplace in the tech industry and elsewhere. Fast food chains will, for example, change menu items in individual stores and compare the effect on sales with ‘control’ stores. Much more importantly, governments are constantly making minor changes to regulations and researching what happens. A relative spends a lot of time making minor tweaks to a city road network and writing up the effect upon traffic flow.

    If it became a universal requirement that all subjects in social research had to have informed consent, then a large amount of research would be impossible. The only experimental data would tell us how people behave when they know that their behaviour is being observed, which wouldn’t tell us much about the world outside the experiment. Most importantly, the research that underpins evidence-based public policy would be much thinner on the ground.

    There is an interesting post here which sums up the issues better than I have: http://thomasleeper.com/2014/06/facebook-ethics/

    1. LIz says:

      While informed consent isn’t always possible, there’s no good reason to not debrief. In fact, I think it’s particularly necessary to debrief in those cases where you don’t get informed consent.

  12. Mike says:

    I’m curious: are there different rules for pre-existing data sets where there was no intervention, like the analyses done by the guys at OK Cupid?

    1. Grendel says:

      It depends upon a lot of things, but with pre-existing data (basically natural experiments) the main issues are with the use of personal data and privacy regulations. I assume that OK Cupid users agreed to similar terms as with Facebook (you the company can do whatever you like with the data). Even with such a waiver, an academic would usually have to follow local data protection and privacy regulations (which differ by jurisdiction). If there’s no personal data involved and no copyright issues then a researcher can pretty much do what they like (e.g. anyone could run a correlation using national-level data on average household income and rates of illiteracy).

  13. Suz says:

    As a social scientist who knows a lot of computer scientists, I think you are overly optimistic about academic researchers. Academics in the social sciences, biological and medical sciences are informed about and socialized into caring about research ethics. This is commonly *not* part of the curriculum for computer scientists. I have heard computer scientists heatedly arguing that their research on humans should not be considered human subjects research (ie, subject to ethical review). Of course, they don’t want to have to deal with the process… who does?

    Note that I *do* believe social science research should be subject to ethical review. Thinking of the post you linked to, there are generally easier “minimal review” paths for projects where the chance for harm is quite low. While I do think IRBs sometimes get more invasive than they should, and I agree this is often because the process was designed with medical research in mind, I think the solution is to make sure there is a process for social sciences projects, and reviewers for them, that is informed by knowledge of the social sciences. I’ve been at one university that does this well, and another that doesn’t.

  14. KNW says:

    Reason #493 Why I’m Not On Facebook

    [Reason #1: Facebook is Evil.]

  15. Gwynne Ash says:

    As a social scientist, I was GREATLY disturbed reading about the study, which violated all of the principles of the Belmont Report.

    Further, no one has addressed that children from 13-17 can have Facebook pages. That means they also conducted research, without consent, on minors, because I find it highly unlikely that they filtered minors from their study parameters (and there is no evidence that they did so in their paper).

  16. Grendel says:

    Actually, mulling the personal data issues, David Gorski writes:

    “In contrast, D4 appears not to have been honored. There’s no reason Facebook couldn’t have informed the actual users who were monitored after the study was over what had been done.”

    One of the largest potential problems with conducting social research is that the researchers may well collect large quantities of personal data. This has obvious issues for privacy and data protection etc.

    (At least in the jurisdiction where I work) the regulator insists that the data be anonymized to the greatest extent possible. What that means in practice is that the researcher will either try to collect no personal data at all (to take my earlier example, someone researching traffic flow would just monitor the passage of vehicles and not record any more information on individual cars etc), or, if that is not possible, will, after the data has been collected, remove to the greatest extent possible any data which could be used to identify an individual. The latter does not just involve removing names, but also any combination that could be used to identify an individual. These procedures are strictly enforced.

    According to the Atlantic article, “none of the data used was associated with a specific person’s Facebook account”. That is actually a very good way of preserving the anonymity of the data. If it was collected by an algorithm and the data seen by the researchers was not at all associated with a user’s account, then it is very unlikely that any of the data being analyzed could be associated with a specific individual.

    However, such a strong means of protecting personal data means that sending a message to every participant (either before or after the experiment) would have involved linking individuals’ names to the participants in the study. I have no idea whether this was an issue for the review board. But I think that sending such messages would have made the privacy and data protection issues more complicated. At the very least that would have involved collecting some details on the profiles of all the Facebook users studied (so that a message could be sent) which would have increased the quantity of personal data collected.

    Of course this wouldn’t have been an insurmountable obstacle (for example, a database containing the names of participants could have been deleted after the messages were sent).

    My point is just that there is a trade-off between informing people and anonymity. The most anonymous research (with least collection of personal data) is also that with no direct contact with the subjects of the research. It’s a complicated issue.

    1. LIz says:

      The study says that participants were “randomly selected” using their Facebook userids. They not only already had personal information associated with the participants in the study, but that personal information would have allowed them to contact the participants and debrief them. All this could have been done very easily while still not associating any specific data with any specific persons. Not very complicated at all.

  17. Suz says:

    An addendum to my earlier comment: I was talking about *academic* computer scientists. Professors at universities, not researchers at private corporations.

    Also: As Grendel says, informed consent can be tricky for valid measurement in social science research. While I’m not a psych lab researcher, we all know that some of the fun of designing psych experiments is–when it would matter–finding ways to ethically hide the true point of the research until after someone has participated.

    Note that bit–until after participation. At the *very* least, I would expect a research project designed to manipulate emotions to tell subjects afterward that this is what had happened. I would also expect it to inform them about mental health resources available/make resources available to them.

  18. Suz says:

    I see Grendel and I both posted about the same time. To add to Grendel’s comment, there are well-established means of separating information about who participates in research from information about the participants. At an extreme, I’ve known someone working with sensitive data who had to keep the identifying information on a computer not connected to the internet in a locked room (closet) only she had access to.

    I’ve also conducted research based on data gathered through an institution. Keeping anonymity in such situations is simple, though it adds a layer of processing that probably requires some labor from at least one person not on the research team. Does this add complexity? Yes. Would it add complexity that an organization like Facebook couldn’t deal with? I have no specific knowledge, but I just don’t believe that would be the case.

    1. Grendel says:

      I agree. It would have made the research design more complicated. But that would not be beyond the wit of Facebook to solve.

    2. David Gorski says:

      To add to Grendel’s comment, there are well-established means of separating information about who participates in research from information about the participants.

      Indeed, this is not a new issue. This is well-known in tissue banking, where tissue is stored, along with basic relevant clinical information about the patient from whom it was taken, but no personal identifying information is included. That is kept elsewhere and linked using an identifier in such a way that only the relevant staff running the tissue bank can link patient to tissue sample. This allows investigators who use the tissue bank to access the tissue, with relevant clinical information, without having access to patient identifiers. If more clinical information is needed, an IRB protocol can be submitted for approval to link relevant patient samples to patient charts and the information can be obtained.
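      In code terms, the pattern is simple. Here is a minimal sketch, with hypothetical table and field names chosen purely for illustration; real tissue-bank and honest-broker systems are of course far more elaborate:

          import uuid

          identity_table = {}   # held only by the tissue bank / honest broker
          research_table = {}   # what investigators actually see

          def bank_sample(patient_name: str, clinical_data: dict) -> str:
              """Store clinical data under an opaque study ID; keep the name separately."""
              study_id = uuid.uuid4().hex
              identity_table[study_id] = patient_name       # identifiers live here
              research_table[study_id] = clinical_data      # no identifiers included
              return study_id

          sid = bank_sample("Jane Doe", {"tumor_grade": 2, "er_status": "positive"})
          print(research_table[sid])   # investigators get clinical data, never the name
          # Re-linking to the chart requires identity_table -- and a separate IRB approval.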

      I was once involved in a committee for revamping a tissue bank. We argued about and discussed these issues endlessly, including whether a general consent to bank tissue could be added to the surgical consent as a check box or whether there had to be a separate consent. It was…educational.

  19. Isaac Ray says:

    So I thought a lot about this, because I’m a software engineer but my wife is a clinical psychologist, so it’s particularly relevant to both of us. And what I came up with is this: what has people concerned is that Facebook not only used positive cues, but also conducted an experiment using negative ones. Now, the academics will probably say, oh no, that doesn’t matter, but hear me out.

    When we watch television commercials, and we watch lots of them, advertisers will use ANY means necessary to make us feel positively about their products. They are clearly attempting to manipulate our emotions, and to argue that this is not true is absurd. We even see ads become viral BECAUSE they do such a good job of manipulating our emotions. And, yes, to be sure, my wife (and many others) despise and abhor the fact that advertisers make such an effort to psychologically manipulate us into buying their products. But the mere fact that advertisers manipulate our emotions is not a topic of national discussion every day because… why? Because ads are always designed to make us feel positive. Our culture has implicitly accepted that elements of our environment that are manipulated to make us feel happy, or sexy, or important are manipulations that we are ok with. In some way, every single advertisement ever produced is a scientific experiment. It is “intervening” in our thought process about a specific product to get us to feel a certain way, not just about that product but about ourselves in the context of that product. The reason we don’t make every single advertiser get IRB approval before producing an ad is because we understand and accept that ads are designed, by their most basic purpose, to be positive and not do harm, beyond the harm of us buying something we don’t really need.

    Facebook’s “study”, however, explicitly broke that social contract between advertisers and consumers. The negative cues they manipulated risk causing damage to a person’s psyche, or worse. By explicitly trying to make people feel “bad”, they crossed a line, and it’s not the line between ethical or unethical experimentation, nor is it the line between good research and bad research. It is the line between acceptable manipulation of our environment by an institution, and unacceptable manipulation. And while that line is, I think, pretty poorly defined in American culture, I think in this instance, we can make out where the two sides are. And THAT is why everyone is freaking out.

    1. Grendel says:

      Isaac, I agree in general, but some advertising is negative. Especially that produced by politicians.

      1. Calli Arcale says:

        And lawyers. Just watch the ads for lawyers seeking clients who have had mesothelioma, or pelvic mesh implants, or whatever bit of ambulance chasing the particular firm happens to be after at the time. They are very negative, designed to make you feel afraid.

        1. Isaac Ray says:

          I think there is a substantive difference between “You might have cancer, call me and I’ll help you” and “Why does the world suck so damn much?!”…. The first is a likely ad for a lawyer, the second is a status update I’ve seen. The whole point of an ad, ANY ad, is to get you to feel positively about SOMETHING. Otherwise, why would anyone pay for it? What would be the point?

    2. Suz says:

      I agree that the negative slant makes it more troubling–mental health issues! Potentially larger effects on vulnerable individuals! Research not being clear about which direction of content leads to which direction of effect!

      I think that consent is still an issue here, however. There is a societal understanding that advertisements are designed by companies to manipulate how we think and feel about things. This includes, for instance, ads on facebook. I will non-technically call it implied consent. This is why I generally mute my computer and go to another screen when there are ads I have to wait through before seeing video: I expect them to be manipulative, and since I don’t want to consent to dealing with that manipulation I don’t watch them. There has *not* been an understanding that presentation of the user-generated content on facebook is designed by the company to manipulate how I feel about things.

      This is why, as another example, product placement in theatrically released films (or TV shows) is seen as ethically problematic: the consumer or viewer is being targeted under false pretenses – say, when a character is driving a BMW not because it makes sense for that character, but because BMW gave the movie money to do so. It’s more accepted now in the US, but that is at the cost of people having to readjust their expectations of what a film is. It’s also why TV broadcasters must notify viewers of product placement in broadcast TV shows (enforced to a weak standard), and embedded advertising is highly regulated in EU countries (no, I am not going into the complexities… it’s sort of prohibited but sort of not).

      1. Grendel says:

        Actually, I think that the main form of creepiness is that Facebook was manipulating what people regard as being private communication – sharing pictures of cute pets etc with a defined group of friends and family. The problem for the digital age is that it isn’t private; all Facebook users have given the company access to their data.

        As mentioned upthread this study has just highlighted what tech companies are doing on a daily basis. To take one example, Google scans my Gmail account in order to target advertising in my direction. I’d be very surprised if that company hadn’t tweaked the algorithms in an experimental fashion with my personal email. We don’t like the invasion of privacy, but enough of us do like the service on offer that is based upon that invasion.

        1. Calli Arcale says:

          I think part of what makes the “but it’s standard practice” argument groundless is that users do not seriously expect to see this much manipulation in the content they’re sending to one another. Yes, they say “yes” to the terms of service — but it’s widely known that practically nobody actually reads it, and that would never in a million years be tolerated on any other study. Plenty of fraudsters have been excoriated for not getting proper informed consent, by hiding it among other, unrelated data or specifically choosing test subjects who don’t speak English very well or actively discouraging subjects from reading the consent forms. Considering how openly accepted it now is that nobody actually reads the ToS, how can they expect the ToS to constitute informed consent? Even ignoring that it doesn’t mention using your data for a scientific study that will be published?

        2. Suz says:

          Have you read this essay on that broader issue? I first came across it because of the sexual assault connection that it starts with, but it’s about consent models in online business. http://modelviewculture.com/pieces/the-fantasy-and-abuse-of-the-manipulable-user

          I stopped using Gmail for most things after their use of email text became public. Not that I claim to know a better alternative for when I don’t want to use my professional email address (and don’t want to use the one reserved for when I expect it to get “shared”).

          I do think the lack of regulation for privately-funded human subjects research is a bigger–and yes, super-complex!–problem. Dealing with that would require actually passing a law, however, instead of acting mainly through federal regulations.

          1. Calli Arcale says:

            Well, one option is to buy a domain name and set up your own mailserver. ;-) Even a very old PC will run it just fine. But this does require a lot more technical input.

  20. CrankyEpi says:

    I agree with KNW – I’m glad I never got a Facebook account!

    Thank you Dr. Gorski for this post; this is a very important topic and your comments are dead on. You only have to think about historical cases where totalitarian governments controlled people by manipulating the information they received to have the hair on the back of your neck stand up.

    I am seeing more and more editorials and articles where the author is saying we should get rid of IRB review for certain types of studies because they don’t see any risk in them. Yes, IRB review can be inconvenient and frustrating at times but I would still rather have a neutral third party weigh in on a proposed study.

    Given Susan Fiske’s e-mail, it sounds like the researchers deceived the IRB? They didn’t have a pre-existing dataset (unless they did the study first without telling the IRB, and then said “we have a pre-existing dataset with manipulated posts.”) Also, come on Susan, you don’t ask the researchers if they got IRB approval!! Think about it Susan – who do you contact? Who could possibly confirm whether they got IRB approval?

    Science comment: come on lunkheads, any study with >689,000 subjects will find statistically significant differences with every test!!!
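    A back-of-the-envelope illustration of why scale does this (the numbers are assumed for illustration: two equal groups of roughly 345,000 and unit-variance outcomes): even a difference of a hundredth of a standard deviation sails past conventional significance thresholds.

        import math
        from scipy import stats

        n_per_group = 345_000        # roughly half of 689,003 subjects
        d = 0.01                     # one hundredth of a standard deviation: practically nothing

        se = math.sqrt(2 / n_per_group)     # standard error of the mean difference (sd = 1)
        z = d / se
        p = 2 * stats.norm.sf(z)            # two-sided p-value
        print(f"z = {z:.2f}, p = {p:.1e}")  # roughly z = 4.2, p = 3e-05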

    1. brewandferment says:

      I’m about halfway through the “Divergent” trilogy and somehow this whole experiment seems like just an early precursor to the scenarios in the books! So far I definitely recommend the series, even if some snob in the publishing world thinks it’s “embarrassing” that adults are reading YA novels (I saw a blurb about the essay but decided I wasn’t interested in taking a chance on raising my blood pressure by reading the essay.)

    2. Seat12e says:

      RE: …”I am seeing more and more editorials and articles where the author is saying we should get rid of IRB review for certain types of studies because they don’t see any risk in them.”

      Yes, CrankyEpi, these arguments go back to the 1960s. A recent proposal (the ANPRM) would significantly reduce the protections for “minimal risk” studies. Should the proposal be accepted, there will be fewer reviews of and restrictions on regulated research at this level and fewer IRB administrators trained to review them.

  21. Adam Jacobs says:

    I think there’s a crucial distinction here that you’re missing. Yes, some social science research is done without participants’ consent. But wouldn’t that pretty much always be purely observational research? The reason the FB research is different is that it wasn’t observational; it was experimental.

    While I mainly inhabit the world of clinical research and only dabble in the social sciences as a hobby, I can’t think of an example when it would be considered acceptable to do an interventional experiment without participants’ consent. Which is exactly what FB have done here.

    1. Grendel says:

      There is an article here on the use of deception in psychology experiments (i.e. when the actual nature of the experiment is not what the subject has been told by the scientist). The key tl;dr quotes being:

      “A participant who enrolls in a research study is often misled about its real purpose, the responses researchers are actually monitoring, and the true identity of fellow “subjects.” In some cases, participants are not even informed that they are involved in a research study. [...]

      Over the first two-thirds of the 20th century, deception became a staple of psychological research. According to a recent history of deception in social psychology, before 1950 only about 10 percent of articles in social psychology journals involved deceptive methods. By the 1970s, the use of deception had reached over 50 percent, and in some journals the figure reached two-thirds of studies. This means that subjects in social psychology experiments — at least those that survived the peer-review process and made it to publication — had a better than 50-50 chance of having the truth withheld from them, being told things that were not true, or being manipulated in covert ways.”

      1. Grendel says:

        Forgot to post the link. (In Europe, need to get some sleep.) Here it is:

        http://www.theatlantic.com/health/archive/2013/04/a-study-in-deception-psychologys-sickness/274739/

      2. CrankyEpi says:

        There are lots of studies with temporary deception, but in many the subject knows they are in a research study and is just distracted away from the real purpose. Then they are supposed to be debriefed after the research is over. The subjects in this interventional research didn’t even know they were in a study.

        1. Calli Arcale says:

          And, in all likelihood, still don’t. Could one of the test subjects even be you or me?

  22. Adam Jacobs says:

    Oh. That was supposed to be in reply to post #12 from Grendel. Not sure why the threading isn’t working for me.

    1. Windriven says:

      @Adam Jacobs

      Sometimes you need to hit Reply, then scroll back up from the Comment window and hit Reply again. I have no idea why this works – or even if it really does – but it seems to work for me.

  23. God says:

    OK Cupid arose from an earlier website which was based on data experiments from the beginning. They also have clear options to not allow personal images to be used for those purposes, and make it clear that such analysis is to be done.

    And yes, it matters if you are just analyzing data vs. manipulating users secretly.

  24. Chris says:

    Hmmmm… Just this past week one of my kids in college was contacted by researchers in the psychology department to participate in a study on participation on social networks, specifically Facebook. They explained what they would look for, that they would not send comments/feedback/etc, but just lurk and that it was for only a certain time period, and at the end the subjects would receive a ten dollar gift card. To consent to the study the student only needed to accept a friend request from a certain name.

    I don’t know if he is going to consent. He really doesn’t use Facebook much, he and his friends have moved on to Snapchat.

  25. Mark R says:

    As a medical educator and occasional researcher, the rules on this sort of thing are pretty clear to me. Even in a healthcare setting, where we want to compare two approved treatment protocols to see which is better, standard research regulations apply. Once you make the determination that your results will be used for publication (and not internal quality improvement / quality control), consent becomes a requirement if the intervention poses more than minimal risk. Manipulating someone’s psychological state is more than minimal risk. There are some incredibly emotionally fragile individuals living out part of their lives on social media, and there is no shortage of stories of suicides linked to social media activities. While the folks at Facebook, not being in the biomedical profession, may not have known, their collaborators in academia should be held to the standards of human subjects protection.

  26. LIz says:

    Really nicely-written and argued article, David.

  27. Preston Garrison says:

    How could you do any social scientific research that involves more than simple observation if you strictly followed all the stipulations that you use for a drug study? If people know they are participating in a study, that’s almost certainly going to influence their behavior. If you perturb the “social system” in any way, that creates the possibility that you have caused harm, even if slight, to someone. The specific study seems to be an example of what a friend of mine used to call “profound grasp of the obvious.” This all seems like a tempest in a teapot to me. They manipulated people’s feed for a whole week! Horrors!

    (The obvious selection by algorithm of your feed is one reason I’ve deliberately kept my number of friends low, so I’ll see a larger proportion of what they post.)

  28. Preston Garrison says:

    If I had gotten into this discussion earlier, I would have been tempted to post something emotional and extreme, just to watch the local social system oscillate in response. :)

  29. Preston Garrison says:

    It seems like you could do this kind of study by observation alone. Lots of negative and positive comments get posted. Use your language filter to find some of each and follow the time course of comments from people who received the original comments, including replies to the original posts and subsequent posts that aren’t replies. Of course, more “profound grasp of the obvious.” I guess that’s why I became a biochemist and tormented yeast to get actual new information.

  30. SB says:

    Another issue that arose when I was perusing the author affiliations was the declaration of no conflict of interest. It seems to me the author Adam Kramer from Facebook has an enormous conflict of interest in this piece of research? Or maybe I’m reading too much into it?

    1. Frank B. says:

      It would be hard to imagine a more blatant example of a conflict of interest!

    2. Seat12e says:

      Yes, you may be reading too much into it. They may be using a definition that relies upon equity interest or leadership position. Being an employee and receiving a salary is generally not included. Possibly because it is so self-evident.

  31. Mark O'Leary says:

    The experiment was testing whether being exposed to a consistently positive (or negative) timeline influenced the subject to be more positive (or negative) in the posts they wrote in turn. But what grounds are there to assume that these “nudges” only evinced changes in behaviour around FB posting, and had no influence on their wider life? Shouldn’t the experiment have tracked (and the IRBs considered) issues such as increased incidence of self-harm, domestic violence or even suicide among the ‘negative milieu’ group? After all, sometimes the last straw can appear insignificant… For this large a sample size I’m not sure that this is just a reductio ad absurdum rhetorical argument.

    1. Angora Rabbit says:

      I agree and more so. The “discussion” actually states the authors expected the opposite outcome based on the published literature – that posting good news would cause a bounce-back negative emotion in the recipient. The fact that they found the opposite is irrelevant – the prior literature suggested the potential for adverse outcome.

  32. SaraF says:

    The use of an “existing dataset” exemption to obtain IRB approval was my first thought as I read this post. This would imply that the data was gathered by the FB team and not the university faculty, suggesting the faculty authors were responsible for analysis, writing, etc. but not the actual data collection. This is important as it allows the faculty to go through IRB minimal review instead of the full IRB process, much faster but also much more limited in what the board requires. If the university faculty was actually involved in collecting data this would be a violation, but if they only worked with the existing data that FB collected they are covered by the minimal process.

    1. Angora Rabbit says:

      I originally considered that as well, so I just pulled up the paper. Fortunately, PNAS is one of those good journals that require author contributions to be provided:

      “Author contributions: A.D.I.K., J.E.G., and J.T.H. designed research; A.D.I.K. performed
      research; A.D.I.K. analyzed data; and A.D.I.K., J.E.G., and J.T.H. wrote the paper.”

      Bad news – all three were involved in the design. Then Facebook Lad ran the study and generously sent the data back to the academics to write the paper. This really sounds like an attempt to do an end-run around university IRBs.

      Writing as someone who sits on the HRPP oversight committee for a Tier One research institution, this study sounds like a violation of human research ethics. It is not a neutral analysis of a data set. Subjects were manipulated, and the mere fact that the manipulator is a co-author creates a conflict of interest; he is not a neutral party. More so because the manipulator is paid by the subjects’ venue to manipulate the subjects and analyze the response without ever notifying the subjects. This means any IRB review within Facebook would be compromised because there is the appearance (and perhaps reality) of a financial self-interest. There was no opt-out concession. Even more troubling, as someone pointed out, individuals under 18 years were likely included in what is described as a random selection. That is a big no-no.

      The paper’s last sentence is particularly telling of the arrogance and lack of ethical consideration on the part of the authors:
      “And after all, an effect size of d = 0.001 at Facebook’s scale is not negligible: In early 2013, this would have corresponded to hundreds of thousands of emotion expressions in status updates per day.”

      1. E-rook says:

        I too was particularly struck by the lack of an opt-out and the blatantly described coercion. Part of informed consent is the ability to withdraw from the study at any time, and the opt-out mechanism has to be explained on the consent form (which, according to the paper, was the user agreement). Services provided by an entity cannot be affected by a potential participant’s unwillingness to be in a research study, and as described, the use of Facebook required giving “consent” to participating in the study. I can’t imagine an IRB in a million years would let that go. Are IRB applications, approval letters, and meeting minutes public? We could (the journal should have) asked to see the approval letter. I do human research as part of my job and I proudly share my approval letters with anyone who asks (it’s a lot of work to put the application together!)

  33. Frank B. says:

    It would not have been difficult to carry out this study retroactively, without manipulating anyone’s news feeds. FB could have identified a million users who, by chance, happened to have unusually negative news feeds for some week and compared them to another million users who, by chance, happened to have unusually positive news feeds that week, and then analyzed the subsequent posts made by these two million users. On top of all of the other issues previously raised, this seems to be a clear violation of D3: “The research could not practicably be carried out without the waiver or alteration”.
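    A toy sketch of that retrospective comparison, with every name and number invented purely for illustration: classify user-weeks by the valence of the feed users happened to see, then compare the valence of their own subsequent posts.

        # (user_id, observed feed valence, valence of the user's later posts)
        # Valences here would come from the same word-count scoring used elsewhere.
        observations = [
            (1, -0.8, -0.3),
            (2, -0.7, -0.2),
            (3,  0.1,  0.0),
            (4,  0.9,  0.4),
            (5,  0.8,  0.3),
        ]

        negative_group = [later for _, feed, later in observations if feed <= -0.5]
        positive_group = [later for _, feed, later in observations if feed >= 0.5]

        print("mean later-post valence after naturally negative feeds:",
              sum(negative_group) / len(negative_group))
        print("mean later-post valence after naturally positive feeds:",
              sum(positive_group) / len(positive_group))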

  34. Sandy says:

    How did they ensure no children were experimented on in this study? A child cannot give informed consent, but they can easily lie about their age on social media.

  35. Regardless of what goes into facebook, you only get horse-crap out.

    More ridiculous was the COO’s horse-crap-filled, pre-scripted standard corporate-speak “apology.” My guess is that the CEO had that coveted 1130 tee time and couldn’t be arsed to spew forth a phony apology himself for something they both knew about well in advance and never considered the ethics of in the first place.

  36. ObjectiveCynic says:

    When it comes to this particular example, I agree with this article: this was a poor experiment. However, when it comes to ‘unethical’ experiments, some bring conclusions that are essential to understanding human behavior and human nature. Some so-called ‘unethical’ experiments, I believe, were labeled as such simply because the experiment revealed a human truth that we found unsettling and for which we simply failed to take proper responsibility – as usual.

    For example, the Asch Conformity Experiment (1953) revealed that we are not as individualized, unique, and rebelliously unafraid to ‘be ourselves’ as we claim. When it comes down to it, we will go so far as to deliberately answer a question incorrectly, just because the majority got the question wrong. It is as though humans were born to conform. It also puts new light on those annoyingly empty individuals who will always tell you what they think you want to hear instead of what they feel to be the truth.

    Example #2, The Bystander Apathy Experiment, revealed that if we are alone then we are highly likely to help someone in an emergency situation. However, when there are others present, our feeling of responsibility shrinks dramatically, with the smallest of excuses being the only thing needed to do nothing.
    http://en.wikipedia.org/wiki/2009_Richmond_High_School_gang_rape
    http://www.bbc.co.uk/news/uk-england-london-12330222
    http://news.bbc.co.uk/2/hi/uk_news/england/london/3700446.stm

    Example #3, the Stanford Prison Experiment (1971), revealed that it was simply a fear of consequences that kept us from dominating and torturing our fellow human beings. In this particular study, it took the participants less than 24 hours to totally plunge the atmosphere of the experiment into the abysmal dark side of humanity. Despite sitting in their own filth, and being mentally pushed to the limit, none asked to be excused from the study, despite the fact that none of the participants were guilty of any crime or were truly imprisoned (beyond their own minds, that is). Many of the individuals role-playing the GUARD role were very disappointed that the experiment was stopped, and instead wanted it to continue… despite it ALL being nothing more than a role-playing experiment.

    The most infamous example, arguably, of all – the Milgram Experiment of 1961 – horrifyingly revealed our near-absolute obedience to authority. The results of this experiment basically revealed that the average individual will do anything, obey any order (even knowingly kill another human being), simply because the order came from an authoritative figure. Experiments such as this, and others, have been extremely valuable in helping psychology understand mob mentalities, human obsession with approval, the lack of compassion and moral compass towards our fellow man, and how soldiers can so easily follow orders without ever considering whether the orders are right or wholly wrong.

    So they may be considered unethical, but I believe that the more of an ugly reality an experiment reveals about humanity, the more likely it is to be labeled as unethical. Honestly, I do not personally see anything wrong with the design of these experiments. Everyone knew they were in an experiment, everyone knew that they could end it at any time, and even in the Milgram Experiment the authority figure was simply a scientist. Not a judge, cop, federal agent, or any other type of high-power authority. It was just a dude in a white coat. Without these types of experiments, keen insights into this dark but real human behavior would always be nothing more than negative assumptions made (obviously) by a misanthropic ahole. :-)

    Now what IS highly unethical, and a complete shame on our nation and government, is the greatest example that comes right off the top of my head: the Tuskegee Syphilis Experiment. Now THAT is truly unethical.

    1. Windriven says:

      What does any of this have to do with informed consent?

      1. Seat12e says:

        It doesn’t. It addresses a bigger question. What is it that made this particular study unethical at its inception? Opinion seems to be that the unethical element was the manipulation itself. ObjectiveCynic makes an interesting point. I like to think that, were I on the IRB reviewing the plans, I would have been disturbed by each of these proposed studies. But using hindsight makes it all really easy.

        1. Windriven says:

          “Opinion seems to be that the unethical element was the manipulation itself.”

          I guess we haven’t been reading the same comments. The unethical element was never more or less than failure to obtain informed consent from the research subjects.

  37. Angora Rabbit says:

    Here’s the policy of my institution:

    “The Human Research Protection Program (HRPP) provides oversight of all research activities involving human subjects at the institution…All faculty, students, and staff who are involved in research involving human participants are required to comply with federal, state and university policies for the protection of human research participants…All research involving human subjects must be reviewed by an IRB.”

    There is no exception for the funding source or identity of the collaborator, that is, whether the partner is public or private.

    Additionally, “To ensure that the safety of research participants is adequately protected, the institution: (1) requires reporting and review of significant financial interests that might present a real or potential conflict of interest with the individual’s research prior to final IRB approval of the research; and (2) limits the participation of individuals who hold significant financial interests related to a research project involving human subjects in that research.”

    Since Facebook could have a financial interest in the outcome, because it could use the information to “improve its service,” which is a euphemism for increased ad revenue, the study indeed merits external scrutiny. Informed consent was necessary, as is external review.

    1. E-rook says:

      I cannot speak regarding Cornell. I work in the UC system, and the UCSF researcher has zero excuse for being involved in a human research study that did not have IRB approval, informed consent, or a mechanism to withdraw. We spend hours going through annual recertifications reminding and reminding and reminding us of the institutional requirements (which should be, in my opinion, professional norms) for human research, including the definitions of research involving human subjects. Now that the crap has hit the fan, I do not expect to hear a peep from that individual. The kerfuffle seems to have been swift and harsh; I think it ends up being a useful lesson to everyone, though. I mean, in comparison to the research that conspiracy theorists think we are all complicit in, look at this massive community response from the profession about small manipulations to Facebook feeds. It makes me feel better about our willingness to self-police.

  38. i need to start reading this blog again says:

    …. what most people i know who reposted this on facebook didnt like about the study…. that it reduced posts by their friends

    …priorities dont seem to be in the right place for my facebook friends

    1. Maree says:

      exactly!
      I was wondering what was happening to my real-life friends because we did use FB a lot to coordinate getting together – and I’d miss some of those posts – or be unable to find them when needed, and began to wonder if I had offended someone and was being cut from the group.

      Also I began to wonder if I was losing my mind. Interesting news articles or posts from friends would disappear, then reappear later – or would randomly be on a friend’s timeline, but not on my own when I wanted to go back to add a comment.

      I think that messing with someone’s perception of their own perceptions really violates the spirit of at least the first 2 rules of informed consent:
      • D: 1. The research involves no more than minimal risk to the subjects;
      • D: 2. The waiver or alteration will not adversely affect the rights and welfare of the subjects;

  39. BackedByData says:

    Cornell added a statement on their website saying that “no review by the Cornell Human Research Protection Program was required.”
    http://mediarelations.cornell.edu/2014/06/30/media-statement-on-cornell-universitys-role-in-facebook-emotional-contagion-research/

    Also, Facebook added the “Research” part to their user agreement 4 months after the study was performed.
    http://www.forbes.com/sites/kashmirhill/2014/06/30/facebook-only-got-permission-to-do-research-on-users-after-emotion-manipulation-study/

  40. Catherine says:

    I can’t be the only one peeved enough to drop FB over this.
    There were two breaches at Target last year. I wasn’t snagged in them, because I was bothered by their creepy analysis of Target customers’ purchasing behavior and stopped shopping there a long time back. I don’t expect that my boycott of one makes any difference to that corporation, but it makes a difference to me. Similarly, I’ve broken up with FB. Doubtful they’ll feel it, but already, I’m much happier.
    Removing the FB app from my phone means I can go longer between charging it. It means I don’t know what my high-school classmates think of the latest SCOTUS decision. I don’t know what my husband’s niece had for lunch. I don’t know which of my friends was first to ‘Like’ the latest picture that George Takei posted. And I’ve been spared the latest from Natural Health News that my cousin’s ex-wife thinks will finally cure her chronic Lyme. As a bonus, I’ve also missed a ton of Dr. Oz themed targeted advertising.
    FB behaved like a manipulative and abusive boyfriend. Just because I tolerated it once or twice or ten times (I’d been a member since 2007!) doesn’t mean I must hang around till death do us part. I wrote them, telling them we’re breaking up and telling them why. But they keep writing back, asking where I am. A spam folder is handling all that for me now. So, I’m sorry. I can’t go and “Like” your page, because I’m no longer smoking that FB crack.

  41. geekoid says:

    So, you should shut down your Facebook pages. Or is Facebook too convenient for you to take action against unethical behavior?

  42. Alex Anlyan says:

    The reporting is objective and impartial; however, it lacks concision to the point where no conclusions convey real impact. Any individual, whether in advertising, marketing, or psychology, who is so committed to the accomplishment of a specific agenda that they lack the capacity to question their own beliefs, values, impact on others (including subjects), and actions has become both unethical and a risk to public welfare. While it is easy to minimize the impact of Facebook’s “experimentation,” the truly chilling part of this scenario is its indication of what can be accomplished by way of the manipulation of public and political opinion. This holds the potential to make 1984 look like 1860.

  43. Maree says:

    Having read this article, I am now sure I was among the subjects:
    The description of the experimental protocol explains odd stuff that kept happening to my newsfeed in the relevant time period, where I would see several items I wanted to read, but, returning to my newsfeed after reading one item and making a few posts, I would be unable to find another story I had wanted to read. And then later it might be there, but other material would be missing.
    [" posts that contained emotional content of the relevant emotional valence, each emotional post had between a 10% and 90% chance (based on their User ID) of being omitted from their News Feed for that specific viewing. It is important to note that this content was always available by viewing a friend’s content directly by going to that friend’s “wall” or “timeline,” rather than via the News Feed. Further, the omitted content may have appeared on prior or subsequent views of the News Feed "]

    AND: It was extremely annoying, and made me wonder if I was “losing it” when items seemed to randomly disappear and reappear as I opened and closed articles and switched back and forth among my home page, the newsfeed, and the home pages of various friends. Adverse effect on this subject? YES.
    I would suggest that is a bit of evidence that the research violated the spirit of avoiding adverse effects on subjects pursuant to item D.2. of §46.116 in the elements of informed consent, which says:
    [• D: 2. The waiver or alteration will not adversely affect the rights and welfare of the subjects;]
