Articles

Help a reader out: Abstracts that misrepresent the content of the paper

Earlier this week, a reader of ours wrote to Steve and me with a request:

First off, I just want to say thank you for everything you gentlemen do. I find that your sites are extremely helpful when trying to figure out which information is BS and which is real.

In short, I was wondering if either of you two would be able to refer me to a scientific or pseudo-scientific article where the abstract completely misrepresents the article or the conclusion doesn’t fit the analysis/data. The reason I’m writing is that I’m currently in my third year at [REDACTED], working on my seminar paper so I can graduate. I decided to look at whether there is a reasonable fair use argument for reproducing an entire scientific article, and in what instances prior precedent would allow it. Inherent in the argument is that a scientific paper can’t be properly excerpted without losing vital information (or that an abstract does not adequately describe the entire paper), so complete reproduction of the article is necessary to properly convey the point.

Sincerely,

A Reader

So…at the risk of being too blatant, I’ll just say that our readers are very informed and scientifically knowledgeable (excepting the odd troll, of course). Can you help another reader out and provide references that fit this reader’s request? I can think of one, but I don’t think it’s as blatant as what he has in mind. Please list your references below. Heck, we might even be able to get a post for SBM out of this if there are some interesting papers that fit the description above.

Posted in: Basic Science, Medical Academia


30 thoughts on “Help a reader out: Abstracts that misrepresent the content of the paper”

  1. Mark Crislip says:

    Eur Spine J. 2008 April; 17(Suppl 1): 176–183.
    Published online 2008 February 29. doi: 10.1007/s00586-008-0634-9
    PMCID: PMC2271108
    Risk of Vertebrobasilar Stroke and Chiropractic Care
    Results of a Population-Based Case-Control and Case-Crossover Study

    Best example in the CAM world; I’ll keep thinking about ID papers, where I have the most knowledge.

  2. Grant Jacobs says:

    I’ve written a blog post on this topic centred around one paper and others’ discussion on this issue:

    http://sciblogs.co.nz/code-for-life/2011/08/08/when-the-abstract-is-not-accurate-or-enough/

  3. lilady says:

    Here is an article that appeared in the Journal of American Physicians and Surgeons that claims abortions are implicated in higher risks for breast cancer:

    http://www.jpands.org/vol8no2/malec.pdf

    Here is the analysis of what the Journal’s article is claiming and what the actual contents/conclusion of various cited studies are:

    http://www.rhrealitycheck.org/blog/2010/01/13/the-truth-about-breast-cancer-and-abortion

    Other pseudoscience websites can be found by Googling “abortions and breast cancer.”

  4. David Gorski says:

    Isn’t picking on the AAPS and JPANDS rather like picking on Age of Autism? The level of scientific literacy is about the same. :-)

  5. dandover says:

    Here’s one:

    Estrogen-like endocrine disrupting chemicals affecting puberty in humans–a review.

    http://www.ncbi.nlm.nih.gov/pubmed/19478717

    One of the most startling claims in the abstract is that BPA has been shown to cause precocious puberty (PP) in girls. Based on the title of the paper, one would assume this claim is about *human* girls. Upon examination of the article, the statement turns out to be based solely on a citation of another study that found BPA to cause PP in *mice*.

    I only examined the BPA-causes-PP-in-humans claim in this paper. There are probably other “embellishments” in there as well.

  6. Scott Gavura says:

    Related, from Booth et al:
    “Presentation of Nonfinal Results of Randomized Controlled Trials at Major Oncology Meetings”
    http://jco.ascopubs.org/content/27/24/3938.full
    “In summary, we have shown that the majority of RCT abstracts presented at major oncology conferences include important data discrepancies compared with their subsequent published articles, suggesting that reporting of nonfinal analyses is common.”

  7. richardigarber says:

    David:

    A 2007 paper in Complementary Health Practice Review, “Healing with Bach Flower Essences: Testing a Complementary Therapy,” is often cited as showing that Rescue Remedy reduces anxiety. The abstract doesn’t mention that they found no significant main effect and resorted to data dredging to find a subgroup with an effect, which they acknowledge only in the last sentence.

    I blogged about it here: http://joyfulpublicspeaking.blogspot.com/2010/01/bach-rescue-remedy-and-anxiety.html

    Richard

  8. Troyota says:

    Posted under the wrong article–sorry for the bumbling!

    This is a great case to make, but to be convincing, I believe you need to collect a larger set of examples. This doesn’t necessarily have to be a “random sample” (although, done properly, that would make a terrific master’s thesis!), but it should be more extensive, including, as suggested, a range of mainstream publications.

    There have been some studies of the peer-review process applicable to this problem. One great one (Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008 Oct;101(10):507-14. PubMed PMID: 18840867; PubMed Central PMCID: PMC2586872) found that when reviewers were given manuscripts with embedded errors, only 20% to 39% detected this exact problem–a discrepancy between abstract and results. In other words, this error went undetected by reviewers at least 60% of the time!

    So at least there is significant danger of this happening in journal articles, even with peer review…Best of luck with this effort!

  9. Amy (T) says:

    The Skeptical OB did a post somewhat on this topic, specifically noting a paper, although in this case it’s more the data they used, and the data they didn’t use (that is publicly available), which make the abstract’s conclusions wrong. The post is here: http://skepticalob.blogspot.com/2011/04/being-published-doesnt-make-it-true.html

    The paper (in BMJ) is also linked to in the post: http://www.bmj.com/content/330/7505/1416.full?ehom

  10. Katz et al: “Intravenous Micronutrient Therapy (Myers’ Cocktail) for Fibromyalgia: A Randomized Controlled Trial”

    http://apha.confex.com/apha/134am/techprogram/paper_134303.htm

    That’s an example of an abstract misrepresenting the findings but including statistics that reveal the misrepresentation. Go figure. I mentioned it on SBM a few years ago: http://www.sciencebasedmedicine.org/index.php/science-reason-ethics-and-modern-medicine-part-2-the-tortured-logic-of-david-katz/

  11. I should have added that the abstract I just linked was presented at a meeting; there was no corresponding published article at the time. Since then, such an article has appeared, and its abstract is substantially different from the one presented at the meeting: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2894814/?tool=pubmed

    Apparently the authors found it necessary, for whatever reason (reviewers’ comments? someone read SBM?–I’m kidding), to own up to the real finding of the study when it came to formal publication. They still did their best to hedge, of course.

  12. Speaking of data dredging (I know that this is not the question being asked here, but still), an infamous example in “CAM” pseudoresearch is the 1998 Sicher/Targ report of “distant healing” for HIV+ patients. The report is here:

    http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1305403/?tool=pubmed

    Neither the abstract nor the body of the report reveals that the authors dredged mucho data, and even added some new ‘data’ after the fact, to find the purported “positive therapeutic effect of DH.” To learn about that you must go here:

    http://www.wired.com/wired/archive/10.12/prayer.html?pg=1&topic=&topic_set=

  13. lizditz says:

    Darn.

    The two chiropractic papers I was thinking of both mention their dodgy methodology (applied kinesiology) in the abstracts.

    DeLong (2011) does mention the addition of specific language disability in the abstract, so it’s also not a contender.

    But

    J Inorg Biochem. 2011 Nov;105(11):1489-99. Epub 2011 Aug 23.
    Do aluminum vaccine adjuvants contribute to the rising prevalence of autism?
    Tomljenovic L, Shaw CA.
    Source
    Neural Dynamics Research Group, Department of Ophthalmology and Visual Sciences, University of British Columbia, 828 W. 10th Ave, Vancouver, BC, Canada V5Z 1L8. lucijat77@gmail.com

    http://www.ncbi.nlm.nih.gov/pubmed/22099159

    Discussed at length at http://scienceblogs.com/insolence/2011/12/and_global_warming_is_caused_by_the_decr.php

    The abstract conceals two of the fatal flaws in the paper:

    1. Argument based on Figure 1 (see the sketch at the end of this comment):

    “In their Figure 1, the authors are plotting ASD incidence in each year (1991-2008) against total aluminium content for the pediatric schedule *in that year*. Not (as you might expect) the aluminium exposure for the ASD cases themselves, according to the pediatric schedules of 6 to 21 years previously, but no, that same year.
    In effect they are looking for a correlation between the number of people who developed ASD decades earlier, and the number of vaccinations given to other children, in the current year. This is an interesting model of causality!”

    2. Argument based on Figure 4: if I’m recalling correctly, the authors manipulated both the vaccine schedule and the ASD prevalence figures for various countries to make the case that higher vaccine exposure correlates with higher autism prevalence.
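
    To make the Figure 1 objection concrete, here is a minimal simulation sketch (entirely hypothetical numbers, not from the paper or the linked post): any two series that merely trend upward over the same calendar years will correlate strongly, whether or not there is any causal link between them.

        # Minimal sketch (hypothetical data): two unrelated upward-trending
        # series, paired by calendar year, correlate strongly anyway.
        import random

        random.seed(0)
        n = 18  # calendar years 1991-2008
        # Hypothetical series that both simply increase over time:
        aluminium = [2.0 + 0.15 * i + random.gauss(0, 0.1) for i in range(n)]
        asd_rate = [1.0 + 0.50 * i + random.gauss(0, 0.5) for i in range(n)]

        def pearson_r(xs, ys):
            """Plain Pearson correlation coefficient."""
            k = len(xs)
            mx, my = sum(xs) / k, sum(ys) / k
            cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            vx = sum((x - mx) ** 2 for x in xs)
            vy = sum((y - my) ** 2 for y in ys)
            return cov / (vx * vy) ** 0.5

        print(f"r = {pearson_r(aluminium, asd_rate):.2f}")  # near 1.0, no causation required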

  14. papertrail says:

    http://wmbriggs.com/blog/?p=1691
    Dr. Briggs rips a new…conclusion from a study on a yoga program.

  15. papertrail says:

    Looks like Dr. Briggs could be a good resource for this project. He wrote: “Lee’s paper is not unusual. Hundreds of these appear monthly. They are not exactly wrong, but they are useless.”

    I’ll add: Useless to advancing knowledge but not so useless to those trying to promote their own agenda.

  16. papertrail says:

    Wait, how is this any different from science as usual, finding flaws in a study’s design and methods that call its conclusions into question? (I’m not a scientist or an expert in any way on this subject, so I apologize in advance if this question has an obvious answer.)

  17. This example is not that blatant, but it is at least confusing.

    The abstract of Kirsch et al. (2008) says that antidepressants do not really perform better than placebo, but the results show an effect size of 0.32, roughly replicating earlier results from Turner et al. (2008), which found an effect size of 0.31 and concluded that antidepressants outperformed placebo.

    I’m cheating a bit because Dr. Harriet Hall has already discussed this study on SBM.

    Kirsch I, Deacon BJ, Huedo-Medina TB, Scoboria A, Moore TJ, Johnson BT. Initial severity and antidepressant benefits: a meta-analysis of data submitted to the Food and Drug Administration. PLoS Med. 2008 Feb;5(2).
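
    For readers unfamiliar with effect sizes like the 0.32 and 0.31 above, here is a minimal sketch (hypothetical numbers, not the actual Kirsch or Turner data) of how a standardized mean difference is computed from group summary statistics:

        # Minimal sketch (hypothetical numbers): a standardized mean difference,
        # the kind of "effect size" reported by Kirsch et al. and Turner et al.

        def standardized_mean_difference(m1, m2, sd1, sd2, n1, n2):
            """(Mean difference) divided by the pooled standard deviation."""
            pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
            return (m1 - m2) / pooled_var**0.5

        # Hypothetical improvements on a depression scale (drug vs. placebo):
        d = standardized_mean_difference(m1=10.2, m2=7.8, sd1=7.5, sd2=7.5, n1=150, n2=150)
        print(f"d = {d:.2f}")  # 0.32: drug group improved about 1/3 SD more than placebo

    A d of about 0.3 is a real but modest difference, which is why the same number can be framed as “not really better than placebo” in one abstract and as “outperformed placebo” in another.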

  18. papertrail says:

    What I mean is: aren’t there innumerable examples of studies (maybe even most?) where the conclusions presented in the abstracts are questionable because of weaknesses in the study design or methods that are found (and often debated) only after scrutinizing the actual study?

  19. papertrail says:

    From the OP: “Inherent in the argument is that a scientific paper can’t be properly excerpted without losing vital information (or that an abstract does not adequately describe the entire paper), so complete reproduction of the article is necessary to properly convey the point.”

    I don’t know if there are many (any?) examples where you would need a complete reproduction of a study in order to make the point that its abstract doesn’t adequately describe it. Sorry. I do support open and free access to complete studies though simply because I see a lot of value in allowing open scrutiny of studies.

  20. I will post a couple of examples. Here is one. I understand that “we” in the health professions have a dogmatic belief that there is no abortion-breast cancer (ABC) connection, so I will be called names, and no thoughtful dialog will come out of this. Still, this one study is a good example of an abstract failing to accurately report what would have been very easy to report.

    Arch Intern Med. 2007 Apr 23;167(8):814-20.
    Induced and spontaneous abortion and incidence of breast cancer among young women: a prospective cohort study.
    Michels KB, Xue F, Colditz GA, Willett WC.

    In the abstract:
    “We examined the association between induced and spontaneous abortion and the incidence of breast cancer in a prospective cohort of young women, the Nurses’ Health Study II.”

    Sure, they did. But that is not what is reported in the paper…

    In the paper:
    “We censored cases of carcinoma in situ (n = 399) from the primary analyses, but results including in situ cases were comparable to those for invasive cases only.”

    For the cohort examined, there were about 1,400 cases of “breast cancer.” So, this study eliminated about 20% of the cases.

    With an incidence study, dropping 20% of cases really hampers the ability to detect a true effect, if one is there.

    How hard would it be to add to the abstract that they analyzed cases at Stage I or beyond? Because then people would start wondering and asking questions.

    Abortion is a sacred cow for us healthcare professionals. So, when abortion shows up as a risk factor for breast cancer, we have to find a way to discount the study. Researchers have used a wide range of tricks to produce studies where the confidence interval crosses below 1.0. At that point, we can say “no effect.”

    I have learned a lot about how to analyze studies, first by looking at the funny business pulled off in psychiatric drug studies, then by looking over these ABC studies.

    For Michels, the abstract says “cancer,” but the study itself says they dropped “in situ.”

  21. One of the Lexapro marketing strategies was to always declare “well-tolerated.”
    If true, that would be very influential to prescribers. As a scientist, not a PR guy, if someone declares something to be “well-tolerated,” then I expect that some patient-centered outcome of acceptability or unpleasantness was used. Imagine if your next colonoscopy were with an apparatus designed and found to be significantly “well-tolerated” above and beyond the old colonoscopy thingie. If I sit still through the cold stethoscope without complaining, can you assume it was well-tolerated? No. Scientifically, a measure of tolerability must be applied.

    We think about “tolerability” with the needle gauge needed for different injections. And so on.

    So, here is a Lexapro efficacy study that apparently is supposed to also include an assessment of “tolerability” so they can continue with the marketing slogan:

    Emslie GJ, Ventura D, Korotzer A, Tourkodimitris S.
    Escitalopram in the treatment of adolescent depression: a randomized
    placebo-controlled multisite trial.
    J Am Acad Child Adolesc Psychiatry. 2009 Jul;48(7):721-9.

    Abstract sez:
    “Conclusions: In this study, escitalopram was effective and well tolerated in the treatment of depressed adolescents.”

    The study itself says more:
    “Adverse events were either spontaneously reported by the patient or the patient’s guardian or noted by the investigator.”

    Wow. There is science in action, folks. If the kid spoke up, I noted the side effect; but if the kid was quiet and just accepted the $20 for showing up, I did not bother to ask about any side effects.

    As we know, side effects did get reported: dizziness, headache, pregnancy, alien abduction, late homework assignment, etc.

    “In addition to spontaneous reports, suicidality was assessed using patient self-report and clinician-rated instruments.”

    So suicidality was systematically surveyed; asked of ALL participants. Here, they cannot get away with the “pt-reported” strategy.

    The eventual side-effect table looks like they usually do. Placebo is not too different from active drug.

    They conclude in the article:

    “Based on the study we report here, escitalopram seems to be a well-tolerated effective treatment for adolescents with MDD, which is consistent with the post hoc analysis of the adolescent subset from the earlier escitalopram trial.”

    I guess the “seems to be” gives wiggle room. But this is an insufficient methodology for scientifically assessing “well-tolerated.” Looks good on the brochures and ads, though.

  22. As for examples from sCAM, that is like nailing jello to the wall – with such vague terms and outcomes, nothing can really be internally inconsistent.

    This topic was a cool idea for a post.

  23. evilrobotxoxo says:

    @MedsVsTherapy: The problem with your example is that “well-tolerated” isn’t clearly defined, so the accuracy of a statement about Lexapro’s tolerability has more to do with the particular definition of tolerability than with what the data actually show. Also, the phrase “well-tolerated” is defined in relative terms in the clinical setting: a “well-tolerated” form of chemotherapy usually has more side effects than a “poorly tolerated” blood pressure medication. You also imply that the study you cite was performed so that Lundbeck could claim that Lexapro was “well-tolerated” – I didn’t read the study you cited, but is there any evidence to support that, or is that just your conjecture?

    Finally, I will say that “in my experience,” Lexapro does tend to be one of the better-tolerated SSRIs (as opposed to Paxil, for example), and SSRIs in general are well-tolerated, meaning that a relatively small percentage of patients stop treatment because of side effects. I don’t mean to imply that SSRIs don’t have side effects – they all obviously do – but tolerability is not about the mere presence of side effects; it is about their functional importance.

  24. lilady says:

    Here is the citation for the Lexapro study:

    http://www.jaacap.com/article/S0890-8567%2809%2960109-X/abstract

    It would help if the actual abstract were provided and if a comment began at the beginning…not the conclusion:

    Abstract
    Objective

    This article presents the results from a prospective, randomized, double-blind, placebo-controlled trial of escitalopram in adolescent patients with major depressive disorder.
    Method

    Male and female adolescents (aged 12–17 years) with DSM-IV-defined major depressive disorder were randomly assigned to 8 weeks of double-blind treatment with escitalopram 10 to 20 mg/day (n = 155) or placebo (n = 157). The primary efficacy parameter was change from baseline to week 8 in Children’s Depression Rating Scale-Revised (CDRS-R) score using the last observation carried forward approach.
    Results

    A total of 83% patients (259/312) completed 8 weeks of double-blind treatment. Mean CDRS-R score at baseline was 57.6 for escitalopram and 56.0 for placebo. Significant improvement was seen in the escitalopram group relative to the placebo group at endpoint in CDRS-R score (−22.1 versus −18.8, p = .022; last observation carried forward). Adverse events occurring in at least 10% of escitalopram patients were headache, menstrual cramps, insomnia, and nausea; only influenza-like symptoms occurred in at least 5% of escitalopram patients and at least twice the incidence of placebo (7.1% versus 3.2%). Discontinuation rates due to adverse events were 2.6% for escitalopram and 0.6% for placebo. Serious adverse events were reported by 2.6% and 1.3% of escitalopram and placebo patients, respectively, and incidence of suicidality was similar for both groups.
    Conclusions

    In this study, escitalopram was effective and well tolerated in the treatment of depressed adolescents. J. Am. Acad. Child Adolesc. Psychiatry, 2009;48(7):721–729.

    I don’t see any problem with this randomized placebo-controlled double-blinded study.
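
    As an aside, for readers unfamiliar with the “last observation carried forward” approach named in the Method section above, here is a minimal sketch (made-up numbers, purely illustrative) of what that imputation does:

        # Minimal sketch (made-up data): "last observation carried forward" (LOCF).
        # A dropout's missing later visits are filled with their last recorded score.

        def locf(scores):
            """Replace None entries with the most recent non-missing value."""
            filled, last = [], None
            for s in scores:
                if s is not None:
                    last = s
                filled.append(last)
            return filled

        # Depression scores at weeks 0, 2, 4, 6, 8; the patient dropped out after week 4:
        print(locf([58, 50, 44, None, None]))  # -> [58, 50, 44, 44, 44]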

  25. lilady says:

    I am a registered nurse and also a strong proponent of a woman’s right to choose. Cherry-picking quotes from unattributed studies and blatant lying on the part of certain groups and the AAPS in order to frighten women is a travesty.

    Here is the full report of the nurses’ breast cancer study…I don’t see any efforts to skew the results.

    http://archinte.ama-assn.org/cgi/content/full/167/8/814

  26. grendel says:

    Not sure if this one is as bad as my reading of it made out – it’s new and I’m sleepy!

    http://www.hindawi.com/journals/ecam/2012/417267/

  27. Niall Taylor says:

    My favourite is this paper:

    Rao, M.L., Roy, R., Bell, I.R., Hoover, R. (2007) The defining role of structure (including epitaxy) in the plausibility of homeopathy. Homeopathy, Vol. 96, No. 3, pp. 175-182.

    The authors hint that they have discovered a “memory of water” which might explain how homeopathy works; they mention the word water in the abstract several times, and “structure of water” is one of the keywords. It is only when you read the paper that you discover it is about ethanol, not water (ethanol isn’t mentioned in the abstract).

    Links etc here: http://www.rationalvetmed.org/papers_r-s.html

  28. JPZ says:

    That has to be the best reader question I have ever seen on “s”BM. Please thank the reader on my behalf.

  29. weing says:

    @lilady,

    A p= 0.22 is not statistically significant. I would not be able to conclude from this that escitalopram is effective.
