Gold mine or dumpster dive? A closer look at adverse event reports

All informed health decisions are based on an evaluation of expected risks and known benefits. Nothing is without risk. Drugs can provide an enormous benefit, but they all have the potential to harm. Whether it’s to guide therapy choices or to ensure patients are aware of the risks of their prescription drugs, I spend a lot of time discussing the potential negative consequences of treatments. It’s part of my dialogue with consumers: You cannot have an effect without the possibility of an adverse effect. And even when used in a science-based way, there is always the possibility of a drug causing either predictable or idiosyncratic harm.

An “adverse event” is an undesirable outcome related to the provision of healthcare. It may be a natural consequence of the underlying illness, or it could be related to a treatment provided. The use of the term “event” is deliberate, as it does not imply a cause: it is simply associated with an intervention. The term “adverse reaction,” or more specifically “adverse drug reaction,” is used where a causal relationship is strongly suspected. Not all adverse events can be causally linked to health interventions. Consequently, many adverse events associated with drug treatments can only be considered “suspected” adverse drug reactions until more information emerges to suggest the relationship is likely to be true.

Correlation fallacies can be hard to identify, even for health professionals. You take a drug (or, say, are given a vaccine). Soon after, some event occurs. Was the event caused by the treatment? It’s one of the most common questions I receive: “Does drug ‘X’ cause reaction ‘Y’?” We know correlation doesn’t equal causation. But we can do better than dismissing the relationship as anecdotal, as it could be real. Consider an adverse event that is believed to be related to drug therapy:

  • First, is the event an extension of the drug’s pharmacology? Is it predictable, based on how the drug works? For example, narcotics predictably cause constipation and cognitive impairment. Oral antibiotics cause diarrhea because they kill the normal flora in our colon.
  • Secondly, what was observed in clinical trials? The product monograph or prescribing information usually summarizes which adverse events were reported in trials, and which were more frequently observed than in the placebo group.

Neither of the two approaches is comprehensive. If the suspected event is rare, it may not have shown up in the clinical trial, simply by chance alone. Or the event may develop slowly: clinical trials have a fixed duration, while treatments in the real world can last decades. The patient population in clinical trials is usually healthier and on fewer other medications than those in the real world. So to truly understand the adverse event profile, we need to look at real-world data. We could turn to epidemiological studies that evaluate safety in real-world settings, often using massive treatment databases. While not as robust as data from a randomized controlled trial, and prone to misuse, epidemiological studies can answer questions about vaccine safety or identify subtle effects of drugs on common events, like heart attacks.

If the event in question doesn’t show up in any other data source, but is suspicious, it might be appropriate to submit a report to the manufacturer or to the national drug regulator. Importantly, this is a suspected reaction — we cannot be certain the event was caused by a drug based on a single observation. But multiple reports, sent independently, could “signal” the need for more investigation. Consequently, countries with robust regulatory systems have all established systems for collecting spontaneous reports of harms. Systems generally include:

  • mandatory reporting from drug manufacturers of any adverse event reported to the company
  • optional or mandatory reporting from health professionals who become aware of possible adverse events
  • the option for consumers to report harms directly to regulators (bypassing manufacturers or health professionals)
  • collaboration with other countries to share information on adverse events that are collected.

There are four essential elements to any adverse event report:

  • An identifiable patient: there must be a specific patient known to be involved, and adequate information is necessary (gender, age, etc.).
  • An identifiable reporter.
  • A suspected drug: the drug should be known. If a patient is on multiple drugs, they all need to be described in the report.
  • A suspected adverse event or fatal outcome: specific signs and symptoms are required, and ideally a diagnosis as well (e.g., “rash” isn’t specific enough to compare between reports).

Multiple reports are typically required to generate the safety “signal”, a flag that an association merits further investigation. The FDA compiles all adverse event reports it collects in the Adverse Event Reporting System (AERS), and continually analyzes that database, publishing possible signals that it has identified. While consumers and health professionals can submit reports to the FDA directly, the overwhelming majority of adverse events are reported by pharmaceutical manufacturers, who are required by law to forward all adverse events they identify. If an event described is serious and not already listed in the prescribing information, it must be forwarded to the FDA within 15 days. Here are the statistics from AERS:

The blue bars are direct reports, submitted by health professionals or the public — about 4% of the total submitted in 2010. The remainder are submitted by manufacturers. Even massive numbers like these may not be sufficient to identify potential signals of adverse events related to treatments, so countries collaborate and share information: a bigger net can catch more safety signals, it seems. The Uppsala Monitoring Centre is an international collaboration, combining reports submitted by over 100 countries. It has amassed a database of over 7 million reports. Adverse event databases are great resources for identifying safety signals — albeit with some significant limitations.
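How does a regulator sift millions of reports for a “signal”? The post doesn’t specify the FDA’s exact methods, but one widely used disproportionality statistic for spontaneous-report databases is the proportional reporting ratio (PRR): is event E reported more often for drug X than for everything else in the database? A minimal sketch, with invented counts:

```python
# Proportional reporting ratio (PRR): a common disproportionality
# statistic for spontaneous-report data. All counts below are
# hypothetical, for illustration only.

def prr(a, b, c, d):
    """a: reports of event E for drug X
       b: reports of other events for drug X
       c: reports of event E for all other drugs
       d: reports of other events for all other drugs"""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical: 20 of 1,000 reports for drug X mention event E,
# versus 50 of 100,000 reports for all other drugs combined.
ratio = prr(20, 980, 50, 99_950)
print(round(ratio, 1))  # 40.0: E is reported ~40x more often with X
```

Note what this does and doesn’t tell us: a high PRR flags a reporting imbalance worth investigating, but it is computed entirely from report counts, so it inherits every bias in the reporting itself.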


The Trouble with (V)AERS

The misuse of vaccine-related adverse event reports — called VAERS, for Vaccine Adverse Event Reporting System — is a common tactic of antivaccine groups who believe that vaccines are unsafe and cause significant harms. And a database of suspected harms is a gold mine to those seeking anecdotes. Antivaccine groups have been known to mine the VAERS database, and draw causal relationships where none have been established.

There is no question that adverse effect databases can serve as a valuable resource as part of an overall program to monitor the safety and efficacy of a drug or vaccine. However, in isolation, these databases have limited utility. Patterns or “signals” are recurrent events observed in the data. They are hypothesis-generating — not hypothesis-answering. Most importantly, these databases cannot estimate the incidence of any adverse event. In order to estimate the incidence of an event, we need to know how many times it has occurred in a specified population size: the denominator, which is the total number of patients that have taken the drug (or vaccine, as the case may be). No denominator, no incidence. AERS cannot provide that information, given reporting is spontaneous, incomplete, and the size of the population taking the drug isn’t known. The FDA makes this very, very clear:

AERS data do have limitations. First, there is no certainty that the reported event was actually due to the product. FDA does not require that a causal relationship between a product and event be proven, and reports do not always contain enough detail to properly evaluate an event. Further, FDA does not receive all adverse event reports that occur with a product. Many factors can influence whether or not an event will be reported, such as the time a product has been marketed and publicity about an event. Therefore, AERS cannot be used to calculate the incidence of an adverse event in the U.S. population. [emphasis added]
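The FDA’s point about incidence can be made concrete with a toy example (all numbers invented): two drugs generate identical report counts, but without knowing how many patients took each drug, the counts alone say nothing about risk.

```python
# Why report counts can't give incidence (all numbers hypothetical).
reports = {"drug_A": 100, "drug_B": 100}  # identical numerators
# The exposed populations below are exactly what AERS does NOT know:
patients_exposed = {"drug_A": 1_000, "drug_B": 1_000_000}

for drug in reports:
    incidence = reports[drug] / patients_exposed[drug]
    print(f"{drug}: {incidence:.2%}")
# drug_A: 10.00%
# drug_B: 0.01%
# Same numerator, a 1,000-fold difference in risk -- computable only
# because we invented the denominators.
```

No denominator, no incidence: the division on the last line simply cannot be performed with spontaneous-report data alone.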

The old adage that garbage in = garbage out holds true with adverse event databases. The quality of reporting is as important as, if not more important than, the quantity of these reports. Low-quality reports with incomplete data can make true safety signals harder to find. Trends in AERS and VAERS reports may also be subject to external pressures unrelated to drug effects. Relationships have been established between H1N1 media stories and VAERS reports. Vaccine litigation can do the same thing. Neither trend reflects a real drug effect, but you need to look beyond the databases to answer the question. It would be interesting to see if some of the current medical-legal drug controversies (e.g., oral contraceptives, antipsychotics, and antidepressants) also have identifiable litigation-related trends in AERS databases.


The Power of Nocebo

Are all reported side effects real effects? Probably not. Any double-blind clinical trial will describe the side effects reported with both the active treatment, and the placebo. And adverse events from placebos can be so significant that they lead to treatment discontinuation. A systematic review of trials in patients with fibromyalgia noted that 67% of patients reported adverse events in the placebo arms, and 10% discontinued “treatment” with the placebo due to adverse effects. In a study of allergy treatments, 27% of participants reported allergic symptoms to a placebo challenge. And in a study that compared ASA (aspirin) versus placebo, describing potential adverse effects of therapy led to a “sixfold increase in the number of subjects withdrawing from the study because of subjective, minor, gastrointestinal symptoms” compared to study sites that did not provide that caution.

The recognition of nocebo effects is another consideration when evaluating individual adverse event reports.


Free the data!

While collecting and analyzing adverse event reports has been standard practice since the thalidomide disaster, the data only became publicly available more recently. Several years ago, the Canadian Broadcasting Corporation (CBC) used access-to-information laws to obtain access to Health Canada’s entire adverse event database, and posted the data online. Health Canada subsequently made the data available directly.  The FDA makes similar data available, which you can query directly. But given the limitations described above, the utility of these databases to the public or health professionals isn’t clear.

It seems clear to a new company, which aims to make AERS data more easily accessible, and to make money while doing so, as the CMAJ described last week. Brian Overstreet, CEO, made the following comments:

We live in an information age, and there is an overwhelming pool of potential information for consumers to look at online. What’s lacking is a real statistical overview. We can come in and say, listen it’s nice that 200 people on this discussion board say they got an upset stomach, but we have 50 000 case reports, and from those we know 27% have an upset stomach. Having hard data to back up the real world perception I think is very, very valuable.

The FDA, and even they don’t know for sure, but they estimate maybe 10% of the serious adverse events are reported. But as much as we’re talking about a limited data set, three million case reports in the last seven years, it’s not a small data set. It’s a pretty robust data set. The data’s never going to be perfect, but it’s better than nothing, and so long as we’re treating it properly, the end result should be valuable.

The obvious problem with this approach, as I’ve pointed out above, is that it ignores the significant limitations of the data itself. You cannot estimate incidence without a denominator. And there is no denominator in the AERS data. These data limitations don’t stop the company from comparing within classes of drugs based on reported events, or even identifying which drugs are most likely to be associated with death.

Another group that recently announced a query service for adverse event reports is headed by Dr. David Healy, psychiatrist and author of the book Pharmageddon. He has launched an independent adverse event collection site, which states:

There comes a point where, even if the clinical trial data says otherwise, it is just not reasonable to say the problem can’t be happening in at least some people. You and your healthcare team have been handed a megaphone!

It appears the site will both analyze FDA-reported adverse events and collect reports directly, facilitating their submission. In an interview, Healy made the following comment:

People need to wake up and stop thinking that clinical trials have provided all the answers. We need to get back to believing the evidence of our own eyes. The key thing is to get the data. There isn’t anyone else getting data like this.

Given the dangers of drawing conclusions from spontaneous reports, and considering how the VAERS database has been misused to make erroneous inferences about vaccine safety, I’m somewhat skeptical about what these sites will contribute to our understanding of drug safety. The sites are not yet fully operational, so this may be a topic I’ll revisit in a future post.


Improving our drug safety monitoring systems

We all share the goal of wanting to understand the true risks of drug treatments. Part of supporting informed evaluations of risk and benefit means regulators must continuously monitor the real-world safety profile of licensed drugs. Adverse event databases perform a critical role in identifying possible safety signals and generating hypotheses that require additional analysis. The significant challenge is to differentiate between the useful signals and the noise, and recognize biases in our observations and in the way we collect this data. Otherwise we may well assign causality where none may exist — another sort of poor outcome.

Adverse effect reporting systems are designed to enhance patient safety. They are one tool, unquestionably useful, but limited in utility. If we don’t keep these limitations in mind, we run the risk of worsening, not improving, our understanding of a treatment’s risk and benefit.

Posted in: Epidemiology, Pharmaceuticals


24 thoughts on “Gold mine or dumpster dive? A closer look at adverse event reports”

  1. zeno says:

    Surely they are likely to get duplicate events: those reported via AERS and those reported directly? With the anonymous data in AERS, how will they be able to tell?

  2. drsteverx says:

    Seems like there is always room for more poor statistical analysis on the interwebs. I cannot think of any positive outcomes for these sorts of websites, except for the owners I suppose. Most lay people are not able to look at this sort of information and know the limitations; I would expect panic from the majority reading about their med on one of these websites.

    “We need to get back to believing the evidence of our own eyes.”

    Wow. That is the whole purpose of adhering to the scientific method the best we can. Our perceptions are faulty at best. I would expect this quote from a homeopathic website or other purveyor of woo. Though I admit I did not read his book Pharmageddon, so maybe that does fit.

  3. Sid Offit says:

    So if VAERS data is “garbage,” how can it be used to determine the safety of vaccines once released into the market? Is your assertion that it’s a bad system when adverse events are reported, but a great one when those events are not picked up?

    As to media reports influencing reporting, what effect do media stories about the “remarkable safety” of vaccines have?

    And does VAERS have any ability to discover events that develop over time? Is follow-up investigation in general adequate? Finally, aren’t stories about people making fake reports really exaggerated in order to downplay the risks of vaccination?

    I’m willing to acknowledge the limits of VAERS can lead to correlation being promoted as causation, but, restating my earlier point, with all its limitations, can VAERS be said to provide even a reasonable picture of what a vaccine does in the real world? Is it a good system that can be misinterpreted to make vaccines seem more dangerous than they are, or is it just a bad or very limited system in general?

    Lastly, what is your opinion of reports stating some reactions are substantially under reported?

  4. Scott says:

    I cannot think of any positive outcomes for these sort of websites except for the owners I suppose.

    And the lawyers who will file lawsuits based on the dubious analyses.

  5. cervantes says:

    What we really need — although it’s expensive — is more structured post-marketing surveillance. That means following a cohort of people who receive a treatment so you have a denominator, and can compare their experience to people not getting the drug. There are still many complications, e.g. confounding by indication, but these can be handled to some extent with techniques such as propensity scores and instrumental variables. I am among those who think we really need to do this — clinical trials are of much too short a duration and, as you say, have unrepresentative populations. Adverse event reporting systems are basically a lame substitute for meaningful surveillance that let the FDA and the pharm companies say they’re watching out for us when they really aren’t.

  6. Scott says:

    @ Sid:

    What part of

    There is no question that adverse effect databases can serve as a valuable resource as part of an overall program to monitor the safety and efficacy of a drug or vaccine. However, in isolation, these databases have limited utility. Patterns or “signals” are recurrent events observed in the data. They are hypothesis-generating — not hypothesis-answering.

    is unclear?

  7. Sid Offit says:

    OK, so VAERS is inadequate. But as you say, it’s not used in isolation. If it’s not used in isolation, it must be used in conjunction with other systems (for example, the VSD). What are some other aspects of the post-licensure system you are aware of, and what type of sensitivity do you feel the system as a whole provides?

  8. cervantes says:

    Well, see my comment preceding.

    The problem with the system as a whole is not so much that it’s insensitive as that it’s not specific. There’s no way to sort out real signal from noise. Once you think you see a signal, you need to ramp up a whole new investigation, but what the threshold for that should be, and who will pay for it, and what to do in the meantime, is basically undefined.

    In fact the FDA has mandated post-marketing surveillance studies for many medications and the companies have just ignored it.

  9. PJLandis says:

    My understanding is that VAERS is a purposeful dumping ground not meant to create an accurate profile of any particular treatment. There are ways that is accomplished, or should be, such as the post-marketing studies.

    I didn’t see it mentioned in the article, but VAERS creates a database where any report can be added to the system regardless of quality as opposed to a more stringent reporting system which would likely result in far fewer spontaneous reports from the medical community at large.

    Once an issue is identified, likely through some other means, the reports within the VAERS database could provide useful information to guide further research. Study populations equal money, and if the VAERS database brings up little in terms of some new, unexpected adverse event, it’s an indication that the event isn’t a 1-in-100,000,000 event and that maybe a smaller, more manageable study could be justified. Plus, it gives information on what might otherwise be little studied or expected events.

    If you up the VAERS standards, you might get a better database for determining incidence, but you’re also losing a lot of reporting that won’t otherwise happen, and perhaps adding time to the discovery of serious adverse events. Anyway, just because people are misusing the data doesn’t mean there is something wrong with the VAERS system; all evidence can be misused.

  10. Harriet Hall says:

    Systematic post-marketing surveillance is the answer. There is great potential in computerized databases that could track everyone taking a medication and compare them to those not taking it.

  11. Angora Rabbit says:

    Thanks for a great article on an important topic. In my own field of teratology I too wish there was a better system to record potential adverse outcomes. Many states (but not all!) have mandatory birth defects registries and from those one at least gets a feel for incidence vs. general population. We publish these annually. But this only works because all births are recorded. For most medications outside the birth defects field, as you rightly point out, we do not know the denominator (size of user population) so it is challenging (impossible) to use these datasets to calculate an exposure risk. By definition these reporting systems are biased because the entire population is not sampled, just a subset and only that subset that reports an adverse outcome. It’s recall bias to the max.

    Thalidomide highlights both the best and worst flaws in these reporting systems. It worked because the spectrum of defects was extremely narrow (= well-defined) and, most important, they were unique to thalidomide exposure. The signal-to-noise was huge and so the cause was quickly and correctly identified.

    But most birth defects are not unique and occur at some frequency in the population. Trying to link a drug exposure to, say, midfacial clefting is a challenge because, unless this happens in a high % of the drug users, it will be near impossible to identify those drug-related cases against the low background incidence. Magnify this against outcomes that are already common in society (heart attack, stroke) or occur much later after exposure (cancer) and the registries become nigh useless.

    I completely agree with you, they are not much better than hypothesis-generating and certainly are not sufficient to draw a conclusion. What we need is a better system of recording both use and adverse outcomes on a population scale. I am not smart enough to know how to make this happen. But those two websites are not the answer and I too predict a wave of ambulance chasing to follow.

  12. Angora Rabbit says:

    Dr. Hall, I like your idea as computerized records become widespread. Is it going to be feasible to centralize all uses and not just those associated with adverse outcomes? I don’t know. I know that companies are not keen to have registries and possibly the consumer will again foot the bill.

  13. windriven says:

    @Angora Rabbit

    You said: “I know that companies are not keen to have registries and possibly the consumer will again foot the bill.”

    Consumers are going to foot the bill one way or the other. Drug companies generally have one substantial source of income: those who purchase their products. The cost of monitoring will come from that primary income source. I guess I’m wondering where else you think it might come from?

    I’m also not certain that ‘companies are not keen to have registries.’ I don’t claim to know either way. I’m in the devices business not the drugs business. I would have no hesitation at all about registries – so long as all companies in my field had to share the cost. I suspect that drug makers take their responsibilities just as seriously as device makers.

  14. dhallai says:

    Great post, Scott. I agree with some of the previous commenters that structured post-marketing surveillance using a computerized database — all drugs, all people (anonymized, of course) — would be ideal from a post marketing safety and efficacy standpoint. The cost would be relatively low. The earlier detection of a single fiasco (e.g., Vioxx, Avandia, etc) would probably pay for the whole system many times over.

  15. MerColOzcopy says:

    “Nothing is without risk” says it all.

    If the interpretation of AERS is in question, what value is there in favorable event reporting? If a test group is given a flu shot with no adverse events reported, and everyone avoids getting the flu, can it be said it is safe and effective? Certainly not.

    The fact that such information exists would seem to put some drugs and vaccines in a category envious of CAM. When SBM seems to be at odds with itself perhaps the path of minimal risk for some is CAM.

  16. PJLandis says:


    I think you’re trying to say CAM is better because we don’t know anything, as opposed to knowing something but not enough to make a solid conclusion? SBM isn’t at “odds with itself”; if anything, I think you’re seeing the self-correcting nature of science as an Achilles heel when it’s perhaps science’s greatest strength. Of course CAM seems without risk when you never look for any risks and ignore anything but positive evidence.

    Either way, favorable events and adverse events are both most useful when they come from a defined population, are compared against a control group, and when all groups (control and treatment) report everything, good or bad. VAERS doesn’t do this, hence Dr. Hall’s complaint that people are drawing conclusions from data that at best might help develop a hypothesis for a study that itself might yield supportable conclusions.

    If a certain group of people given a flu shot is compared to a similar group which doesn’t receive a flu shot, and both groups are followed to determine whether they get the flu and to report adverse events, then, assuming no other source of bias is apparent and the group is large enough, we can make statements about the safety and efficacy of the flu vaccine. Is it possible that adverse events might be missed or not attributed to the vaccine? Yes, but that is why science is self-correcting and a good argument for more post-marketing surveillance studies.

    On the other hand, an unstudied CAM modality has little or no data to be evaluated. Or, more commonly, poorly designed studies which give unreliable conclusions. So, should we bet on a well-studied vaccine with known benefits and adverse events, or an untested CAM modality that is unlikely to offer any efficacy (I’m actually interested in hearing of a CAM treatment for the flu), which makes any risk foolish?

    CAM treatments are akin to getting into a car (CAMry?) everyday that takes you nowhere but blows up and injures or kills someone every so often. Driving my car may have the same or even greater risks, but at least it takes me places.

  17. MerColOzcopy says:

    You thought wrong, I don’t think CAM is better. Adverse or favorable reporting is not relevant when any form of treatment was not needed in the first place. Unnecessary vaccines, antibiotics, and CAM are all guilty. Using your analogy, sometimes the trip is not worth the risk, or even necessary.

    Your “CAM treatment” analogy, “takes you nowhere but blows up and injures or kills someone every so often,” is the way many see vaccines. If SBM cannot determine conclusively whether an adverse event is valid or not, then it “seems” to be at odds with itself, hence CAM.

    CAMry :)

  18. lilady says:

    The question about rotavirus vaccines’ safety records has been brought up recently on a Respectful Insolence blog. I have responded to the one person who persistently posts some inanities about rotavirus and the original vaccine that was licensed:

    “What action did CDC take when cases of intussusception were reported to VAERS?

    CDC, in collaboration with the Food and Drug Administration (FDA), and state and local health departments throughout the United States, conducted two large investigations. One was a multi-state investigation which evaluated whether or not rotavirus vaccine was associated with intussusception. Based on the results of the investigation, CDC estimated that RotaShield® vaccine increased the risk for intussusception by one or two cases of intussusception among each 10,000 infants vaccinated. The other was a similar investigation in children vaccinated at large managed care organizations. When the results of these investigations became available, the Advisory Committee on Immunization Practices (ACIP) withdrew its recommendation to vaccinate infants with RotaShield® vaccine, and the manufacturer voluntarily withdrew RotaShield® from the market in October 1999. ”

    RotaShield vaccine was first licensed July, 1998 and removed from the marketplace within 14 months of licensing, a testament to the effectiveness of the FDA’s and the CDC’s monitoring of an adverse event that occurred within a relatively small subset of infants who had received RotaShield vaccine.

    This same poster on Respectful Insolence then opined that incidence of Kawasaki Syndrome and deaths have been “reported” following immunization with RotaTeq vaccine (one of two currently licensed rotavirus vaccines.) I then posted this link about Kawasaki disease incidence reported during clinical trials, as well as the incidence of reports of Kawasaki Syndrome from VAERS and the Vaccine Safety Datalink after receiving the vaccine. There were no Kawasaki Syndrome deaths ever reported associated with the administration of Rotavirus vaccines.

    “The FDA reports that five cases of Kawasaki syndrome have been identified in children less that 1 year of age who received the RotaTeq vaccine during clinical trials conducted before the vaccine was licensed. Three reports of Kawasaki syndrome were detected following the vaccine’s approval in February 2006 through routine monitoring using the Vaccine Adverse Event Reporting System (VAERS). After learning about these Kawasaki syndrome reports, CDC identified one additional unconfirmed case through its Vaccine Safety Datalink project. The vaccine label has been revised to notify healthcare providers and the public about the reports of Kawasaki syndrome following RotaTeq vaccination.

    The number of Kawasaki syndrome reports does not exceed the number of cases we expect to see based on the usual occurrence of Kawasaki syndrome in children. There is no known cause-and-effect relationship between receiving RotaTeq or any other vaccine and the occurrence of Kawasaki syndrome.”

    The persistent poster again opined that Rotateq vaccine was implicated in increased risk of intussusception. I then linked to this article from the JAMA:

    “Main Outcome Measure Intussusception occurring in the 1- to 7-day and 1- to 30-day risk windows following RV5 vaccination.

    Results During the study period, 786 725 total RV5 doses, which included 309 844 first doses, were administered. We did not observe a statistically significant increased risk of intussusception with RV5 for either comparison group following any dose in either the 1- to 7-day or 1- to 30-day risk window. For the 1- to 30-day window following all RV5 doses, we observed 21 cases of intussusception compared with 20.9 expected cases (SIR, 1.01; 95% CI, 0.62-1.54); following dose 1, we observed 7 cases compared with 5.7 expected cases (SIR, 1.23; 95% CI, 0.5-2.54). For the 1- to 7-day window following all RV5 doses, we observed 4 cases compared with 4.3 expected cases (SIR, 0.92; 95% CI, 0.25-2.36); for dose 1, we observed 1 case compared with 0.8 expected case (SIR, 1.21; 95% CI, 0.03-6.75). The upper 95% CI limit of the SIR (6.75) from the historical comparison translates to an upper limit for the attributable risk of 1 intussusception case per 65 287 RV5 dose-1 recipients.

    Conclusion Among US infants aged 4 to 34 weeks who received RV5, the risk of intussusception was not increased compared with infants who did not receive the rotavirus vaccine. ”

    It’s hard work to dispel the myths promulgated by notorious anti-vaccine websites and by individuals who have no experience in immunology or epidemiology and who “plug into” the various and sundry conspiracy theories (*Big Pharma*, *Big Gubmint*) posted by pseudo-science bloggers.

  19. lilady says:

    Scott…I have a long comment held in moderation…too long and too many links, perhaps?

    Thanks in advance for releasing it from the *moderation hopper*….lilady

  20. Deetee says:

    One issue I have is with the trumpeted claim that “less than 1% of reactions are reported” and the use of this inaccurate, unsourced statistic to multiply known side-effect rates 100-fold by whichever idiot is doing the claiming.

    Trying to track the source of under-reporting is tricky. There are sources, including the FDA, stating that “as few as” 10% of reactions are officially reported (which does not surprise me, since medics will only report events they feel are clinically significant for the patient or unusual; e.g., no one would file an incident report for dizziness or postural hypotension in someone on antihypertensives).

    One document cited frequently for the less-than-1% claim is by former FDA chief Kessler in JAMA, but the wording of his statement remains elusive and I have never been able to find the exact text. It gets repeated by luminaries in the antivaccine world like Meryl Dorey (“The fact is that study after study has shown that the vast majority – up to 99% – of reactions are never reported. Yet the government and the medical community rely on these figures which are 99% incorrect.”) and Barbara Loe Fisher (“Former FDA Commissioner David Kessler estimated in a 1993 article in the Journal of the American Medical Association that fewer than 1 percent of all doctors report injuries and deaths following the administration of prescription drugs. This estimate may be even lower for vaccines.”) [thereby dropping the reported percentage of "injuries and deaths" to under 1%]
    (look up these claims if you dare)

    I once did a check on how often serious vaccine adverse events were reported (e.g. paralysis after oral polio vaccine) and recall that the reporting rate was consistently over 50% (unfortunately, I can no longer find these citations).

  21. Deetee says:

    Followup to the above:

    Barbara Loe Fisher stated:
    “Former FDA Commissioner David Kessler estimated in a 1993 article in the Journal of the American Medical Association that fewer than 1 percent of all doctors report injuries and deaths following the administration of prescription drugs. This estimate may be even lower for vaccines.”

    Neat misquote, Babs. Kessler actually said this:
    “Although the FDA receives many adverse event reports, these probably represent only a fraction of the serious adverse events encountered by providers. A recent review article(12) found that between 3% and 11% of hospital admissions could be attributed to adverse drug reactions. Only about 1% of serious events are reported to the FDA, according to one study.(13)”

    So Kessler himself never made this claim; he simply cited another study as an example of how low AER rates may be. Yet the citation has morphed in popular antivaccine mythology into a statement from him in his capacity as FDA Commissioner.

    Reference 13 is this one:

    It doesn’t say that 1% of serious reactions are reported.
    It compares reporting rates before and after an initiative to improve AER reporting in Rhode Island. Where the 1% figure comes from remains a mystery, since the study has no meaningful denominator for estimating the real overall AER rate against the reported one.

  22. Deetee says:

    Quite interesting and comprehensive paper on ways of improving AER reporting:

    Improving reporting of adverse drug reactions: Systematic review.
    Molokhia M, Tanna S, Bell D.
    Clin Epidemiol. 2009 Aug 9;1:75-92.

Comments are closed.