
On Being Certain

Neurologist Robert A. Burton, MD, has written a gem of a book: On Being Certain: Believing You Are Right Even When You’re Not. His thesis is that “Certainty and similar states of ‘knowing what we know’ arise out of involuntary brain mechanisms that, like love or anger, function independently of reason.” Your certainty that you are right has nothing to do with how right you are.

Within 24 hours of the Challenger explosion, psychologist Ulric Neisser had 106 students write down how they’d heard about the disaster, where they were, what they were doing at the time, and so on. Two and a half years later he asked them the same questions. A quarter gave strikingly different accounts, more than half of the accounts differed significantly, and only 10% had all the details correct. Even after re-reading their original accounts, most of the students were confident that their false memories were true. One student commented, “That’s my handwriting, but that’s not what happened.”

Just as we may “know” things that clearly aren’t true, we may think we don’t know when we really do. In the phenomenon of blindsight, patients with a damaged visual cortex have no awareness of vision, but can reliably point to where a light flashes when they think they are just guessing. And there are states of “knowing” that don’t correspond to any specific knowledge: mystical or religious experiences.

A “feeling of knowing” probably conferred an evolutionary advantage. If we are certain, we can act on that certainty rather than hesitating like Hamlet. Certainty makes us feel good: it rewards learning and it keeps us from wasting time thinking too much, but it impairs flexibility.

Richard Feynman said,

“I can live with doubt and uncertainty and not knowing. I have approximate answers and possible beliefs and different degrees of certainty about different things… It doesn’t frighten me.”

On the other hand, many people, especially religious fundamentalists, can’t deal with uncertainty. They demand absolute answers and cling to their certainties even in the face of contrary evidence. Why are people so different in their need for certainty? We know there is a gene associated with risk-taking and novelty-seeking. Burton makes an intriguing suggestion: could genetic differences make individuals get different degrees of pleasure out of the feeling of knowing?

There is a “hidden layer” in our brain whose neurons are influenced by genetics, personal experience, hormones, and chemistry. These factors influence all our thought processes without our conscious knowledge. We would like to think that if everyone had the same information they would necessarily reach the same conclusion, but that just isn’t so. There is no such thing as pure reason. “Reason is not disembodied, as the tradition has largely held, but arises from the nature of our brains, bodies, and bodily experiences.”

The autonomous rational mind is a myth. The concepts of the self and free will are innate useful fictions that allow us to function. As Samuel Johnson said, “All theory is against the freedom of the will; all experience is for it.” Modern neurophysiology tells us our decisions are made subconsciously before we are aware of deciding.

Burton discusses how certainty interferes with science. “Integrative medicine” guru Andrew Weil set up tests of osteopathic manipulation for ear infections, and when the experiments showed no effect, he said, “I’m sure there’s an effect there. We couldn’t capture it in the way we set up the experiment.” This kind of thinking is rampant in alternative medicine. Burton thinks that if Dr. Weil recommends osteopathy for an ear infection, he should inform the patient that the recommendation is based on an unconfirmed belief.

Richard Dawkins rejects religion but finds purpose and meaning in science. Burton suggests that purpose and meaning are powerful innate feelings. We feel that our life has purpose and meaning, and we look to science or religion to try to explain that feeling. No amount of rational argument is likely to change us. “Whether an idea originates in a feeling of faith or appears to be the result of pure reason, it arises out of a personal hidden layer that we can neither see nor control.”

Burton thinks irrational beliefs can have adaptive benefits (for instance, the placebo effect) and thinks objectivity and reason should be seen in the larger context of our biological needs and constraints. If science and religion could both accept that all our facts are really provisional, absolutism could be dethroned and a dialog might become possible. What if religious fundamentalists acknowledged even a 0.0000000001% possibility that their beliefs were false? Biology teaches us that absolutism is an untenable stance of ignorance.

I have long thought that absolutism was one of humanity’s greatest problems. There are implications for politics, religion, and every sphere of human activity. The insights from this book can be applied to every human interaction from marital squabbles to terrorism. It may be frightening to recognize the limits of our knowledge. It will be hard for some to give up their cherished certainties, but Burton says he has gained an extraordinary sense of an inner quiet born of acknowledging his limitations.

As a reminder that there is never a 100% guarantee that we are right, Burton suggests we use the words “I believe” instead of “I know.” This is the one place where I disagree with him: I don’t like either word. Belief sounds too much like faith. I don’t like the idea of saying I believe evolution is true. Truth in science, at best, can only mean that the evidence is overwhelming. We can’t “know” absolutely in a metaphysical sense. We provisionally accept evolution because the evidence is so overwhelming that it would be perverse to reject it. We remain open to new evidence.

The author is a neurologist who is also a novelist and a columnist for Salon.com. This well-written book is the result of many years of cogitation by a wise clinician. He supports his arguments with tales of neurology patients, recent research into brain function, and examples of how our senses constantly fool us.

Burton says, “In medicine, we are increasingly developing ethical standards for complex medical decisions that both allow for hope and placebo effect, yet don’t fly in the face of evidence-based medical knowledge.” This subject has come up on this blog before, and it is one we will continue to grapple with.

If there’s anything you think you’re certain of, read this book and you may change your mind.

Posted in: Book & movie reviews, Neuroscience/Mental Health


38 thoughts on “On Being Certain”

  1. Michelle B says:

    Superb post, particularly enjoyed the Feynman quote.

  2. scottf says:

    I agree that the wishy-washy faith words “I believe” and the intellectually arrogant “I know” are both unsatisfactory for a skeptic. When the choice of words is important, the way I usually like to express myself is to state that “The evidence suggests that…” and go from there.

  3. Joe says:

    Someone I know likes to say, “If you can’t understand, maybe it’s you”: http://www.apa.org/journals/features/psp7761121.pdf
    The article is titled “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments”

  4. jonny_eh says:

    Sounds like the book would be a good companion to Mistakes Were Made (But Not by Me), written by Carol Tavris and Elliot Aronson.

  5. teeps29 says:

    The post-modernists are going to love Dr. Burton. How does “Whether an idea originates in a feeling of faith or appears to be the result of pure reason, it arises out of a personal hidden layer that we can neither see nor control” differ from an English professor’s “Scientific facts do not correspond to a natural reality but conform to a social construct”? Dr. Burton’s basic thrust makes sense, but he goes too far. I’m with Samuel Johnson: picture me kicking a rock as I say “I refute you thus!” :)

  6. Harriet Hall says:

    Burton is firmly in the camp of rock-kickers. He is anything but a post-modernist. He doesn’t reject natural reality. Wherever our ideas originate, they can be tested by science in the external reality we all share. In fact, science is all the more important as a corrective because of the mental foibles Burton discusses.

  7. HORansome says:

    There’s a long and glorious history of the propositional attitudes ‘know’ and ‘believe’ in use in Western thought and it’s not really very helpful to ignore it just because you think it has negative connotations (which probably says more about where you come from; in New Zealand, I would say, there is no pejorative sense of ‘believe’ as there seems to be in the States). I think part of the problem is that you use a bad example (‘I believe evolution is true’); presumably you mean by this that you hold a justified true belief in the theory of Evolution by Natural Selection, which is a case of ‘knowing,’ traditionally. If you’re concerned about whether your justified belief (the evidence is overwhelming for your best inference to an explanation) is actually true then you can drop the truth requirement and go for justified belief (where your justification will be predicated on some Correspondence Theory, or Reliabilism, or something like that), which is what I take people to mean when they say ‘I believe that…’

  8. Harriet Hall says:

    Semantics can really tangle us up. Beliefs can be based on either faith or evidence. I would prefer a terminology that clearly indicates that my best guess about reality is (1) based on evidence (2) modifiable if better evidence surfaces and (3) not making any metaphysical claims of absolute truth.

  9. HORansome says:

    Well, justified belief (as long as you can explicate what notion of justification you are dealing with) is what you want, then. There’s about three thousand years worth of literature on this, going back to the ancient Greek philosophers.

    (This is all part of a very important semantic debate; traditionally to know that p has been to have had a justified true belief that p.)

  10. Harriet Hall says:

    Justified belief may be a useful philosophical term, but it’s not practical for discussing things with laymen who believe that a testimonial justifies their belief in a quack remedy. You’d have to stop and explain what your notion of justification is, which would amount to a whole course in the scientific method.

  11. pec says:

    People feel certain when they have not been exposed to contradictory evidence. Most people gravitate towards sources of information that confirm what they already believe, so their initial beliefs are more likely to be strengthened than weakened.

    And we naturally tend to fill in gaps in our understanding with myths. Everyone does this, educated or not. Scientific materialism is just as full of mythology as fundamentalist religion or alternative medicine.

    There is a long tradition among educated scientists of trying to figure out why “superstition” is still so prevalent. Psychologists have done many experiments that supposedly show the limitations of the human mind, especially the uneducated human mind.

    I think it’s a lot of baloney. The human mind does a great job of processing information — throwing out what doesn’t matter much and moving what is critical to the top. The details of the Challenger explosion were not going to make a big difference in the average person’s life, so the memories fade quickly.

    In their intense effort to demonstrate the average person’s limitations, psychologists have often ignored things that seem obvious to someone like myself, who respects the abilities of the average person, educated or not.

    Our problems are not in processing the available data, but in accessing it. However much we learn, our ignorance remains infinite.

    If you are surrounded by people like yourself, and you read books and articles that confirm your beliefs, your certainty will only increase. Academics are just as vulnerable to “group think” as church-goers.

    Psychologists and other scientific types, but psychologists and cognitive scientists especially, love to feel they have transcended the defects that make humans intolerant, self-righteous and bigoted. But it all comes back around to wanting to feel elevated above your neighbors.

  12. HORansome says:

    Well, okay, when it comes to talking to the laity (now there’s a term with religious connotations [and don't get me started on the sexist version 'laymen']) you might want to couch your terms with a fair amount of handwaving, which is fine, but, in reality, all you’re doing is obfuscating the debate you’re having with said layperson if you don’t then go ‘…and that’s what I mean by justified belief (or something similar).’ If we let the terms ‘get away’ then, slowly but surely, we’re giving up.

    Once again, I think this is pretty much down to context. I’d be more than comfortable using belief in New Zealand when discussing things with my students (some of whom are future medical professionals) but I might well feel like you if I were teaching in the States. Even so, I’d still use the term and explain its ancestry because, well, it’s a useful term and we really have no better, and we cannot afford to give up on useful terminology that has not only stood the test of the ages but, as part of a methodological project, led to the development and advancement of the sciences as we know them.

  13. bikeshopgirl says:

    Cordelia Fine has also written an engaging book on this topic: “A mind of its own” Publishers: North America (WW Norton), UK (Icon Books), Australia (Allen & Unwin), Italy (Mondadori), Germany (Elsevier), Denmark (Borgens Forlag) and Japan (Soshisha). It will also be published shortly in Israel (Aryeh Nir), Hong Kong, Macao, and Taiwan (Locus), Brazil and Portugal (Bertrand).
    I’m surprised it didn’t rate a mention in this review, but I’d recommend it to anyone interested in the topic.

  14. Ted Goas says:

    Great article! I thoroughly enjoyed it. It highlights so many reasons why it’s almost impossible to be 100% certain about a lot of things (e.g., God definitely exists, God definitely DOESN’T exist, fairies definitely do or don’t exist).

    I’d agree with pec that “If you are surrounded by people like yourself, and you read books and articles that confirm your beliefs, your certainty will only increase.” Cass Sunstein’s Republic.com 2.0 does a good job of explaining this fault and how to avoid it.

    I’m wondering if anyone here is a fan of Michael Shermer and his books on how the human mind works (and sometimes believes in false truths).

  15. Harriet Hall says:

    “People feel certain when they have not been exposed to contradictory evidence.” That’s true. Unfortunately, once they are certain, exposure to contradictory evidence usually does no good.

    “The details of the Challenger explosion were not going to make a big difference in the average person’s life, so the memories fade quickly.” That’s true, but the point of the study was that they were confident that their false memories were true.

    “Scientific materialism is just as full of mythology as fundamentalist religion or alternative medicine.” I can accept that scientific materialism doesn’t have any claim to absolute truth in a metaphysical sense, but mythology? I don’t think so. Mythology uses the supernatural to interpret natural events and to explain the nature of the universe and humanity. Science is the observation and testing of natural events, and there is no way it can study the supernatural unless the supernatural has effects on the material world.

  16. pec says:

    Harriet,

    We have no useful definitions of “natural” vs. “supernatural.”

  17. caoimh says:

    I agree with Harriet on the usage of “belief” and “know”.
    In fact Eugenie Scott (NCSE) has been tireless in her efforts to get scientists to avoid using “belief” when they are discussing science.

    I would recommend this philosophic article on the no belief premise:

    http://www.nobeliefs.com/beliefs.htm

    Ps. Don’t mean to open up a can o’ worms again!

  18. weing says:

    pec,
    Isn’t that a false dichotomy? If the supernatural doesn’t exist, what is left is what exists. The natural.

  19. daedalus2u says:

    There is another term I use a lot, that of “I think”. That is the term I tend to use unless I have a high degree of certainty (then I use “I know”, or usually the passive “it is known”). When there is a very high degree of certainty I use “it is well known”. When I am only pretty sure I use “I believe”. I rarely use “I believe” in terms of scientific issues.

    I make a gigantic distinction between data and inference. Data is something that can be “known”; inferences are less certain than that and more subject to revision. They can only really be “thought” and not “known”.

    When I know “enough” of the literature about something that there is only one possible interpretation, then I would use the term “it is well known”. Things I would put in this category: evolution, conservation of mass/energy, anthropogenic global warming, mercury does not cause autism, antibiotics don’t cure viral diseases.

    Perhaps this is a peculiarity of how my mind works, but if I don’t have the data and relationships fresh in my mind the degree of certainty I have regarding them goes down. I only have a very high degree of certainty about something while I am actually working with it and have all the data and relationships pretty fresh in my mind.

    For me, the level of certainty difference between “I think” and “I know” isn’t very much in absolute terms, and mostly which term I use depends more on style and who I am talking with at the time and what I am talking about. Saying “I know” does tend to put people on the defensive and it does come across as arrogant to those who do not understand (even when it is justified). I only use it when I have really thought an issue out and have multiply redundant chains of facts and logic that independently confirm the idea. Usually by then the idea has pretty much satisfied (to me) the Holmes criteria:

    …when you have eliminated the impossible, whatever remains, however improbable, must be the truth. S. Holmes

    Most people use terms such as “I believe”, or “I know”, not as descriptions of their own mental state, but in an attempt to impose that state on someone else. To try and get them to adopt that belief. When most people say “they know”, what they actually mean is that they believe that they know and want you to believe that they know it too.

    A lot of the objection to saying “I know” seems to me to be purely rhetorical. When there are multiple and redundant chains of facts and logic that lead to the same conclusion, arguing over whether 98%, 99%, or 99.99% certainty deserves the label “it is known” misses the point. The argument shouldn’t be over the semantics being used, but over the facts and logic. If you don’t have facts and logic to support a position, you can’t know it.

    Pec’s explanation above seems to be projection based on how her own lines of reasoning work. (To me, “seems” is the lowest degree of certainty that I have.) To speak of such broad categories of people as “scientists” in such sweeping generalities cannot possibly be correct. Pec says “Scientific materialism is just as full of mythology as fundamentalist religion or alternative medicine.” Yes, there are some myths that some scientists hold (such as homeostasis). That does not make science a mythological system the way that religion is (which is only myths). In the fullness of time, the myth of homeostasis will be abandoned in name as well as functionally (which many scientists have already done).

    Science makes no claims about the supernatural. That would be a good definition of the supernatural: that which science cannot study. Claims that cannot be tested (in principle) by science are in the realm of the supernatural.

  20. ellazimm says:

    I agree with Burton: once I accepted that I had much better memory recall than most people I stopped worrying about the stuff I forgot and not only did my stress levels drop but I found I could remember more stuff!

    But Dr Hall, my spouse hates it when I say “I believe” something is true or when I honestly say “I’ll try and remember to do that”. Somehow asking someone to write down what they want is taken as an indication that you don’t want to waste time trying to remember it when you are really saying “That’s important so to make sure it gets done please write it down.” Sigh.

  21. Joe says:

    I agree with Harriet: one should choose the words that are effective. Compare the effect of “If the glove doesn’t fit, you must acquit” to the lame “That depends on what the meaning of ‘is’ is.”

  22. Harriet Hall says:

    Daedalus said, “Data is something that can be “known”, inferences are less certain than that”

    You can know what the data says, but you can’t know that the data is right. If you forgot to calibrate your apparatus, your data might be a bunch of hooey.

    Data becomes more trustworthy as more experiments confirm it and as it is confirmed by different methods, but you can never trust it 100%.

  23. Diane says:

    My understanding is that uncertainty is the biggest (if not only) feature a “moving” world has to offer, and that two main human ways evolved to deal with it:
    1. measuring effects of things upon each other in a given moment in time (science), and/or
    2. choosing a culturally-defined, higher-authority-(power)-based set of statements, declaring them to be universally true, then using them as an operating manual in the wider world (religion).

    By the way, my preferred way of dealing with the uncertainty of everything is to preface everything/anything I say or think with “my understanding is..”

  24. DLC says:

    Who was it who said: “you are entitled to your own opinion, but not to your own facts” ?

    As pointed out in the article, people don’t “forget” the Challenger disaster happening, but they confabulate what really happened with what they preferred to have experienced, or perhaps even what their mind told them should have happened.
    And when these confabulations become fixed in their minds, they seem to come with the certainty that the confabulation is right.

    I think this ties in somewhere with the plausibility/gullibility system, with people accepting in their own minds what their low gullibility threshold tells them seems plausible.

  25. spurge says:

    @ daedalus2u

    “Yes there are some myths that some scientists hold (such as homeostasis). ”

    Can you explain this a little more please? It has been a long time since I heard anyone use the term homeostasis but I am not sure what you mean when you call it a myth.

    Thanks

  26. daedalus2u says:

    Harriet, you are quite correct that there are many potential sources of error in data, but data is not as subjective as inferences or models are, and data is not subject to revision the way that interpretations of the data are. When I think of “data”, I am thinking about the raw measurements and model-free manipulations of those raw measurements. Anything else is a “model”, an interpretation of that data.

    To be sure, there are models inherent in the operation of virtually every piece of analytical equipment. A spectrophotometer produces numerical output according to some model of light production by the light source, absorption by the substance being measured, detection by the detector, and computation of the concentration. The purpose of calibration is to “map” the output of the detector onto the concentration of the thing being analyzed. That type of calibration is akin to using a ruler: comparing your light absorption at known concentrations and then interpolating between those known concentrations to estimate the unknown concentrations. In principle one could describe the measurement solely in terms of model-free actions (but it would be a very cumbersome description). The motivation for comparing the output of the detector to known concentrations and then interpolating is due to models of light absorption as functions of concentration, but in principle the “data” could be expressed in model (or hypothesis) independent form. There may be error in the data, or in how it was recorded, or transmitted, but it is not a conceptual error.
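
    A minimal Python sketch of that calibration-by-interpolation step (the standards and readings below are made-up numbers, and np.interp is just one possible interpolation routine):

```python
import numpy as np

# Hypothetical calibration standards: detector readings at known concentrations (made-up values).
known_concentration = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # e.g. mg/L
measured_absorbance = np.array([0.00, 0.11, 0.22, 0.45, 0.88])  # corresponding detector output

def absorbance_to_concentration(absorbance):
    """Map a detector reading onto concentration by interpolating between the known standards."""
    return np.interp(absorbance, measured_absorbance, known_concentration)

# An unknown sample reading of 0.30 falls between the 10 and 20 mg/L standards.
print(absorbance_to_concentration(0.30))  # ~13.5 mg/L
```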

    This reminds me of a quote from Einstein

    As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.

    I like to think of “data” as a pure description of reality. That is what it should be, but usually that description is too complicated to use efficiently. It is the model or hypothesis that maps the “data” onto the mathematics.

    I think the “data trustworthiness” you are talking about is not so much about the primitives of the “data”, but rather about the conclusions and inferences drawn from that data. But I fear I am delving too deeply into semantic minutiae.

    I make a large distinction between “data” and “inferences” drawn from that data. Usually I accept virtually everyone’s “data”. Very often I do not accept inferences from data, even inferences that authors draw from their own data.

    A recent example for me is the effects of antioxidants in diet. Large double-blind studies of antioxidant supplements show no positive and perhaps slight negative effects. Essentially all of the diet studies have shown that people who eat leafy green vegetables (rich in antioxidants) have better health than those who do not. The supplement studies show no effect of supplemental antioxidants; the diet studies show those who eat diets rich in antioxidants have better health. My interpretation is that the data in both sets of studies are correct: supplemental antioxidants have no positive effect on health, and the “protective” effect of a diet rich in green leafy vegetables is an artifact of how those studies were done. All the diet studies have been with people eating a self-selected diet. There have been no double-blind dietary intervention studies because the subjects won’t do it. My conclusion is that the people who chose the diet rich in green leafy vegetables were already in better health, and this is reflected in their decision to eat a diet with more antioxidants.

    I don’t think there is any ”error” in the data from either the supplement studies or the diet studies. The hypothesis that the state of health determines the antioxidant content of food chosen and that antioxidant content of diet doesn’t affect health is consistent with the data from both sets of studies. This is what I would call an “ordinary” hypothesis, a hypothesis that (while new) does explain the data in the literature better than the previous hypothesis.

    Why it is so difficult for people to discard the hypothesis that diet causes health (proven false by the supplement studies) and adopt the hypothesis that health causes diet (consistent with both the diet studies and the supplement studies) is not something I understand.
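
    A toy simulation makes the confounding point concrete (hypothetical parameters, not the actual study data): if underlying health drives both the diet choice and the outcome, the greens-eaters look healthier even though diet itself does nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Toy model of the "health causes diet" hypothesis (made-up parameters, not real study data).
health = rng.normal(0.0, 1.0, n)                 # latent health status
p_greens = 1.0 / (1.0 + np.exp(-health))         # healthier people are more likely to choose greens
eats_greens = rng.random(n) < p_greens
outcome = health + rng.normal(0.0, 1.0, n)       # outcome depends on health only, not on diet

# Observationally, greens-eaters have better outcomes despite zero causal effect of diet.
print("greens-eaters:", outcome[eats_greens].mean())
print("others:      ", outcome[~eats_greens].mean())
```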

  27. Harriet Hall says:

    daedalus2u said, “data is not as subjective as inferences or models are and data is not subject to revisions the way that interpretations of the data are.”

    While not “as” subjective, there are many ways for subjective influences to interfere with observation, measurement, recording, copying, selection, and other aspects of data-gathering and reporting.

    I’m thinking of N-rays, Benveniste’s homeopathy studies, Rhine’s ESP studies, Schwartz’s energy experiments, and many others where the data-gatherers’ preconceptions subjectively influenced the data they reported, either subtly or in obvious ways.

    You might say that data you gather yourself is more trustworthy, but we are all human and subject to similar foibles. I wouldn’t trust even my own data until it had been confirmed by others.

    And in case any grammar police are watching, yes, I know “data” are plural in Latin; but it’s now acceptable to use “data” as a singular mass noun in English.

  28. daedalus2u says:

    spurge, I have blogged about the myth of homeostasis but I will repeat the high points here.

    http://daedalus2u.blogspot.com/2008/01/myth-of-homeostasis-implications-for.html

    The idea of homeostasis is that for cells and organisms to survive they maintain certain physiological parameters in a static state. In other words, something such as glucose concentration is so important that organisms maintain the glucose level in the blood “static”. Organisms that do so are said to be in “glucose homeostasis”. Deviations from “glucose homeostasis” are thought to be only pathological, and external treatments to restore “glucose homeostasis” are thought to be only beneficial.

    There was a recent trial where people with elevated glucose levels were subjected to more rigorous interventions to maintain glucose levels in a narrow range. The presumption was that better control of glucose in the “normal” range would decrease the adverse effects of type 1 and type 2 diabetes. The trial was stopped early because the intervention group (the group which had better glucose control) exhibited a significantly higher death rate.

    The idea of homeostasis (I can’t bring myself to call it a hypothesis because it is wrong) isn’t “data”, it is an inference. It is a model suggesting that physiology keeps certain parameters constant and static. It is a widely believed model. A search of PubMed for “homeostasis” returns 186,066 hits.

    What is the basis for the idea of homeostasis? It originated ~80 years ago, before much was known about physiology and how cells actually function. When you can’t measure anything, the assumption of stasis is not a bad default. We now know that nothing in cells is static or constant. We know that nothing can be “static”. Many parameters are controlled exquisitely. But that control only occurs via active feedback mechanisms. A deviation occurs, the deviation is sensed, compensatory mechanisms are activated, the deviation is reduced. None of those steps can occur under conditions of stasis.
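
    A minimal sketch of that point (arbitrary numbers, nothing physiological about them): a simple proportional feedback loop keeps a variable near its set point only by sensing and correcting deviations, so the controlled variable is regulated but never truly static.

```python
# Toy proportional feedback loop: the level is controlled, not static.
set_point = 5.0
level = 5.0
gain = 0.3          # strength of the compensatory response
history = []

for step in range(50):
    disturbance = 1.0 if 10 <= step < 20 else 0.0   # a temporary load pushing the level up
    deviation = level - set_point                    # a deviation must exist before it can be sensed
    correction = -gain * deviation                   # compensatory mechanism activated by the deviation
    level += disturbance + correction
    history.append(round(level, 2))

print(history)  # rises while the disturbance lasts, then relaxes back toward the set point
```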

    If you look at how the term “homeostasis” is being used, some authors invoke it as a mechanism of physiology. This has real health implications as I mention above. Measuring elevated blood sugar is data. The idea that the elevated blood sugar is only pathological is an inference. The only place that blood sugar can be measured is in the bulk blood. Most cells in the body are not in contact with blood in the vasculature, they are in contact with the extravascular fluid. The glucose level that is important to a cell is the level in the fluid it is in contact with. In the extravascular space, that level is lower than in the bulk blood because intervening cells consume it.

    The problem with thinking of “homeostasis” as a mechanism of physiology is that dynamic changes in parameters considered to be under “homeostatic regulation” are then (by the homeostasis paradigm) only pathological. It rules out the possibility that letting physiological parameters move out of the “normal” range, even temporarily, might be beneficial. For example, in sepsis, blood sugar goes up. That is to be expected: most immune cells function by glycolysis, and iNOS expression has caused high levels of NO, which shut down most mitochondria. Maintaining ATP supplies by glycolysis takes a lot more glucose than does maintaining the same ATP production rate by mitochondria. During sepsis a state of cachexia is induced: muscle is turned into amino acids, amino acids are turned into glucose, glucose is turned into ATP and lactate, and lactate is turned into fat. During sepsis individuals lose muscle and gain fat.

    What is the “optimum” blood sugar to go for during sepsis? The only way to know the answer is by experiment or by measurement. When a certain blood sugar is measured, should there be intervention to increase it or decrease it (and by what method)? The danger in sepsis is not glycated proteins. The danger is multiple organ failure and death.

    There are similar effects with other parameters, for example body temperature. Essentially all animals (vertebrates and invertebrates) raise their body temperature during infection, either via metabolic heat generation or by movement to a hotter environment. What does the idea of homeostasis say about that? That homeostasis keeps it constant unless it doesn’t. I don’t see that homeostasis as an idea has any utility at all. Our default should not be that things are kept constant unless we measure otherwise; our default should be that we don’t know what a physiological parameter is doing unless we measure it.

  29. daedalus2u says:

    Harriet, I completely agree with you, and I sometimes do make a distinction between instrumental measures of something and human measures of something. Human measures of something are much more subject to error. For most things now there are instrumental measures. For some there are not. Some disorders such as the autism spectrum disorders are only diagnosed behaviorally. That necessarily requires a somewhat subjective human judgment.

    I don’t think that data I collect myself is that much more trustworthy than other people’s data. I know that my data only has honest error (and perhaps some sloppiness); I know it doesn’t have fraud. Most other data doesn’t have fraud in it either.

    A very satisfying experience for me is to have an idea of how physiology should behave, look up papers on the subject, and find data supporting my idea. It is especially satisfying when my idea explains the data better than the explanation the authors of the data provide. Since I spent a lot of time talking about sepsis, an example of that is this paper.

    http://www.thelancet.com/journals/lancet/article/PIIS014067360209459X/abstract

    They don’t mention it in the abstract, but when they did skeletal muscle biopsies of individuals in sepsis, the sepsis survivors had higher ATP levels (nM/mg dry weight) than sepsis non-survivors and higher than controls (patients in for elective surgery). From their table 2.

    15.8 (12.1–18.6), n=12 vs 7.6 (6.6–10.0), n=9 vs 12.5 (9.7–13.7), n=8 (p=0.001)

    (septic survivors) vs (septic non-survivors) vs (controls)

    My interpretation is that the high NO from iNOS expression has raised ATP levels via the interaction of ATP and NO on sGC. This ATP comes from glycolysis because cytochrome c oxidase is inhibited by the high NO levels. So long as glycolysis can maintain that high ATP level, everything is fine. If glycolysis can’t supply enough ATP and the ATP level falls, the mitochondria turn on. To turn on, the mitochondria must pull down the NO level to disinhibit cytochrome c oxidase. To do that they generate superoxide (which mitochondria have unlimited capacity to do). High superoxide generation by mitochondria under high NO conditions turns off mitochondria. If too many turn off, the cells die, the tissue compartment dies, the organ dies, multiple organs die, and the individual does not survive.

    I consider myself quite grammar-challenged. I have enough trouble keeping track of my own grammar (such as I do); I don’t have the capacity to keep track of other people’s too.

  30. spurge says:

    Thanks for the reply daedalus2u.

  31. In a recent debate in England about whether religion poses the greatest threat to rationality and science in the world, the “pro-religion” participant claimed that the greater danger is not religion but certainty itself.

    http://nonsicuro.blogspot.com/2008/04/debate-on-religion.html
