Category Archives: Fact-Based Thinking

Compartmentalizing Delusion

Religious people hold a lot of beliefs that nonbelievers conclude are delusional (see here). Many of these believers also hold important positions of responsibility in government, in the media, and in business. Some of them even sit on our Congressional Science Committee. Their decisions deeply impact public policy and the very existence of us and of our planet.

Those most fervent in their beliefs proudly tout the fact that their deeply held religious beliefs guide and influence all of their decisions as lawmakers. But when someone points out that those deeply held religious beliefs are in direct conflict with basic reason and accepted public policy, they typically claim compartmentalization.

Essentially, the contradictory claim they make is that while they affirm that they are deeply influenced by nonsensical ideas, those nonsensical ideas do not influence their thinking in rational matters. They insist that they can wail over the rapture on Sunday and make prudent, long-term budget decisions on Monday. They can enumerate why evolution is a hoax cooked up by scientists at Wednesday evening Bible study, then properly assess the advice of climate change scientists in their Thursday morning advisory board meeting. They can affirm that the Bible is the only source of truth on Saturday morning, then go home and work on educational textbook selections all afternoon. They assert that one is not affected by the other in the least – except when they want to tout the fact that it is.

Their amazingly selective isolation of thinking, they claim, is all thanks to the magic of compartmentalization. It lets them espouse crazy beliefs and claim to be perfectly sane and rational too. This claim is made so often and with such matter-of-fact certainty that most people just tend to accept it as true.

But let’s examine this claim of compartmentalization more closely.

All of us compartmentalize somewhat. In fact, such compartmentalization is critical to our functioning. We mentally separate work and home, parent and spouse, private and public. When we think of scientific models, we hold two seemingly different views at the same time (see here). Compartmentalization is an essential rational and emotional adaptation. Maybe that’s partly why we accept their claim of exceptional compartmentalization so easily.

But all normal and highly functional behaviors can become abnormal and dysfunctional at some point. At the extreme, we see people with multiple personalities split so completely that they are not even aware of each other. And although some extremely rare individuals can apparently isolate their thinking completely, most of us cannot. For most of us, any irrational, dysfunctional thinking does spill over and taint our rational thinking.

We humans can do handstands too. When I was in high school, there were a couple of guys on my gymnastics team who could literally walk up and down stairs between classes, in a crowd at full speed, on their hands with perfect form. But just because those rare individuals could do it doesn’t mean we can all claim it. Just because Jimmy Carter seemed to isolate his religious belief from his rational thinking in a healthy way doesn’t mean that many of us can do that. Jimmy Carter was more like the gymnast who could walk up and down stairs on his hands. Most others who believe they can isolate belief from rationality will invariably plummet down the stairs, taking innumerable others crashing down with them.

As I point out in my book, Belief in Science and the Science of Belief (see here), religious belief is the pot smoking of rational thought. Every pot smoker or alcoholic is convinced that they can handle it – that their rational thinking is not affected. They think what they are expounding while high is really profound, but it’s really just nonsensical gibberish. Religious people can’t see how ridiculous they sound while they’re high on the Bible and only listening to others who are just as stoned.

We don’t easily accept this same claim of compartmentalization in any area other than religion. We don’t fully accept that one’s stressful job as a homicide cop has no effect on their home life. We would not accept the assertion by a racist that while he may attend Klan meetings on Friday nights, this has no impact on his professional behavior as a hiring manager. Most of us would be at least skeptical in accepting any opinion expressed by a Wiccan who claimed to have supernatural powers, despite any claim of compartmentalization.

Even religious people don’t accept any compartmentalization except the one they claim. If I ran for public office as an atheist, I have no illusions that my claim that I can compartmentalize my atheism would be sufficient to convince any religious people to trust that my judgment has not been tainted by my atheism.

Religious thinkers claim compartmentalization to avoid legitimate skepticism regarding their compromised rationality. Sadly, we accept this claim for the most part. We should stop giving them this free pass. Not only can such fervent “deeply held” delusions not be sufficiently compartmentalized, but believers don’t really want or intend to compartmentalize away their beliefs in any case.

Religious people want and need to propagate their beliefs and weave them inextricably into public policy. Our polite acceptance of their dubious claim of compartmentalization only helps enable them to do that.


Scientific Models

I recently attended a book club discussion on The Meme Machine by Susan Blackmore (see here). In it, Blackmore puts forth a thesis of “memetic evolution” to describe how our minds work. In fact, her assertion is that our minds can only be understood in terms of memetic selection. Although that seems to be a wildly exaggerated claim, the scientific model she proposes is both stimulating and promising.

But memetic evolution is not the topic of this article. I only cite it as one example of the kind of topic that many non-scientists and even some scientists have great difficulty discussing fairly. Often in discussing such topics, a great many unfounded criticisms are lodged, and these quite often flow from an inadequate understanding and appreciation of scientific models.

This is understandable. Unless you are a trained, experienced, and particularly thoughtful scientist, you probably have had inadequate background to fully appreciate the concept of a scientific model. In fact, if you look up the word model in most dictionaries, the scientific usage of the term is typically not even mentioned. No wonder many people have a very limited if not completely mistaken appreciation of what a scientific model is. A scientific model is not analogous to a plastic model kit that is intended to look just like the real race car in every detail. It is not at all like a fashion model, intended to present something in an attractive manner. Nor is it like an aspirational model to be put forth as a goal to emulate and strive toward.

No, a scientific model is a working system that does not need to actually “look like” the real system it describes in any conventional way. The important characteristic of a scientific model is that it behave like the real system it describes. How accurately a scientific model reflects the real system it models is measured by how well it explains observed behaviors of the real system and is able to predict future behaviors of the real system.

For example, in 1913 Niels Bohr, building on Ernest Rutherford’s nuclear model, put forth the atomic model of matter that we are all familiar with – a small, dense nucleus orbited by electrons. This was a highly successful model because it described a huge number of observed characteristics and behaviors of matter, allowed us to gain great understanding of matter, and most importantly allowed us to predict as yet unobserved traits of matter.
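That predictive power is easy to demonstrate. As an illustrative sketch (the formula and constant are standard physics, not drawn from this article), here is how the Rydberg formula – which the Bohr model explains – predicts hydrogen’s visible emission lines:

```python
# The Bohr model accounts for hydrogen's emission spectrum via the
# Rydberg formula: 1/lambda = R * (1/n_lower^2 - 1/n_upper^2).
# Illustrative sketch; the constant is the standard approximate value.

RYDBERG = 1.0973731568e7  # Rydberg constant, per meter (approximate)

def wavelength_nm(n_lower: int, n_upper: int) -> float:
    """Wavelength in nanometers of the photon emitted when an
    electron drops from orbit n_upper down to orbit n_lower."""
    inv_wavelength = RYDBERG * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_wavelength

# The Balmer series (drops to n=2) lands in the visible range:
# n=3 gives roughly 656 nm (red) and n=4 roughly 486 nm (blue-green),
# matching the measured hydrogen lines to a fraction of a percent.
```

The point is not the arithmetic but the behavior: a cartoonishly simple model of orbits nonetheless tells you, in advance, exactly where the spectral lines will fall.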

But in truth the Bohr model is a laughably simplistic stick-figure representation of matter. It describes certain behaviors adequately but completely fails to describe others. It was quickly extended by de Broglie, Schrödinger, and innumerable others to incorporate wave and then quantum characteristics.

Despite its almost laughable simplicity and the innumerable refinements and extensions made over the last century, the Bohr model remains one of the most important and consequential scientific models of all time. Yet if the Bohr model were presented in many book discussion groups today, it would be criticized, dismissed, and even mocked as having no value.

Certainly we can and should recognize and discuss the limitations of models. But we must not dismiss them out of a mistaken understanding of what the limitations of scientific models imply. Often these misguided criticisms have the more widespread effect of unfairly discrediting all of science. Following are some examples of the kinds of criticisms that are valid and some that are invalid.

  1. We must first recognize that when we are talking about a new idea like memetic evolution, we are talking about a scientific model.
  2. A scientific model does not need to answer everything. We must recognize the limitations of every model, but the more important focus is on how useful it is within its applicable limits. Newton’s Laws do not describe relativistic motion, but in our everyday world Newtonian physics is still fantastically useful. Critics of science should not claim that a model – or science in general – is fundamentally flawed or unreliable because a particular model is not universal.
  3. Many critics of science think they have scored points by pointing out that “you can’t trust science because their models are always being replaced!” But models are hardly ever replaced; rather, they are extended. The Bohr model was greatly extended, but the basic model is still perfectly valid within its range of applicability.
  4. The fact that there are many different models of the same thing is not proof that “science contradicts itself and cannot make up its mind.” We famously have the two major models of light – the wave model and the particle model. The wave model correctly predicts some behaviors and the particle model correctly predicts others. Though they appear irreconcilably different, both are absolutely valid. Real light is not exactly like either model but is exactly like both models. Think of your mother. She has a mother-model that describes her behavior as a mother. But she also has a wife-model, a career-model, a daughter-model, a skeletal-model, and many others. None of these in themselves completely describes your mother, and many may seem irreconcilably different, but all of them correctly model a different set of behaviors in different situations, and only collectively do they communicate a more complete picture of your mother.
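The point about range of applicability (item 2 above) can be made concrete in a few lines. This is an illustrative sketch, not from the original text: it compares Newtonian and relativistic kinetic energy to show the sense in which Newton’s model was extended rather than discarded.

```python
import math

C = 2.99792458e8  # speed of light, m/s

def ke_newton(mass_kg: float, v: float) -> float:
    """Newtonian kinetic energy, (1/2) m v^2 -- the older model."""
    return 0.5 * mass_kg * v**2

def ke_relativistic(mass_kg: float, v: float) -> float:
    """Relativistic kinetic energy, (gamma - 1) m c^2 -- the extension."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * mass_kg * C**2

# Even at 10% of light speed the two models disagree by well under 1%;
# at everyday speeds the disagreement is utterly negligible. Newton's
# model wasn't "replaced" -- it remains valid within its limits.
```

Within its range of applicability the simpler model is still the one engineers actually use; the extension only matters when you leave that range.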

So, when discussing something like memetic evolution, it is proper and correct to ascertain its boundaries and to critique how well it describes and predicts observed behaviors within those boundaries. But it is wrong and counter-productive to dismiss it either because there exist other models or because it does not – yet – describe everything. And worst is to dismiss all of science as flawed because it puts forth multiple models of reality and extends them over time.

To describe and predict human thinking, Skinner put forth a stimulus-response model, Blackmore puts forth a meme-model, and I often focus on a pattern-recognition model. These are not in competition. One is not right and the others necessarily wrong. The fact that there are these three and many other models of human thinking does not reflect any fundamental weakness of science, but rather its strength.

It is unfortunate that far too few people have a sufficiently deep appreciation of and comfort with scientific models. We must do much better at understanding and communicating these subtleties that are so fundamental and critical to science.


Out of Context

In the Grey Matter section of the Sunday Review in the New York Times, Cornell professors Wendy M. Williams and Stephen J. Ceci published an article entitled “Charles Murray’s ‘Provocative’ Talk.” In it, they described a small ad hoc study they conducted to test whether the words of Charles Murray are objectively offensive and thus deserving of the resistance his lecture met at Middlebury College (see here).

In their study, the authors took a transcript of Murray’s actual talk and sent it without attribution to 70 college professors with a request to rate the words on a 9-point scale from very conservative to very liberal. They found that although “American college professors are overwhelmingly liberal,” those surveyed found Murray’s words to be “middle of the road,” with an average score of about 5. Williams and Ceci interpret this finding as indicating that the protest over Murray’s invitation to speak was objectively ill-informed and unjustified.

This argument is deeply and fundamentally flawed. We often see similar tricks played when someone reads an excerpt from the Constitution or Mein Kampf and asks for an opinion about it – before the gotcha reveal when they identify the authorship.

One major flaw in the study is its premise that words stand alone. Context matters, and the meaning and intent of words can only be fully assessed with due consideration of the person making the statement. Authorship is an essential part of that greater context. If P.T. Barnum had claimed he had a Yeti in his house, I would have received it with tremendous skepticism. If Carl Sagan had made the exact same claim, I would have been very excited about the potential of an important new anthropological discovery.

The reality is that Charles Murray has a long history of promoting what many consider to be highly destructive public policy research and analysis that has undermined valuable social programs and has attacked and divided us along gender and racial lines. For example, his statement that “We believe that human happiness requires freedom and that freedom requires limited government” may sound perfectly reasonable to the 70 professors surveyed if unattributed. Coming from a known liberal speaker, it could be meant to affirm that we should not be forced to live in an overly policed state. However, coming from Charles Murray, it is clear that his intent is to promote the dismantling of social assistance programs. The same statement might mean something even more extreme if David Duke had said it.

Based on the work of Williams and Ceci, one might argue that we should remove all bias in approving speakers by using a blinded, unidentified process in which presenters are approved or rejected based solely on the text of their planned presentation. That would be extremely foolish. The reality is that the larger views and history of any speaker play an essential role in how we should interpret their statements. Reasonable but isolated statements can conceal a larger and very different agenda that is only apparent if we know the source.

I have no doubt that the authors would respond by saying that intellectually unbiased people should be willing to hear any reasonable speaker and make this assessment for themselves, without forced censorship. However, surely they would also agree that there is some limit beyond which a speaker would not be acceptable even to them. But reasonable people can reasonably disagree about where this fuzzy boundary should lie – and that boundary must consider not only the message but the messenger as well.

Clearly a determinative number of alumni, faculty, and students at Middlebury judged that the lifetime body of work by Charles Murray, as well as his very clear lifelong mission, crossed that fuzzy line for them. Williams and Ceci may disagree on their placement of this line and that is legitimate and fair debate. But it is not legitimate and fair to conduct what amounts to a gotcha stunt under the guise of objective science to prove that these people’s determination in this instance is illegitimate and irrational.

All that Williams and Ceci may have actually shown is that, without attribution, college professors don’t assume the worst or the best. They may merely fill the void with their own middle-of-the-road interpretation of unattributed quotations.

Taking Stock-Well

Some of us are lucky enough, or unlucky enough, to stumble into a pivotal event in our lives that reshapes us, blows our minds, opens our eyes, and changes our perspective, forever and irrevocably. I stumbled into mine back in college in the 1980s when I blundered into a lecture by former CIA bureau chief Major John Stockwell (see here). I walked into the event as a relatively naïve and oblivious college kid, and walked out a stunned and shell-shocked cynic with regard to official motivations and storylines. Never again could I accept any official news story without some degree of skepticism and doubt, or, for that matter, dismiss any “conspiracy theory” out of hand simply because it questioned the official narrative.

Stockwell walked the audience through his recruitment as a young CIA officer in Vietnam and his rapid rise through the ranks, eventually attaining one of the highest positions in the bureau. He told how, during his career, he was repeatedly asked to perform actions that seemed not only immoral but counterproductive. Each time he asked for some rationale to justify the actions requested of him, his superiors would tell him “if you only knew what we know, you’d understand why this is necessary.” He believed that line, over and over, because he had to. Working under that assurance, he was personally aware of or responsible for operations to bomb infrastructure in other nations, disrupt business transactions to destabilize economies, plant rumors to spread discord in legitimate governments, assassinate key leaders, and foment war. He detailed one of his most shameful accomplishments: how he personally orchestrated a totally contrived build-up to the otherwise improbable war in Angola.

His own moment of realization finally came when he reached one of the highest levels in the bureau, the level of a world chief. When he got close to the pinnacle of his career ladder, it became obvious that there was no actual reason, no secret justification, for the terrible things he did. It was painful to watch him in the lecture, almost vomiting out his pained confession like an act of penance. In a period of despair, he met for drinks with the few other world chiefs at his peer level in the CIA. They asked each other for just one example of anything they had ever done that was good for the world. None of them could justify even one thing.

That was when he “came out” and wrote his exposé “In Search of Enemies,” which the CIA litigated and suppressed for many years. For most of my life it was essentially impossible to find, but I see that it is now finally available on Amazon (see here). In it, Stockwell answers the question “if the CIA accomplishes nothing, why do they do what they do?” His analysis is that the CIA is a bureaucracy that was formed to gather intelligence and take covert action during a time of war. Post-war, they have had to justify their continued existence and their obscene undisclosed budget. How do they prove their worth? They can only do this by finding enemies of the State. They are constantly “In Search of Enemies.” And since they cannot find enough enemies, they create them. They manufacture enemies so that they can then expand operations to combat them. In this way, their self-justification and self-preservation synergizes with a military-industrial complex in which the rich profit from every new or expanded conflict and war.

Stockwell spoke about the “tricks” the CIA uses to destabilize governments, ruin economies, and foment war. One of the most reliable was the old “Russian Arms!” ploy. They would plant and then brilliantly discover Russian arms in a country. They would report this to Kissinger, who would then order a modest increase in their activities in that nation to counter “Russian Aggression.” It was always an increase. The Russians would see these increased activities (the CIA in fact ensured that they would) and counter, which the CIA would then report back to Kissinger to obtain the go-ahead for even further escalation… And so it goes; the game is repeated over and over and replicated all across the globe.

Unsurprisingly, his obviously heartfelt and first-hand account was NOT well-received by that college audience. They asked very tough, skeptical, and even hostile questions. This is natural. No one wants to admit even to themselves that they live in a nation that does terrible things. No one wants to admit that they, by virtue of citizenship, are partially responsible and culpable for those terrible things. So we reject everything. To admit anything is to open the door on all of it. So we simply don’t want to hear it; we dismiss it all as conspiracy theory, we call it hating America and unpatriotic, we excuse it as unfortunate but necessary, we claim “they do it too.” Worst perhaps are those who tell themselves that, as avid readers of the New York Times, they would have been informed if there were anything to this stuff.

But for my part, after Stockwell’s lecture I never again accepted news reports of government accounts with the same level of trust I had earlier. When Ronald Reagan inexplicably invaded Grenada, he got on television and fended off questions from the press by assuring them “If only you knew what I know.” That didn’t quite satisfy the press, and they continued to ask tough questions. The next night he came out and announced that “Russian arms have been found in Grenada,” and suddenly most of the press corps said, oh, OK then.

When the first Iraq war came along I was similarly skeptical, but had no alternate theory of the action. I had maintained some personal contact with John Stockwell since that lecture and spoke to him occasionally. So I gave him a phone call and asked for his take on the war. He shared that Bush Senior had used back channels to assure Saddam that the US would not interfere if Iraq took action against Kuwait for its slant drilling into Iraqi oil fields. This was just a set-up by Bush, who needed a war partly to boost his historically low ratings. This was later confirmed to be largely if not completely true by many corroborating reports.

When Bush Junior initiated the second Iraq war, my Stockwell-esque skepticism resurged. Bush put forth – by one accounting – over 40 discrete falsehoods to lie us into that war (see here). When Bush first announced that Iraq was seeking “aluminum tubes” to refine uranium for a nuclear bomb, I did an immediate Internet search and found a large number of credible experts already shouting that these tubes were not the type that would be needed for that purpose. Yet the Bush Administration kept citing this false “evidence” and the media kept reporting it, the whole while scoffing at “conspiracy theories” that called this evidence into question. It was almost a year later, after the war was inextricably committed, and after the truth about these tubes was everywhere to be seen except in the mainstream press, that they finally “broke” this revelation with their crack and bold investigative reporting.

And now today we are still hearing stories about why we must – regrettably – launch attacks against a large number of countries. We just launched missiles into Syria. One has to at least wonder if “Chemical Attack!” is the new “Russian Arms!” ploy. It works every time. And overt attacks such as this are only a very small part of our effort to ensure that there are plenty of permanent wars to feed the insatiable machine.

Look, I’m not asking you to believe every seemingly crazy story out there – you shouldn’t. But a healthy skeptic questions both sides – including what their government tells them. If you are only skeptical of the alternative view, then you are NOT a healthy skeptic, you are a Kool-Aid drinker. In fact, I argue that it is better to err on the side of skepticism of our self-perpetuating war-making machine, and force them to provide extreme evidence for their operations, rather than continuing to drink the official Kool-Aid and placing rigorous burdens of proof only on the whistle-blowers while the government merely has to appeal to their own authority as proof of their claims.

This alternate perspective used to be terribly hard to research, but today it is easy. Stockwell was hardly a lone voice, but he was one of the bravest and most credentialed voices. Heck, in his 1989 lecture, Stockwell referenced over 120 books out of the thousands available at that time. Today there are innumerably more. So there is no longer any excuse for ignorance, and the only ignorance possible is willful. You can start with this YouTube video of John Stockwell speaking at American University, broadcast on C-SPAN in 1989 (see here). It is still relevant today. The lecture takes up the first hour and the remainder is questions. That hour only scratches the surface in exposing the filthy and disgusting rat’s nest that is American Intelligence.

I urge you to give this video a fair look and consider it in the light of today’s current events. Hey, it’s only an hour and I know you find way more time than that to browse adorable cat videos. Be brave and crack the door open and peek inside. The truth will not destroy you, it will set you free. Becoming aware of and acknowledging the extent of our intelligence operations will not fix anything in and of itself, but we certainly can’t begin to fix anything until we are all willing to take that first crucial step.




The Traits that Spawn Conservatism

There are a large number of important personal and social policy issues upon which liberals and conservatives completely disagree. I have to consider whether all of these seemingly unrelated positions are merely symptomatic of more fundamental underlying personality differences.

I submit that conservative worldviews arise from three primary character traits: dogmatism, selfishness, and fearfulness.

The first basic personality trait is the degree to which you are a situational or a dogmatic thinker. Liberals tend to be situational, weighing and balancing the nuanced competing ethics of a given situation. Conservatives tend to be dogmatic, enforcing strict, simplistic rules in accordance with their moral beliefs. Liberals are frightened by what they regard as mindless dogmatism, while conservatives view situational ethics as a dangerous lack of moral principles.

The second fundamental trait that influences our worldview is selfishness. Conservatives are essentially selfish in putting their self-interest and their beliefs first, whereas liberals tend to more strongly respect differences and emphasize the public good with the view that “it takes a village.”

The third important trait is fearfulness. It is fearfulness that drives the conservative need for guns and an insanely large military, and their fear of immigrants and those unlike them.

Since the real motivations for conservative positions (dogmatism, selfishness, and fear) are not things that conservatives can acknowledge in themselves, they must come up with other rationales for their positions. This causes conservatives to vilify intellectualism and ridicule facts. It forces smart conservatives to defend their dogmatic, selfish, and fearful positions with stupid arguments. Smart people put forth stupid arguments to defend a selfish, anti-social culture of guns. Smart people put forth stupid arguments to defend a belief in god, to defend pro-life legislation, rampant militarism, economic Darwinism, and trickle-down economics.

Smart Christians like Ken Ham make stupid arguments to support their creationist beliefs. Ham insists that everything in the Bible he agrees with is literal, while everything he disagrees with is merely figurative (see here). Similarly, smart conservative Supreme Court justices claim that the Constitution must be interpreted literally when it supports them, but when it doesn’t, they insist on an “original intent” interpretation that always happens to support their conservative views (see here).

The result is that we hear a lot of falsehoods and specious arguments in defense of a wide range of conservative positions that are all really rationalizations of dogmatism, selfishness, and fear.

Now wait a second, you may say. While conservatives may disagree with us liberals, they are simply good, well-intentioned people with sincere differences of opinion as to what is best for everyone. They sincerely believe their pro-life activism saves lives, that more guns are the solution to gun violence, and that a strong military prevents wars. You shouldn’t disparage them with negative characterizations of dogmatism, selfishness, and fear.

I would be inclined to believe that as well. However, we have a disturbing “tell” that suggests otherwise. The fact that conservatives deny global climate change signals that they have not simply reached a differing conclusion on this issue. The facts are so overwhelming that their denial can only be driven by strong underlying traits, particularly selfishness. They simply care more about being able to burn all the fossil fuels they want and make all the money they want today, and to heck with tomorrow for the entire world. Since few are willing to claim that CO2 is actually good for the planet, the others simply deny, deny, deny.

The fact that conservatives can deny facts and rationalize their denial of climate change makes it likely that all their other arguments are similarly driven by underlying traits including dogmatism, selfishness, and fear. Their denial of climate change suggests that conservatives do not merely reach different conclusions given the information they are exposed to, rather they limit their information and formulate rationalizations to defend their dogmatism, selfishness, and fear. Climate change tells us that these traits are strong in conservatives, and those traits cannot help but drive their positions on other important issues as well.

If we liberals wish to push back on these critical issues, we need to stop debating specious and shifting secondary arguments and start to deal more directly with these fundamental character drivers.


Anecdotal Evidence Shows

The titular phrase “anecdotal evidence shows that…” is very familiar to us – with good reason. Not only is it very commonly used, but it is subject to a great deal of misuse. It generally makes an assertion that something is probably true because there is some observed evidence to support it. While that evidence does not rise to the level of proof, it does at least create some factual basis for wishful thinking.

Anecdotal evidence is important. It is often the only evidence we can obtain. In many areas, scientists cannot practically conduct a formal study, or it would be ethically wrong to do so. It may simply be an area of study that no one is willing to fund. Therefore, even scientists often have no alternative but to base conclusions upon the best anecdotal data they have.

Anecdotal evidence is essential to making everyday decisions as well. We don’t normally conduct formal studies to see if our friend Julie is a thief. But if earrings disappear each time she visits, we have enough anecdotal evidence to at least watch her closely. Likewise, even court proceedings must often rely upon anecdotal evidence, which is slightly different from circumstantial evidence.

Knowing when anecdotal evidence is telling, when it is simply a rationalization for wishful thinking, and when it is the basis for an outright con job is not always easy. The fact that sometimes all we have to work with is anecdotal evidence makes it all that much more dangerous and subject to misuse and abuse.

All too often, anecdotal evidence is simply poor evidence. I once presented anecdotal evidence of ghosts by relating a harrowing close encounter that I had. The thing was, I totally made it up (see here). People don’t always intentionally lie when they share an anecdote, but those people who in good faith repeated my story to others were nevertheless sharing bad anecdotal information.

Testimonials are a form of anecdotal claim. Back in the 1800s, a snake oil salesman would trot out an accomplice to support his claims of a miracle cure. Today we see everyone from television preachers to herbal medicine companies use the same technique of providing anecdotal evidence through testimonials. Most of these claims are no more legitimate than my ghost story.

We also see anecdote by testimony performed almost daily in political theatre. The President points to the crowd to identify a person who has benefitted greatly from his policies. In Congressional hearings, supposedly wronged parties are trotted out to give testimony about how badly they were harmed by the actions of the targeted party. Both of these individuals are put forth as typical examples, yet they may be exceedingly unusual.

So here’s the situation. We need anecdotal evidence as it is often all we have to work with to make important decisions that must be made. However, basing decisions on anecdotal information is also fraught with risk and uncertainty. How do we make the wisest use of the anecdotal information that we must rely upon?

First, consider the source and the motive of the anecdote. If the motive is to try to persuade you to do something, to support something, to accept something, or to part with your cash, be particularly suspicious of anecdotal claims or testimonials. One great example is the Deal Dash commercials. You hear a woman claim that she “won” a large screen television for only $49. Sounds great, until you realize that the anecdote doesn’t tell how many bids she purchased to get it for $49, how much she wasted on other failed auctions, and how much was spent in total by the hundreds of people bidding on that item. Anecdotal evidence is not always an outright lie, but it can still tell huge lies by omission and by cherry-picking.
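The hidden arithmetic behind that “$49 win” is easy to sketch. In a typical penny auction, every bid costs money to buy and raises the sale price by one cent, so the advertised price conceals the bid revenue collected from everyone. All the numbers below are illustrative assumptions, not actual Deal Dash figures:

```python
# Hypothetical penny-auction arithmetic: the "$49 TV" anecdote omits
# the cost of the bids themselves. BID_PRICE and PRICE_STEP are
# assumed values for illustration only.

BID_PRICE = 0.60        # assumed cost to purchase one bid
PRICE_STEP = 0.01       # each bid raises the sale price by one cent

final_price = 49.00                      # the advertised "win"
total_bids = final_price / PRICE_STEP    # bids placed by all users combined
revenue_from_bids = total_bids * BID_PRICE

total_revenue = final_price + revenue_from_bids
print(f"Bids placed by all users: {total_bids:.0f}")
print(f"Site revenue on this one item: ${total_revenue:,.2f}")
```

Under these assumed numbers, the $49 television generated 4,900 bids and nearly $3,000 in total revenue, which is the part of the story the testimonial omits.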

Second, consider the plausibility of the anecdote. If the anecdote claims to prove that ghosts exist, someone made it up. Likewise with god or miracles or angels or Big Foot. No matter how credible the person reporting something incredible may be, demand credible evidence. As Carl Sagan pointed out, “extraordinary claims require extraordinary evidence.”

Third, consider the scope of the anecdotal claim. Does it make sweeping generalizations or is it very limited in scope? If the claim is that all Mexicans are rapists because one Mexican was arrested for rape, we end up with a Fallacy of Extrapolation, which is often the result of the misuse of anecdotal information.

Finally, consider the cost/benefit of the response to the anecdotal claim. If the anecdote is that eating yoghurt cured Sam’s cancer, then maybe it’s reasonable to eat more yoghurt. But if the anecdote is that Ed cured his cancer by ceasing all treatments, then perhaps that should be considered a far more risky anecdote to act upon.

Anecdotal information is essential. Many diseases such as AIDS have been uncovered by paying attention to one “anecdotal” case report. In fact, many of the important breakthroughs in science have only been possible because a keen-eyed scientist followed up on what everyone else dismissed as merely anecdotal or anomalous data.

Anecdotes are best used to simply make the claim that something may be possible, but without any claims as to how likely it is. For example, it may be that a second blow to the head has seemed to cure amnesia. However, this cannot be studied clinically and it is not likely to occur often enough to recommend it as a treatment. Still, sometimes it is extremely important to know that something has been thought to happen, no matter how uncertain and infrequent. If a severe blow to the head MAY have cured amnesia at least once, this can help to inform further research into it.

Don’t start feeling overwhelmed. We don’t actually need to stop and consciously analyze every anecdote in detail. Our subconscious pattern-recognition machines are quite capable of performing these fuzzy assessments for us. We only need to be sure to consciously internalize these general program parameters into our pattern recognition machines so that they produce sound conclusions when presented with claims that “anecdotal evidence shows.”


Time To Dump Linda

You have probably read articles that reference the famous Linda Study conducted by researchers Daniel Kahneman and Amos Tversky back in the early 1980s. In it, the researchers describe an outspoken person named Linda who is smart and politically active and who has participated in anti-nuclear demonstrations. They then ask the subject to indicate whether Linda is more likely to be a) a bank teller or b) a bank teller who is also an active feminist.

No direct evidence is given to indicate that Linda is either a bank teller or a feminist. She is smart so she might be a bank teller, and since she has been socially active she might be a feminist. But logically it is far more likely that Linda is only one of these things than that she is both. Yet most people, given the choices presented and regardless of education, answer that Linda is probably both a bank teller and a feminist. This is an example of the Conjunction Fallacy (see here), in which a person mistakenly judges a specific combination of conditions to be more probable than one of those conditions alone.
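The logic behind the fallacy can be shown concretely: in any population, the people who satisfy two conditions are always a subset of those who satisfy either condition alone, so the conjunction can never be the more probable option. The tiny population below is a made-up illustration, not data from the study:

```python
# A minimal illustration of the conjunction rule behind the Linda problem.
# The population is invented purely for demonstration.

people = [
    {"bank_teller": True,  "feminist": True},
    {"bank_teller": True,  "feminist": False},
    {"bank_teller": False, "feminist": True},
    {"bank_teller": False, "feminist": False},
    {"bank_teller": True,  "feminist": True},
]

# Count those satisfying one condition, and those satisfying both.
tellers = sum(p["bank_teller"] for p in people)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in people)

# The conjunction can never be more common than either conjunct alone.
assert feminist_tellers <= tellers
print(f"P(teller) = {tellers}/{len(people)}, "
      f"P(teller and feminist) = {feminist_tellers}/{len(people)}")
```

No matter how the population is rearranged, the assertion holds, which is exactly why answer b) can never be more likely than answer a).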

Although this study is frequently cited in popular science articles, the conclusions drawn from it have been strongly criticized or at least given more nuanced analysis (see here). Few popular ideas from science since the Heisenberg Uncertainty Principle have been so misused and overextended as the Linda Study. We really should stop reading so much into this study and cease abusing it so badly.

An example of one such popular science article describes research by Professor Keith Stanovich (see here). In his work he used the Linda Study methodology along with other tests to measure rationality. Although I do not know how well this popular science article represents the actual research by Stanovich, it suggests that the Linda Test is a strong indicator of rationality. I find that assertion very troubling.

First off, while the Linda Test does expose the Conjunction Fallacy, we are all susceptible to a huge number of logical fallacies. I document dozens of these in my book, “Belief in Science and the Science of Belief” (see here). While everyone should be taught to do better at recognizing and avoiding logical fallacies, failing to do so probably does not adequately correlate to irrational thinking.

If subjects were made aware that this was intended as an SAT-style logic gotcha, many would answer it in a more literal context. But we normally assume a broader scope of inference when answering this sort of question, and the pattern-recognition machines we call our brains are capable of all sorts of fuzzy logic that is completely independent of, and much broader than, strict mathematical logic. In the real world, it might well turn out that women like Linda are in fact more likely to be both bank tellers and feminists. Moreover, “both” is a far richer answer in the context of most real-world interactions. The more logically correct answer is less insightful and interesting.

This is not to suggest that we should become lax about adhering to principles of logic, but only to suggest that a simple “brain teaser” logic question is not a very powerful indicator of overall rationality. Furthermore, equating rationality to a fallacy recognition test diminishes the profound complexity and importance of rationality.

I suggest that there are far stronger indicators of rationality. Does the subject believe in God? Do they deny climate change? Do they subscribe to pseudoscientific nonsense? Is their thinking muddled by irrational New Age rationalizations? Do they insist the world is only 6,000 years old and that humans coexisted with dinosaurs (cough, Ken Ham, see here)?

Here’s the problem. All of these direct indicators are too entrenched and widespread to be overtly linked to irrationality. So instead we use safe, bland, non-confrontational indicators like the Linda Test that are at best weak and at worst undermine important and frank questions about rationality.

So dump Linda already in favor of far more meaningful measures of rationality!