Sunday 23 September 2012

A science of morality? My thoughts on Sam Harris

(Originally published 26 July, 2011, as a Note on my Facebook.) 
After a frustrating delay, the University of Otago's library has finally got around to acquiring a copy of Sam Harris's The Moral Landscape.  Having been champing at the bit for months, reading only such excerpts as Harris has chosen to present on the internet, I can at last comment on it from a position of knowledge. 
Harris's basic thesis can be summed up as follows:
  • Moral values are synonymous with the well-being of conscious creatures.
  • Science can, in principle, determine what actions will enhance or diminish the well-being of conscious creatures. 
  • Therefore, science can, in principle, determine moral values. 
Harris's title is carefully chosen: there may, he says, be many peaks on the moral landscape, and he is not claiming to prescribe a simple formula for right living.  Most of the other criticisms of the thesis that will no doubt have occurred to you from my brief summary are covered in the book, which (I can assure you) is well worth reading. 
Conscious experience, Harris argues, is the only thing worth caring about.  If anyone claims to discover something that cannot in principle affect any being's consciousness in any way, what they have found is by definition the least interesting thing in the universe.  Likewise, the only meanings available for concepts like good and bad derive from conscious experience -- from, respectively, well-being and misery. 
Harris proceeds to demonstrate that any concept one could reasonably equate with "morality" will, fundamentally, have to do with well-being.  But then he fumbles.  How does he solve David Hume's problem of deriving an ought from an is?  By dodging it:
But this notion of "ought" is an artificial and needlessly confusing way to think about moral choice.  In fact, it seems to be another dismal product of Abrahamic religion -- which, strangely enough, now constrains the thinking of even atheists.  (p38)
Which sits less than comfortably with a previous assertion:
Rather I am arguing that science can, in principle, help us understand what we should do and should want -- and, therefore, what other people should do and should want in order to live the best lives possible.  (p28, emphasis original)
If Harris is using the words should and ought as they are used in all the dialects of English I am familiar with -- as synonyms -- then this is a direct contradiction.  If not, he really owes the reader a clarification. 
I agree with the implicit conflation of "moral values" with the word should in the latter statement: by definition, that which is moral is that which we should (ought to) do.  Consequently, I must needs object to the dismissal of ought in the former statement: ought (should) is a natural and not needlessly confusing way to think about moral choice. 
Why can't we get an ought (or a should) from an is?  Because oughts and shoulds are not propositions but imperatives, and an imperative has no truth-value.  ("Shut the door, please."  "That's not true!")  But in fact an imperative can have a truth-value, corresponding to its utility in achieving some goal which the speaker and hearer agree upon, explicitly or implicitly.  ("Shut the door, please."  "You're right, it's getting cold.") 
One would have thought Harris would proceed to establish global well-being as the goal to be agreed upon.  Instead, he queries the need to do so, arguing that anyone who disagrees has no more right to a say in the matter than a creationist has in the field of biology: 
Scientific "is" statements rest on implicit "oughts" all the way down.  When I say, "Water is two parts hydrogen and one part oxygen," I have uttered a quintessential statement of scientific fact.  But what if someone doubts this statement?  I can appeal to data from chemistry, describing the outcome of simple experiments.  But in so doing, I implicitly appeal to the values of empiricism and logic.  What if my interlocutor doesn't share these values?  What can I say then?  As it turns out, this is the wrong question.  The right question is, why should we care what such a person thinks about chemistry?  (endnote, p203)
Harris here comes perilously close to "My argument is so powerful it's not necessary to talk about it."  Reviewing The Moral Landscape, philosopher Russell Blackford highlighted the as-yet-unbridged gap in Harris's reasoning: we can agree that values are synonymous with well-being -- but on what grounds should we privilege (or refuse to privilege) one individual's well-being over another's?
Why, for example, should I not prefer my own well-being, or the well-being of the people I love, to overall, or global, well-being?... Harris never provides a satisfactory response to this line of thought, and I doubt that one is possible.
Harris answered:
I believe that Blackford (and most everyone else) has confused two separate questions: 
(A) What is the status of moral truth? -- that is, what does it mean to say that one state of the world is "better" than another? 
(B) What is rational for a person to do, given what he or she wants out of life? ...Consider the following example: 
There is one slice of pie left, and you and I both want it. There are at least 4 possible states of the world that might follow from this: (1) you get it; (2) I get it; (3) we split it; or (4) we give it to some needy person. 
Let’s say that a God’s-eye-view of the situation reveals the following: (1) and (2) are morally indistinguishable... that if we were the sort of people who could do (3), we would both be better off... that, in this case, (4) would be better still... 
But, as it turns out, you and I are not the sort of people who can contemplate doing (3) or (4). We are too selfish, and we crave pie too much. And, worse still, we each take pleasure in denying others what they want. (In other words, we both have brain damage.)
Brain damage?  How brain-damaged need one be to buy cage eggs because they are cheaper than free-range, or chocolate from a company known to use child slave labour because nothing else in the particular flavour one's own children have requested (for a promised treat) is available?  If this is brain damage, all First World consumers are brain-damaged, and every large corporation owes its profits to their condition. 
It is telling that a "God's-eye-view" is needed to reveal the moral truth of the situation.  The world lacks a consensus on morality precisely because such impartial omniscience is not practicable.  As Harris himself points out, every moral philosophy every real society has ever embraced is anchored in a concept of well-being -- though some defer it to an afterlife, or privilege God's well-being (one must not make the Baby Jesus cry) over others'.  That's the easy part.  The hard part is to discover an objective principle for settling conflicts of well-being when such arise. 
One could indeed argue that it is the unattainability of a "God's-eye-view" that has kept the two main academic fields studying human behaviour -- economics and cultural anthropology -- from achieving the status of sciences.  Both falter, in their different ways, at the challenge of establishing an objective viewpoint from which to observe without contaminating the system observed. 
Harris's hypothetical chemistry-sceptic is unconvincing.  If anyone really did refuse to admit the force of reason, there would indeed be no arguing with them.  But I don't think anyone seriously does reject reason as such.  They may define what they call "science" to exclude some particular field of inquiry, such as the existence of the supernatural; but they will still be found drawing logical inferences from real-world phenomena to argue their claims about it. 
Harris does come tantalizingly close, in a later chapter, to connecting morality {global-well-being} to morality {what-one-ought-to-do}:
When I take my daughter to the hospital, I am naturally more concerned about her than I am about the other children in the lobby.  I do not, however, expect the hospital staff to share my bias.  In fact, given time to reflect about it, I realize that I would not want them to.  How could such a denial of my self-interest actually be in the service of my self-interest?  Well, first, there are many more ways for a system to be biased against me than in my favour, and I know that I will benefit from a fair system far more than I will from one that can be easily corrupted.  I also happen to care about other people, and this experience of empathy deeply matters to me.  I feel better as a person valuing fairness, and I want my daughter to become a person who shares this value.  And how would I feel if the physician attending my daughter actually shared my bias for her and viewed her as far more important than the other patients under his care?  Frankly, it would give me the creeps.  (p73)
But Harris gives this merely as an example of a possible solution to one moral problem.  The last few dots remain unjoined. 

The other thing that pushed my buttons was Harris's treatment of anthropologist Scott Atran's work on Islamic jihadists.  To Harris, it is simply obvious that the most important thing driving people to become suicide bombers is their belief in the righteousness of jihad; Atran's "bizarre" denial of this just goes to show how much undeserved respect religion gets among liberals these days.  However, following the URLs in Harris's bibliography, I find that he has misrepresented Atran, chiefly by omission.  (To be fair, Atran likewise misrepresents Harris and others to approximately the same degree.) 
Atran has publicly stated that the greatest predictor of whether a Muslim will move from merely supporting jihad to actually perpetrating an act of suicidal violence "has nothing to do with religion, it has to do with whether you belong to a soccer club."  Atran's analysis of the causes of Muslim violence is relentlessly oblivious to what jihadists themselves say about their own motives.  (p155)
Harris does, at least, report Atran's research question accurately -- but without noticing how it undercuts his own argument.  Atran is researching what drives some members of the subset of Muslims who support jihad to join the much smaller subset who commit jihadist attacks.  Atran's research population is defined by the belief in jihad, which therefore cannot possibly be the determining factor he is looking for. 
According to Harris, Atran
alleges that "core religious beliefs are literally senseless and lacking in truth conditions" and, therefore, cannot actually influence a person's behaviour.  (p155)
But Atran's point actually seems to be that central religious claims are typically logically incoherent -- they can barely be stated, let alone defended, in terms allowing of verification or falsification.  (Hardly a position of excessive respect!)  The consequence Atran draws is not that they cannot influence a person's behaviour, but that they can influence it in any direction; and that, therefore, we need more than religion per se to explain patterns of violence and hatred in certain religious groups. 
Harris draws a conclusion:
Many social scientists have a perverse inability to accept that people often believe exactly what they say they believe. (p157)
which dovetails neatly with his critique of cultural anthropology as a discipline from the introductory chapter:
Many social scientists incorrectly believe that all long-standing human practices must be evolutionarily adaptive: for how else could they persist?  Thus, even the most bizarre and unproductive behaviours -- female genital excision, blood feuds, infanticide, the torture of animals, scarification, foot binding, cannibalism, ceremonial rape, human sacrifice, dangerous male initiations, restricting the diet of pregnant and lactating mothers, slavery, potlatch, the killing of the elderly, sati, irrational dietary and agricultural taboos attended by chronic hunger and malnourishment, the use of heavy metals to treat illness, etc. -- have been rationalized, or even idealized, in the fire-lit scribblings of one or another dazzled ethnographer.  But the mere endurance of a belief system or custom does not suggest that it is adaptive, much less wise.  It merely suggests that it hasn't led directly to a society's collapse or killed its practitioners outright.  (p20)
I should come clean here: I hold a bachelor's degree in cultural anthropology.  By the standards of anthropology graduates, I am a hard-line science enthusiast; I accept that our discipline is properly viewed as the sociobiology of Homo sapiens, I understand the logic of natural selection and the role of genes in human development, I perceive that many culturally transmitted beliefs are indeed simply wrong, I find postmodernism largely a waste of time, and I even think memes are a useful concept. Unusual though I may be in this regard, I don't recognise my teachers in Harris's image of dazzled ethnographers scribbling by firelight -- well, maybe one of them, but I argued with him a lot.  And I think Harris has missed something he could have learned from anthropology.  Here's a conversation he reports with an unnamed government ethics consultant he met at a conference -- "Me" in this dialogue is Harris:
She: What makes you think that science will ever be able to say that forcing women to wear burqas is wrong? 
Me: Because I think that right and wrong are a matter of increasing or decreasing well-being -- and it is obvious that forcing half the population to live in cloth bags, and beating or killing them if they refuse, is not a good strategy for maximizing human well-being. 
She: But that's only your opinion. 
Me: Okay... Let's make it even simpler.  What if we found a culture that ritually blinded every third child by literally plucking out his or her eyes at birth, would you then agree that we had found a culture that was needlessly diminishing human well-being? 
She: It would depend on why they were doing it. 
Me [slowly returning my eyebrows from the back of my head]: Let's say they were doing it on the basis of religious superstition.  In their scripture, God says, "Every third must walk in darkness." 
She: Then you could never say that they were wrong. 
...I found that I could not utter another word to her.  In fact, our conversation ended with my blindly enacting two neurological clichés: my jaw quite literally dropped open, and I spun on my heels before walking away.  (pp43--44, bracketed text original)
Like the consultant, I think I would have had to inquire why -- why the bloody hell -- any society would pluck out every third child's eyes at birth; why anybody would find that a plausible hypothetical example; and how, since clearly none of what we do know about human social interaction (yes, even among Muslims) applies to these people, I should be expected to form any sensible, let alone generally applicable, judgement about them. 
Lest I be misunderstood: the implausibility of Harris's eye-plucking society consists not in the fact that their belief is false, nor that their practices cause needless misery.  Both conditions are sadly widespread across the world.  Let me contrast them with a couple of real-world cases.  First, Harris's account (citing anthropologist Ruth Benedict) of the Dobu islanders:
Every Dobuan's primary interest was to cast spells on other members of the tribe in an effort to sicken or kill them and in the hopes of magically appropriating their crops.  The relevant spells were generally passed down from a maternal uncle...  Spells could be purchased, however, and the economic life of the Dobu was almost entirely devoted to trade in these fantastical commodities. 
...the conscious application of magic was believed necessary for the most mundane tasks.  Even the work of gravity had to be supplemented by relentless wizardry: absent the right spell, a man's vegetables were expected to rise out of the soil and vanish under their own power. 
To make matters worse, the Dobu imagined that good fortune conformed to a rigid law of thermodynamics: if one man succeeded in growing more yams than his neighbour, his surplus crop must have been pilfered through sorcery.  As all Dobu continuously endeavoured to steal one another's crops by such methods, the lucky gardener is likely to have viewed his surplus in precisely these terms... 
...the power of sorcery was believed to grow in proportion to one's intimacy with the intended victim...  Therefore, if a man fell seriously ill or died, his misfortune was immediately blamed on his wife, and vice versa.  The picture is of a society completely in thrall to antisocial delusions.  (pp60--61)
The Dobu, like Harris's eye-pluckers, lived a lifestyle that created real misery for no good reason.  So what's the difference?  That the Dobu belief does, and the Eye-Plucker belief does not, make (apparent) sense of its practitioners' day-to-day lived experience.  Obviously the illusory experience of constant sorcerous attack, upon which the Dobu belief fed, was created by the belief itself.  But in bypassing experience altogether, the Eye-Plucker religion departs from the known range of human variation. 
My second real-world case is this.  A certain religious group believes that every woman, man, and child on the planet is so vile that consigning us to unbearable torture until the end of time is a fitting response to our evil ways.  Fortunately, the Creator of the Universe has performed an unwarranted act of mercy which will allow a chosen few -- all members of the group, of course -- to avoid this fate and start afresh.  How would you expect these people to behave towards their fellow human beings? 
I grew up in this group.  People whom my parents respected and counted as friends taught me the doctrine that I have outlined above; as children will, I accepted it.  I felt the appropriate guilt for my existential filthiness, and was filled with reverent gratitude for the Creator's mercy.  Had you asked me at the time, I would have recited the above belief with complete sincerity. 
In case some of my readers don't recognise it, the religion in question is Christianity.  Protestants and Catholics differ on whether one accepts God's mercy through (respectively) a private prayer of pure faith or pious acts prescribed by a confessor; but both agree that it is mercy, without a crumb of desert.  To Shakespeare the doctrine was unexceptionable, though he felt that a Jew might need it pointed out:
Though justice be thy plea, consider this, 
That in the course of justice, none of us 
Should see salvation; we do pray for mercy. 
---The Merchant of Venice, Act IV, Scene 1

Yet, curiously, it is extremely rare nowadays to find Christians acting as if they truly believed other people deserved to go to Hell, and doing so on those explicit grounds.  Fred Phelps and his Westboro Baptist Church are one such rarity.  Most Christian proselytizers behave as if they believe people deserve to be rescued from going to Hell.  For that matter, most Christians dish out rewards and punishments to their children according to what they "deserve", believe that some crimes "deserve" harsher sentences than others, and agree that Albert Einstein "deserved" a Nobel Prize. 
Here we have a belief every bit as clearly opposed -- on the face of it -- to human well-being as that of the Eye-Pluckers.  Yet, unlike either them or the Dobu, its believers do not enact it, regardless of the sincerity of their belief.  Why not?  Because it does not even pretend to explain anything they commonly experience.  Its function is to reinforce gratitude to God with feelings of unworthiness, and soothe cognitive dissonance as to his coexistence with the evil that one is supposed to thank him for disarming. 
Anthropologists find the same thing in every culture they investigate.  Some beliefs are irrational; some practices are harmful.  But no belief is put into practice unless it seems, to its believers, to answer a practical need.  Many repellent practices remain uninvestigated -- we will never be able to ask the Inca why they felt they needed to expose children on mountaintops -- but the finding is so consistent that we can now take it as a working assumption in any new case. 

Evidently, then, people everywhere are endowed with reason, decency, and that instinct for empirical verification we call "common sense" (not to be confused with a number of other mental faculties we also lump together under "common sense", such as the inclination to accept cultural norms).  The bar for hateful or irrational memes to infiltrate a culture is accordingly higher than Harris, with his Eye-Pluckers and his Chemistry-Sceptic, appears to credit. 
While clearly grounds for optimism, this raises two questions.  First, if people are so reasonable, why is it so hard to get them to listen to reason?  Second, where did this mental capacity -- as implausibly useful as a Babel-fish -- come from? I suspect that the answers are intertwined.  Obviously our genes didn't decide one day to build us a bigger brain so that we could appreciate logical arguments.  Somehow, each step on the way made our ancestors slightly better at having children, or at keeping themselves or their children alive.  As with every other apparently well-designed biological system, we can expect to find vestiges of its history imprinted in its structure; and, perhaps, accompanying flaws in its functionality. 
Harris himself, it turns out, is preparing to investigate these questions:
But I want to look at what mechanisms actually control our resistance to counter-evidence, our defence of our cherished beliefs.  And I also want to look at what it means to successfully change one's mind on points of belief in neurophysiological terms; those moments where you resist despite a wealth of evidence -- what dogmatism looks like in real time -- and those moments where you actually cave in and have your views about the world modulated successfully... 
I don't think that the neuroimaging component of it is necessarily the only or most consequential piece, but the most consequential phenomenon we're dealing with, I think, in human culture, is the fact that we can't successfully talk to each other; we can't successfully change one another's views in anything like real time, based on reasoned argument... 
While we celebrate it in the ivory tower and in public discourse generally as this common tool we use all the time... I think I can count on one hand the number of times I've seen someone stripped of their cherished opinion, in real time, under pressure, in the face of counter-argument.
---Ask Sam Harris Anything #1, beginning at 48:40

This is the point, by the way, at which this article ceases to be a critique of Harris's book, and segues into speculation on what his new research will find.  In fact, I will make so bold as to predict what it will find.  But first, the background argument.  Much of what I'm about to say was hammered out in a recent discussion at RichardDawkins.net, mostly between myself and anthropologist Helga Vierich. 

It's easy -- deceptively easy -- to think of ways reason might be an advantage in a protohuman context.  Surely it helped our ancestors predict the movement of predators (and prey), devise protection against the elements, design useful tools for solving problems; and, perhaps most of all, debate about the kinds of things that would impact their foraging bands' long-term survival. 
Beware the obvious.  There's a problem with survival-value explanations of human intelligence: selective pressures for survival, operating from the environment, tend to apply to lots of different species in the same way, resulting in convergent evolution.  If it was a good idea for us for survival reasons, it should have been a good idea for lots of other species as well.  Reason's survival advantages come at a heavy cost:
A large-brained creature is sentenced to a life that combines all the disadvantages of balancing a watermelon on a broomstick, running in place in a down jacket, and for women, passing a large kidney stone every few years.
---Steven Pinker, The Language Instinct

Any neurologically-cheaper way of solving the problems mentioned above would be favoured by natural selection: tracking prey by scent, keeping warm by huddling with one's groupmates, using one's own teeth or claws rather than making tools.  This is what non-humans do, and we can be pretty confident it's what our ancestors did. 
As for the long-term survival of the foraging band, the same problem arises here as with every group-selection argument.  An individual who doesn't participate in the debates, but goes along with whatever the group does anyway, can get away with having lesser cognitive faculties and putting more energy into reproduction.  Genes for dull brains will spread at the expense of genes for sharp ones, and the population will become steadily stupider.  If that puts them in danger of dying out, die out they will. 
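(To make the free-rider logic concrete, here is a minimal sketch in Python -- a toy model of my own, not anything from Harris or the RichardDawkins.net discussion -- under the stated assumptions that the benefit of debate is shared by the whole group equally while the cost of a sharp brain is paid by the individual alone.)

```python
# A toy model (my own, for illustration) of the free-rider argument above:
# the survival benefit of group-level debate is shared by everyone equally,
# but the metabolic cost of a sharp brain is paid by the individual alone.
import random

POP = 1000           # population size, held constant each generation
BRAIN_COST = 0.1     # individual fitness cost of running a sharp brain
GROUP_BENEFIT = 0.3  # benefit of debate, scaled by how many sharp brains there are

def next_generation(pop):
    """pop is a list of 'sharp'/'dull' genotypes; return the next generation."""
    sharp_fraction = pop.count("sharp") / len(pop)
    def fitness(genotype):
        shared = 1.0 + GROUP_BENEFIT * sharp_fraction         # everyone gets this
        penalty = BRAIN_COST if genotype == "sharp" else 0.0   # only thinkers pay this
        return shared - penalty
    weights = [fitness(g) for g in pop]
    return random.choices(pop, weights=weights, k=POP)

population = ["sharp"] * (POP // 2) + ["dull"] * (POP // 2)
for _ in range(200):
    population = next_generation(population)

print("sharp-brain frequency after 200 generations:",
      population.count("sharp") / POP)
```

Run it and the sharp-brain genotype drifts toward extinction: in this toy, the shared benefit never compensates an individual for the private cost of providing it.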
Furthermore, reason is not exactly a technique for navigating the environment, but an ability to re-evaluate one's navigation techniques through abstract thought.  The book-keeping required, as you might say, drastically slows down the computation; keeping it up to anything like practical speed takes a bloated energy-sink of a brain.  And it would be a positive liability if you started philosophizing about rabbit-hunting procedures in the middle of sneaking up on a rabbit. 
Somehow, it must have conferred a reproductive advantage not merely to be able to do things but to know how and why we were doing them, and to assess and evaluate strategies at high levels of abstraction.  And whatever exerted this unusual selection pressure affected only humans -- which means it was probably exerted by humans in some kind of runaway feedback loop. 
I have a hypothesis.  But I'll warn you now, the ideas I'm about to introduce come from an extremely unlikely source.  I wouldn't have credited it myself.  I think the man with the answer is the twentieth-century French philosopher Michel Foucault. 
Foucault is often called a postmodernist: a label that he refused, but one that his dense prose unfortunately appears, at first glance, to bear out.  A closer examination, however, shows that he lacks the common factor most of the postmodernists share -- their commitment to Sigmund Freud's psychoanalytic theory, which was of course largely devised to allow Freud to talk dirty to young female patients. 
Postmodernism was founded by Jacques Lacan with a view to redefining "science" so that psychoanalysis would fall within its scope.  Jacques Derrida introduced the notion of phallogocentrism: the urge to distinguish one thing clearly from another is a disguised form of the desire for the phallus, which after all stands out clearly from the body.  Hence patterns of thought, and therefore of language, which depend on clear distinctions are manifestations of phallic domination. 
As I mentioned above, I have very little time for postmodernism.  What few good ideas have come out of it have been far better expressed elsewhere, notably the critique of discontinuous thinking in Richard Dawkins' Gaps in the Mind.  I skipped over all the Foucault in my anthropology readers, under the impression that he was just another postmodernist, and have only recently discovered what I was missing. 
When coming to grips with Foucault, I have two pieces of advice.  First, read Foucault himself, not the people who write about him; and especially not the people who write about him admiringly.  Second, if you can, start with published transcripts of his interviews and lectures (in which he is quite lucid), not the books he wrote.  It's said he once told an interviewer that, regrettably, one's writing must contain a certain quantum of nonsense in order to be taken seriously in French philosophical circles. 
Foucault's work centres on power relations in Western society.  To Foucault, power is any relationship between two or more individuals in which one subset controls or influences others' behaviour.  Why do we whisper in libraries?  Because, if your conversation gets too loud, people will look over at you disapprovingly.  The power exerted is trivial and benign, and it's distributed among all library users rather than concentrated in an elite person or group -- but it's still power. 
One stumbling block for many people encountering Foucault is that when we hear "power", we think of objectionable power.  We think Hitler or Stalin, we think Macbeth or Richard III, we think Sauron or Darth Vader. But Foucault's concept of power extends also to the means of influencing people that we consider legitimate.  Indeed, the notion of legitimacy itself is the primary vehicle of power. 
A palaeontologist discovered dinosaur footprints, and decided to collect the rock they were impressed in for analysis. Unfortunately, it was being used as paving-stones in the local town. So he put on a visibility vest and hard-hat, and dug them out in broad daylight. Because he looked "official", no-one interfered with him. "Looking official" is a tremendously effective way of getting people to do what you want, or at least not to do what you don't want. It is, in short, an exercise of power. 
Foucault connects power intimately with knowledge; in some passages, they are almost synonymous.  This is not a piece of postmodern obscurantism, but refers to the simple fact that we accede to others' opinions when they appear to know what they are talking about.  We grant ideas and practices the status of "legitimate", or "official", if and only if they appear to be based upon knowledge.  As legitimacy is the primary vehicle of power, power relations are also knowledge relations. 
Since knowledge is equivalent to power, an act of reasoning (in conversation with another person) is a challenge in a contest of power.  To be proved wrong is to lose power.  To concede an argument is to subordinate oneself to one's opponent. 
The equivalence of power and knowledge explains why scientists are so widely perceived as arrogant, when the reverse is usually the case.  It explains why people with Asperger's syndrome (like myself) make such a bad impression when we try to use conversation primarily as a vehicle for sharing knowledge.  It explains why people are so reluctant to accept certain truths even when it is within their urgent interest to do so. 
And, in my view, it explains something else as well.  Foucault wasn't interested in evolution, but I am.  Among primates -- humans definitely included -- status correlates strongly with mating opportunities for males, and offspring survival for females.  If status in our ancestors depended on persuasive communication, then we could expect natural selection to favour genes for intelligence in each generation.  With the bar continually rising, the result would be a runaway feedback loop towards greater and greater cognitive ability (at least until we began to outsource it to technology).  Such a feedback loop is exactly what is needed to explain the human brain. 
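(Again a toy sketch of my own devising, in Python -- not Foucault's and not Harris's -- just to show the shape of the dynamic: if mating success tracks persuasive ability relative to one's contemporaries, while the metabolic cost tracks absolute brain size, the population average ratchets upward for as long as the marginal status payoff beats the marginal cost.)

```python
# A toy model (my own, for illustration) of the runaway described above:
# status -- and hence reproductive success -- depends on persuasive ability
# relative to one's contemporaries, while the cost depends on absolute brain size.
import math
import random

POP, GENERATIONS = 500, 300
STATUS_PAYOFF = 0.05   # fitness gained per unit of cognition above the current average
BRAIN_COST = 0.02      # fitness lost per absolute unit of cognition
MUTATION_SD = 0.05     # heritable variation added each generation

population = [random.gauss(1.0, 0.1) for _ in range(POP)]  # initial cognition scores

for _ in range(GENERATIONS):
    mean = sum(population) / POP
    # Relative standing buys status; absolute size costs energy.
    weights = [math.exp(STATUS_PAYOFF * (x - mean) - BRAIN_COST * x) for x in population]
    parents = random.choices(population, weights=weights, k=POP)
    population = [max(0.0, p + random.gauss(0.0, MUTATION_SD)) for p in parents]

print("mean cognition after", GENERATIONS, "generations:", sum(population) / POP)
# The mean climbs steadily so long as the per-unit status payoff exceeds the per-unit
# cost; because status is judged against one's own generation, each generation's
# average becomes the bar the next one has to clear.
```

The point of the toy is only that the bar is set by each generation's own average, so no fixed level of ability is ever "enough" -- which is the feedback loop the argument requires.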

On the basis of Foucauldian theory, I will make two predictions about what Harris is going to find in his research on reasoning, as follows:
  • There is a suite of neural activity patterns within the human brain associated with acts of dominance and self-assertion.  I imagine they'll be localized in a particular brain area, but there may be some other feature which unites them as a set.  Likewise, there is a suite of neural activity patterns associated with acts of submission and subordination to authority.
  • Arguing one's case in a rational discussion will engage brain activity patterns within the dominance suite.  Conceding an argument to one's opposition in a rational discussion will engage brain activity patterns within the submission suite.
If the first prediction is correct, and the second incorrect -- if we can detect dominance and submission in the brain, and reasoning turns out to be unconnected with them -- then Foucault (or at least my reading of Foucault) is wrong.  Foucault's model may not produce quantitative predictions, but at a qualitative level it is falsifiable and thus testable. 
Pending confirmation, I have sufficient confidence in the Foucauldian hypothesis to hang a last point on it.  Harris's Chemistry-Sceptic (above), who doesn't value empiricism or logic, appears to be a stand-in for creationists; but that's not how creationists roll.  They claim to value empiricism and logic more highly than "evolutionists", and will reliably point out that macroevolution cannot be directly observed. 
Absent Foucault's insight or something like it, we are indeed reduced to concluding that creationists simply don't value the same things we do.  Knowing that argument acts on the brain as a dominance display, we can at least understand why they refuse to listen.  Though they will never concede the point on the spot, they may well reconsider if they find (and I am one who found) that they cannot raise a really crushing counter-argument without tangling themselves in absurdities. 
That being the case, Harris's excuse -- that we must simply assume shared values -- won't wash.  Unlike Russell Blackford, I think an objective principle for settling conflicts of well-being probably can be found, if we try hard enough.  And I would be eager to watch a thinker of Harris's calibre engaging with the question.
