Tag Archives: philosophy

The Devout Agnostic

by Jay Parr

Sunrise as seen from orbit. Taken by Chris Hadfield aboard the International Space Station.

I am a devout agnostic. No, that is not an oxymoron.

After considerable searching, study, and introspection—and, having been raised in the Protestant Christian tradition, no small amount of internal conflict—I have come to rest in the belief that any entity we might reasonably call God would be so alien to our limited human perceptions as to be utterly, and irreconcilably, beyond human comprehension.

Gah. So convoluted. Even after something like a dozen revisions.

Let me try to strip that down. To wit: Humankind cannot understand God. We cannot remotely define God. We wouldn’t know God if it/he/she/they slapped us square in the face. In the end, we cannot say with any certainty that anything we might reasonably call God actually exists. Nor can we say with any certainty that something we might reasonably call God does not exist.

Splash text: I don't know, and you don't either.

To horribly misquote some theologian (or philosopher?) I seem to remember encountering somewhere along the way, humankind can no more understand God than a grasshopper can understand number theory.

I mean, we can’t even wrap our puny little heads around the immensity of the known physical realm (or Creation, if you prefer) without creating incredibly simplistic and only vaguely representative models.

Let’s look at some of the things we do know. With only a handful of notable exceptions, the entirety of human history has happened on, or very near to, the fragile skin of a tiny drop of semi-molten slag just under 8,000 miles across. That’s just under 25,000 miles around, or a little more than two weeks’ driving at 70 mph, if you drove non-stop, without even pausing for meals or potty breaks.
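If you want to check that figure, here is a minimal back-of-the-envelope sketch; the round numbers are assumptions close to the ones above.

```python
# Back-of-the-envelope check of the "little more than two weeks" figure.
# Assumed round numbers: ~24,900 miles around, cruising at a steady 70 mph.
circumference_miles = 24_900
speed_mph = 70

hours = circumference_miles / speed_mph
days = hours / 24
print(f"About {hours:.0f} hours, or {days:.1f} days, of non-stop driving")
# -> About 356 hours, or 14.8 days: a little more than two weeks.
```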

Freight train in the American west, looking dwarfed by the landscape, with mountains visible in the far-off distance.

Even that tiny drop of slag can feel pretty vast to our little human perceptions, as anyone can tell you who has been on a highway in the American West and looked out at that little N-scale model train over there and realized that, no, it’s actually a full-sized freight train, with engines sixteen feet tall and seventy feet long and as heavy as five loaded-down tractor-trailers. And even though you can plainly see the entire length of that little train, it’s actually over a mile long, and creeping along at seventy-five miles per hour. Oh, and that mountain range just over there in the background? Yeah, it’s three hours away.

If we can’t comprehend the majesty of our own landscape, on this thin skin on this tiny droplet of molten slag we call home, how can we imagine the distance even to our own moon?

To-scale image of Earth and the Moon, with the Moon represented by a single pixel.

If you look at this image, in which the moon is depicted as a single pixel, it is 110 pixels to the earth (which itself is only three pixels wide, partially occupying nine pixels). At this scale it would be about eighty-five times the width of that image before you got to the Sun. If you’re bored, click on the image and it will take you to what the author only-half-jokingly calls “a tediously accurate scale model of the solar system,” where you can scroll through endless screens of nothing as you make your way from the Sun to Pluto.

Beyond the Moon, we’re best off talking about distances in terms of the speed of light—as in, how long it takes a ray of light to travel there, cruising along at about 186,000 miles per second, or 670 million miles per hour.

On the scale of our drop of molten—er, Earth—light travels pretty fast. A beam of light can travel around to the opposite side of the Earth in about a fifteenth of a second. That’s why we can call that toll-free customer-service number and suddenly find ourselves talking to some poor soul who’s working through the night somewhere in Indonesia—which, for the record, is about as close as you can get to the exact opposite point on the planet without hiring a more expensive employee down in Perth.


That capacity for real-time communication just starts to break down when you get to the Moon. At that distance a beam of light, or a radio transmission, takes a little more than a second (about 1.28 seconds, to be more accurate). So the net result is about a two-and-a-half-second lag round-trip. Enough to be noticeable, but it has rarely been a problem, as—in all of human history—only two dozen people have ever been that far away from the Earth (all of them white American men, by the way), and no one has been any further. By the way, that image of the Earthrise up there? That was taken with a very long lens, and then I cropped the image even more for this post, so it looks a lot closer than it really is.

Beyond the Moon, the distances get noticeable even at the speed of light, as the Sun is about four hundred times further away than the Moon. Going back up to that scale model in which the Earth is three pixels wide, if the Earth and Moon are about an inch and a half apart on your typical computer screen, the Sun would be about the size of a softball and fifty feet away (so for a handy visual, the Sun is a softball at the front of a semi trailer and the Earth is a grain of sand back by the doors). Traveling at 186,000 miles per second, light from the Sun makes the 93-million-mile trip to Earth in about eight minutes and twenty seconds.


Even with all that empty space, our three pixels against the fifty feet to the Sun, we’re still right next door. The same sunlight that reaches us in eight minutes takes four hours and ten minutes to reach Neptune, the outermost planet of our solar system since poor Pluto got demoted. If you’re still looking at that scale model, where we’re three pixels wide and the sun is a softball fifty feet away, that puts Neptune about a quarter of a mile away and the size of a small bead. And that’s still within our home solar system. Well within our solar system if you include all the smaller dwarf planets, asteroids, and rubble of the Kuiper Belt (including Pluto, which we now call a dwarf planet).
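If you want to check those travel times yourself, here is a minimal sketch using the 186,000-miles-per-second figure from above; the Moon and Neptune distances are my own rounded approximations, not numbers from the original post.

```python
# Rough light-travel times for the distances mentioned above.
# Assumed round distances in miles: Moon ~239,000; Sun ~93 million; Neptune ~2.8 billion.
SPEED_OF_LIGHT_MILES_PER_SECOND = 186_000

distances_miles = {
    "Moon": 239_000,
    "Sun": 93_000_000,
    "Neptune": 2_800_000_000,
}

for body, miles in distances_miles.items():
    seconds = miles / SPEED_OF_LIGHT_MILES_PER_SECOND
    print(f"{body}: about {seconds:,.1f} seconds ({seconds / 60:,.1f} minutes)")

# Moon: ~1.3 seconds; Sun: ~500 seconds (eight minutes and change);
# Neptune: ~15,000 seconds (a bit over four hours).
```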

To get to our next stellar neighbor at this scale, we start out at Ocean Isle Beach, find the grain of sand that is Earth (and the grain of very fine sand an inch and a half away that is the Moon), drop that softball fifty feet away to represent the Sun, lay out a few more grains of sand and a few little beads between the Atlantic Ocean and the first dune to represent the rest of the major bodies in our solar system, and then we drive all the way across the United States, the entire length of I-40 and beyond, jogging down the I-15 (“the” because we’re on the west coast now) to pick up the I-10 through Los Angeles and over to the Pacific Ocean at Santa Monica, where we walk out to the end of the Santa Monica Pier and set down a golf ball to represent Proxima Centauri. And that’s just the star that’s right next door.

See what I’m getting at?

What’s even more mind-bending than the vast distances and vast emptiness of outer space, is that our universe is every bit as vast at the opposite end of the size spectrum. The screen you’re reading this on, the hand you’re scrolling with—even something as dense as a solid ingot of gold bullion—is something like 99.999999999% empty space (and that’s a conservative estimate). Take a glance at this comparison of our solar system against a gold atom, if both the Sun and the gold nucleus had a radius of one foot. You’ll see that the outermost electron in the gold atom would be more than twice the distance of Pluto.
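As a rough sanity check on that comparison, here is a minimal sketch that scales a gold atom and the solar system so the nucleus and the Sun each have a one-foot radius; the specific radii and distances are approximate values I have assumed, not figures taken from the linked image.

```python
# Rough check of the "atom as a solar system" scale comparison above.
# Assumed approximate values (my own, not from the original post):
#   gold nucleus radius ~7 femtometers, gold atom radius ~170 picometers,
#   Sun radius ~432,000 miles, average Sun-Pluto distance ~3.7 billion miles.
nucleus_radius_m = 7e-15
atom_radius_m = 170e-12
sun_radius_miles = 432_000
pluto_distance_miles = 3_700_000_000

# Scale both systems so the nucleus and the Sun each have a one-foot radius.
electron_feet = atom_radius_m / nucleus_radius_m      # ~24,000 feet out
pluto_feet = pluto_distance_miles / sun_radius_miles  # ~8,600 feet out

print(f"Outermost electron: ~{electron_feet:,.0f} feet from the one-foot nucleus")
print(f"Pluto: ~{pluto_feet:,.0f} feet from the one-foot Sun")
print(f"So the electron sits roughly {electron_feet / pluto_feet:.1f} times farther out")
```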


And even though that nucleus looks kind of like a mulberry in this illustration, we now know that those protons and neutrons are, once again, something on the order of being their own solar systems compared to the quarks that constitute them. There’s enough wiggle room in there that at the density of a neutron star, our entire planet would be condensed to something roughly the size of a sports stadium. And for all we know, those quarks are made up of still tinier particles. We’re not even sure if they’re actually anything we would call solid matter or if they’re just some kind of highly-organized energy waves. In experiments, they kind of act like both.

This is not mysticism, folks. This is just physics.

The crux of all this is that, with our limited perception and our limited ability to comprehend vast scales, the universe is both orders of magnitude larger and orders of magnitude smaller than we can even begin to wrap our minds around. We live our lives at a very fixed scale, unable to even think about that which is much larger or much smaller than miles, feet, or fractions of an inch (say, within six or seven zeroes).

Those same limitations of scale apply in a very literal sense when we start talking about our perception of such things as the electromagnetic spectrum and the acoustic spectrum. Here’s an old chart of the electromagnetic spectrum from back in the mid-’40s. You can click on the image to expand it in a new tab.


If you look at about the two-thirds point on that spectrum you can see the narrow band that is visible light. We can see wavelengths from about 750 nanometers (400 terahertz) at the red end, to 380 nm (800 THz) at the blue end. In other words, the longest wavelength we can see is right at twice the length, or half the frequency, of the shortest wavelength we can see. If our hearing were so limited, we would only be able to hear one octave. Literally. One single octave.
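Here is a minimal sketch of the wavelength-to-frequency conversion behind those numbers, using a rounded speed of light; the edge wavelengths are the same ones quoted above.

```python
# The conversion behind those numbers: frequency = speed of light / wavelength.
SPEED_OF_LIGHT_M_PER_S = 3.0e8  # rounded

red_wavelength_m = 750e-9     # long-wavelength (red) edge of visible light
violet_wavelength_m = 380e-9  # short-wavelength (violet) edge

red_thz = SPEED_OF_LIGHT_M_PER_S / red_wavelength_m / 1e12
violet_thz = SPEED_OF_LIGHT_M_PER_S / violet_wavelength_m / 1e12

print(f"Red edge:    ~{red_thz:.0f} THz")     # ~400 THz
print(f"Violet edge: ~{violet_thz:.0f} THz")  # ~790 THz
print(f"Ratio: ~{violet_thz / red_thz:.2f}, i.e. roughly one octave")
```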

We can feel some of the longer wavelengths as radiant heat, and some of the shorter wavelengths (or their aftereffects) as sunburn, but even all that is only three or four orders of magnitude—two or three zeroes—and if you look at that chart, you’ll see that it’s a logarithmic scale that spans twenty-seven orders of magnitude.

If we could see the longer wavelengths our car engines would glow and our brake rotors would glow and our bodies would glow, and trees and plants would glow blazing white in the sunlight. A little longer and all the radio towers would be bright lights from top to bottom, and the cell phone towers would have bright bars like fluorescent tubes at the tops of them, and there would be laser-bright satellites in the sky, and our cell phones would flicker and glow, and our computers, and our remotes, and our wireless ear buds, and all the ubiquitous little radios that are in almost everything anymore. It would look like some kind of surreal Christmas.


If we could see shorter wavelengths our clothing would be transparent, and our bodies would be translucent, and the night sky would look totally different. Shorter still and we could see bright quasi-stellar objects straight through the Earth. It would all be very disorienting.

Of course, the ability to perceive such a range of wavelengths would require different organs, once you got beyond the near-ultraviolet that some insects can see and the near-infrared that some snakes can see. And in the end, one might argue that our limited perception of the electromagnetic spectrum is just exactly what we’ve needed to survive this far.

I was going to do the same thing with the vastness of the acoustic spectrum against the limitations of human hearing here, but I won’t get into it because acoustics is basically just a subset of fluid dynamics. What we hear as sound is things moving—pressure waves against our eardrums, to be precise—but similar theories can be applied from the gravitational interaction of galaxy clusters (on a time scale of eons) to the motion of molecules bumping into one another (on the order of microseconds), and you start getting into math that looks like this…


…and I’m an English major with a graduate degree in creative writing. That image could just as easily be a hoax, and I would be none the wiser. So let’s just leave it at this: There’s a whole lot we can’t hear, either.

We also know for a fact that time is not quite as linear as we would like to think. Einstein first theorized that space and time were related, and that movement through space would affect movement through time (though gravity also plays in there, just to complicate matters). We do just begin to see it on a practical level with our orbiting spacecraft. It’s not very big—the International Space Station will observe a differential of about one second over its decades-long lifespan—but our navigational satellites do have to adjust for it so your GPS doesn’t drive you to the wrong Starbucks.
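To give a sense of how much those adjustments matter, here is a minimal sketch using the commonly cited clock corrections for GPS satellites; the rounded figures are my own assumptions, not numbers from the original post.

```python
# Commonly cited relativistic clock effects for GPS satellites (rounded figures):
# orbital speed slows the satellite clock by roughly 7 microseconds per day,
# while weaker gravity at altitude speeds it up by roughly 45 microseconds per day.
SPEED_OF_LIGHT_M_PER_S = 3.0e8

velocity_effect_us_per_day = -7.0
gravity_effect_us_per_day = 45.0

net_drift_us = velocity_effect_us_per_day + gravity_effect_us_per_day  # ~38 us/day
range_error_km_per_day = net_drift_us * 1e-6 * SPEED_OF_LIGHT_M_PER_S / 1000

print(f"Net clock drift: ~{net_drift_us:.0f} microseconds per day")
print(f"Uncorrected ranging error: ~{range_error_km_per_day:.0f} km per day")  # ~11 km/day
```

Left uncorrected, a drift of a few dozen microseconds per day would walk your computed position off by kilometers within a single day, which is why the correction is built into the system.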

Physicists theorize that time does much stranger things on the scale of the universe, and in some of the bizarre conditions that can be found. Time almost breaks down completely in a black hole, for instance. Stephen Hawking has posited (and other theoretical astrophysicists agree) that even if the expanding universe were to reverse course and start contracting, which has not been ruled out as a possibility, it would still be an expanding universe because at that point time would have also reversed itself. Or something like that; this is probably a hugely oversimplified layman’s reading of it. But still, to jump over to popular culture, specifically a television series floating somewhere between science fiction and fantasy, the Tenth Doctor probably said it best:


So far we’ve been talking about physical facts. When we get into how our brains process those facts, things become even more uncertain. We do know that of the information transmitted to our brains via the optic and auditory nerves, the vast majority of it is summarily thrown out without getting any cognitive attention at all. What our brains do process is, from the very beginning, distorted by filters and prejudices that we usually don’t even notice. It’s called conceptually-driven processing, and it has been a fundamental concept in both cognitive psychology and consumer-marketing research for decades (why yes, you should be afraid). Our perceptual set can heavily influence how we interpret what we see—and even what information we throw away to support our assumptions. I’m reminded of that old selective-attention test from a few years back:

There are other fun videos by the same folks on The Invisible Gorilla, but this is a pretty in-your-face example of how we can tune out things that our prejudices have deemed irrelevant, even if it’s a costume gorilla beating its chest right in the middle of the scene. As it turns out, we can only process a limited amount of sensory information in a given time (a small percentage of what’s coming in), so the very first thing our brains do is throw out most of it, before filling in the gaps with our own assumptions about how things should be.

As full of holes as our perception is, our memory process is even worse. We know that memory goes through several phases, from the most ephemeral, sensory memory, which is on the order of fractions of a second, to active memory, on the order of tens of seconds, to various iterations of long-term memory. At each stage, only a tiny portion of the information is selected and passed on to the next. And once something makes it through all those rounds of selection to make it into long-term memory, there is evidence in cognitive neuroscience that in order to retrieve those memories, we have to destroy them first. That’s right; the act of recalling a long-term memory back into active memory physically destroys it. That means that when you think about that dim memory from way back in your childhood (I’m lying on the living-room rug leafing through a volume of our off-brand encyclopedia while my mother works in the kitchen), you’re actually remembering the last time you remembered it. Because the last time you remembered it, you obliterated that memory in the process, and had to remember it all over again.

I’ve heard it said that if scientists ran the criminal-justice system, eyewitness testimony would be inadmissible in court. Given the things we know about perception and memory (especially in traumatic situations), that might not be such a bad idea.



So far I have avoided the topic of religion itself. I’m about to change course, and I know that this is where I might write something that offends someone. So I want to start out with the disclaimer that what I’m writing here is only my opinion—only my experience—and I recognize that everyone’s religious journey is individual, unique, and deeply personal. I’m not here to convert anyone, and I’m not here to pooh-pooh anyone’s religious convictions. Neither am I here to be converted. I respect your right to believe what you believe and to practice your religion as you see fit—provided you respect my right to do the same. Having stated that…

Most of the world’s older religions started out as oral traditions. Long before being written down they had been handed down in storytelling, generation after generation after generation, mutating along the way, until what ends up inscribed in the sacred texts might be completely unrecognizable to the scribes’ great-great-grandparents. Written traditions are somewhat more stable, but until the advent of typography, every copy was still transcribed by hand, and subject to the interpretations, misinterpretations, and agendas of the scribes doing the copying.

Acts of translation are even worse. Translation is, by its very nature, an act of deciding what to privilege and what to sacrifice in the source text. I have experienced that process first-hand in my attempts to translate 14th-century English into 21st-century English. Same language, only 600 years later.


Every word is a decision: Do I try to preserve a particular nuance at the expense of the poetic meter of the phrase? Do I use two hundred words to convey the meaning that is packed into these twenty words? How do I explain this cultural reference that is meaningless to us, but would have been as familiar to the intended audience as we would find a Seinfeld reference? Can I go back to my translation ten years after the fact and change that word that seemed perfect at the time but that has since proven a nagging source of misinterpretation? Especially in the translation of sacred texts, where people will hang upon the interpretation of a single word, forgetting entirely that it’s just some translator’s best approximation. Wars have been fought over such things.

The Muslim world might have the best idea here, encouraging its faithful to learn and study their scriptures in Arabic rather than rely on hundreds of conflicting translations in different languages. Added bonus: You get a common language everyone can use.


But the thing is, even without the vagaries of translation, human language is—at best—a horribly imprecise tool. One person starts out with an idea in mind. That person approximates that idea as closely as they can manage, using the clumsy symbols that make up any given language—usually composing on the fly—and transmits that language to its intended recipient through some method, be it speech or writing or gestural sign language. The recipient listens to that sequence of sounds, or looks at that sequence of marks or gestures, and interprets them back into a series of symbolic ideas, assembling those ideas back together with the help of sundry contextual clues to approximate—hopefully—something resembling what the speaker had in mind.

It’s all fantastically imprecise—wristwatch repair with a sledgehammer—and when you add in the limitations of the listener’s perceptual set it’s obvious how a rhinoceros becomes a unicorn. I say “tree,” thinking of the huge oak in my neighbor’s back yard, but one reader pictures a spruce, another a dogwood, another a magnolia. My daughter points to the rosemary tree in our dining room, decorated with tinsel for the holidays. The mathematician who works in logic all day imagines data nodes arranged in a branching series of nonrecursive decisions. The genealogist sees a family history.

Humans are also infamously prone to hyperbole. Just ask your second cousin about that bass he had halfway in the boat last summer before it wriggled off the hook. They’re called fish stories for a reason. As an armchair scholar of medieval English literature, I can tell you that a lot of texts presented as history, with a straight face, bear reading with a healthy dose of skepticism. According to the 12th-century History of the Kings of Britain, that nation was founded when some guy named Brutus, who gets his authority by being the grandson of Aeneas (yeah, the one from Greek mythology), sailed up the Thames, defeated the handful of giants who were the sole inhabitants of the whole island, named the island after himself (i.e., Britain), and established the capital city he called New Troy, which would later be renamed London. Sounds legit.


In the beginning of Sir Gawain and the Green Knight, Gawain beheads the huge green man who has challenged him to a one-blow-for-one-blow duel, right there in front of the whole Arthurian court, but the man picks up his head, laughs at Gawain, hops back on his horse, and rides off. Granted, Gawain is presented as allegory rather than fact, but Beowulf is presented as fact, and he battles a monster underwater for hours, then kills a dragon when he’s in his seventies.

Heck, go back to ancient Greek literature and the humans and the gods routinely get into each other’s business, helping each other out, meddling in each other’s affairs, deceiving and coercing each other into doing things, getting caught up in petty jealousies, and launching wars out of spite or for personal gain. Sound familiar?

As for creation stories, there are almost as many of those as there are human civilizations. We have an entire three-credit course focused on creation stories, and even that only has space to address a small sampling of them.


Likewise, there are almost as many major religious texts as there are major civilizations. The Abrahamic traditions have their Bible and their Torah and their Qur’an and Hadith, and their various apocryphal texts, all of which are deemed sacrosanct and infallible by at least a portion of their adherents. The Buddhists have their Sutras. The Hindus have their Vedas, Upanishads, and Bhagavad Gita. The Shinto have their Kojiki. The Taoists have their Tao Te Ching. Dozens of other major world religions have their own texts, read and regarded as sacred by millions. The countless folk religions around the world have their countless oral traditions, some of which have been recorded and some of which have not.

Likewise, there are any number of religions that have arisen out of personality cults, sometimes following spiritual leaders of good faith, sometimes following con artists and charlatans. Sometimes those cults implode early. Sometimes they endure. Sometimes they become major world religions.


At certain levels of civilization, it is useful to have explanations for the unexplainable, symbolic interpretations of the natural world, narratives of origin and identity—even absolute codes of conduct. Religious traditions provide their adherents with comfort, moral guidance, a sense of belonging, and the foundations of strong communities.

However, religion has also been abused throughout much of recorded history, to justify keeping the wealthy and powerful in positions of wealth and power, to justify keeping major segments of society in positions of abject oppression, to justify vast wars, profitable to the most powerful and the least at risk, at the expense of the lives and livelihoods of countless less-powerful innocents.

A lot of good has been done in the name of religion. So has a lot of evil. And before we start talking about Islamist violence, let us remember that millions have been slaughtered in the name of Christianity. Almost every religion has caused bloodshed in its history, and every major religion has caused major bloodshed at some point in its history. Even the Buddhists. And there’s almost always some element of we’re-right-and-you’re-wrong very close to the center of that bloodshed.


But what if we’re all wrong?

If we can’t begin to comprehend the vastness of the universe or the emptiness of what we consider solid, if we can only sense a tiny portion of what is going on around us (and through us), and if we don’t even know for sure what we have actually seen with our own eyes or heard with our own ears, how can we even pretend to have any handle on an intelligence that might have designed all this? How can we even pretend to comprehend an intelligence that might even be all of this? I mean seriously, is there any way for us to empirically rule out the possibility that our entire known universe is part of some greater intelligence too vast for us to begin to comprehend? That in effect we are, and our entire reality is, a minuscule part of God itself?

In short, the more convinced you are that you understand the true nature of anything we might reasonably call God, the more convinced I am that you are probably mistaken.


I’m reminded of the bumper sticker I’ve seen: “If you’re living like there’s no God, you’d better be right!” (usually with too many exclamation points). And the debate I had with a street evangelist in which he tried to convince me that it was safer to believe in Jesus if there is no Christian God, than to be a non-believer if he does exist. Nothing like the threat of hell to bring ’em to Jesus. But to me, that kind of thinking is somewhere between a con job and extortion. You’re either asking me to believe you because you’re telling me bad things will happen to me if I don’t believe you, which is circular logic, or you’re threatening me. Either way, I’m not buying. I don’t believe my immortal soul will be either rewarded or punished in the afterlife, because when it comes right down to it, even if something we might reasonably call God does exist, I still don’t think we will experience anything we would recognize as an afterlife. Or that we possess anything we would recognize as an immortal soul.

To answer the incredulous question of a shocked high-school classmate, yes, I do believe that when we die, we more or less just wink out of existence. And no, I’m not particularly worried about that. I don’t think any of us is aware of it when it happens.

But if there’s no recognizable afterlife, no Heaven or Hell, no divine judgment, what’s to keep us from abandoning all morality and doing as we please—killing, raping, looting, destroying property and lives with impunity, without fear of divine retribution? Well, if there is no afterlife, if, upon our deaths, we cease to exist as an individual, a consciousness, an immortal soul, or anything we would recognize as an entity—which, as I have established here, I believe is likely the case—then it logically follows that this life, this flicker of a few years between the development of consciousness in the womb and the disintegration of that consciousness at death, well, to put it bluntly, this is all we get. This life, and then we’re gone. There is no better life beyond. You can call it nihilism, but I think it’s quite the opposite.

Because if this one life here on Earth is all we get, ever, that means each life is unique, and finite, and precious, and irreplaceable, and in a very real sense, sacred. Belief in an idealized afterlife can be used—twisted, rather—to justify the killing of innocents. Kill ’em all and let God sort ’em out. The implication being that if the slaughtered were in fact good people, they’re now in a better place. But if there is no afterlife, no divine judgment, no eternal reward or punishment, then the slaughtered innocent are nothing more than that: Slaughtered. Wiped out. Obliterated. Robbed of their one chance at this beautiful, awesome, awful, and by turns astounding and terrifying experience we call life.

Likewise, if this one life is all we get and someone is deliberately maimed—whether physically or emotionally, with human atrocities inflicted upon them or those they love—they don’t get some blissful afterlife to compensate for it. They spend the rest of their existence missing that hand, or having been raped, or knowing that their parents or siblings or children were killed because they happened to have been born in a certain place, or raised with a certain set of religious traditions, or have a certain color of skin or speak a certain language.

In other words, if this one life is all we get? We had damned well better use it wisely. Because we only get this one chance to sow as much beauty, as much joy, as much nurturing, and peace, and friendliness, and harmony as possible. We only get this one chance to embrace the new ideas and the new experiences. We only get this one chance to welcome the stranger, and to see the world through their eyes, if only for a moment. We only get this one chance to feed that hungry person, or to give our old coat to that person who is cold, or to offer compassion and solace and aid to that person who has seen their home, family, livelihood, and community destroyed by some impersonal natural disaster or some human evil such as war.


If I’m living like there’s no (recognizable) God, I’d better be doing all I can manage to make this world a more beautiful place, a happier place, a more peaceful place, a better place. For everyone.

As for a God who would see someone living like that, or at least giving it their best shot, and then condemn them to eternal damnation because they failed to do something like accept Jesus Christ as their personal lord and savior? I’m sorry, but I cannot believe in a God like that. I might go so far as to say I flat-out refuse to believe in a God like that. I won’t go so far as to say that no God exists, because as I have said, I believe that we literally have no way of knowing, but I’m pretty sure any God that does exist isn’t that small-minded.


So anyway, happy holidays.

This is an examination of my own considered beliefs, and nothing more. I won’t try to convert you. I will thank you to extend me the same courtesy. You believe what you believe and I believe what I believe, and in all likelihood there is some point at which each of us believes the other is wrong. And that’s okay. If after reading this you find yourself compelled to pray for my salvation, I won’t be offended.

If you celebrate Christmas, I wish you a merry Christmas. If you celebrate the Solstice, I wish you a blessed Solstice. If you celebrate Hanukkah, I wish you (belatedly) a happy Hanukkah. If you celebrate Milad un Nabi, I wish you Eid Mubarak. If some sense of tradition and no small amount of marketing has led you to celebrate the celebratory season beyond any sense of religious conviction, you seem to be in good company. If you celebrate some parody of a holiday such as Giftmas, I wish you the love of family and friends, and some cool stuff to unwrap. If you celebrate Festivus, I wish you a productive airing of grievances. If you’re Dudeist, I abide. If you’re Pastafarian, I wish you noodly appendage and all that. If you don’t celebrate anything? We’re cool.

And if you’re still offended because I don’t happen to believe exactly the same thing you believe? Seriously? You need to get over it.


Science in a Postmodern Age

by Matt McKinnon


I am not a scientist.

Just like many prominent, mostly Republican, politicians responding to the issue of climate change—trying their best to avoid losing votes from their conservative constituencies while not coming across as completely out of touch with the modern world—I am not a scientist.

Of course, if you ask most people who are in fact scientists, then somewhere around 87% of them agree that climate change is real and that it is mostly due to human activity (or at least if you ask those scientists who are members of the American Association for the Advancement of Science, as reported by the Pew Research Center).


Then again, if you ask average Americans (the ones who are not scientists), then only about 50% think that human activity is the largest cause of climate change.

That’s quite a disparity (37 points), especially since getting 87% of scientists to agree on anything is not all that easy and arguably represents what we could call a scientific consensus.

This, of course, provides much fodder for comedians like Bill Maher and Jon Stewart as well as many liberals and progressives, who have come to see the problem of science and a skeptical public as a characteristic of contemporary American conservatism.


And this characterization is buttressed by the even more overwhelming discrepancy between the public and scientists on the question of evolution. A 2009 study by Pew found that only 54% of the public believe in evolution (22% of whom believe that it was guided by a supreme being) versus 95% of scientists (where only 8% believe it to be guided by a supernatural power). And that more recent 2014 Pew study bumped the public percentage up to 65% and the scientific consensus up to 98%.

That’s a gap of 33 points, a bit less than the 37 points on the issue of climate change. Sure, there’s something to be said for the idea that contemporary conservatism is at odds with science on some fundamental issues.

But not so fast.

For while there is a large discrepancy between scientists and the American public on these core conservative questions, there is also a large and seemingly growing discrepancy between the public and science on issues that cross political lines, or that could even be considered liberal issues.


Take the recent controversy about immunizations.

Just as with climate change and evolution, a large majority of scientists not only think that they are safe and effective, but also think that certain immunizations should be mandatory for participation in the wider society. That same 2014 Pew study found that 86% of scientists think immunizations should be mandatory, compared to 68% of the public.

And the very liberal left is often just as vocal as the conservative right on this issue, with folks like Jenny McCarthy, who has claimed that her son’s autism was the result of immunizations despite clear scientific evidence that has debunked any link. At least one study by Yale law professor Dan Kahan shows that those who fear childhood immunizations are pretty much split between liberals and conservatives.


Still, with an 18-point gap between scientists and the public on this issue, that leaves a lot of progressives seemingly in the same position as those conservatives denying the role of human activity in climate change.

Just as interesting, however, is the discrepancy between scientists and the public on building more nuclear power plants—a gap that is greater (20 points) though scientific opinion is less certain. Pew found that 45% of the public favors more nuclear power compared to 65% of scientists.

But what is even more intriguing is that all of these gaps between scientific consensus and public opinion are far less than the discrepancy that exists on the issue of biomedical science, from the use of pesticides to animal testing and the most controversial: genetically modified organisms (GMOs).


That same Pew study found that a whopping 88% of scientists believe that it is safe to eat genetically modified foods, a larger consensus than agree on human activity and climate change, compared to public opinion, which languishes very far back at 37% (a disparity of 51 points!).

And 68% of those scientists agree that it is safe to eat foods grown with pesticides, compared to 28% of the public (a gap of 40 points).

But you won’t find many liberal politicians wading publicly into this issue, championing the views of science over a skeptical public. Nor will you find much sympathy from those comedians either.


It seems that when the proverbial shoe is on the other foot, then it is either not problematic that so many plain old folks diverge from scientific opinion, or there is in fact good reason for their skepticism.

Which brings me to my point about science in a postmodern age. For while it is true that there are good reasons to be skeptical of the science on the use of pesticides and GMOs, as well as some of these other issues, the problem is: who decides when to be skeptical and how skeptical we should be?


That is the problem of postmodernism, which strives for a leveling of discourse and has more than a bit of anti-clerical skepticism about it. For if postmodernism teaches us anything it’s that the certitude of reason in the modern age is anything but certain. And while this makes for fun philosophical frolicking by folks like Heidegger, Foucault, and Habermas, it is problematic for science, which relies completely on the intuition that reason and observation are the only certain means of discovery we have.

But in a postmodern age, nothing is certain, and nothing is beyond reproach—not the government, or business, or think tanks, or even institutions of higher learning. Not scientific studies or scientists or even science itself. Indeed, not even reason for that matter.


The moorings of the modern era in reason have come loose to some extent in our postmodern culture. And this, more than anything else, explains the large gaps on many issues between scientific opinion and that of the public.

And in the interest of full disclosure: I believe human activity is causing climate change and that immunizations are safe and should be required, but I am very skeptical of the use of pesticides and eating GMOs.

But what do I know? I’m not a scientist.

Why All Babies Deserve to Die: Science and Theology in the Abortion Debate

by Matt McKinnon

The debate rages on…

Just a few of the headlines on the abortion debate from the last few weeks:

I would say that the abortion issue has once again taken center stage in the culture wars, but it never really left. Unlike homosexual marriage, which seems to be making steady progress toward a resolution by a majority of Americans that the freedom of consenting adults to marry is a basic civil right, the abortion debate continues to divide a populace torn over how to adjudicate the priority of the basic rights of both mother and “potential” child.

I say “potential” child because herein is where the real debate lies: exactly when does a fertilized human egg, a zygote, become a “person,” endowed with certain human if not specifically civil rights?

Is it a person yet?

Dougherty’s main point in his article on liberal denial focuses on the “fact” of the beginnings of human life. He claims that liberals tend to make one of two types of arguments where science and human life are concerned: either they take the unresolved legal issue regarding the idea of personhood and transfer it back to the “facts” of biology, concluding that we cannot really know what human life is or when it begins, or they acknowledge the biological fact of the beginning of human life but claim that this has no bearing on how we should think about the legality of abortion.

Both sorts of arguments, he claims, are obscurantist, and fail to actually take into account the full weight of science on the issue.

But the problem, I contend, isn’t one of science: it’s one of theology—or philosophy for those less religiously inclined.

The problem is not the question of “what” human life is or “when” it begins. Dougherty points out:

After the fusion of sperm and egg, the resulting zygote has unique human DNA from which we can deduce the identity of its biological parents. It begins the process of cell division, and it has a metabolic action that will not end until it dies, whether that is in a few days because it never implants on the uterine wall, or years later in a gruesome fishing accident, or a century later in a hospital room filled with beloved grandchildren.

Two-cell zygote. Is this a person?

So basically, human life begins at conception because at that point science can locate a grouping of cells from which it can deduce all sorts of things from its DNA, and this grouping of cells, if everything goes nicely, will result in the birth, life, and ultimate death of a human being.

He even gets close to the heart of the problem when, in arguing against an article by Ryan Cooper, he claims that many people are not fine with the idea that an abortion represents the end of a life, nor are they comfortable with having a category of human life that is not granted the status of “humanity”—and thus not afforded basic human rights.

The problem with all of these discussions is that they dance around the real issue here—the issue not of “human life” and its definition and beginning, but rather the philosophical and often theological question of the human “person.”

If we look closely at Dougherty’s remarks above, we note two distinct examples of why the generation of human life is a “fact”: (1) we can locate DNA that tells us all sorts of things about the parents (and other ancestors) of the fetus and (2) this fetus, if everything works properly, will develop into a human being, or rather, I would argue, a human “person.”

For there’s the distinction that makes the difference.

After all, analyze any one of my many bodily fluids and a capable technician would be able to locate the exact same information that Mr. Dougherty points out is right there from the first moments of a zygote’s existence. But no one claims that any of these bodily fluids or the cells my body regularly casts off are likewise deserving of being labeled “human life,” though the sperm in my semen and the cells in my saliva are just as much “alive” as any zygote (believe me, I’ve looked).

No, the distinction and the difference is in the second example: The development of this zygote into a human person. My sperm, without an egg and the right environment, will never develop into a human being. The cells in my saliva have no chance at all—even with an egg and the right conditions.

Nope, not people.

So the real force of Dougherty’s argument lies in the “potential” of the zygote to develop into what he and anti-abortion folks would claim is already there in the “reality” of a human person.

The debate thus centers on the question of human personhood, what we call theological or philosophical anthropology. For one side, this personhood is the result of a development and is achieved sometime during the embryonic stage (like “viability”) or even upon birth. For others, it is there at conception. For some in both camps it would include a “soul.” For others it would not.

So the reason that the abortion debate is sui generis or “of its own kind” is because here the issue is not the rights of a minority versus the rights of a majority, as it is in the debate about homosexual marriage, or even the rights of the mother versus the rights of the child. Rather, the real debate is about when “human life” is also a human “person” (note that this also informs the debate over whether or not to end the life of someone in a vegetative state).

Fetus at four weeks. Is this a person?

To this end, Mr. Dougherty is correct: We can and do know what human life is and when it begins. And he is correct that many are uncomfortable with the idea that abortion means the death of a human life. But he fails to recognize that the reason this is the case is that while those on one side regard this “life” as a human person, others do not. Potentially, perhaps, but not a “person” yet. And certainly not one whose “right to life” (if there even is such a thing: nature says otherwise—but that’s another blog post) trumps the rights of the mother.

So what does all of this have to do with all babies deserving to die? It’s simple: this is what the (necessary?) intrusion of theology into public policy debates entails. Once theological ideas are inserted (and note that I am not arguing that they should or shouldn’t be), how do we adjudicate between their competing claims or limit the extent that they go?

For the two great Protestant Reformers Martin Luther and John Calvin, representing the two dominant trajectories of traditional Protestant Christianity, humans are, by nature, sinful. We are conceived in sin and born into sin, and this “Original Sin” is only removed in Baptism (here the Roman Catholic Church would agree). Furthermore, we are prone to keep sinning due to the concupiscence of our sinful nature (here is where the Roman Church would disagree). The point is that, for Protestants, all people are not only sinful, but are also deserving of the one chief effect of sin: Death.


“For the wages of sin is death.” — Romans 6:23


Calvin was most explicit in Book 2, Chapter 1 of his famous Institutes:

Even babies bring their condemnation with them from their mother’s wombs: they suffer for their own imperfections and no one else’s. Although they have not yet produced the fruits of sin, they have the seed within. Their whole nature is like a seedbed of sin and so must be hateful and repugnant to God.

Since babies, like all of us, are sinful in their very nature, and since they will necessarily continually bear the fruits of those sins (anyone who’s ever tried to calm a screaming infant can attest to this), and since the wages of those sins is death, then it’s not a far-fetched theological conclusion that all babies deserve to die. And remember: “they suffer for their own imperfections.”

But they don’t just deserve to die—they deserve to go to hell as well (but that’s also another blog post). And this, not from the fringes of some degenerate religious thinker, but from the theology of one of Protestant Christianity’s most influential thinkers.

A sinner in the eyes of God (according to John Calvin, anyway).

Of course, it should be noted that Calvin does not imply that we should kill babies, or even that their death at human hands would be morally justifiable: though he does argue (and here all Christian theology would agree) that their death at the hand of God is not just morally justifiable, it is also deserved. It should also be noted that the Roman Catholic theology behind the idea that children cannot sin until they reach the age of reason is predicated on the notion that this is only the case once their Original Sin has been removed in Baptism (so Jewish, Muslim, and Hindu kids would be sinful, unlike their Christian counterparts).

Again, this is not to argue that philosophical and theological principles should not be employed in the abortion debate, or in any debate over public policy. Only that (1) this is what is occurring when pro-choice and anti-abortion folks debate abortion and (2) it is fraught with complexities and difficulties that few on either side seem to recognize.

And contrary to Mr. Dougherty, this is beyond the realm of science, which at best tells us only about states of nature.

But the only way we have a “prayer” of real sustained dialogue—as opposed to debates that ignore our competing fundamental positions—is to take seriously the philosophical and theological issues that frame the question (even if my own example is less than serious).

But I’m not holding my breath. I would most certainly die if I did.

What Should We Learn in College? (Part II)

by Wade Maki

In my last post I discussed comments made by our Governor on what sorts of things we should, and shouldn’t, be learning in college. This is a conversation going on across higher education. Of course we should learn everything in college, but this goal is not practical as our time and funds are limited. We are left then to prioritize what things to require of our students, what things will be electives, and what things not to offer at all.

One area where we do this prioritizing is “general education” (GE), which is the largest issue in determining what we learn in college. Some institutions have a very broad model for GE that covers classic literature, history, philosophy, and the “things an educated person should know.” Exactly what appears on this list will vary by institution, with some focusing more on the arts, some on the humanities, and others on the social sciences. The point is that the institution decides on a very small core for GE.

The drawback to a prescribed model for GE is that it doesn’t allow for as much student choice. The desire for more choice led to another very common GE system, often referred to as “the cafeteria model,” whereby many courses are offered as satisfying GE requirements and each student picks preferred courses from each category. This system is good for student choice of what to learn, but it isn’t good if you want a connected “core” of courses.

In recent years there has been a move to have a “common core” in which all universities within a state would have the same GE requirements. This makes transfers easier since all schools have the same core. However, it also tends to limit the amount of choice by reducing the options to only those courses offered at every school. In addition, it eliminates the local character of an institution’s GE (by making them all the same), which also reduces improvements from having competing systems (when everyone does it their own way, good ideas tend to be replicated). If we don’t try different GE systems on campuses then innovation slows.


No matter which direction we move GE, we still have to address the central question of “what should we learn?” For example, should students learn a foreign language? Of course they should in an ideal world, but consider that foreign language requirements are two years. We must compare the opportunity costs of that four-course requirement (what else could we have learned from four other courses in, say, economics, psychology, science, or communications?). This is just one example of how complicated GE decisions can be. Every course we require is a limitation on choice and makes it less likely that other (non-required) subjects will be learned.

As many states look at a “common core” model, there is an additional consideration that is often overlooked. Suppose we move to a common core of general education in which most students learn the same sorts of things. Now imagine a business or work environment where most of your coworkers learned the same types of things, but none of them learned other areas of knowledge. Is this preferable to an organization whose already-employed, educated members learned very little in common but have more diverse educational backgrounds? I suspect an organization with more diversely educated employees will be more adaptable than one where there are a few things everyone knows and a lot of things no one knows.


This is my worry about the way we are looking to answer the question of what we should learn in college. In the search for an efficient, easy-to-transfer common core we may end up:

  1. Having graduates with more similar educations and the same gaps in their educations.
  2. Losing the unique educational cultures of our institutions.
  3. Missing out on the long term advantage of experimentation across our institutions by imposing one model for everyone.

Not having a common core doesn’t solve all of the problems, but promoting experiments through diverse and unique educational requirements is worth keeping. There is another problem with GE that I can’t resolve, which is that most of us in college answer the question this way: “Everyone should learn what I did or what I’m teaching.” But that is a problem to be addressed in another posting. So, what should we learn in college?

What Should we Learn in College? (Part I)

by Wade Maki

Recently Governor McCrory made some comments on William Bennett’s radio show about higher education. These comments got a lot of people’s attention, and not necessarily the good kind. Before reading any comments on what someone else has said, it is best to check out the original source. To that end, I suggest listening to the entire segment of the Governor on the show (which you can download as an MP3 here).

Governor Pat McCrory

Several comments were made regarding higher education, including the importance an education has in getting a job, the shortage of certain kinds of training (welding), and the surplus of workers in other kinds of education (including gender studies, philosophy, and Swahili). While there are a lot of things worth responding to in the radio segment, I will address only one issue: why disciplinary training in philosophy is valuable. Philosophy is, after all, my field, and it is wise to restrict one’s public claims to what one knows.

What does philosophy teach us? Common answers include increased critical thinking, argumentation skills, and clarity of communication. In practice this includes a bundle of skills such as: seeing the logical implications of proposed ideas or courses of action; the ability to identify the relevant issue under discussion and separate out the “red herrings”, unsupported arguments, or fallacious reasoning; being able to break down complex ideas, issues, or communications and explain them in a logically organized fashion, etc. I could go on, but these are a sampling of the real skills learned from an education in philosophy.

What the governor and Dr. Bennett (who holds a Ph.D. in Philosophy) said gives the impression that a philosophy education doesn’t help students get jobs. This has been a takeaway message in the media. Since others have made the case that a job isn’t the goal of an education, I leave it to the reader to examine that argument. There are two points about the discussion that should be noted. First, Dr. Bennett was suggesting that we have too many Ph.D.’s in philosophy, which is a separate claim from the claim that philosophy lacks educational value. It may be true that we have an oversupply of Ph.D.’s in many disciplines (and a shortage in others). The causes of this are many and include the free choice of students as to what to study, the impetus for universities to create graduate programs to enhance their reputations, and the ability to reduce teaching costs by putting graduate students in the classroom. Again, I leave it to others to examine these causes. Nothing Dr. Bennett said indicated that undergraduates shouldn’t learn philosophy.

Dr. William "Bill" Bennett

Dr. William “Bill” Bennett

This leads me to the second point—Dr. Bennett is himself an example of the value philosophy adds to education. What do you do with a philosophy education? Dr. Bennett parlayed his philosophical training, along with legal training (a common combination), into becoming Secretary of Education, a political commentator, an author, and a talk radio host. His logical argumentation skills and his knowledge of Aristotle and virtue ethics are evident throughout his work. The very skills described above as benefits of a philosophical education are the skills his career represents.

There are very good reasons to include philosophy as part of our higher education curricula. Unfortunately, philosophy becomes an easy target for disparagement in public discourse, for at least two reasons. First, most people don’t have an understanding of what philosophy is and how it develops numerous valuable skills. Second, philosophy teaches transferable skills that enhance many careers without having a single career associated solely with it (besides teaching). In other words, the value of studying nursing may be to become a nurse, in a way that the value of studying philosophy isn’t to become a philosopher. The value of philosophy is found in the skills it develops, which can be applied to all sorts of jobs. I suspect Dr. Bennett would agree, and I hope Governor McCrory will as well.

Actually, We Can All Just Get Along…And Do Most Of the Time.

by Wade Maki

Who’s out to destroy America? If you believe everything you hear over the next few weeks, the answer is just about everyone. Greedy capitalists, lazy moochers, and every candidate running in a competitive race are just some of the dangers. Of course, if you watch the news you’d also conclude that we’re all about to die from the weather (hurricanes, earthquakes, tornadoes, snow, oh my), can’t swim in the oceans (sharks), can’t fly (crashes), and will be the victims of terrorism, swine flu, computer hacking, identity theft, or sudden onset obesity any minute now.

Similar to how the news exaggerates the risks of daily living, campaigns exaggerate the evil intent of every “other” in society. Luckily, when disasters really do occur, most of us get along pretty well (and on days without disasters, too).


Are the presidential candidates really villains from Batman?

Our predisposition towards cooperation became especially clear to me this summer during a trip to visit family in the hills of northwest Arkansas. On the surface this is a unique region, as you learn when flying into what appears to be nowhere. You land at a very large and modern airport (thanks to Wal-Mart headquarters being in the area). The many small communities contain people from all over the country—most notably retirees seeking warm weather, affordable living, low taxes and a large supply of golf courses.

We stayed with relatives up winding roads in hills filled with middle-class houses and large trees. During the second night of our stay we experienced a very fast and violent storm. The power went out after dark, and we faced the “what do we do without electricity” quandary familiar to those too used to technology. Luckily, I had an iPad to light the way until we found a flashlight and got candles lit. As there wasn’t much else to do, we grabbed a flashlight and took a midnight stroll to see what had happened.

We quickly realized that this was not a unique idea, as people were roaming all over the neighborhood (in the dark, the bouncing flashlights were visible for blocks). Trees were down everywhere. Not just small ones, but massive trees lay across yards, power lines, and the tops of homes. It was bad, and everyone was making sure everyone else was okay. We hadn’t made it a block before running into a man with a flashlight strapped atop his head by his shirt, his long wet hair hanging down his bare shoulders, looking for the chainsaw he had set down along the street. He was the first, but not the last, person who, in the middle of the night, was already getting to work helping neighbors remove massive trees from damaged homes.

All night and most of the next day we heard the roar of chainsaws as the cleanup continued. People from outside the neighborhood were driving around offering their services to those needing tree removal (some were professionals, others just a guy with a saw trying to make a buck). It is at a time like this that you realize the “greedy capitalist” you hear denounced during campaign season is a good thing to have around when an eight-foot-wide oak tree is crushing your roof.

For most of the next day the power was out (the company workers were doing their best) as a mixture of volunteers and for-profit professionals assisted those in need. One elderly couple had a very large tree crash right into their bedroom. Luckily, they weren’t home. Rather than wait to contact them, or wait for an insurance assessor, that same mix of neighbors and professionals got together, removed the tree from the house, and put a tarp on the roof to protect the couple’s home from further rain.

There were no bad guys that day. Despite the different political yard signs around, no one viewed anyone else as out to destroy America. When something really bad happened, it was amazing how everyone (volunteers, for-profit professionals, neighbors, etc.) just did what needed doing. As a microcosm of society, it is a good reminder of just how well most things work (which is the real magic, given how many things could go wrong).

Sure, there are problems and differences, and our decisions about which policies or people to support can make things better or worse. For the most part, though, society is full of pretty good people trying their best, in their own way, to get what needs doing done. Something to remember as you experience the drumbeat of doom from political ads and “news” outlets: we can and do get along just fine…most of the time.

Who is on First: Ambiguous and Loaded Language

By Wade Maki

“Who is on first? Yes, he is.” The classic comedy bit plays on ambiguity in language. In this case the ambiguity is just the unfortunate result of the situation (people named “Who” and “What” are difficult to talk about). A great many problems are caused by ambiguous language, in which two or more meanings may be found in the same wording. A vast number of philosophical disputes revolve around disputes over language: what exactly did you mean by X?

Do you believe in God? This seems a simple question, but what does it mean to “believe” in something? Does belief entail that it is true, that it is likely true, that it is possibly true, that I just hope it is true, or even that I just want it to be true? The word is unclear, and any question involving it invites answers aimed at any one of these standards, leaving great possibility for confusion between questioner and answerer.

Sometimes ambiguous language is just the unintended result of vague expression. In other cases it results from careless expression. As evidence, here is how a team of students recently reported on a conflict between two companies:

“Throughout the process, this firm created monetary problems for their company explaining why they decided not to provide their services to them.”

While the team knew what the words “this,” “their,” “they,” and “them” referred to, there was no way for the reader to decipher that meaning, given that there were at least two subjects to which each word could refer.

In other cases ambiguous language is a deliberate tool of deception. Examples from politics and advertising are numerous where, by design, language is selected because it has dual meanings: one which is technically true, and another which isn’t true but which the speaker hopes the listener will accept as true. President Clinton’s famous legal defense concerning perjury included the curious claim that “it depends upon what the meaning of the word ‘is’ is.” You know language is in trouble when “is” becomes ambiguous.

Rather than focus on political ambiguity in language, a subject deserving of its own post, consider how advertisers use it. Below are two labels from the same product line, called “ecosense” (in two shades of green). When you see this, what do you think of? Once you’ve answered the question, read the small print at the bottom. Then look at the second version, which represents the updated advertising language. Notice how they changed the small print to be even more ambiguous than in the first.

What is going on with these ads is called “greenwashing,” whereby an attempt is made to present a product as environmentally friendly when, in fact, it is not. In the examples above, the advertiser plays on both the ambiguity of the phrase “ecosense” and the ambiguity of the color green. The “eco” in ecosense could mean ecological and/or economical, just as the green could mean environmentally friendly and/or affordable. As the small print indicates in the first ad (which was the original label), only the economical claims are true. However, since the product would sell better if people thought it was environmentally friendly, this original small print was altered to be more ambiguous. Now, while it tells you that ecosense means economical sense, it leaves open the question of the product’s environmental impact. People who don’t read the small print (a significant number) would reasonably conclude that the product is environmentally friendly, and even those who read the second label may reach the same conclusion.

Thus far the examples have involved language that could have two or more meanings. There is another form of ambiguity in which meanings are smuggled into language without actually being said. What comes to mind when I tell you Jones is an environmentalist? For many people the word itself brings with it images of hippies, tree huggers, people diving atop whales to save them from harpoons, Prius owners, or a host of other associations. As a result, a lot of people say “I’m not an environmentalist” before adding, “but I care about the environment.” This is as logical as the woman who says, “I believe in equal rights, but I’m not a feminist.”

Confusion in such cases comes not from the words themselves, but from outside ideas the listener associates with the words. Thus, most conservatives don’t call themselves “environmentalists,” as that word says SUV-burning, unshowered neo-hippie. Instead, conservatives are more likely to use the term “conservationist.” What is the difference? Not a whole lot, if you only look at the words and know that both seek to protect parks, air, water, and nature. Of course, the term conservationist also carries additional connotations for some listeners, such as, in full Teddy Roosevelt tradition, enjoying nature by using an elephant gun to blow away every creature in the natural world for the trophy wall.

A lot of conflict, confusion, and deception occurs because of ambiguity, whether in the meanings of words themselves or in the additional notions smuggled in with them. What one person says innocently can strike another listener as racist, homophobic, or otherwise offensive. The solution isn’t easy. Being aware of ambiguity and smuggled notions goes a long way, but not far enough. If you were running for president and wanted to protect parks, air, water, and nature, what word would you use? If “environmentalist” and “conservationist” each scare a third of America, what word do you use? This helps explain the tortured use of language in politics.