
The Devout Agnostic

by Jay Parr

Sunrise as seen from orbit. Taken by Chris Hadfield aboard the International Space Station.

I am a devout agnostic. No, that is not an oxymoron.

After considerable searching, study, and introspection (and, having been raised in the Protestant Christian tradition, no small amount of internal conflict), I have come to rest in the belief that any entity we might reasonably call God would be so alien to our limited human perceptions as to be utterly, and irreconcilably, beyond human comprehension.

Gah. So convoluted. Even after something like a dozen revisions.

Let me try to strip that down. To wit: Humankind cannot understand God. We cannot remotely define God. We wouldn’t know God if it/he/she/they slapped us square in the face. In the end, we cannot say with any certainty that anything we might reasonably call God actually exists. Nor can we say with any certainty that something we might reasonably call God does not exist.

Splash text: I don't know, and you don't either.

To horribly misquote some theologian (or philosopher?) I seem to remember encountering somewhere along the way, humankind can no more understand God than a grasshopper can understand number theory.

I mean, we can't even wrap our puny little heads around the immensity of the known physical realm (or Creation, if you prefer) without creating incredibly simplistic and only vaguely representative models.

Let's look at some of the things we do know. With only a handful of notable exceptions, the entirety of human history has happened on, or very near to, the fragile skin of a tiny drop of semi-molten slag just under 8,000 miles across. That's just under 25,000 miles around, or a little more than two weeks' driving at 70 mph, if you went non-stop, without so much as a meal or a potty break.
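(If you want to check that arithmetic yourself, a few lines of Python will do it. These are rounded textbook values, not figures from any particular source, so treat it as a back-of-the-envelope sketch.)

    # Back-of-the-envelope check of the "two weeks' driving" figure (rounded values).
    import math

    diameter_miles = 7918                      # mean diameter of Earth, roughly
    circumference = math.pi * diameter_miles   # ~24,875 miles around
    hours = circumference / 70                 # non-stop at 70 mph

    print(round(circumference), "miles around")               # ~24875
    print(round(hours / 24, 1), "days of non-stop driving")   # ~14.8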

Freight train in the American west, looking dwarfed by the landscape, with mountains visible in the far-off distance.

Even that tiny drop of slag can feel pretty vast to our little human perceptions, as anyone who has been on a highway in the American West can tell you: you look out at that little N-scale model train over there and realize that, no, it's actually a full-sized freight train, with engines sixteen feet tall and seventy feet long and as heavy as five loaded-down tractor-trailers. And even though you can plainly see the entire length of that little train, it's actually over a mile long, and creeping along at seventy-five miles per hour. Oh, and that mountain range just over there in the background? Yeah, it's three hours away.

If we can’t comprehend the majesty of our own landscape, on this thin skin on this tiny droplet of molten slag we call home, how can we imagine the distance even to our own moon?

To-scale image of Earth and the Moon, with the Moon represented by a single pixel.

If you look at this image, in which the Moon is depicted as a single pixel, it is 110 pixels to the Earth (which itself is only three pixels wide, partially occupying nine pixels). At this scale it would be about eighty-five times the width of that image before you got to the Sun. If you're bored, click on the image and it will take you to what the author only-half-jokingly calls "a tediously accurate scale model of the solar system," where you can scroll through endless screens of nothing as you make your way from the Sun to Pluto.
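(Here's a quick sanity check of that pixel scale, again with rounded mean distances of my own choosing rather than anything from the linked site.)

    # Sanity check of the one-pixel-Moon scale (rounded mean figures).
    moon_diameter_km  = 3475
    earth_diameter_km = 12742
    earth_moon_km     = 384400       # mean Earth-Moon distance
    earth_sun_km      = 149.6e6      # mean Earth-Sun distance

    def px(km):
        # One pixel = one Moon diameter.
        return km / moon_diameter_km

    print(round(px(earth_diameter_km), 1), "pixels: width of Earth")   # ~3.7
    print(round(px(earth_moon_km)), "pixels: Earth to Moon")           # ~111
    print(round(px(earth_sun_km)), "pixels: Earth to Sun")             # ~43,000
    # ~43,000 pixels is about 85 widths of a roughly 500-pixel-wide image.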

Beyond the Moon, we're best off talking about distances in terms of the speed of light: as in, how long it takes a ray of light to travel there, cruising along at about 186,000 miles per second, or 670 million miles per hour.

On the scale of our drop of molten (er, Earth), light travels pretty fast. A beam of light can travel around to the opposite side of the Earth in about a fifteenth of a second. That's why we can call that toll-free customer-service number and suddenly find ourselves talking to some poor soul who's working through the night somewhere in Indonesia (which, for the record, is about as close as you can get to the exact opposite point on the planet without hiring a more expensive employee down in Perth).

Earthrise, photographed from Apollo 8 on December 24, 1968 (NASA).

That capacity for real-time communication just starts to break down when you get to the Moon. At that distance a beam of light, or a radio transmission, takes a little more than a second (about 1.28 seconds, to be more accurate). So the net result is about a two-and-a-half-second lag round-trip. Enough to be noticeable, but it has rarely been a problem, as, in all of human history, only two dozen people have ever been that far away from the Earth (all of them white American men, by the way), and no one has been any further. By the way, that image of the Earthrise up there? That was taken with a very long lens, and then I cropped the image even more for this post, so it looks a lot closer than it really is.

Beyond the Moon, the distances get noticeable even at the speed of light, as the Sun is about four hundred times further away than the Moon. Going back up to that scale model in which the Earth is three pixels wide, if the Earth and Moon are about an inch and a half apart on your typical computer screen, the Sun would be about the size of a softball and fifty feet away (so for a handy visual, the Sun is a softball at the front of a semi trailer and the Earth is a grain of sand back by the doors). Traveling at 186,000 miles per second, light from the Sun makes the 93-million-mile trip to Earth in about eight minutes and twenty seconds.
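(The light-travel arithmetic is just distance divided by speed; here is the same calculation, using the rounded distances above.)

    # Light travel times from rounded average distances.
    c_miles_per_sec = 186000
    moon_miles = 239000        # average Earth-Moon distance
    sun_miles = 93e6           # average Earth-Sun distance

    print(round(moon_miles / c_miles_per_sec, 2), "seconds to the Moon, one way")   # ~1.28

    sun_seconds = sun_miles / c_miles_per_sec
    print(int(sun_seconds // 60), "min", round(sun_seconds % 60), "s to the Sun")   # 8 min 20 s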

The Sun over Earth, photographed from the International Space Station.

Even with all that empty space, our three pixels against the fifty feet to the Sun, we’re still right next door. The same sunlight that reaches us in eight minutes takes four hours and ten minutes to reach Neptune, the outermost planet of our solar system since poor Pluto got demoted. If you’re still looking at that scale model, where we’re three pixels wide and the sun is a softball fifty feet away, that puts Neptune about a quarter of a mile away and the size of a small bead. And that’s still within our home solar system. Well within our solar system if you include all the smaller dwarf planets, asteroids, and rubble of the Kuiper Belt (including Pluto, which we now call a dwarf planet).

To get to our next stellar neighbor at this scale, we start out at Ocean Isle Beach, find the grain of sand that is Earth (and the grain of very fine sand an inch and a half away that is the Moon), drop that softball fifty feet away to represent the Sun, lay out a few more grains of sand and a few little beads between the Atlantic Ocean and the first dune to represent the rest of the major bodies in our solar system, and then we drive all the way across the United States, the entire length of I-40 and beyond, jogging down the I-15 (“the” because we’re on the west coast now) to pick up the I-10 through Los Angeles and over to the Pacific Ocean at Santa Monica, where we walk out to the end of the Santa Monica Pier and set down a golf ball to represent Proxima Centauri. And that’s just the star that’s right next door.
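(If the cross-country analogy sounds too tidy to be true, here's the rough arithmetic behind it, using rounded figures of my own; the exact endpoint depends on which numbers you round.)

    # Where Proxima Centauri lands if 50 feet = one Earth-Sun distance.
    earth_sun_miles = 93e6
    scale_feet_per_mile = 50 / earth_sun_miles

    proxima_lightyears = 4.25
    miles_per_lightyear = 5.88e12
    proxima_scale_feet = proxima_lightyears * miles_per_lightyear * scale_feet_per_mile

    print(round(proxima_scale_feet / 5280), "scale miles to Proxima Centauri")   # ~2,500
    # Roughly the driving distance from the North Carolina coast to Santa Monica.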

See what I’m getting at?

What's even more mind-bending than the vast distances and vast emptiness of outer space is that our universe is every bit as vast at the opposite end of the size spectrum. The screen you're reading this on, the hand you're scrolling with—even something as dense as a solid ingot of gold bullion—is something like 99.999999999% empty space (and that's a conservative estimate). Take a glance at this comparison of our solar system against a gold atom, drawn as if both the Sun and the gold nucleus had a radius of one foot. You'll see that the outermost electron in the gold atom would be more than twice the distance of Pluto.
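(The emptiness figure is easy to ballpark by comparing the volume of a gold nucleus to the volume of the whole atom. The radii below are rough textbook values, not taken from the linked illustration.)

    # How empty is a gold atom?  (Rough, order-of-magnitude radii.)
    nucleus_radius_m = 7.0e-15     # gold nucleus, about 7 femtometers
    atom_radius_m = 1.35e-10       # gold atom, about 135 picometers

    filled = (nucleus_radius_m / atom_radius_m) ** 3
    print(f"fraction of the atom's volume that is nucleus: {filled:.1e}")   # ~1e-13
    print(f"empty space: {(1 - filled) * 100:.11f}%")                       # 99.99999999999%

    # The illustration's claim: scale both the nucleus and the Sun to a one-foot
    # radius, and the outermost electron sits well beyond twice Pluto's distance.
    print(round(atom_radius_m / nucleus_radius_m))   # ~19,000 nucleus-radii to the atom's edge
    print(round(5.9e9 / 696000))                     # ~8,500 Sun-radii out to Pluto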

A gold atom compared to the solar system, with the nucleus and the Sun each drawn at a one-foot radius.

And even though that nucleus looks kind of like a mulberry in this illustration, we now know that those protons and neutrons are, once again, something like their own solar systems compared to the quarks that constitute them. There's enough wiggle room in there that at the density of a neutron star, our entire planet would be condensed to the size of a child's marble. And for all we know, those quarks are made up of still tinier particles. We're not even sure if they're actually anything we would call solid matter or if they're just some kind of highly organized energy waves. In experiments, they kind of act like both.

This is not mysticism, folks. This is just physics.

The crux of all this is that, with our limited perception and our limited ability to comprehend vast scales, the universe is both orders of magnitude larger and orders of magnitude smaller than we can even begin to wrap our minds around. We live our lives at a very fixed scale, unable to even think about that which is much larger or much smaller than miles, feet, or fractions of an inch (say, within six or seven zeroes).

Those same limitations of scale apply in a very literal sense when we start talking about our perception of such things as the electromagnetic spectrum and the acoustic spectrum. Here’s an old chart of the electromagnetic spectrum from back in the mid-’40s. You can click on the image to expand it in a new tab.

A chart of the electromagnetic spectrum from 1944.

If you look at about the two-thirds point on that spectrum you can see the narrow band that is visible light. We can see wavelengths from about 750 nanometers (400 terahertz) at the red end, to 380 nm (800 THz) at the blue end. In other words, the longest wavelength we can see is right at twice the length, or half the frequency, of the shortest wavelength we can see. If our hearing were so limited, we would only be able to hear one octave. Literally. One single octave.
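(That "one octave" claim is just a ratio, and easy to verify with the same rounded wavelengths.)

    # An octave is a doubling.  Is the visible band really only about one octave?
    red_nm, violet_nm = 750, 380
    print(round(red_nm / violet_nm, 2))   # ~1.97 -- just under a doubling, so yes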

We can feel some of the longer wavelengths as radiant heat, and some of the shorter wavelengths (or their aftereffects) as sunburn, but even all that is only three or four orders of magnitude (two or three zeroes), and if you look at that chart, you'll see that it's a logarithmic scale that spans twenty-seven orders of magnitude.

If we could see the longer wavelengths our car engines would glow and our brake rotors would glow and our bodies would glow, and trees and plants would glow blazing white in the sunlight. A little longer and all the radio towers would be bright lights from top to bottom, and the cell phone towers would have bright bars like fluorescent tubes at the tops of them, and there would be laser-bright satellites in the sky, and our cell phones would flicker and glow, and our computers, and our remotes, and our wireless ear buds, and all the ubiquitous little radios that are in almost everything anymore. It would look like some kind of surreal Christmas.

The same scene in visible light and in infrared.

If we could see shorter wavelengths our clothing would be transparent, and our bodies would be translucent, and the night sky would look totally different. Shorter still and we could see bright quasi-stellar objects straight through the Earth. It would all be very disorienting.

Of course, the ability to perceive such a range of wavelengths would require different organs, once you got beyond the near-ultraviolet that some insects can see and the near-infrared that some snakes can see. And in the end, one might argue that our limited perception of the electromagnetic spectrum is just exactly what we’ve needed to survive this far.

I was going to do the same thing with the vastness of the acoustic spectrum against the limitations of human hearing here, but I won't get into it, because acoustics is basically just a subset of fluid dynamics. What we hear as sound is things moving (pressure waves against our eardrums, to be precise), but similar theories can be applied from the gravitational interaction of galaxy clusters (on a time scale of eons) to the motion of molecules bumping into one another (on the order of microseconds), and you start getting into math that looks like this…

A page of equations from acoustic theory.

…and I’m an English major with a graduate degree in creative writing. That image could just as easily be a hoax, and I would be none the wiser. So let’s just leave it at this: There’s a whole lot we can’t hear, either.

We also know for a fact that time is not quite as linear as we would like to think. Einstein first theorized that space and time were related, and that movement through space would affect movement through time (though gravity also plays in there, just to complicate matters). We are just beginning to see it on a practical level with our orbiting spacecraft. It's not very big (the International Space Station will observe a differential of about one second over its decades-long lifespan), but our navigational satellites do have to adjust for it so your GPS doesn't drive you to the wrong Starbucks.
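(The GPS correction is the textbook example here. The numbers below are the standard ones quoted in relativity primers, not something worked out in this post, but they show why the adjustment matters.)

    # Standard back-of-the-envelope GPS relativity numbers (microseconds per day).
    special_relativity_us = -7    # orbital speed slows the satellite clocks
    general_relativity_us = 45    # weaker gravity at altitude speeds them up
    net_us = special_relativity_us + general_relativity_us   # ~38 microseconds/day fast

    c_km_per_s = 299792
    ranging_error_km = net_us * 1e-6 * c_km_per_s
    print(net_us, "microseconds/day drift")
    print(round(ranging_error_km, 1), "km/day of ranging error if left uncorrected")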

Physicists theorize that time does much stranger things on the scale of the universe, and in some of the bizarre conditions that can be found. Time almost breaks down completely in a black hole, for instance. Stephen Hawking has posited (and other theoretical astrophysicists agree) that even if the expanding universe were to reverse course and start contracting, which has not been ruled out as a possibility, it would still be an expanding universe because at that point time would have also reversed itself. Or something like that; this is probably a hugely oversimplified layman’s reading of it. But still, to jump over to popular culture, specifically a television series floating somewhere between science fiction and fantasy, the Tenth Doctor probably said it best:

The Tenth Doctor: "wibbly wobbly, timey wimey… stuff."

So far we’ve been talking about physical facts. When we get into how our brains process those facts, things become even more uncertain. We do know that of the information transmitted to our brains via the optic and auditory nerves, the vast majority of it is summarily thrown out without getting any cognitive attention at all. What our brains do process is, from the very beginning, distorted by filters and prejudices that we usually don’t even notice. It’s called conceptually-driven processing, and it has been a fundamental concept in both cognitive psychology and consumer-marketing research for decades (why yes, you should be afraid). Our perceptual set can heavily influence how we interpret what we see—and even what information we throw away to support our assumptions. I’m reminded of that old selective-attention test from a few years back:

There are other fun videos by the same folks on The Invisible Gorilla, but this is a pretty in-your-face example of how we can tune out things that our prejudices have deemed irrelevant, even if it’s a costume gorilla beating its chest right in the middle of the scene. As it turns out, we can only process a limited amount of sensory information in a given time (a small percentage of what’s coming in), so the very first thing our brains do is throw out most of it, before filling in the gaps with our own assumptions about how things should be.

As full of holes as our perception is, our memory process is even worse. We know that memory goes through several phases, from the most ephemeral, sensory memory, which is on the order of fractions of a second, to active memory, on the order of tens of seconds, to various iterations of long-term memory. At each stage, only a tiny portion of the information is selected and passed on to the next. And once something makes it through all those rounds of selection to make it into long-term memory, there is evidence in cognitive neuroscience that in order to retrieve those memories, we have to destroy them first. That’s right; the act of recalling a long-term memory back into active memory physically destroys it. That means that when you think about that dim memory from way back in your childhood (I’m lying on the living-room rug leafing through a volume of our off-brand encyclopedia while my mother works in the kitchen), you’re actually remembering the last time you remembered it. Because the last time you remembered it, you obliterated that memory in the process, and had to remember it all over again.

I’ve heard it said that if scientists ran the criminal-justice system, eyewitness testimony would be inadmissible in court. Given the things we know about perception and memory (especially in traumatic situations), that might not be such a bad idea.


Okay.

So far I have avoided the topic of religion itself. I'm about to change course, and I know that this is where I might write something that offends someone. So I want to start out with the disclaimer that what I'm writing here is only my opinion (only my experience), and I recognize that everyone's religious journey is individual, unique, and deeply personal. I'm not here to convert anyone, and I'm not here to pooh-pooh anyone's religious convictions. Neither am I here to be converted. I respect your right to believe what you believe and to practice your religion as you see fit, provided you respect my right to do the same. Having stated that…

Most of the world’s older religions started out as oral traditions. Long before being written down they had been handed down in storytelling, generation after generation after generation, mutating along the way, until what ends up inscribed in the sacred texts might be completely unrecognizable to the scribes’ great-great-grandparents. Written traditions are somewhat more stable, but until the advent of typography, every copy was still transcribed by hand, and subject to the interpretations, misinterpretations, and agendas of the scribes doing the copying.

Acts of translation are even worse. Translation is, by its very nature, an act of deciding what to privilege and what to sacrifice in the source text. I have experienced that process first-hand in my attempts to translate 14th-century English into 21st-century English. Same language, only 600 years later.

A facsimile page from the manuscript of Sir Gawain and the Green Knight.

Every word is a decision: Do I try to preserve a particular nuance at the expense of the poetic meter of the phrase? Do I use two hundred words to convey the meaning that is packed into these twenty words? How do I explain this cultural reference that is meaningless to us, but would have been as familiar to the intended audience as a Seinfeld reference is to us? Can I go back to my translation ten years after the fact and change that word that seemed perfect at the time but that has since proven a nagging source of misinterpretation? Especially in the translation of sacred texts, where people will hang upon the interpretation of a single word, forgetting entirely that it's just some translator's best approximation. Wars have been fought over such things.

The Muslim world might have the best idea here, encouraging its faithful to learn and study their scriptures in Arabic rather than rely on hundreds of conflicting translations in different languages. Added bonus: You get a common language everyone can use.

Pages of a Qur'an.

But the thing is, even without the vagaries of translation, human language is, at best, a horribly imprecise tool. One person starts out with an idea in mind. That person approximates that idea as closely as they can manage, using the clumsy symbols that make up any given language (usually composing on the fly), and transmits that language to its intended recipient through some method, be it speech or writing or gestural sign language. The recipient listens to that sequence of sounds, or looks at that sequence of marks or gestures, and interprets them back into a series of symbolic ideas, assembling those ideas back together with the help of sundry contextual clues to approximate (hopefully) something resembling what the speaker had in mind.

It's all fantastically imprecise (wristwatch repair with a sledgehammer), and when you add in the limitations of the listener's perceptual set it's obvious how a rhinoceros becomes a unicorn. I say "tree," thinking of the huge oak in my neighbor's back yard, but one reader pictures a spruce, another a dogwood, another a magnolia. My daughter points to the rosemary tree in our dining room, decorated with tinsel for the holidays. The mathematician who works in logic all day imagines data nodes arranged in a branching series of nonrecursive decisions. The genealogist sees a family history.

Humans are also infamously prone to hyperbole. Just ask your second cousin about that bass he had halfway in the boat last summer before it wriggled off the hook. They’re called fish stories for a reason. As an armchair scholar of medieval English literature, I can tell you that a lot of texts presented as history, with a straight face, bear reading with a healthy dose of skepticism. According to the 12th-century History of the Kings of Britain, that nation was founded when some guy named Brutus, who gets his authority by being the grandson of Aeneas (yeah, the one from Greek mythology), sailed up the Thames, defeated the handful of giants who were the sole inhabitants of the whole island, named the island after himself (i.e., Britain), and established the capital city he called New Troy, which would later be renamed London. Sounds legit.

Illustration from Sir Gawain and the Green Knight.

In the beginning of Sir Gawain and the Green Knight, Gawain beheads the huge green man who has challenged him to a one-blow-for-one-blow duel, right there in front of the whole Arthurian court, but the man picks up his head, laughs at Gawain, hops back on his horse, and rides off. Granted, Gawain is presented as allegory rather than fact, but Beowulf is presented as fact, and he battles a monster underwater for hours, then kills a dragon when he’s in his seventies.

Heck, go back to ancient Greek literature and the humans and the gods routinely get into each other's business, helping each other out, meddling in each other's affairs, deceiving and coercing each other into doing things, getting caught up in petty jealousies, and launching wars out of spite or for personal gain. Sound familiar?

As for creation stories, there are almost as many of those as there are human civilizations. We have an entire three-credit course focused on creation stories, and even that only has space to address a small sampling of them.


Likewise, there are almost as many major religious texts as there are major civilizations. The Abrahamic traditions have their Bible and their Torah and their Qur’an and Hadith, and their various apocryphal texts, all of which are deemed sacrosanct and infallible by at least a portion of their adherents. The Buddhists have their Sutras. The Hindus have their Vedas, Upanishads, and Bhagavad Gita. The Shinto have their Kojiki. The Taoists have their Tao Te Ching. Dozens of other major world religions have their own texts, read and regarded as sacred by millions. The countless folk religions around the world have their countless oral traditions, some of which have been recorded and some of which have not.

Likewise, there are any number of religions that have arisen out of personality cults, sometimes following spiritual leaders of good faith, sometimes following con artists and charlatans. Sometimes those cults implode early. Sometimes they endure. Sometimes they become major world religions.

Jim Jones.

At certain levels of civilization, it is useful to have explanations for the unexplainable, symbolic interpretations of the natural world, narratives of origin and identity, even absolute codes of conduct. Religious traditions provide their adherents with comfort, moral guidance, a sense of belonging, and the foundations of strong communities.

However, religion has also been abused throughout much of recorded history, to justify keeping the wealthy and powerful in positions of wealth and power, to justify keeping major segments of society in positions of abject oppression, to justify vast wars, profitable to the most powerful and the least at risk, at the expense of the lives and livelihoods of countless less-powerful innocents.

A lot of good has been done in the name of religion. So has a lot of evil. And before we start talking about Islamist violence, let us remember that millions have been slaughtered in the name of Christianity. Almost every religion has caused bloodshed in its history, and every major religion has caused major bloodshed at some point in its history. Even the Buddhists. And there’s almost always some element of we’re-right-and-you’re-wrong very close to the center of that bloodshed.

The Spanish Inquisition.

But what if we’re all wrong?

If we can’t begin to comprehend the vastness of the universe or the emptiness of what we consider solid, if we can only sense a tiny portion of what is going on around us (and through us), and if we don’t even know for sure what we have actually seen with our own eyes or heard with our own ears, how can we even pretend to have any handle on an intelligence that might have designed all this? How can we even pretend to comprehend an intelligence that might even be all of this? I mean seriously, is there any way for us to empirically rule out the possibility that our entire known universe is part of some greater intelligence too vast for us to begin to comprehend? That in effect we are, and our entire reality is, a minuscule part of God itself?

In short, the more convinced you are that you understand the true nature of anything we might reasonably call God, the more convinced I am that you are probably mistaken.


I’m reminded of the bumper sticker I’ve seen: “If you’re living like there’s no God, you’d better be right!” (usually with too many exclamation points). And the debate I had with a street evangelist in which he tried to convince me that it was safer to believe in Jesus if there is no Christian God, than to be a non-believer if he does exist. Nothing like the threat of hell to bring ’em to Jesus. But to me, that kind of thinking is somewhere between a con job and extortion. You’re either asking me to believe you because you’re telling me bad things will happen to me if I don’t believe you, which is circular logic, or you’re threatening me. Either way, I’m not buying. I don’t believe my immortal soul will be either rewarded or punished in the afterlife, because when it comes right down to it, even if something we might reasonably call God does exist, I still don’t think we will experience anything we would recognize as an afterlife. Or that we possess anything we would recognize as an immortal soul.

To answer the incredulous question of a shocked high-school classmate, yes, I do believe that when we die, we more or less just wink out of existence. And no, I’m not particularly worried about that. I don’t think any of us is aware of it when it happens.

But if there's no recognizable afterlife, no Heaven or Hell, no divine judgment, what's to keep us from abandoning all morality and doing as we please: killing, raping, looting, destroying property and lives with impunity, without fear of divine retribution? Well, if there is no afterlife, if, upon our deaths, we cease to exist as an individual, a consciousness, an immortal soul, or anything we would recognize as an entity (which, as I have established here, I believe is likely the case), then it logically follows that this life, this flicker of a few years between the development of consciousness in the womb and the disintegration of that consciousness at death, well, to put it bluntly, this is all we get. This life, and then we're gone. There is no better life beyond. You can call it nihilism, but I think it's quite the opposite.

Because if this one life here on Earth is all we get, ever, that means each life is unique, and finite, and precious, and irreplaceable, and in a very real sense, sacred. Belief in an idealized afterlife can be used (twisted, rather) to justify the killing of innocents. Kill 'em all and let God sort 'em out. The implication being that if the slaughtered were in fact good people, they're now in a better place. But if there is no afterlife, no divine judgment, no eternal reward or punishment, then the slaughtered innocent are nothing more than that: Slaughtered. Wiped out. Obliterated. Robbed of their one chance at this beautiful, awesome, awful, and by turns astounding and terrifying experience we call life.

Likewise, if this one life is all we get and someone is deliberately maimed (whether physically or emotionally, with human atrocities inflicted upon them or those they love), they don't get some blissful afterlife to compensate for it. They spend the rest of their existence missing that hand, or having been raped, or knowing that their parents or siblings or children were killed because they happened to have been born in a certain place, or raised with a certain set of religious traditions, or have a certain color of skin or speak a certain language.

In other words, if this one life is all we get? We had damned well better use it wisely. Because we only get this one chance to sow as much beauty, as much joy, as much nurturing, and peace, and friendliness, and harmony as possible. We only get this one chance to embrace the new ideas and the new experiences. We only get this one chance to welcome the stranger, and to see the world through their eyes, if only for a moment. We only get this one chance to feed that hungry person, or to give our old coat to that person who is cold, or to offer compassion and solace and aid to that person who has seen their home, family, livelihood, and community destroyed by some impersonal natural disaster or some human evil such as war.

Syrian refugees.

If I’m living like there’s no (recognizable) God, I’d better be doing all I can manage to make this world a more beautiful place, a happier place, a more peaceful place, a better place. For everyone.

As for a God who would see someone living like that, or at least giving it their best shot, and then condemn them to eternal damnation because they failed to do something like accept Jesus Christ as their personal lord and savior? I’m sorry, but I cannot believe in a God like that. I might go so far as to say I flat-out refuse to believe in a God like that. I won’t go so far as to say that no God exists, because as I have said, I believe that we literally have no way of knowing, but I’m pretty sure any God that does exist isn’t that small-minded.

Albert Einstein.

So anyway, happy holidays.

This is an examination of my own considered beliefs, and nothing more. I won’t try to convert you. I will thank you to extend me the same courtesy. You believe what you believe and I believe what I believe, and in all likelihood there is some point at which each of us believes the other is wrong. And that’s okay. If after reading this you find yourself compelled to pray for my salvation, I won’t be offended.

If you celebrate Christmas, I wish you a merry Christmas. If you celebrate the Solstice, I wish you a blessed Solstice. If you celebrate Hanukkah, I wish you (belatedly) a happy Hanukkah. If you celebrate Milad un Nabi, I wish you Eid Mubarak. If some sense of tradition and no small amount of marketing has led you to celebrate the celebratory season beyond any sense of religious conviction, you seem to be in good company. If you celebrate some parody of a holiday such as Giftmas, I wish you the love of family and friends, and some cool stuff to unwrap. If you celebrate Festivus, I wish you a productive airing of grievances. If you’re Dudeist, I abide. If you’re Pastafarian, I wish you noodly appendage and all that. If you don’t celebrate anything? We’re cool.

And if you’re still offended because I don’t happen to believe exactly the same thing you believe? Seriously? You need to get over it.



What Should We Learn in College? (Part II)

by Wade Maki

In my last post I discussed comments made by our Governor on what sorts of things we should, and shouldn’t, be learning in college. This is a conversation going on across higher education. Of course we should learn everything in college, but this goal is not practical as our time and funds are limited. We are left then to prioritize what things to require of our students, what things will be electives, and what things not to offer at all.

One area we do this prioritization in is "general education" (GE), which is the largest issue in determining what we learn in college. Some institutions have a very broad model for GE that covers classic literature, history, philosophy, and the "things an educated person should know." Exactly what appears on this list will vary by institution, with some being more focused on the arts, some on the humanities, and others on social sciences. The point is that the institution itself decides on a small core for GE.

The drawback to a prescribed model for GE is that it doesn't allow for as much student choice. The desire for more choice led to another very common GE system, often referred to as "the cafeteria model," whereby many courses are offered as satisfying GE requirements and each student picks from the options within each category. This system is good for student choice of what to learn, but it isn't good if you want a connected "core" of courses.

In recent years there has been a move to have a “common core” in which all universities within a state would have the same GE requirements. This makes transfers easier since all schools have the same core. However, it also tends to limit the amount of choice by reducing the options to only those courses offered at every school. In addition, it eliminates the local character of an institution’s GE (by making them all the same), which also reduces improvements from having competing systems (when everyone does it their own way, good ideas tend to be replicated). If we don’t try different GE systems on campuses then innovation slows.


No matter which direction we move GE, we still have to address the central question of "what should we learn?" For example, should students learn a foreign language? Of course they should in an ideal world, but consider that foreign language requirements typically run two years. We must compare the opportunity costs of that four-course requirement (what else could we have learned from four other courses in, say, economics, psychology, science, or communications?). This is just one example of how complicated GE decisions can be. Every course we require is a limitation on choice and makes it less likely that other (non-required) subjects will be learned.

As many states look at a "common core" model, there is an additional consideration which is often overlooked. Suppose we move to a common core of general education in which most students learn the same sorts of things. Now imagine your business or work environment, where most of your coworkers learned the same types of things but other areas of knowledge were not learned by any of them. Is this preferable to an organization whose educated employees learned very little in common but have more diverse educational backgrounds? I suspect an organization with more diversely educated employees will be more adaptable than one where there are a few things everyone knows and a lot of things no one knows.


This is my worry about the way we are looking to answer the question of what we should learn in college. In the search for an efficient, easy-to-transfer common core, we may end up:

  1. Having graduates with more similar educations and the same gaps in their educations.
  2. Losing the unique educational cultures of our institutions.
  3. Missing out on the long-term advantage of experimentation across our institutions by imposing one model for everyone.

Not having a common core doesn't solve all of these problems, but promoting experiments through diverse and unique educational requirements is worth keeping. There is another problem with GE that I can't resolve, which is that most of us in college answer the question this way: "Everyone should learn what I did or what I'm teaching." But that is a problem to be addressed in another posting. So, what should we learn in college?

All Hallows Eve…and Errors

by Matt McKinnon

All Hallows Eve, or Hallowe’en for short, is one of the most controversial and misunderstood holidays celebrated in the United States—its controversy owing in large part to its misunderstanding.  More so than the recent “War on Christmas” that may or may not be raging across the country, or the most important of all Christian holidays—Easter—blatantly named after the pagan goddess (Eostre), Halloween tends to separate Americans into those who enjoy it and find it harmless and those who disdain it and find it demonic.  Interestingly enough, both groups tend to base their ideas about Halloween on the same erroneous “facts” about its origins.

A quick perusal of the internet (once you have gotten by the commercialized sites selling costumes and the like) will offer the following generalizations about the holiday, taken for granted by most folks as historical truth.

Common ideas from a secular and/or Neopagan perspective:

  • Halloween developed from the  pan-Celtic feast of Samhain (pronounced “sah-ween”)
  • Samhain was the Celtic equivalent of New Year's
  • This was a time when the veil between the living and dead was lifted
  • It becomes Christianized as “All Hallows Day” (All Saints Day)
  • The eve of this holy day remained essentially Pagan
  • Celebrating Halloween is innocent fun

Common ideas from an Evangelical Christian perspective (which would accept the first five of the above):

  • Halloween is Pagan in origin and outlook
  • It became intertwined with the “Catholic” All Saints Day
  • It celebrates evil and/or the Devil
  • It glorifies death and the macabre
  • Celebrating Halloween is blasphemous, idolatrous, and harmful

Even more “respectable” sites like those from History.com and the Library of Congress continue to perpetuate the Pagan-turned-Christian history of Halloween despite scarce evidence to support it, and considerable reason to be suspicious of it.

To be sure, like most legends, this “history” of Halloween contains some kernel of fact, though, again like most things, its true history is much more convoluted and complex.

The problem with Halloween and its Celtic origins is that the Celts were a semi-literate people who left only some inscriptions: all the writings we have about the pre-Christian Celts (the pagans) are the product of Christians, who may or may not have been completely faithful in their description and interpretation.  Indeed, all of the resources for ancient Irish mythology are medieval documents (the earliest being from the 11th century—some 600 years after Christianity had been introduced to Ireland).

It may be the case that Samhain indeed marked the Irish commemoration of the change in seasons “when the summer goes to its rest,” as the Medieval Irish tale “The Tain” records.  (Note, however, that our source here is only from the 12th century, and is specific to Ireland.)  The problem is that the historical evidence is not so neat.

A heavy migration of Irish to the Scottish Highlands and Islands in the early Middle Ages introduced the celebration of Samhain there, but the earliest Welsh (also Celtic) records afford no significance to the same dates.  Nor is there any indication that there was a counterpart to this celebration in Anglo-Saxon England from the same period.

So the best we can say is that, by the 10th century or so, Samhain was established as an Irish holiday denoting the end of summer and the beginning of winter, but that there is no evidence that November 1 was a major pan-Celtic festival, and that even where it was celebrated (Ireland, Scottish Highlands and Islands), it did not have any religious significance or attributes.

As uncertain as the supposed Celtic origins of the holiday are, its "Christianization" by a Roman Church determined to stomp out ties to a pagan past is even more problematic.

It is assumed that because the Western Christian churches now celebrate All Saints Day on November 1st—with the addition of the Roman Catholic All Souls Day on November 2nd—there must have been an attempt by the clergy of the new religion to co-opt and supplant the holy days of the old.  After all, the celebrations of the death of the saints and of all Christians seem to directly correlate with the accumulated medieval suggestions that Samhain celebrated the end and the beginning of all things, and recognized a lifting of the veil between the natural and supernatural worlds.

The problem is that All Saints Day was first established by Pope Boniface IV on 13 May, 609 (or 610) when he consecrated the Pantheon at Rome.  It continued to be celebrated in Rome on 13 May, but was also celebrated at various other times in other parts of the Western Church, according to local usage (the medieval Irish church celebrated All Saints Day on April 20th).

Its Roman celebration was moved to 1 November during the reign of Pope Gregory III (d. 741), though with no suggestion that this was an attempt to co-opt the pagan holiday of Samhain.  In fact, there is evidence that the November date was already being kept by some churches in England and Germany as well as the Frankish kingdom, and that the date itself is most probably of Northern German origin.

Thus the idea that the celebration of All Saints Day on November 1st had anything to do either with Celtic influence or Roman concern to supersede the pagan Samhain has no historical basis: instead, Roman and Celtic Christianity followed the lead of the Germanic tradition, the reason for which is lost to history.

The English historian Ronald Hutton concludes that, while there is no doubt that the beginning of November was the time of a major pagan festival that was celebrated in all of the pastoral areas of the British Isles, there is no evidence that it was connected with the dead, and no proof that it celebrated the new year.

By the end of the Middle Ages, however, Halloween—as a Christian festival of the dead—had developed into a major public holiday of revelry, drink, and frolicking, with food and bonfires, and the practice of “souling” (a precursor to our modern trick-or-treating?) culminating in the most important ritual of all: the ringing of church bells to comfort the souls of people in purgatory.

The antics and festivities that most resemble our modern Halloween celebrations come directly from this medieval Christian holiday: the mummers and guisers (performers in disguise) of the winter festivals also being active at this time, and the practice of “souling” where children would go around soliciting “soul cakes” representing souls being freed from purgatory.

The tricks and pranks and carrying of vegetables (originally turnips) carved with scary faces (our jack o’ lanterns) are not attested to until the nineteenth century, so their direct link with earlier pagan practices is sketchy at best.

While the Celtic origins of Samhain may have had some influence on the celebration of Halloween as it began to take shape during the Middle Ages, the Catholic Christian culture of the Middle Ages had a much more profound effect, in which the ancient notion of the spiritual quality of the dates October 31st/November 1st became specifically associated with death—and later with the macabre of more recent times.

Thus modern Halloween is more directly a product of the Christian Middle Ages than it is of Celtic Paganism. That some deny its rootedness in Christianity, or deride its essence as pagan, is more an indication of how these groups feel about medieval Catholic Christianity than about Celtic Paganism (about which we know so very little).

And to the extent that we fail to realize just how Christian many of these practices and festivities were, we fail to see how successful the Reformation and the later movements of pietism and rationalism have been in redefining exactly what "Christian" is.

As such, Halloween is no less Christian and no more Pagan than either Christmas or Easter.

Happy trick-or-treating!

Spiders and Toads

By Marc Williams

Laurence Olivier as Richard III.

“Now is the winter of our discontent
Made glorious summer by this sun of York.”
~Richard III (Act 1, scene 1).

King Richard III is among Shakespeare's greatest villains. Based on the real-life Richard of Gloucester, Shakespeare's title character murders his way to the throne, bragging about his deeds and ambitions to the audience in some of Shakespeare's most delightful soliloquies. Shakespeare's Richard is famously depicted as a hunchback, and uses his physical deformity as justification for his evil ambitions:

Why, I, in this weak piping time of peace,
Have no delight to pass away the time,
Unless to spy my shadow in the sun
And descant on mine own deformity:
And therefore, since I cannot prove a lover,
To entertain these fair well-spoken days,
I am determined to prove a villain
And hate the idle pleasures of these days.

For stage actors, Richard III is a tremendously challenging role. On one hand, he is pure evil—but he must also be charming and likeable. If you aren’t familiar with the play, its second scene features Richard successfully wooing Lady Anne as she grieves over her husband’s corpse! And Richard is her husband’s killer! Shakespeare’s Richard is both evil and smooth.

Simon Russell Beale as Richard III.

Actors must also deal with the issue of Richard’s physical disability. For instance, Richard is described as a “poisonous bunch-back’d toad,” an image that inspired Simon Russell Beale’s 1992 performance at the Royal Shakespeare Company, while Antony Sher’s iconic 1984 interpretation was inspired by the phrase “bottled spider,” an insult hurled at Richard in Act I.

Antony Sher's "bottled spider" interpretation of Richard III.

While much of the historical record disputes Shakespeare’s portrayal of Richard as a maniacal mass-murderer, relatively little is known about Richard’s disability. According to the play, Richard is a hunchback with a shriveled arm. However, there is little evidence to support these claims.

This uncertainty may soon change. Archaeologists in Leicester, England have uncovered the remnants of a chapel that was demolished in the 16th century. That chapel, according to historic accounts of Richard’s death at the Battle of Bosworth, was Richard’s burial site. Not only have researchers found the church, but they have also located the choir area, where Richard’s body was allegedly interred. And indeed, last week, the archaeologists uncovered bones in the choir area:

If the archaeologists have indeed found the remains of Richard III, the famous king was definitely not a hunchback. It appears he suffered from scoliosis—a lateral curve or twist of the spine—but not from kyphosis, which is a different kind of spinal curvature that leads to a pronounced forward-leaning posture. As Dr. Richard Taylor explains in the video, the excavated remains suggest this person would have appeared to have one shoulder slightly higher than the other as a result of scoliosis.

Interestingly, Ian McKellen’s performance as Richard III, captured in Richard Loncraine’s 1996 film, seems to capture the kind of physical condition described by Dr. Taylor, with one shoulder slightly higher than the other. At the 6:45 mark in this video, one can see how McKellen dealt with Richard’s condition.

So it appears Shakespeare not only distorted historical details in Richard III, he also apparently distorted the title character’s shape. Of Shakespeare’s Richard, McKellen wrote:

Shakespeare's stage version of Richard has erased the history of the real king, who was, by comparison, a model of probity. Canny Shakespeare may well have conformed to the propaganda of the Tudor Dynasty, Queen Elizabeth I's grandfather having slain Richard III at the Battle of Bosworth. Shakespeare was not writing nor rewriting history. He was building on his success as the young playwright of the Henry VI trilogy, some of whose monstrously self-willed men and women recur in Richard III.

It seems likely that Shakespeare wanted Richard to seem as evil as possible in order to flatter Queen Elizabeth I, depicting her grandfather as England’s conquering hero. But why distort Richard’s physical disability as well?

In describing Richard’s body shape, it is difficult to ascertain what Shakespeare’s motives might have been and perhaps even more difficult to assess his attitudes toward physical difference in general. For example, in my “Big Plays, Big Ideas” class in the BLS program, we discuss the issue of race in Othello, even though we don’t know much about what Shakespeare thought about race. Many scholars have investigated the subject of physical difference in Shakespeare, of course: there are papers on Richard’s spine, naturally, but also Othello’s seizures, Lavinia’s marginalization in Titus Andronicus after her hands and feet are severed, the depiction of blindness in King Lear, and even Hermia’s height in A Midsummer Night’s Dream. And just as one must ask, “is Othello about race,” we might also ask, “is Richard III about shape?” I doubt many would argue that physical difference is the primary focus of Shakespeare’s Richard III, but it will be interesting to observe how the apparent discovery of Richard’s body will affect future performances of the play. Will actors continue to twist their bodies into “bottled spiders,” or will they focus on the historical Richard’s scoliosis—and perhaps ask why such vicious language is used to describe such a minor difference?

Resurgence of the American Right

By Claude Tate

In my BLS class, "Self, Society, and Salvation," we devote a unit to problems of society. In that unit we look at three different approaches to organizing society. Each approach is designed to promote what it believes to be the best society. The focus of aristocratic theory is to create an orderly society. We next look at liberalism, which seeks to create a society as free as possible. Our final lesson is on socialism, which strives to create a society that is fair and just. In each discussion we examine how those three approaches are manifested in America. In our lessons on liberalism and socialism in particular, I try to emphasize how we have struggled from the start over how to create a society that is not only as free as possible, but also fair and just. I also try to emphasize how we see that struggle played out in the news every day. This post concerns that struggle, a struggle that will only grow more intense in the coming months as the 2012 elections near. And that intensity will be due in large part to the resurgence of the American Right.

In 2008 our economy suffered, for want of a better term, a meltdown that impacted every area of the economy. The specifics of what caused the "Great Recession" will be argued over for years to come, much like the specifics of what brought on the Great Depression. But it is safe to say that the 2008 collapse was ideologically driven. The drive for lower taxes and less regulation of business, which began to gain traction in the '70s and steadily gained ground through the '80s and '90s, dominated policy under the administration of George W. Bush. Taxes were reduced. Regulations that could be eliminated were, areas where new regulations were needed were ignored, and people were put in charge of our regulatory agencies who had spent much of their careers fighting the very agencies they now led. Our yearly budget surplus was quickly turned into a yearly deficit as revenues coming into the government failed to keep up with our spending. And businesses from Main Street to Wall Street were allowed to conduct business as they saw fit. The gap between the rich and the middle class, which had begun growing decades earlier, widened at an increasing rate. And both the government and individual citizens were going deeper into debt. But the stock market was soaring. All was well. And if we had any doubts, President Bush and Treasury Secretary Hank Paulson assured us the economy's fundamentals were strong. In retrospect their statements sounded much like the statements the leaders of business were making in September and October of 1929, weeks before the crash that signaled the beginning of the Great Depression.

But just as with 1929, statements of reassurance could not stop what happened to the economy in the fall of 2008. Capitalism depends on a sound banking system, and our major banks were failing. Both Democrats and Republicans agreed the government had to help them survive. The economy as a whole was beginning to go into a tailspin, a tailspin that it was feared would end in another Depression if the banks failed. Thus TARP was passed under President Bush. Taxpayer money flowed to the major banks to save them from a catastrophe of their own making. And it should be noted that our actions were not unique; other Western nations took similar steps.

The American public did not rise up in protest. In fact, Democrats dominated the election of 2008 as they not only won the Presidency, but also increased their majority in the House and clearly took control of the Senate. Prior to the election, Democrats had relied upon two independents with whom they caucused to control the Senate by a 51 to 49 margin. Virtually everyone agreed: the American Right was dead.

But the banking situation had not stabilized when President Obama took office in January of 2009, so he continued TARP. And within weeks the Right started to show signs of life. The defibrillator was the set of policies enacted to help the nation recover. TARP led the way. We began to hear of banks that were being given taxpayer money to survive handing out large bonuses to many top executives. Their justification was that they needed to hold onto their talent, the same talent that had almost destroyed them. The American public began to grow restless. A movement was born, the Tea Party, which embraced the very ideology that had caused the collapse. Anger that one would think should have been directed at Wall Street and at the business practices that caused the collapse was instead redirected at the government. A stimulus bill, the American Recovery and Reinvestment Act of 2009, was passed, but due to Republican opposition the final bill was too small to have a substantial impact on recovery. It mitigated the impact of the economic downturn and saved many jobs, but it was not enough to pull us out. The Dodd-Frank Wall Street Reform and Consumer Protection Act, designed to tighten regulations on banks and close loopholes that had allowed for dangerous speculation, passed, but with opposition and resentment. And to save the automobile industry, the government loaned money to General Motors and Chrysler. What in the past would have been hailed as necessary was now condemned as government interference in the private sector. Both are paying their loans back and GM is once again number one in sales. Thousands of jobs were saved and both are hiring back workers. (But still, Mitt Romney maintains we should have let them fail.) And finally, the Patient Protection and Affordable Care Act, an act very similar to a Republican proposal from the 1990s, was vehemently opposed by Republicans and derided by the Tea Party and others as a government takeover of healthcare. Misinformation was rampant. The government was not going to kill granny. (I loved a sign I saw a woman holding up at one of the "town meetings" which said, "Government, keep your hands off of my Medicare.")

The anger against the government accelerated in the fall of 2010, leading to a Republican victory in the mid-term elections, with Republicans not only taking over the House of Representatives but also many governorships and state legislatures. And many of those newly elected Republican officials vowed to reinstitute the policies that led to the collapse. And that move to the right in the Republican Party has continued. Today we are even hearing of Social Security as we know it being dismantled, Medicare fundamentally altered, funding for our public schools and universities being cut, and our universities called places where the "liberal elite" indoctrinate our young. (I refer to our indoctrination methods as "guerilla teaching," but evidently the right has seen through us.) And that is the tip of the iceberg. How did this happen? How did an economic collapse that one would think should have opened a window of opportunity for the left instead lead to a resurgence of the right?

On January 9th my wife and I were coming home from the mountains, and as we normally do we were listening to NPR (otherwise known as a propaganda tool for the liberal elite that should not be supported by taxpayers). The guest on "The Diane Rehm Show" that day was Thomas Frank. Mr. Frank, writer and former opinion columnist for "The Wall Street Journal," had just published a new book, "Pity the Billionaire," in which he explored the birth of the Tea Party phenomenon. I purchased the book as soon as we got home and read it immediately. His is not the final word on the Tea Party and the rise of the Right, but the book does provide insight and, in my view, is well worth reading.

From the bestselling author of What’s the Matter with Kansas?, a wonderfully insightful and sardonic look at why the worst economy since the 1930s has brought about the revival of conservatism

–From book description on Amazon.com

Economic catastrophe usually brings social protest and demands for change—or at least it’s supposed to. But when Thomas Frank set out in 2009 to look for expressions of American discontent, all he could find were loud demands that the economic system be made even harsher on the recession’s victims and that society’s traditional winners receive even grander prizes. The American Right, which had seemed moribund after the election of 2008, was strangely reinvigorated by the arrival of hard times. The Tea Party movement demanded not that we question the failed system but that we reaffirm our commitment to it. Republicans in Congress embarked on a bold strategy of total opposition to the liberal state. And TV phenom Glenn Beck demonstrated the commercial potential of heroic paranoia and the purest libertarian economics.
In Pity the Billionaire, Frank, the great chronicler of American paradox, examines the peculiar mechanism by which dire economic circumstances have delivered wildly unexpected political results. Using firsthand reporting, a deep knowledge of the American Right, and a wicked sense of humor, he gives us the first full diagnosis of the cultural malady that has transformed collapse into profit, reconceived the Founding Fathers as heroes from an Ayn Rand novel, and enlisted the powerless in a fan club for the prosperous. The understanding Frank reaches is at once startling, original, and profound.

Click here to listen to the interview with Mr. Frank that my wife and I heard on The Diane Rehm Show.

New Year’s Resolutions

By Carrie Levesque

Is it too early to start thinking about New Year’s Resolutions?  It’s not something I usually get to until the holiday insanity is over and the next wave of media bombardment starts pushing gym memberships and Nutrisystem packages.   Though I filter out most of the blah blah blah, I do get to thinking about how to best use this gift of a new year.

While procrastinating online a few days ago, I came across an article, “Five lessons learned from living in Paris.”   I was struck by its epigraph, a quote from Hemingway’s letters, where he writes, “Paris is so very beautiful that it satisfies something in you that is always hungry in America.”   Hemingway’s quote and this article on Jennifer L. Scott’s new book, “Lessons from Madame Chic: The Top 20 Things I Learned While Living in Paris”, are both indirectly about Americans’ hunger for beauty and grace in a commerce-driven culture that sometimes doesn’t seem to have a lot of use for either.

Two of Scott’s ‘lessons’ resonate with me as I reevaluate my life and habits at year’s end.  First was her observation that “Parisians often turned mundane aspects of everyday life into something special.”  She recalls how her host father made an event of savoring a bit of his favorite cheese every night at the end of dinner; it was just routine enjoyment of a cheese course, but he was so passionate about it that it became a special ritual whose memory Scott savored after she returned to American life.  Meals tend to be something we rush through in the US, a pit-stop refueling on our way to getting done more of whatever it is we are always rushing to get done.  It seems if we’re looking for anywhere in our lives to slow down and satisfy a spiritual hunger along with our physical hunger, mealtime is a worthy candidate.

She also comments on the European tendency to take more pride and care in one’s dress while owning fewer clothes and accessories than Americans tend to, which simplifies the process of making oneself presentable each day.  This calls to mind web projects like Project 333, one of many movements today exploring the benefits of ‘voluntary simplicity.’   Owning fewer things doesn’t only save you money; it also frees you from having to care for, and about, heaps of stuff, so that you have more time and energy to indulge in things like Scott’s first point: really enjoying and being present for life’s smaller, even routine, pleasures.

OK, so none of this is anything new.  Oceans of ink have already been spilled on these topics (sorry, Ms. Scott).  And yet we still make New Year’s resolutions, and we still hunger and struggle, in all sorts of ways, to be better (whatever each of us decides that means), to improve the world around us.  Which is why at New Year’s I find myself looking to accounts I’ve come across of people closer to home who decided to devote their year to some sort of radical reevaluation of the way we live (like a New Year’s Resolution on steroids), and the lessons they learned from it.

Two of my favorites in the ‘less is more’ genre are Judith Levine’s “Not Buying It: My Year without Shopping” (fairly self-explanatory) and Colin Beavan’s “No Impact Man” (which became a movie in 2009; he and his family sought to live in NYC, for a whole year, in a way that made no environmental impact).    For giving more consideration to the sacred place of food in our lives, I think of Barbara Kingsolver’s wonderful “Animal, Vegetable, Miracle,” her account of eating, for one year, only in-season food produced on or near her Virginia mountain farm. Two other works I might get to this year are A.J. Jacobs’ “The Year of Living Biblically” (in which Jacobs, an agnostic, tries to take literal direction from the Good Book and considers the place of the sacred in our lives) and Sara Bongiorni’s “A Year Without ‘Made in China’: One Family’s True Life Adventure in the Global Economy” (in which she walks the walk of the ‘Buy American’ talk).

While I don’t see myself devoting my year to anything requiring quite this level of discipline (baby steps!), it is inspiring to live vicariously through these authors and reflect on what they gained from their projects.  They and their families forged meaningful new traditions and found the new sense of community that arises when one disconnects a bit from the mainstream economy (potlucks instead of takeout, preparing a fresh meal as a family instead of popping a frozen meal in the microwave, sharing communal resources instead of everyone needing to own and maintain their own whoziwhatsits).   If you’ve got the money, it’s easy to escape to an exotic foreign locale to feed your need for beauty.  But it’s possible that, if we take them more to heart, these authors’ efforts can challenge us to find a beauty in our own communities that leaves us all a little less hungry in America.

My Experience in the BLS Program at UNCG

By Julia Burns, BLS Class of 2012

I woke up this morning and went into the bathroom to do my daily ritual as usual. The only problem I have is looking in the mirror at a very scary soon-to-be 52-year-old! Thinking, I realized today is the 11th of December, and in four more days I will officially be graduated. I did it! I worked hard to earn my Bachelor of Arts degree while earning a living in a reputable job. It took me two years in person and a year online to complete in 3½ years what would normally have taken 4 to 5 years. How did I do it?  The Bachelor of Arts in Liberal Studies program at the University of North Carolina at Greensboro.

When I first enrolled, I thought this was going to be a breeze – a piece of cake. I couldn’t have been more wrong. I never worked so hard in my life. A traditional student can walk to class, take notes, study, test, and interact readily with other students; an online student does not have that luxury. An online article by Terence Loose points out the following seven myths about  online learning:

  • Online courses are easier than in-class courses.
  • You have to be tech-savvy to take an online class.
  • You don’t receive personal attention in online education.
  • You can “hide” in an online course and never participate.
  • You don’t learn as much when you pursue an online degree.
  • Respected schools don’t offer online degrees.
  • Networking opportunities aren’t available through online education.

I compared these seven myths to my experience with online classes. I am technologically illiterate. I received a lot of personal attention in online education. I couldn’t hide in an online course and not participate if I expected to receive a grade and keep my financial aid. I learned more from studying online than I did from attending in person. The University of North Carolina at Greensboro is a well-respected, fully accredited state university. I made some wonderful contacts online, not just on the North Carolina campus but from all across the country and around the world.  Through online classes, I have learned the art of self-discipline, how to prioritize better, and how to write for specific disciplines; I have also developed a stronger interest in all types of literature and gained a great appreciation for all types of anthropology.

Many classes featured heated debates, such as the mock trials in “Great Trials in American History.” These were held live online, and all students had to participate. One night was particularly difficult because terrible thunderstorms and a lot of tornado activity were moving through some parts of the country. The thrill of the storms and the debate combined was really exciting!

What do I intend to do with this online Bachelor of Arts degree? I would like to be a lawyer or a teacher, but for now I have chosen neither. I am currently refreshing my algebra skills to take the GRE and pursue my Master of Arts in Liberal Studies. The law has always fascinated me, and teaching would be a great challenge, but becoming better educated is where I am headed. Who knows? Maybe I will get my PhD.