Tag Archives: film

It’s So Bad It’s Good.

By Claude Tate

I devoted one of my blog entries last fall to a movie, Raise the Red Lantern,  which I felt was not only an excellent movie in and of itself, but a movie of educational value in that it provided a window onto traditional Confucian society in early 20th century China.  In fact, I liked it so much I’ve used it a number of times in classes and recommended it on numerous occasions to my BLS students.

Gong Li in Raise the Red Lantern

This blog entry is also devoted to a movie, but a very different kind of movie. This movie is far removed from excellent. It’s a bad movie. A very bad movie. The general consensus is that it is the worst movie ever made. In fact, it is so bad that even some critics see it as good. For example, the critic Phil Hall, in a review featured on Rotten Tomatoes, said the exceptionally poor quality of the movie made him laugh so much that he could not put it at the top of his ‘worst of’ list. Another source, Videohound’s Complete Guide to Cult Flicks and Trash Pics, states that, “In fact, the film has become so famous for its own badness that it’s now beyond criticism.”

Also, this movie does not have the clear educational value that “Lantern” has. But if one defines educational value somewhat loosely—strike that, very loosely—it is not without some value. In fact, I used it on several occasions in a face-to-face class I used to teach entitled “US History Since 1945”. Since we had time in that class to go beyond the high points that one is limited to in the broad surveys, I tried to include things that would allow students to re-imagine what everyday life was like. Toward that end, when I covered the 1950s, among other things, I brought in clips of some of the old classic TV shows as well as some movies. The atomic bomb and the possibility of nuclear annihilation became a part of our lives during the ’50s, so a number of movies were built around that theme. Some were good, while others were not so good. We also became very much aware of space during the ’50s, so a number of movies were devoted to that theme. Again, some were good, while others were bad. While I mainly used clips, when there was time, I would try to work in an entire movie using one or both of those themes. I tried a couple of the really good ones, but film-making has changed quite a bit since then, so students didn’t seem to appreciate the quality of what they were seeing. So I decided to look for stinkers. Luckily, I not only found a stinker; the stench from this ‘masterpiece’ could encircle the earth several times over. Students really liked the movie, so I thought I would suggest it here.

Plan 9 From Outer Space original poster

It is (music please) Plan 9 From Outer Space (made in 1956, released in 1959). It is a horror movie that incorporated an invasion-from-outer-space theme, as aliens planned to conquer the earth by raising the dead against us. It was conceived, produced, written, and directed by the infamous Edward D. Wood, Jr. After years of criticism, in 1980 Michael Medved and Harry Medved named it the “worst movie ever made” and awarded it their Golden Turkey Award. That same year Ed Wood (who died in 1978) was also posthumously awarded a Golden Turkey Award as the worst director ever. I don’t know whether any of the actors in the movie received any ‘worst’ awards for their performances, but a number of them should have. It also played at a Worst Films Festival in New Orleans, figured prominently in an episode of Seinfeld, and was the centerpiece of the movie Ed Wood (1994), which was directed and produced by Tim Burton and starred Johnny Depp.

Johnny Depp as Ed Wood

For those of you who haven’t seen the movie, I don’t want to spoil the beauty of it by saying too much about the number of ‘gems’ it contains. But a few hints will not hurt. Bela Lugosi was going to make a comeback with this movie, but died after only a few test shots had been filmed. Ed ‘ingeniously’ included those few shots at a number of points in the movie. For other shots, his wife’s chiropractor, who looked nothing like Lugosi, was the stand-in. That is why the DVD released through Image Entertainment states “Almost Starring Bela Lugosi” on the cover. The special effects are also a thing of beauty, as the flying saucers are campy even for the ’50s. And one just has to love how night turns to day and back to night in back-to-back scenes. But if you wish to know more, a plot summary can be found on “The Movie Club Annuals…” website.

I own the DVD. I just had to have a physical copy. But it is in the public domain, and can be accessed on YouTube here.   You should also be able to download it from other sources.

Ed Wood

By the way, Ed Wood made movies before and after this one. I have not seen them, so I cannot attest to their quality, but given the talent for movie-making he showed in Plan 9, and given their titles, I would assume they are bad also. But movies evidently weren’t his only passion. He wrote a large number of books, which I have not read and do not intend to, but from the sampling of titles I’ve seen, he seems to have been just as good at writing books as at making movies. Ed also led an interesting personal life, which you get some hint of in the movie Ed Wood. If you want to find out more about Ed or his other “artistic” endeavors, you’re on your own. I’m only recommending Plan 9 From Outer Space.

If you plan to watch Plan 9, I would suggest you watch Ed Wood first. I think it will help you understand and appreciate both Ed and Plan 9, since it focuses on Ed’s early career and the making of this ‘masterpiece’. By the way, Ed Wood is a very good movie. It won two Academy Awards: one for Best Supporting Actor (Martin Landau) and one for Best Makeup (Rick Baker, Ve Neill, and Yolanda Toussieng). Unfortunately, this movie is not in the public domain, so you will need to rent it.

Vampira (Maila Nurmi) in Plan 9 From Outer Space

I would also suggest you watch it with friends. It’s always more fun to see an awful flick with friends, as they may see things to make fun of that you may miss. So I guess the movie has worth beyond its dubious educational value: it’s a great excuse for friends to get together.

SECAC Art Conference: Coming to Greensboro in 2013

by Ann Millett-Gallant

SECAC, the Southeastern College Art Conference, was founded as a regional arts organization in 1942 and now hosts an annual national conference for artists, art educators and scholars, and art museum professionals.

The organization also publishes The SECAC Review, presents awards for excellence in teaching, museum exhibitions, and artists’ work, and posts opportunities and jobs for art professionals. I have attended and presented at numerous SECAC conferences in the past, in Little Rock, AR; Norfolk, VA; Columbia, SC; and Savannah, GA. The 2012 conference was held in my hometown, Durham, NC, and was sponsored by Meredith College. Conference panels are proposed and selected by panel chairs, and this year, I chaired a panel titled “Disability and Performance: Bodies on Display.” This topic is central to my research and especially my book, The Disabled Body in Contemporary Art.

The Disabled Body in Contemporary Art

My panelists gave presentations on independent films; on the canonical painting by Thomas Eakins, “The Gross Clinic” (1875), and comparable images of disabled war veterans; and on the collection of freak show photographs in the Barnum Museum in Bridgeport, CT. This was my second experience chairing a panel on disability and disability studies at a SECAC conference, topics that are still somewhat new for art historians and professionals. The panel went well and sparked much interest and lively conversation.

I also attended a panel on doppelgangers, or images of doubles or identical pairs, which engaged art-historical examples from diverse contexts and time periods, as well as a panel on self-taught, or outsider, artists. This latter panel was of special interest to me, because my good friend from graduate school at UNC Chapel Hill, Leisa Rundquist, presented a paper on the work of Henry Darger (the link is to works by Darger in the Folk Art Museum, whose administration and education employees hosted the panel). Leisa is now a professor of art history at UNC Asheville, so the conference was also a chance to see her. I especially enjoy SECAC conferences, because I see a lot of old friends and usually meet new and like-minded people.

Thomas Eakins, “The Gross Clinic,” 1875

I didn’t attend as much of the conference as I usually do, ironically, because it was too close to home. On the day before my presentation, my refrigerator broke, so I returned home right after the panel to wait for a new refrigerator to arrive. I attended two panels the next day and caught up with friends over glasses of wine at the bar. I didn’t participate in any of the organized tours of local museums and art venues, as I can see them whenever I want. It was nice not to have to pack for and travel to the conference, especially in light of how stressful and expensive flying has become, but there is something to be said for going to conferences out of town, staying at the conference hotel, and immersing yourself in the atmosphere and activities.

This fall, the conference will be held in Greensboro, NC, so hopefully I will see many of my colleagues from UNCG and the Weatherspoon Art Museum there, as well as, perhaps, my students. I will be chairing a panel titled “Photographing the Body.”

How Free Should Freedom of Speech Be?

by Matt McKinnon

Only now, weeks after the blatantly anti-Islamic film “Innocence of Muslims” was posted on YouTube, making headlines and spawning violent reactions from Muslims across the globe, have tensions begun to ease a bit. Oh, to be sure, mainstream media and American attention have moved on, only to return when the next powder keg blows, while much of the rest of the world is left to grapple with serious questions of rights and responsibilities in this new age of technology.

Riots in Libya in response to “Innocence of Muslims”

These recent riots across the Muslim world bring into high relief serious questions about the freedom of speech, for us as citizens of the United States, as well as participants in a global society made smaller and smaller with the advance of technology.

The problem, at first glance, seems like the usual violent overreaction by Muslim extremists whose narrow view of religion and politics seeks only to protect their view at the expense of the rights of others who may disagree.

Salman Rushdie

We are quickly reminded of the fatwa pronounced on Salman Rushdie  for the horrific action of writing a novel (itself a work of fiction), as well as the assassination of Dutch director Theo van Gogh for his work “Submission” about the treatment of women in Islam.

But a deeper look into this issue, instead of bringing clarity and self-assured anti-jihadist jihad, reveals a real problem that is not so black and white, not so clearly one of the fundamental right of freedom of speech versus ignorant fundamentalism, but rather one that is complicated, with subtleties and nuances not conducive to entertainment-news sound bites and the glib remarks of politicians.

The question becomes not so much what freedoms we as U.S. citizens have with respect to our Constitution and domestic laws, but rather to what extent we, as participants in a larger global society, should respect the various notions of freedom of speech at work in countries and cultures different from our own.

Theo van Gogh

For, when we look closer at the latest incident itself, instead of finding the literary musings of a great novelist or the social critique of a world-class film director, we find a joke of a movie—though not really a movie at all: more like a few scenes of such low quality and incoherence that one doubts the existence of a larger work. Some have called it “repugnant,” but what makes it so is not its content, which can scarcely be taken seriously, but rather the intention behind it.

The title itself (Innocence of Muslims) makes little sense—if predicated of the film’s contents. However, if the title was a display of ironic/sarcastic foresight, then it fits all too well. It would be hard to find anyone in the Western world who would take the film seriously. There is no plot, no character development, in fact, no real characters—only thinly disguised stereotypes meant to offend.

And that—the intention to offend—seems to be the only real purpose of the work itself.

Now it must be said that intention to offend is not, in and of itself, always a bad thing.  And the intention to offend religion especially is not either.  (I find myself doing both as often as possible.)  The problem arises when the intention to offend is also an intention to provoke, not discussion and debate, a la Rushdie and van Gogh, but violence and riot.

A few details are warranted:

As it turns out, the video was posted on YouTube in early July 2012 as “The Real Life of Muhammad” and “Muhammad Movie Trailer,” but received little or no attention. It was then dubbed into Arabic, re-titled with the aforementioned ironic/sarcastic name, and broadcast on Egyptian television on September 9th—two days before the eleventh anniversary of 9/11.

And the rest, as they say, is history.

(Except that it’s still going on, and will continue to escalate in the future.)

Holocaust survivors in Skokie, IL

So why is this case NOT the same as that of Rushdie and van Gogh, overlooking of course the artistic and social merit of those two? Well, it strikes me that this incident is less like writing a book or making a movie criticizing specific points of any (and all) religions and more like yelling “Fire!” in a crowded theatre or assembling Nazis to march through Jewish neighborhoods in Skokie, IL, in 1977.

The first are clearly examples of protected free speech, the second is not, and the third is still hotly debated 35 years after the case went to court.

Here is where American jurisprudence only helps so far.  For U.S. laws are clear that free speech can be limited if the immediate result is incitement to riot.  (This is what the city of Skokie argued—and lost—in their case against the Nazis: that their uniformed presence in a neighborhood with a significant number of Holocaust survivors would incite riot.)

Members of Westboro Baptist Church

To complicate matters, freedom-of-speech laws are even more restrictive in other countries—and not just in those “narrow-minded,” theocratic Muslim ones either. In fact, Canada as well as much of Europe has laws that significantly restrict the exercise of free speech. For example, the infamous members of Westboro Baptist Church, who recently won a Supreme Court case supporting their right to picket the funerals of U.S. soldiers, are not allowed to protest north of the border in Canada, as that country’s hate speech laws forbid such activity. (Many in the U.S. disagreed with SCOTUS’s ruling and were in favor of restricting free speech in this case.)

And in most European countries, publicly denying the Holocaust is a crime.  As is racist hate speech (just ask Chelsea footballer John Terry).

The production and “marketing” of “Innocence of Muslims” might or might not meet the criteria for hate speech, but that is neither my concern nor my point.

The fact that the video was produced with the intention to inflame Muslims, and that when it initially failed to do so it was dubbed into Arabic and specifically presented to Egyptian television to be broadcast to millions, makes it hard to deny that its real intention was to promote and incite riot.

The missing pieces here are the internet and technology—and the laws that have failed to keep up with them. For with social media, YouTube, smartphones, iPads, Skype, and the rest, placing an inflammatory video in the right hands with the capability to reach millions almost instantaneously just may be the 21st-century version of standing on a street corner inciting folks to riot.

In fact, it may be worse.

Not recognizing this, I fear, will have even more dire consequences in the future.

__________

Editor’s note: The usual practice in the BLS Program is to provide direct links to primary sources when possible. However, in the case of the video discussed in this entry, we decided it would be imprudent to link to it directly. If you want to view the video for yourself, a search of the title will lead you to the original posting, various repostings, and sundry articles and editorials about the video and its aftermath.

Spiders and Toads

By Marc Williams

Laurence Olivier as Richard III.

“Now is the winter of our discontent
Made glorious summer by this sun of York.”
~Richard III (Act 1, scene 1).

King Richard III is among Shakespeare’s greatest villains. Based on the real-life Richard of Gloucester, Shakespeare’s title character murders his way to the throne, bragging about his deeds and ambitions to the audience in some of Shakespeare’s most delightful soliloquies. Shakespeare’s Richard is famously depicted as a hunchback, and he uses his physical deformity as justification for his evil ambitions:

Why, I, in this weak piping time of peace,
Have no delight to pass away the time,
Unless to spy my shadow in the sun
And descant on mine own deformity:
And therefore, since I cannot prove a lover,
To entertain these fair well-spoken days,
I am determined to prove a villain
And hate the idle pleasures of these days.

For stage actors, Richard III is a tremendously challenging role. On one hand, he is pure evil—but he must also be charming and likeable. If you aren’t familiar with the play, its second scene features Richard successfully wooing Lady Anne as she grieves over the corpse of her father-in-law, King Henry VI! And Richard is the man who murdered both Henry and her husband! Shakespeare’s Richard is both evil and smooth.

Simon Russell Beale as Richard III.

Actors must also deal with the issue of Richard’s physical disability. For instance, Richard is described as a “poisonous bunch-back’d toad,” an image that inspired Simon Russell Beale’s 1992 performance at the Royal Shakespeare Company, while Antony Sher’s iconic 1984 interpretation was inspired by the phrase “bottled spider,” an insult hurled at Richard in Act I.

Antony Sher’s “bottled spider” interpretation of Richard III.

While much of the historical record disputes Shakespeare’s portrayal of Richard as a maniacal mass-murderer, relatively little is known about Richard’s disability. According to the play, Richard is a hunchback with a shriveled arm. However, there is little evidence to support these claims.

This uncertainty may soon change. Archaeologists in Leicester, England have uncovered the remnants of a chapel that was demolished in the 16th century. That chapel, according to historical accounts of Richard’s death at the Battle of Bosworth, was Richard’s burial site. Not only have researchers found the church, but they have also located the choir area, where Richard’s body was allegedly interred. And indeed, last week, the archaeologists uncovered bones in the choir area.

If the archaeologists have indeed found the remains of Richard III, the famous king was definitely not a hunchback. It appears he suffered from scoliosis—a lateral curve or twist of the spine—but not from kyphosis, which is a different kind of spinal curvature that leads to a pronounced forward-leaning posture. As Dr. Richard Taylor explains in the video, the excavated remains suggest this person would have appeared to have one shoulder slightly higher than the other as a result of scoliosis.

Interestingly, Ian McKellen’s performance as Richard III, preserved in Richard Loncraine’s 1996 film, seems to reflect the kind of physical condition described by Dr. Taylor, with one shoulder slightly higher than the other. At the 6:45 mark in this video, one can see how McKellen dealt with Richard’s condition.

So it appears Shakespeare not only distorted historical details in Richard III; he also distorted the title character’s shape. Of Shakespeare’s Richard, McKellen wrote:

Shakespeare’s stage version of Richard has erased the history of the real king, who was, by comparison, a model of probity. Canny Shakespeare may well have conformed to the propaganda of the Tudor Dynasty, Queen Elizabeth I’s grandfather having slain Richard III at the Battle of Bosworth. Shakespeare was not writing nor rewriting history. He was building on his success as the young playwright of the Henry VI trilogy, some of whose monstrously self-willed men and women recur in Richard III.

It seems likely that Shakespeare wanted Richard to seem as evil as possible in order to flatter Queen Elizabeth I, depicting her grandfather as England’s conquering hero. But why distort Richard’s physical disability as well?

In describing Richard’s body shape, it is difficult to ascertain what Shakespeare’s motives might have been, and perhaps even more difficult to assess his attitudes toward physical difference in general. For example, in my “Big Plays, Big Ideas” class in the BLS program, we discuss the issue of race in Othello, even though we don’t know much about what Shakespeare thought about race. Many scholars have investigated the subject of physical difference in Shakespeare, of course: there are papers on Richard’s spine, naturally, but also on Othello’s seizures, Lavinia’s marginalization in Titus Andronicus after her hands are severed and her tongue cut out, the depiction of blindness in King Lear, and even Hermia’s height in A Midsummer Night’s Dream. And just as one must ask, “Is Othello about race?” we might also ask, “Is Richard III about shape?” I doubt many would argue that physical difference is the primary focus of Shakespeare’s Richard III, but it will be interesting to observe how the apparent discovery of Richard’s body will affect future performances of the play. Will actors continue to twist their bodies into “bottled spiders,” or will they focus on the historical Richard’s scoliosis—and perhaps ask why such vicious language is used to describe such a minor difference?

Nimrod: What’s In a Name?

by Matt McKinnon

My teenage son is a nimrod. Or so I thought.

And if you have teenagers, and were born in the second half of the twentieth century, you have probably thought at one time or another that your son (or daughter) was a nimrod too, and would not require any specific evidence to explain why.

Of course this is the case only if you are of a certain age: namely, a Baby Boomer or Gen Xer (like myself).

For if you are any older, and if you are rather literate, then you would be perplexed as to why I would think that my son was a nimrod, and why I was not capitalizing Nimrod as it should be. Since it is, after all, a proper noun.

It is?

Yes, it is.  Or rather it was.  Let me explain.

It turns out, the word “nimrod” (or more properly “Nimrod”) has a fascinating history in which it undergoes a substantial reinterpretation. (Any nimrod can find this out by searching the web, though there is precious little explanation there.) This, by itself, isn’t surprising, as many words that make their way through the ages transform as well. But the transformation of “Nimrod” to “nimrod” is particularly interesting in what it tells us about ourselves and our culture.

Nimrod, you see, was a character from the Hebrew Scriptures, or as Christians know it, the Old Testament:

“Cush became the father of Nimrod; he was the first on earth to become a mighty warrior. He was a mighty hunter before the LORD; therefore it is said, ‘Like Nimrod a mighty hunter before the LORD.’” (Genesis 10:8-9 NRSV)

This is the manner in which older biblically literate folks will understand the term: “as a mighty hunter.”

But there’s more here, for these folks also might understand the term as referencing a tyrannical ruler.

Why? Well, the etymology of the word links it to the Hebrew for “to rebel,” not for anything that Nimrod actually does in the Old Testament, but because, as many scholars attest, it is probably a distortion of the name of the Mesopotamian war-god Ninurta. And the later chroniclers of Israelite religion didn’t have much sympathy for the polytheism of their Mesopotamian neighbors—especially when it so obviously informed their own religious mythology.

So the word, when it very early on enters the biblical narrative, already shows signs of transformation and tension as referencing both a mighty hunter as well as someone rebellious against the Israelite god.

In fact, Jewish and Christian tradition names Nimrod as the leader of the folks who built the Tower of Babel, though this is not found anywhere in the scriptures. This, then, is how Nimrod is now portrayed in more conservative circles, despite the lack of biblical attestation.

And as the word is already attested to in Middle English, by the 16th century it is clearly being used in both manners in the English language: as a tyrant and as a great warrior or hunter.

Now I can assure you, neither of these describes my teenage son.  So what gives?

Well, “Nimrod” shows up in a 1932 Broadway play (which had only 11 performances) about two lovesick youngsters:

“He’s in love with her. That makes about the tenth. The same old Nimrod. Won’t let her alone for a second.”

Here, however, the emphasis is still on the term’s former meaning as a hunter, though its use in the play to describe a somewhat frivolous and hapless fellow who moves from one true love to the next points us in the right direction.

And in the 1934 film You’re Telling Me, W.C. Fields’ character, a bit of a buffoon himself (and a drunkard), takes a few swings with a limp golf club and hands it back to his dim-witted caddy, saying in a way only W.C. Fields could:

“Little too much whip in that club, nimrod.”

So here we have the first recorded instance of the word’s transformation from a great hunter or tyrant to a stupid person or jerk.

But that’s not the end of the story. After all, how many of us have seen You’re Telling Me? (I hadn’t, at least not until I did the research.)

So the last, and arguably most important, piece of the puzzle is not the origination of the word or its transformation, but rather its dissemination.

And that disseminator, as I’m sure many of you are aware, is none other than TV Guide’s greatest cartoon character of all time: Bugs Bunny, who debuted in the 1940s, not that long after You’re Telling Me premiered.

In this context, the one most folks born after World War II are familiar with, Bugs Bunny refers to the inept hunter Elmer Fudd as a “little nimrod.”  And the rest, as they say, is history.

For what emerges from Bugs’ usage is not the traditional reference to Fudd as a hunter (though this is the obvious, albeit ironic, intention), but rather Fudd’s more enduring (and endearing?) quality of ineptitude and buffoonery.

And anyone who has (or knows) a teenager can certainly attest to the applicability of this use of the term in describing him or her.

But the important thing is what this says about literacy and our contemporary culture.

For whereas my parents’ generation and earlier were more likely than not to receive their cultural education from Classical stories, the great literature of Europe, and the Bible, those of us born in the latter half of the 20th century and later are much more likely to receive our cultural education from popular culture.

I have seen this firsthand when teaching, for example, Nietzsche’s “Thus Spake Zarathustra,” which offers a critique of Judaism and Christianity by parodying scripture.  The trouble is, when students don’t know the referent, they can’t fully understand or appreciate the allusion.  And this is as true of Shakespeare and Milton as it is of Nietzsche…or Bugs Bunny for that matter.

And the ramifications of this are far greater than my choosing the proper term to criticize my teenage son.

(Though ya gotta admit, “nimrod” sounds pretty apropos.)

And the Oscar Goes To…

By Marc Williams

This Sunday, the Hollywood glitterati will turn out for their annual jubilee, the Academy Awards. While I’ve never been a fan of award shows (see my post from last summer regarding the Tony Awards), I certainly view an Oscar as the highest recognition in the entertainment industry. While lots of quality work goes unrecognized by the Academy each year, I still regard an Oscar nomination as some validation of quality work.

In the decade or so after I finished high school, I took that validation quite seriously. I made a point of seeing all of the Oscar-nominated films before the awards ceremony. I would definitely see all the Best Picture nominees, but I tried to see the documentaries and foreign films too. Indeed, I saw many terrific films I wouldn’t have otherwise seen. In 1998, everybody saw Titanic (the Best Picture winner, among its many other wins), but I hadn’t seen L.A. Confidential until after it received nine Oscar nominations. In 2004, it was a Best Director Oscar nomination for Fernando Meirelles that prompted me to view City of God, which I now count among my favorite films of all time.

When I first started trying to see all the Oscar-nominated films, my motivation was largely snobbish. I felt I earned a certain cultural cachet from seeing all of the “great” films of the year, especially the obscure films my friends hadn’t heard of. Admittedly, there were many times I forced myself to sit through movies in which I wasn’t remotely interested. In earning my status as a highly cultured individual, I figured I had to pay the price of boredom. I suffered through The Red Violin, Gangs of New York, and many other Oscar-nominated films, hating every minute of them.

Naturally, circumstances change. I can’t fit self-imposed boredom into my schedule anymore. Nowadays I find it exceedingly difficult to go to the movies at all. My wife and I try to watch films at home, but that can be challenging with a two-year-old asleep down the hall. Not surprisingly, we’ve fallen behind on all the movies we want to see–we’ve learned from experience that Netflix only allows users to put 500 movies in the DVD queue. I doubt we’ll ever catch up. The result of our changing circumstances is a need to prioritize our film viewing and spend time only with stories we find truly fascinating–the Netflix queue is getting pared down to the essentials, and we make very careful choices when we are able to make a rare trip to the movie theatre.

Of the eight films nominated for Best Picture this year, I’ve seen half. In recent years, I’ve seen far fewer.

In 2009, I had seen only two of the eight nominated films at the time of the ceremony. This year, I’ve seen Hugo, The Artist, The Descendants, and Midnight in Paris. I doubt I will ever see Moneyball or Extremely Loud & Incredibly Close because I’m just not interested. And this, for me, is how the Oscars reflect my changing views on achieving “cultured” status. I’m not willing to endure boredom in exchange for this status.

I’m convinced, however, that I’m not the only person who has consumed boring art for snobbish reasons. In fact, I believe many of us go to the theatre, museums, or obscure films with boredom as an objective: “If I can withstand this boredom for two hours, I’ve paid my cultural debt to society.” I agree the arts are vital to communities, to self-awareness, and to communication, but if the work isn’t engaging, interesting, or in some way entertaining, how valuable can it be?

I’ve seen this phenomenon at work in some of my BLS courses. My Eye Appeal students, for instance, are required to attend a live performance in their community. My hope is that the assignment will be fun–and for most of my students, this assignment is the highlight of the course. But sometimes, students attend events in which they clearly aren’t interested. Perhaps they are trying to impress me with their sophistication, attending a ballet or opera that they secretly despise, hoping to manufacture some cultural credibility?

Have you ever suffered boredom for the sake of feeling cultured?

A Window onto a Confucian Society

By Claude Tate

Movies are great tools for those of us who tell the stories of humanity’s past and present. Consequently, throughout my career I have constantly been on the prowl for movies I could use in my classes. I first became acquainted with “Raise the Red Lantern” in a MALS class I took here at UNC-G under Professor Tony Fragola. Besides being an excellent movie in and of itself, I’ve found it to be a valuable addition to units I’ve taught on both China and Confucianism. When I taught World History, I used it on numerous occasions to introduce my unit on China. I do not use it in any of my BLS classes. But when we do our lesson on the Confucian approach to organizing the state in my “Self, Society, and Salvation” course, students sometimes want to know what a society built on Confucian principles would actually look like in practice. I haven’t hesitated to recommend this movie. I can think of no other resource that brings those age-old principles to life to the degree that this movie can. Zhang Yimou portrays not only the force and oppressiveness of the culture that evolved in China, in which everyone has a clearly defined role, but also how people cope within this highly structured society, and what happens to those who rebel.

One note… Many people avoid movies that are subtitled, but Zhang Yimou is so effective in telling his story with the lens of his camera that one can understand the movie completely without reading a single subtitle.

Click here to read a 1996 review of this movie by James Berardinelli, and the trailer is below:

One more thing…I love this movie. It is well worth your time even if you are not viewing it for academic purposes.

Beyond Apple

By Marc Williams

Following Steve Jobs’ passing on October 5, countless articles, blogs, and remembrances have paid tribute to Jobs’ contributions to the technology industry. I could certainly join that chorus, given that I do so much of my online teaching for the BLS program from my iMac, iPad, and iPhone. I’m a dedicated and unabashed Mac user and I thank Steve Jobs for helping make life a little simpler through these brilliant gadgets.

Somewhat overlooked in the tributes to Jobs are his contributions as the owner and CEO of Pixar. Jobs purchased Pixar, which at the time was developing 3D animation hardware for commercial use, from Lucasfilm in 1986. Pixar’s hardware development and sales business was not terribly successful, and the company began selling off divisions and laying off employees.

During this period of struggle, Jobs was willing to consider a proposal from one of his employees, John Lasseter. Lasseter, an animator who had been fired from Disney and subsequently hired by Lucasfilm, had been experimenting with short films and advertisements, all completely animated by computer. Lasseter pitched Jobs on the idea of creating a computer-animated feature film. Such an endeavor would be tremendously risky for a company like Pixar, which had not been organized as a film studio. Not only did Jobs sign off on the idea, but he was able to secure a three-picture deal with Walt Disney Feature Animation. Jobs shifted the company’s primary focus to making movies.

The feature film Lasseter pitched to Jobs became Toy Story, and he went on to direct A Bug’s Life, Toy Story 2, Cars, and Cars 2. Lasseter also served as executive producer for virtually every other Pixar film (Finding Nemo, The Incredibles, WALL-E, Up, Toy Story 3) and he is now the Chief Creative Officer for Walt Disney and Pixar Animation Studios.

There’s no denying Lasseter’s genius but Pixar as we know it today would not exist without Steve Jobs. Jobs took a tremendous leap of faith, entrusting the company’s future to the brilliance of his collaborators.

A leader doesn’t necessarily have to be the person with the best idea; sometimes the leader simply must recognize the best idea in the room—and get out of the way. Harry Truman said, “It is amazing how much you can accomplish in life if you don’t mind who gets the credit.” The story of Pixar demonstrates that Steve Jobs fully understood this simple truth.

John Lasseter

“Steve Jobs was an extraordinary visionary, our very dear friend and the guiding light of the Pixar family. He saw the potential of what Pixar could be before the rest of us, and beyond what anyone ever imagined. Steve took a chance on us and believed in our crazy dream of making computer animated films; the one thing he always said was to simply ‘make it great.’ He is why Pixar turned out the way we did and his strength, integrity and love of life has made us all better people. He will forever be a part of Pixar’s DNA. Our hearts go out to his wife Laurene and their children during this incredibly difficult time.”

- John Lasseter, Chief Creative Officer & Ed Catmull, President, Walt Disney and Pixar Animation Studios

A Thousand Faces

By Marc Williams

I teach a number of courses involving dramatic literature, including Big Plays, Big Ideas in the BLS program at UNCG.  In most of these classes, I discuss dramatic structure—the way that incidents are arranged into a plot.  Whenever I teach dramatic structure, I always turn to Sophocles’ Oedipus Rex to serve as an example.  Aristotle believed this play to be tragedy “in its ideal state,” partially because the incidents are arranged in a clear cause-and-effect manner.  One incident logically follows the next and although there are some surprises, none of the events are random, accidental, or tangential.

The story of Oedipus is an ancient myth. The 20th-century mythology scholar Joseph Campbell wrote about Oedipus frequently, including in his seminal book, The Hero with a Thousand Faces. In this book, Campbell outlines the “monomyth,” a dramatic structure that many, if not most, stories seem to adhere to in one manner or another. The monomyth consists of several stages of the hero’s journey: a call to adventure, a refusal of that call, followed by aid from a supernatural entity, crossing a threshold into unfamiliar territory, entering/escaping the belly of the whale, traveling a road of trials, and so on, all the way through the hero’s return. Oedipus’ journey follows Campbell’s pattern almost perfectly. The pattern applies not only to Ancient Greek myths but to stories from virtually every culture across the globe.

Campbell describes the stages of the hero’s journey at length in The Hero with a Thousand Faces, and he also diagrams the journey as a circle.

I was instantly reminded of Joseph Campbell and his diagram today when I came across this:

1.  A character is in a zone of comfort
2.  But they want something
3.  They enter an unfamiliar situation
4.  Adapt to it
5.  Get what they wanted
6.  Pay a heavy price for it
7.  Then return to their familiar situation
8.  Having changed

This diagram was developed by Dan Harmon, creator of the NBC sitcom Community, and, according to this article on Wired.com, it is apparently the inspiration for every episode of the show:

Dan Harmon

[Harmon] began doodling the circles in the late ’90s, while stuck on a screenplay. He wanted to codify the storytelling process—to find the hidden structure powering the movies and TV shows, even songs, he’d been absorbing since he was a kid. “I was thinking, there must be some symmetry to this,” he says of how stories are told. “Some simplicity.” So he watched a lot of Die Hard, boiled down a lot of Joseph Campbell, and came up with the circle, an algorithm that distills a narrative into eight steps:

Harmon calls his circles embryos—they contain all the elements needed for a satisfying story—and he uses them to map out nearly every turn on Community, from throwaway gags to entire seasons. If a plot doesn’t follow these steps, the embryo is invalid, and he starts over. To this day, Harmon still studies each film and TV show he watches, searching for his algorithm underneath, checking to see if the theory is airtight. “I can’t not see that circle,” he says. “It’s tattooed on my brain.”

The eight-step Harmon embryo is simpler than Campbell’s monomyth, which contains seventeen structural units, and because it is simpler, it is probably also more universal. And indeed, Harmon uses the embryo as a litmus test to determine whether an episode of Community is structurally sound. It is, after all, a tried-and-true formula for great storytelling. So where else can this structure be seen?

The Wizard of Oz and Star Wars come to mind.   Have you encountered a monomyth on television or in a movie theatre recently?  Or a story that follows Harmon’s embryo model?

The Rise and Fall of 3-D

By Marc Williams

I have always loved going to the movies. Now that my wife and I have a young child at home, we don’t get out as much as we used to, so going to the movies is a particularly special treat. We subscribe to Netflix so we can watch movies at home sometimes, but actually going to a movie theatre for the big-screen experience is a rare thing these days.

Over the past year or two, most of the movies we’ve seen in the theatre were offered in 3-D. Moviegoers nowadays are often given a choice between traditional 2-D and 3-D; we typically opt for the 2-D experience, but we have, of course, occasionally chosen a few 3-D titles. For example, we saw the final installment of the Harry Potter series, Harry Potter and the Deathly Hallows Part II, in both 2-D and 3-D.

How and why did this trend of 3-D films start? 3-D is not a new idea; filmmakers have been experimenting with the technology for nearly one hundred years, and feature films have been offered in 3-D for well over fifty years. But 3-D has become a craze. I think it began with Avatar, which was probably the most successful and critically acclaimed 3-D film of all time. Hollywood knows a good money-making machine when it sees one, so in the wake of Avatar‘s success, the major studios mobilized and started offering more and more 3-D titles. Theatre chains have also found a way to cash in on the phenomenon; ticket prices for 3-D films are significantly higher than prices for standard 2-D films.

While I admired Avatar for its technical achievement, I personally have not been able to embrace the 3-D craze. My personal distaste for 3-D primarily stems from the fact that most 3-D films I’ve seen use the 3-D technology as a cheap gimmick, not as a storytelling device. If there is an explosion in a 3-D film, is the story truly enhanced by making the viewers feel as if shrapnel is headed in their direction?  I don’t see much payoff for this use of 3-D, yet this is precisely how most films choose to employ the technology.

Avatar is a different kind of 3-D film for several reasons. First, the lush physical environment is given tremendous depth through the use of 3-D; this is important because the film is about the beauty and fragility of the environment.  In this regard, Avatar does not use 3-D as a cheap gimmick.  In fact, the film was shot in 3-D; it was part of the director’s plan for the film all along.  Most 3-D films today are not shot in 3-D–they are converted to 3-D from a 2-D format.  Disney’s re-release of The Lion King in 3-D is a great example of this; they’re simply capitalizing on the 3-D craze, offering viewers only a slightly different experience from the original 2-D film.  To me, that experience isn’t worth the extra $5.00 the movie theatre wants to charge for a 3-D ticket.

Another issue with 3-D is the amount of light on the screen.  3-D technology depends upon darkening the picture by about 50%.  Film critic Roger Ebert has been particularly critical of 3-D films specifically for this reason–the picture is simply too dark.  I found this to be true in Harry Potter and the Deathly Hallows Part II when I saw it in 3-D.  Images that are intensely white in the 2-D version (bright light, for instance) lost their luster in the 3-D version, seeming more gray than white.  And given that the 2-D film was already very dark and shadowy, some of the picture was rendered incomprehensibly dark in the 3-D version.  Some of the other technical concerns with 3-D are outlined in this letter to Roger Ebert from Walter Murch, arguably the most distinguished film editor in the industry.

Box office receipts are beginning to show that 3-D is becoming less and less palatable for moviegoers.  This could be a rejection of the extra $5.00 being charged by theatre chains, or perhaps a reaction to dim, dizzying images.  I was surprised to learn, for example, that the 3-D version of Harry Potter and the Deathly Hallows Part II generated only one-third of the revenue generated by the 2-D version of the same film.  Because profits are down, the 3-D craze is likely nearing its end.

The point is that a brilliant and promising technology can die if no one is willing to use it properly.  Director James Cameron made Avatar using processes no one had ever used before in a feature film, so he had to be willing to adjust his typical methods in order to maximize the 3-D technology’s potential. The question all of this raises for me is the employment of new technology in the classroom–especially the online classroom.  Given that the BLS degree program at UNCG is offered online, it seems we instructors and course developers run the risk of adopting technology without really using it to its fullest potential.  Or even misusing it.

I have certainly been guilty of using technology in ways that create obstacles to learning–how can this be avoided? Or better yet, how can technology be used to give students and faculty an advantage? In what ways can technology truly enhance the educational experience, especially online?