Tuesday, July 19, 2016

The Triumph of Theatre

This past spring, after attending a touring-company revival of The Sound of Music at the National Arts Centre, I was reminded of thoughts I’ve had in recent years about how, paradoxically, in the age of the Internet, live performance is being revived as a key part of the entertainment business. 

It is a consequence, of course, of the ability of Internet users to pass digital copies of recorded performances back and forth. This is illegal, but it is nevertheless a fact of life.  According to figures compiled at this site, sales of recordings (compact discs, cassettes, vinyl records) declined from a peak of nearly 20 billion U.S. dollars in 2000 to around six billion inflation-adjusted dollars in 2013.

Thus, in order to be assured of a living, musicians must play live, the only type of performance experience that cannot be adequately pirated or bootlegged.  

The movie industry has also been affected by digital bootlegging, though not as dramatically as recorded music.  

Nevertheless, much of the revenue derived from movies now comes from sources other than actual cinema audiences: while sales of digital-video discs have declined dramatically since their peak in 2004, the slack has been taken up by video-on-demand and related services, which bring the movie-going experience directly to the home.  

Recently, too, it was reported that Star Wars / Star Trek director J.J. Abrams, along with other filmmaking luminaries, was boosting a service that would bring newly released movies directly to the home.

But while the movie-theatre business itself remains moribund, there has been a big revival in live-theatre presentation as well.


Often, "new" musical theatre productions are simply reboots of unsuccessful or long-forgotten motion-pictures.

Last autumn, for example, the Arts Centre hosted the touring production of Newsies. The show, which premiered in 2011, is based on an actual New York City newsboys’ strike at the very end of the nineteenth century, and is evidently successful enough to be taken on tour across the continent. 

Newsies was, however, originally a movie released by Disney in the early 1990s, which (according to Wikipedia) was a complete flop. 

I’ve never seen the movie, nor yet the stage production itself. But this is a reversal of the original practice, wherein Broadway musicals would be made into movies; moreover, the movies now adapted to the stage were often not musicals themselves. 

Newsies originally was a musical, but other re-adaptations were not: such as The Producers, which was a bit of a sensation in the early twenty-first century (and based on a 1967 film starring Gene Wilder), or Hairspray, which premiered in 2002 and was based on a 1988 movie by John Waters (the musical adaptation of which was, in turn, remade into a film a few years later, starring John Travolta as an obese woman). 

Apparently, many stories are more popularly accessible when accompanied by song and dance, and when performed onstage. 

My thoughts in this direction may have been inspired by the book I read recently, No Applause, Just Throw Money, a history of vaudeville by Travis Stewart under the “Trav S.D.” pseudonym. 

Stewart traces vaudeville’s origins to the rough and raunchy stage entertainments of nineteenth-century America. But by the late 1800s, savvy impresarios responded to then-fashionable pleas for moral hygiene. Figuring out that a good, clean stage show, appropriate even for women and children, was far more lucrative than appealing to the smutty instincts of the crowd, theatre-men applied systematic industrial methods to live entertainment. 

As Stewart writes, “The revolution of the ‘double audience’ (appealing to women and children now as well as men) added enormously to the profitability of variety production. It had been achieved chiefly through a public relations coup (courtesy of Barnum and his vaudeville acolytes) the likes of which may never be seen again. But several other innovations not only added to, but multiplied the growth of the vaudeville industry, making its existence a foregone conclusion. Variety, after all, had been small potatoes. Its producers were small businessmen whose dreams didn’t extend any farther than their own saloon doors. Vaudeville, on the other hand, was big business. Following Adam Smith’s principles of division of labor and mass production, its producers would come to control the entertainment of the nation.” 

The difference between the old theatre-owners and the showmen of vaudeville, Stewart writes, “is essentially the same as the one between the proprietor of your local greasy spoon and Ray Kroc,” the latter being responsible for franchising the original McDonald’s restaurant into a global operation. (Stewart, No Applause, p. 84)

I’ve long viewed the McDonald’s restaurant as the epitome of the division of labour, which, even before engineered machinery, was the decisive factor in modern work. The technical resources of the McDonald brothers in 1950s California were no different from what was available to countless other roadside proprietors there and across North America. What made their restaurant so special was how they divided up the work needed to make burgers and fries in the most efficient way possible. 

Similarly, the original manufactories were not necessarily steam-driven: they instead employed unskilled workers, each doing a fraction of the total work, to achieve extraordinary productivity. As Stewart observes, “The nineteenth century saw the birth of mass production and distribution of nearly everything: furniture, clothing, tools, appliances. It was inevitable that the techniques and the philosophy of industrialism would come to the theater.” (Stewart, p. 85) 

Vaudeville performances were thus staged from eight o’clock in the morning until late in the evening, or even the early hours. Audiences could pay to enter and leave any time, and were treated to about ten separate acts, repeated over again until closing, according to “Trav.” 

There was a rough pattern to the performances: novices and less-polished openers would warm up the crowd for the headliners, who took up the middle of the cycle. Then less-talented acts further down the playbill would follow on until the final “chaser”, a performer so bad that he or she would literally chase the audience from the seats. The goal was, of course, to create room for more paying customers. Often, notes Stewart, one of the segments of the vaudeville show would be a short silent film. 

Talking pictures, though, were (along with the Depression) a cause of vaudeville’s demise. In the 1930s, film production boomed even as live theatre, and most other American industries, went into a prolonged slump. 

However, as the movies could now feature singing as well as dancing, the musical talent of vaudeville and Broadway largely decamped to Hollywood, creating the great age of the movie-musical from the 1930s to the 1950s.

While vaudeville largely faded out, Broadway continued to originate musical theatre during this period, with many of its productions successfully adapted to the wide screen. But as the west coast became the talent-centre for the musical, the stage was set in New York for an efflorescence of theatrical drama such as had not been seen (in the English-speaking world, at least) since Elizabethan times. 

During this period, plays written by Eugene O’Neill, Thornton Wilder, Tennessee Williams, Arthur Miller and Lillian Hellman met with great commercial and literary success, in spite of their often bleak and tragic content. 

They dealt in subject matter considered too controversial and subversive to be a part of “golden-age” Hollywood cinema. Orson Welles gained his footing in show business through the theatre, though he is best remembered for his contributions to the art of film. 

Significantly, however, very few of the classics of the “great age of American drama” — such as Our Town by Wilder, O’Neill’s Long Day’s Journey Into Night, Death of a Salesman by Miller, or The Children’s Hour by Hellman — were ever definitively adapted to film. 

Cast of recent staging of Death of a Salesman, with the late Philip Seymour Hoffman second from right (and Spiderman to his left).

Williams’s A Streetcar Named Desire is perhaps the exception; but it is remembered chiefly, it seems, for Marlon Brando’s undershirted repetition of the character’s name, “Stella”, rather than for anything else. 

Apparently, the twentieth-century American dramatists were communicating something onstage that couldn’t be reproduced effectively on film. There was, on the other hand, little financial risk for theatre-owners to stage these high-brow plays, because of the relatively low cost of production (certainly compared to the expense of making a motion picture). 

Edward Albee, a playwright whose career began at the tail-end of this dramaturgical flowering, was in fact the grandson of a vaudeville impresario.

Reaching his greatest success in the early ‘60s with Who’s Afraid of Virginia Woolf? (which was successfully adapted into an Oscar-nominated film starring Elizabeth Taylor and Richard Burton), Albee introduced strong language and nudity to the stage, though in relatively chaste form, compared to what came later. 

Perhaps the new era of the Broadway musical began with the premiere of Hair, in 1968. Featuring a group of hippies trying to evade U.S. military conscription, the play also had full-frontal nudity at its climax, a first for Broadway, if not for the American stage as a whole. 

Just a year later appeared Oh! Calcutta!, a revue of sexuality-themed sketches that featured nudity throughout the show. 

In a real way, both Hair and Calcutta were reviving the smutty character of the pre-vaudeville stage when, as Stewart describes it, boozing and whoring went hand in hand with theatre-going: when taverns and bawdy houses were not located right inside the venue, they sat right next door. Calcutta’s incorporation of exhibitionism into the old-style revue made it seem instantly avant-garde. 

Broadway's Theatre Row, 1970s version.

But soon the New York theatre district’s traditional home, around the intersection of Forty-Second Street and Broadway, became far more ribald and dangerous than the worst of the nineteenth-century saloons. 

Theatres that had staged dramas and musicals were often turned into pornographic cinemas, and the area around Times Square for a time recorded the highest number of offences for its land area of any place in the world. Broadway the street was saved by the aggressive policing of scofflaws (which in turn led to a reduction in more serious crimes), and by the revocation of slot-palace business licences. 

Major retailers were encouraged to invest in the area, significantly including the Disney company (hence the implicitly derisive term “Disneyfication” to describe the contemporary Times Square district). 

Below the famous “sign” at the Square (now a series of giant flat-screen monitors), bleachers have been installed so that visitors can sit and view midtown Manhattan as a show in itself (while they, in turn, are on stage for those walking through the area). 

Perhaps because of its experience in staging technically sophisticated, interactive entertainment at amusement parks in California and Florida, Disney has also become involved in a big way in live-theatre production. 

Not all recent Broadway-musical successes have been based on old movies.  In particular, Hamilton, which debuted in 2015, features a fictionalized account of the life of American Founding Father Alexander Hamilton and his ultimately fateful rivalry with the Vice-President, Aaron Burr.  Set to a largely hip-hop score, with Hispanic and African-American actors playing Hamilton, Thomas Jefferson, George Washington, and so on, it has been a smash with audiences, especially young people, and is strikingly original in concept and execution.  

Cast of Hamilton.

The irony of all this is, of course, that while the cinema at one time supplanted live-theatre as the main source of public entertainment, now live-entertainment has become the growth industry even as the movie-exhibitor business shrinks in size. 

Aside from being transformed into all-around leisure spots (with the addition of fast-food outlets, video-arcade games, and rooms for children’s parties), cineplexes have been able, thanks to the switch-over to digital projection, to transmit live entertainment as well — sometimes sports events, but also awards shows, and even opera and other kinds of highbrow spectacle. 

Contemporary musical theatre’s revival, often adapting failed or forgotten movies into box-office successes, is part of the trend in which live performance has become the more lucrative part of the entertainment business, as the Internet and digital technology reduce the value of recorded entertainment to near zero. 

The successful live staging of stories that were failures or forgotten in other media demonstrates that the form of experience really does matter, over and above narrative content. It also shows that live performance, which cannot be reproduced effectively in other, recorded media, is all the more precious to audiences.

Monday, June 27, 2016

When Books Do the Talking

In the recent entry on the probable ordinariness of Shakespeare as a person, I remarked on how journalism is the oral literature of the modern age. As conveyed in the synonym “reporter”, journalists speak directly to those involved in newsworthy events, quoting these accounts directly in order to write a story (another telling word). Journalism is the translation of the oral into the written. 

But for the last few decades, storytelling conceived and originally carried out in written (or typewritten) form has been translated into a type of oral literature as well: the audio-book.


I have, myself, listened to just a single audio-book in my entire adult life: a version of The Magnificent Ambersons, the 1918 novel by Booth Tarkington that was adapted to the screen by Orson Welles in 1942. 

I obtained the recording on loan from the public library only because the library didn’t have a paper copy. Listening to Ambersons, I did reflect on audio-books as a reintroduction of performance into literature. 

This is what sets “oral literature” apart from the textual kind, and performance was the norm at least until the advent of the book in medieval times. 

Depiction of Native American storyteller.

When I was young, there were audio “books” — mostly abridged versions, on long-playing record, of children’s literature such as Jacob Two-Two Meets the Hooded Fang, by Mordecai Richler. These were used mostly in schools, as I recall hearing that title in class. They were usually accompanied by an illustrated text, for the students or teacher to follow along. A little chime on the record would indicate that it was time to turn a page, as though no one could figure this out for themselves. 

Audio-literature did not become commonplace until the 1980s, with the success of the Sony Walkman personal cassette player.  As Wikipedia states:

Though spoken recordings were popular in 33⅓ vinyl record format for schools and libraries into the early 1970s, the beginning of the modern retail market for audiobooks can be traced to the wide adoption of cassette tapes during the 1970s. ... a number of technological innovations allowed the cassette tape wider usage in libraries and also spawned the creation of a new commercial audiobook market. These innovations included the introduction of small and cheap portable players such as the Walkman, and the widespread use of cassette decks in cars, particularly imported Japanese models which flooded the market during the multiple energy crises of the decade.

Mostly, people listened to music on Walkmans and the numerous cheap knock-offs that followed them on to the market. But the audio-cassette was also a practical medium for the reproduction of books in audio form. Each one could hold at least an hour-and-a-half of recorded sound. 

A regular novel could be contained on five or six cassettes, whereas the same running time would occupy an impossibly cumbersome and expensive twenty-five or thirty long-playing discs.  (Checking audio-book titles at random at the library, I found that they average something over five hundred minutes – more than eight hours.)   

Courtesy of the Walkman, and the standard-feature automobile cassette player, the spoken word became both portable and private. 

Sound-through-earphone thus continued the individuation of conscious awareness that is characteristic of modern times. But vocal narration of prose must inevitably be a performance, so as to keep the listener’s attention. 

It thereby attends immediately to the emotional and holistic mind, in a way that reading prose cannot. Long before contemporary digital-reader pads, which render text without the eyestrain characteristic of previous devices of the kind, the audio-book had already transformed literature into “media” — communication of sound and images through electrical current. 

Publishers have long created audio versions of their major releases, originally on audio-cassette and later in digital and download formats. The Walkman, however, is a perfect embodiment of the “post-modern”. It and its successors, such as the iPod, have served to intensify the withdrawal of self from surroundings. 

The very means of marooning awareness in this fashion is sound that is heard fully only by the user of the device (and that is a mere annoyance if played too loudly for others to hear). It affects the more primitive forms of consciousness that are neglected, suppressed and underplayed by modern society. 

Early talking book.

Traditional media for the individuation of consciousness, such as written and printed text, affected primarily the rational aspects of the mind. Electric media previous to the Walkman could very well be consumed individually. This is especially the case with television, but it and the other broadcast medium, radio, have also frequently been consumed socially. 

The Walkman required individual use. It thus enhanced individuality, but in a different modality than what is witnessed when writing or reading a book, or attending to some other complex task. The electronic world represents not so much the “tribalization” of consciousness, as Marshall McLuhan described it, as a still-individuated consciousness that, unlike during high modernity, encompasses the emotional and aesthetic aspects of life as a matter of course.

Monday, May 16, 2016

The Long History of (Management) Gurus

I used to deride the whole idea of the “consultant”, essentially someone who gets paid a lot to advise others how to act. 

But reflecting more carefully on their role in modern business, aren’t consultants in fact the modern equivalents of wise men and women of yore — veterans with deep experience who, by choice or not, do not actually practice what they preach?  

Consultant: medieval version.

Not all consultants are superannuated in this fashion. But even consultancies run by those long before retirement always tout their X years or decades of combined experience, which usually add up to the length of the career of a single retired person. 

Thus a random sampling from the Internet of quotes from various consulting firms: 

The most successful consultants do receive hefty remuneration to convey knowledge about a particular field, which comes mostly through experience, as opposed to mere academic training. 

No wonder that consultants’ preferred media of communication are the business meeting and the oral presentation. 

Their most valuable advice relates not to the technicalities of the field, but to behavioural subtleties, the mastery of which can lead to greater professional success. 

Certainly, consultants do publish their thoughts in books. Management consultants especially did (at least in the recent past) enjoy best-selling readership. 

But whatever the differences in the various books on management, they are characteristic for their “conversational” style, heavy with epigrams and aphorisms: for example (and tellingly) “management is about people.” 

Revealingly also, management consultants were during the height of their popularity often called “gurus”, as are the most otherworldly of the wizened teachers of wisdom. 

Certain consultants become gurus because their ideas seem more oracular than logical. It is their submergence in the oral culture of speech and beseech which makes them seem like religious mystics.  

The engagement of elders for their wisdom endured long after the transition from tribal-kinship society. 

The oligarchies that controlled ancient city-states, especially, were in essence councils of elders: men who’d distinguished themselves in the service of the polis.  

The Senate of Rome was just such an entity, constituted originally as an advisory body only, whilst lawmaking was theoretically in the hands of the Assembly of the People. 

Eventually, though, the Assembly of Seniors took precedence over that of the People, which in turn became an irrelevance and formality to the power structure. With the rise of the emperors, the Roman Senate itself became irrelevant. 

Not coincidentally, empire is the younger man’s project: Julius Caesar was scarcely in his forties when he began his campaign of conquest in Gaul and elsewhere, before being assassinated by the elders of the legislature, fretful about his life-dictatorship. 

Caesar was da bomb.

Earlier, Alexander the Great had conquered a vast swath of Asia years before his early death at age thirty-two. Modern empires, too, were overwhelmingly the work of younger men wishing to escape the suffocation of gerontocracy.  

Apart from class struggle, there is an inter-generational contest in every major society between youthful thirst for adventure and novelty, and elders who seek stability and tradition. In modern times, revolutionary movements, like imperial conquest, have been the work of younger men.  

The Protestant Reformation, for example, was begun in 1517 when Martin Luther, then in his mid-thirties, nailed his “ninety-five theses” to the door of the castle church at Wittenberg. 

Luther’s successors in the revolt against the Catholic hierarchy were also young men: Jean Cauvin (known in the English-speaking world as John Calvin) was still in his twenties when he began to attack the church hierarchy in Geneva. 

Another Swiss Protestant, famous in his homeland but largely unknown outside it, was Huldrych Zwingli, who was just over thirty when he began his theological revolt. 

John Knox, the architect of Scots Protestantism, was about Luther’s age when he became involved in the Reformation. Yet even here, the old-fashioned dynamic came into play. For when the Kirk was formally Established, it was governed by Elders — the Presbyters.

Similarly, in the state Protestant churches established throughout the Occident, senior clergymen took control of matters once again. 

Thus, whatever his firebrand ways as a man in his thirties, the elder Martin Luther allied with the German dukes and princes who wished to rein in the instability of ecstatic religiosity, propagandizing against the even more radical young churchmen who succeeded him, as they were suppressed by the nobility. 

The Prussian church to which Luther gave his name was appropriately sober and senior in constitution. The Reform clergy seemed to understand early on that the capriciousness and instability of youth was contrary to the becalmed liturgy and lifestyle they wished to impose on society. 
Observe, young friends.

The first of today’s liberal democracies were mostly founded by Protestants, whose version of responsible government was strongly presbyterian in character. 

Thus the upper chamber of modern bicameral legislatures — the one closest to the executive — is always the Senate, or an equivalent name given to an assembly of the old and wise. 

Traditionally, too, these were appointive bodies, as was the U.S. Senate until 1913, while members of the Canadian and British upper houses of Parliament are appointed even today. The rationale for reviving this ancient institution in modern constitutions was precisely to give form to the oligarchical basis of democracy. 

The Canadian Senate has been called the “house of sober second thought”, with the sobriety of the cognition therein presumed to come from seniority and a lack of responsibility to an electorate. 

If the Senate here or in any other country gives short shrift to this function in actuality, there was nevertheless the intention (and the hope of some even now) that it would do so. What does it say, however, about the dynamism of present-day businesses, when they seem to pay so much to consult with elders?

Saturday, April 2, 2016

The Right Ordinary Bill Shakespeare

The twenty-third of April, 2016, will mark the four-hundredth anniversary of the death of William Shakespeare, judged to be the greatest writer in the English language, and one of the greatest ever.  

Recently, the Daily Telegraph published a new portrait of the Bard by Geoffrey Tristram, which purports to be as “authentic” an image of Shakespeare as possible; he is described, in turn, as looking like “a chap down the pub.” (below)

Daily Telegraph, Geoffrey Tristram.

I think a large part of the fascination with Shakespeare lies in the fact that the details of his life are so obscure. As Bill Bryson writes in his very slim biography Shakespeare: The World as Stage, “After four hundred years of dedicated hunting, researchers have found about a hundred documents relating to William Shakespeare and his immediate family — baptismal records, title deeds, tax certificates, marriage bonds, writs of attachment, court records (many court records – it was a litigious age), and so on.” (Atlas Books, Harper/Collins, 2007, page 7). 

In fact, practically everything written about him is conjecture, supposition and outright fantasy. 

There has even been for many decades an ongoing debate as to whether William Shakespeare actually wrote the plays and poems that are credited to his name. 

The most popular candidate as the “real” author of Twelfth Night, Hamlet, Romeo and Juliet and the rest of the Shakespearean corpus is Edward de Vere, the seventeenth Earl of Oxford (amongst the other candidates are Sir Francis Bacon, the Sixth Earl of Derby, and Shakespeare’s fellow playwright Christopher Marlowe). 

Another more recent line of speculation focuses not on authorship, but religion: the historian Michael Wood, in his In Search of Shakespeare miniseries, argues that the playwright was a secret Catholic. 

This is, argues Wood, because the practice of the “old religion” in Elizabethan England would, if revealed, land the adherent in serious trouble, up to and including torturous and capital punishments.  

But according to Bryson, it is unremarkable that so little of Shakespeare’s life is known, given that this was true of practically everyone in the poet’s lifetime, including his fellow playwrights (Marlowe, Kyd, Jonson and so on), except for royals and very powerful nobles. 

Given the obscurity of Shakespeare as a person and public man, it is necessary to examine in detail his background and the historical times in which he lived, in order to speculate as to how he might have got on in life. 

This was the procedure of the historian Peter Ackroyd in Shakespeare: The Biography (London: Chatto and Windus, 2005), and I think Ackroyd captures perhaps as true a picture of the Bard of Stratford as any yet achieved. 

Various portraits of William Shakespeare.

Ackroyd briefly discusses, but gives no particular credence to, the notion that Shakespeare illegally practised Catholicism. 

Considering the evidence marshalled by Ackroyd about what kind of man Shakespeare was, I had the thought that if, somehow, people came to know the “real” Shakespeare (as, for example, through the miraculous authentication of a diary or personal letters), it would be a disappointment. 

His name became, in his time, quite famous, and yet few seemed to have been interested in who he was personally. When he died in 1616 in Stratford, no one but friends, associates and family attended his services — quite unlike Jonson and other literary greats of the time. 

The adjective that his contemporaries attached to the Bard was “sweet”, characteristic both of his words and his temperament. Beyond that, not much else, even though he mixed with men of words his entire life, and at a time when the concept of privacy scarcely existed at all. 

According to Ackroyd, there is but one record extant of Shakespeare speaking in the first person. It comes from testimony that Shakespeare gave as a witness in a civil case over a dowry. As Ackroyd notes, the language Shakespeare uses is completely unremarkable for an educated man of his time. 

Shakespeare’s personal obscurity, the cause of so much intrigue and speculation, may stem from the fact that he was not very remarkable at all. His “absence” is so persistent simply because he may have been persistently absent: hunched over a desk in candlelight, furiously writing out words that came to him only in concentration and solitude. Yet, as an individual social actor, Shakespeare was the epitome of the petit-bourgeois (whatever his aspirations toward gentility). 

He parlayed a relatively modest income as a playwright and actor into a healthy nest-egg, cannily buying and selling land and assets (including a share in the theatrical company he worked with), and was even accused of illegal hoarding during times of shortage. 

At his death, he owned one of the largest houses in Stratford, and bequeathed healthy sums to his elder daughter. It seems that, in Elizabethan times, sobriety and responsibility had not yet become opponents of the highest creativity. 

Part of the mystery of Shakespeare rests in the fact that, during his lifetime, there were not yet newspapers. There were, by the opening of the seventeenth century, unbound, semi-periodical documents that ultimately gave rise to the daily newspaper. But the first English-language daily (which was in fact published in Amsterdam) did not appear until four years after Shakespeare died in 1616. 

If newspapers had existed then, and had reported, if not on the details of Shakespeare’s private life, then on his work as a public man, they would have provided illumination to contemporaries, and to later generations, as to exactly what he was doing and when. Newspapers chronicle public events and activities, just as diaries record private actions and thoughts. 

The distinction isn’t so clear-cut, however, given that private diarists often comment on public events, and before the electric telegraph at least, daily newspapers consisted of correspondence sent from eyewitnesses to the scenes of important events (hence the synonym for “reporter”: “correspondent”). 

It was, at one time at least, commonplace for newspapers to be named the Journal, and in French, dailies are referred to as journaux (just as another synonym for reporter is “journalist”). In this respect, the daily newspaper succeeded such documents as the Anglo-Saxon Chronicle or the Domesday Book (and their equivalents in other languages) in providing a chronological record of society.   

As with personal diaries, newspapers didn’t have to tell the truth, or at least the whole truth, in order for readers to gain an understanding of the chronology of the life of the diarist, or of the city or society documented. 

The oral literature of modern times.

But because this institution had not yet appeared while Shakespeare was alive (or at least not in the English language, though German and Dutch dailies were started in the early seventeenth century), we don’t even know where he actually was during his lifetime, except for one or two instances. 

Neither do we know when exactly, or in what order, any of his plays were first performed. Had newspapers existed, we might have had knowledge, indirect no doubt, of these key facts. 

We would in any case have a more comprehensive view of Shakespeare’s life, enough perhaps that it would be impossible for anyone to argue that he didn’t actually compose the works that bear his name. It is interesting that while the periodical (which is to say, “the press”) is the ultimate form of the printing press, it really didn’t come into existence until more than a century following the invention of movable type itself. 

During this period, the printing press was treated mainly as a great enhancement of the medieval use of manuscript, which was to preserve the writings of the past, in book form. Unbound documents certainly did have an immediate impact upon discourse. It was only that such printed media didn’t take periodical form until the seventeenth century.

Wednesday, March 9, 2016

Every Instalment Was the Holiday Special: Some Better, Some Worse

Recently, I attended a second-run showing of the Hateful Eight, the “Eighth Film by Quentin Tarantino.”  

It was released, to little fanfare as I recall, during the Christmas season. 

Starlog Magazine, Issue no 19, dated February 1979, but published circa November 1978.

The release of a Tarantino movie was – as recently as his previous film, Django Unchained – something of an event. 

And although the Hateful Eight did respectably at the box-office (according to Wikipedia, earning $145 million in theatres), the success of the Star Wars sequel, The Force Awakens, overshadowed all other late-year releases in 2015.

The Hunger Games film series was, at one time, a cultural phenomenon. The final installment in the series, Mockingjay Part 2, came out in November to the yawns of most sci-fi fans (perhaps not as many as were caused by viewing the first Mockingjay film, however). 

Although the second Mockingjay went on to gross more than US$600 million at the box office, the majority of this came from overseas audiences.

And, who can recall that December also saw the release of a movie by the once-hotshot director Ron Howard? In the Heart of the Sea, with a production budget of one-hundred million dollars, grossed just a quarter of that amount in general release.  

By contrast, the Force Awakens has taken in more than two billion dollars in revenue since its release a week before Christmas. 

This total makes the seventh Star Wars film the top-grossing film at the box office, surpassing the record held by the 2009 film Avatar (although, adjusted for inflation, the most successful theatrical film is Gone with the Wind, which was released almost eighty years ago). 

Seeing the Force Awakens just after it came out, I quite enjoyed the experience. The film has been hailed as a return to form for a film-series that definitely lost its way with the three prequel films released between 1999 and 2005. 

Yet, I’ve not been able to get over the notion that the fate of the Force Awakens will be similar to that of Avatar, another highly popular sci-fi film whose impact, as it turns out, was not at all enduring. 

When news came in 2012 that Star Wars mastermind George Lucas had sold his property to the Walt Disney company, and that the latter would commence with the production of further sequels, I remained sceptical that there was any story left to tell in this “saga.” Even having enjoyed the film, I think my scepticism was warranted. The Force Awakens is hardly a terrible movie (as was at least the first of the three prequels Lucas himself directed around the turn of the century). 

It is, however, transparently not original, being largely a retelling of the first, and most successful, of the movies, released in 1977. This the movie’s director, J.J. Abrams, has all but acknowledged, but it is remarkable how closely the Force Awakens conforms to the original, from its settings, to its characters, to the thrust of the overall story. 

The satirical website Cracked had a mock film-script with famous scenes from the first Star Wars crossed out, their equivalents in the new movie placed beside them.  


My idea has long been that the galactic setting of the first movie was not an imaginary place (as is, for example, the Middle Earth of the Lord of the Rings novels and movies), but a device through which several distinct adventure genres (the western, the war flick, the samurai picture, the medieval romance) could be combined into a single movie through the magic of sci-fi technology. 

The original Star Wars was thus a kind of variety show, something that was perhaps unconsciously signalled through the inclusion of several comic elements in the picture (such as the robotic oddball sidekicks, the hulking canine-man, and the dance-band of assorted aliens in the famous “cantina” bar scene, which of course has a close counterpart in the Force Awakens). 

It is interesting in this regard that the very first sequel to the original Star Wars was not a movie, but a television show: the Star Wars Holiday Special, broadcast in 1978, which was actually a variety program that featured the singing talents of Carrie Fisher (who played Princess Leia in the first trilogy as well as the Force Awakens), comic repartee between Bea Arthur and Harvey Korman (popular television personalities at the time), as well as a cartoon segment that introduced the Boba Fett villain seen next in the Empire Strikes Back. 

Badly-received at the time, it was never aired again, has never been made available through home-video, and its existence was scarcely acknowledged thereafter. 

Regarded as a strange, “non-canon” outlier in the whole Star Wars “universe”, the Holiday Special should have been viewed as an unsettling portent of Lucas’ lack of artistic judgement as well as his ambition to pander to the most juvenile elements of the target audience, both of which were on full display with the Phantom Menace and the other prequels. 

Returning to the point, I think it is unlikely that the sequel-stories will be as compelling as the Force Awakens, simply because of the conceptual insufficiency of the setting in which the whole story takes place. It was a framing device for a variety show, and like any backdrop, it doesn’t stand up to much scrutiny before its phoniness is revealed. 

The Star Wars sequel inspired another idea relating to the “retro-mania” described by Simon Reynolds in his recent book of that name. 

The Force Awakens is a sequel (and retelling) of a movie that was released theatrically thirty-eight years earlier. It would have been inconceivable in 1977, though, to make a sequel to a movie released in 1939. 

The cinema of that earlier time was, if not largely forgotten, then viewed as irredeemably antique and outdated in a way that at least certain films of the mid- to late 1970s are not today, apparently. 

This is underscored by the fact that Star Wars itself was directly inspired by serials released in 1939 (Buck Rogers) and three years earlier (Flash Gordon). Both starred the former U.S. Olympic athlete Clarence “Buster” Crabbe as a regular earthman who finds himself in the 25th century (Rogers) or in a far-off galaxy (Gordon). 

Presented in about a dozen serial installments, each but the last ending in a cliffhanger (the term “cliffhanger” itself deriving from a typical climactic predicament in these serials), Flash Gordon and Buck Rogers had total running-times of two or even three times the typical ninety-minute length of a regular feature film of the era. 

Lucas’ acknowledgement of his debt to these serials resurrected them from an obscurity that was complete to everyone but the age-cohort to which he himself belonged. Buck Rogers was revived as a network television series for a couple of seasons around the turn of the 1980s, while Flash Gordon was made into a disastrously-received big-budget picture in 1980. 

I remember this movie quite well...
because I saw it the evening that John Lennon was shot.

I recall seeing the serials themselves on TV during this period, but they nevertheless remained curiosities, and no one to my knowledge has sought to continue the unsuccessful revivals that came in the wake of the original Star Wars. It was not only that the 1930s’ movies were in black-and-white, and their visual effects rudimentary (even colour films of that time appeared unreal in garish Technicolor). 

The serial format itself was part of a cinematic experience that hadn’t existed for decades, when the “feature presentation” was accompanied by another half-dozen films, including the newsreel, a couple of cartoons, a short comedy film, a song-and-dance routine, even a sing-along, in addition to an adventure serial (typically a cowboy or crime story). 

Cinema-going in the Flash Gordon era was a kind of variety show in itself. But as television became the chief medium of entertainment, cinemas pared back their offerings until the feature-film was accompanied only by a cartoon (then this disappeared as well, its place taken entirely by coming attractions trailers). 

After the 1950s, there was no place at movie-theatres for serial-features, and while Flash Gordon, Buck Rogers and other serials were edited and presented on early television, their narrative structure (with a cliffhanger occurring at a set interval) seemed awkward when presented in one sitting (as opposed to being stretched over several weeks or months). 

There was, in sum, a far greater disparity in the movie-going experience across the thirty-eight years between 1939 and 1977, one that made the earlier features inaccessible to all but a small number of the latter-day audience, than there was across the same span of time between ‘77 and 2015.