Monday, May 16, 2016

The Long History of (Management) Gurus

I used to deride the whole idea of the “consultant”, essentially someone who gets paid a lot to advise others how to act. 

But reflecting more carefully on their role in modern business, aren’t consultants in fact the modern equivalents of wise men and women of yore — veterans with deep experience who, by choice or not, do not actually practice what they preach?  

Consultant: medieval version.

Not all consultants are superannuated in this fashion. But even consultancies run by those far from retirement always tout their X years or decades of combined experience, which usually add up to the length of a single retired person’s career. 

Thus a random sampling from the Internet of quotes from various consulting firms: 

The most successful consultants do receive hefty remuneration to convey knowledge about a particular field, which comes mostly through experience, as opposed to mere academic training. 

No wonder that consultants’ preferred media of communication are the business meeting and the oral presentation. 

Their most valuable advice relates not to the technicalities of the field, but to behavioural subtleties, the mastery of which can lead to greater professional success. 

Certainly, consultants do publish their thoughts in books. Management consultants especially did (at least in the recent past) enjoy best-selling readership. 

But whatever the differences among the various books on management, they are characterized by their “conversational” style, heavy with epigrams and aphorisms: for example (and tellingly) “management is about people.” 

Revealingly also, management consultants were during the height of their popularity often called “gurus”, as are the most otherworldly of the wizened teachers of wisdom. 

Certain consultants become gurus because their ideas seem more oracular than logical. It is their immersion in the oral culture of speech and beseechment that makes them seem like religious mystics.  

The engagement of elders for their wisdom endured long after the transition from tribal-kinship society. 

The oligarchies that controlled ancient city-states, especially, were in essence councils of elders, men who had distinguished themselves in the service of the polis.  

The Senate of Rome was just such an entity, constituted originally as an advisory body only, whilst lawmaking was theoretically in the hands of the Assembly of the People. 

Eventually, though, the Assembly of Seniors took precedence over that of the People, which became a formality irrelevant to the power structure. With the rise of the emperors, the Roman Senate itself became irrelevant. 

Not coincidentally, empire is the younger man’s project: Julius Caesar was scarcely in his forties when he began his campaign of conquest in Gaul and elsewhere, before being assassinated by the elders of the Senate, fretful about his dictatorship-for-life. 

Caesar was da bomb.

Earlier, Alexander the Great conquered most of southwest and central Asia in the years before his early death at age thirty-two. Modern empires, too, were overwhelmingly the work of younger men wishing to escape the suffocation of gerontocracy.  

Apart from class struggle, there is an inter-generational contest in every major society between youthful thirst for adventure and novelty, and elders who seek stability and tradition. In modern times, revolutionary movements, like imperial conquest, have been the work of younger men.  

The Protestant Reformation, for example, was begun in 1517 when Martin Luther, then aged thirty-three, nailed his “Ninety-five Theses” to the door of the Castle Church in Wittenberg. 

Luther’s successors in the revolt against the Catholic hierarchy were also young men: Jean Cauvin (known in the English-speaking world as John Calvin) was still in his twenties when he began to attack the church hierarchy in Geneva. 

Another Swiss Protestant, famous in his homeland but largely unknown outside it, was Huldrych Zwingli, who was in his mid-thirties when he began his theological revolt. 

John Knox, the architect of Scots Protestantism, was about Luther’s age when he became involved in the Reformation. Yet even here, the old-fashioned dynamic came into play: when the Kirk was formally established, it was governed by Elders, the Presbyters. 

Similarly, in the state Protestant churches established throughout the Occident, senior clergymen took control of matters once again. 

Thus, whatever his firebrand ways as a man in his thirties, the elder Martin Luther allied with the German dukes and princes who wished to rein in the instability of ecstatic religiosity, propagandizing against the even more radical young churchmen who succeeded him, even as they were suppressed by the nobility. 

The German church to which Luther gave his name was appropriately sober and senior in constitution. The Reform clergy seemed to understand early on that the capriciousness and instability of youth was contrary to the becalmed liturgy and lifestyle they wished to impose on society. 
Observe, young friends.

The first of today’s liberal democracies were mostly founded by Protestants, whose version of responsible government was strongly presbyterian in character. 

Thus the upper chamber of modern bicameral legislatures — the one closest to the executive — is typically called the Senate, or given an equivalent name for an assembly of the old and wise. 

Traditionally, too, these were appointive bodies, as was the U.S. Senate (chosen by state legislatures) until 1913, while members of the Canadian Senate and the British House of Lords are appointed even today. The rationale for reviving this ancient institution in modern constitutions was precisely to give form to the oligarchical basis of democracy. 

The Canadian Senate has been called the “house of sober second thought”, with the sobriety of the cognition therein presumed to come from seniority and a lack of responsibility to an electorate. 

If the Senate here or in any other country gives short shrift to this function in actuality, there was nevertheless the intention (and the hope of some even now) that it would fulfil it. What does it say, however, about the dynamism of present-day businesses, when they seem to pay so much to consult with elders?

Saturday, April 2, 2016

The Right Ordinary Bill Shakespeare

The twenty-third of April, 2016, will mark the four-hundredth anniversary of the death of William Shakespeare, judged to be the greatest writer in the English language, and one of the greatest ever.  

Recently, the Daily Telegraph published a new portrait of the Bard by Geoffrey Tristram, which purports to be as “authentic” an image as possible of Shakespeare, who in turn is described as being like “a chap down the pub.” (below)

Daily Telegraph, Geoffrey Tristram.

I think a large part of the fascination with Shakespeare lies in the fact that the details of his life are so obscure. As Bill Bryson writes in his very slim biography Shakespeare: The World as Stage, “After four hundred years of dedicated hunting, researchers have found about a hundred documents relating to William Shakespeare and his immediate family — baptismal records, title deeds, tax certificates, marriage bonds, writs of attachment, court records (many court records – it was a litigious age), and so on.” (Atlas Books, Harper/Collins, 2007, page 7). 

In fact, practically everything written about him is conjecture, supposition and outright fantasy. 

There has even been for many decades an ongoing debate as to whether William Shakespeare actually wrote the plays and poems that are credited to his name. 

The most popular candidate as the “real” author of Twelfth Night, Hamlet, Romeo and Juliet and the rest of the Shakespearean corpus, is Edward de Vere, the seventeenth Earl of Oxford (amongst other candidates are Sir Francis Bacon, the Sixth Earl of Derby, and Shakespeare’s fellow playwright Christopher Marlowe). 

Another more recent line of speculation focuses not on authorship, but religion: the historian Michael Wood, in his In Search of Shakespeare miniseries, argues that the playwright was a secret Catholic. 

This secrecy was necessary, Wood argues, because the practice of the “old religion” in Elizabethan England would, if revealed, land the adherent in serious trouble, up to and including torture and capital punishment.  

But according to Bryson, it is unremarkable that so little of Shakespeare’s life is known, given that this was true of practically everyone in the poet’s lifetime, including his fellow playwrights (Marlowe, Kyd, Jonson and so on), except for royals and very powerful nobles. 

Given the obscurity of Shakespeare as a person (and publican), it is necessary to examine in detail his background and the historical times in which he lived, in order to speculate as to how he might have got on in life. 

This was the procedure of the historian Peter Ackroyd’s Shakespeare: The Biography (London, Chatto and Windus, 2005), and I think Ackroyd captures as true a picture of the Bard of Stratford as any yet achieved. 

Various portraits of William Shakespeare.

Ackroyd briefly discusses, but gives no particular credence to, the notion that Shakespeare illegally practised Catholicism. 

Considering the evidence marshalled by Ackroyd about what kind of man Shakespeare was, I had the thought that if, somehow, people came to know the “real” Shakespeare (as, for example, through the miraculous authentication of a diary or personal letters), it would be a disappointment. 

His name became, in his time, quite famous, and yet few seemed to have been interested in who he was personally. When he died in 1616 in Stratford, no one but friends, associates and family attended his services — quite unlike Jonson and other literary greats of the time. 

The adjective that his contemporaries attached to the Bard was “sweet”, characteristic both of his words and his temperament. Beyond that, not much else, even though he mixed with men of words his entire life, and at a time when the concept of privacy scarcely existed at all. 

According to Ackroyd, there is but one record extant of Shakespeare speaking of himself in the first person. It comes from testimony that Shakespeare gave as a witness in a civil case over a dowry. As Ackroyd notes, the language Shakespeare uses is completely unremarkable for an educated man of his time. 

Shakespeare’s personal obscurity, the cause of so much intrigue and speculation, may be because he was not very remarkable at all. His “absence” is so persistent, simply because he may have been persistently absent, hunched over a desk in candlelight, furiously writing out words that only came to him in concentration and solitude. Yet, as an individual social actor, Shakespeare was the epitome of the petit bourgeois (whatever his aspirations toward gentility). 

He parlayed a relatively modest income as a playwright and actor into a healthy nest-egg, cannily buying and selling land and assets (including a share in the theatrical company he worked with), and was even accused of illegal hoarding during times of shortage. 

At his death, he owned one of the largest houses in Stratford, and bequeathed healthy sums to his elder daughter. It seems that, in Elizabethan times, sobriety and responsibility had not yet become opponents of the highest creativity. 

Part of the mystery of Shakespeare rests in the fact that, during his lifetime, there were not yet newspapers. There were, by the opening of the seventeenth century, unbound, semi-periodical documents that ultimately gave rise to the daily newspaper. But the first English-language news periodical (which was in fact published in Amsterdam) did not appear until four years after Shakespeare died in 1616. 

If newspapers had existed then, and had reported if not on the details of Shakespeare’s private life then on his work as a public man, it would have provided illumination to contemporaries, and to later generations, as to exactly what he was doing and when. Newspapers chronicle public events and activities, just as diaries record private actions and thoughts. 

The distinction isn’t so clear-cut, however, given that private diarists often comment on public events, and before the electric telegraph at least, daily newspapers consisted of correspondence sent from eyewitnesses to the scenes of important events (hence the synonym for “reporter”: “correspondent”). 

It was at one time commonplace for newspapers to be named the Journal, and in French, dailies are referred to as journaux (just as another synonym for “reporter” is “journalist”). In this respect, the daily newspaper succeeded such documents as the Anglo-Saxon Chronicle or the Domesday Book (and their equivalents in other languages) in providing a chronological record of society.   

As with personal diaries, newspapers didn’t have to tell the truth, or at least, the whole truth, in order for readers to gain an understanding of the chronology of the life of the diarist, or of the city or society documented. 

The oral literature of modern times.

But because this institution had not yet appeared when Shakespeare was alive (or at least, not in the English language, though German and Dutch dailies were started in the early seventeenth century), we don’t even know where he actually was during his lifetime, except for one or two instances. 

Neither do we know when exactly, or in what order, any of his plays were first performed. In the presence of newspapers, we might have knowledge, indirectly no doubt, of these key facts. 

We would in any case have a more comprehensive view of Shakespeare’s life, enough perhaps that it would be impossible for anyone to argue that he didn’t actually compose the works that bear his name. It is interesting that while the periodical (which is to say, “the press”) is the ultimate form of the printing press, it really didn’t come into existence until more than a century following the invention of movable type itself. 

During this period, the printing press was treated mainly as a great enhancement of the medieval use of manuscript, which was to preserve the writings of the past, in book form. Unbound documents certainly did have an immediate impact upon discourse. It was only that such printed media didn’t take periodical form until the seventeenth century.

Wednesday, March 9, 2016

Every Instalment Was the Holiday Special: Some Better, Some Worse

Recently, I attended the second-run showing of the Hateful Eight, the “Eighth Film by Quentin Tarantino.”  

It was released, to little fanfare as I recall, during the Christmas season. 

Starlog Magazine, Issue no 19, dated February 1979, but published circa November 1978.

The release of a Tarantino movie was – as recently as his previous film, Django Unchained – something of an event. 

And although the Hateful Eight did respectably at the box-office (according to Wikipedia, earning $145 million in theatres), the success of the Star Wars sequel, The Force Awakens, overshadowed all other late-year releases in 2015.

The Hunger Games film series was, at one time, a cultural phenomenon. The final installment in the series, Mockingjay Part 2, came out in November to the yawns of most sci-fi fans (perhaps not as many as were caused by viewing the first Mockingjay film, however). 

Although the second Mockingjay went on to gross more than US$600 million at the box office, the majority of this came from overseas audiences.

And who can recall that December also saw the release of a movie by the once-hotshot director Ron Howard? In the Heart of the Sea, with a production budget of one hundred million dollars, grossed just a quarter of that amount in general release.  

By contrast, the Force Awakens has taken in more than two billion dollars in revenue since its release a week before Christmas. 

This total makes the seventh Star Wars film the top-grossing film at the North American box office, surpassing the domestic record held by the 2009 film Avatar (although, adjusted for inflation, the most successful theatrical film is Gone with the Wind, released almost eighty years ago). 

Seeing the Force Awakens just after it came out, I quite enjoyed the experience. The film has been hailed as a return to form for a film-series that definitely lost its way with the three prequel films released between 1999 and 2005. 

Yet, I’ve not been able to get over the notion that the fate of the Force Awakens will be similar to that of Avatar, another highly popular sci-fi film whose impact, as it turns out, was not at all enduring. 

When news came in 2012 that Star Wars mastermind George Lucas had sold his property to the Walt Disney company, and that the latter would commence with the production of further sequels, I remained sceptical that there was any story left to tell in this “saga.” Even having enjoyed the film, I think my scepticism was warranted. The Force Awakens is hardly a terrible movie (as was at least the first of the three prequels directed by series creator Lucas around the turn of the century). 

It is, however, transparently unoriginal, being largely a retelling of the first and most successful of the movies, released in 1977. This the movie’s director, J.J. Abrams, has all but acknowledged, but it is remarkable how closely the Force Awakens conforms to the original, from its settings, to its characters, to the thrust of the overall story. 

The satirical website Cracked had a mock film-script with famous scenes from the first Star Wars crossed out, their equivalents in the new movie placed beside them.  


My idea has long been that the galactic setting of the first movie was not an imaginary place (as is, for example, the Middle Earth of the Lord of the Rings novels and movies), but a device through which several distinct adventure genres (the western, the war flick, the samurai picture, the medieval romance) could be combined into a single movie through the magic of sci-fi technology. 

The original Star Wars was thus a kind of variety show, something that was perhaps unconsciously signalled through the inclusion of several comic elements in the picture (such as the robotic oddball sidekicks, the hulking canine-man, and the dance-band of assorted aliens in the famous “cantina” bar scene, which of course has a close counterpart in the Force Awakens). 

It is interesting in this regard that the very first sequel to the original Star Wars was not a movie, but a television show: the Star Wars Holiday Special, broadcast in 1978, which was actually a variety program that featured the singing talents of Carrie Fisher (who played Princess Leia in the first trilogy as well as the Force Awakens), comic repartee between Bea Arthur and Harvey Korman (popular television personalities at the time), as well as a cartoon segment that introduced the Boba Fett villain, seen next in the Empire Strikes Back. 

Badly received at the time, it was never aired again, has never been made available on home video, and its existence was scarcely acknowledged thereafter. 

Regarded as a strange, “non-canon” outlier in the whole Star Wars “universe”, the Holiday Special should have been viewed as an unsettling portent of Lucas’ lack of artistic judgement, as well as of his ambition to pander to the most juvenile elements of the target audience, both of which were on full display in the Phantom Menace and the other prequels. 

Returning to the point: I think it is unlikely that the sequel-stories will be as compelling as the Force Awakens, simply because of the conceptual insufficiency of the setting in which the whole story takes place. It was a framing device for a variety show, and like any backdrop, it doesn’t stand up to much scrutiny before its phoniness is revealed. 

The Star Wars sequel inspired another idea relating to the “retro-mania” described by Simon Reynolds in his recent book of that name. 

The Force Awakens is a sequel (and retelling) of a movie that was released theatrically thirty-eight years earlier. It would have been inconceivable in 1977, though, to make a sequel to a movie released in 1939. 

The cinema of that earlier time was, if not largely forgotten, then viewed as irredeemably antique and outdated in a way that at least certain films of the mid- to late 1970s are not today, apparently. 

This is verified by the fact that Star Wars itself was directly inspired by serials released in 1939 (Buck Rogers) and three years earlier (Flash Gordon). Both starred the former U.S. Olympic athlete Clarence “Buster” Crabbe as a regular earthman who finds himself in the 25th century (Rogers) or on a far-off world (Gordon). 

Presented in a dozen or so serial installments, each but the last ending in a cliffhanger (the term “cliffhanger” itself deriving from a typical climactic predicament in these serials), Flash Gordon and Buck Rogers had total running-times of twice or even three times the typical ninety-minute length of a feature film of the era. 

Lucas’ acknowledgement of his debt to these serials resurrected them from an obscurity that was near-complete to everyone but the age cohort to which he himself belonged. Buck Rogers was revived as a network television series for a couple of seasons around the turn of the 1980s, while Flash Gordon was made into a disastrously received big-budget picture in 1980. 

I remember this movie quite well...
because I saw it the evening that John Lennon was shot.

I recall seeing the serials themselves on TV during this period, but they nevertheless remained curiosities, and no one to my knowledge has sought to continue the unsuccessful revivals that came in the wake of the original Star Wars. It was not only that the 1930s movies were in black-and-white and their visual effects rudimentary (even colour films of that time appeared unreal in garish Technicolor). 

The serial format itself was part of a cinematic experience that hadn’t existed for decades, when the “feature presentation” was accompanied by another half-dozen films, including the newsreel, a couple of cartoons, a short comedy film, a song-and-dance routine, even a sing-along, in addition to an adventure serial (which were typically cowboy or crime stories). 

Cinema-going in the Flash Gordon era was a kind of variety show in itself. But as television became the chief medium of entertainment, cinemas pared back their offerings until the feature-film was accompanied only by a cartoon (then this disappeared as well, its place taken entirely by coming attractions trailers). 

After the 1950s, there was no place at movie-theatres for serial-features, and while Flash Gordon, Buck Rogers and other serials were edited and presented on early television, their narrative structure (with a cliffhanger occurring at a set interval) seemed awkward when presented in one sitting (as opposed to being stretched over several weeks or months). 

There was, in sum, a far greater disparity in the movie-going experience across the thirty-eight years between 1939 and 1977, one that made the earlier features inaccessible to all but a small number of the latter-day audience, than there was across the same span of time between ‘77 and 2015.

Monday, February 22, 2016

Conservatives Despise, and Leftists Embrace, This Particular Example of Laissez-Faire Capitalism

An article posted at the website of the National Post by John Robson made me aware of certain paradoxes with respect to the market for modern art. 

Robson’s politics are to the right, and while an intellectual, he shares in the conservative distaste for contemporary artworks (the title of the piece: “Modern Art is Garbage”, Feb. 1, 2016). 


Modern art, Robson says not inaccurately, is “meant to disgust, shock, challenge ‘convention’ and reduce hope and morality to a smouldering heap of obscene rubbish.” 

He describes how an abstract-expressionist work long credited to the Russian-born American painter Mark Rothko (1903-1970) was recently exposed as a fake, but only after its owner, an official with Sotheby’s, the famed London art-auction house, had paid more than US$8 million for it. 

Robson’s contention seems to be that it is in the nature of Rothko paintings that they can be so easily faked. 

But Renaissance works have been subject to successful forgery, as well, with their exposure coming only as a result of chemical analyses revealing the anachronistic materials used in their production. 

Also Fraud.

But more relevantly, abstract-expressionism and other more recent art movements that repel and even disgust conservatives like Robson, are a result of the free market. 

Certainly, the arts (including visual and plastic art) have been subsidized by government, but modernism in art, going back to the nineteenth century, has developed mainly as a result of the free market. 

The art market is in fact one of the few examples of nearly unregulated capitalism in today’s economy. 

Despite being essentially a handicraft industry, it is also big money, as with the recent sale of a Picasso work for more than US$150 million. 

Conservatives axiomatically favour the free market as the default means through which goods and services are produced (I know this to be true of Robson, as I’ve been reading his columns over many years). 

Yet, when it comes to a product — modern and contemporary art — which results from transactions of a market which is in turn almost entirely laissez-faire, they are unanimous in their hatred of it. 

Capitalist pigs.

This is confounding to leftist politics as well, though, for artists adhere to classical and neo-socialist politics as an orthodoxy, regardless of their level of enrichment by the marketplace in art they claim to despise. 

It is confounding to leftist ideology in the first place that contemporary art (painting, sculpture or new media) is almost universally disparaging of bourgeois values and tastes, in spite of being a product of laissez-faire. 

Socialists have long claimed that cultural-commercialism is directed toward conditioning and brainwashing the masses into accepting the “status quo.” 

But with modern art, this is clearly not the case. Further, when contemporary artworks do come under criticism by conservatives, leftist commentators reflexively defend an industry that is practically without regulation by the state.

Sunday, January 31, 2016

Why Do We Prefer an Airplane Over a Starship?

January 2016 has turned out to be probably the worst month for rock-music deaths since February 1959.  

Just a week or two ago came news that Glenn Frey of the Eagles (or rather, “Eagles”, as there was no definite article before their name) had passed away from cancer, aged 67. A day previously, Dale Griffin, the drummer for the ‘70s band Mott the Hoople, had died at 67 as well (he had suffered from Alzheimer’s Disease). 

Jimmy Bain, the bass-guitarist for the hard-rock band Rainbow (founded by guitarist Ritchie Blackmore after his departure from Deep Purple), died on the 24th. 

 The most famous of the batch was probably David Bowie, who also suffered from cancer and passed away on January 10. Just before the New Year, too, Ian “Lemmy” Kilmister, lead singer and bassist with Motorhead, died from cancer. 

There was also the tragic passing of Animal, the manic drummer for the band on the Muppet Show. 

Paul Kantner.
And then just this week, Paul Kantner, singer and chief songwriter of the Jefferson Airplane / Starship, passed away aged 74 of what is being called “multiple organ failure.” 

Kantner’s obituary on the website of Rolling Stone reminded me of thoughts I coincidentally had recently about him, and the bands that he led. 

First, though Kantner wrote the vast majority of the material on Airplane / Starship albums, and was the lead male voice on most of their material (especially after the departure of founder Marty Balin), he was scarcely a household name. 

 Given that the lead singer was Grace Slick, it is no wonder that the other crewmembers of the Airplane / Starship were virtually anonymous to the general public. 

An even more interesting puzzle, though, is why nowadays the Jefferson Airplane remains so well known while the Starship version of the band is virtually forgotten. But during their career, the Jefferson Starship were far more popular than was Kantner’s earlier band. 

According to Wikipedia, three of the Starship’s albums released between 1975 and ’78 achieved “platinum” status (as recognized by the Recording Industry Association of America), with sales of one million records or more (one of these, Red Octopus, sold two million or more). 

Four more Starship records released between 1979 and 1984, sold 500,000 copies or more (thereby achieving the RIAA’s gold-record award). 

As for the Jefferson Airplane, the only one of their albums to go platinum (again, according to Wikipedia) was The Worst of the Jefferson Airplane, the 1970 compilation. The albums of original material released during the Airplane’s “classic” period (after Slick had replaced the original lead female singer, Signe Toly Anderson) only ever achieved “gold” status. 

The Jefferson Starship was more popular than the Airplane, when these acts were active recording artists. I remember this well from my own youth, when the Starship was so popular, and very occasionally, I would hear the name “Jefferson Airplane”, and think, “Isn’t that supposed to be Jefferson Starship?” 

In the ‘70s, the Airplane seemed an antiquated, forgotten version of the then-current version of the band. However, it is the Airplane, and not the Starship, that people remember today. The Facebook group that I belong to which covers both phases of the band’s career, for example, is named after the Jefferson Airplane, with no explicit mention of the Starship. 

Was the Jefferson Airplane really that much better a group than the Starship? I don’t know, actually, because although I have seven of the Airplane’s albums, I discovered upon rechecking my disc collection while writing this entry that I don't have a single Jefferson Starship album (excepting the earlier solo album by Paul Kantner that was backed by an all-star band called “Jefferson Starship”). 

I can name several songs by the Airplane – such as White Rabbit, Somebody to Love, or Volunteers – that still receive significant radio airplay. 

Hey, Grandma.  You're so young.
But Jefferson Starship hits that are played on the radio these days? Perhaps they are played in places where I don’t live. 

But I would be hard-pressed to name a Starship song (besides We Built This City) to save my life. 

It isn't out of any hostility toward the Starship’s music that I have failed to buy any of their albums. If I had seen any available, new or secondhand, I would have checked them out. Again, it is as though that variant of the band has faded into obscurity while the Airplane has become all the more prominent. 

It is a lesson in the capriciousness of fame. I'm sure the latter-day obscurity of Jefferson Starship could tell us something bigger about fame and popular culture, but I'm not sure what. 

Signe Toly Anderson Ettin.
Photo from Facebook page.

Postscript: while preparing these writings for publication, I discovered that the aforementioned original Airplane vocalist Signe (Anderson) Ettin, has also passed away. Like Paul Kantner, she was 74 years old. 

A cruel month indeed, January 2016, for old rockers.

Monday, January 11, 2016

The Sounding of Moby Dick

Having seen, over Christmastime, a small independent movie released without much ballyhoo, I turn my attention to what was intended as the blockbuster of the season, Ron Howard’s In the Heart of the Sea.  

The film portrays the ill-fated voyage of the Essex, the travails of which inspired Herman Melville’s Moby Dick. 

This tale was recounted also in Ric Burns’ documentary, Into the Deep, a history of American whaling first broadcast on PBS in 2010.  

Image from In the Heart of the Sea.

This comes as part of a more comprehensive account of the whaling industry in the U.S., and is well worth watching. 

It occurred to me, though, that only very recently did the whale change in the imagination of educated Occidentals, at least, so that the routine slaughter of the sea mammals, ongoing for centuries, suddenly became intolerable: an activity requiring global prohibition. 

Having surveyed the available literature on the topic, though, I had tentatively hypothesized a link between the expansion of whaling and the colonization of the world by Europeans. 

The book Leviathan, another history of American whaling, by Eric Jay Dolin, confirms that while whale-hunting may extend back millennia to the Phoenicians and Greeks, a true whaling industry began only during the Middle Ages with the Basques, the people of mysterious origin who have been fighting for centuries for independence from Spain. 

A Basque History of the World, by Mark Kurlansky and published in 1999, expands on the Basque origins of modern whaling, showing that their pursuit of the giant sea-mammals took them throughout the Atlantic Ocean long before most other European peoples even attempted long-distance seafaring. 

Kurlansky supports the argument that Basques preceded the Norsemen to the New World, citing the presence of Basque words in Canadian native languages, as well as the disproportionate number of Basque crew-members on the explorer ships of both Columbus and Magellan. 

Whatever the truth, Basque dominance of commercial whaling was supplanted by the Dutch, imperial masters in the seventeenth century, and a century later by the British, upon whose empire the sun never set during the eighteenth and nineteenth centuries. 

During the nineteenth century, too, the U.S. became a major whaling nation, in tandem with its rise as a global naval power. 

For his part, Kurlansky explains why it is that the bottom-feeding codfish was so valued for centuries: when dried and salted, it kept from spoiling for long periods, which permitted in turn the distant voyages of the Basques and the Norse. The whale was valued because its extensive blubber produced oil for burning. 

Nantucket island, off the coast of Massachusetts, became the fabled centre of the American whaling industry in spite of its relatively sparse population, as it was closer to the original whaling grounds, and the people of barely-arable Nantucket had no other means of prosperity. 

Kurlansky notes that the Basques and other whaling peoples would eat the slaughtered creatures’ meat (with whale tongues being the most prized of the edible parts, often given as tribute to the local clergy). 

Dolin states, however, that the anglophones viewed whale-meat as inedible, with the carcasses of the animals left to rot onshore (or dumped back into the sea) after the precious oil and whalebone had been extracted from them.  

Into the Deep describes how whaling expeditions became progressively far-flung as quarry near shore diminished in number. 

The daylong hunts of the eighteenth century became months and even years in length later on, but whales had to be processed for their raw materials soon after the slaughter, so as not to go to waste. 

Accordingly, whaling ships evolved into floating factories. They were thus examples of Victorian high technology, and whaling was part of a seafarer culture which gave rise to the novelty of global imperialism. 

Yet whaling also gave force to something primal in the human male, at least. Only specialists really know or care that the proper scientific name for anatomically modern humans is Homo sapiens sapiens, the successor subspecies of archaic Homo sapiens, who were virtually identical with the super-sapiens in physical form, but whose material culture has been shown to be consistently inferior.  

Somewhere around one hundred thousand years ago, as archaeologist Ian Morris observed, the cultural monotony that had characterized sapiens’ settlements up to that time gave way “suddenly” (over a period of centuries) to a diversity of regional styles. 

Part of this new cultural sophistication involved the hunting of mega-fauna, most of which disappeared shortly after the arrival of Homo sapiens sapiens. 

But not always the victor...
baroquepotion com

Commenting on the habit of contemporary scholarship to exonerate human activity for these extinctions in favour of climate change, Yuval Noah Harari observes, “The giant diprotodon appeared in Australia more than 1.5 million years ago and successfully weathered at least ten previous ice ages. It also survived the first peak of the last ice age, around 70,000 years ago. Why, then, did it disappear 45,000 years ago? Of course, if diprotodons had been the only large animal to disappear at this time, it might have been just a fluke. But more than 90 per cent of Australia’s megafauna disappeared along with the diprotodon. The evidence is circumstantial, but it's hard to imagine that Sapiens, just by coincidence, arrived in Australia at the precise point that all these animals were dropping dead of the chills.” (Sapiens: A Brief History of Humankind, 2014, p. 66) 

Harari goes on to state that “mass extinctions akin to the archetypal Australian decimation occurred again and again in the ensuing millennia — whenever people settled another part of the Outer World. In these cases Sapiens guilt is irrefutable. For example, the megafauna of New Zealand — which had weathered the alleged ‘climate change’ of c. 45,000 years ago without a scratch — suffered devastating blows immediately after the first humans set foot on the islands. The Maoris, New Zealand's first Sapiens colonisers, reached the islands about 800 years ago. Within a couple of centuries, the majority of the local megafauna was extinct, along with 60 per cent of all bird species. A similar fate befell the mammoth population of Wrangel Island in the Arctic Ocean (200 kilometres north of the Siberian coast). Mammoths had flourished for millions of years over most of the northern hemisphere, but as Homo sapiens spread — first over Eurasia and then over North America — the mammoths retreated. By 10,000 years ago there was not a single mammoth to be found in the world, except on a few remote Arctic islands, most conspicuously Wrangel. The mammoths of Wrangel continued to prosper for a few more millennia, then suddenly disappeared about 4,000 years ago, just when the first humans reached the island.” (pp. 66-67) 

Harari also makes an interesting comparison: “when climate change causes mass extinctions, sea creatures are usually hit as hard as land dwellers. Yet there is no evidence of any significant disappearance of oceanic fauna 45,000 years ago. Human involvement can easily explain why the wave of extinction obliterated the terrestrial megafauna of Australia while sparing that of the nearby oceans. Despite its burgeoning navigational abilities, Homo sapiens was still overwhelmingly a terrestrial menace.” 

This situation continued until just a few centuries ago, when the Basques and then many other nations undertook whaling on a large scale. 

Meanwhile, the story of super-sapiens’ success was written, in great part, in the blood of terrestrial mega-fauna felled by human hands. 

The ethological conditioning inherent therein had to do not primarily with the act of killing itself. It was the tracking of game that mattered most to evolving human (perhaps especially male-human) psychology, since chasing down fauna (large or otherwise) took much longer than the fatal act itself. 

What mattered was how Homo sapiens sapiens were able to “read” the terrain, literally on a step-by-step basis (the footprints being signs almost wholly abstracted from the creatures that made them), in pursuit of a defined goal. 

Stone-age hunting was also a communal enterprise, and it could be that the early human communication that Iain McGilchrist described as “musilanguage” acquired true grammar as the super-sapiens pursued big game. Did the success of cerebrally-modern humans in killing mega-fauna contribute to a certainty that the world was theirs to own? The triumph of the diminutive man over the grandiose creature is a mainstay of storytelling, from Gilgamesh down to King Kong. 

Herman Melville’s twist on this theme was to have the beast — a sperm whale — prevail over Captain Ahab and his crew. But as if to underline the lack of resonance in human defeat at the hands of the ocean Goliath, Moby Dick was initially a flop, and contributed to Melville’s abandonment of novel-writing in favour of work as a customs inspector in New York City. 

The reality is that millions of sperm and other whales did indeed perish by human hands in recent centuries (whereas very few men died because of attacks by whales). As with the hydraulic civilization of China (which enshrined the village-tribe as the key social unit), the whale-hunt used advanced technology to satisfy a primal human urge, in this case to pursue mega-quarry. 

Whales survived in such abundance, and grew so massive (the blue whale is the largest animal on record) just because people did not have, until modern times, the means of hunting them very effectively. 

However, when the harpooner’s role was mechanized in the 1860s (inaugurating the “age of modern whaling”, according to Norwegian academics J.N. Tonnessen and Arne Odd Johnsen in their mammoth history of the subject, published in English in 1982), the destruction of whale-stocks became so complete that by the early twentieth century, calls came to curb the fishery so as to ensure its future viability. 

Led by Great Britain, biologists began to study the great whales as part of this conservation effort. As described by Graham Burnett in The Sounding of the Whale, nascent whale-science was more focussed on the husbandry of a resource than inspired by outrage at the killing of ocean mammals. 

He argues further that ocean-biologists, who accompanied whalers on their often years-long voyages, naturally came to sympathize with the industry, and thus served as an impediment to efforts to prevent the slaughter of whales entirely. 

Meanwhile, Burnett writes, the mechanical harpoon furthered the ambitions of Western colonialism, as the British took full possession of south Atlantic and Pacific islands over which they had long claimed sovereignty, but that had heretofore remained unsettled for lack of the means to support them. 

Thus the paradox that global, ship-borne imperialism, though an ultramodern project, revitalized the old hunter-gatherer way of life as a factor in human experience. 

Whaling was just one, if a more conspicuous, part of this. For the first months, years, and even decades after their settlement of the Americas, Britons and Europeans had to make their way wholly or greatly by hunting. The North American continent was substantially explored by hunters of beaver pelts, the trappers who helped establish the white-man’s sovereignty over New France and the American west (so important was this fur trade in the north, that Canada is often personified as a beaver). 

Once hunted, and now adored, by the millions.
www venturegraphics ca

The settlement by ancient Asians of what became the Americas was followed, centuries later, by the mass die-off of mega-fauna, in the pattern established wherever the super-sapiens set foot for the first time. 

European settlement of the hemisphere precipitated another die-off of animal species here through over-hunting (the most conspicuous mass slaughter befalling the bison).  

Some would assert that American Indians were similarly hunted like animals, and in the far west at least accounted for the largest number of victims of homicidal violence (according to Page Stegner). 

Nevertheless, the frontiersman was an enduring figure of American folklore, the very epitome of the “rugged individualist” whose independence was based on the ability to “shoot his next meal.” 

Hunting remained a very popular means of recreation throughout the rural United States until very recent times. Importantly, the hunting trip served in many families as a form of male initiation into adulthood — again, a very ancient practice of intergenerational training that flourished anew amongst New-World settlers. 

Old-World peasants, on the other hand, faced outright prohibition or severe restrictions upon their hunting rights: their overlords reserved the right to hunt game throughout all their lands, regardless of tenancy. Practically the first droit du seigneur that migrants to the Americas claimed for the common man was the unrestricted freedom to hunt. 

Colonization of the Western hemisphere was, as Lewis Mumford observed, substantially a reversion to the primitive, or “paleotechnic”, phase of humankind. 

As imperialism spread to Africa and Asia during the nineteenth century, the “great white hunter” became another mythical hero in Western society, pursuing exotic big game in the tropical bush. Of many such expeditions into “darkest Africa”, one of the more famed was undertaken by Teddy Roosevelt, the most openly and vociferously imperialist of all U.S. presidents. Upon leaving office, T.R. led a safari which netted over six hundred wild beasts. 

Returning in conclusion to the whale hunt: it seems clear that the scientific knowledge acquired through conservation efforts encouraged people to conceive of whales as beautiful creatures to be preserved, not resources to be exploited. 

But the “save the whales” sentiment was part of a more fundamental change in psychology which took hold among educated Westerners at least, from the 1960s. 

Before then, Occidentals looked upon engineered technology as the crown of creation. But since, probably as a consequence of engineering becoming so normal as to be unremarkable, middle-class professionals in the West have come to cherish the natural world over artificial things, and especially its conspicuous mega-fauna such as elephants, tigers, and whales. 

The Greenpeace organization, which in the early ‘70s pioneered “guerilla” efforts to prevent the slaughter of whales (such as activists in boats placing themselves between harpooners and their quarry), was by the 1980s employing more sophisticated methods of persuasion. 

At that time, the group launched television ads which featured semi-abstract, animated images of great whales moving through the oceans. A deep-voiced, authoritative narrator (whom I remember to be the Canadian actor Don Francks, though I have been unable to find this spot on YouTube or anywhere else) described them as “nature’s works of art.” 

That would be him.
waytofamous com

These placid scenes then dissolved into images of harpoons and whaling boats, over which the narrator said something to the effect of: “...but mankind is trying to destroy the last of these works of art”, concluding with a plea to “help Greenpeace preserve them.” 

In The Master and His Emissary, Iain McGilchrist describes how the right brain hemisphere is responsive to living things, while the left hemisphere reacts to inanimate matter. 

But, he goes on, certain inanimate objects, such as musical instruments, are perceived by the brain as though they are living. 

In the postwar era especially, certain forms of engineering must have come to be perceived as animate: in particular, mass media such as the hi-fi stereo, radio and television. 

These technologies helped effect a reversal in Western psychology, especially and, paradoxically, it would seem, among the educated. Constant exposure to holistic sound and imagery through mass media evoked sensitivity to living things as opposed to the mechanical. This was the cultural ground upon which revulsion toward not only whaling, but any sort of sport or big-game hunting, overwhelmed the prestige previously enjoyed by hunters.