Monday, August 3, 2015

James Paul McCartney (18 June 1942 - 8 November 1966)

Turn me on, dead man...

Or was it 7 January 1967?  Or even, 26 December 1966?

The documentary Good Ol' Freda is about a secretary to the Beatles' manager Brian Epstein who also became president of the group's fan club.

At various points throughout the movie, Freda Kelly reads excerpts from the monthly fanzine, The Beatles' Book.  One of these is entitled, "Paul is Alive!", and denies rumours that Paul McCartney had recently died.

Though it isn't made clear, this didn't come in response to the famous reports of McCartney's death in the autumn of 1969; rather, it was published in the February 1967 edition of the fanzine, when an earlier (and comparatively obscure) rumour emerged that the "cute Beatle" had perished in some kind of accident.


At whatever date, though, the idea that the co-leader of the most successful rock group in the world had passed away is a fascinating study in what is called the "sociology of knowledge."

It is not entirely correct to characterize "Paul is dead" as a "rumour."  It started out as such, but (at least in 1969) it was picked up by the mainstream press as an "unconfirmed report."

Nor was it a "hoax", which implies that someone deliberately set out to disseminate the false idea that Paul McCartney had died.

"Paul is dead" is more so accurately described as an urban legend or myth.

The latter are not mere fabrications: as portrayed in the Tim Burton movie Big Fish, tall tales do have some basis in reality.  But myths or legends exaggerate what actually occurred, so that the stories they tell come to seem entirely fantastical.

This was the case with the "Paul is dead" legend.  But what was the factual basis for it?

There are a couple of versions.

One tells of a Beatle flunky who, in January 1967, was given the job of transporting illicit drugs from McCartney's London home to a country estate owned by Keith Richards of the Rolling Stones.

Using the Beatle's custom-made motorcar to transport the contraband, the man (who bore some resemblance to McCartney) instead crashed the vehicle on the M1 motorway north of the metropolis.

He was only slightly injured.  Nevertheless the unique appearance of the car prompted rumours that McCartney himself had crashed the vehicle; and somehow, this was turned into the idea that he had died in the crash.

The other story goes that on Boxing Day, 1966, McCartney was riding around on a scooter he had given his younger brother Mike for Christmas.

Crashing the vehicle at low speed, Paul suffered minor injuries and received treatment at a London hospital.


He was quickly discharged.  Nevertheless, people started to whisper that Paul had died because, simultaneously with his admission to hospital, another young man of McCartney's age and description was being treated there for fatal injuries.

Whatever the source of the legend, I think this earlier rumour is key to the more grandiose mythology about McCartney's death that arose about two-and-a-half years later.

Not coincidentally, too, the idea that Paul McCartney had died emerged just after the Beatles themselves had perished, at least as a musical act.

According to this Wikipedia article, Lennon advised the other three on 20 September 1969 that he was leaving the group.  Since the group and its management were in the midst of negotiations with their record company for higher royalties, it was decided that the "divorce" of Lennon from the Beatles should be kept secret.

Not even a month later, a Detroit-area disc jockey, and then an article in the student newspaper at the University of Michigan, publicized "evidence" of the death of Paul McCartney.

Thereafter, the story spread like wildfire.

Here is my theory as to why this occurred.

While the report from early '67 about McCartney's death was quickly dispelled, it no doubt persisted as a meme thereafter (keeping in mind that the term "meme" wouldn't be coined until nearly a decade later).

Internet memes reach the far corners of the world in a matter of days or hours (or even minutes). During the late 1960s, however, information was transmitted from person to person (as opposed to broadcast from a few to many) at rather slower speeds.  It could well have taken a year or two for the idea that Paul McCartney had passed away to spread from London to the U.S.: even long-distance, transatlantic phone calls were very expensive in those days, and so the story was probably conveyed by face-to-face contact among people travelling between the two continents.

The demise of the Beatles was the "tipping point" for Paul McCartney's death to turn from a rumour into a report.

The fact that the band and their management opted to keep the breakup confidential is, however, key to the affair.

Perhaps it was an errant comment in the days after Lennon left, or even the generally mournful atmosphere around Beatle headquarters in London, which gave new life to the old rumour that McCartney had died.

Crucially, Paul McCartney himself was unavailable to disprove his own death in the period when the myth underwent germination.

Following Lennon’s announcement, McCartney had retreated in depression to his Scottish farm, which had no or unreliable communications with the outside world.

When the news media began to inquire of the Beatles' press officers about the reports, they were assured that, in fact, the whole idea was nonsense.

But, the band's representatives went on, it was not possible to actually speak to McCartney, because he was incommunicado, secluded with his family.

One can imagine that a seasoned reporter might well react with scepticism that an ultra-rich music star didn't own a telephone.

There was thus a confluence between those who knew entirely too much about the Beatles and their music, and those, in “official” society, who knew not quite enough to see the holes in what was potentially an explosive story: the death of a major celebrity.

It should be noted that the word "fan" is a contraction of "fanatic."  Perhaps more than any pop-culture act before or since, the Beatles have attracted a devoted and dedicated following, a significant number of whom have sifted through the band's music for profound meanings and messages (perhaps only Bob Dylan's music has undergone more persistent lyrical exegesis).



Coming down fast but don't let me break you...

Infamously, one of these fanatics was Charles Manson, the Los Angeles cult leader who believed that the song Helter Skelter (released on the band's eponymous 1968 LP, better known as the "White Album") described an imminent race-war (when in reality it was about a spiralling children's slide found at British amusement parks).

Manson directed his followers to murder the actress Sharon Tate and her friends as a means of provoking all-out conflict between whites and African-Americans (coincidentally, this occurred on the evening of August 8, 1969, the very day that the Beatles were photographed crossing Abbey Road outside EMI Studios in Westminster, which became the iconic cover of their final recorded album).

Beatle-fans' discovery of hidden meanings not only in the band's songs, but on the covers of their albums as well, was otherwise almost always more benign.

Indeed the practice was common enough to be lampooned by John Lennon, who, on the song Glass Onion from the White Album, sang: "Here's another clue for you all/The walrus was Paul..." (whereas Lennon had previously declared, I Am the Walrus).

The "Paul is dead" meme was fuelled in large part by Beatle-maniacs finding clues as to the demise of McCartney in songs and on albums back to 1966.

The Abbey Road record itself seemed to offer hints of it.


Two of the most prominent songs on the album, Something and Here Comes the Sun, were not (for the first time) Lennon and McCartney tunes at all, but were composed and sung by George Harrison.

It was obvious: Lennon could not write songs without McCartney, his now-dead songwriting partner (by that time, of course, “Lennon and McCartney” was a copyright fiction, the “partners” having mostly composed individually for several years).


The cover itself was full of clues: the white suit worn by Lennon was symbolic of... Jesus; the fancy suit worn by Ringo was indicative of the priest; the denim outfit of Harrison, the grave-digger.  McCartney was attired in a suit, shirt unbuttoned, without a tie, and wearing no shoes — which is how they bury people (or so it was said).

The licence plate on the Volkswagen parked some distance up Abbey Road read “28IF”: that is, McCartney would have been 28 years old, if he were still living (he was only 27, and the plate actually reads "Two-Eight-One", followed by the letter "F").

On the back cover of Sgt. Pepper (1967), McCartney has his back to the world, while the other Beatles face the camera.  In a booklet photo contained in the soundtrack to Magical Mystery Tour, also from 1967, McCartney is wearing a white suit with a black carnation, while the other three have white suits with red carnations.  And if the front cover of the album Yesterday and Today, which depicted the band sitting in and around a large trunk, is turned on its side, it appears as though McCartney is lying in a coffin, his eyes closed and hands folded across his chest.

The songs, too, were examined for any hint of the demise.  Doesn’t Lennon sing, “I buried Paul”, at the close of Strawberry Fields Forever?  (It was, in fact, “cranberry sauce.”)   On A Day in the Life (perhaps the last true Lennon-McCartney song), John sings, “He blew his mind out in a car”, indicating the manner of McCartney’s death, a car accident.  (The lyric in fact referred to the death of a young heir in a car crash.)

Why did the Beatles stop touring in 1966?  Because Paul was dead.  The plausible but improbable (that McCartney had died sometime recently, without anyone knowing it) became the implausible, even supernatural or sci-fi, when these hints were assembled into narrative form.  If McCartney had died in 1966 or before, who was it that was appearing as him on albums released since then?  An imposter, whose identity was variously proffered as Mike McCartney, Paul’s brother, or a look-alike named “Billy Shears” (hence the line in Sgt. Pepper) - or an android.  Isn’t the singer on post-1966 Beatles songs emphatically Paul McCartney?  These songs, the theory went, were recorded before his death: hence the band’s inability to tour to support the music, and the need to dress it up in studio effects.

On the other hand, the reporters sent to investigate the Paul-is-dead rumours were themselves from an older demographic than the Beatles.  While in the main charmed by the Fab Four, they weren't fans of their music, either.  They thus weren't knowledgeable enough about the music to bring any critical scepticism toward the fan-detectives making these claims.

In the end, a clutch of international reporters were dispatched to Scotland, and reluctantly guided to McCartney’s primitive farm, where he was compelled to emerge from private life to “prove” that he was alive (posing for pictures that, appropriately, ended up on the cover of LIFE magazine).

What amazes me is that, long after McCartney verified that he was still alive, the "Paul is dead" legends continued. 


I was prompted to write these thoughts down by a Facebook post promoting this web site.  Even before the Internet became a public utility, however, I was amazed at seeing my roommates gathered around listening to a radio program in which the narrator went over the "evidence" proving that Paul McCartney had died (played to a loop of the guitar riff from I Want You (She's So Heavy)).


These young men, either about to graduate from university or (like myself) just having done so, lapped up this stuff as though they were listening to a prophet's revelations.

This was in the early 1990s.




The magazine died soon after.

Tuesday, July 21, 2015

Styles of Auto-Garment

I was going to post a lengthy piece about the role of fashion in modern, engineered society. 

I won’t do that today, however; relevant to the subject, though, I wanted to recount something I saw many years ago while driving downtown. 


You old bag.
Photo: R.B. Glennie

I was a passenger in a car, and from the other direction approached another vehicle that I remember to be an early-to-mid 1980s Camaro, which may have been the exact model of the car pictured above (the image taken a few months ago at the parking lot of a local shopping centre). 

Except that, unusually for the time, the couple driving the automobile were older, both of them having solidly gray hair. 

I silently noted the dissonance, but the driver of the car in which I was riding was ever-ready back then with the acid tongue.  He said something like, “Don’t old people know that they look stupid driving young people’s cars?” 

In retrospect though, this unknown couple, glimpsed very briefly so long ago, were auto-fashion pioneers. 

For, whatever was the case at that time, what used to be referred to as “sports cars” are in the present day driven by people (usually men) over the age of fifty. 

This point was driven home to me, when I noticed the car pictured above. I thought to myself, “You don’t see that car very much any more”, and almost on cue, a man of retirement age emerged from the vehicle. 

I would’ve been as surprised today to see a young person driving that sports car, as I was years ago seeing older people in a similar vehicle. 

Generally, movement within the “fashion system” occurs as those of lower classes adopt the styles originated by those of higher status, who in turn abandon these fashions in favour of something new, so as not to seem déclassé. 


Never worn by peasants.
www.iclothstyle.com

At the end of the cycle, the upper classes tend to adopt modified versions of the dress identified with the lowest classes (as with the “peasant” dresses that became popular among young women of prosperous background during the 1980s). 

But fashion also progresses on a generational basis within the bourgeoisie. Youth are on the cutting-edge of fashion, of course. 

Not all or even most youth are fashion-plates, but the fashionable are young. The middle-aged and older are not at all fashionable, or at least the vast majority thereof. 

In fact, as particular clothing and other styles become associated with an older generation, youth tend to reject them outright. 

The body of a car is as much of a garment as pants, shirts or hats, and sometime over the last twenty-five or thirty years or so — perhaps right around the time my brother and I spotted these oldsters cruising around town in their late-model Camaro — sports-cars became identified with the old and no longer thus sought after by the young. 

Certainly though, just as the upper-classes appropriate the dress of the very lower classes, so too do youth take on the styles once only associated with the old. 

Eyeglasses are fashion accessories as much as cars and clothes. For ages, horn-rimmed spectacles were so associated with the old that the stereotypical “grandpa” in popular culture was almost always shown wearing them. 


Geek!  I mean, Cool.
s844.photobucket.com

In recent years, however, young people have adopted the style as the epitome of hip. I think it was the psychoanalyst John Flugel who pointed out the paradox of clothing. Clothes are a form of modesty, in so far as they cover up the naked skin. They are, though, also a means of ostentation, in that they (can potentially) draw attention to the self. A common disguise, clothing is a form of deception that people engage in every day.

Friday, July 17, 2015

The English Never Were a Nation

Nationalism has been a key force for political change during modern times, in Western civilization as elsewhere. It is, however, an assuredly modern ideology, unknown in the ancient or medieval worlds. 


www.sodahead.com

The historian Benedict Anderson defined the nation as “an imagined political community — and imagined as both inherently limited and sovereign. It is imagined because the members of even the smallest nation will never know most of their fellow-members, meet them, or even hear of them, yet in the minds of each lives the image of their communion.” 

Anderson went on, “The nation is imagined as limited because even the largest of them, encompassing perhaps a billion living human beings, has finite, if elastic boundaries, beyond which lie other nations. No nation imagines itself coterminous with mankind. The most messianic nationalists do not dream of a day when all the members of the human race will join their nation in the way that it is possible, in certain epochs, for, say, Christians to dream of a wholly Christian planet. It is imagined as sovereign because the concept was born in an age in which Enlightenment and Revolution were destroying the legitimacy of the divinely-ordained, hierarchical dynastic realm. ... Finally, it is imagined as a community, because, regardless of the actual inequality and exploitation that may prevail in each, the nation is always conceived as a deep, horizontal comradeship. Ultimately it is this fraternity which makes it possible, over the past two centuries, for so many millions of people, not so much to kill, as willingly to die for such limited imaginings.” (Imagined Communities: Reflections on the Origin and Spread of Nationalism, Verso Books edition, 1983, pages 15-16). 

In nationalist ideology, the political state grows “organically” from the nation. But in fact, the modern state preceded historically even the concept of nationhood. 

The flag of the modern state was originally raised under royal colours. It was a product of the dissolution of feudalism, the adoption of gunpowder weaponry, and the opening of the oceans to long-distance trade. 

Absolute monarchs such as Henry VII of England, Louis XIV of France, and Charles XI of Sweden centralized formerly diffused political power in their own houses, or even in the person of the king himself: thus the statement attributed to King Louis: `L’etat, c’est moi’, often translated as “I am the state.”   

Nationalism arose partly in reaction to this centralization of power by modern royal-states. Modernization (of the state, economy and society) has always served to alienate people from kin and community. The idea of nation arose as a kind of substitute form of belonging for the deracinated. 

It was, as Benedict Anderson described it, an abstract and imaginary collective. Nevertheless, potent origin-mythologies evolved for each national group. Closely linked to this myth-making was nationalists’ claim to political sovereignty, in opposition to the divine right of kings or nobles to rule society. Not coincidentally, too, nationalism was the ideology of the middle class: those who suffered social estrangement the most through the pursuit of commercial wealth, and who were moreover estranged from institutional power under absolutist regimes. 

Nationalism thus from early on has been a philosophy of challenge and defiance. Gaining intellectual force during the Enlightenment, nationalism had its first great political success with the French Revolution in 1789, when non-titled, mostly bourgeois members of the Third Estate met on a tennis court to form the first AssemblĂ©e nationale. 

The French Revolutionary Wars were intended to liberate all the other nations of Europe from arbitrary and absolutist rule. But nationalism in Europe really took flight because of Napoleon Bonaparte’s attempt to enslave all of Europe under his new-model empire. 

Nineteenth-century Germans especially became embroiled in nationalist fever – though the bourgeoisie there were unable to dislodge the ancien regime from power in the 1848 pan-European Revolution. Instead the Junker aristocracy more or less co-opted German nationalism for its own purposes, in fighting the 1870 war with France, and subsequently forming the German Empire. 


www.youtube.com

In the United Kingdom, nationalism has been strong amongst the people of the Celtic Fringe – the Irish, Welsh and Scots. Their common grievance has been, not unreasonably, against English domination. But realistically, the United Kingdom has served to blunt English nationalism as well, and indeed even a sense of nationhood among the English. 

During the four-century union of the four realms (de facto or de jure), the English have been by far the largest group, demographically and geographically. But a literal United Kingdom meant that the institutions characteristic of other modern states’ nation-building were lacking for the English. Parliament, though dominated by members elected in English districts, was nevertheless at the head of a central government, and was, moreover, not until well into the nineteenth century anything resembling a representative legislature. 

If the masses in Scotland, Wales and Ireland (which remained part of the United Kingdom until 1922) were disenfranchised by the aristocratic dominance of the Palace of Westminster (in the Commons and Lords both), so were the English working-classes and bourgeoisie. 

After the extension of the franchise through the Reform Bills of 1832 and ‘67, the Celtic nations were over-represented in Parliament, given the number of seats they possessed in comparison with their proportion of the general population. Since Scotland and Wales, at least, voted overwhelmingly for one party that would support their interests as national minorities, they were important voting blocs for the creation of parliamentary majorities (the Irish population didn’t possess the same electoral ballast, because the Catholic majority was legally or practically disenfranchised until shortly before the establishment of the Free State in ‘22). 

The Whig party, and its successor the Liberal party, dominated British politics for much of the nineteenth century, and into the twentieth, based in large part upon control of the votes of the Scots and Welsh. David Lloyd George, who spoke Welsh as his native tongue, became a highly influential British Liberal politician, laying the foundation for the welfare state during his tenure as Chancellor of the Exchequer, then as Prime Minister leading Britain to victory in the Great War. 

As it happened, Lloyd George was the last Liberal prime minister (leaving office in 1922). Labour, which emerged as a major party with the 1924 election, assumed the position formerly held by the Liberals, as possessor of the Scots and Welsh voting blocs, as well as being the main voice for social reform. 

The British Conservative party, meanwhile, received much of its support from the English heartland. But it has never been a vehicle for a specifically English nationalism (as opposed to pan-British jingoism). As (officially) the “Conservative and Unionist” party, it has been the stoutest faction in defence of a United Kingdom, even as Labour and the third-party Liberals (now known as the Liberal Democrats, after a merger with the Social Democratic party) became committed to devolution: finally, under Tony Blair’s New Labour, Scotland, Wales and Northern Ireland — but not England — were granted regional legislatures. 


cataloniawalesseminar.weebly.com

As for the monarchy, it hasn’t been in “English” hands since the early eighteenth century, when the German house of Hanover assumed possession of the crown, with their descendants (having assumed the name “Windsor” during World War I) still ruling today.  

There is an argument to be made, in fact, that the “English” crown hasn’t been truly English since before the Conquest in 1066, or at least since the heads of the houses of York and Lancaster traded the monarchy back and forth in the fifteenth century.  

Modernity commenced in Britain with the House of Tudor, which was Welsh in origin, taking over the throne. This was relinquished only with the succession of the Stuarts, a Scots house. In 1603, the new English King James I (who had been the sixth of his name as king of Scotland) formally unified the two crowns, assuming for the first time the title, “King of Great Britain”. However, Scotland still had its own Parliament until the union with Westminster in 1707. 

The Stuart kings’ relationship with the English Parliament steadily worsened, though, until the Puritan Revolution and the beheading of Charles Stuart in 1649. In this light, the English civil war could be viewed as resulting from the nascent nationalist sentiments of members of the English yeoman class — men such as Oliver Cromwell — resentful of an interloper king. 

Whatever the case, “English” liberties were secured, decades after the Protectorate, by the Dutch Stadtholder, William of Orange (who died without issue; after the reign of his successor Anne, the crown passed to the Germans). 

After the freedom of Parliament was assured under King William, and especially following the union with Scotland, the central government became all the more preoccupied with colonial affairs, and progressively less so with “home” issues. 

The most important imperial territory during the eighteenth century was the Thirteen Colonies on the Atlantic coast of North America. They were a key source not only of the primary goods which fed Britain’s growing economy, but also of revenues from duties imposed on staple goods traded from the colonies. Such exactions were ultimately to provoke British Americans into successful rebellion against the Crown and Parliament. But until very shortly before the Declaration of Independence, the agitators in the Thirteen Colonies were nevertheless claiming rights as British subjects which, they believed, were denied by their lack of representation in Parliament. 

Reciprocally, then, the sense of nationhood accrued through a people’s possession of a defined territory was, for the English, impaired by the fact that so many of their countrymen had migrated to British America and other colonies. 

Even after the loss of the Thirteen Colonies, the nationhood concept was diluted for the English by the fact of colonial/imperial expansion throughout the globe. It was not only settlement, during the nineteenth century, in the Canadian territories, southern Africa, India, and the Antipodes. The imperial project itself, to some degree at least, acted as a mortar for the nations of the British Isles, one in which the Scots, Irish and Welsh participated far out of proportion to their share of the British population. 

The institutions through which the United Kingdom projected itself throughout the world — chiefly the Royal Navy, but also regular army expeditionary forces — were avowedly British, not English, institutions (again, the military having a disproportionate number from the Celtic nations in their ranks). 

Even the non-governmental, voluntary civil-society institutions that developed in the U.K. during modern times, such as the bodies representing physicians, surgeons, lawyers, scientists and other professionals, were specifically British (or “Royal”), not English, bodies. 

In other modern nations, the capital (or largest) city has been a lodestone for the national project: Paris, Berlin, Rome, Amsterdam, etc. But London doesn’t really fulfill this role for England, let alone the United Kingdom. Until comparatively recent times, “London” itself was a political fiction, given that the City of London covered just one square mile of territory on the north bank of the Thames, and had only a few thousand citizens, with the other millions being divided and subdivided into the boroughs and cities that made up the metropolis (there’s only been a city-wide London mayor since the year 2000).  

London isn’t really the capital of Great Britain, either, with the Parliament being located in the city of Westminster. Aside from this, the London metropolis has always been too cosmopolitan to be the mere cultural capital of England, or the U.K. 

There is, on the other hand, a palpable feeling of estrangement among English non-Londoners — the people “up north” — toward the metropolis on the Thames. 

There are few other institutions around which the sense of English nationhood could have coalesced. The Church of England ought to be, and to some extent is (or was), such an agency. But the Church also expanded its own jurisdiction in tandem with the growth of British imperialism, such that it became a global Anglican Communion, second in influence (at least in Western Christianity) only to the Church of Rome. Like the other British institutions of the imperial era, the Communion was directed more to fostering Empire than to promoting English nationhood. 

The post-imperial period coincided with a gradual but inexorable decline in attendance for the official English church, such that the most influential dioceses therein are actually outside of England, or outside the “Anglo-sphere” countries entirely, in Africa and Asia (as when Anglicans in the latter regions balked at the Church establishment’s attempt to impose the ordination of homosexuals on the entire Communion, forcing the Archbishop of Canterbury to back down). 

Perhaps the only specifically English institution is the system of law, which in the U.K. is distinct from the Scottish legal framework. Going back even before the Magna Carta of 1215, English law grants the right to trial by jury, to silence in the face of criminal accusation, to representation by competent counsel, to face and cross-examine an accuser in open court, and so on. However, this system of law is not exclusively “English”, for it extends (in the U.K.) to Wales, as well. 

During the imperial era, the law courts even of self-governing dominions such as Canada, New Zealand, South Africa, and so on, were subservient to the high courts of Britain. The right of final appeal to the Judicial Committee of the Privy Council, the highest court of appeal for the Empire, was abolished for Canadian citizens only after World War II. 

But the Committee remains the court of last resort for several Commonwealth countries, into the present day (a few years ago, the Judicial Committee held its first session outside of London, when it presided over a hearing in the Bahamas). 

The lack of national institutions may underlie several paradoxes and peculiarities of the English. For example, England has been a modern country — with an urban-based, industrialized, non-farm economy — longer than any other nation. 


orb.essex.ac.uk

In spite of this, the English maintain regional variations of language and custom that date back even before modern times, remarkable for such a small country (about 50,000 square miles; the province of Ontario, by itself, is 415,000 square miles), let alone a thoroughly modernized one. 

As mentioned, nationalism is a project of the bourgeoisie, a response to the deracinating trend of modernity. It seeks to reestablish, on a mass scale, the community and social solidarity put asunder at the local level by the increased mobility and velocity of life brought on by engineering and science. 

It is also a philosophy of right, arguing on behalf of middle-class supremacy, as against royal and noble cosmopolitanism. Nationalism, or nation-statism, dwells upon the past in search of the origin story that would unite otherwise socially and culturally disparate members of a single linguistic group. 

In this, nationalist mythology often turns to outright fable. Such myth-making didn’t have a chance to take hold among the English, or at least not among them exclusively. The official story instead told of a union of nations, which thereby made English nationalism anathema to the British state. The British “imagined community” — the ultimate form of modern social unity — wasn’t the nation, but the kingdom, or “commonwealth,” as it was called even before the rise of Cromwell and the Puritans. 

But this is a construct even more artificial than the nation-state, and so the desire for community for the English settled not upon the nation, but the village, town, city, county and region. The lack of English national identity may also be responsible, at least in part, for the predominance over a long period of laissez-faire in the U.K. As a philosophy of political right on behalf of the bourgeoisie, nationalism is properly nation-statism, with a mandate for collective action and legislation in order to protect and defend the imagined community. 

As it was brought into being during the original French Revolution, nation-statism was a military project, in defence of the French nation, and liberation of other nations, from royal absolutism. The militaristic aspect of nationalism only increased following the Napoleonic wars. 

But nation-statism also became wedded to economic collectivism, so that by the twentieth century, the ultra-nationalist militants of Italian Fascism dreamt of an economy that was not only collectively owned, but also did not trade with other countries. The assumption of ownership of private industries by government, became known as “nationalization”, and twentieth century rebels against Western colonialism, even when adhering to internationalist Marxism, usually took on the guise of national-liberators before gaining power. 

Even non-Marxist governments in the former colonies of Africa and Asia, undertook large-scale economic nationalization of industries, though, in part as a means of fostering nationality. After 1945, Great Britain nationalized the “commanding heights” of industry. 

However, laissez-faire remained the abiding philosophy of British politics far longer than elsewhere in the Western world, due to the enfeeblement of English national identity. 

Officially, there was no imagined community at the national level to which people generally could assign their loyalties, and thus, no automatic lobby to act on its behalf. The “national-interest” factor which did so much to encourage the presumption toward statism and nationalization (even in the “laissez-faire” United States), hardly existed at all in the United Kingdom, due to the lack of national consciousness among the English. 

Welfare programs and nationalization also came to Britain by deliberately ignoring nationalism. Initially the Liberal party, and later on Labour, were able to gain majority dominance in Parliament in large part through the bloc-retention of seats in Scotland and Wales, the “Celtic fringe.” 

These delegates, not nearly enough to win government on their own, were supplemented by enough of the seats in England proper: either a bare majority or (more often) a simple plurality, but enough to make the difference. When England voted against Liberal or Labour, that’s when the Conservative and Unionist party came to power. 

But especially when the Liberal party of the U.K. was most successful, under Gladstone and the Welshman Lloyd George in the late nineteenth and early twentieth centuries, it appealed to the collective unity not of the English people, but of the Empire and a united Britain, with the Scots and Welsh as full partners in the enterprise. 

By the time Labour came to power in 1945, the British empire was fading away. But the Attlee government and its successors before New Labour under Tony Blair were truly socialist in their program and policy of aggressive nationalization of industry. 

But dependent on the voting bloc of the Celtic fringe, they too had to appeal to supra-national interests to get their policies into place — the interests of the industrial working class throughout Britain, as against those of the bourgeoisie (as well as rural and small-town Tory noble and smallholder interests). 

Which brings us to the British class system. No human culture has been truly classless. Even savage societies award what little material goods are available to some, while excluding the rest. Characteristic of civilization, however, is the appearance of an enormous gap in wealth between an elite and all the rest. 

The latter were usually very poor, or else completely destitute; there was also, in civilized societies of the past, a smattering of those that would be the equivalent of the modern bourgeoisie — involved in trades and professions — who enjoyed relative prosperity compared to the misery of most others. The size of this “middle class” in ancient civilizations was larger (though never approaching anything like a majority) depending on whether wealth was acquired through commerce, instead of plunder and tribute. However, the widescale resort to mass slavery throughout the ancient world inevitably affected the labour markets for free workers, including professionals and tradesmen. 

In Rome and Greece (societies that came to prominence in large part through trade and commerce), slaves were used not just for hard labour, but also in skilled trades and professions. During modern times, when industry and commerce came to the fore, so grew the middle class in size and influence. This was nowhere more so than in Great Britain. Yet, the British are as renowned for their snobbery as for any Whig sentiments. 


izquotes.com

Snobbery is indeed a function of the success of the bourgeoisie in the U.K., as nobles and gentry sought to differentiate themselves no longer by wealth (as many bourgeois became richer than, especially, minor aristocrats), but through the possession of noble and heroic breeding. The British are not unique in having class snobbery. 

But in the other European states where the nation (sooner or later) became the key socio-political community, even the noble classes sought to hide and disguise their condescension toward the masses, seeking to ally their interests, cynically or sincerely, with the nationalist project. The European nobility was, of course, chided and coerced into accommodating the interests of the bourgeoisie, including its nationalist yearnings, by the revolutionary wave that lasted from the 1770s to 1848 in its most activist phase. But even before the French Revolution of 1789, there was widespread understanding among the Gallic nobility that the absolutist system needed reform along rational (and at least somewhat more democratic) premises. 

As the insurrection became all the more radicalized, most of the aristocrats went along with the programme, voting enthusiastically in 1789 to abolish the feudal system, and renouncing their titles in ceremony at the National Assembly the following year. 

Only when Jacobinism declared them enemies of the revolution, as a class, did the nobility desert the cause (often fleeing across enemy lines only to be detained by royalist forces they were supposedly in conspiracy with). While the aristocracy was, following the defeat of Napoleon, restored to place, it never recovered the prestige it had before 1789. 

The enormous social and economic change that gathered after 1815, in spite of the reimposition of the ancien regime in France and elsewhere, as well as the outburst of liberal revolutionism in 1848, prodded the European nobility to turn away from cosmopolitanism, and to invest psychologically in the nation. 

This didn’t occur in Britain, however, or at least nowhere to the extent that it did on the continent. The U.K. had experienced regicide and republicanism long before the revolutionary tide of Enlightened times. The austere and dictatorial Commonwealth led to the swift restoration of what ultimately became constitutional monarchy. The monarch’s role became, after the end of the seventeenth century, progressively ceremonial in nature. 

But British kings and queens were, paradoxically, powerful figureheads in so far as they did inspire the loyalty even of a majority of the Scots and Welsh (though not the Catholic Irish), to say nothing of the English themselves. They were dispensers, too, of titles, offices, patronage and other privileges that, though largely ceremonial, were also key in sustaining an archaic sensibility among the British aristocracy. 

In Europe, the nobility and gentry attempted to pass themselves off as genuine natives, the rural husbandmen whence the national community sprang. Historian Peter Watson states, “In the late nineteenth century, Kaiser Wilhelm II described himself as "the No.1 German," but in saying this, as [historian Benedict] Anderson points out, "he implicitly conceded that he was one among many of the same kind as himself, that he could, in principle, be a traitor to his fellow-Germans, something inconceivable in, say, Frederick the Great's day." (The German Genius: Europe’s Third Renaissance, the Second Scientific Revolution, and the Twentieth Century, HarperCollins, 2010, p. 58n, citing from Anderson, Imagined Communities, p. 82)

By contrast, the “English” nobility was proud to trace its genealogy back not to the heathen Anglo-Saxons, but to the Norman invaders of the island in the eleventh century.

Monday, July 6, 2015

Gotham: A Functional Relic

Last summer, I spent some time in New York City (whence all the images below come). I've been wanting to convey my ruminations about the recent history of Gotham (the nickname deriving not from the Gothic-style architecture characteristic of the city, but from the 1807 writings of Washington Irving and his fellow "Lads of Kilkenny").  

I had wanted to get these thoughts (not a travelogue) done in time for U.S. Independence Day, but did not.


This was intended as a docking station for dirigibles.  Really.
Photo: R.B. Glennie


I will, however, remark on an entirely different American city: Akron, Ohio.

This is the hometown of Chrissie Hynde, leader of the Pretenders.  On that group's 1984 LP, Learning to Crawl, there is a track, My City Was Gone.

The lyrics were reportedly inspired by Hynde’s reaction at returning there, after years away in Britain (where she was a rock journalist before becoming a professional musician). 

Except that, where once there was a vibrant city core, there was left only dereliction and dissection: 

I went back to Ohio 
But my city was gone 
There was no train station 
There was no downtown 

South Howard had disappeared 
All my favorite places 
My city had been pulled down 
Reduced to parking spaces

Akron was scarcely alone in witnessing such urban decay.  Yet, it wasn't the worst hit: in the 2010 U.S. census, the city had a population of just under 200,000 — down from 290,000 in 1960.  

As Kenneth Jackson points out in his history of twentieth-century New York City, Robert Moses and the Modern City, Newark, New Jersey, just across the river from the "Big Apple", had a population of 439,000 in 1950, and just 272,000 in 2000: once a larger city than Akron, it had become smaller in just half a century. 

Similarly, Buffalo, New York went from 580,000 to 292,000 (roughly the same number of people as Akron) during the same period; Pittsburgh, from 677,000 to 335,000; Boston, from 801,000 to under 540,000; Cleveland, from 915,000 to less than 480,000; and St. Louis, Missouri went from 857,000 to 348,000 between 1950 and the end of the twentieth century.

Deindustrialization of a vast swath of the American midlands (what came to be known as the "rust belt") was in large part responsible for this decline in population of the older central cities.

At the same time, however, the total population of the metropolitan areas of all or most of these cities grew dramatically, or at least significantly, signalling the decentralization, or suburbanization, of residence and business.  The Buffalo-Niagara Falls conurbation's total population (according to a 2013 estimate) was about 1.2 million, versus about 900,000 in 1950.  The Pittsburgh metro area grew to 2.3 million in a 2014 estimate, but was 1.4 million in 1950.  Greater Boston's population was 2.3 million in 1950, and 4.5 million in 2014.  Cleveland-Elyria's 2014 estimated population was just over 2 million, but 1.4 million in 1950.  Metro St. Louis' 2014 population was 2.7 million, and 1.5 million in 1950.
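
For anyone who wants to check the arithmetic, here is a minimal sketch, in Python, of the percentage changes implied by the figures quoted above.  The values are simply the rounded census counts and estimates cited in this post, not an independent data source, so the output is only as precise as those round numbers.

figures = {
    # place: (earlier population, later population), as quoted above
    "Buffalo (city, 1950 to 2000)": (580_000, 292_000),
    "Pittsburgh (city, 1950 to 2000)": (677_000, 335_000),
    "Cleveland (city, 1950 to 2000)": (915_000, 480_000),
    "St. Louis (city, 1950 to 2000)": (857_000, 348_000),
    "Buffalo-Niagara Falls (metro, 1950 to 2013)": (900_000, 1_200_000),
    "Pittsburgh (metro, 1950 to 2014)": (1_400_000, 2_300_000),
    "Greater Boston (metro, 1950 to 2014)": (2_300_000, 4_500_000),
    "Metro St. Louis (1950 to 2014)": (1_500_000, 2_700_000),
}

for place, (earlier, later) in figures.items():
    change = (later - earlier) / earlier * 100.0
    print(f"{place}: {change:+.0f}%")

By this reckoning, the central cities lost roughly half their people (St. Louis closer to three-fifths), while their metropolitan areas grew by anywhere from about a third to nearly double: the decentralization described above, in numbers.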


Flatiron Building, Fifth avenue and Broadway, Manhattan
Photo: R.B. Glennie

Newark is considered part of the "combined New York-New Jersey-Connecticut statistical area", and this conurbation, with Gotham at its centre, has also experienced dramatic growth in the second half of the twentieth century and the first decade-and-a-half of this one: at twenty million, an increase of eight million people from 65 years ago.  

But, as Kenneth Jackson points out, New York City itself never saw the dramatic decline in population characteristic of virtually all other American cities: its population today is about 8.4 million; in 1950, it was 7.8 million.

It is true that, during the 1970s, when the city teetered on the edge of bankruptcy, crime became rampant, and significant parts of New York became "no-go zones" even for police, Gotham suffered a net loss of one in ten people.  Yet, contemporaneous declines in city-core populations were more severe for Buffalo (which lost almost a quarter of its people during the same period), Newark (13%), Boston (12%), Cleveland (14%), and St. Louis, Missouri (which lost 27% of its population between 1970 and 1980). 

What occurred uniquely in New York City during the later twentieth century was that, unlike with Akron and the vast majority of all other cities, the city did not leave.  

That is to say, in spite of the crime, the corruption, the drugs, the decay, and so on, the Big Apple did not rot away: it remained a centre of finance, advertising, publishing and mass-media.  

In this, the City of New York is a functional relic of the way all American cities used to be, with economic power concentrated in the downtown core, and the surrounding boroughs, towns and cities existing as satellites to this powerhouse. 

This was how the town of New Amsterdam was founded by the Dutch in the seventeenth century, and it was the model upon which all U.S. cities were (consciously or not) organized, during the eighteenth and nineteenth centuries. 

This applied even to cities as distant geographically from New York City as Los Angeles, California. 

However, after World War II, and the advent of the modern suburb, virtually all American urban centres — led by L.A. itself — underwent a transformation such that their downtowns were partially or almost wholly abandoned by businesses and the bourgeoisie, which decamped to the satellite towns and cities, and newer communities, in the regions surrounding the old cities. 

New York wasn't unaffected by this trend, of course: during the 1970s, as Gotham businesses fled to Long Island, or elsewhere in the tristate area, the city was forced to accept the disreputable merchants of paid sex and smut — the area around the world-famous Times Square becoming notorious for its porn-dealers and theatres, liquor peddlers and “massage” parlours. 


Not why it's called "Gotham", apparently.
Photo: R.B. Glennie

Still, the downtown core of New York City, located in midtown and lower Manhattan, didn’t undergo the wholesale abandonment and desiccation that affected nine out of ten American cities in the period after the war — Philadelphia, Pittsburgh, Cleveland, Baltimore, Atlanta, St. Louis, Los Angeles, and not forgetting the worst case of all, Detroit.  

Whereas in these places, social and economic power departed from the inner core, becoming decentralized in the suburban communities adjoining the original city, New York’s status as the global financial capital, assured that its core remained the anchor of the whole metro area (arguably, only San Francisco, Boston and Chicago similarly resisted this complete decentralization of power to edge cities). 

It is important to remember, as well, that suburbanization is scarcely a post-World War II phenomenon.

A documentary film on the history of New York by Ric Burns (brother of the more famous Ken Burns) describes how, as lower Manhattan became the undisputed centre of American business and industry, residences became segregated from workplaces, and the island-borough's population gradually expanded northward to beyond present-day Harlem, to the farmlands north of the Harlem river that had been established in the seventeenth century by the Swede Jonas Bronck (hence, "Bronck's land", or the Bronx).  

The need to expand the residential population to Long Island spurred the construction of the Brooklyn Bridge (completed 1883, and the subject of an early documentary by Ken Burns), which caused the population of that city (not part of New York until 1898) to grow to almost 200% of its former size between 1880 and 1900 (versus "just" 150% in Manhattan during the same period), becoming the largest of the five New York boroughs by the 1920s.

Suburban expansion then, as in recent decades, occurred as the more prosperous sought to escape the noise, smells, congestion and crime of the heavily-populated quarters in southern Manhattan. 

But suburbanization, as it occurred around Gotham throughout the nineteenth and twentieth centuries, differed from that which occurred later, elsewhere, in that sprawl didn’t culminate in the economic and physical enervation of the downtown (this occurred in New York, largely in the outer boroughs, especially the south Bronx and large parts of the former city of Brooklyn). 

Why, then, did New York, and select other centres, not undergo the total (or nearly total) suburbanization of the financial and social heart that was the rule during the post-war period for all other American cities? 

The United States, as the continental nation that exists today, was established in the nineteenth century. Not accidentally, this growth arrived in tandem with the development and spread of the railway.

Initially, migration to the western territories was a true step away from civilization, with the lives of frontier people endangered by weather, outlaws, isolation and aboriginal resistance.
Times Square: the only danger now is being crushed to death by tourists.  Literally.
Photo: R.B. Glennie


But the establishment of transcontinental rail after the American Civil War resulted in the “closing of the frontier”, in the famous phrase of Frederick Jackson Turner. This was in fact to turn the American west into a periphery of the metropole back east, which was itself centred in New York City. 

Eventually, too, the west coast of the U.S., centred at first in San Francisco, then in the southern California mega-city, became its own metropole, partly eclipsing New York as the undisputed dominant locality of America. 

But before that, what had occurred on a country-wide scale, in regard to rail technology’s ability to carry goods and staples over long distances, also happened in the major towns and cities of the U.S. after the introduction of commuter-rail traffic.

Thereby, at least the more prosperous of the urban population could live in outer neighbourhoods closer to, or beyond, the city limits, leaving the inner core of commercial and industrial districts with a residential population of at or near zero (the Wall Street financial district, for example, is a ghost town during nighttime hours). 

These areas were usually bordered by neighbourhoods of those too poor to move away, usually of African-American or immigrant background, in crowded, substandard and otherwise forsaken dwelling places (as with the long-notorious Lower East Side of New York city, not far from Wall Street). 

Even medium-sized American cities installed streetcars and “short-line” rails to bring people from satellite communities to the downtown. 

New York and Boston, which became large cities prior to the railway, placed their commuter rail underground, in extensive subway systems. By contrast, Chicago, which was founded after the invention of the railway (and was in great degree an invention of the railway), built elevated tracks over the city — the famous “L” — while elevated trains are a feature of the outer boroughs of New York and Boston, the latter constructed largely in the later nineteenth century, again following the invention of the train. 

The era of the rail-train was relatively brief, only a century or so, before it was overtaken by the motorcar. According to some, this occurred not because of the forces of supply and demand. Instead, a conspiracy of automobile interests, led by the General Motors company, deliberately bankrupted commuter- and tram-rail firms they purchased by the hundreds in the period after World War II. The obvious motive was to “force people into cars.” 

However, private passenger rail companies faced financial trouble and insolvency regardless of their ownership. When they weren’t owned by the municipality outright, they were often saved from closure by bailouts or outright takeover by city authorities. 

People preferred commuting by automobile, and trams and streetcars caused even further congestion to already heavily-trafficked streets. 

City planners, and the public in general, were glad to see them go, as with the elevated commuter-rail tracks, which were so noisy and ugly — all of these becoming subject to romantic veneration (like the railway itself) only decades after they disappeared. 

The automobile differs from the tram or train in that it is (for the most part) individually, not communally, owned, and cars are more versatile in how they could move about. 

Which is to say, while rail travel was practically and technically delimited in the extent to which it could be routed through a city, and had to be organized according to a strict schedule, the car faces far fewer restrictions on both counts. 

It can be driven from one place to another at any time, and can use any city street for this purpose. So, even though traffic-jams and gridlock restrict the mobility of the motorcar, and most people drive their cars according to a particular schedule, an automobile-dominant city is much less beholden to the centre-periphery character of railway-driven cities. 


The big reason that New York isn't Detroit-on-the-Hudson.
Photo: R.B. Glennie

The majority of American cities that got rid of rail-based transport systems no longer resembled New York City in their pattern of settlement. 

Instead, Los Angeles became the model. The latter grew into a big city almost entirely under the auspices of commuter-railroad travel. As Sir Peter Hall notes in his mammoth history of cities, commuter lines were often constructed by land speculators hoping to sell real-estate in the rural areas of L.A. county that later became the cities of Pasadena, Long Beach, Burbank, Venice, and so on (at more than 4,000 square miles, the county is home to eighty-eight municipalities with a total population of nearly 10 million). 

The latter became the “bedroom” communities for a regional economy that was still situated in downtown Los Angeles. But Angelenos also took to the automobile as no one else in the United States, or elsewhere: by the 1920s, according to Sir Peter, residents of Los Angeles county possessed one in every seven automobiles in use throughout the entire United States (this at a time when about one in every 120 Americans lived in L.A. county). 



Empire State Building by day.
Photo: RB Glennie

Bowing to public demand, city authorities set about in the 1930s the planning and construction of several “parkways” to ease congestion on thru-streets — these became the freeways that, for better or worse, Los Angeles is famous for. 

Authorities also dismantled the extensive street-car and commuter-train system, genuinely due to its insolvency (rather than due to any conspiracy of automobile interests). 

Given that travel between the “satellite” cities was, by car, just as easy as travelling downtown — easier in fact, as the latter was by definition more congested than the outlying areas — economic activity became decentralized between the various centres throughout the metropolitan area. 

The downtown area of Los Angeles, thereby, went into steep decline after the war, with large residential areas of the city itself becoming “no-go” zones for anyone not living there, such as the south-central district, which became notorious in the nineteen-eighties for gang activity and wholesale murder. 

The reorganization of cities under the auspices of motorcar travel occurred most radically in L.A. 

But it affected all American cities, including New York, which saw the limits of settlement stretch far beyond Queens on Long Island, into neighbouring cities in New Jersey, and north to Westchester county and into the state of Connecticut. 

The difference, though, was that New York, unlike L.A., had an underground commuter system that didn’t impede above-ground traffic, and which could not be easily or cheaply abandoned, as were the municipal trolley and streetcar systems of Los Angeles (and indeed, of New York City itself). 

Commuter travel declined steeply in N.Y., just as it did elsewhere, and during the 1970s and ‘80s, the city’s subway system itself became a danger zone, with people refusing to use it other than at rush hour (when commuters stood and sat together, their eyes looking down, afraid to catch the eye of the many criminals and crazies who seemed to be on the trains at all times). 

Still, residents of suburban New York commute by train to Manhattan and elsewhere within the city itself, just as hundreds of thousands in the outer boroughs leave behind their cars to travel to Manhattan island — living out Gotham’s status as a functional relic of the way American cities used to work. 

The endurance of commuter systems was the key to the maintenance of the other select U.S. cities’ organization on a centre-periphery basis, namely, Boston, San Francisco and Chicago. 

Except, the latter city didn’t have a subway system, but maintained above-ground commuter rail tracks (originally built privately, but taken over by the city during the 1940s and ‘50s, to create the Chicago Transit Authority). 

This isn’t an especially American phenomenon either. In Canada, the city of Toronto has maintained a Gotham-like centre-periphery orientation not only because, contrary to the trends in the U.S., it embarked on an extensive underground rail system (opened in 1954). 

It has also maintained an extensive street-car network, making it one of the only North American cities (aside from San Francisco) to do so. 

The city also has an extensive “Go-Train” commuter line, which brings people from the outer suburbs to the downtown area (of course, none of this is privately operated, as the commuter network in the city is owned by the Toronto Transit Commission). 

By contrast, the other large city in Ontario, Ottawa, was not large enough to have installed a subway system (a single, east-west line is about to be built here). It also eliminated its streetcar network in the late 1950s. 


Though the city’s status as the national capital meant that the downtown didn’t undergo the decay that characterized large American cities, the dominance of the automobile did result in the decentralization of private economic activity to the adjoining cities of Nepean, Gloucester and Kanata. Consequently, the city of Ottawa (which was amalgamated into a single unit at the end of 2000) comprises an area larger than the city of Montreal, which has more than twice the number of people.