Tuesday, March 31, 2015

The Highest Sexual Organ

The recent “niqab debate” in Canada has brought attention to the fact that hair is a sexual organ.


A woman in niqab.
muslimvillage.com

Some Muslims feel that a woman is irreligious if she fails to cover not only her hair, but her face. 

The garment used for this purpose is called a niqab.

But many more feel it is essential for devout Muslims – men and women – to cover merely their hair (but not their face) out of concern for “modesty.” The female version of this is called a hijab.

What is so immodest, though, about exposing the hair on one’s head? 

Head-wear has been a feature of civilized life going back many centuries, in the Occident and elsewhere. 

As modernity progressed, head-wear came to be worn more out of vanity than modesty — this is especially true of women’s hats. But for the vain flourishes indulged in by bohemians and Beau Brummell types, men’s hats tended toward the functional — like the stylishly-utilitarian “fedora” style of hat, which was the choice of virtually all men in the Western world from the late nineteenth century until the 1950s.

The fedora, in its many variants (such as the Homburg), was so commonplace during the first half of the twentieth century that in photographs of public scenes taken during that era, almost everyone is wearing a hat.


Hardly a bare head in sight.


President John F. Kennedy is often credited with making the hat unfashionable for men, by refusing to wear one during his swearing-in ceremony in January 1961. However, Kennedy did in fact wear head-wear during at least some of his inauguration day — the top-hat appropriate to his status as an untitled nobleman.

The counterculture of the later 1960s is also blamed for the demise of the hat in men’s wardrobes. If anything, though, the hippie movement is notable for reviving hat-wearing, which had already been in decline for a decade or more. Among other styles, there was the Australian bush hat and its derivatives, so identified with the counterculture that it is colloquially known as the “hippie hat.”

In recent times, the venerable fedora is reserved for those with artistic pretensions, and for men too vain to expose their bald heads but unwilling to wear a toupee. A woman in a hat is a somewhat more common sight, but even that is uncommon nowadays.

The most common head-wear worn by men is the baseball cap, which (as the name suggests) is scarcely a hat at all, and is never considered formalwear. Significantly, the ball-cap is commonly worn both in and out of doors. It is a concession, by contemporary men, to the functionality of the hat (as a means of shielding the head from the elements, and the sun from the eyes), whilst ignoring its formality.

For that matter, bald men in fedoras and other types of hat usually don’t remove them when coming indoors, either. Just as, until recent times, people were supposed to wear hats when out in public, decorum also demanded that the hat be removed upon coming indoors.

In The Psychology of Clothes, John Flugel observed: “We have invented a number of objects which are in the nature of transitions between clothes and houses. The roofed-in car or carriage is ... one type of such an object. The umbrella is another. As regards this little instrument with its emergency roof, it is difficult to say whether it corresponds more to a miniature transportable house or to a temporary outer garment.”

The hat is transportable shelter, too. Its removal from the scene, just as television was radically extending the reach of the public into the private realm, foreshadowed the informality and even dishevelment that characterized men’s styles especially in the second half of the twentieth century (though women’s fashions became more informal as well).

First, out went the fedora. Then, gradually, the other formal aspects of men’s attire — the necktie, the cufflinks, the suit jacket, the button-down shirt, creased pants — were put away but for “special occasions.”

The disappearance of the hat was the first step toward today’s fashions, characterized by what people not so long ago would have considered indecent exposure. 


The last of the bourgeois gentlemen.
overmental.com

Hair is a sex object, as the ancients well knew (witness the Biblical story of Samson losing his virility when Delilah cuts off his hair). The bare head was suppressed, along with all other types of corporeal immodesty, with the rise of Christianity during late antiquity, and of Islam later on.

Religious authorities understood then – and they understand now – that permitting the exposure of one’s hair usually leads to the immodest exposure of the rest of the body as well. This is why they fight tooth and nail against the immodesty of removing one’s hat outdoors.

Monday, March 30, 2015

A World Full of Mirrors

It has often been asked, “Do the media shape society, or merely reflect it?” 


What the media do every day.
imgarcade.com

But it seems to me that behaviour is influenced most particularly in the presence of a mirror – as is evident when someone catches sight of his own reflection. The person will usually at least pause, and more often fuss over hair, clothes and general appearance (I use the male pronoun for simplicity, but it goes for women too).

Thus, if the mass media truly did reflect society, they would thereby also shape it completely.

However, since the term “media” is a plural, modern communications technologies such as television, movies, newspapers and magazines, and especially the Internet are not a single “mirror,” but many.

They instead interact with society in the way a House of Mirrors at an amusement park interacts with its visitors. The attraction of such places is how they distort ordinary perception: one looking-glass reflects into another, that reflection appears in yet another mirror, and that one in another, and so on…

Just for fun, too, the reflective panes in the House of Mirrors are bent concave or convex, slightly or more drastically, so as to distort the image being reflected and re-reflected all the more. Communications media do exactly the same thing, I think.


Not particularly on topic.  But Hendrix was just so darn cool.
www.musiclessonswilliamsburg.com


They reflect reality, as does a mirror; but since the media are so omnipresent, they also – and mainly – reflect not society but each other, with the image conveyed growing more distorted the more it reflects a reflection, rather than the real thing.

On the other hand, human beings crave psychological coherence and continuity.  The psyche cannot be satisfied with the fragmentary vision that is as inevitable in the mass-media as in the House of Mirrors. 

In response, the individual, when faced with the mass-media landscape, will take the fragments, be they reflections of the real thing, or reflections of reflections (or reflections of reflections of reflections), and try to make them whole: a pastiche of the substantive and the counterfeit.

This is “reality” as we know it now. But like the House of Mirrors, for most people the “media-scape” is so much fun that they don’t bother to look for the exits. They couldn’t find them even if they tried.

Saturday, March 28, 2015

Why There are So Few Strangler Films

The slasher-film is one of the most successful genres ever. 

According to the web site Box Office Mojo, from 1978 to 2013 slasher-films grossed (in real 2015 dollars) US$4.28 billion in theatres – and this figure apparently excludes revenues from rentals and purchases for home-viewing. The web page has, however, the following note: “Many slasher movies from the '70s and '80s have no box office records, and, hence, do not appear on this chart.” It is quite likely, in other words, that slasher-filmmaking has grossed even more than the four-and-a-quarter billion figure. It is entirely possible, of course, that at least some of the slasher-movies for which there are no box-office records flopped or were otherwise unsuccessful theatrically, thus skewing the genre’s apparent profitability upward.


Wouldn't hurt a fly.
lairofhorror.tripod.com

But this is unlikely, because slasher-films have been very cheap to produce. The original Friday the 13th, for example, cost $560,000 in 1980 (according to Wikipedia; all figures in U.S. dollars). Indexed for inflation, this is close to $1.6 million in today’s currency: many independent movies of the contemporary era (which typically include no make-up or other special effects at all) are far more costly. The 2013 indie Nebraska, for example, had a budget of $12 million, even though it was filmed entirely on location in the U.S. Midwest, in black-and-white, and using digital cameras (in other words, the production was about as cheap as is possible).

Nebraska went on to generate $24 million in theatrical revenues, about double its production costs. For its part, Friday the 13th went on to generate almost $60 million in box-office revenues (nearly triple that amount in 2015 currency).

Halloween, the 1978 John Carpenter film that is often credited with touching off the slasher-film craze of the years that followed, had a budget of (at most) $360,000. This is $1.2 million in 2015 dollars. It was even more successful than the first Friday the 13th, taking in $70 million at the box-office (this is nearly $170 million in current U.S. dollars). 

The box-office gross for Halloween was, in other words, nearly 200 times its production cost. For Friday the 13th, the gross was more than a hundred times the filming budget (and this doesn’t even include the revenues from home-rental and purchase). It is no wonder, then, that so many slasher-films were produced during the 1980s; and why they have continued to be made, if not as commonly now as thirty years ago, then often enough (in spite of the criticism that has been directed toward them by moralists, film critics and feminists).
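For readers who want to check the arithmetic, here is a minimal sketch in Python, using the nominal budget and gross figures quoted above (the dollar amounts are the ones cited in this post, not independently verified):

# Back-of-the-envelope gross-to-budget ratios for the films discussed above.
# Budgets and grosses are the nominal figures quoted in this post.

films = {
    "Halloween (1978)": (360_000, 70_000_000),
    "Friday the 13th (1980)": (560_000, 60_000_000),
    "Nebraska (2013)": (12_000_000, 24_000_000),
}

for title, (budget, gross) in films.items():
    print(f"{title}: grossed about {gross / budget:.0f} times its budget")

# Prints roughly: Halloween ~194x, Friday the 13th ~107x, Nebraska ~2x.
# Note that the ratio is unaffected by inflation adjustment, so long as
# budget and gross are deflated by the same factor.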


Impotent.
abstract.desktopnexus.com

It is clear that enough people have a morbid fascination with watching homicidal maniacs commit murder that they will pay money to see what is essentially the same story, over and over again. The structure of the typical slasher-film has been analyzed (as it were) to death.

But I’m not sure anyone has looked at the reason why the maniac in these films almost always uses a knife or some other edged weapon to commit his crimes: hence the appellation, slasher film.

Slasher films dramatize a real phenomenon of modern society: the serial killer. In fact, the earliest examples of the genre, Alfred Hitchcock’s Psycho (from 1960) and The Texas Chain Saw Massacre (filmed thirteen years later on a budget of $300,000; it made a hundred times that amount at the box office), were both (loosely) based on the story of Ed Gein.

However, the psycho of the slasher flick departs from the real serial-killer precisely in his choice of weapon. Real psychopaths, especially sexually-driven killers such as Ted Bundy, seem to prefer murder with no weapon at all — manual asphyxiation. This makes sense, given the sensual aspect of the deed, not to mention its relative cleanliness — no blood is spilled.

Serial murderers prefer knives even less than blunt instruments, such as hammers or clubs (or, in Bundy’s case, a tree-branch used on the victims of the infamous “sorority sister” murders in Florida in 1978).

And again, slashing is less common among serial murderers than is the use of firearms, as with the Zodiac killings in the San Francisco area in the late ’60s, or the “Son of Sam,” David Berkowitz, who terrorized New York City in the late 1970s by picking off couples making out in parked cars. The most infamous of modern serial-killers, the Whitechapel Killer (or “Jack the Ripper”), was a genuine slasher-murderer. But the rarity of the use of knives by serial-killers does make sense from a strictly operational standpoint. Blades are, after all, a very messy means of dispatch. Blood is a very incriminating piece of evidence, and it was so even before the invention of DNA-detecting technology.

Blade-attacks are not only messy but mucky, with blood liable to stick to clothes and skin, easily mistaken for other things except to the forensic eye. More than that, though, knife crime is risky: a blade is as terrifying in the bluff as in the execution (knife-psychos have no intention of bluffing, of course), but a confrontation with a knife can easily result in injury to the attacker. For, unless a vital organ is cut in the struggle, someone injured with a knife is able to fight back for a far longer period than, say, a gunshot victim.

Perhaps it is that the real methods typically deployed by serial-killers are not as inherently cinematic as murder by edged weapon, which is spectacular by virtue of causing so much blood to spill.

On the other hand, few would say that the gunshot-death scene of Warren Beatty and Faye Dunaway at the conclusion of Bonnie and Clyde (1967) was not both spectacular and bloody; the slow-motion gunshot effect was used in most subsequent films by Sam Peckinpah, for example.

Strangulation, when depicted cinematically, is just as horrifying, as Alfred Hitchcock showed in Frenzy, his second-last film, from 1972. Shot in London, that feature was the more realistic for making its serial-killer a necktie strangler.

Yet Frenzy didn’t spawn a trend for “strangler” films (by contrast, Psycho is arguably the granddaddy of all slasher films — and it should be noted that Ed Gein, who inspired the original novel by Robert Bloch and its movie adaptation, killed his two known victims by gunshot, rather than by knife or other edged weapon).

There is something about knife-murder that makes it the unquestioned modus operandi in almost all popular movies depicting serial-killers.

It is significant, I think, that the term “money shot” was coined not in reference to the literal climax of the porno film. It was, instead, used to describe shocking and graphic scenes in horror films, such as a person’s body being cut by a knife. 

The phallic symbolism of the knife cannot thus be denied. Explicitly, implicitly, unconsciously, but inevitably, the slasher-flick has as its theme a sexually-frustrated psychotic taking his “revenge” on the pretty people who have always done him wrong; nearly always, the perpetrator is male (here, at least, the movies reflect reality). From the first — the famous shower-scene in Psycho — knife-violence has been linked to vicarious sexuality.

In the later slasher films, Freddy or Jason dispatched his (usually female) victims in one state or another of undress. It is interesting, in this regard, that it was an early slasher film, released in 1975, that gave rise to the myth of the “snuff” film, wherein people are actually killed, allegedly while having sex, or by sexually-sadistic means. 

The film was in fact called Snuff, and it has a peculiar history. The bulk of the movie had been filmed in Argentina and released there years earlier. It was just a conventional horror flick, not especially (or believably) violent, in which victims were killed by a Manson-like cult. In the mid-1970s, an exploitation-movie producer purchased the Stateside rights to the film. However, the original ending was altered, with new scenes staged so that it appears the performers were actually being killed on-camera (apparently, look-alikes of the original actors were used to this end), during or after an orgy. Then, one of the killers turns to the camera, saying, “Did you get all that?” or some such, and the movie abruptly ends.

The producer then put out “rumours” that the killings depicted in the movie were actual killings (he even reportedly paid actors to protest outside a New York theatre where the film was being exhibited); the gullible press picked up the story, and a legend was born, never to die (ultimately, theatres playing Snuff were picketed by real anti-porn feminists).

Controversy over snuff-films persists, precisely because (courtesy of the slasher genre) special make-up effects have advanced to such a degree that it is impossible to tell if someone is really being killed on screen. The actor Charlie Sheen once handed over to police a video he’d rented, which he believed depicted a woman being murdered while having sex. Authorities investigated, finding that the “snuff” film had been produced in Japan.


No Slasher.
© Corbis.  All Rights Reserved.

It turned out that the “murder” was staged, but done in such a cunningly realistic fashion that even a professional movie-star with experience of manufactured gore (he appeared in the Vietnam flick Platoon) could not tell the difference (apparently, faux-snuff films are a thriving sub-genre in Japan). Most movie-goers, on the other hand, having neither witnessed a real murder nor knowing how one is simulated by make-up effects, have little power to discern visually a real murder from a fake one.

In any case, the sexual sadism inherent in all slasher films was made explicit with the advent, in the twenty-first century, of the “torture-porn” film, such as Saw and its sequels, and Hostel (which also had two sequels).

While computer effects have allowed the makers of these movies to go beyond mere knife-murder, the killings in Saw and Hostel are almost all committed with edged weapons of one type or another (now, however, they are electric saws or drills). In the case of both porno- and slasher-films, it is clear that the (human) male obsession with fugging (in its true sense) is being exploited for profit.

Friday, March 27, 2015

Why a Painting is Now as Costly as a Jet-Plane

Pablo Picasso’s 1955 painting Women of Algiers (Version O) is being auctioned by Christie’s in New York, with an estimate of US$140 million. This got me thinking about why an object consisting of wood, canvas and oil paints is valued – by the free market – at roughly the same price as a sizable company, or even a 737 jet-plane.

It does seem in direct contradiction to the image of artists as activists against convention – especially against bourgeois capitalism. That image might be a stereotype, but it was Picasso’s own politics.

Impassive all the way to the bank.
www.biography.com

Moreover, the vast majority of working artists today are politically left-wing. This doesn't prevent the most famous of them from selling their works for millions of dollars, usually to admen or investment bankers: the cream of the crop of modern capitalism. These are the only people who can afford to pay for today’s modern art.  

More basically, though, the artist is legitimized through participation in the marketplace. That is, a “real” artist is someone who is able to sell her work – be it a painting, sculpture or mixed-media installation. It is then that she stops being an amateur or a dabbler, and becomes, legitimately, a painter, sculptor or mixed-media artist.

Aside from the Renaissance development of perspective techniques to simulate reality, a key innovation in Western art was to turn the painting into a marketable commodity.

Prior to the fifteenth century, paintings were created on varnished wood, a very expensive medium due to the lengthy preparation and drying required after application of paints. A byproduct of the voluminous Mediterranean trade of Renaissance times, canvas was readily available to artists in Venice and other seaports, as cheap waste material from the making of sails.  

Even so, wood panels remained a central medium for painting for a couple of centuries after the introduction of canvas. Only later in the Italian Renaissance, and especially when the visual arts blossomed in Holland during the seventeenth century, did the canvas become the standard form for painting. Not coincidentally, the Netherlands had by then assumed economic primacy over the Italian city-states, its commercial and naval fleets controlling most of the world’s shipping-lanes. Demand for sailing canvas only intensified, and with it the availability of rags for use in oil painting.

Canvas was not only very cheap compared to all other media. It was easy to prepare, and quick to dry. In 100 Ideas That Changed Art, Michael Bird writes (p. 92) that 

In Northern Europe, where the climate made fresco a less suitable medium than in the warmer, drier atmosphere of Italy, artists used canvas for wall-hung paintings, sometimes as a cheaper alternative to tapestry. For the purposes of painting, cloth has to be stretched taut and sealed to prevent oil- or water-based paint from seeping into the fibers and depositing a dry, dull layer of pigment on the surface. It became standard practice to stretch canvases on a wooden frame and to coat them with diluted animal glue, or size, followed by a chalky ground to which paint was applied. Even the very largest paintings constructed in this way are portable and can be removed from their stretchers or frames and rolled for transportation and storage, making it possible for artists to produce work for distant patrons and locations. As Church and aristocratic patronage of wall paintings and altarpieces was superseded from the sixteenth century onward, especially in the Protestant North, by private portrait commissions and a market for smaller paintings, canvas became the favored support. The assertive, often lifesize dynastic portraits that populated European elite residences would have been difficult to produce without canvas… 

Some of the greatest canvas paintings have been quite large. Rembrandt’s Militia Company of Captain Frans Banning Cocq, for example, was originally larger than its current dimensions of nearly twelve by fourteen feet. The so-called Night Watch was cropped at each end when, some seventy years after its commission, the painting was moved from its original place to the Amsterdam town hall (obscuring the fact that the “captain and his worthy squad of keepers standing fast” were assembling for a day-parade, not a night-watch).
 

More like "Day Parade" actually
pastorwithapurse.com


But most canvas paintings have been small enough for a single person to carry. The canvas, too, was ideal for the application of perspective techniques that were more awkward with other media. It thereby took its place in townhouses as just another window. It could also be easily removed from the wall, and sold off or otherwise disposed of as the owner saw fit.

The canvas painting was, in short, the commodification of art. The cultural ground of the Dutch Renaissance was a bourgeois class made prosperous through international trade. It was not a mass-market by today’s standards, but certainly a much larger one than had ever existed before. Its relationship to the painter was that of customer, as opposed to patron.

The presumed role of the visual artist, during the Renaissance and long after, was to create paintings for a market. If this was, to contemporary sensibilities, the surrender of artistic vision to commerce, no one would disagree that it was responsible for the most beautiful images created by a human hand.

Market forces created the environment in which painters perfected the ultra-realistic approach to art. The arts flourish generally in centres of political and economic power; canvas painting in particular finds a ready home in centres of modern finance and transport. There was Amsterdam in the seventeenth century, London in the eighteenth and nineteenth (with the Romantic predecessors of modern art), Paris later in the eighteen-hundreds, and New York City following the Second World War.

Even after the perspective painting was gradually made obsolete through the invention of photography, the canvas form has persisted as a medium for artistic expression up to the present day. Its primacy has been lessened by the use of non-traditional media and materials, as well as a resurgence in sculpture (itself a result of government support of the arts). 

But the most common artform traded for commercial purposes remains the canvas painting, whether its subject is rendered photo-realistically, or more commonly as an “impression”, or even in complete abstraction. No one places a multimedia art installation in their home. These things exist because arts councils, supported by the state or by charitable foundations, provide the space and funding to make them happen.

People do buy canvas paintings to place on their walls, which is why such artworks cannot escape their status as commodities. Since the Romantics, the painter has developed a stance, at least, of contempt for and rejection of traditional forms, and thus of the “philistine” demands of the marketplace.

The portrait of the Artist as outsider became clearer still with the Impressionist and attendant movements, which sought to use oil and canvas to convey what the new visual media of photography and film could not. These painters didn’t work to satisfy any particular demand. They instead used the canvas as an expression of subjective perceptions and feelings. Yet, in a roundabout way, the modern canvas painting — everything from the Impressionists on — was dependent on the marketplace even more so than previous artforms.

The naturalistic, perspective canvas was generally painted in response to demand, either on commission or in a particular style (pastoral, portrait, etc.) likely to attract buyers. But how could the idiosyncratic vision of the Impressionists, Cubists and so on be subject to commission, or to any demand at all? It would somehow violate the spirit of modern art for a Monet or a Picasso to create according to some specification, or even a vague outline. No, the post-Impressionist painter had to work according to his “vision”, and then subsequently market what he created, placing it on exhibit to the public in the hope that an anonymous member thereof would offer money for it.

Certainly, this art market was already in place by the time post-realist French painting came into existence. Indeed, Paul Gauguin, regarded as a pioneering post-Impressionist artist, originally made his living as an art dealer. Theodorus van Gogh, younger brother to and champion of Vincent, was also an art-seller. Christopher St John Sprigg, a young British Marxist who fought and died in the Spanish civil war, wrote perceptively (under the pseudonym “Christopher Caudwell”) in the 1930s about the role of the market in the visual arts: “In later bourgeois culture economic differentiation becomes crippling and coercive instead of being the road to individuation of freedom. There is a reaction against content, which, as long as it remains within the bourgeois categories, appears as "commodity fetishism." The social forms which make the content marketable and give it an exchange value are elevated as ends in themselves. Hence, cubism, futurism, and various forms of so called "abstract" art.”

If Caudwell was correct in his conclusions, the art world nevertheless no longer views these works, if it ever did, as “crippling” or “corrupt” at all. The modern painter rebels so decisively against capitalism precisely because it is an inevitable part of the medium — canvas, as opposed to large-scale multimedia or metallic and other nontraditional sculpture — in which he works.

The sort of commerce that is characteristic of the art market is hardly capitalism of the mass-industrial type. As the name “dealer” implies, the business is more a throwback to an earlier type of merchantry: a sector dominated by small sole-proprietors, engaged in haggling and chicanery to scratch out a decent living. Certainly, there are larger players, but in general, no one goes into the art business strictly to get rich.

Unlike most other consumer products, too, a prized canvas painting can be given in lieu of cash, either as a bequest in a last will and testament, or through direct exchange for goods and services. The canvas painting is, in that sense, precisely like a commodity. Specifically, the canvas can attain the status of silver and gold: objects universally recognized for their exchange value, but which are also valued in themselves.

In this, works of art are like money (which was originally minted silver and gold) in that they are useless. Whatever pleasing or other effects a painting may have, it is not to be used for an instrumental purpose. Like all other works of art, it simply is. Money’s instrumental value ends with its exchange for other things, and it too is presented as “illustration”. Going back millennia, coinage displayed icons of kings and gods, and artistic sophistication only increased with the invention of the printing press. Banknotes present imagery in abstraction, and the beauty found in specie often has no correlation to its actual exchange value. 

In Making Modernism (Farrar, Straus and Giroux, 1995, p. 4), Michael Fitzgerald writes that Picasso and other avant-garde painters “were deeply immersed in the wide ranging business of the marketplace. Moreover, the market was not peripheral to the development of modernism but central to it. It was the crucible in which individual artists' reputations were forged as critics, collectors, and curators joined with artists and dealers to define and confer artistic standing.” 

Modern art in particular, which objectifies the subjective feelings and thoughts of the artist, must exist in an anonymous marketplace. It is only through such a marketplace that subjective expression can find a buyer who, according to his own subjective standards, will choose it over others. Fitzgerald writes (p. 7):

By the middle of the nineteenth century, acclaim in the commercial arena rivaled the importance of institutional honors in making an artist's reputation, and the academy's prestige began to be usurped by artists who built their careers in the open marketplace. Whether one chooses to begin with Courbet's presentation of his own work in his Pavilion of Realism across from the Universal Exposition of 1855 or with the furor surrounding the Salon des Refuses in 1863, it is apparent that artists were searching for ways to establish themselves outside the purview of the academy and official patronage. The growing network of dealers began to respond. It was, of course, the Impressionists who achieved this breakthrough. ... the Impressionists did not simply create an art that repudiated the aesthetic norms of the academy. If they had done only that, they might well have remained as obscure as they were in the 1870s. The success of the Impressionists was based on a more remarkable — and more complex — achievement. By coupling their new aesthetic with the establishment of a commercial and critical system to support their art, they not only created the movement of Impressionism but also laid the foundation for the succession of modern movements that would dominate art through the twentieth century.

There is plentiful irony in all this.  Fitzgerald describes the lengths to which Pablo Picasso went to market his works - including the creation of an appropriate image to go along with them, one that emphasized the artist's disdain for the marketplace.

Of course, no Picasso work sold for anywhere near 140 million greenbacks during the artist's lifetime.  But two of his paintings did sell for more than a million dollars in 1968 - which is about 10 million dollars U.S. in 2014 currency.

Paintings such as Women of Algiers have achieved such high valuations, though, in large part due to the entrepreneurial spirit of Pablo Picasso, and his predecessors in Modern Art.

It is a lesson learned well by those contemporary artists who might be called Pablo's Stepchildren.  Mostly eschewing canvas, artists in today's scene use all sorts of media and materials for their works - including a great white shark placed in a tank of formaldehyde, which sold a few years ago for US$12 million.


Even Jaws cost less.
edsykesblog.blogspot.com

Hence another irony.  A key part of contemporary conservative philosophy is free-market economics.  At the same time, though, conservatives, today and going back decades, have been the biggest critics of modern art.  Yet the art market is, as we've seen, a bastion of free-market capitalism.


Wednesday, March 25, 2015

Thoughts on Nightlife

The Great Cities are those which, as the saying goes, “never sleep.” 

This is literally untrue, of course. 

Most people who live in New York, Paris, London, Moscow or any of the other world cities, go to bed at the same time as people in Ottawa, Omaha, Bonn or Manchester. 


Man that looks real.
pixshark.com

However, the Great City emerges as a community only during the nighttime hours. Towns and smaller cities can remain socially connected during daylight hours, which is why these places are dormant after dark. 

In small towns, life is typically slower, “easier”, because people take the time to acknowledge and speak with one another. 


Rollin' up the streets.
www.flickr.com


In the metropolis, the frantic pace precludes this sort of intimacy. The Great City is machine-like to the extent that a breakdown in one function leads to breakdown or impairment in all other functions: it persists to the extent that people are willing to live at an accelerated pace of life — at least during the daylight hours. In the seven-to-six rush of people and traffic, city-dwellers must relate to one another as monads and as means to an end.

When darkness finally comes, after nine o’clock in the evening, Great-City dwellers can relate to one another as human beings. Most residents of the metropolis, perhaps eight- or nine-tenths, do not participate in its nightlife much, or at all.

But when a city is large enough, one person in ten, or even one in twenty, amounts to a population at least as large as that of a small city. It is then that the community of the metropolis is formed.

Nightlife exists, of course, in all places big and small. But in small towns and backwater cities, nocturnal activities are literally and figuratively relegated to the periphery, conducted domestically or in covert or half-secretive public locales.

Nightlife is regarded in the town and small city, even by the participants themselves, as a derogation from, and opposition to, the norms of the community.  This is why, as they say of smaller places, they "roll up the streets after nine."

But the sort of community appropriate to the metropolis, is realized at night. As the Big City goes into slumber, its autonomic processes maintain basic services and amenities. The pulsing lifeblood of the daytime economy has been put to bed, all except for the public (and semi-private) establishments and spaces that cater to the nightlife. 

Geographically, metropolitan nightlife is not coterminous with the entire metro region. Cumulatively, it usually takes up only enough space to accommodate a town or small city. It is the quality of the society found there that makes it so cosmopolitan.


The stage, the scene, the audience.
galleryhip.com

At night, the streets become a stage, illuminated by powerful lamps, with the skyscrapers and billboards serving as a massive background, and the people come out dressed in their finest costumes, attending clubs and shows, eating out, being part of the show, the “scene.” 

This is the society of the nighttime, with its own rules, rituals and formalities. The automobile is, during the nighttime hours, a means of sociality for urban-dwellers. During the day, the car is a necessity, an estranging and subdividing vehicle for “getting from A to B.” After dark is when the souped-up or “freaked” cars come out, each one identified with its owner, altogether forming a tight community based on a mutual desire for both speed and plumage.

Though nightlife is referred to as the “fast lane”, the world of muscle cars and custom compacts, as with the whole of nocturnal metro society, is only possible because of the deceleration of the big city’s pace after dark.

Thus, roads are free and clear to be raced upon, and enough people are free from responsibility to drink, dance and dine through to the early hours. Metro nightlife is so attractive because, with its ambience, spectacle and playfulness (and its discord and danger), it offers relief and respite from the necessities of deliberation, work and stress, so essential to life in the big city.

Saturday, March 21, 2015

The Idiosyncrasies that Dylan Did Assume

In order for young Bobby Zimmerman to become intimately acquainted with the salt-of-the-earth folk music of America, he had to leave his home in the bustling metropolis of Hibbing, Minnesota (population in 1960, 17,000) for the backwoods town of New York City (population that same year, about 7 million).


Bobby "Che" Dylan
(c) 1972, 2010 Susan Kawalerski


By the time Bob Dylan arrived in the Greenwich Village neighbourhood of lower Manhattan, the revival of traditional British and American folk music was already very big business.  This belies its image as a “grassroots” movement based in coffeehouses and hootenannies.

In fact, by 1961, four albums by the Kingston Trio had already reached the number 1 position on the Billboard charts.  That year, too, business manager Albert Grossman, after lengthy rehearsals, formed a trio in order to cash in on the folk-revival boom: Peter, Paul and Mary were, it turns out, as fabricated as the Monkees.

Bob Dylan ultimately signed on with Grossman (a business relationship that did not end well).  He got a deal with the Columbia Records conglomerate, and after a slow start, Dylan became much more successful than the Kingston Trio, Peter, Paul and Mary, and even his onetime girlfriend, Joan Baez.

Flush with success, Dylan would soon turn his back on the purist folk world, which in turn rejected the new folk-rock style that Dylan premiered at the Newport folk festival in 1965, when he was backed by members of the Paul Butterfield Blues Band.  “Going electric” assured Dylan’s place in the rock pantheon, even while his predecessors and contemporaries in the folk-revival movement – like the Kingston Trio, PP&M and even Baez herself – became increasingly obscure and irrelevant as the decades went on.

His adoption of amplified instruments (especially the electric bass guitar) inspired many other folkies to “plug in”: the members of successful folk-rock groups such as the Mamas and the Papas, the Byrds, Jefferson Airplane, the Lovin’ Spoonful, the Grateful Dead, Buffalo Springfield, and so on.

Of course, John Phillips, Jim McGuinn, Gene Clark, Grace Slick, John Sebastian, Jerry Garcia, Neil Young and Stephen Stills – along with Dylan himself – were themselves greatly influenced by the success of the Beatles. 

Dylan’s real consequence for popular music, however, had less to do with going electric than with how he changed the focus of songwriting itself.  This began, in turn, while he was still performing with an acoustic guitar, accompanied only by his harmonica.

“Folk” has in recent decades been renamed, more accurately, as “roots” music.   By whatever name, however, before Dylan the form consisted largely of traditional songs and ballads that might be decades or even centuries old.  The authorship of The House of the Rising Sun (widely known in the version performed by the Animals, but made famous originally by Woody Guthrie) is unknown, for example, and the verses themselves may stretch back to the eighteenth century.

Accordingly, pre-Dylan folk/roots music was sung by performers in what I call the Assumed-Voice.   That is to say, since the folk-singer didn’t write the lyrics (which were, however, intended as a passionate expression of the plight of the little people), he (or she) had essentially to assume a role for the duration of the song.

Thus, when Woody Guthrie sang the original lyrics to House of the Rising Sun, it was understood that he, a grown man, was assuming the voice of a teenage girl who was trapped in a life of prostitution.  (Tellingly, on the Animals’ version, the gender of the song’s first-person narrator was changed to a male – though as Cracked pointed out a couple of years ago, this caused the lyrics to make no sense).

This convention was so strong in the folk/roots tradition (before Zimmerman) that even the many songs written by Guthrie himself maintained the conceit that the singer was assuming the voice of someone else (typically, the Everyman or Everywoman crushed under the foot of the Bosses).


Bob Dylan was an acolyte of Guthrie, visiting his mentor as the older man slowly succumbed to Huntington’s disease, a progressive neurodegenerative disorder – and writing his early music in the Assumed-Voice characteristic of all the old folkies, as well as of most of his contemporaries in the roots-revival movement of the late 1950s and early ’60s.

Yet, even on these earlier albums, Bob Dylan was pioneering the songwriting approach that would characterize not only his subsequent work, but also that of the “folk” singers who came after him.  This is what I call the Individual- or Idiosyncratic-Voice.

In contrast to the Assumed-Voice tradition, Dylan’s Idiosyncratic-Voice was openly about the experiences of the singer himself.  This was evident already on the albums Dylan made while he was still considered a “protest” singer, such as Girl From the North Country on The Freewheelin’ Bob Dylan (1963), or Boots of Spanish Leather from The Times They Are a-Changin’ (1964).  The Idiosyncratic-Voice approach came to full flower with the release in ’64 of Another Side of Bob Dylan.

This was still solo-acoustic Zimmerman, accompanying himself on harmonica only.  Yet, it was qualitatively different from his previous albums, in so far as nearly all the lyrics were of a personal and “confessional” nature that would characterize the music of what came to be known as the Singer-Songwriter tradition.


It was not a move welcomed by the folk-music establishment.  Irwin Silber, editor of the folk periodical Sing Out!, wrote an open letter to Dylan in which he lamented that “You seem to be in a different kind of bag now, Bob.”  Silber went on, “Your new songs seem to be all inner-directed now, inner-probing, self-conscious – maybe even a little maudlin or a little cruel on occasion.”

Left-wing boosters of folk as exclusively protest music may not have liked it, but it was the wave of the future.  After Dylan came Joni Mitchell, Neil Young, Stephen Stills, Gordon Lightfoot, Tim Buckley, Randy Newman, Nick Drake, Jackson Browne, and countless more by the 1970s.  After the folk-rock groups — the Byrds, Springfield, the Airplane — imploded or went on extended hiatus, their constituent members usually tried their hands as singer-songwriters themselves, often successfully.


Him and some of His Step-Children.
www.britannica.com

Some, of course, successfully resisted the trend away from the Assumed-Voice toward the Individual-Voice, at least for a while: Joan Baez, for example, who as late as 1971 had a hit with The Night They Drove Old Dixie Down, a cover of a song by The Band, in which she assumed the role of Virgil Caine, a southern man lamenting the defeat of the Confederacy in the American Civil War (which, one can assume, the leftist Baez would view as a great moment in U.S. and human history, for abolishing African-American slavery).

However, by 1975, Baez had shifted into the singer-songwriter mode, releasing Diamonds and Rust, the title track of which narrated her feelings about her old flame, Dylan, and which she wrote herself.  Another Idiosyncratic-Voice track on that album that received substantial radio play was Children and All That Jazz, about the stresses of single-parenthood.  

An even more prominent exception to the dominance of the Idiosyncratic-Voice among singer-songwriters has been Bruce Springsteen.  Like Dylan, Springsteen worshipped Woody Guthrie; but quite unlike Dylan, “the Boss” has never really given up on the Assumed-Voice style that characterized folk music until the mid-1960s (fusing it, as he did, with the 1950s-style rhythm-and-blues of his E Street Band).

Though Springsteen came from humble roots, he has been a multimillionaire rock-star celebrity for nearly four decades.  Nevertheless, the persona projected in most of his songs (including his most famous titles) is that of a working-class American man whose dreams and ambitions are frustrated by “the System.”

The voice of the downtrodden everyman who is ... the Boss?
www.today.com

It is the Assumed-Voice that we hear on Springsteen’s biggest hit, Born in the U.S.A., where he takes on the role of a down-and-out Vietnam veteran; we know that he is not speaking from direct experience when he sings of himself as a desperate gambler on the run in Atlantic City (done in the traditional folk style for Nebraska in 1982); and of course, he is not the person who describes himself driving through his hometown with his son on his lap in My Hometown (at the time of the song’s release in 1984, Springsteen was childless).

In this respect, then, it is Springsteen, and not Dylan, who is the proper heir to the folk tradition of Guthrie and Pete Seeger.

But in spite of Springsteen's vast success, most other singer-songwriters stick to the Idiosyncratic-Voice style established by Dylan.