Thursday, June 25, 2009

Charlie's Angels, the Unknown Comic, the Masquerade, and Nostalgia

Upon hearing the news about the death of Farrah Fawcett today, I thought I'd publish an essay I composed some years ago that concerned Charlie's Angels, the TV series which launched Fawcett to worldwide fame.


Nostalgia focuses on the period twenty years before, for two reasons: because it appeals to those entering middle age, who wish to be reminded of their ebbing youth; and because it appeals to those in their late teens and early twenties, just entering adulthood, who wish to know about the years in which they came into the world, but which they are too young to remember.


Nostalgia is, by definition, a remembrance of the past on exclusively pleasant terms. This is why some aspects of “twenty years ago” are revived, and others are left out. For example, in the ‘70s, nostalgia for the nineteen-fifties took the form of rock-n-roll “revival” shows, the staging of sock hops, the television series Happy Days (the title says it all), the stage- (and later film-) musical Grease, and a return by some youth to the greaser look. However, the youth-gangsterism and the fears of atomic war and racial violence of the 1950s were nearly forgotten.


It was similar with the nostalgia for the 1960s that took place in the ‘eighties, when “the sixties” came to be synonymous with psychedelia and protest (with the ugly edges of the counterculture mostly buffed away), while the Vietnam war, the Kennedy assassinations and the civil rights struggle were mostly overlooked (although the television series The Wonder Years, a comedy-drama presented in the half-hour format usually reserved for sitcoms, did indeed tackle larger “issues” in its depiction of the family and school life of a twelve-year-old boy in suburban U.S.A., 1968). The nostalgia for the ‘seventies that occurred in the 1990s was even more frivolous, focussing on music and fashion and leaving almost everything else behind.


Indeed, even very famous entertainment personalities of that era, people who now live in obscurity, did not see much of a revival of their fortunes in the 1990s. For example, there was in the late ‘70s a performer known as the Unknown Comic — for that was precisely what he was: a stand-up comic wearing a brown paper bag over his head, with holes for his two eyes and mouth. Mr. Comic’s national fame (his real name was Murray Langston) didn’t last very long, perhaps a couple of years. During that time, however, he appeared on numerous TV variety and talk programmes in the U.S., and toured the country as a warm-up performer for several musical acts. Nevertheless, there has been little appreciable demand for a return of the Unknown Comic to stage or screen in recent years. Why? It’s not that the Unknown Comic was terrifically funny. But he wasn’t awful, either. It may be instead that laughing at someone with a paper bag over his head doesn’t seem very funny any more — a tad creepy, even.


This figure, the Unknown Comic, was part of a subtle trend toward masquerade as a device in popular entertainment in the 1970s. Certainly, the masquerade has been used in Occidental drama from the beginning (the actors of the classical era wore grotesquely oversized masks to play their parts). It seems, however, that masquerade in fiction and drama has always been a corollary to ultimate exposure, as when the Great Wizard of Oz is revealed to be a trembling, shy little man. During the ‘70s, by contrast, the masquerade was employed in certain products as an integral and inevitable part of character, theme and narrative. If the Unknown Comic had told jokes on television and stage without his banal paper bag, it is unlikely he would have achieved the level of fame that he did. It was the performer’s assumption of anonymity through the device of a paper bag over his head, at once simple and absurd, which inspired laughter on the “face” of it.


The series Charlie’s Angels began airing on American television in 1976. It portrayed three young women, former police recruits “who couldn’t quite make it on the force”, as private detectives in the employ of a never-seen “Charlie”, who gave instructions for each case (courtesy the voice of veteran actor John Forsythe) to his charges through a speaker telephone. I don’t believe the identity of “Charlie” was ever exposed throughout the run of the series. To have done so would have undermined the premise of the show — which, incidentally and unconsciously or not, is voyeuristic in conception.


Here is this mysterious “Charlie”, his voice suitably middle-aged and avuncular in tone, employing three beautiful young women (played originally by Farrah Fawcett, Jaclyn Smith and Kate Jackson), who are never allowed to see him but who, in turn, are seen by him (“Charlie” mentions in several episodes how he’s been “watching you Angels” or refers to an incident depicted previously, to which “Charlie” was obviously a witness outside the knowledge of the “Angels”; each time, they express shock and surprise at his unknown presence among them). It was never explained why “Charlie” never showed himself, nor what his motive was for employing only young women at his private-detective agency.


The obvious answer is that “Charlie” was, in conventional parlance, a “perv”, a voyeur who became stimulated at the sight of young women going about dangerous business wearing improbably skimpy and tight clothing. And, for all the “Angels” knew, “Charlie” could have been observing them when they were at home in skimpy night clothes or engaging in coitus with men-friends. Charlie’s Angels is remarkable because it employed the masquerade device in a way that incorporated the voyeuristic role of the show’s audience. The typical viewer of Charlie’s Angels was, like “Charlie”, a voyeur, tuning in not for anything related to drama or acting (though the show was nominally a drama), but for the attractions of tight, young, female, threatened, fighting, detained and otherwise kinetic bodies.


The rock band Kiss, in its heyday during the second half of the 1970s, never appeared on stage or on TV without full make-up concealing their features. Band members in fact used their real names, but their masquerade was essential to the band’s early personae. Kiss did, after seemingly fading into obscurity, re-emerge without their makeup to further recording success (though not nearly the mass popularity they enjoyed previously) in the early 1980s. It seems unlikely that Kiss would have originally won popularity on the strength of their material alone, presenting only their homely faces to the world.


The band, which at the height of its success toured North America with a highly pyrotechnic stage-show (which also included the bassist and co-lead singer, Gene Simmons, spitting “blood”), was the fag-end of the “glam rock” movement, which was essentially the application of burlesque and Broadway to rock-‘n’-roll. Glam was initiated by young gay or bisexual men such as David Bowie in Great Britain and the New York Dolls in the U.S. in the late ‘60s and early ‘70s. Bowie, the Dolls and many other like performers wore make-up and played roles (Bowie’s was named Ziggy Stardust), engaging in a subtler form of masquerade. In popular music in general there was a trend toward obscurantism. The major “non-glam” rock groups of the ‘70s, such as Yes, Genesis, Pink Floyd and Led Zeppelin, released albums with sophisticated cover art and expensive jackets, often without including images of the band members themselves.


I can’t fathom at the moment the reason for this relatively short-lived masquerade trend. I don’t think it was recognized by anyone back then, or today, and I don’t think it has existed as a theatrical device very much in twenty years. The last show to employ it was Magnum, P.I., which ran from about 1980 through most of the rest of the decade. The lead character in that show, played by Tom Selleck, lived at the Hawaiian estate of the “billionaire Robin Masters”, who was never seen, and was featured only as a voice on a telephone (that of the great Orson Welles). Unlike “Charlie”, though, “Robin Masters” spoke to Magnum at most once in a season.


I read somewhere that the television series The West Wing, about the executive staff of the White House, was originally to have a complete on-screen absence of the President. It was later decided that the Commander-in-Chief should have a supporting role. Ultimately, “President Bartlet” (played by Martin Sheen) became so much the undisputed star of the show that The West Wing’s supposed star, faded cinema actor Rob Lowe, left part-way through its run. I doubt the show would have been successful at all if the producers had chosen to show Bartlet only from the back of his head or via a muffled or disembodied voice. It would have seemed hokey, an outdated device, “something from twenty years ago.”


However, on television starting twenty and twenty-five years ago, there has been a distinct counter-trend to the masquerade, toward exposure of the “behind-the-scenes” as a deliberate part of the show, the flip-side to the indefinite use of masquerade. Variety programmes were popular on commercial television during the 1950s and ‘60s. The last variety show on American television (running from 1967 to ‘78) was that which starred Carol Burnett. What is remarkable is that the Carol Burnett show is remembered today not for its material, but for the fact that Burnett and her co-stars, including Vicki Lawrence, Tim Conway and Harvey Korman, would frequently crack up or freeze up while delivering their lines, providing more humour for the audience than would have been the case if the performers had not “accidentally” flubbed their lines. Some skits were even written with the premise of Burnett playing a woman in fits of hysterical laughter. Another popular feature of the Burnett show was her comedic fielding of questions from the studio audience during the intro.


Also popular on late-‘70s TV were “blooper” shows, which featured discarded takes from major movies and series, in which some particularly comic flub or mistake served to expose, even if momentarily, the ruse of filmmaking itself. These blooper shows have disappeared from television mostly, if not entirely, but not because of a lack of popular demand. It seems more likely that actors and actresses have included in their contracts the proviso that their giggling and forgetfulness when making movies and television shows will not subsequently be seen in public.


A different form of exposure of the behind-the-scenes occurs on television chat shows. On the programme Live, broadcast weekday mornings out of New York City and starring Regis Philbin and Kelly Ripa, the hosts frequently call on the show’s producer, a certain “Gelman”, to answer various questions about the guests, the new prop on the set, a current event, etc. The appearance each day of one of the show’s “behind-the-scenes” executives, Michael Gelman (no doubt, given the business he’s succeeded in, a shrewd and wily man in his own right), as the doltish straight man “Gelman” to the hosts’ ribbing is not accidental, but essential to the programme. When Philbin says, as he often does, that he “doesn’t understand” or “doesn’t get” something about a guest or the show’s new contest (and proceeds to quiz “Gelman” about it), he’s surely not being very candid. If Philbin were that absent intellectually, he wouldn’t be the star of the show. It is all to make the whole enterprise seem “natural” and “spontaneous”, as though the presence of broadcast technology in the studio were an incidental thing, with no regard given to the difference between on-stage/public and off-stage/private.


The Live show is a moderated, middle-of-the-road version of the deliberate violation of public/on-stage and private/off-stage on late-night talk TV, carried in the U.S. for more than twenty years by David Letterman. Letterman’s tenure on NBC, from 1982 to ‘93, saw him engage in the exposure form more thoroughly than after he made the sweetheart deal with CBS, though. Letterman then (as now) had the traditional talk-show desk and chairs, but he would also, for example, request that the show’s director, “Hal”, emerge from behind his console in the control room and show off what he was wearing (always the same bland slacks and shoes) or engage in some other tomfoolery. Later on, Letterman would (like Philbin) pepper the show’s producer, “Morty”, standing just off camera, with questions concerning this or that “on the show tonight that I just don’t get.” “Morty” would answer inaudibly (for he was rarely miked), to which Letterman would crack a joke. Of course, Letterman and Robert Morton understood perfectly what they were doing and talking about. As with Live, the moments on Late Night where it seemed the show had broken down somewhat, where things weren’t certain, were just a pantomime in the effort to make the show seem natural and spontaneous.


To this end, Letterman would occasionally leave the Late Night set behind, pushing through heavy doors located a few feet to his left into a white hallway, where various, usually surprised people were milling about or doing their jobs, and look for laughs by (for example) yelling through a bullhorn at fellow NBC hosts presenting a live show on the street below. On one such occasion, he also used the megaphone to loudly berate those going about their business in the corridors, demanding that people “stop crowding the hallways.” Another similar gag had Letterman securing the telephone numbers of office workers in buildings across the street from the NBC studios, and then calling them for their on-camera reaction to him (it was during this that Letterman came into contact with “Meg”, an attractive young book editor, whom he called and spoke to from an office window periodically for years, until he left NBC).


The long-time chat host Johnny Carson, star of the Tonight Show (which preceded Late Night with Letterman on NBC), would occasionally ask the show’s producer, Fred de Cordova (who played a chat-show producer in The King of Comedy), a question or two. But de Cordova was rarely if ever shown on-screen. Carson was a host of the “show must go on” variety, for whom no blurring of on-stage and off-stage was tolerated. However, the most memorable moments on the show were those when order did break down, when two famous guests would ignore the host and begin talking with one another or become involved in some other hijinks. It is telling that Carson, given his hosting style, was visibly annoyed when this occurred (which may be why, in later years, very famous guests would leave immediately after their chat, begging off with “I have a plane to catch”).


It may have been this peevishness that made Carson initiate, by accident, the sort of “guerilla” format later perfected by Letterman. One evening on the Tonight Show in the late ‘70s, Carson had returned to the programme after comic Don Rickles had guest-hosted the previous night. Carson was bantering with his sidekick, Ed McMahon, when he moved to open his wooden cigarette box, and found a hinge or something was broken. McMahon informed Carson that Rickles had inadvertently broken it while clowning as guest host. Evidently, the box was expensive or meaningful to Carson, for when he saw the damage, he said, “What the hell happened to this?”, genuinely annoyed.


Carson then left the set with a remote camera and proceeded down a white hall to a studio where Rickles was taping a sitcom in which he starred as a naval petty officer. Carson entered the set and proceeded to rant at and berate the uniformed Rickles about the broken butt case. It may have been a set-up, but Carson actually seemed pissed off, just as Rickles and his co-stars (dressed in their naval costumes) appeared truly surprised and embarrassed by the intrusion. The next day, however, it was the talk of the town, reported in newspapers and on the news. The whole scene was ultimately replayed on the “best of” Tonight programme for that season, and probably again elsewhere.


It’s curious, though, that the first use of the “producer” persona in a television programme was not in the chat genre, but in a skit show, Bizarre, which was broadcast on Canadian television beginning around 1980. The show starred, and was introduced each week by, an American actor, John Byner, who had appeared in the send-up TV series Soap in the U.S. Each episode had an intro and closing with Byner on a bare stage with curtains. Sometimes he would request the participation of one or more audience members. Byner would then involve them in short bits, but just when he asked them to do something particularly outrageous and “bizarre”, a tall man in a business suit would appear on camera and sombrely yet insistently announce, “John, you can’t do that on the show”, to which Byner would seem surprised and disappointed. Byner would ask why, and he’d then offer an alternative, equally bizarre prank for the audience member to perform, to which the “producer” fellow would invariably respond, “You can’t do that, either.” At first restricted to these audience-participation segments, the “producer” (played by Bob Einstein, who was in fact an actual producer of the show, though nonetheless playing a part when he appeared on-screen) would later show up with his “You-can’t-do-that” line in the middle of skits as well, and his brief appearances stretched into skits in their own right, with Byner and Einstein leaving the studio to go outside on one occasion. It was evident by then that the “producer” intrusion was a comedy bit, but it was carried off, at first at least, half-convincingly.


This use of exposure of the off-stage in contemporary television is paradoxical. It makes what is ostensibly “behind-the-scenes” part of the show, thus seeming to abolish the distinction between off-stage and on-stage. However, most of the interaction between “on-stage”/public and “off-stage”/private actors is itself staged, a fiction, and there are probably many elements of Live, in common with Late Night and Bizarre, that are always off-stage and private (it is rumoured, for example, that Reject and Smelly privately despise each other, but that never comes out on the air). Indeed, in some ways the staged use of the off-stage serves to obscure the less pleasant or uninteresting aspects of the show. Perhaps, then, “exposure” is only the contemporary form of “masquerade.”

Thursday, June 18, 2009

Personal Computers, the Internet, and the "New Economy"

The history of the personal computer’s infiltration into almost every home is a quixotic one.


The Xerox corporation did, as early as 1973, develop a prototype personal computer, complete with a graphical user interface and a handheld pointing device, the mouse. The company shelved any plan to sell it, judging the market to be too small. The first successful personal computer was marketed by a small startup, Apple Computer. But the Apple II remained a boutique product, purchased only by those with an avid interest in computing.


The PC became attractive beyond this demographic only when a relatively low-cost model was introduced by International Business Machines — the behemoth whose origins go back to the 1880s, long before the invention of the electronic computer itself. IBM’s precursor originated the punched-card method of high-speed tabulation in the nineteenth century, and the company (which was not renamed International Business Machines until 1924, taking its name from the Canadian subsidiary) successfully exploited the latest innovations in information technology, before and after the invention of the modern computer, to become one of the largest companies in the world (complete with its own company town).


From punched cards, IBM moved on to the mainframe computers of the 1950s and ‘60s, with almost all of its business going to government or other large corporations. At the beginning of the 1980s, its executives sensed a great opportunity in shifting to the consumer marketplace. The company, with so much capital at its disposal, simply updated the old industrial method, going back to long before Henry Ford, of manufacturing on a scale great enough to make the device affordable to the workers who manufactured it. The IBM personal computer — “The PC”, as it came to be known — was judged inferior to its main competitor (the Apple, and later the “Mac” or Macintosh PC), but its relative cheapness could not be overcome in the mass market.


Moreover, the PC’s software was “open” to the degree that any startup firm could create programs for it (again unlike the Apple, which came with most common software pre-loaded). Apparently, however, IBM, a company with several hundred thousand employees, did not have the expertise on hand to quickly create the necessary software (what came to be known as the disk operating system) to make a personal computer functional. After they were unwisely turned down by Digital Research (makers of the then-dominant CP/M operating system), IBM turned to an obscure Washington state firm, Micro-Soft.


That company’s president, Bill Gates, was no computing genius, but had great business sense. His firm did not, however, actually have the necessary software, nor, apparently, the expertise to write it. Micro-Soft discreetly purchased a different software firm’s “quick and dirty” operating system, patching it up as best they could before passing it off to IBM as their own work. IBM was not able to purchase the resulting MS-DOS outright, however, instead licensing it from Micro-Soft. This was how Gates was able to leverage his very small firm into the biggest corporation in the world, ultimately dwarfing IBM. It was, again, old-fashioned business methods, rather than the superiority of the product on offer, which allowed Microsoft to become such a monstrosity, and Gates the world’s richest man.


Very different from the “new economy” described by some, Gates’ rise was almost a parody of the saga of the robber barons of the “gilded” age. Microsoft not only managed to become a monopoly interest in a key product, but Gates (like Carnegie and Rockefeller) ultimately turned to philanthropy in penance for his fifty-billion-dollar fortune.


The Macintosh computer, the first successful graphical-user-interface PC, is held in reverent esteem for its user-friendliness. The great costs of designing and manufacturing the GUI-PC nearly bankrupted Apple Computer, however, such that its bohemian operatives were compelled to accept the leadership of an old business hand, someone heretofore uninvolved with the computing industry. Thereafter, its founder (Steve Jobs) was kicked out of the company. Again, however, the ultimate standard graphical-user PC was not the Mac, but the IBM model with its substandard Microsoft Windows operating system, which came out a couple of years after the Mac was first marketed in 1984. Mass manufacturing won out over the boutique model, once again.


Given the plain facts of the development of the computer industry in the last few decades, it is hard to understand the credence given to the notion that computing will somehow overcome the difficulties associated with the “bricks and mortar” economy. The computer itself became a staple precisely through economies of scale and standardization of product, old-fashioned methods that information technology was supposedly going to supersede. For many years now, IBM has been but a minor player in the personal computer hardware market. Its place was taken by other manufacturers, such as Dell Computer, which followed the IBM model by manufacturing on a mass scale (often in low-wage countries such as Mexico).


The new-economy utopians somehow convinced themselves that information-processing would, on its own, cause the lion to nestle up to the lamb, and that the conflicts and stresses associated with the “industrial” age would disappear. As to the value of software as opposed to hardware (ie. “bits and bytes” as opposed to “bricks and mortar” — as though the latter were the most advanced material of the industrial age), software only became valuable when the hardware on which it runs was mass-produced cheaply enough for the common household. The biggest computer companies in the world — IBM, Intel, Cisco, Hewlett-Packard, Dell — are makers of hardware, not software. The exception is, of course, the biggest company in the world, Microsoft. But, as mentioned, Microsoft became so large because it had (and maintains) a proprietary hold on the computer operating system — essentially the interface between the hardware and software of the personal computer.


Besides, as the Economist noted in the 1990s, a good chunk of Microsoft’s profits has come from the sale of hardware, such as mice and other peripherals (which operated best, naturally, under the MS-Windows system). Initially, independent software producers such as WordPerfect and Lotus were able to make millions off their “killer” applications (in word-processing and spreadsheets, respectively). But inevitably, Microsoft itself introduced copycat software programs, which eventually marginalized both WordPerfect and Lotus 1-2-3 as the standard applications. Because Word and Excel (the MS spreadsheet program) were integrated into the Windows programming, they were easier to learn and manipulate than either WordPerfect or 1-2-3. However, after the introduction of Windows 95 in 1995 (codenamed “Chicago”; like the IBM PC fourteen years earlier, its launch was accompanied by a massive advertising campaign, which included the multimillion-dollar licensing of the Rolling Stones’ Start Me Up), direct sales of operating-system software became relatively small.


Mostly, Windows 95 and its successors came pre-loaded on virtually all PCs that were sold, anywhere in the world. The fee paid for this privilege was incorporated into the cost of the computer itself. In fact, software became less and less valuable the more accessible computer hardware became during the 1990s. The number of people using any particular software program was far greater than the number who actually purchased it, directly or through the purchase of a computer, due to software “piracy.” Other innovations in computer hardware, involving the digital copying (“ripping”) and burning of writeable compact discs, made not only software programs easily distributable, but also made commercially-sold compact discs subject to piracy.


When the resulting digital encodings of songs were made available through the peer-to-peer networks of the Internet, the result was ultimately the collapse in the general marketplace of the value of the music and movie industries’ “software.” Software producers have gone to great lengths to guard against software piracy, often causing problems with the programs themselves. The failure of Microsoft’s ballyhooed Windows Millennium operating system was largely due to problems caused by excessive security safeguards.


A publicly-accessible Internet became possible only after computing was made a household appliance via traditional methods of manufacturing and marketing. As mentioned, the Internet resulted from the needs of the U.S. defence department to build a computer network that was (relatively) safe from enemy nuclear attack. The Internet is, at its base, a triumph of hardware, not software, with its revolutionary method of fragmenting data into packets, routing them independently, and then reassembling them at the destination, between any two nodes on a network.
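

To make the idea concrete, here is a toy sketch of that method (packet switching) in Python. The function names and the eight-byte packet size are my own invention for illustration; this is a didactic model, not the actual Internet Protocol, which also replicates and retransmits packets that go missing en route.

    # A toy model of packet switching: the message is fragmented into
    # numbered packets, which may arrive out of order (as though each
    # took a different route), and is reassembled at the destination.
    import random

    def fragment(data, size):
        # Split data into (sequence_number, chunk) packets.
        return [(seq, data[i:i + size])
                for seq, i in enumerate(range(0, len(data), size))]

    def reassemble(packets):
        # Sort by sequence number and rejoin the chunks.
        return b"".join(chunk for _, chunk in sorted(packets))

    message = b"a triumph of hardware, not software"
    packets = fragment(message, 8)
    random.shuffle(packets)  # simulate packets arriving by different routes
    assert reassemble(packets) == message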


Again, without the subsidy provided by the government and universities, this hardware would not have been developed at all. Moreover, it took twenty-five years following its invention before the Internet was made available, by commercial means, to the general public. At first, commercial network service providers, such as CompuServe and America On-Line, resisted the adoption of Internet technology and protocols. Even Microsoft had originally planned to construct its own proprietary network when setting up its online service.


The tiny, mostly local Internet service providers that sprang up in major centres in North America and Europe around 1994 (when the development of the World Wide Web made going online a graphic experience) were remarkable for not being very profitable at all. Eventually, these startups went belly-up, or merged into ever-bigger regional and national companies, and long-established telecoms and cable-TV firms swallowed up most of the independent Internet service providers. As for the hardware side, the market for Internet equipment was for a long period held by one company, Cisco Systems, which continues to hold the lion’s share even now. This dominance of each niche in the computer industry by a single entity, whether Cisco, Microsoft or Intel, is of course more similar to nineteenth-century monopoly capitalism than to the vision of the twenty-first offered by the new-economy prophets.


There is, in fact, good reason why virtual-monopoly concerns would come to predominate in the computer industry, on both the hardware and software ends. As the Economist also noted at the beginning of the tech boom, widespread networked computing depends upon the adoption of an operating standard. Thus, either all players had to agree to hardware and software standards in the beginning (which, as we know, they did not), or a single commercial provider would come to dominate a market so completely as to shut out all other players (which is what in fact occurred).


During the nineteenth century, the cutthroat practices of Gould, Rockefeller, Edison and other robber barons made available to the common lot thousands of goods and services. Would the oil, railway, electrical, telephone and other industries have been better served by a situation of perfect or ruinous competition? This is what in fact prevailed in the early years of most machine-technological industries from the nineteenth century on. Engineered technology, on the other hand, seems to promote oligopoly, and even monopoly. This was even more true of computer engineering than of the “industrial era” technologies of railway and motorcar. During the last quarter of the twentieth century, as advances in computer engineering led to always-fresh opportunities for commercial exploitation, hundreds of thousands of startup firms came and went, with a relative handful, such as Micro-Soft, becoming behemoths. Monopolists such as Bill Gates and Andy Grove made computing available to the masses, for better or worse.

Monday, June 15, 2009

Rock-'n'-Roll, Live Performance and the "Recording Artists", Parts 2 and 3

Click here for Part 1



Part II: Radio


The recording medium, and the synthetic art produced through it, has relied upon broadcasting to promote big-label product, aiming thereby to corner the market by repetitious playback of the same songs. However, music genres have emerged, selling millions of records, without the aid of radio or TV promotion at all. This was true of jazz records in the 1920s, rock’n’roll in the 1950s, heavy-rock in the late ‘60s, disco in the ‘70s, and heavy metal and rap in the 1980s.


The record industry has supplied a demand that radio, due to its own business model (its customers are not the audience, but the sponsors), was and always will be unable to meet. Records have no “sponsor” (although recordings have been produced for “educational” or propaganda purposes, and given away free or sold at far below cost). They are sold directly to the audience, and thus the “supply”, their music content, must conform to the demands of the marketplace. It is interesting, in this regard, that truly populist music (ie. that found on records) has usually been beat-driven, and evocative of forbidden desires and notions. Music intended for broadcast, on the other hand, has tended toward the melodious, non-offensive and “pleasant.” The record industry has had to grapple with this reality all along. Radio play was necessary to sell product: but radio would only play music that was “soothing” to the listener. On the other hand, people wanted to “get into” records, and that could be achieved best by a heavy beat.


The most successful performers in the twentieth century have been able to fuse these contrapuntal tendencies, combining the rhythms, if not the percussive force, of beat-driven music with the melodic sweetness of broadcast tunes. After the explosion of jazz and swing in the 1920s and early ‘30s came the crooners, such as Bing Crosby and Frank Sinatra, both of whom leavened or eliminated the beat with smooth vocals and soothing string arrangements. By the ‘50s, music had become so “elevator” that youth were ready for rock’n’roll, which combined the rhythm-’n’-blues of the “race” records with country-and-western music. Elvis “the Pelvis” achieved in this genre what Sinatra did in the realm of jazz, that is, effectively fuse melody with heavy. Presley gave rise to the many “Bobbies” and Pat Boone, rock’n’roll vocalists who eschewed the beat nearly or entirely. Within a few years, though, the Beatles emerged with a new fusion. The group (formed in 1956 in the midst of the British “skiffle” craze) emerged out of the “beat” scene in northern England (hence the pun). While retaining the old folk traditions, however, the Beatles were also harmony and melody makers. The beat sound, perfected on the group’s early releases, proceeded to conquer the airwaves, and the record charts as well. From 1964 to ‘66, rock’n’roll was performed in close harmony by virtually all popular groups. Thereby, rock became predominant on AM radio.


Beginning with “psychedelic” music in about 1967, however, rock groups became progressively more dependent upon selling their music on long-playing records. Thus, psychedelia, acid- and hard-rock became less dependent on commercial radio. There emerged the sub-genre of “soft” rock, which laid off the beat in favour of the melody — as found in the music of America, Bread, John Denver, Crosby, Stills and Nash, Neil Young (on some albums), Fleetwood Mac and the Eagles, among many others. These artists became million-sellers through AM airplay, but the hard-rock sounds found a place on the FM band.


FM (short for “frequency modulation”), though superior in audio fidelity to AM (“amplitude modulation”), was for years a snob’s reserve of jazz, classical and talk radio. Coincident with the rise of hard rock in the late ‘60s, FM stations began to switch over to an album-oriented format (“AOR”) which played few, if any, 45-rpm records at all, instead focussing on longer, harder-rocking compositions by groups such as Led Zeppelin, Pink Floyd, Genesis or Yes. All of the latter groups sold long-playing records in the millions, but few had 45-singles that reached the top ten, or that charted at all. As the ‘70s went on, moreover, FM rock-radio became more and more popular, often coming out first or second in local markets. The AM band, in turn, moved away from top-40 toward news- and sports-oriented talk-radio, or the “classic” format. But, as FM radio became more popular, it took on the former AM aversion to beat-music.


This is perhaps nowhere better illustrated than in the career of Genesis. Formed in the late ‘60s in Britain, the group was an AOR darling throughout the ‘70s, when it had very few hit singles (in contrast to album tracks known to millions through FM airplay).[vii] In 1975, however, the group’s lead singer and chief songwriter, Peter Gabriel, unexpectedly quit the band for a solo career. His replacement was even more unexpected: Phil Collins, who was the drummer for Genesis. Thereafter, Genesis’ music took a very MOR (ie. “middle of the road”) turn. While the band would rock out at concerts, their (enormous) radio hits were characterized by a minimization of the beat in favour of the melody. In the late 1970s, FM radio would have nothing to do with punk music, for example, just as it did not play heavy metal during the 1980s. For the same reason, FM radio became very comfortable with the synth-pop sounds of the ‘80s, as represented by the Human League, Wham! and Gary Numan.


Crucial to the synth sound was the electronic drum, which could provide a beat without the beat being so hard. Music producers had for years looked for ways to muffle the sound of drums (as described by the Doors’ drummer John Densmore in part 1 of this essay), so that they would not drown out the melody (and thereby, but for transient periods, cost the record its radio airplay). With electronic drums, the sound could be modified at will. In any case, FM commercial-radio missed out on the hard-rocking sound — a mixture of punk and metal — that emerged in the United States during the mid-1980s, especially in Minneapolis (with Husker Du and the Replacements) and Seattle (Soundgarden, Pearl Jam, Nirvana, etc.). These rock acts and many others were at first snubbed by the big record labels, too, until the late 1980s. It was, of course, Nirvana’s 1991 disc, Nevermind, which turned “grunge” into a hot property. Even so, it did not become evident on radio until the mid-1990s, when Soundgarden’s Superunknown became a million-selling smash.



The grunge style became massively popular via a third medium, cable television. MTV and its competitors and derivatives fulfil the role of AM radio during its top-40 days, albeit on a nationwide, televisual scale. In contrast to the tight formats that evolved as FM became the more popular band in the 1970s and ‘80s, AM hit-radio would play any style that was popular, ie. that sold records. As late as the ‘70s, a listener to top-40 might hear Olivia Newton-John’s Please Mr. Please, followed by Popcorn by Hot Butter, and then Fame, by David Bowie, the set concluding with Sundown, by Gord Lightfoot.


Top-40 was a product of a time when marketing was not so sophisticated as to be able to segment local populations into target audiences (and thus, narrow formats). The goal was instead to get as many listeners as possible, by playing the most requested songs, regardless of their musical style. Until comparatively recent times, the FM band was difficult to receive through portable radios. AM was the more popular band, simply because more people had access to it. Although top-40 was criticized for “always playing the same songs,” it was far more amenable to populist pressure than later FM radio, with its rigid playlists. If a record sold, or if enough people called up to request a song, then an AM music station would play it.


The Beatles first gained notice in the U.S. after the famed New York D.J., Murray “the K” Kaufman, noticed the “boards lighting up” after he played their early hits (Mr. K. went on to promote the band’s first concert tours).[viii] Similarly, Simon and Garfunkel’s original acoustic version of The Sound of Silence became an AM-radio favourite in 1965, even as the album on which it was released was a complete bomb. CBS records producer Tom Wilson dubbed a rock combo onto the track, which became the monster hit that is known throughout the world, along with dozens of other Simon and Garfunkel hits.


It is the same now with music television. When, in 1991, music-channel programmers noticed that Nevermind was flying up the charts, they placed the video clip of the lead single, Smells Like Teen Spirit, in heavy rotation. It became an instant classic, and was soon after parodied by shlock-rocker “Weird” Al Yankovic. However, “grunge” became domesticated, emphasizing the melody over the beat, just as AM radio had rendered beat-music into soft-rock after a few years. The programme-content of AM radio, as with MTV now, was always biased toward the middle of the road. In response to popular demand, they will play beat-driven music. But most of the time, it is the melody that is so prized by a mass audience. “Hard” music drives away a significant minority of any listening/viewing audience, where soft music does not have a similar, perverse effect. At various times, as we saw, talented acts such as Elvis, the Beatles, Nirvana and Soundgarden could merge the melody with heavy, to mega-success. However, as this attracts many others without the finesse for melody, or for heavy, the music becomes segmented, with the heavier styles going “underground”, and the softer turning into “pop.”


In the mid-‘60s, American acts such as Simon and Garfunkel and the Byrds were able to fend off the British invasion by adopting the harmony vocals typical of the English beat groups. It is significant that by 1967, the biggest American group was the Doors, led by Jim Morrison, who sang unaccompanied by harmony. In many ways, the Doors established the arrangement rock groups have subsequently imitated — a single, domineering lead vocalist backed by a bass-drum-guitar combo (in the Doors’ case, the bass part was assumed by the organist, Ray Manzarek, playing a keyboard bass with his left hand).


The change was demonstrated by the progress of the music of the Who. The vocal parts on all of the group’s early singles — such as I Can't Explain, Anyway Anyhow Anywhere, Substitute, The Kids Are Alright, I'm a Boy, Happy Jack and Pictures of Lily (all from 1965-67) — were characterized by close harmonies. The group’s breakthrough U.S. hit, I Can See For Miles, from 1967, has vocalist Roger Daltrey singing the verses solo, joined by guitarist Pete Townshend and bassist John Entwistle on the famed chorus (“I can see for miles and miles and miles and miles and miles...”). By the 1970s, however, all Who songs were solo-vocal performances by Daltrey, Townshend or Entwistle (on his own songs).[ix]


This switch occurred even more dramatically with the Beatles. Harmonies were still evident throughout the Revolver and Sgt. Pepper albums, from 1966 and ‘67, respectively. By the self-titled “white” album, in 1968, the four Beatles not only stopped singing harmony, they stopped composing songs together.[x] “Psychedelic” music is distinct both from the hard rock that followed it and from the beat-driven melodies of the 1964-66 period, in that it retained vocal harmonies while also rocking it up. When the harmonies were abandoned soon after, “psychedelia” became hard rock. This is witnessed also in the career of the Yardbirds.


Formed in London in the early ‘60s, the Yardbirds featured young Eric Clapton on guitar, with him and his band-mates performing British renditions of American r-’n’-b. Signed to a major label, the Yardbirds then veered into harmony-pop territory (as with For Your Love, in 1965), which outraged blues-purist Clapton, who quickly left the group. Another guitar wizard, Jeff Beck, joined the line-up, to continued pop success. Later, studio man Jimmy Page (who played on the Who’s first single, I Can’t Explain) came in on bass guitar. In ‘67, the Yardbirds went psychedelic, like all other rock acts. By the next year, Beck was gone and the group fell apart. Page organized a new lineup, with Robert Plant on lead vocal and John Bonham on drums. Bassist John Paul Jones had sessioned with the Yardbirds, and so he took over that instrument while Page became the lead guitarist. This act toured briefly as the New Yardbirds, but thereafter changed their name (at the suggestion of John Entwistle) to Led Zeppelin.


On Zeppelin’s records, the only voice heard is that of Plant. Page, though the producer and musical director, did not sing. The rare harmonies on Led Zeppelin songs were achieved by double-tracking Plant’s vocals, or with session vocalists. Between the last of the Yardbirds records in about 1967 and the debut of the successor band, Led Zeppelin, in late ‘68, the switch from harmony vocals to a single voice was complete. It was Jim Morrison who led the way, with the smash hit Light My Fire, from 1967 (a three-minute edit of the seven-minute album track). However, the biggest Doors hits aside from Light My Fire were relatively lightweight tracks such as Love Me Two Times, Hello, I Love You, Touch Me, Tell All the People, Love Her Madly, and so on. The sombre The Unknown Soldier, from ‘68, barely scraped the top 40. The brief rule of the Lizard King was bought through AM radio. Even “blues-purist” Clapton became an MOR success, thanks to AM and, later, MTV. It was as if musicians, unconsciously, abandoned melody as their music became heavier, while others abandoned the beat when melody became more important to their songwriting.



Part III: The Long-Player and the Compact Disc


The premier music producers of the rock-album era, be they musicians such as Jimmy Page or Pete Townshend, or non-performers like Phil Spector or George Martin, were the true “recording artists” of the time, using recording media (chiefly multitrack audiotape) to create a novel and unique artform, music that could not be reproduced with any fidelity in live performance. The sounds were composed by them, through the skilful editing of the multiple, disparately recorded parts, as a painter constructs a picture or a composer a score. Recording media have never been exploited particularly as self-conscious art. The true “artistry” of the recorded form was realized not (as with the visual and plastic arts in the twentieth century) in abstraction and obscurity, but in the most popular, most “accessible” music.


However, popular recorded music, as it depended upon artifice, was indeed “abstract”, transcendental, just as the visual arts in most times and places have not aimed for complete naturalism in depiction (as they did during classical and Renaissance times). The “wall of sound” and the classic albums defy nature, too, if authenticity is to be judged by the ability to perform a piece of music live. The greatest records attract and obsess the many in the same manner as do the greatest paintings, poetry, novels, or operas. All great artforms objectify corporeality and transcend experience. The recording arts have been a potent cultural form because their artifice is ignored or dismissed. Recorded music is the aural decor of contemporary society, a programmed sonance that seems to have encouraged the visual and plastic arts toward aggressive non-representation or “fragmentation”. During the twentieth century, aurality became a primary means of cultural enclosure, through recordings and radio (and also through telephony and television), as vision was exploited by artists in all media to estrange the psyche from itself.


The rock era, which extended to about 1982, was killed off by the album format the genre perfected during the 1960s. Before the Beatles, popular music was heard mostly on 45-rpm singles. Full-length albums were usually collections of singles, plus a few other titles (usually covers). This was the pattern followed by the Beatles, the Stones, the Who, the Yardbirds and the rest, until Lennon and McCartney made albums as popular as singles (and extended-play records), starting with Revolver in 1966. After 1967, nearly all rock bands, and performers in other genres as well, concentrated on the production of albums over singles. Singles were no longer released independently of albums; they were simply tracks taken from the albums, as radio advertisements.[xii]


Thus, during the late ‘60s and throughout the 1970s, album production became much more technical, such that the gap between studio and performance yawned all the wider. The master LP artists during the ‘70s were Pink Floyd. Their 1973 release, Dark Side of the Moon, took nine months to record, and exploits the recording medium to its fullest, “a stereo wet dream for hi-fi snobs everywhere.”[xiii] The “rock opera” from 1979, The Wall, also used the recording medium as an artform. Accordingly, Floyd had to tour the world with an elaborate light show and giant props (such as an inflatable pig during their tour for the dreary 1977 album Animals) to disguise the fact that no matter how well they played their material on stage (often not very well anyway), it could never be played as well as it sounded on record.


Pink Floyd, which came late to the London blues scene, never released a standalone live album during their active career. They appeared in a “concert” film, which was not in fact a concert at all, but the band playing, alone, amid the ruins of Pompeii. There is no legend of classic Pink Floyd concerts, because there haven’t been any. The band was a creature of recording media, a fact perhaps acknowledged by the members’ virtual non-appearance on album covers and non-communication with the press and public after about 1970.


This divergence between recorded and live performance plagued nearly all the major acts of the ‘70s. Through albums, bands could incorporate classical or other influences that, while successful on record, were difficult to carry off again on stage. The album-oriented music scene encouraged individuation in the listening experience, but the concert performance of album music, with its flashy effects and theatricality and maudlin singing, could no longer encourage communion among the listening public. Rock became exhausted, passé, when it became too artificial, and unable to effect psychic and cultural enclosure. Listeners turned to music, namely punk, which was deliberately unpolished and basic enough to be indistinguishable between live and recorded performance. Punk evolved into New Wave and heavy metal, musical forms of disparate aural character that were alike in that they were each direct and simple enough to be performed as well on record as on stage. Thus these two genres, while largely neglected by commercial radio, produced million-selling artists and sold-out tours, by blending recording and concert media.


Once again, for the punk generation and their successors, 45 singles became as important as albums, not to get airplay, but to sell product cheaply to their many teenage fans to listen to in the “privacy” of their own homes.


Punk explicitly rejected the studio “trickery” of dinosaur groups like Zeppelin and Floyd. But punk’s successful translation to the recorded medium depended on the recording technology developed to accommodate the super-groups of the rock era. Music from the early years of records was nearly always pleasant and singable, because anything very hard would not have been audible back then. Only when the guitar, bass and drum parts were recorded separately from one another could the musician be free to play as aggressively as he pleased, without worry of drowning out or falling out of time with the other players. This very multi-tracking is what killed the spontaneity of rock music, as we saw, and the post-rock artists of the ‘70s and 1980s overcame this problem by pushing music even further into automation and electronics, by adopting the synthesizer.
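

The point can be put in miniature. Below is a minimal sketch of a multitrack mixdown, in Python with NumPy (the sine and square waves and the gain figures are invented stand-ins for recorded parts, not any actual studio practice): because each part sits on its own track, the balance between an aggressive drum take and a delicate melody is decided at mix time, by the engineer, rather than at performance time, by the players.

    # Each part is "recorded" separately, on its own track.
    import numpy as np

    SAMPLE_RATE = 44100
    t = np.linspace(0, 2.0, 2 * SAMPLE_RATE, endpoint=False)

    vocal  = np.sin(2 * np.pi * 440 * t)           # stand-in for the melody
    guitar = np.sin(2 * np.pi * 220 * t)           # stand-in for accompaniment
    drums  = np.sign(np.sin(2 * np.pi * 110 * t))  # harsh square wave: the "aggressive" part

    def mixdown(tracks_and_gains):
        # Sum the independently gain-adjusted tracks into one master,
        # then normalize so the combined signal does not clip.
        mix = sum(gain * track for gain, track in tracks_and_gains)
        return mix / np.max(np.abs(mix))

    # The drums can be played as hard as the drummer pleases; the mix
    # simply turns them down relative to the vocal.
    master = mixdown([(1.0, vocal), (0.7, guitar), (0.4, drums)])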


Electronic instrumentation was employed by the New Wave, and more radically by the New Romantic, Goth and other “synth” bands during the early 1980s, not (as previously) as an often gimmicky backup to conventional performance, but predominantly, without “analogue” playing at all. This repaired the disjunction between recorded and live performance, as synthesizer backing tracks could be programmed identically for studio or stage. Rap and hip-hop, which also emerged as a popular form during the 1980s (its roots going back a decade to the “ghetto” streets of New York City), resolved the tension between studio and live music differently, namely by using long-playing records as the backing track to the rapper’s rhymes. While sales of new LP records declined throughout the 1980s — the high-water mark for LP sales was 1979, and thereafter they came to be replaced by cassette tapes, and then compact discs — the market in cheap, second-hand long-playing discs boomed, which was a boon to the rap music scene. The predominance of “programmed” music (game-show, commercial and series themes, for example), especially in the later forms of rap and hip hop, reflects the accessibility of this material to the all-important DJs. By the late 1990s, rap and hip hop were in the position of rock twenty and twenty-five years earlier: the single best-selling musical form, a form that was nevertheless regarded as “noise” and otherwise derided for its vulgar and violent lyrics by nearly everyone over the age of 25.


The 1990s’ resurgence in rock — in the form of grunge — was a fusion of punk and heavy metal, a more “mature” rendition of the darkness-and-death motifs of the latter style. Nirvana, Soundgarden and numerous other groups from the Puget Sound area gained popularity as live bands, and as their sound deliberately eschewed the complexities of rock acts from the earlier era, it could be more easily translated to disc. The term “grunge” seems to have been derived from the sloppy, second-hand clothes worn by the groups and their “slacker” fans, and their recorded music was at first available only on local labels, such as Sub Pop. It reached mass popularity with the 1991 release of Nevermind, by Nirvana. The band’s guitarist and lead singer, Kurt Cobain (who committed suicide in April 1994), was probably the most talented songwriter of the Seattle performers. The grunge style was a perfect incorporation of Cobain’s mentality, which, as it turns out, was not only suicidal but severely deranged, even borderline psychotic. Nevermind was a good album, appropriately straightforward, and all the Seattle bands released multi-platinum albums in turn, but they were all more potent as live acts than as recording acts.


The grunge concert was nothing like the lazy, hazy stone-fests hosted years earlier by Pink Floyd, Led Zeppelin and Chicago. Rather, the driving rhythms and singing style of grunge bands, at their most powerful, effected a literal merger of the audience with itself, such that at its focus, below the stage, the crowd would turn into a “mosh”, a mass of dancing, kicking, jumping, moving people, into which the lead singer would often dive, the performer literally becoming part of the audience. The more adept Seattle acts became with the recording medium (as Soundgarden did with their final two albums), the less “grunge” they became. Cobain may well have sensed the end, the transformation of the style he’d pioneered into a job, an artefact, a technique, removed from the psychic enclosure he’d evidently gained from playing music his way. Having failed weeks earlier to do himself in the soft way (overdose), he did it the hard way (gunshot). Just about when Cobain pulled the trigger, Soundgarden had released Superunknown, their breakthrough disc, and definitely a product of the recording, as opposed to live, medium. The band became super well-known, and their next release, Down on the Upside (1996), was even better than the last. But the group, too, realized the exhaustion of the grunge form. Committed by then to recording technology, Soundgarden understood that they could not reproduce on stage what they’d created in the studio, and so (honourably) went their separate ways. Since the death of grunge, rock has remained steadily, but not explosively, popular, and has accepted the hip hop form to a great degree (most popularly with Kid Rock and Linkin Park), which again aids in bridging the gap between the stage and recording forms.


The compact disc, which became the standard playback format in the second half of the 1980s, improved upon the convenience and durability of the long-player, and the sound quality of the cassette. It could play, continuously, longer than either of these formats: about 74 minutes. Music production in digital conditions obviously expands the possibilities for studio “trickery”, and so the last twenty years have seen the partial re-emergence of the Spector-like musical director, featuring names such as Rick Rubin (founder of the Def Jam label, which released most of the popular rap in the early days), Robert J. “Mutt” Lange (producer of numerous popular acts, who lifted his former wife, Shania Twain, from the obscurity of Timmins, Ontario, to international super-stardom), Glen Ballard (who similarly turned Ottawa’s Alanis Morissette from teen queen to rock goddess), as well as Hull, Quebec native Daniel Lanois, producer of U2’s smash The Joshua Tree.
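

That 74-minute figure, incidentally, translates into a then-enormous quantity of data. A back-of-the-envelope calculation from the standard CD-audio parameters (44,100 16-bit samples per second, in stereo; the variable names below are mine) runs as follows:

    # Rough arithmetic for standard compact-disc audio.
    SAMPLE_RATE = 44100    # samples per second
    BYTES_PER_SAMPLE = 2   # 16 bits
    CHANNELS = 2           # stereo
    MINUTES = 74           # nominal maximum playing time

    total_bytes = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS * MINUTES * 60
    print(round(total_bytes / 1e6), "MB of raw audio")  # about 783 MB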


The compact disc, during its fifteen-to-twenty-year reign, created music super-stars of unprecedented popularity and exposure (aided by the rock video and a heightened interest in pop music stars by periodicals and newspapers), whose success depended upon slickly-crafted full-length discs, integrating technology and performance to such a degree that the music could not be reproduced capably or with fidelity on stage, or not without the aid of a large backup ensemble and advanced equipment. The need of music acts for a large number of backup musicians, and the latest in audio technology, was a prime cause of the inflation of concert-ticket prices during much of the 1990s, far beyond the general rate of inflation (which was in fact negligible during the decade). Even then, performance could never live up to the clear sound heard in the digital format, and acts that soared to the top, selling millions, on the strength of well-produced discs found their follow-up releases underrated and ignored. Music produced for compact disc may, at a very basic level, not be translatable to the stage.


The compact disc rendered music into an appliance, an aural background not particularly attended to. CD music, because it is a translation of computer code, somehow resists the listener involvement in the sound that the long-playing record inspired. It evokes not social enclosure, but domesticated individuation. The live reinforcement of the kind of society created by bands and styles is not achievable in the digital world, or is available only at too much of a premium. This may have been the reason for the remarkable popularity during the ‘90s of the “rave” dances, with their associated “electronica” style, the thundering, atmospheric sound that, while completely dependent upon electronic technology, was scarcely heard through the recording form at all. It was indeed music only for the drug-induced groove of the rave scene, and only a very few outfits that began on the scene (chiefly in Britain, where the movement began) have become major recording stars in their own right — but performing music that is far more conventional and commercial than what is typically played at rave dances. The great popularity among the young of electronica and its “underground” stars shows that this form has used electronic technology, previously associated only with the studio, as a live form. There is, in fact, no real persistent structure to electronica “songs”. Different beats and styles are merged together, change abruptly from one to the other, and so on.

[footnotes i to vi are in part one]

[vii]. According to Wikipedia, the only Genesis single that charted during the Gabriel era came in 1974, as compared to several dozen top-10 U.S. and U.K. singles after Collins took over as lead singer. It is the same with the other big album-oriented groups: Pink Floyd’s only hit single in the early ‘70s was Money, from the album Dark Side of the Moon, which itself remained on the charts from 1973 until the early 1990s. Led Zeppelin did not usually release any singles from their albums. Yes’ only hit single from the ‘70s was an edited version of the eight-minute Roundabout, from 1972.
[viii]. Kaufman, who dubbed himself the “fifth Beatle,” was a pioneer of FM commercial radio when, at the end of 1964, his AM station switched to an all-news format. He became the programme-director and lead DJ of WOR-FM in New York in 1966, the first all-rock station. The American sitcom WKRP in Cincinnati, broadcast in the late 1970s, was loosely based on the experience of one of its creators, who worked at an Atlanta radio station in the late 1960s, when it (as with the fictional WKRP) switched from “elevator” music to rock. The actor who played D.J. Johnny Fever, Howard Hesseman, himself worked as a jock at the first San Francisco radio station to go rock in the late 1960s.
[ix]. In this respect, the later Who were closer in style to their earlier incarnation as the blues-oriented High Numbers.
[x]. Hey Jude, released as a single earlier in 1968 and concluding with the famous “na-na-na” refrain, was a last hurrah for the Beatles as a harmony group. Even this part was essentially tacked on to the original song.
[xii]. Acts that concentrated on singles before albums remained, during the 1970s, a remunerative although not prestigious sub-sector of the record industry. See Robert A. Hull, “My Pop Conscience”, in Rolling Stone: The Seventies. Edited by A. Kahn, H. George-Warren, S. Dahl (Boston, New York, Toronto: Little, Brown and Company, 1998), pp. 36-39.
[xiii]. “Pink Floyd.” The Harmony Illustrated Encyclopedia of Rock, 3rd. edition. R. Bonds, ed. (New York: Harmony Books, 1982), pp. 181-183.

Rock-'n'-Roll, Live Performance and the "Recording Artists", Part 1

Part I: Recording


Songs published during the earlier part of the era of recording almost always had a traditional beginning, middle and end. This was true right up to the 1950s, but in the following decade, certain “artists” began to exploit recording for its own potential, indifferent to how the music would be performed live. In particular, the rise of independent music producers, such as the eccentric Phil Spector, saw the creation of songs that were meant to be heard as studio recordings, not recorded versions of songs intended for live performance.[i]


Spector was the originator of the famed “wall of sound” technique, in which “layers” or tracks of voices and instrumentation (including string and bass ensembles, but also several different guitar, piano and drum parts) were used to create a thundering, “heavy” effect for songs such as Da Doo Ron Ron (credited to the Crystals) or Be My Baby (the Ronettes), music that was really lightweight pop. Spector wrote or co-wrote most of the songs recorded by the Crystals, the Blue Jeans, the Checkmates, the Ronettes, and so on, and none of these groups were viable acts as such until Spector assembled them for the purpose of recording songs; the “wall of sound” itself could only be achieved in a studio.


Indicative of this is the use, on many of Spector’s ‘60s hit singles, of the “fade-out”, in which the chorus is usually repeated as the volume of the recording is gradually lowered to nil. Obviously, such a “conclusion” for a song could not be carried out on stage. Many Spector songs also departed from tradition in Western music by including, say, only two verses instead of three, and by ignoring the conventions of line composition. Certainly, the Spector acts actually did tour — not individually, but together, as part of old-style (paradoxically) revues reminiscent of the early recording age, in which each act would perform for a quarter-hour by turn, in front of a large orchestra with brass and woodwinds. It was the only way, outside the studio, that the “wall of sound” could be reproduced at all. But none of the Spector acts from the ‘60s were known for their prowess on stage; indeed, the acts themselves and their individual members were scarcely known then, and are not remembered at all in the present day. Spector took an established artform, the 45-rpm “single”, songs for which had previously consisted of recordings of live, in-studio performances, and created music that used the medium of recording itself as the starting point.
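The fade-out is, in modern terms, nothing more than a gain ramp applied to the tail of a finished mix: trivially simple for the recording medium, impossible for a band in a room. A minimal sketch (the signal here is a stand-in, not any particular record):

```python
# A minimal sketch of the studio "fade-out": a gain ramp applied to the
# tail of a recorded signal. The signal below is random noise standing in
# for a mixed song.
import numpy as np

rate = 44100                             # samples per second
signal = np.random.randn(rate * 30)      # stand-in for 30 s of a mix
fade_seconds = 8

n = rate * fade_seconds
ramp = np.linspace(1.0, 0.0, n)          # volume lowered gradually to nil
signal[-n:] *= ramp                      # the last 8 seconds fade away
```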


Spector is the only person, aside from George Martin, to have produced an original album by the Beatles, Let It Be (1970). This record, which was released following the unofficial break-up of the band in 1969, was recorded in January of that year. Originally, with Martin at the helm, it was an attempt to do away with the recording-studio “trickery” that had characterized Beatles’ albums of previous years, to “get back” (the original title of the record) to where the group would record their songs live to tape. This decision proved disastrous, as John Lennon and Paul McCartney were no longer very chummy (both had suddenly shown up with girlfriends at recording sessions, breaking an informal code). The results were, for the most part, desultory, and eventually the raw tapes were given to Spector to create a saleable product, which ironically called for the application of the studio “tricks” and “wall of sound” that the group had forsaken in recording the songs in the first place.[ii]


Nevertheless, the Beatles’ successful evolution from the pub band of Hamburg and Liverpool in the early ‘60s to key musical innovators during the period 1964-69 was due to their use of the recording medium as an artform, music created and manipulated before and beyond actual live performance. Unlike the acts produced in the U.S. by Spector, the Beatles were a genuine, though perhaps unremarkable, musical act, with years of gigging behind them, when they were signed by the EMI conglomerate in 1962. By 1966, however, their music had become so dependent upon, or founded in, the recording medium, that the group gave up live touring altogether.


Unlike Spector with his artists, the Beatles’ producer George Martin had no domineering role over the personalities of the group’s members. He was, though some years older than the band, just in early middle-age, and not in fact a fan of rock’n’roll. Martin was not much of a producer of recorded music at all before the Beatles, previously concentrating on spoken-word and comedy records, such as those for the Goons (which included the actor Peter Sellers). His own musical talent was minimal, but he certainly grasped the technical side of the recording medium as it existed at the time. Martin’s open and gentlemanly demeanour evidently won the trust of Lennon, McCartney and the others, and so he and the four were able, over many singles and full-length albums, to synthesize performance and recording in a very successful and exciting manner. For this reason, Martin must be considered the true “fifth Beatle.”


One example from the Beatles’ repertoire demonstrates how essential the recording medium was to their later music. The song Strawberry Fields Forever was released as a single (a “double A-side” with Penny Lane) in early 1967, just after the group’s last tour. The finished record consists of two separate sessions of the same song, the one a guitar-bass-drum-vocal track, the other a simple vocal in front of a string ensemble. In production, Lennon, the lead vocalist, reportedly liked both versions, and wanted to combine them. Martin objected that they were in different keys, and thus could not be merged successfully without the one or the other being off-key. To overcome this problem, Martin actually slowed down the tape-speed of the traditional band track, so that it could go together with the string arrangement. The result, which makes Lennon’s lead vocals seem dream-like, even narcotized (appropriately, given the year of the song’s release), is very memorable and effective. Yet there was no way that it could be performed on stage (at least in that era) in the manner that it was heard on record, as the music is not actually heard in “real time.” (Strawberry Fields, perhaps for the first time ever, even has a “false” fade-out, in which the song “ends” with a seemingly conventional fade, but then returns for a brief instrumental / sound-collage reprise, only to fade out again.)
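The varispeed trick works because tape speed couples pitch and tempo: play a tape back at r times normal speed, and every frequency, along with the tempo, is multiplied by r. A small sketch of that relationship (the ratios below are illustrative, not a reconstruction of the actual session settings):

```python
# Why varispeed works: playback at r times normal speed multiplies every
# frequency, and the tempo, by r. Pitch shift in semitones is 12*log2(r).
from math import log2

def semitone_shift(speed_ratio):
    """Pitch shift, in semitones, produced by a change in playback speed."""
    return 12 * log2(speed_ratio)

print(semitone_shift(0.94))   # slowing to 94% speed lowers pitch ~1 semitone
print(semitone_shift(1.06))   # speeding up ~6% raises pitch ~1 semitone
```

A speed change of only a few per cent, in other words, is enough to pull two takes a semitone apart into the same key, at the cost of the slowed take sounding slower and lower, which is one plausible account of the “narcotized” quality of the finished vocal.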


The most popular recorded music, from the 1960s onward, has employed the medium itself in a highly “technical” way, to produce sound that could not be reproduced in live performance.


John Densmore, drummer for the Doors, described in a memoir the group’s early experience with the studio. He writes that while their first album took only six days to record, the “first few days were frustrating because recording wasn't the same as playing live. [Producer Paul] Rothchild held our hands as we learned the process. I didn't know you couldn't have the same `sound’ as onstage. `Too live and echoey,’ Rothchild said. Paul wanted to damp my drum skins, and it hindered my technique but after a while I fell in love with the big snare drum sound it made. A fatter, dead drum sound recorded better than a live hollow one.”[iii] “Too live” is the key phrase. Recording is what transcends live performance, making music uniquely artificial. Densmore goes on to show how the hit song, Break On Through, was recorded:

`You should try another vocal, Jim,’ Paul prodded. `We'll put the new one on another track and you can choose between the two.‘

Jim [Morrison] nodded and headed back out to the vocal booth.

`Just point your thumb up or down if you want more track in your 'phones.’

After stumbling on a second take, Jim did a third, erasing the second because we were out of free tracks. (We were recording on four-track equipment, nothing like today's twenty-four track recording.)

`I like the first half of the original vocal and the second half of my second performance.’

`No problem. Bruce [Botnick, the engineer] and I will glue them together in the mix.’

I found the recording process fascinating — getting a basic rhythm track (drums, bass, and other rhythm instruments), then overdubbing voices and instruments as needed. The danger of so much control was the possibility of losing the feeling, the soul of a song; the advantage was that each of us had the chance to be satisfied with his performance.[iv]


Ray Manzarek, the group’s organist, speaks about how far the Doors had come when it came time to record their second album, released in 1967: "Strange Days is when we began to experiment with the studio itself, as an instrument to be played. It was now eight-track, and we thought, `My goodness, how amazing! We can do overdubs, we can do this, we can do that — we’ve got eight tracks to play with!’ It seems like nothing today, in these times of thirty-two- and even forty-eight-track recording, but those eight tracks to us were really liberating. So, at that point, we really began to play ... it became five people: keyboard, guitar, drums, vocalist, and the studio."


The advent of multi-track studio recording introduced the experience of “virtual reality” at a time when computers were still room-sized data-crunchers. The recording process, from the Beatles on, has generally run as follows: a song, whether wholly or partially composed, is rehearsed over and over until it is judged satisfactory by the performer and / or producer. Then, the musicians (whether a band or a group assembled especially for the purposes of recording) will “run through” the tentative composition while playing as an ensemble, but with each part recorded separately from the others (in booths or behind sound-barriers). Then, using the best recording of the bass-and-drum part, the “treble” parts (vocals, guitar, keyboards, strings, etc.) will be dubbed and re-dubbed (each separately) until they, too, are judged polished and professional enough for publication. The drum or bass parts might then be recorded over again, using the playback of the treble parts already recorded.
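In digital terms, the process described above reduces to capturing each part in isolation and summing the tracks afterward. A minimal sketch, with sine waves standing in for recorded performances:

```python
# A minimal sketch of multi-track overdubbing. Simple sine-wave "parts"
# stand in for recorded performances; each is captured on its own track,
# and the final "record" is a weighted sum (the mix).
import numpy as np

RATE = 44100                      # samples per second
t = np.arange(RATE * 4) / RATE    # four seconds of time

def take(freq_hz, gain):
    """One isolated performance, recorded on its own track."""
    return gain * np.sin(2 * np.pi * freq_hz * t)

bed = take(55, 0.8)               # bass/drum "rhythm bed", recorded first
guitar = take(220, 0.4)           # overdubbed against playback of the bed
vocal = take(440, 0.5)            # dubbed and re-dubbed until judged polished

mix = bed + guitar + vocal        # the mix-down: the parts meet only here
mix /= np.max(np.abs(mix))        # normalize to avoid clipping
```

The point of the sketch is that the performers never play together; their parts interact only at the final summation, which is the “virtual reality” the paragraph describes.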


Virtually all recordings consist of grafts of various other recordings, pastiches of space and time in which performers interact only with technology. This division and subdivision of aural reality has led, in some cases, to breakdown and even insanity. Brian Wilson, the composing genius behind most of the Beach Boys’ surfer hits, spent more than nine months and $100,000 recording the group’s classic, Good Vibrations. Some time later, in 1967, Wilson suffered a breakdown while trying to put together the legendary Smile LP, which never saw release as a Beach Boys record. The minute division and subdivision of sound by modern recording-technology is itself absurd and dissonant, and can provoke nervous collapse in those already disposed (as Brian Wilson was) to mental illness.


Meanwhile, the Abbey Road record, released in the autumn of 1969, was a fitting coda to the career of the Beatles. The album itself seems a sort of last hurrah — the final credited song is even called The End (there is the short verse, Her Majesty, which comes after a stretch of silence following the “official” end of the record). The last original Beatles release, in 1970, was Let It Be, which was recorded before Abbey Road. The band, on Abbey Road, could keep it together enough to complete one side of music, albeit in the non-collaborative manner heard also on the self-titled album and Get Back/Let It Be. Side two, however, contains but three whole songs, followed by seven or eight half-songs that are “completed”, made to seem part of a suite of music, through the “trickery” of recording media, the specialty of the fifth Beatle, George Martin.


Abbey Road is the crowning glory of the band’s career, because Martin was made a full partner with the musicians in the enterprise known as “The Beatles”. In fact Lennon, McCartney, Martin, Harrison and Starkey carried off this artifice rather handily, creating a very listenable, exciting album (the tricked-up “suite”, starting with You Never Give Me Your Money and ending with The End, is amongst the most attractive music the band ever recorded). There was no better title for it than Abbey Road, the (then) informal name of the EMI recording studio located on that street in the borough of Westminster. For it was music that, in the form it was presented, could exist nowhere but in the recording studio — the exact studio in which the Beatles recorded nearly all of their releases. Even the sleeve image seems fitting: no title, just a colour photo of the band walking single file on a zebra crossing on Abbey Road. None of them, of course, are face to face. The assembly-line placement of the band in the photo, and the vanishing-point view of the street behind the group, seem figurative of the Beatles’ final incorporation of music with technology.


Other acts followed in the Beatles’ wake. Notably, Led Zeppelin had Robert Plant as its lead singer, but the real frontman was the guitarist and producer, Jimmy Page. Page never sang lead on a Zeppelin song, but his mastery of the recorded medium was evident all over the group’s repertoire. He was, essentially, the musical director of the band, and the “medieval” and mystical themes that permeated the music and visual motifs of Zeppelin (such as the design of the band’s logo) were largely of Page’s influence. The band’s fourth album, from 1971, is untitled (or its “title” consists of several occult symbols, with Page’s consisting of the famed “zoso” graphic), and contains the signature track, Stairway to Heaven. Timed at more than seven minutes, this song was for years played at the conclusion of high-school dances, when teen couples got close, and the unlucky slunk away, in the dark (as satirized in the Barenaked Ladies’ track, Grade 9). Stairway is a stunning testament to the power of records to create a truly unique form of music: “ensemble” playing that in fact is a skilful synthesis of individual performances, each rehearsed and recorded to a level of perfection and complexity not reproducible on stage. It merges what are, in effect, three different songs, and attracts because it touches on divergent aspects of the rock listening experience: soft, mid-tempo and hard. Indeed, most of Led Zeppelin’s music is a skilful, in-studio fusion of acoustic and electric, “the Incredible String Band meets Iron Butterfly”, as Page himself put it.


The seamy side of Zeppelin lies in the fact that, while Page was a recording and guitar “wizard” (for years, he’d been employed as a studio musician, playing on the Who’s first single, I Can’t Explain, from 1965, then joining the seminal rock act the Yardbirds — Zeppelin came out of the corpse of that group, at first known as the “New Yardbirds”)[v], and Plant was a fine singer, neither were songwriters of the calibre of Lennon and McCartney, Mick Jagger and Keith Richards, or Pete Townshend of the Who. Accordingly, much of their early music was stolen outright from American blues musicians, and from British and U.S. folk artists. The song Bring It On Home, which the rock world long knew as penned by Page and Plant, and which was released on the second Led Zeppelin LP, from 1969, is a direct rip-off of a song of the same name written by Willie Dixon and recorded by Sonny Boy Williamson, who died in 1965. Numerous other Zeppelin songs, especially from the first albums, are wholly or in part stolen from earlier sources, without acknowledgement or royalties paid.


And, as Zeppelin’s best music was in fact a product of recording media, the band was not especially great as a live act (although their concerts always sold out), especially when called upon to play their own repertoire, over-complex as it was for a four-piece band without over-dubbing. This is witnessed on the only “live” album the band put out while still active (the group broke up in 1980 with the death of the drummer, John Bonham, from alcohol poisoning), called The Song Remains the Same, taken from a 1973 concert at Madison Square Garden in New York City (the performance was also released as a film). The double-album contains most of the Zep favourites, including Stairway to Heaven, all of them inferior to the same songs as “performed” in studio. Accordingly, the band had to employ various gimmicks (such as Page’s use of a violin bow on his guitar in Dazed and Confused, or Plant’s ham act on Stairway — “Does anyone remember laughter?”) to keep the audience’s attention and its “faith.” Without the recording medium, and its mastery by Jimmy Page, Led Zeppelin would have remained just a very loud pub band.


Generally, a rock band’s prowess in live performance derogates from the overall strength and quality of their recorded output. The Beatles (or Lennon and McCartney) understood that they had to cease touring if they were to make great records. Led Zeppelin toured, very successfully, while publishing million-selling discs, because their audience were not discriminating or knowledgeable enough (being adolescent, for the most part) to understand that the group was, in reality, a counterfeit.


Superior to Led Zeppelin, in terms of stage prowess, were the Rolling Stones and the Who, both of which were London bands. The Who’s Pete Townshend was a very good, and often inspired, songwriter, both with the band and later as a solo artist. But he and his mates (Roger Daltrey on vocals, and the now-deceased rhythm section, John Entwistle on bass and Keith Moon on drums) were also a very solid live band, even in spite of Moon’s “sloppy” technique, and were quite capable of bringing alive a football stadium of 70,000 or 80,000 people. Consequently, the Who released only two truly excellent albums, Who’s Next, from 1971, and Quadrophenia, two years later. The latter was a “rock opera”, depicting in songs the “four-part” personality (hence “quadrophenia” instead of “schizophrenia”) of a young London “mod” from the early ‘60s, over four sides of a long-play album. Its many songs stand up today because Townshend and his production staff had a couple of years (since the end of the last tour) to create music that was not dependent on live performance (such as The Punk Meets the Godfather, which concludes side one).


Consequently, not many of these tracks were included in the Who’s live sets (the band performed the whole opera only a couple of times). The songs on Who’s Next, which was also recorded at leisure, after the end of the previous tour, were but fragments of another of Townshend’s “rock operas”, this one called Lifehouse, which was reportedly about the rediscovery of music in a post-apocalyptic future (as indicated in the first song, Baba O’Riley, often mistakenly referred to as Teenage Wasteland). Baba O’Riley concluded with a violin solo by a session musician, itself not readily reproducible on stage (Daltrey did the part live on harmonica). The Who remained popular so long as they remained vital in live performance. When Moon died (of prescription-drug overdose) in 1978, he was replaced by the technically superior Kenney Jones, but the original esprit de band was somehow lost with the passing of the manic-depressive drummer. The group, less Moon, Jones or Entwistle (who died of a cocaine-induced heart attack in 2002), continues to tour, cheaply trading on past glory for present enrichment as a retro act.


The Who’s early singles remain “classic”, while their first albums are deservedly obscure. Records like A Quick One, from 1966, and The Who Sell Out, from 1967, are full of ditty-like tunes or oddities composed by Entwistle or other group members. The Who’s first solid LP was also Townshend’s initial “rock opera”, Tommy (1969), about the “deaf, dumb and blind” pinball wizard who goes on, for some reason, to lead a religious cult. A double LP, Tommy is a product of recording media. But Townshend and the producers had not yet mastered the technical side of the studio, and the “opera” is consequently an aural disappointment (Moon complained that the drum parts on most of the songs sounded like he was hitting biscuit tins).


Nevertheless, the album, and its lead single, Pinball Wizard (which was, in fact, an effective use of the recording medium), were smash hits in the U.S. and Britain. By the time Who’s Next was released in ‘71, Townshend and his staff were masters of the studio and of recording. Who’s Next (voted by Time magazine in 1979 one of the ten best rock albums of the decade) effectively employed the then-novel electronic synthesizer, while keyboards, brass and lush string arrangements backed many of the songs on Quadrophenia. But again, as the Who became as famous for its live performances as for its records, the latter inevitably declined in quality — even before Moon’s death.


Townshend, the songwriter, became increasingly disenchanted with being a “travelling juke box” (contributing to his drug and alcohol problems), and attempted to go solo before staging a “farewell tour” with the band in 1982. His excellent Empty Glass (1980) seemed to establish him independently of the Who. But Townshend’s solo career petered out during the ‘80s, forcing him, Daltrey and Entwistle, by 1989, to embark on an ignominious “comeback” tour, followed by yet more nostalgia tours during the ‘90s. The utter collapse of the recording career of the Who, as well as of the esteem accorded them generally (they were once treasured by critics as “thinking man’s rock”), is one of the most remarkable, and yet unremarked-upon, stories in rock history.


The Rolling Stones remain, more than forty years after their formation, popular, not because of the strength of the band’s recent recorded output, but rather because they can still expertly perform music in front of tens of thousands of people. Lead singer and songwriter Mick Jagger, in spite of being more than 60 years old, is a master frontman and a celebrity in his own right. The band, in its recording heyday in the ‘60s and early ‘70s, released many great singles (I Can’t Get No Satisfaction, Get Off of My Cloud, Paint It Black, You Can’t Always Get What You Want, etc.) and a trio of excellent albums, Beggars Banquet (1968), Let It Bleed (1969) and Sticky Fingers (1971). However, the Stones have not had a smash hit single (in North America or Britain) since Start Me Up, from 1981. The group came out of the London pub blues scene that arose in the early ‘60s (as did the Who, the Yardbirds — which included Eric Clapton and Jeff Beck — Fleetwood Mac, Procol Harum, the Spencer Davis Group, John Mayall, Long John Baldry, and a host of lesser acts).[vi]


Jagger and Richards were heavily influenced by, and even borrowed from, the bluesmen of Chicago and the Mississippi Delta, but never so blatantly as did Page and Plant some years later. They crafted these influences together with requisite originality, but Jagger and Richards are not quite of the songwriting genius of Lennon and McCartney, nor even of Townshend. The Stones’ claim to rock “greatness” lies in the fact that they’ve always had it together enough musically to play their songs, and “covers” of blues classics, with undeniable verve and potency, whether live or on record. Consequently, the early Stones are as well known for their singles as for their albums — as is the case also with the early Who. Records such as the Stones’ Aftermath (1966) or Between the Buttons (1967) are far from awful, or even mediocre, but they don’t compare with the Beatles albums of the same period, Rubber Soul and Revolver. Most of the early Rolling Stones records contain cover songs, and “filler” — well-performed, mind, but nevertheless not original. Their first full foray into studio-based music didn’t occur until 1967, with Their Satanic Majesties Request, a less-than-successful “psychedelic” album in the mould of the Beatles’ Sgt. Pepper’s Lonely Hearts Club Band. Tellingly, the Stones’ tour-de-force records from 1968-72 eschewed the blatant studio trickery of Satanic Majesties for the more stripped-down ensemble playing of blues and soul, supplemented by capable studio musicians on keyboards and brass.


But from 1972 on (when the group embarked on its first world tour), the Rolling Stones maintained their renown mostly through grand concert tours every four or five years, and thus the importance of their recorded output declined with each successive release. Jagger and Richards (the other original members have mostly retired) have defied not only maturity in years, but an adulthood of debauchery, fame and stress, to front a credible live act, even in the present day. But the utter professionalism, the theatricality, of the Stones as a live act, is translated into creative and career exhaustion when heard on record.


Certain groups, like the Who and the Stones, were able to balance, at least for a time, the contrary demands of music as it is performed on stage, and as it is assembled in the studio to be heard on record. Success in the one form derogates from achievement in the other, on all but a temporary basis. The saying, “They (or he or she) are a great live act”, implies that the act’s recordings are not especially good, or are otherwise “hard to get into.” But acts that are dependent upon the recording medium for their music are often said to “disappoint” when performing before a live audience. It isn’t as if the acts oriented toward live performance are more “authentic” or “genuine” than recording-focussed performers. Live performance has long been as dependent as recorded music upon electronic media. The difference between the two forms lies in that the one is public, literally “on-stage”, and aims for a merger of feeling and consciousness between audience and performer, while the other is (mostly) domesticated, literally “off-stage”, and tries to affect the mind and body at a more idiosyncratic, individual level. The 45 rpm exists as both a private and public form, played over the radio and at dances, but also at home. A single is not a success unless it is heard both over the radio and on the hi-fi, although its “chart” placement will not depend on how well it is performed live.


Whole albums, however, are almost never played except in the domestic setting (or in a domesticated place such as a quiet bar/restaurant or coffee house). When in concert, “album” rock groups don’t generally play the entirety of the new release they are touring in support of, but mix its songs together with old favourites. The “classic” albums of the rock era, from 1967 to 1982, were, like all advanced artforms, meant to be absorbed intimately, as paintings or prints are placed on walls, and novels read when “curled up” in bed or on the couch. The looseness and flubs typical of the live performance of even the tightest acts are accepted or unattended to by a paying audience, when the performer can inspire communal feeling among all assembled. The greatest records must exclude such errors, as they are more easily detected through multiple auditions in the domestic surround.


Parts 2 and 3 are in the same blog entry.


[i]. “Phil Spector.” The Harmony Illustrated Encyclopedia of Rock, 3rd. edition. R. Bonds, ed. (New York: Harmony Books, 1982), pp. 215-216. Spector was convicted in April 2009 of the second-degree murder of a former schlock-film actress, and received a nineteen-year sentence. As Spector is aged 69, it may well prove a life sentence for the producer.
[ii]. The song Respect, a hit single also from 1967, was written by the soul singer Otis Redding (1941-67), but made into a smash by Aretha Franklin. Franklin’s work from that period was produced at Atlantic Records by Jerry Wexler, who is less well-known than either Spector or Martin, but who similarly used the recording medium as an artform. Besides Franklin, Wexler produced singles by Redding and numerous others. He explained that, like Strawberry Fields, the Franklin version of Respect is actually two different versions of the song, put together by studio “trickery.” The two versions were not, as with Strawberry Fields, overlaid on top of each other by slowing down the faster take. Rather, the “other” version is heard during the brass-and-sax bridge, in which the song abruptly changes key but, Wexler claimed, “this was only a problem during live performances.” However, listening closely, there is a perceptible discontinuity between the vocal and the bridge. M. Azerrad, A. DeCurtis, D. Fricke, et al., “The Top 100 Singles of the Past 25 Years.” Rolling Stone (issue no. 534, Sept. 8, 1988): 61-149. Franklin’s version of Respect is ranked at no. 6 (67).
[iii]. John Densmore, Riders on the Storm: My Life with Jim Morrison and the Doors (New York: Dell Publishing/Delta paperbacks, 1991), p. 86.
[iv]. John Densmore, Riders on the Storm: My Life with Jim Morrison and the Doors (New York: Dell Publishing/Delta paperbacks, 1991), p. 87. The Manzarek quote following is found on page 128 of this title.
[v]. “Led Zeppelin.” The Harmony Illustrated Encyclopedia of Rock, 3rd. edition. R. Bonds, ed. (New York: Harmony Books, 1982), pp. 137-138.
[vi]. “British R&B.” The Harmony Illustrated Encyclopedia of Rock, 3rd. edition. R. Bonds, ed. (New York: Harmony Books, 1982), p. 42.

Friday, June 12, 2009

Engineering and Freedom, Part 10

click here to read part 1 
click here to read part 2 
click here to read part 3 
click here to read part 4 
click here to read part 5 
click here to read part 6  
click here to read part 7 
click here to read part 8 
click here to read part 9

The marketing/advertising economy is generally conceived in terms far removed from the welfare/social economy.  

In fact, both offer goods and services at very low cost to the ultimate consumer, subsidies that must be borne elsewhere in the economy. Governments pay for their social programmes through borrowing or tax revenue. Industry pays for its entertainment programmes by incorporating the cost into the price of the products it creates. In both cases, the expense of the goods on offer is socialized. 

For both the social economy of government expenditure, and the promotional economy of advertising subsidy, then, the normal rules of the cash or price economy do not apply. 

The irrelevance of price, the primary source of data for productive decisions, is the reason both government and marketing agencies must collect so much “intelligence” on the everyday habits of their clientele. 

Post-Keynesian economics has outlined in great detail how the expropriation of wealth from private hands to finance the social economy has proven deleterious to the cash economy as a whole, in large measure because the irrelevance of price in the social economy distorts demand and supply decision-making.

Yet little systematic analysis has been undertaken as to how what one recent book called the “entertainment economy” may, too, distort the overall cash economy, even though its mechanism of subsidy is the same as that of the social economy.

The main argument against the social appropriation of capital investment, advanced by neo-classical economists, is that non-private interests are much less efficient in marshalling resources than are private concerns. But couldn’t this be said equally of the appropriation of actual investment in capital, for the purposes of advertising promotion through mass media? 

Marketing funds must be borrowed from the total available to produce a good in the first place. It is an expense without any necessary return. If advertising improves sales, well and good; but if an ad campaign does not, its expense cannot be recouped by selling it to someone else (as is the case with, say, unneeded capital goods).

Advertising, by its very nature, represents a potential wasteful expense of resources by private capital, in the same way as does wasteful government social spending. The money spent on marketing (as is the case with government programme-spending) does indeed create jobs, often well-paying ones in both cases. 

The question is whether the jobs created are worth their cost to the economy overall. For the bureaucracy that must be established in order to administer state spending, the answer returned by the neo-classicists has been a resounding “No.” But for the industry-dependent marketing sector, few have even thought to inquire whether it is worth its cost to the economy at all.

Textbooks talk about “economies of scale,” the savings achieved with mass production. But contemporary manufacturing concerns have grown into continent- and globe-sized monsters less to achieve economies of scale than to afford the vast cost of mass-media advertising.

Oligopoly or monopoly might be a matter of course in certain industries, notably resource-extraction, where there are inherently high fixed costs of exploration, etc. The only inherently high-cost factor of production that consumer-goods manufacturers must deal with is advertising and marketing. 

Where the consumer-goods sector ought by now to be highly competitive, sensitive to price fluctuations, instead there are oligopolistic concerns the size of Coca-Cola and countless other firms of the same proportion. The fact is, the largest consumer-goods firms long ago abandoned reliance upon crude supply and demand measures to make production decisions.

The development of large industries devoted to consumer goods, while relatively free from government intervention (especially in the U.S.), did not otherwise rely solely on the “invisible hand” of the unregulated marketplace. Instead, consumer-based industries sought to inspire market demand by doses of propaganda through all available mass media. 

The author Susan Strasser has traced the development of the marketing economy during Victorian times, detailing in particular the selling of the Crisco brand by Procter and Gamble. She writes, “The corporations that made and distributed mass-produced goods did not necessarily set out to create needs, nor did they do so in any straightforward way. Procter and Gamble made Crisco in order to sell it. The company employed home economists to develop recipes, but did not in fact care what consumers did with the product as long as they bought it. Its goal, in Thorstein Veblen’s words, was the `quantity-production of customers’, the making of consumer markets. Sometimes manufacturers produced needs among children for products that parents bought. Those with goods in established product categories put most of their marketing effort into producing a demand for a particular brand, not a need for the product itself.” (Strasser, Satisfaction Guaranteed: The Making of the American Mass Market, 1989, p. 17) 

Strasser observes the economic mechanism of the marketing economy: “The manufacturers who adopted the conveyer belts and gravity slides of flow production... needed to dispose of their huge outputs. Because mechanization demands large amounts of capital, they sought predictability and control; they could not afford large overstocks and they wanted to free themselves from dependence on wholesalers. They took their cue from a few industries, such as book publishing and patent medicines, where manufacturers courted customers directly, placing advertisements in magazines, selling by mail, or offering commissions to salesmen who went from house to house and put on public displays, the fabled medicine shows. Copyright and patent holders held monopolies on their products, and the largest and most successful flow producers [sought similar positions: if dealers] could purchase Uneeda biscuits or Ivory soap only from the National Biscuit Company or Procter and Gamble, they would have to pay the manufacturers’ prices.” (Strasser, Satisfaction Guaranteed, 1989, p. 19)

Thus it is that the mass media grew up in tandem with the inception and development of large industrial trusts devoted almost entirely to the consumer marketplace. Printed books and other materials were the first goods produced for a mass marketplace, and periodical literature in particular was crucial to the creation of an abstract marketplace for the sale of mass-produced goods. 

In the nineteenth century, when printing was industrialized under factory conditions, the new mass-circulation dailies and periodical weeklies or monthlies were economically sustainable through revenues provided by consumer advertising. So it was with the later development (in America) of broadcast radio and television. 

The commercial messages delivered through these media, while not literally brainwashing, employ many of the techniques of psychological manipulation. Advertising conditions consumers to accept the machine-technological way of life, the necessary apparatus for the creation of mass consumer goods. 

The goods and services offered by industry, catering to the whimsical and “trivial” concerns and anxieties of everyday life, are the salve, the tonic, for the personal and social estrangement which occurs in a society governed by technological imperatives. Mass-media entertainment, paid for by the advertising of goods and services, is a crucial part of this nexus.

The main difference between advertising expenditure and programme expenditure by governments is that while the former is financed on a private, volitional basis, the latter is not. 

However, the state is deeply implicated in the business of advertising. First, the main vehicle for it, the broadcast media, are legally public utilities in most jurisdictions. Private concerns that lease these utilities, and the industries that finance these concerns through advertising dollars, do so at the pleasure of the state. Second, business tax law in most places allows corporations to write off the expense of advertising, so that, while only large concerns can afford to spend the big bucks necessary for a really effective ad campaign, they can also profit by deducting the cost from their taxable income.
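A toy illustration of that write-off, using assumed figures (the ad budget and tax rate below are hypothetical, and rates vary by jurisdiction):

```python
# A toy illustration of the ad-expense write-off described above. The ad
# budget and corporate tax rate are assumed figures, not drawn from any
# actual jurisdiction or firm.
ad_spend = 10_000_000    # hypothetical annual advertising budget
tax_rate = 0.30          # assumed corporate income-tax rate

tax_saved = ad_spend * tax_rate        # deduction lowers the firm's tax bill
net_cost_to_firm = ad_spend - tax_saved

print(f"tax revenue foregone: ${tax_saved:,.0f}")              # $3,000,000
print(f"net cost borne by the firm: ${net_cost_to_firm:,.0f}")  # $7,000,000
# On the argument here, the first figure is made up by taxpayers, while
# the second is folded into the price of the goods themselves.
```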

Thus, consumers pay for the expense of the marketing industry twice: in the greater expense of the goods they buy, and in the larger chunk of their personal incomes taken by governments to make up the shortfall in revenues caused by ad-expense tax write-offs. More generally, though, the culture of the “entertainment sector” (the fortunes of which are completely dependent on advertising revenues) resembles that of the social economy rather than that of the regular price economy.

This is no less true in spite of the fact that firms in the marketing industry compete vigorously for their clients’ business. In a state-dominated economy, private firms also compete assiduously with one another for the right to government business. This does not mean, however, that private companies whose revenues come from state coffers are automatically more efficient, just because they are private firms.

It is competition, not the mere fact of private ownership, which promotes efficiency and improved goods. Where one’s largest or only customer is the government, there is far less pressure to offer goods and services of premium quality, precisely because the threat of competition from other firms is absent.

Similarly, marketing firms win business for reasons having to do far less often with rational supply and demand decision-making, than with their ability to persuade clients of the probability of success of the “campaign” (the analogy of that word with army generals’ pursuit of the enemy in battle is entirely appropriate). 

However, since the exact relationship between any marketing campaign and any increase in sales cannot be established (it is said that half of advertising campaigns fail to have any influence at all, positive or negative, on sales), the success of any marketing firm relative to others depends not on its ability to produce anything, or to carry out some service with demonstrable success, but on political connections and active lobbying.

Advertising as often as not has no influence whatsoever on sales. But the fear that a competitive rival will usurp their market position is enough to encourage modern captains of industry to continue to perennially invest in the marketing of their goods, rather than in the goods themselves. 

The same logic has kept junta regimes throughout history expropriating the “surplus” of actual wealth-creators in order to finance their arms races. With regard to the entertainment or marketing sector of the economy, however, it is not only that business people feel constrained by competitive forces to spend such a great amount of their capital on advertising. 

The marketing/consulting industry is responsible, as we noted above, not only for representing clients from other parts of industry, but also for collecting vital information on the public. This information is the bread and butter of the marketing economy. It is also proprietary: those who have the responsibility of handling it usually have to sign some sort of legal agreement not to reveal its contents to anyone. If they do, they could face expensive lawsuits.

The term “information economy” is usually associated with computers and related technology, but in fact the information sector of the economy first took off after the war, when the average computer was still the size of a room. “Information” is exactly what the marketing sector of the general economy trades in. 

The social and cultural position of those who staff it is roughly analogous to that held by scribes in the Latin church during the Middle Ages, which is to say, the sector has a monopoly hold on the vital data needed to operate the primary media of communication. Since people who work in this sector of the economy are recompensed handsomely by their clientele, they have plenty of cash to spread around.

Those not directly involved in the “information” economy thus again lose out, as producers and middlemen pay less attention to the manufacture of more utilitarian things affordable to the common people in favour of baubles favoured by information professionals and those directly employed by them. 

The socioeconomic position of those within the information economy of advertising/marketing is also analogous to that of functionaries that staff the institutions of the social economy. Both groups are economic parasites, imposing their own distinct sorts of tithes on the productive activity of others. Any sophisticated economy requires some sort of non-productive parasitism, of course. 

But is the degree of parasitism evident in the contemporary information economy serving any socially useful good at all, beyond enriching a relative few at the expense of many others? Advertising/marketing, at least through mass media, is financed by private business, but its principles are contrary to those of rational self-interest. Moreover, “the media,” or electronic means of communication, would never have achieved the primacy that they have without the vast sums spent on subsidizing them through advertising promotion.

Thus, any consideration of “the effects of television” (and, more broadly, of all mass media) is really inseparable from consideration of “the influence of advertising.” These cultural forms are, however, generally analyzed not only in isolation, but primarily in terms of their content.

Thus, there has been the constant worry, since the introduction of television, about how violence depicted on TV inspires actual violence in real life, or about how advertising encourages people to buy things “they don’t need or don’t want.” 

But if we assume, correctly, that human beings, as physical and social animals, have indigenous needs and wants, a fuller understanding of how “the media” condition their audiences will be gained. 

 Thus, the information/marketing economy is not, as depicted by some observers, an inevitable opponent of the state economy. In fact, both sectors have the same object of subsidizing certain forms of consumer activity at low or no charge, in order to maintain a dominant social, economic and political position. Advertising, as with “corporate sponsorship,” is indeed the alternative means of subsidy when government subsidies are insufficient or unavailable. This is the case with art and sport events, as noted. 

Television and radio networks that don’t survive on advertising subsidy do so on government subsidy instead. The Internet functioned for many years on the subsidy of the U.S. Department of Defence, long before anyone but defence analysts or scientists had ever heard of it.

The Internet has been a boon to marketing intelligence, however, because with the active participation of consumers in a mass computer network, companies can track their actual Web-surfing behaviour, right down to the name and number of Web sites they visit, even the contents of their host personal computer. 

Goods or services subsidized by taxes or by advertising revenue (which is a tithe on the cost of the product itself) are “in common” in that they could rarely be sustained by supply-and-demand means.

The significant part of the workforce that now earns its living from the marketing economy, while officially employed by the private sector, is no necessary enemy of the public sector. All government departments, as well as their political masters, are now big clients of the marketing/information economy.

Moreover, big business, by placing its stock not in a product but in an advertised image, has attempted to bury under a mound of mass-media propaganda its actual motivation for selling a product in the first place; that is, to make money. There is a fundamental symbiosis between the marketing and welfare sectors, as complementary methods of managing markets and people.

The theoretical foundations of the modern welfare economy, laid down by John Maynard Keynes and others, specifically identify the state as an agent to encourage broad consumption, to avoid the catastrophic loss in spending confidence that occurred during the Great Depression. For their part, firms dependent on marketing aim to increase consumption as well, the more so the better.

While big business has long called, in rhetoric, for smaller government, in actuality it long ago reconciled itself to the social economy and the consumers it made out of the bottom fifth of the population.

The model information/welfare economy is not the United States, Canada, Britain, France or Germany, but tiny Sweden. There, the government for decades levied heavy personal taxes and surtaxes to support a very generous welfare regime. The state does not, however, own very much of the general economy. Beyond strict health and social regulations, Swedish firms are able to do business as they wish.

The government’s role in the marketplace is mainly to provide big tax breaks to firms that invest in research and development. The tax savings accrued provide the capital for R&D, but also reinforce industrial concentration. The Swedes have socialized not production, but consumption. The result, given the aims, has been very successful. Swedes live in social security, and Swedish firms have burgeoned into global consumer giants. The Scandinavian experience shows that consumerism and welfarism, far from being adversarial, are mutually dependent pillars of the modern engineered society.