April 24, 2015
On April 12, 2015 the wildly popular Game of Thrones returned to HBO for a fifth season. No doubt, this season, like all the others, will break ratings records and encourage endless speculation and debate by fans. The television series, based on a projected seven-novel series A Song of Ice and Fire by George R. R. Martin, has a devoted following among viewers who are willing to wade through intricate plots, an enormous cast of characters and a world as rich as our own. The series is set in a fantasy world resembling feudal Europe and on the surface feels like many other “sword and sandal” epics, such as Lord of the Rings. However, the series is more than beachside reading — drawing extensively on history, mythology and literature.
Part of the appeal of Game of Thrones (especially for Marxists) is that, unlike Lord of the Rings, there are few clear-cut heroes or villains; instead, everyone is a shade of gray. The series presents a harsh view of the feudal world: its sharp class divisions, bourgeois revolutions from above, the subordinate status of women, and brutal realpolitik.
A historical materialist analysis of Game of Thrones has been the subject of two essays, “Can Marxist theory predict the end of Game of Thrones?” by Paul Mason and “Game of Thrones and the End of Marxist Theory” by Sam Kriss (both focusing heavily on the collapse of feudalism, with arguments we will discuss in detail below). Kriss’ essay also argues that part of the appeal of Game of Thrones is that the series undermines any idealization of feudalism, where “its kings aren’t just cruel and stupid but powerless, trying to bat away rapacious financiers and ghoulish monsters with both flapping, ineffectual hands…[and that this] was the last time that all the mystical creatures that hid in the dark places of society were known, named, and understood.” By contrast, capitalism presents itself as rational, while it shrouds real social relations beneath commodity fetishism and the mysteries of the market. Applying Marxist analysis to fantasies such as Game of Thrones, as Kriss rightly points out, “helps explain our own demon-haunted world.”
The main settings for Game of Thrones are the fictional continents of Westeros and Essos. Westeros is made up of seven kingdoms — the Kingdom of the North, the Kingdom of Mountain and Vale, the Kingdom of the Isles and Rivers, the Kingdom of the Rock, the Kingdom of the Reach, the Kingdom of the Stormlands and Dorne. The Seven Kingdoms have existed for thousands of years largely as a feudal society, undergoing periodic dynastic shifts, civil wars and invasions (the dominant religion, known as the “Faith of the Seven,” forbids slavery).
One of the major plots of the series is a civil war among the noble kingdoms for control of the Iron Throne following the death of King Robert Baratheon. The “War of the Five Kings,” which begins at the end of the first season, initially involves five separate claimants to the Iron Throne (reduced to three by the end of season four) and brings bloody battles, massacres and dynastic upheavals that devastate Westeros.
Even though he only made three films, James Dean introduced Hollywood to a new kind of man.
by India Ross
17 April, 2014 – In Rebel Without A Cause, from 1955, a 24-year-old James Dean, red-jacketed and tight-jeaned, climbs behind the wheel of an old black Mercury. To his right, the opponent he will race to the edge of a cliff hangs out of his driver-side window for a last slug of bravado: “Hey Toreador!” he jeers. “First man who jumps is a chicken.” Re-inserting a trademark cigarette, Dean flicks on his headlights and hits the gas, and the two cars accelerate towards the brink. Frames from the edge, Dean glances right, grabs for the door and rolls out onto the turf. His adversary, jacket sleeve caught on his door handle and jammed into his driver’s seat, slips wrenchingly over the edge with his car.
Less than a year later, the real-life James Dean, whose legacy is the subject of an upcoming retrospective at the BFI, was to die in an echoing event, flipping a race car on a bend on a California highway. A life mesmerisingly truncated, he left behind only three films, and the gaping absence of the career that might have been. It was a sequence of events morbidly in keeping with the themes of doomed youth his characters embodied.
The word “iconic” is tossed around ad nauseam, but if ever it were to apply, in the sense of an individual and a star whose off-screen persona outshines the sum of their roles, who bends the fabric of the society in which they live, Dean would surely qualify. In life, and even more so in death, the bee-stung darling of early Technicolor has held the awe of the movie-going public.
But facial anatomy and excellent hair were not the traits for which Dean was influential. Hollywood does not suffer a shortage of cheekbones. He slotted into a blurry interlude following the second world war but before the flowering of the Beat movement, in which the role of a man in society was under sudden and unexpected dispute. A generation primed for combat found itself at a loss for purpose, and gender roles that had lost their meaning overnight began to merge and reconfigure themselves.
By Ramon Glazov
Oct 28, 2013 - The easy narrative about Adbusters, accepted by its friends and enemies alike, is that it’s, at heart, an anarchist project. To those wishing it well, the magazine is one of the cornerstones of the Left, a wellspring of anti-authoritarian tools meant to revive progressive activism and shake things up for the greater good. For curmudgeonly detractors, “culture jamming” is little more than a powerless rehash of old Yippie protest tactics. Yet anarchism, nearly everyone assumes, is either the best or the worst part of Adbusters.
But those explanations miss a much weirder side of the magazine’s underlying politics.
This March, Adbusters jumped into what ought to seem like a marriage made in hell. It ran a glowing article on Beppe Grillo – Italy’s scruffier answer to America’s Truther champion Alex Jones – calling him “nuanced, fresh, bold, and committed as a politician,” with “a performance artist edge” and “anti-austerity ideas… [C]ountries around the world, from Greece to the US, can look to [him] for inspiration.” Grillo, the piece gushed, was “planting the seed of a renewed – accountable, fresh, rational, responsible, energized – left, that we can hope germinates worldwide.”
Completely unmentioned was the real reason Grillo is so controversial in Italy: his blog is full of anti-vaccination and 9/11 conspiracy claims, pseudoscientific cancer cures and chemtrail-like theories about Italian incinerator smoke. And, as Giovanni Tiso noted in July, Grillo’s “5-Star Movement” also has an incredibly creepy backer: Gianroberto Casaleggio, “an online marketing expert whose only known past political sympathies lay with the right-wing separatist Northern League.” Casaleggio has also written kooky manifestoes about re-organizing society through virtual reality technology, with mandatory Internet citizenship and an online world government.
Adbusters could have stopped flirting with Grillo at that point, but it didn’t. Another Grillo puff-piece appeared in its May/June issue. Then the magazine’s outgoing editor-in-chief, Micah White (acknowledged by The Nation as “the creator of the #occupywallstreet meme”), went solo to form his own “boutique activism consultancy,” promising clients a “discrete service” in “Social Movement Creation.” Two weeks ago, in a YouTube video, White proposed that the next step “after the defeat of Occupy” should be to import Grillo’s 5-Star Movement to the US in time for the 2014 mid-term elections:
After the defeat of Occupy, I don’t believe that there is any choice other than trying to grab power by means of an election victory … This is how I see the future: we could bring the 5-Star Movement to America and have the 5-Star Movement winning elections in Italy and in America, thereby forming an international party, not only with the 5-Star Movement, but with other parties as well.
The day after Adbusters ran its first pro-Grillo article, Der Spiegel compared Grillo’s tone – and sweeping plans to restructure Italy’s parliamentary system – to Mussolini’s rhetoric. Ten days before that, a 5-Star Movement MP, Roberta Lombardi, faced a media scandal after writing a blog post praising early fascism for its “very high regard for the state and protection of the family.”
Most progressives might reconsider their glowing assessment of a party as “the seed for a renewed left” when its leaders peddle absurd conspiracy theories and praise fascists. No such signs from Adbusters or White.
But Grillo may be more than a random ally for the gang at Culture Jammers HQ.
Just where did Adbusters get its defining philosophy? Why was it always so obsessed with ads and consumerism, while hardly focusing on class dynamics until the financial crisis?
In 1989, Adbusters founder Kalle Lasn claimed to have had an epiphany in a supermarket and started a movement to fight branding and advertisement. This wasn’t to be a repeat of Abbie Hoffman’s Steal This Book!-style anarchism, with roots in Proudhon’s famous “property is theft” dictum. Culture jammers weren’t acting to communalize most products, but to “uncool” them by taking on those products’ ads with their own slickly produced spoofs.
To them, the brand names bearing the coolness were more important than what the branded products did. It wasn’t drinking itself that their anti-Absolut vodka ads seemed to target, but glamorous logo-brands – as if smokers and alcoholics were hooked solely on label prestige.
The earliest Adbusters website on the Wayback Machine reads like a tamer, more Canadian, version of Alex Jones’ operation. Greeting you on the intro page is a Marshall McLuhan quote about “guerrilla information war.” Above its table of contents is the All Seeing Eye engraving from US currency.
“There’s a war on for your mind!” is the current InfoWars tagline. That’s not far from the early Adbusters (the “Journal of the Mental Environment”), which promised to “take on the archetypal mind polluters – Marlboro, Budweiser, Benetton, Coke, McDonalds, Calvin Klein – and beat them at their own game.”
Oddly for a site now considered left-wing, Adbusters 1.0 was cheesily evasive about its political position, claiming to be “neither left nor right, but straight ahead.”
There’s good reason to be suspicious of anyone who pulls that “neither left nor right” line. Though Alex Jones’ InfoWars may not have been directly based on early-days Adbusters, the two were undeniably similar in sentiment. Both take a hostile view of mass media and widely available consumer products, pushing readers towards an ascetic alternative lifestyle that insulates them from “The System” and its toxic worldliness.
And, as luck would have it, both are also the merchants of the (rarer, more expensive) alternative products needed to live this lifestyle. Alex Jones expounds the virtues of food-hoarding and drives Truthers to amass his survival packs, anti-fluoride filters, and nascent iodine drops; Adbusters flogs Blackspot shoes, Corporate America protest flags, and overpriced culture-jamming kits to “create new ambiences and psychic possibilities.”
With Lasn as its guru, culture jamming became popular among activists in the 1990s. Behind all those “subvertisements” lay one big assumption: regular sheeple were so brainwashed by consumerism that they couldn’t even snicker at rose-petally tampon ads without an enlightened jammer to spell everything out for them. Every adbuster got to feel like Morpheus, unplugging Sleepers from the Matrix with the Red Pill of Situationism.
This view of society wasn’t Marxist, left-liberal, or anarchist, so much as Don Draperist: “We are the cool-makers and the cool-breakers,” Kalle Lasn told an audience of advertising “creatives” in 2006. “More than any other profession, I think that we have the power to change the world.”
Lasn might claim not to believe in leaders, but he believes in elites: marketing professionals with a higher calling, responsible for shepherding public consciousness to save humanity from brands, from themselves.
And by exaggerating the mass media’s ability to zombify the public, jammers could imagine that they, too, had Svengali-like powers over ordinary proles. For all the “tools” Adbusters offered to sway public consciousness – stencilling, stickering, page defacement, supermarket trolley sabotage – there was never much emphasis on social skills, on persuading people with politics instead of bombarding them with theater or treating them like hackable machines.
More than anything, what sets culture jammers apart from social anarchism and weds them to the Grillo camp of quacks is a unifying emphasis on a theory called “mental environmentalism.” Mental environmentalism, Micah White explains, is “the core idea behind Adbusters, the essential critique that motivates our struggle against consumer society.”
For Adbusters, concern over the flow of information goes beyond the desire to protect democratic transparency, freedom of speech or the public’s access to the airwaves. Although these are worthwhile causes, Adbusters instead situates the battle of the mind at the center of its political agenda. Fighting to counter pro-consumerist advertising is done not as a means to an end, but as the end in itself. This shift in emphasis is a crucial element of mental environmentalism.
Mental environmentalism is an emergent movement that in the coming years will be recognized as the fundamental social struggle of our era. It is both a unifying struggle – among mental environmentalists there are everything from conservative Mormons to far-left anarchists – and a struggle that finally, concretely explains the cause of the diversity of ills that threaten us.
To escape the mental chains, and finally pull off the glorious emancipatory revolution the left has so long hoped for, we must become meme warriors who, through the use of culture jamming, spark a wave of epiphanies that shatter the consumerist worldview.
“The end in itself.” For culture jammers, posters and billboards don’t just represent exploitation, they are the tyranny (“the cause of the diversity of ills that threaten us”), and fighting them trumps all the progressive causes of their would-be allies.
That “neither left nor right” thing? It wasn’t just posturing. Not only is White equally willing to work with “far-left anarchists” and “conservative Mormons,” but his mentor Lasn once hoped to guide Occupy into a merger with the Tea Party, producing a “hybrid party” that would transcend America’s “rigid left-right divide.”
White’s explanation of how mental pollution works sinks even deeper into conspiracy babble. Sounding a bit like a Scientologist, he tells us that humanity’s biggest problems are due to something called “infotoxins” which enter us through “commercial messaging”:
If a key insight of environmentalism was that external reality, nature, could be polluted by industrial toxins, the key insight of mental environmentalism is that internal reality, our minds, can be polluted by infotoxins. Mental environmentalism draws a connection between the pollution of our minds by commercial messaging and the social, environmental, financial and ethical catastrophes that loom before humanity. Mental environmentalists argue that a whole range of phenomenon from the BP oil spill to the emergence of crony-democracy to the mass extinction of animals to the significant increase in mental illnesses are directly caused by the three thousand advertisements that assault our minds each day. And rather than treat the symptoms, by rushing to scrub the oil-soaked beaches or passing watered-down environmental protection legislation, mental environmentalists target the root cause: the advertising industry that fuels consumerism.
Instead of blaming mental illness rates on obvious culprits – workplace stress, problems at home, school bullying, bad genes, changes to DSM criteria – the “mental environmentalists” at Adbusters pin it all on subliminal infotoxins polluting our precious bodily fluids. How do they prove it? About as well as you can prove rock albums are demon-infested or that 70-million-year-old thetans cause influenza. White has decided that “external” environmentalism just doesn’t go deep enough – only “mental environmentalists,” with their meme wars, are fighting the “root cause.”
Lasn’s “mental environment” writings are just as L. Ron Hubbard-ish as White’s. (His epiphanies spawned the concept, after all.) In 2006, he suggested to the Guardian that advertising may be the cause of “mood disorders, anxiety attacks and depressions.” Four years later, he co-wrote an article with White repeating the same claims, along with new fears that TV was poisoning us with too many sensual images of “pouty lips, pert breasts [and] buns of steel”:
Growing up in a violent, erotically-charged media environment alters our psyches at a bedrock level. … And the constant flow of commercially scripted, violence-laced, pseudo-sex makes us more voyeuristic, insatiable and aggressive. Then, somewhere along the line, nothing – not even rape, torture, genocide, or war porn – shocks us anymore.
The commercial media are to the mental environment what factories are to the physical environment. A factory dumps pollution into the water or air because that’s the most efficient way to produce plastic or wood pulp or steel. A TV station or website pollutes the cultural environment because that’s the most efficient way to produce audiences. It pays to pollute. The psychic fallout is just the cost of putting on the show.
If “mental environmentalism” had a true ally in American political thought, it would be Allan Bloom, with his Platonist neocon fretting about Sony Walkmans and MTV reducing life to cultural impoverishment, a “nonstop, commercially prepackaged masturbational fantasy.” You can’t as easily picture Lasn agreeing with the “Anonymous” brand of anarchism or its “Information wants to be free!” maxims: whenever volume comes up in these mental environment articles, more information is apparently worse.
White made this explicit in a July blog post, “Toxic Culture: A Unified Theory of Mental Pollution,” writing:
How do we fight back against the incessant flow of logos, brands, slogans and jingles that submerge our streets, invade our homes and flicker on our screens? We could wage a counteroffensive at the level of content: attacking individual advertisements when they cross the decency line and become deceptive, violent or overly sexual. But this approach is like using napkins to clean up an oil spill. It fails to confront the true danger of advertising … is not in its individual messages but in the damage done to our mental ecology by the sheer volume of its flood.
White has even theorized a much earlier spiritual forefather for Adbusters than Kalle Lasn: Emile Zola, “who wrote what may be the first mental environmentalist short story, Death by Advertising, in 1866” and offered “a deeper look at advertising’s role in inducing a consumerist mindset” with his later novel, Au Bonheur Des Dames. Yes, Zola the social reformer who devoted his career to chronicling the fecund depravity and bestial desires of the underclasses. The guy who wrote a twenty-novel cycle promoting determinist psychology and Second Empire theories about hereditary animal passions of the colonized. Au Bonheur Des Dames is a cautionary tale about the nervous excitation big department stores can wreak on women’s fragile senses.
White hopes to take some morals from Zola’s shorter fiction:
Like junk food can make us obese, junk thoughts and advertisements can make us moronic. …We are, in a literal way, poisoned each time we see an advertisement and that is the essential danger of a consumer society based upon advertising.
Zola glimpsed a hundred and forty years ago…that advertising has poisoned our minds and corrupted our culture. As we march toward collapse, the question remains whether we will go passively toward our death and remembered only as a foolish civilization killed by advertising, or whether there remains within us a spark of clarity from which a mental environment movement may catch flame.
Advertising, to culture jammers, is virtually the same kind of universal scapegoat psychiatry became for Scientologists: an insidious, corrupting Demiurge responsible for all evils. But you’ll rarely find paranoia without self-importance. The grander vision, for Lasn, White, and their associates, is a world where marketers have the power to save humanity or destroy it with their “carefully-crafted imagery.” Instead of “clearing” the planet with Hubbard’s E-meter auditing, they hope Zen subvertisements, Buy Nothing Days, and strange hybrid political parties will be the answer.
Given the focus of their psychosis, it can often seem like culture jammers have the same concerns as anarchists and socialists: saving the environment, fighting capitalist exploitation, building a popular movement. But if they hate some of the things leftists also hate, it’s for the wrong reasons – and worse, their solutions are quack ones.
So don’t be surprised by White’s new alliance with Grillo, or Lasn’s dashed hopes for a merger with the Tea Party: Adbusters was never on our side.
By Carl Wilson
Last month the electro-psychedelic band MGMT released a video for its “Cool Song No. 2.” It features Michael K. Williams of The Wire as a killer-dealer-lover-healer figure stalking a landscape of vegetation, narcotics labs, rituals, and Caucasians. “What you find shocking, they find amusing,” the singer drones in Syd Barrett-via-Spiritualized mode. The video is loaded with signposts of cool, first among them Williams, who played maybe the coolest TV character of the past decade as the gay Baltimore-drug-world stickup man Omar Little. But would you consider “Cool Song No. 2” genuinely cool, or is it trying too hard? (Is that why it’s called “No. 2”?)
The very question is cruel, of course, and competitive. You can praise the Brooklyn band’s surreal imagination, or you can call it a dull, derivative outfit renting out another artist’s aura to camouflage that it has none of its own. It depends which answer you think makes you cooler.
If that sounds cynical, cynicism is difficult to avoid when the subject of cool arises now. Self-conscious indie rockers are easy targets, vulnerable to charges of recycling half-century-old postures that arguably were purloined from African-American culture in the first place. But what is cool in 2013, and why are we still using this term for what scholar Peter Stearns pegged as “a twentieth-century emotional style”? Often credited to sax player Lester Young in the 1940s, the coinage was in general circulation by the mid-1950s, with Miles Davis’s Birth of the Cool and West Side Story’s finger-snapping gang credo “Cool.” You’d be unlikely to use other decades-old slang—groovy or rad or fly—to endorse any current cultural object, at least with a straight face, but somehow cool remains evergreen.
The standard bearers, however, have changed. Once the rebellious stuff of artists, bohemians, outlaws, and (some) movie stars, coolness is now as likely to be attributed to the latest smartphone or app or the lucre they produce: The iconic statement on the matter has to be Justin Timberlake as Sean Parker saying to Jesse Eisenberg as Mark Zuckerberg in The Social Network, “A million dollars isn’t cool. You know what’s cool? A billion dollars.” That is, provided you earn it before you’re 30—the tech age has also brought on an extreme-youth cult, epitomized by fashion blogger and Rookie magazine editor Tavi Gevinson, who is a tad less cool now at 17 than she was when she emerged at age 11. What would William S. Burroughs have had to say about that? (Maybe “Just Do It!”)
Cool has come a long way, literally. In a 1973 essay called “An Aesthetic of the Cool,” art historian Robert Farris Thompson traced the concept to the West African Yoruba idea of itutu—a quality of character denoting composure in the face of danger, as well as playfulness, humor, generosity, and conciliation. It was carried to America with slavery and became a code through which to conceal rage and cope with brutality with dignity; it went on to inform the emotional textures of blues, jazz, the Harlem Renaissance, and more, then percolated into the mainstream.
The Hotel El-Djazair, formerly known as the Hotel Saint-George, is an oasis of calm in the tense city of Algiers. A labyrinth of paved pathways winds through beds of hibiscus, cactuses and roses, shaded by palm and banana trees. In the lobby, bellhops in white tunics and red fezzes escort guests past Persian carpets and walls inlaid with mosaics. Beneath the opulence, violence lurks. During the week I was there, diplomats descended on the El-Djazair to repatriate the bodies of dozens of hostages killed in a shootout at a Sahara natural-gas plant between Al Qaeda in the Islamic Maghreb and the Algerian Army.
Violence was in the air as well in January 1956, when the celebrated writer Albert Camus checked into the Hotel Saint-George. The struggle against French colonialism was escalating, with civilians becoming the primary victims. Camus was a pied-noir—a term meaning “black foot,” perhaps derived from the coal-stained feet of Mediterranean sailors, or the black boots of French soldiers, and used to refer to the one million colonists of European origin living in Algeria during French rule. He had returned after 14 years in France to try to stop his homeland from sliding deeper into war. It was a perilous mission. Right-wing French settlers plotted to assassinate him. Algerian revolutionaries watched over him without his knowledge.
The Casablanca-style intrigue—freedom fighters, spies and an exotic North African setting—seemed appropriate. Camus, after all, was often thought of as a literary Humphrey Bogart—dashing, irresistible to women, a coolly heroic figure in a dangerous world.
Camus is regarded as a giant of French literature, but it was his North African birthplace that most shaped his life and his art. In a 1936 essay, composed during a bout of homesickness in Prague, he wrote of pining for “my own town on the shores of the Mediterranean…the summer evenings that I love so much, so gentle in the green light and full of young and beautiful women.” Camus set his two most famous works, the novels The Stranger and The Plague, in Algeria, and his perception of existence, a joyful sensuality combined with a recognition of man’s loneliness in an indifferent universe, was formed here.
In 1957, Anders Österling, the permanent secretary of the Swedish Academy, acknowledged the importance of Camus’ Algerian upbringing when he presented him with the Nobel Prize in Literature, a towering achievement, won when he was only 43. Österling attributed Camus’ view of the world in part to a “Mediterranean fatalism whose origin is the certainty that the sunny splendor of the world is only a fugitive moment bound to be blotted out by the shades.”
Camus is “the single reason people outside Algeria know about this country,” says Yazid Ait Mahieddine, a documentary filmmaker and Camus expert in Algiers, as we sit beneath a photograph of the writer in the El-Djazair bar, alongside images of other celebrities who have passed through here, from Dwight Eisenhower to Simone de Beauvoir. “He is our only ambassador.”
By GEORGE YANCY
NYT Opinionator, Sept 1, 2013
“Man, I almost blew you away!”
Those were the terrifying words of a white police officer — one of those who policed black bodies in low income areas in North Philadelphia in the late 1970s — who caught sight of me carrying the new telescope my mother had just purchased for me.
“I thought you had a weapon,” he said.
The words made me tremble and pause; I felt the sort of bodily stress and deep existential anguish that no teenager should have to endure.
This officer had already inherited those poisonous assumptions and bodily perceptual practices that make up what I call the “white gaze.” He had already come to “see” the black male body as different, deviant, ersatz. He failed to conceive, or perhaps could not conceive, that a black teenage boy living in the Richard Allen Project Homes for very low income families would own a telescope and enjoy looking at the moons of Jupiter and the rings of Saturn.
A black boy carrying a telescope wasn’t conceivable — unless he had stolen it — given the white racist horizons within which my black body was policed as dangerous. To the officer, I was something (not someone) patently foolish, perhaps monstrous or even fictional. My telescope, for him, was a weapon.
In retrospect, I can see the headlines: “Black Boy Shot and Killed While Searching the Cosmos.”
That was more than 30 years ago. Only last week, our actual headlines were full of reflections on the 1963 March on Washington, the Rev. Dr. Martin Luther King’s “I Have a Dream” speech, and President Obama’s own speech at the steps of the Lincoln Memorial to commemorate it 50 years on. As the many accounts from that long ago day will tell you, much has changed for the better. But some things — those perhaps more deeply embedded in the American psyche — haven’t. In fact, we should recall a speech given by Malcolm X in 1964 in which he said, “For the 20 million of us in America who are of African descent, it is not an American dream; it’s an American nightmare.”
Despite the ringing tones of Obama’s Lincoln Memorial speech, I find myself still often thinking of a more informal and somber talk he gave. And despite the inspirational and ethical force of Dr. King and his work, I’m still thinking about someone who might be considered old news already: Trayvon Martin.
In his now much-quoted White House briefing several weeks ago, not long after the verdict in the trial of George Zimmerman, the president expressed his awareness of the ever-present danger of death for those who inhabit black bodies. “You know, when Trayvon Martin was first shot, I said that this could have been my son,” he said. “Another way of saying that is Trayvon Martin could have been me 35 years ago.” I wait for the day when a white president will say, “There is no way that I could have experienced what Trayvon Martin did (and other black people do) because I’m white and through white privilege I am immune to systemic racial profiling.”
Obama also talked about how black men in this country know what it is like to be followed while shopping and how black men have had the experience of “walking across the street and hearing the locks click on the doors of cars.” I have had this experience on many occasions as whites catch sight of me walking past their cars: Click, click, click, click. Those clicks can be deafening. There are times when I want to become their boogeyman. I want to pull open the car door and shout: “Surprise! You’ve just been car-jacked by a fantasy of your own creation. Now get out of the car.”
The president’s words, perhaps consigned to a long-ago news cycle now, remain powerful: they validate experiences that blacks have undergone in their everyday lives. Obama’s voice resonates with those philosophical voices (Frantz Fanon, for example) that have long attempted to describe the lived interiority of racial experiences. He has also deployed the power of narrative autobiography, which is a significant conceptual tool used insightfully by critical race theorists to discern the clarity and existential and social gravity of what it means to experience white racism. As a black president, he has given voice to the epistemic violence that blacks often face as they are stereotyped and profiled within the context of quotidian social spaces.
David Hume claimed that to be black was to be “like a parrot who speaks a few words plainly.” And Immanuel Kant maintained that to be “black from head to foot” was “clear proof” that what any black person says is stupid. In his “Notes on Virginia,” Thomas Jefferson wrote: “In imagination they [Negroes] are dull, tasteless and anomalous,” and inferior. In the first American Edition of the Encyclopaedia Britannica (1798), the term “Negro” was defined as someone who is cruel, impudent, revengeful, treacherous, nasty, idle, dishonest, a liar and given to stealing.
My point here is to say that the white gaze is global and historically mobile. And its origins, while from Europe, are deeply seated in the making of America.
Black bodies in America continue to be reduced to their surfaces and to stereotypes that are constricting and false, that often force those black bodies to move through social spaces in ways that put white people at ease. We fear that our black bodies incite an accusation. We move in ways that help us to survive the procrustean gazes of white people. We dread that those who see us might feel the irrational fear to stand their ground rather than “finding common ground,” a reference that was made by Bernice King as she spoke about the legacy of her father at the steps of the Lincoln Memorial.
The white gaze is also hegemonic, historically grounded in material relations of white power: it was deemed disrespectful for a black person to violate the white gaze by looking directly into the eyes of someone white. The white gaze is also ethically solipsistic: within it only whites have the capacity of making valid moral judgments.
Even with the unprecedented White House briefing, our national discourse regarding Trayvon Martin and questions of race has failed to produce a critical and historically conscious discourse that sheds light on what it means to be black in an anti-black America. If historical precedent says anything, this failure will only continue. Trayvon Martin, like so many black boys and men, was under surveillance (etymologically, “to keep watch”). Little did he know that on Feb. 26, 2012, he would enter a space of social control and bodily policing, a kind of Benthamian panoptic nightmare that would truncate his being as suspicious; a space where he was, paradoxically, both invisible and yet hypervisible.
As Ralph Ellison wrote in Invisible Man, “I am invisible, understand, simply because people [in this case white people] refuse to see me.” Trayvon was invisible to Zimmerman; he was not seen as the black child that he was, trying to make it back home with Skittles and an iced tea. He was not seen as having done nothing wrong, as one who dreams and hopes.
As black, Trayvon was already known and rendered invisible. His childhood and humanity were already criminalized as part of a white racist narrative about black male bodies. Trayvon needed no introduction: “Look, the black; the criminal!”
Many have argued that the violence began with the physical confrontation between Trayvon and Zimmerman. Yet the violence began earlier, with Zimmerman’s non-emergency dispatch call, a call that was racially assaultive in its discourse, one that used the tropes of anti-black racism. Note, Zimmerman said, “There’s a real suspicious guy.” He also said, “This guy looks like he’s up to no good or he’s on drugs or something.” When asked by the dispatcher, he said, within seconds, that, “He looks black.” When asked what he was wearing, Zimmerman said, “A dark hoodie, like a gray hoodie.” Later, Zimmerman said that “now he’s coming toward me. He’s got his hands in his waist band.” And then, “And he’s a black male.” But what does it mean to be “a real suspicious guy”? What does it mean to look like one is “up to no good”? Zimmerman does not give any details, nothing to buttress the validity of his narration. Keep in mind that Zimmerman is in his vehicle as he provides his narration to the dispatcher. As “the looker,” it is not Zimmerman who is in danger; rather, it is Trayvon Martin, “the looked at,” who is the target of suspicion and possible violence.
After all, it is Trayvon Martin who is wearing the hoodie, a piece of “racialized” attire that apparently signifies black criminality. Zimmerman later said: “Something’s wrong with him. Yep, he’s coming to check me out,” and, “He’s got something in his hands.” Zimmerman also said, “I don’t know what his deal is.” A black young male with “something” in his hands, wearing a hoodie, looking suspicious, and perhaps on drugs, and there being “something wrong with him,” is a racist narrative of fear and frenzy. The history of white supremacy underwrites this interpretation. Within this context of discursive violence, Zimmerman was guilty of an act of aggression against Trayvon Martin, even before the trigger was pulled. Before his physical death, Trayvon Martin was rendered “socially dead” under the weight of Zimmerman’s racist stereotypes. Zimmerman’s aggression was enacted through his gaze, through the act of profiling, through his discourse and through his warped reconstruction of an innocent black boy that instigates white fear.
What does it say about America when to be black is the ontological crime, a crime of simply being?
Perhaps the religious studies scholar Bill Hart is correct: “To be a black man is to be marked for death.” Or as the political philosopher Joy James argues, “Blackness as evil [is] destined for eradication.” Perhaps this is why when writing about the death of his young black son, the social theorist W.E.B. Du Bois said, “All that day and all that night there sat an awful gladness in my heart — nay, blame me not if I see the world thus darkly through the Veil — and my soul whispers ever to me saying, ‘Not dead, not dead, but escaped; not bond, but free.’ ”
Trayvon Martin was killed walking while black. As the protector of all things “gated,” of all things standing on the precipice of being endangered by black male bodies, Zimmerman created the conditions upon which he had no grounds to stand on. Indeed, through his racist stereotypes and his pursuit of Trayvon, he created the conditions that belied the applicability of the stand your ground law and created a situation where Trayvon was killed. This is the narrative that ought to have been told by the attorneys for the family of Trayvon Martin. It is part of the narrative that Obama brilliantly told, one of black bodies being racially policed and having suffered a unique history of racist vitriol in this country.
Yet it is one that is perhaps too late, one already rendered mute and inconsequential by the verdict of “not guilty.”
George Yancy is a professor of philosophy at Duquesne University. He has authored, edited and co-edited 17 books, including “Black Bodies, White Gazes,” “Look, a White!” and (co-edited with Janine Jones) “Pursuing Trayvon Martin.”
Bob Simpson looks at how the ability for arts and culture to thrive relies upon working people’s fight for a space of their own.
“The right to the city is far more than the individual liberty to access urban resources: it is a right to change ourselves by changing the city. It is, moreover, a common rather than an individual right since this transformation inevitably depends upon the exercise of a collective power to reshape the processes of urbanization. The freedom to make and remake our cities and ourselves is, I want to argue, one of the most precious yet most neglected of our human rights.” — David Harvey, The Right to the City
By Bob Simpson
June 17, 2013 – The 1968 French student-worker uprising popularized the phrase “The Right to the City” from philosopher Henri Lefebvre’s book Le Droit à la ville. According to Lefebvre, the right to transform the urban environment cannot be restricted to people who own substantial property, hold citizenship papers or are otherwise deemed to have a higher social status. It belongs to all of us, regardless of race, gender, age, economic status or any narrowly defined category. The city is a place of possibilities, and we have a basic human right to make those possibilities realities.
Lefebvre’s subsequent book, The Urban Revolution, expanded on his Right to the City ideas. Written in 1970, the book speculates, rather accurately, about how urban society would evolve. There is now a World Charter for the Right to the City, which came out of the Social Forum of the Americas held in Ecuador in July 2004. The Right to the City is a global movement, as the urban dispossessed around the planet struggle to humanize their own cities.
I was reading Lefebvre’s The Urban Revolution while riding the CTA Red Line on an April morning earlier this year. I was headed to Chicago’s Uptown neighborhood. The economically and racially diverse Uptown community was fighting school closings and the forced exile of working class people to benefit wealthy real estate interests and corporate school privatizers.
View of Uptown from the Wilson CTA stop.
Led by a new organization called Uptown Uprising, Uptown’s embattled residents had called for a rally and march to show how the power of concentrated wealth was destroying a community. With blue skies overhead, I arrived at the Stewart Elementary School playground where Uptown Uprising was gathering. Stewart Elementary, along with Stockton Elementary in Uptown, was scheduled for closing. In Chicago, school closings are often closely linked with financial speculation and gentrification.
Reggie Spears, the Stewart music teacher, was leading his band students in a lively display of musical talent, while parents and students were making colorful signs on the playground’s artificial turf — for the city is a place of creation.
By Bill Fletcher, Jr.
April 16, 2013 – I lived in the Boston area for 18 years. The Marathon was something that i accepted as part of what it meant to live in Boston, though i was not moved by it. But it was comfortable.
I could not believe it this afternoon when i heard about the bombing. Like many other people i went through immediate denial. I did not want to believe that it actually had happened. Someone had to have made a mistake, i thought. But then there was no denying it.
I was amazed by the first responders. It was not just the official responders, but civilians in the area who came to the aid of those injured. Bostonians can and will come through in a crisis. I have seen it before, and we will probably be forced to see it again.
Yet i found myself thinking that we in the USA believe that these terrorist actions are either new or exceptional, at least for us in this country. We have, of course, heard about state-sponsored or non-state actor terrorism overseas. The Rwanda genocide; Israeli attacks on Gaza; the list goes on. We, in the USA, are always stunned, however, when it happens to us because we believe that somehow we are an exception to this madness. We are not.
But it is also important to remember that there is a long history of homegrown terrorism in the USA. I am not talking about those who have become jihadists. I am thinking more about the Ku Klux Klan, or Aryan Nation, or Black Guard. The terror that groups like these perpetrated over the years was often ignored in large parts of mainstream USA but was central to the experiences of those of us of color and those of us who chose different political directions.
We do not know who was behind the Marathon bombings. It could have been someone completely insane. It might have been motivated by domestic or international political matters. In any case it was carried out by a sociopath and has, at least as of this moment, killed at least three people, wounded dozens, and destroyed the lives of probably hundreds of people.
The Boston Marathon will never be the same. Boston will never be the same. And today we share so much in common with victims around the world of state-sponsored terrorism and the actions of terrorist groups who have decided that there is a percentage in killing civilians, as reprehensible as most of us may find it.
My heart is with the families of the dead and wounded, and i hope for a speedy recovery for those who were injured.
i also hope for the capture of the criminals who carried out this 2013 Boston massacre. May they never again see the light of day.
By Tim Wise
April 16, 2013 – As the nation weeps for the victims of the horrific bombing in Boston yesterday, one searches for lessons amid the carnage, and finds few. That violence is unacceptable stands out as one, sure. That hatred — for humanity, for life, or whatever else might have animated the bomber or bombers — is never the source of constructive human action seems like a reasonably close second.
But I dare say there is more: a much less obvious and far more uncomfortable lesson, which many are loath to learn, but which an event such as this makes readily apparent, and which we must acknowledge, no matter how painful.
It is a lesson about race, about whiteness, and specifically, about white privilege.
I know you don’t want to hear it. But I don’t much care. So here goes.
White privilege is knowing that even if the Boston Marathon bomber turns out to be white, his or her identity will not result in white folks generally being singled out for suspicion by law enforcement, or the TSA, or the FBI.
White privilege is knowing that even if the bomber turns out to be white, no one will call for whites to be profiled as terrorists as a result, subjected to special screening, or threatened with deportation.
White privilege is knowing that if the bomber turns out to be white, he or she will be viewed as an exception to an otherwise non-white rule, an aberration, an anomaly, and that he or she will be able to join the ranks of Tim McVeigh and Terry Nichols and Ted Kaczynski and Eric Rudolph and Joe Stack and George Metesky and Byron De La Beckwith and Bobby Frank Cherry and Thomas Blanton and Herman Frank Cash and Robert Chambliss and James von Brunn and Robert Mathews and David Lane and Michael F. Griffin and Paul Hill and John Salvi and James Kopp and Luke Helder and James David Adkisson and Scott Roeder and Shelley Shannon and Dennis Mahon and Wade Michael Page and Byron Williams and Kevin Harpham and William Krar and Judith Bruey and Edward Feltus and Raymond Kirk Dillard and Adam Lynn Cunningham and Bonnell Hughes and Randall Garrett Cole and James Ray McElroy and Michael Gorbey and Daniel Cowart and Paul Schlesselman and Frederick Thomas and Paul Ross Evans and Matt Goldsby and Jimmy Simmons and Kathy Simmons and Kaye Wiggins and Patricia Hughes and Jeremy Dunahoe and David McMenemy and Bobby Joe Rogers and Francis Grady and Demetrius Van Crocker and Floyd Raymond Looker and Derek Mathew Shrout, among the pantheon of white people who engage in (or have plotted) politically motivated violence meant to terrorize and kill, but whose actions result in the assumption of absolutely nothing about white people generally, or white Christians in particular.
And white privilege is being able to know nothing about the crimes committed by most of the terrorists listed above — indeed, never to have so much as heard most of their names — let alone to make assumptions about the role that their racial or ethnic identity may have played in their crimes.
White privilege is knowing that if the Boston bomber turns out to be white, we will not be asked to denounce him or her, so as to prove our own loyalties to the common national good. It is knowing that the next time a cop sees one of us standing on the sidewalk cheering on runners in a marathon, that cop will say exactly nothing to us as a result.
White privilege is knowing that if you are a white student from Nebraska — as opposed to, say, a student from Saudi Arabia — no one, and I mean no one, would think it important to detain and question you in the wake of a bombing such as the one at the Boston Marathon.
And white privilege is knowing that if this bomber turns out to be white, the United States government will not bomb whatever corn field or mountain town or stale suburb from which said bomber came, just to ensure that others like him or her don’t get any ideas. And if he turns out to be a member of the Irish Republican Army we won’t bomb Belfast. And if he’s an Italian American Catholic we won’t bomb the Vatican.
In short, white privilege is the thing that allows you (if you’re white) — and me — to view tragic events like this as merely horrific, and from the perspective of pure and innocent victims, rather than having to wonder, and to look over one’s shoulder, and to ask even if only in hushed tones, whether those we pass on the street might think that somehow we were involved.
It is the source of our unearned innocence and the cause of others’ unjustified oppression.
That is all. And it matters.
New York Review of Books
Before he died on February 14, Ronald Dworkin sent to The New York Review a text of his new book, Religion Without God, to be published by Harvard University Press later this year. We publish here an excerpt from the first chapter. —The Editors
The familiar stark divide between people of religion and without religion is too crude. Many millions of people who count themselves atheists have convictions and experiences very like and just as profound as those that believers count as religious. They say that though they do not believe in a “personal” god, they nevertheless believe in a “force” in the universe “greater than we are.” They feel an inescapable responsibility to live their lives well, with due respect for the lives of others; they take pride in a life they think well lived and suffer sometimes inconsolable regret at a life they think, in retrospect, wasted. They find the Grand Canyon not just arresting but breathtakingly and eerily wonderful. They are not simply interested in the latest discoveries about the vast universe but enthralled by them. These are not, for them, just a matter of immediate sensuous and otherwise inexplicable response. They express a conviction that the force and wonder they sense are real, just as real as planets or pain, that moral truth and natural wonder do not simply evoke awe but call for it.
There are famous and poetic expressions of the same set of attitudes. Albert Einstein said that though an atheist he was a deeply religious man:
To know that what is impenetrable to us really exists, manifesting itself as the highest wisdom and the most radiant beauty which our dull faculties can comprehend only in their most primitive forms—this knowledge, this feeling, is at the center of true religiousness. In this sense, and in this sense only, I belong in the ranks of devoutly religious men.1
Percy Bysshe Shelley declared himself an atheist who nevertheless felt that “The awful shadow of some unseen Power/Floats though unseen among us….”2 Philosophers, historians, and sociologists of religion have insisted on an account of religious experience that finds a place for religious atheism. William James said that one of the two essentials of religion is a sense of fundamentality: that there are “things in the universe,” as he put it, “that throw the last stone.”3 Theists have a god for that role, but an atheist can think that the importance of living well throws the last stone, that there is nothing more basic on which that responsibility rests or needs to rest.
Judges often have to decide what “religion” means for legal purposes. For example, the American Supreme Court had to decide whether, when Congress provided a “conscientious objection” exemption from military service for men whose religion would not allow them to serve, an atheist whose moral convictions also prohibited service qualified for the objection. It decided that he did qualify.4 The Court, called upon to interpret the Constitution’s guarantee of “free exercise of religion” in another case, declared that many religions flourish in the United States that do not recognize a god, including something the Court called “secular humanism.”5 Ordinary people, moreover, have come to use “religion” in contexts having nothing to do with either gods or ineffable forces. They say that Americans make a religion of their Constitution, and that for some people baseball is a religion. These latter uses of “religion” are only metaphorical, to be sure, but they seem parasitic not on beliefs about God but rather on deep commitments more generally.
So the phrase “religious atheism,” however surprising, is not an oxymoron; religion is not restricted to theism just as a matter of what words mean. But the phrase might still be thought confusing. Would it not be better, for the sake of clarity, to reserve “religion” for theism and then to say that Einstein, Shelley, and the others are “sensitive” or “spiritual” atheists? But on a second look, expanding the territory of religion improves clarity by making plain the importance of what is shared across that territory. Richard Dawkins says that Einstein’s language is “destructively misleading” because clarity demands a sharp distinction between a belief that the universe is governed by fundamental physical laws, which Dawkins thought Einstein meant, and a belief that it is governed by something “supernatural,” which Dawkins thinks the word “religion” suggests.
But Einstein meant much more than that the universe is organized around fundamental physical laws; indeed his view I quoted is, in one important sense, an endorsement of the supernatural. The beauty and sublimity he said we could reach only as a feeble reflection are not part of nature; they are something beyond nature that cannot be grasped even by finally understanding the most fundamental of physical laws. It was Einstein’s faith that some transcendental and objective value permeates the universe, value that is neither a natural phenomenon nor a subjective reaction to natural phenomena. That is what led him to insist on his own religiosity. No other description, he thought, could better capture the character of his faith.
So we should let Einstein have his self-description, the scholars their broad categories, the judges their interpretations. Religion, we should say, does not necessarily mean a belief in God. But then, granted that someone can be religious without believing in a god, what does being religious mean? What is the difference between a religious attitude toward the world and a nonreligious attitude? That is hard to answer because “religion” is an interpretive concept. That is, people who use the concept do not agree about precisely what it means: when they use it they are taking a stand about what it should mean. Einstein may well have had something different in mind when he called himself religious than William James did when he classified certain experiences as religious or the Supreme Court justices did when they said that atheistic beliefs could qualify as religious. So we should consider our question in that spirit. What account of religion would it be most revealing to adopt?
We must turn to this challenge almost immediately. But we should pause to notice the background against which we consider the issue. Religious war is, like cancer, a curse of our species. People kill each other, around the world, because they hate each other’s gods. In less violent places like America they fight mainly in politics, at every level from national elections to local school board meetings. The fiercest battles are then not between different sects of godly religion but between zealous believers and those atheists they regard as immoral heathens who cannot be trusted and whose growing numbers threaten the moral health and integrity of the political community.
The zealots have great political power in America now, at least for the present. The so-called religious right is a voting bloc still eagerly courted. The political power of religion has provoked, predictably, an opposite—though hardly equal—reaction. Militant atheism, though politically inert, is now a great commercial success. No one who called himself an atheist could be elected to any important office in America, but Richard Dawkins’s book The God Delusion (2006) has sold millions of copies here, and dozens of other books that condemn religion as superstition crowd bookstores. Books ridiculing God were once, decades ago, rare. Religion meant a Bible and no one thought it worth the trouble to point out the endless errors of the biblical account of creation. No more. Scholars devote careers to refuting what once seemed, among those who enthusiastically buy their books, too silly to refute.
If we can separate God from religion—if we can come to understand what the religious point of view really is and why it does not require or assume a supernatural person—then we may be able to lower, at least, the temperature of these battles by separating questions of science from questions of value. The new religious wars are now really culture wars. They are not just about scientific history—about what best accounts for the development of the human species, for instance—but more fundamentally about the meaning of human life and what living well means.
As we shall see, logic requires a separation between the scientific and value parts of orthodox godly religion. When we separate these properly we discover that they are fully independent: the value part does not depend—cannot depend—on any god’s existence or history. If we accept this, then we formidably shrink both the size and the importance of the wars. They would no longer be culture wars. This ambition is utopian: violent and nonviolent religious wars reflect hatreds deeper than philosophy can address. But a little philosophy might help.
What, then, should we count as a religious attitude? I will try to provide a reasonably abstract and hence ecumenical account. The religious attitude accepts the full, independent reality of value. It accepts the objective truth of two central judgments about value. The first holds that human life has objective meaning or importance. Each person has an innate and inescapable responsibility to try to make his life a successful one: that means living well, accepting ethical responsibilities to oneself as well as moral responsibilities to others, not just if we happen to think this important but because it is in itself important whether we think so or not.
The second holds that what we call “nature”—the universe as a whole and in all its parts—is not just a matter of fact but is itself sublime: something of intrinsic value and wonder. Together these two comprehensive value judgments declare inherent value in both dimensions of human life: biological and biographical. We are part of nature because we have a physical being and duration: nature is the locus and nutrient of our physical lives. We are apart from nature because we are conscious of ourselves as making a life and must make decisions that, taken together, determine what life we have made.
For many people religion includes much more than those two values: for many theists it also includes obligations of worship, for instance. But I shall take these two—life’s intrinsic meaning and nature’s intrinsic beauty—as paradigms of a fully religious attitude to life. These are not convictions that one can isolate from the rest of one’s life. They engage a whole personality. They permeate experience: they generate pride, remorse, and thrill. Mystery is an important part of that thrill. William James said that
like love, like wrath, like hope, ambition, jealousy, like every other instinctive eagerness and impulse, [religion] adds to life an enchantment which is not rationally or logically deducible from anything else.6
The enchantment is the discovery of transcendental value in what seems otherwise transient and dead.
But how can religious atheists know what they claim about the various values they embrace? How can they be in touch with the world of value to check the perhaps fanciful claim in which they invest so much emotion? Believers have the authority of a god for their convictions; atheists seem to pluck theirs out of the air. We need to explore a bit the metaphysics of value.
The religious attitude rejects naturalism, which is one name for the very popular metaphysical theory that nothing is real except what can be studied by the natural sciences, including psychology. That is, nothing exists that is neither matter nor mind; there is really, fundamentally, no such thing as a good life or justice or cruelty or beauty. Richard Dawkins spoke for naturalists when he suggested the scientists’ proper reply to people who, criticizing naturalism, endlessly quote Hamlet: “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.” “Yes,” Dawkins replied, “but we’re working on it.”7
Some naturalists are nihilists: they say that values are only illusions. Other naturalists accept that in some sense values exist, but they define them so as to deny them any independent existence: they make them depend entirely on people’s thoughts or reactions. They say, for instance, that describing someone’s behavior as good or right only means that, as a matter of fact, the lives of more people will be pleasant if everyone behaves in that way. Or that saying a painting is beautiful only means that in general people take pleasure in looking at it.
The religious attitude rejects all forms of naturalism. It insists that values are real and fundamental, not just manifestations of something else; they are as real as trees or pain. It also rejects a very different theory we might call grounded realism. This position, also popular among philosophers, holds that values are real and that our value judgments can be objectively true—but only on the assumption, which might be wrong, that we have good reason, apart from our own confidence in our value judgments, to think that we have the capacity to discover truths about value.
There are many forms of grounded realism: one is a form of theism that traces our capacity for value judgment to a god. (I shall shortly argue that this supposed grounding goes in the wrong direction.) They all agree that, if value judgment can ever be sound, there must be some independent reason to think that people have a capacity for sound moral judgment—independent because it does not itself rely on that capacity. That makes the status of value hostage to biology or metaphysics. Suppose we find undeniable evidence that we hold the moral convictions we do only because they were evolutionarily adaptive, which certainly did not require them to be true. Then, on this view, we would have no reason to think that cruelty is really wrong. If we think it is, then we must think we have some other way of being “in touch with” moral truth.
The religious attitude insists on a much more fundamental divorce between the world of value and facts about our natural history or our psychological susceptibilities. Nothing could impeach our judgment that cruelty is wrong except a good moral argument that cruelty is not after all wrong. We ask: What reason do we have for supposing that we have the capacity for sound value judgment? Ungrounded realism answers: the only possible reason we could have—we reflect responsibly on our moral convictions and find them persuasive. We think them true, and we therefore think we have the capacity to find the truth. How can we reject the hypothesis that all our convictions about value are only mutually supporting illusions? Ungrounded realism answers: we understand that hypothesis in the only way that makes it intelligible. It suggests that we do not have an adequate moral case for any of our moral judgments. We refute that suggestion by making moral arguments for some of our moral judgments.
The religious attitude, to repeat, insists on the full independence of value: the world of value is self-contained and self-certifying. Does that disqualify the religious attitude on grounds of circularity? Notice that there is no finally noncircular way to certify our capacity to find truth of any kind in any intellectual domain. We rely on experiment and observation to certify our judgments in science. But experiment and observation are reliable only in virtue of the truth of basic assumptions about causation and optics that we rely on science itself, and nothing more basic, to certify. And of course our judgments about the nature of the external world all depend, even more fundamentally, on a universally shared assumption that there is an external world, an assumption that science cannot itself certify.
We find it impossible not to believe the elementary truths of mathematics and, when we understand them, the astonishingly complex truths that mathematicians have proved. But we cannot demonstrate either the elementary truths or the methods of mathematical demonstration from outside mathematics. We feel that we do not need any independent certification: we know we have an innate capacity for logic and mathematical truth. But how do we know we have that capacity? Only because we form beliefs in these domains that we simply cannot, however we try, disown. So we must have such a capacity.
We might say: we accept our most basic scientific and mathematical capacities finally as a matter of faith. The religious attitude insists that we embrace our values in the same way: finally as a matter of faith as well. There is a striking difference. We have generally agreed standards of good scientific argument and valid mathematical demonstration; but we have no agreed standards for moral or other forms of reasoning about value. On the contrary, we disagree markedly about goodness, right, beauty, and justice. Does that mean that we have an external certification of our capacities for science and mathematics that we lack in the domain of value?
No, because interpersonal agreement is not an external certification in any domain. The principles of scientific method, including the need for interpersonal confirmation of observation, are justified only by the science these methods have produced. As I said, everything in science, including the importance of shared observation, hangs together: it rests on nothing outside science itself. Logic and mathematics are different still. Consensus about the validity of a complex mathematical argument is in no way evidence of that validity. What if—unimaginable horror—the human race ceased to agree about valid mathematical or logical arguments? It would fall into terminal decline, but no one would have any good reason, along the way, to doubt that five and seven make twelve. Value is different still. If value is objective, then consensus about a particular value judgment is irrelevant to its truth or anyone’s responsibility in thinking it true, and experience shows, for better or worse, that the human community can survive great discord about moral or ethical or aesthetic truth. For the religious attitude, disagreement is a red herring.
I said, just now, that the religious attitude rests finally on faith. I said that mainly to point out that science and mathematics are, in the same way, matters of faith as well. In each domain we accept felt, inescapable conviction rather than the benediction of some independent means of verification as the final arbiter of what we are entitled responsibly to believe. This kind of faith is not just passive acceptance of the conceptual truth that we cannot justify our science or our logic or our values without appealing to science or logic or value. It is a positive affirmation of the reality of these worlds and of our confidence that though each of our judgments may be wrong we are entitled to think them right if we have reflected on them responsibly enough.
In the special case of value, however, faith means something more, because our convictions about value are emotional commitments as well and, whatever tests of coherence and internal support they survive, they must feel right in an emotional way as well. They must have a grip on one’s whole personality. Theologians often say that religious faith is a sui generis experience of conviction. Rudolf Otto, in his markedly influential book, The Idea of the Holy, called the experience “numinous” and said it was a kind of “faith-knowledge.”[8] I mean to suggest that convictions of value are also complex, sui generis, emotional experiences. As we will see [in a later section of the new book, Religion Without God], when scientists confront the unimaginable vastness of space and the astounding complexity of atomic particles they have an emotional reaction that matches Otto’s description surprisingly well. Indeed many of them use the very term “numinous” to describe what they feel. They find the universe awe-inspiring and deserving of a kind of emotional response that at least borders on trembling.
But of course I do not mean, in speaking of faith, that the fact that a moral conviction survives reflection is itself an argument for that conviction. A conviction of truth is a psychological fact and only a value judgment can argue for the conviction’s truth. And of course I do not mean that value judgments are in the end only subjective. Our felt conviction that cruelty is wrong is a conviction that cruelty is really wrong; we cannot have that conviction without thinking that it is objectively true. Acknowledging the role of felt, irresistible conviction in our experience of value just recognizes the fact that we have such convictions, that they can survive responsible reflection, and that we then have no reason at all, short of further evidence or argument, to doubt their truth.
You may think that if all we can do to defend value judgments is appeal to other value judgments, and then finally to declare faith in the whole set of judgments, then our claims to objective truth are just whistles in the dark. But this challenge, however familiar, is not an argument against the religious worldview. It is only a rejection of that worldview. It denies the basic tenets of the religious attitude: it produces, at best, a standoff. You just do not have the religious point of view.
I have already suggested reasons why we should treat the attitude I have been describing as religious and recognize the possibility of religious atheism. We hope better to understand why so many people declare that they have a sense of value, mystery, and purpose in life in spite of their atheism rather than in addition to their atheism: why they associate their values with those of conventional religion in that way. We also hope to produce an account of religion that we can use to interpret the widespread conviction that people have special rights to religious freedom. [That is one of the projects of the new book.]
I want now to explore another, more complex, reason for treating the attitude I describe as religious. Theists assume that their value realism is grounded realism. God, they think, has provided and certifies their perception of value: of the responsibilities of life and the wonders of the universe. In fact, however, their realism must finally be ungrounded. It is the radical independence of value from history, including divine history, that makes their faith defensible.
The heart of my argument is the following assumption. The conventional, theistic religions with which most of us are most familiar—Judaism, Christianity, and Islam—have two parts: a science part and a value part. The science part offers answers to important factual questions about the birth and history of the universe, the origin of human life, and whether or not people survive their own death. That part declares that an all-powerful and all-knowing god created the universe, judges human lives, guarantees an afterlife, and responds to prayer.
Of course I do not mean that these religions offer what we count as scientific arguments for the existence and career of their god. I mean only that this part of many religions makes claims about matters of fact and about historical and contemporary causes and effects. Some believers do defend these claims with what they take to be scientific arguments; others profess to believe them as a matter of faith or through the evidence of sacred texts. I call them all scientific in virtue of their content, not their defense.
The value part of a conventional theistic religion offers a variety of convictions about how people should live and what they should value. Some of these are godly commitments, that is, commitments that are parasitic on and make no sense without the assumption of a god. Godly convictions declare duties of worship, prayer, and obedience to the god the religion endorses. But other religious values are not, in that way, godly: they are at least formally independent of any god. The two paradigm religious values I identified are in that way independent. Religious atheists do not believe in a god and so reject the science of conventional religions and the godly commitments, like a duty of ritual worship, that are parasitic on that part. But they accept that it matters objectively how a human life goes and that everyone has an innate, inalienable ethical responsibility to try to live as well as possible in his circumstances. They accept that nature is not just a matter of particles thrown together in a very long history but something of intrinsic wonder and beauty.
The science part of conventional religion cannot ground the value part because—to put it briefly at first—these are conceptually independent. Human life cannot have any kind of meaning or value just because a loving god exists. The universe cannot be intrinsically beautiful just because it was created to be beautiful. Any judgment about meaning in human life or wonder in nature relies ultimately not only on descriptive truth, no matter how exalted or mysterious, but finally on more fundamental value judgments. There is no direct bridge from any story about the creation of the firmament, or the heavens and earth, or the animals of the sea and the land, or the delights of Heaven, or the fires of Hell, or the parting of any sea or the raising of any dead, to the enduring value of friendship and family or the importance of charity or the sublimity of a sunset or the appropriateness of awe in the face of the universe or even a duty of reverence for a creator god.
I am not arguing, against the science of the traditional Abrahamic religions, that there is no personal god who made the heavens and loves its creatures. I claim only that such a god’s existence cannot in itself make a difference to the truth of any religious values. If a god exists, perhaps he can send people to Heaven or Hell. But he cannot of his own will create right answers to moral questions or instill the universe with a glory it would not otherwise have. A god’s existence or character can only figure in the defense of such values as a fact that makes some different, independent background value judgment pertinent; it can only figure, that is, as a minor premise. Of course, a belief in a god can shape a person’s life dramatically. Whether and how it does this depends on the character of the supposed god and the depth of commitment to that god. An obvious and crude case: someone who believes he will go to Hell if he displeases a god will very likely lead a different life from someone who does not have any such belief. But whether what displeases a god is morally wrong is not up to that god.
I am now relying on an important conceptual principle that we might call “Hume’s principle” because it was defended by that eighteenth-century Scottish philosopher. This principle insists that one cannot support a value judgment—an ethical or moral or aesthetic claim—just by establishing some scientific fact about how the world is or was or will be. Something else is always necessary: a background value judgment that shows why the scientific fact is relevant and has that consequence. Yes, whenever I see that someone is in pain, or threatened with danger, I have a moral responsibility to help if I can. Just the plain fact of pain or danger appears to generate, all by itself, a moral duty. But the appearance is deceptive: the pain and danger would not generate a moral duty unless it was also true, as a matter of background moral truth, that people have a general duty to relieve or prevent suffering. Very often, as in this case, the background principle is too obvious to need stating or even thinking. But it must still be there, and it must still really connect the ordinary judgment with the more concrete moral or ethical or aesthetic judgment it is supposed to support.
I agree that the existence of a personal god—a supernatural, all-powerful, omniscient, and loving being—is a very exotic kind of scientific fact. But it is still a scientific fact and it still requires a pertinent background moral principle to have any impact on value judgments. That is important because those background value judgments can only themselves be defended—to the extent they can be defended at all—by locating them in a larger network of values each of which draws on and justifies the others. They can only be defended, as my account of the religious attitude insists, within the overall scheme of value.
So a god’s existence can be shown to be either necessary or sufficient to justify a particular conviction of value only if some independent background principle explains why. We might well be convinced of some such principle. We might think, for instance, that the sacrifice of God’s son on the Cross gives us a responsibility of gratitude to honor the principles for which He died. Or that we owe the deference to the god who created us that we owe a parent, except that our deference to that god must be unlimited and unstinting. Believers will have no trouble constructing other such principles. But the principles they cite, whatever they are, must have independent force seen only as claims of morality or some other department of value. Theists must have an independent faith in some such principle; it is that principle, rather than just the divine events or other facts they claim pertinent, that they must find they cannot but believe. What divides godly and godless religion—the science of godly religion—is not as important as the faith in value that unites them.
Copyright ©2013 by Ronald Dworkin
1. Albert Einstein, in Living Philosophies: The Reflections of Some Eminent Men and Women of Our Time, edited by Clifton Fadiman (Doubleday, 1990), p. 6.
2. “Hymn to Intellectual Beauty” (1816).
3. William James, The Will to Believe and Other Essays in Popular Philosophy (Longmans, Green, and Co., 1896), p. 25.
4. United States v. Seeger, 380 US 163 (1965).
5. Torcaso v. Watkins, 367 US 488 (1961), fn. 11: “Among religions in this country which do not teach what would generally be considered a belief in the existence of God are Buddhism, Taoism, Ethical Culture, Secular Humanism and others. See Washington Ethical Society v. District of Columbia, 101 US App. D.C. 371, 249 F. 2d 127; Fellowship of Humanity v. County of Alameda, 153 Cal. App. 2d 673, 315 P. 2d 394; II Encyclopaedia of the Social Sciences 293; 4 Encyclopaedia Britannica (1957 ed.) 325–327; 21 id., at 797; Archer, Faiths Men Live By (2d ed. revised by Purinton) 120–138, 254–313; 1961 World Almanac 695, 712; Year Book of American Churches for 1961, at 29, 47.”
6. William James, The Varieties of Religious Experience (The Modern Library, 1902), p. 47.
7. Richard Dawkins, Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder (Houghton Mifflin, 1998), p. xi.
8. Rudolf Otto, The Idea of the Holy, translated by John W. Harvey (Oxford University Press, 1923). Originally published in German in 1917.
Copyright © 1963-2013 NYREV, Inc. All rights reserved.
By Harry Targ
This paper was presented at “Woody at 100: Woody Guthrie’s Legacy to Working Men and Women,” a conference held at Penn State University, September 8–9, 2012.
Several key concepts in the Marxian tradition influenced the consciousness and political practice of Paul Robeson, Woody Guthrie, and Pete Seeger. First, all three were historical and dialectical materialists. They conceived of the socio-economic condition of people’s lives as fundamental to the shaping of their activities and consciousness. They were historical materialists in that they understood that the material conditions of people’s lives changed as the economic system in which they lived changed. And they were dialectical in that they were sensitive to the contradictory character of human existence.
Second, class as the fundamental conceptual tool for examining a society shaped their thinking. Increasingly they realized that class struggle was a fundamental force for social change. Given the American historical context they saw that class and race were inextricably interconnected.
Third, all three addressed a theory of imperialism which they regarded as critical to understanding international relations. Living in an age of colonialism and neo-colonialism all three performer/activists, but particularly Paul Robeson, saw imperialism as a central structural feature of relations between nations, peoples and classes. They were inspired by those resisting the yoke of foreign domination.
Fourth, Robeson, Guthrie, and Seeger saw that community, harmony, and socialism would represent the next stage of societal development. They believed that the vision of socialism had the potential for improving the quality of life of humankind. Robeson’s experiences in the Soviet Union led him, more than the others, to regard existing socialist states as free of the kind of racism endemic to the United States.
Fifth, Robeson, Guthrie, and Seeger emphasized the connection between theory and practice. Each artist in his own way articulated what Robeson proclaimed in 1937, in the context of supporting the Loyalists in the Spanish Civil War: that every artist must take a stand. The artist (i.e., the intellectual) must act in the context of a world of exploitation. One was either on the side of the ongoing oppressive order or on the side of change.
Armed with these insights, the three folk artist/activists discussed below committed themselves to action grounded in the struggles of their day. In Gramsci’s terms, they were organic intellectuals. They joined anti-racist, anti-colonial, labor, and peace struggles. They walked picket lines; entertained Spanish Civil War Loyalists, striking workers, and other protesters; and lent support to international socialist solidarity. Being an organic intellectual in the 1930s and 1940s, and in the case of Pete Seeger the 1940s and beyond, meant participating in what Michael Denning called “the cultural front.” The ambience of the CIO, the Communist movement, civil rights and anti-war struggles, and the building of the New Deal provided the social forces out of which Robeson, Guthrie, and Seeger could thrive and grow. The three artists and activists were both agents and products of Marxist ideas, engaged in practical political work as organic intellectuals participating in a broad cultural front.
Each artist/activist projected an image of human oneness. They saw the connections between the defense of democracy in Spain and the U.S. South and the necessity of building a peaceful and democratic post-World War II order to achieve justice for the working classes of all lands. Robeson’s consciousness was shaped by the vision of a common pentatonic structure in the world’s folk music, a metaphor embracing both difference and unity. The musical visions of Guthrie and Seeger celebrated what was common in the human experience as well.
In sum, the remarks below address the implicit Marxist lens that shaped the consciousness and behavior of three giants: Paul Robeson, Woody Guthrie, and Pete Seeger. They address how the artistic and political work of these three was shaped by, and in turn shaped, the social movements of the period from the 1930s to the present. They draw upon cultural theory, particularly Michael Denning’s idea of a multilayered “cultural front,” and link theory, practice, and context to the political strategy of the “popular front.”
Finally, the paper suggests that the theory and practice of Robeson, Guthrie, and Seeger represent a model for building contemporary mass movements in the face of economic and political crises. Over the past two years the world has seen mass mobilizations against dictatorship in Middle Eastern regimes; emerging new socialist forces in France, the Netherlands, and Denmark; mass movements against wars on workers, women, and minorities in the United States; and the emergence of grassroots mobilizations, particularly the Occupy Movement, across the North American continent. The framework of struggle, the 99 percent versus the 1 percent, while not expressly Marxist, can have the same animating effect on workers, youth, minorities, and women that the songs of Robeson, Guthrie, and Seeger have had from the 1930s to the present.
Marxist Ideas: Historical and Dialectical Materialism
Marxist analysis begins with the presupposition that humans create the conditions for the production and reproduction of life. These involve the satisfaction of basic needs. To do so requires the organization of production: of human labor, technology, science, and society. “This connection is ever taking on new forms, and thus presents a ‘history’ independently of the existence of any political or religious nonsense which in addition may hold men together.”