Sunday, October 31, 2010

I'm nobody! Who are you? by Emily Dickinson




I'm Nobody! Who are you?
Are you Nobody, too?
Then there's a pair of us - don't tell!
They'd banish us, you know.

How dreary to be Somebody!
How public, like a frog
To tell your name the livelong day
To an admiring bog!

Tuesday, October 26, 2010

The Book of Mormon by Grant Hardy



Understanding the Book of Mormon: A Reader's Guide, by Grant Hardy, Professor of History and Religious Studies, University of North Carolina; Oxford University Press, 2010

The Book of Mormon is a literary enactment of the all-encompassing plan God has for human history and for individuals. It is somewhat surprising that Mormonism – generally regarded as an optimistic, forward-looking faith – has as its foundation such an unrelenting record of human folly and ruin. Simply put, this astounding work of early 19th-century American literature is a tragedy in which humanity repeatedly fails to take advantage of the tender mercies of faith which a compassionate God has provided to make them mighty 'even unto the power of deliverance' from all the horrors of this life and the next.

While historians have searched the Book of Mormon for clues about 19th-century America or Joseph Smith, Mormon writers have generally focused either on evidence for the book's historical claims or on correlations with current LDS theology. And for many Latter-day Saints, careful scrutiny of the volume's contents is secondary to the direct relationship with God that the book makes possible. They are encouraged to pray about the Book of Mormon, in accordance with the promise that God “will manifest the truth of it...by the power of the Holy Ghost” to those who “ask with a sincere heart, with real intent.” Individuals who feel they have received such a spiritual witness are often content to redirect their energies from textual analysis towards living the wholesome sort of lifestyle that Mormonism advocates.

What all these approaches have in common is the urge to start with something outside the Book of Mormon – Joseph Smith, Jacksonian America, Meso-American archeology, ancient Near Eastern culture, Mormon theology, or a personal spiritual quest – and then selectively identify and interpret pertinent passages. The book, after all, is long and complicated, and the double-columned verses of the official edition offer little guidance to those trying to make sense of the narrative. In addition, the copious references at the bottom of the pages steer readers towards doctrinal or topical approaches.

Literary theorist Dominick LaCapra has offered a general warning that “the rhetoric of contextualization has often encouraged narrowly documentary readings in which the text becomes little more than a sign of the times or a straightforward expression of one phenomenon or another. At the limit, this indiscriminate approach to reading and interpretation becomes a detour around texts and an excuse for not really reading them at all.” Or as the Catholic sociologist Thomas O'Dea famously pointed out, “The Book of Mormon has not been universally considered by its critics as one of those books that must be read in order to have an opinion of it.”

The situation is similar to how David Bell has characterized reading books on a computer screen:

“If physical discomfort discourages the reading of texts sequentially, from start to finish, computers make it spectacularly easy to move through texts in other ways – in particular, by searching for particular pieces of information. Reading in this strategic, targeted manner can feel empowering. Instead of surrendering to the organizing logic of the book you are reading, you can approach it with your own questions and glean precisely what you want from it. You are the master, not some dead author. And this is precisely where the danger lies, because when you are reading you should not be the master. Information is not knowledge; searching is not reading; and surrender to the organizing logic of a book is, after all, the way one learns.”

Few readers, even if we count Latter-day Saints, have surrendered to the organizing logic of the Book of Mormon as a whole. This is perhaps because it appears at first glance to be a confused jumble of strange names and odd stories, told in a quirky style, and all of very suspicious origins. An 1841 critic described it as “mostly a blind mass of words, interwoven with scriptural language and quotations, without much of a leading plan or design. It is in fact such a production as might be expected from a person of Smith's abilities and turn of mind.” The world's general impression has not changed much in the last 180 years. What is one to do with such a text other than scan it for phrases and incidents that might have some bearing on a particular thesis?

There has never been a detailed guide to the contents of the Book of Mormon. In this study, for the first time, I have suggested that the Book of Mormon can be read as literature – a genre that encompasses history, fiction, and scripture – by anyone trying to understand this odd but fascinating book. The starting point for all serious readers has to be the recognition that it is first and foremost a narrative, offered to us by specific, named narrators. Every detail and incident in the book has to be weighed against their intentions and rhetorical strategies. We might imagine a history written by an impersonal, omniscient narrator whose point of view was similar to Joseph Smith's, but that is not what we really have. The heterogeneous materials in the Book of Mormon – historical accounts, prophecies, sermons, letters, poems, allegories, and apocalypses supposedly written by different authors in different periods – are all presented as the work of three primary editors/historians, each with a distinct life story, perspective, set of concerns, style, and sense of who their audience will be.

Imagine, for a moment, the situation of Nephi, the first important narrator of the Book of Mormon. He was educated in Jerusalem and literate at a time when such training was rare. He seems to have been fascinated by books and records and then, in his teenage years, he was suddenly taken from the culturally rich and intellectually stimulating environment of Judah's capital to live in a distant land, in the company of only his relatives, with a single text (the Brass Plates) to read for the rest of his life. He pored over the ancient text, offering interpretations, interweaving his own revelations with the words of the past prophets, reading himself back into existing scripture, and envisioning himself as the author of future scripture.

No one else in Nephi's family seemed much interested in such close readings and creative interpretations of the Brass Plates. In fact, Jacob was born after the family left Jerusalem, so he had no firsthand knowledge whatsoever about the traditions and culture of the Jews. So, in many respects, Nephi is a tragic figure, principally engaged in solitary, intellectual, introspective, time-consuming and frustrating literary activity, cut off from his culture, despairing of his descendants, and alienated from his own society. It is only through the entire course of his narrative that he gradually becomes aware of his true prophetic mission.

Unfortunately, what Nephi comes to know is not always pleasing. Even before he arrived in the Promised Land, he learned to his dismay that his descendants would have no long-term future there, while the posterity of his wicked brothers would continue on. Towards the end of his writings, he finally has to concede that most people are not like him:

“And now, I, Nephi cannot say more; the Spirit stoppeth my utterance, and I am left to mourn because of unbelief, and the wickedness, and the ignorance, and the stiffneckedness of men; for they will not search knowledge, nor understand great knowledge, when it is given unto them in plainness, even as plain as word can be.”

In the end, Nephi is wiser but not happier, and this correlation of knowledge and suffering is, of course, the stuff of Greek tragedy, though perhaps a more pertinent example can be found in Milton's Paradise Lost, when Adam sees in vision the destruction of his posterity in the Flood:

"How didst thou grieve then, Adam, to behold
The end of all thy offspring, end so sad,
Depopulation; thee another flood,
Of tears and sorrow a flood thee also drowned,
And sunk thee as thy sons; till gently reared
By th' Angel, on thy feet thou stood'st at last,
Thou comfortless, as when a father mourns
His children, all in view destroyed at once;
And scarce to th' Angel utter'dst thus thy plaint:
“O visions ill foreseen! Better had I
Lived ignorant of future, so had borne
My part of evil only, each day's lot
Enough to bear...”

Writing many centuries later, Mormon, the second narrator of the Book of Mormon, tells stories with unmistakable spiritual meanings as if, in contrast to the consciousness of the Hebrew Bible, history and theology were inseparable. He presents characters as moral exemplars, and identifies patterns such as “God's arm is extended to all people who will repent and believe on his name,” “the Lord worketh in many ways to the salvation of his people,” and “the devil will not support his children at the last day, but doth speedily drag them down to hell.” Properly interpreted, history itself reveals religious truths; the totality of human experience offers sufficient evidence to demonstrate God's divine plan and influence in earthly affairs.

Or at least that is what Mormon says he believes. What makes his book interesting is watching how he selects, adapts, and arranges his material, particularly when it is plain that his sources do not adequately illustrate spiritual verities on their own. This is particularly the case in his characterization of Captain Moroni, when a large space opens up in his narrative between what Mormon says about this incomparable warrior and what he actually shows (the captain's victories, for example, come at a huge cost in lives, which Mormon tries to gloss over).

The third most important narrator is Moroni, Mormon's son (not to be confused with the earlier military chief). There is a note of resignation and passivity in his narrative not previously encountered in the Book of Mormon. At least four times Moroni confesses that he doesn't know or doesn't care about the fate of his Nephite people. His overriding emotion is one of loss – he is alone, with little space left to write, no ore from which to fashion new plates, no family, no friends, and no plans beyond finishing his father's record and burying the plates. Sixteen years after the final battle (though apparently there had been subsequent traumas), Moroni is still in shock. Both the physical and psychological challenges to writing are nearly overwhelming, yet he keeps writing.


Sometime later, after he evidently found some breathing space and ore enough to fashion additional metal tablets on which to inscribe his redaction of Jaredite history, he confesses to yet another set of challenges (directing his concerns to God):

“Lord, the Gentiles will mock all these things, because of our weakness in writing; for Lord, thou hast made us mighty in word by faith, but thou hast not made us mighty in writing...wherefore when we behold our weakness and stumble because of the placement of our words, I fear lest the Gentiles shall mock us.”

In this passage Moroni even seems to be speaking on behalf of the entire line of Nephite record keepers, and he expresses a combination of frustration, self-consciousness, and anxiety that, though familiar to anyone who has tried to put his or her thoughts on paper for public display, is exceedingly uncommon in the Book of Mormon.

Moroni's peculiar character is manifestly demonstrated, though it may come as a surprise to most Latter-day Saints, in the way he artificially Christianizes his chronicle of the Jaredite people, who left for the New World before the time of Moses and whose own records show no evidence of any knowledge of Christ, even as He was foretold by the early prophets of the Hebrew religion. Perhaps no theme was as important to Book of Mormon narrators as demonstrating the universality of the Christian religion, showing that the prophets could guide the faithful in every land and era (even before Jesus's birth) to believe in Christ and accept his salvation. The challenge for Moroni, then, was to do so with the Jaredite record (the book of Ether) and to make it more consistent with his father's previous work.

Ultimately, it probably makes little difference to Latter-day Saints whether the Jaredites worshiped Jesus or not – unlike the Lehites, they are completely annihilated, with no identifiable posterity, no role to play in the larger story of the House of Israel, and no direct connection to modern readers. But the fact that a close reading of the narrative of the Book of Mormon shows that Moroni failed to make a convincing connection in this respect (except as might be acceptable to the “gutter journalism” of our own day), despite many ingenious efforts, is, at the very least, a tribute to Joseph Smith's inventive imagination.

Thursday, October 21, 2010

American Media by Reese Erlich



Reese Erlich was born and raised in Los Angeles. In 1965 he enrolled at the University of California, Berkeley, and later became active in the anti-Vietnam War movement. In October 1967 Erlich and others organized Stop the Draft Week. They were arrested and became known as the "Oakland Seven." In their trial they were acquitted of all charges.

Erlich first worked as a staff writer and research editor for Ramparts, a national investigative reporting magazine published in San Francisco from 1963 to 1975. His magazine articles have appeared in San Francisco Magazine, California Monthly, Mother Jones, The Progressive, The Nation, and AARP's Segunda Juventud.
Erlich's book, Target Iraq: What the News Media Didn't Tell You, co-authored with Norman Solomon, became a best seller in 2003. His book, The Iran Agenda: The Real Story of U.S. Policy and the Middle East Crisis, was published in October 2007.

http://en.wikipedia.org/wiki/Reese_Erlich

Why are stories so similar across media outlets supposedly in fierce competition with one another? If all the media run variations of the same story [which all omit important data and perspectives], isn't there a conspiracy at the highest levels? I wish it were that simple.


The fact is, U.S. politicians impact media coverage in a number of pernicious ways without having to resort to secret meetings in parking garages. The first line of defense is ideological. Mainstream foreign correspondents receive top salaries and garner lots of prestige. As I described in Target Iraq: What the News Media Didn't Tell You, anyone who writes too critically of U.S. foreign policy doesn't stay employed. You don't win a Pulitzer Prize for questioning the basic assumptions of empire. You do advance your career, however, by cultivating high-level diplomatic, military, and intelligence sources.

In the aftermath of 9/11, the Bush administration pushed all the right media buttons. It appealed to patriotism and reporters' fears that they might be out of sync with public opinion.

The mainstream media followed the administration's line that the United States was under assault by a vicious enemy at home and abroad. When the CIA and other agencies leaked [phony] classified documents suggesting that Saddam Hussein had weapons of mass destruction and ties to al Qaeda, almost all the mainstream media ran the story without deeper investigation.

When reporters occasionally run major stories highly unpopular in Washington, they feel the full wrath of the empire. When CBS TV aired a story questioning President George W. Bush's service in the National Guard during the Vietnam War, even someone with the prestige of Dan Rather came under attack. Eventually Rather was hounded into leaving CBS. As the major media consolidate into fewer and fewer oligopolies, companies cut back newsroom staff and eliminate bureaus. Foreign correspondents, including those who might consider themselves politically liberal, fear causing too much controversy. They keep their heads down and their hands outstretched for a paycheck.

Editors also use sharply different criteria for evaluating the validity of information critical of U.S. power. No reporter gets fired for accurately reporting statements from high American officials, even if they are outright lies. But you may lose your job if you write a story too critical of those same high officials, unless your source is some other high-level official [though when Admiral William Fallon called General Petraeus an “ass-kissing chickenshit,” this received no coverage in mainstream media].

I've written articles about the Afghan drug trade only to have editors cut sections naming Karzai ministers and their links to the U.S. Government. That information would be included in the story, I was told, only if confirmed by the DEA, CIA, or a similar Washington source. That, of course, gives the government virtual censorship power over controversial stories.

The dominant narrative on any given story trickles down to local media as well. After my 2002 trip to Afghanistan and publication in a local magazine of an article about U.S.-allied warlords' involvement in the drug trade, I was contacted by a local TV reporter. She did a long interview and aired a story about the growing danger of the heroin trade. She systematically edited out every comment I made about pro-U.S. warlords, however, and inserted her own opinion that the Taliban was at fault. And she had a lot of editing to do, because I mentioned it in almost every other sentence.

The degree to which the American people are deceived by the cozy relationship between the press and government in the United States was clearly evidenced in my interview with Mohammad Nizami, formerly the head of the Taliban's radio and TV network and now a member of the Karzai regime. He expressed his support for an Islamic government of Afghanistan ruled by strict Sharia law. He opposed equal rights for women and wanted to see foreign troops withdrawn. Only his timetable had changed. As a Taliban leader he had called for an immediate withdrawal. As a Karzai supporter he thought they should wait until the Afghan Army could stand on its own.

That tolerance for the continued presence of U.S. troops was sufficient to make him an ally in the view of the United States and Karzai. The dirty secret you will never see exposed in the mainline media is that the Taliban's ideology and political views on the future of Afghanistan are quite similar to those of many of Karzai's top supporters, including members of his cabinet. They, too, want a fundamentalist-ruled Afghanistan and have nothing but contempt for democratic elections. The war pits two sets of fundamentalists against one another, the difference being that one side has U.S. support and the other doesn't.

An Ex-CIA Perspective on the Global War on Terror by Robert Baer



March 28, 2010

I joined the CIA out of curiosity about other peoples and cultures. I first served in India, quickly moved to the Arab world, and was stationed in Lebanon during a very tumultuous time. I was particularly interested in the April 18, 1983 bombing of the U.S. Embassy in Beirut. It was a very good operation from a technical standpoint. The car bomber drove into the lobby, obstructed the guard's line of fire, and detonated the explosives – killing over 60 staff, CIA and military personnel. We never did identify the driver; the truck was stolen and not traceable. On October 23, 1983, a similar truck bomb attack killed 299 American Marines and French soldiers in Beirut.

The U.S. Government still blames Hezbollah for both bombings, part of the rationale for declaring it a terrorist organization today. As someone who personally investigated the attacks at the time, however, I can tell you we still don't know who was responsible for the two bombings. We do know that the perpetrators were sophisticated militants attempting to drive the United States out of Lebanon.

Nevertheless, the Reagan White House and other American leaders denounced both bombings as unspeakable acts of terrorism. But it's just dumb to call the bombings “terrorism.” Many Lebanese looked on the United States as colonizers. The Lebanese were waging a war of national liberation to get foreigners out of their country. Lebanon had been a formal French colony until 1943; the United States landed Marines in Lebanon in 1958. Our presence in 1983 became a rallying cry for Shiites and other Lebanese opposed to foreign occupation. The attackers used bombs to kill foreign diplomats, soldiers and intelligence officers. They were horrific, violent attacks, but they weren't acts of terrorism.

For its part, the U.S. Government employed terrorist tactics to go after its perceived enemies. The CIA was convinced, on no evidence, that Ayatollah Mohammad Fadlallah had masterminded the Marine barracks bombing. The CIA paid Saudi Arabians to assassinate him. The Saudis hired Lebanese operatives to plant a powerful car bomb outside Fadlallah's apartment building. He wasn't injured, but the bomb murdered 80 people and wounded 200.

The CIA had the wrong guy. Fadlallah was politically independent of Hezbollah and opposed Iranian influence in Lebanon. Today Fadlallah is a respected Grand Ayatollah seeking reconciliation among the various political factions. There have been far too many similar cases in the so-called Global War on Terrorism.

Far too often the definition of 'terrorist' depends upon who is throwing the bomb. It seems that most of the world has largely forgotten the Stern Gang and Irgun, two Zionist groups that used terrorist tactics against the British and Arabs in the 1940s. The leaders of these groups, Menachem Begin and Yitzhak Shamir, later went on to become prime ministers of Israel. In more recent times the United States has been happy to ally with groups using terrorist tactics. In the 1980s, the U.S. embraced the right-wing Christian Lebanese Forces, whose members massacred civilians in Beirut's refugee camps. That same militia kidnapped four Iranian diplomats and executed them. We have a habit of not looking too closely at the actions of our allies, but in the end, we get held responsible for their actions.

U.S. credibility around the world is similarly undermined by the use of torture and detention without trial. How can we thus claim to uphold the rule of law? The U.S.'s reputation certainly suffered by supporting the Contras in Nicaragua and other human rights violators in Central America, but the Bush years made things even worse. Today, what separates U.S. policy from that of authoritarian regimes in the Middle East?

The American firebombing of Germany in 1945 was terrorism. We didn't focus on military or industrial targets. We wanted to terrify the civilian population so the German military would surrender [they didn't, and the destruction of cities like Dresden had no adverse effect on their ability to continue the war], but that's what al Qaeda wants to do on a smaller scale today. But al Qaeda has no chance of success and has created the opposite effect. The 9/11 attacks alienated most Muslims around the world from al Qaeda and rallied support for America. By invading and occupying Afghanistan and Iraq, and carrying out another war in Pakistan, however, the United States has actually helped al Qaeda's recruiting efforts.

The United States tries to link al Qaeda to every Muslim group opposed to U.S. policy, but it's a conscious lie. The CIA agents and analysts I know are much more intelligent than the propaganda fed to the public. They don't throw around the term “terrorism.” Terrorism is a tactic; it's not a strategy. We understood that. When the CIA chief of station in Lebanon was kidnapped, it wasn't an end in itself. It was a tactic to get the United States out of Lebanon. We understood the differences between militant Sunni and Shia groups, and between the various governments of the Middle East. We never lumped them all together as terrorists.

But the CIA leadership goes along with White House policy. They are selling war to the American people. So they repeat the lie that the Muslims are coming to get us. If we don't stop them at the Kabul River, they'll be pulling up to the Delaware River.

Unfortunately, President Barack Obama is continuing these same wrong policies. Continued troop escalations won't win the war. We've got to get our troops out. Foreign troops in a country only succeed in rallying people against the occupier. We've got to undermine the jihadists politically. Individual countries must fight the battle against their own extremists.

Longtime Middle East correspondent Reese Erlich's book Conversations with Terrorists, of which this brief essay is the Foreword, offers many insights into the phony War on Terrorism. Today most Americans oppose the wars in Iraq and Afghanistan. They don't trust Washington, the wars cost too much, and too many American troops are dying. But the American people don't necessarily understand the situation on the ground in those countries or the extent of the lying in Washington. Conversations with Terrorists provides that important background.

Former CIA field officer Robert Baer authored the book See No Evil, which later became the film Syriana.

In Conversations With Terrorists Reese Erlich writes:

I strongly believe the United States must radically shift gears. It must recognize the difference between isolated fanatics and groups fighting for legitimate causes, even if we disagree with their ideologies and tactics. It must pull all U.S. troops and mercenaries from Iraq, Afghanistan, and Pakistan. It must take immediate steps to resolve the Israeli-Palestinian issue. Such a shift in policy will do more to undermine such groups as Al Qaeda than all U.S. invasions combined.

Like the communist menace of years past, the terrorist menace is used to terrify people into accepting aggression abroad and repression at home. Ironically, the phony war against communism had an actual end, the collapse of the Soviet Union [though not due, as many scholars now note, to any effort on our part]. The Global War on Terrorism has no end. I don't think [and it is hard to believe] that the American people will accept perpetual war, thousands of deaths, and the waste of trillions of dollars. At some point an American administration will simply drop the disastrous policy. I hope that day comes soon and that the GWOT will end, not in victory, but with [the] whimper with which it began.

Wednesday, October 20, 2010

Satan in America by W. Scott Poole



Satan emerged in the ancient Near East as a minor character in Yahweh's heavenly court. Today, after a 4,000-year historical and literary transformation from God's envoy of disaster to God's archenemy, he enjoys a celebrity status. Horror films make us cringe at his power to corrupt the human personality, while best-selling books offer to tell his side of the story. Novelizations of Armageddon declare him a character of cosmic importance with an intricate and devious design on humanity. Tens of millions of America's evangelical Christians believe they are in constant daily combat with the same dark angel who has warred with God through endless eons, even before creation.

Most Americans do believe in Satan. A 2005 poll (Gallup and the Baylor Institute for the Study of Religion) found that 55 percent of Americans claimed to believe he is a literal being, a supernatural entity dedicated to evil and corruption, and that he is active in the world today through a host of demonic minions. A 2007 Harris poll revealed that 62 percent believed Satan to be alive and well (while only 42 percent accepted the Darwinian theory of evolution). Belief in the devil among evangelical Christians is especially high, but even a large percentage of Roman Catholics and mainline Protestants share similar beliefs. In America, Satan has survived. He has more believers here than in any other country in the developed world.

The idea of satanic evil has been the progenitor of much mayhem in America's national history. Satan provided us a metaphor for what our culture hates and fears most in our history. The devil and our fascination with him have served as a blind for our society's darker moments, those times when the United States has renounced its collective moral obligations and acted out of its anxiety or lust for power. Puritans found Satan lurking in the “howling wilderness” (native peoples) and in marginal members of their own community. In the 1970s, more concern seemed focused on discovering the “truth about exorcism” than on facing the hard truths of the Pentagon Papers. In the 1980s, the absurd SRA (satanic ritual abuse) upheaval unleashed anxieties that seemed straight out of the peasant village politics of early modern Europe. During the same decade, millions of children in the United States did suffer, not from the actions of conspiratorial Satanists, but from poverty, poor schooling, the emerging crack epidemic and inadequate health care. Those religious traditions and leaders most interested in speaking of the devil remained the most silent on those grinding social problems, even as they created intricate demonologies. The inevitable exclusions, persecutions and violence followed.

A look at the American experience shows that we love the notion of evil. Moralists and social conservatives may insist that Americans have lost the sense of evil and the sense of sin. This is not the case. In the days immediately following 9/11 President Bush's deployment of the ideas of evil and evil-doers swept history clean of all ambiguity and focused the collective rage and sorrow to a sharpened spear tip. A chorus of voices soon joined him and soon we no longer faced a human tragedy or even human enemies. We were instead in a mythic battle with monsters. Exactly like those who murdered our fellow citizens, we were fighting the Great Satan, the Cults and Axis of Evil in a military operation deemed Infinite Justice.

The story of Satan in America reveals central truths about American culture. The religious history of America has been informed by the concept of spiritual warfare, combat with evil. The images that have shaped American misogyny, racism and imperial hubris are largely demonic images. We have seldom asked the more profound questions about evil and instead focused on the nature of evil using the mythical language of the apocalypse. This language has fed the thirst for power and violence while also allowing us a language of innocence.

Belief in a metaphysical Devil allows us to ignore the fact that America has been a fallen angel from the beginning. The rhetoric of religious declension, used by Puritan ministers and today by the religious right, is an ahistorical diversion from powerful cultural forces; it imagines a golden age destroyed by the growth of sexual freedom and secularization. But their golden age was one of segregation, disenfranchisement, the restriction of women's lives and bodies, and the birth of an imperialistic hubris that is still with us, that pretends to save a village by destroying it, that seeks to make the stars fall from heaven in pursuit of millennial dreams, and constructs Satan as the ultimate origin of any effort towards progressive political change.

The ideology of innocence has long informed the writing of American history: “Other nations may seek to fashion empire; America has always and only represented, and fought on behalf of, democracy and freedom.” Although this view has long been under attack and has no contemporary defenders among professional historians, it remains a popular folk belief and a handy rhetorical tool for politicians. There is some truth to the notion of American exceptionalism, but it is not a happy one. To rephrase Tony Judt's famous comment on postwar Germany, America is a nation uniquely unconscious of its crimes and unaware of the scale of its accountability. The American democratic experiment is unique in human history not because we are God's chosen people to lead the world, nor because we are always a force for good in the world, but because of our refusal to acknowledge the deeply racist and imperial roots of our democratic process.

An America drunk on notions of its own innocence and goodness has easily identified the devil with its enemies and its enemies with the devil time and time again. At one time or another in American history, the most influential religious movements, the most powerful politicians, and the dominant trends in popular culture have identified marginalized women, native peoples, slaves, Roman Catholics, Muslims, social progressives, alienated young adults, immigrants and numerous other social and political groupings and identities as satanic, inspired by Satan, or even Satan himself.

“Tell the truth and shame the devil” and “don't paint the devil on the wall” are East European folk sayings that suggest a better path for dealing with the devil. The discourse of evil is comforting because it feeds our worst appetites, calling us to supine indifference or explosive violence, but it betrays a lack of moral imagination. Our invocations of the devil become the worst kind of hubris, cynical legitimation of past error and a prologue for future mayhem. We imagine we are looking into the abyss, not realizing that it is looking into us. Individual and collective introspection is the more difficult choice. But with it comes the recognition that it is America whose name is legion. It is our dark history, not devils, that must be cast out.

Monday, October 18, 2010

War, what is it good for? by Nick Turse



A marketplace filled with books by former military men devoted to tweaking, enhancing, and improving war-fighting capabilities cries out for some counterbalance. This year's foremost civilian-authored text on the conflict in Afghanistan is, without a doubt, Sebastian Junger's War. While nothing like the antiwar texts of the 1960s and 1970s that laid bare the folly and terror of American campaigns in Southeast Asia, War still offers a rare glimpse of the horrors that authors like Celeski, Henrikson, and Kilcullen tend to skip over or discount.

Early in his book, Junger recounts a Navy SEAL's admission that the only thing that stopped him from executing three unarmed Afghans was concern about the press catching wind of the murders. A page later, he writes of an American attempt to take out a mid-level Taliban leader in Chichal, a village high above Afghanistan's Korengal Valley, that killed 17 civilians instead. The military responsible for training that elite fighter who felt unconstrained by the laws of war and the men who called in the air strike on Chichal are the very ones Kilcullen and various Pentagon minds think can carry out kind-COIN [Counter-Insurgency].

As a book, War suffers from many of the pitfalls that afflicted its movie companion, the documentary Restrepo. The overly ambitious title belies the fact that it is not about "war," but one aspect of war, combat, as experienced by US Army troops in the Korengal Valley. Moreover, there's a dismaying amount of combat-friendly hyperbole and celebratory rhetoric in and around the book, from the publisher's book-jacket prose labeling combat "the ultimate test of character" - a theme that buzzes through the entire book - to a famous chapter-leading quote by George Orwell or Winston Churchill (Junger refuses to decide which) that tells us we all "sleep soundly in our beds because rough men stand ready in the night to visit violence on those who would do us harm."

Unfortunately, as the last century showed, too many "rough men" were all too willing to do the bidding of leaders like Adolf Hitler, Joseph Stalin, Pol Pot, Suharto, Leonid Brezhnev, Lyndon Johnson, and Richard Nixon, to name just a few, to the detriment of many millions who ended up dead, wounded, or psychologically scarred. All of this suggests that perhaps if we stopped celebrating "rough men," we could all sleep easier.

That said, there is much to be learned from Junger's in-print version of Americans-at-war. His blow-by-blow accounts of small-unit combat actions, for instance, drive home the tremendous firepower American troops unleash on enemies often armed with little more than rifles and rocket-propelled grenades.

Page after page tallies up American technology and firepower: M-4 assault rifles (some with M-203 grenade launchers), squad automatic weapons or SAWs, .50 caliber machine guns, M-240 machine guns, Mark-19 automatic grenade launchers, mortars, 155mm artillery, surveillance drones, Apache attack helicopters, AC-130 Spectre gunships, A-10 Warthogs, F-15 and F-16 fighter-bombers, B-52 and B-1 bombers, all often brought to bear against boys who may be wielding nothing more than Lee Enfield bolt-action rifles - a state-of-the-art weapon when introduced. That, however, was in the 1890s.

The profligacy of relying on such overwhelming firepower is not lost on Junger, who offers a useful insight in regard to another high-tech, high-priced piece of US weaponry, "a huge shoulder-fired rocket called a Javelin." Junger writes: "Each Javelin round costs US$80,000, and the idea that it's fired by a guy who doesn't make that in a year at a guy who doesn't make that in a lifetime is somehow so outrageous it almost makes the war seem winnable."

But "almost," as the old adage goes, only counts when it comes to horseshoes and hand grenades. And bombs dropped by B-1s, like one unleashed at night near the village of Yaka Chine, are certainly not hand grenades. Junger chronicles the aftermath of that strike when US troops encountered "three children with blackened faces ... a woman lying stunned mute on the floor while five corpses lie on wooden pallets covered by white cloth outside the house, all casualties from the air strikes the night before." He continues: "The civilian casualties are a serious matter and will require diplomacy and compensation."

Instead, an American lieutenant colonel choppers in to lecture village elders about the evils of "miscreants" in their midst and brags about his officers' educational prowess and how it can benefit the Afghans. "They stare back unmoved," writes Junger. "The Americans fly out of Yaka Chine, and valley elders meet among themselves to decide what to do. Five people are dead in Yaka Chine, along with ten wounded, and the elders declare jihad against every American in the valley." Vignettes like this drive home the reasons why, after nearly a decade of overwhelming firepower, the US war in Afghanistan has yet to prove "winnable", despite the ministrations of Kilcullen and crew.

Later in the book we read about how Junger survives an improvised explosive device that detonates beneath his vehicle. He's saved only by a jumpy trigger-man who touches two wires to a battery a bit too early to kill Junger and the other occupants of the army Humvee he's riding in. In response, Junger writes: "This man wanted to negate everything I'd ever done in my life or might ever do. It felt malicious and personal in a way that combat didn't. Combat gives you the chance to react well and survive; bombs don't allow for anything."

Junger, at least, traveled across the world to consciously and deliberately put himself in harm's way. Imagine how the poor people of Yaka Chine must have felt when a $300 million American aircraft swooped in to drop a bomb on them in the dead of night. Junger's book helps reveal these facts far better than his movie.

Getting a read on war



Surveying this year's Afghan war literature from popular bestsellers to little noticed Army monographs is generally disheartening but illuminating. "The moral basis of the war doesn't interest soldiers much," writes Junger near the beginning of his book. "They generally leave the big picture to others."

America's fighting men at the front are not alone. Most Americans have similarly chosen to ignore the "moral basis" for the war and the big picture as well. They have been aided and abetted in this not only by a president evidently bent on escalating the conflict at every turn, but also by a coterie of authors - many of them connected to the Pentagon - content to critique only doctrine, strategy, and tactics.

Each of them is eager to push for his favorite flavor of warfare, but loath to address weightier issues. Perhaps this is one reason why Junger's front-line troops - if they are indeed sampling the best the military's prescribed reading lists have to offer - have a tendency to ignore fundamental issues and skip intellectual and moral inquiry.

If Pentagon-consultant-turned-potential-defense-contractor Kilcullen and the Joint Special Operations University's author corps aren't going to address morals and "big picture" issues, then the Sebastian Jungers of the world need to step up and cover the real, everyday face of war: the plight of civilians in the conflict zone.

They also should focus on big-picture issues like whether the United States actually has anything approaching a true strategic vision when it comes to its wars and occupations abroad, whether there truly is a global Islamist insurgency as Kilcullen maintains, whether it could ever coalesce into a worldwide threat, and whether whatever it is that exists should be attacked with the force of arms. They need to offer more help in launching serious mainstream debate about America's permanent state of war and its fallout.

The US military's reading lists are, not surprisingly, dedicated to combat and counter-insurgency. So are its favorite authors. To them, combat is war. Civilians in war zones know better. They know that war is suffering, because they live with it, not a tour at a time but constantly, day after day, week after week, year after year. Civilians outside war zones should know, too. It would be helpful if they had authors with the skill, intellect, and courage to help them to understand the truth.

(Copyright 2010 Nick Turse.)
http://www.atimes.com/atimes/South_Asia/LJ19Df03.html


Nick Turse is the associate editor of TomDispatch.com. His latest book, The Case for Withdrawal from Afghanistan (Verso Books), which brings together leading analysts from across the political spectrum, has just been published. Turse is currently a fellow at Harvard University's Radcliffe Institute. You can follow him on Twitter @NickTurse, on Tumblr, and on Facebook. His website is NickTurse.com.

Sunday, October 17, 2010

Innocent Until Interrogated by Gary Stuart



This is an account of the investigation, trials, appeals and civil lawsuits arising from the 1991 murder of nine Buddhist monks at the Wat Promkunaram Temple in Maricopa County, Arizona. In the early days of the investigation, conducted by a huge multi-agency task force, four innocent suspects were induced to confess to the crime. Eventually another suspect, whose guilt was established with the only corroborative evidence apart from confession in the case, obtained a non-capital plea bargain in exchange for his testimony against his 'partner in crime,' who was subsequently released on appeal. A seventh suspect confessed to a related murder he did not commit.

Most Americans, jurors and police believe that only guilty people confess. But this is not true. A comprehensive compilation of wrongful convictions in potentially capital cases in the United States from 1900 to 1985, published in 1987, indicated that “police-induced false confessions were the primary or contributing cause of wrongful convictions in 14.3 per cent of the cases examined.”

The case in Arizona demonstrates the typical manner in which false confessions are obtained. First, investigators, anxious to solve a horrendous crime as quickly as possible, give too much credit to the inconsistent testimony of an unreliable witness, without adequately checking out his or her various assertions of fact or opinion. Second, search warrants are issued based on these lies, guesswork and inconsistencies, which are glossed over ('cleaned up') or omitted in the paperwork the investigators submit to prosecutors or courts. Evidence gathered with such warrants may be mishandled or wrongly identified as pertinent to the case being investigated, while, in the rush to judgment, other evidence is not collected or simply ignored.


Then the suspect(s) are rounded up and interrogated for long periods of time without the opportunity to rest, to be given proper food, or to receive clear and proper warnings about their rights and the importance of not talking to investigating officers without an attorney. The option of confession is often presented with the promise of mitigating the suspect's ultimate sentence. Officers will often lie about the facts in the case and the testimony of other suspects. Although many portions of the interrogation are recorded, long parts are not, and their contents only become available in the sometimes 'willfully constructed' summaries of interrogators or their executive officers.


Officers generally do not believe that a suspect who is innocent would continue to submit to examination, and though posing “as a friend” who is just “trying to help,” they relentlessly deny that the suspect's protests of innocence are true. Throughout the lengthy and exhausting interview the cops often provide the suspect with detailed information about the crimes being investigated, so that when the defendant finally confesses in order to escape the nightmare of his tortured situation, his story will be at least modestly consistent with the way officers perceive it.

In most cases the suspects who falsely confess to a crime have their own problems such as a marginal economic status, educational deficits, language barriers and emotional problems. They may have had drug or alcohol induced blackouts in their previous experiences and thus remain uncertain that, if they had committed the crime of which they are accused, they would remember it. They may have little faith in the justice system and believe that confessing will mitigate their ultimate sentence.


In this case the four individuals who falsely confessed were eventually released, after many months in prison and considerable damage to their personal lives. The conduct of the Maricopa County Sheriff's Department was so egregious that its insurance company was forced to settle the lawsuits of the wrongly and unlawfully accused and arrested individuals with substantial amounts of money. All the primary offenders in the Sheriff's department were first demoted and then left public service within two years. Newly elected Sheriff Joe Arpaio instituted 'reforms'. Nevertheless, 11 years later MCSO detectives interrogated a fifty-year-old machinist named Robert Louis Armstrong and extracted a false confession to a triple murder that had gone unsolved for five years. Instead of correcting his deputies' interrogation abuses, Joe's new policy merely documented them on videotape. The same techniques and shortcomings that were so egregious in the Temple Massacre case – coercion, inattention to detail, lazy acceptance of unreliable informants and an easy confession – marred the Armstrong case.

Friday, October 15, 2010

Lies and Damned Lies by David H. Freedman



Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice? Dr. John Ioannidis has spent his career challenging his peers by exposing their bad science.
By David H. Freedman



In 2001, rumors were circulating in Greek hospitals that surgery residents, eager to rack up scalpel time, were falsely diagnosing hapless Albanian immigrants with appendicitis. At the University of Ioannina medical school’s teaching hospital, a newly minted doctor named Athina Tatsioni was discussing the rumors with colleagues when a professor who had overheard asked her if she’d like to try to prove whether they were true—he seemed to be almost daring her. She accepted the challenge and, with the professor’s and other colleagues’ help, eventually produced a formal study showing that, for whatever reason, the appendices removed from patients with Albanian names in six Greek hospitals were more than three times as likely to be perfectly healthy as those removed from patients with Greek names. “It was hard to find a journal willing to publish it, but we did,” recalls Tatsioni. “I also discovered that I really liked research.” Good thing, because the study had actually been a sort of audition. The professor, it turned out, had been putting together a team of exceptionally brash and curious young clinicians and Ph.D.s to join him in tackling an unusual and controversial agenda.


Last spring, I sat in on one of the team’s weekly meetings on the medical school’s campus, which is plunked crazily across a series of sharp hills. The building in which we met, like most at the school, had the look of a barracks and was festooned with political graffiti. But the group convened in a spacious conference room that would have been at home at a Silicon Valley start-up. Sprawled around a large table were Tatsioni and eight other youngish Greek researchers and physicians who, in contrast to the pasty younger staff frequently seen in U.S. hospitals, looked like the casually glamorous cast of a television medical drama. The professor, a dapper and soft-spoken man named John Ioannidis, loosely presided.


One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.


Salanti remained poised, as if the grilling were par for the course, and gamely acknowledged that the suggestions were all good—but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?


That question has been central to Ioannidis’s career. He’s what’s known as a meta-researcher, and he’s become one of the world’s foremost experts on the credibility of medical research. He and his team have shown, again and again, and in many different ways, that much of what biomedical researchers conclude in published studies—conclusions that doctors keep in mind when they prescribe antibiotics or blood-pressure medication, or when they advise us to consume more fiber or less meat, or when they recommend surgery for heart disease or back pain—is misleading, exaggerated, and often flat-out wrong. He charges that as much as 90 percent of the published medical information that doctors rely on is flawed. His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences. Given this exposure, and the fact that his work broadly targets everyone else’s work in medicine, as well as everything that physicians do and all the health advice we get, Ioannidis may be one of the most influential scientists alive. Yet for all his influence, he worries that the field of medical research is so pervasively flawed, and so riddled with conflicts of interest, that it might be chronically resistant to change—or even to publicly admitting that there’s a problem.


The city of Ioannina is a big college town a short drive from the ruins of a 20,000-seat amphitheater and a Zeusian sanctuary built at the site of the Dodona oracle. The oracle was said to have issued pronouncements to priests through the rustling of a sacred oak tree. Today, a different oak tree at the site provides visitors with a chance to try their own hands at extracting a prophecy. “I take all the researchers who visit me here, and almost every single one of them asks the tree the same question,” Ioannidis tells me, as we contemplate the tree the day after the team’s meeting. “‘Will my research grant be approved?’” He chuckles, but Ioannidis (pronounced yo-NEE-dees) tends to laugh not so much in mirth as to soften the sting of his attack. And sure enough, he goes on to suggest that an obsession with winning funding has gone a long way toward weakening the reliability of medical research.
He first stumbled on the sorts of problems plaguing the field, he explains, as a young physician-researcher in the early 1990s at Harvard. At the time, he was interested in diagnosing rare diseases, for which a lack of case data can leave doctors with little to go on other than intuition and rules of thumb. But he noticed that doctors seemed to proceed in much the same manner even when it came to cancer, heart disease, and other common ailments. Where were the hard data that would back up their treatment decisions? There was plenty of published research, but much of it was remarkably unscientific, based largely on observations of a small number of cases. A new “evidence-based medicine” movement was just starting to gather force, and Ioannidis decided to throw himself into it, working first with prominent researchers at Tufts University and then taking positions at Johns Hopkins University and the National Institutes of Health. He was unusually well armed: he had been a math prodigy of near-celebrity status in high school in Greece, and had followed his parents, who were both physician-researchers, into medicine. Now he’d have a chance to combine math and medicine by applying rigorous statistical analysis to what seemed a surprisingly sloppy field. “I assumed that everything we physicians did was basically right, but now I was going to help verify it,” he says. “All we’d have to do was systematically review the evidence, trust what it told us, and then everything would be perfect.”


It didn’t turn out that way. In poring over medical journals, he was struck by how many findings of all types were refuted by later findings. Of course, medical-science “never minds” are hardly secret. And they sometimes make headlines, as when in recent years large studies or growing consensuses of researchers concluded that mammograms, colonoscopies, and PSA tests are far less useful cancer-detection tools than we had been told; or when widely prescribed antidepressants such as Prozac, Zoloft, and Paxil were revealed to be no more effective than a placebo for most cases of depression; or when we learned that staying out of the sun entirely can actually increase cancer risks; or when we were told that the advice to drink lots of water during intense exercise was potentially fatal; or when, last April, we were informed that taking fish oil, exercising, and doing puzzles doesn’t really help fend off Alzheimer’s disease, as long claimed. Peer-reviewed studies have come to opposite conclusions on whether using cell phones can cause brain cancer, whether sleeping more than eight hours a night is healthful or dangerous, whether taking aspirin every day is more likely to save your life or cut it short, and whether routine angioplasty works better than pills to unclog heart arteries.


But beyond the headlines, Ioannidis was shocked at the range and reach of the reversals he was seeing in everyday medical research. “Randomized controlled trials,” which compare how one group responds to a treatment against how an identical group fares without the treatment, had long been considered nearly unshakable evidence, but they, too, ended up being wrong some of the time. “I realized even our gold-standard research had a lot of problems,” he says. Baffled, he started looking for the specific ways in which studies were going wrong. And before long he discovered that the range of errors being committed was astonishing: from what questions researchers posed, to how they set up the studies, to which patients they recruited for the studies, to which measurements they took, to how they analyzed the data, to how they presented their results, to how particular studies came to be published in medical journals.


This array suggested a bigger, underlying dysfunction, and Ioannidis thought he knew what it was. “The studies were biased,” he says. “Sometimes they were overtly biased. Sometimes it was difficult to see the bias, but it was there.” Researchers headed into their studies wanting certain results—and, lo and behold, they were getting them. We think of the scientific process as being objective, rigorous, and even ruthless in separating out what is true from what we merely wish to be true, but in fact it’s easy to manipulate results, even unintentionally or unconsciously. “At every step in the process, there is room to distort results, a way to make a stronger claim or to select what is going to be concluded,” says Ioannidis. “There is an intellectual conflict of interest that pressures researchers to find whatever it is that is most likely to get them funded.”


Perhaps only a minority of researchers were succumbing to this bias, but their distorted findings were having an outsize effect on published research. To get funding and tenured positions, and often merely to stay afloat, researchers have to get their work published in well-regarded journals, where rejection rates can climb above 90 percent. Not surprisingly, the studies that tend to make the grade are those with eye-catching findings. But while coming up with eye-catching theories is relatively easy, getting reality to bear them out is another matter. The great majority collapse under the weight of contradictory data when studied rigorously. Imagine, though, that five different research teams test an interesting theory that’s making the rounds, and four of the groups correctly prove the idea false, while the one less cautious group incorrectly “proves” it true through some combination of error, fluke, and clever selection of data. Guess whose findings your doctor ends up reading about in the journal, and you end up hearing about on the evening news? Researchers can sometimes win attention by refuting a prominent finding, which can help to at least raise doubts about results, but in general it is far more rewarding to add a new insight or exciting-sounding twist to existing research than to retest its basic premises—after all, simply re-proving someone else’s results is unlikely to get you published, and attempting to undermine the work of respected colleagues can have ugly professional repercussions.
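
A reader can check the arithmetic of that five-teams scenario with a toy simulation. The sketch below is mine, not Ioannidis's or the article's; the theory-truth rate, the error rates, and the rule that only positive results get published are all illustrative assumptions.

```python
import random

# Toy simulation of publication bias (illustrative assumptions only).
random.seed(0)

N_THEORIES = 10_000
P_THEORY_TRUE = 0.10        # assumed: most eye-catching theories are false
TEAMS_PER_THEORY = 5
FALSE_POSITIVE_RATE = 0.15  # assumed: error, fluke, or selective analysis
POWER = 0.80                # assumed: chance a real effect is detected

published_true = published_false = 0
for _ in range(N_THEORIES):
    theory_is_true = random.random() < P_THEORY_TRUE
    for _ in range(TEAMS_PER_THEORY):
        if theory_is_true:
            positive = random.random() < POWER
        else:
            positive = random.random() < FALSE_POSITIVE_RATE
        if positive:  # only "positive" findings make it into the journals
            if theory_is_true:
                published_true += 1
            else:
                published_false += 1

wrong_share = published_false / (published_true + published_false)
print(f"Share of published positive findings that are wrong: {wrong_share:.0%}")
```

With these made-up but not outlandish numbers, well over half of the published positive findings are false, even though most teams, most of the time, reached the right answer.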


In the late 1990s, Ioannidis set up a base at the University of Ioannina. He pulled together his team, which remains largely intact today, and started chipping away at the problem in a series of papers that pointed out specific ways certain studies were getting misleading results. Other meta-researchers were also starting to spotlight disturbingly high rates of error in the medical literature. But Ioannidis wanted to get the big picture across, and to do so with solid data, clear reasoning, and good statistical analysis. The project dragged on, until finally he retreated to the tiny island of Sikinos in the Aegean Sea, where he drew inspiration from the relatively primitive surroundings and the intellectual traditions they recalled. “A pervasive theme of ancient Greek literature is that you need to pursue the truth, no matter what the truth might be,” he says. In 2005, he unleashed two papers that challenged the foundations of medical research.


He chose to publish one paper, fittingly, in the online journal PLoS Medicine, which is committed to running any methodologically sound article without regard to how “interesting” the results may be. In the paper, Ioannidis laid out a detailed mathematical proof that, assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time. Simply put, if you’re attracted to ideas that have a good chance of being wrong, and if you’re motivated to prove them right, and if you have a little wiggle room in how you assemble the evidence, you’ll probably succeed in proving wrong theories right. His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.
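
For readers who want the flavor of that mathematical argument: the paper reasons in terms of the probability that a claimed positive finding is actually true, given the prior odds that the tested relationship is real, the study's statistical power, its significance threshold, and the amount of bias. The function below is my own rendering of that widely quoted formula, with parameter values chosen for illustration; it is a sketch of the logic, not Ioannidis's code.

```python
def prob_finding_is_true(R, power, alpha=0.05, bias=0.0):
    """Probability that a claimed 'positive' finding reflects a real effect.

    R     : prior odds that the tested relationship is real
    power : 1 - beta, the chance a real effect is detected
    alpha : significance threshold (false-positive rate)
    bias  : fraction of would-be negative analyses reported as positive
            anyway (selective analysis, flexible methods, and so on)
    Follows the general form of the model in the 2005 PLoS Medicine paper.
    """
    beta = 1.0 - power
    true_positives = power * R + bias * beta * R
    false_positives = alpha + bias * (1.0 - alpha)
    return true_positives / (true_positives + false_positives)

# Illustrative scenarios (the parameter choices are mine, not the paper's):
print(prob_finding_is_true(R=0.05, power=0.6, bias=0.3))  # speculative, biased study
print(prob_finding_is_true(R=0.5,  power=0.8, bias=0.1))  # well-powered trial, little bias
```

With those inputs the speculative study is right only about one time in ten, while the careful trial is wrong roughly a quarter of the time, numbers in the same neighborhood as the refutation rates quoted above.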


Still, Ioannidis anticipated that the community might shrug off his findings: sure, a lot of dubious research makes it into journals, but we researchers and physicians know to ignore it and focus on the good stuff, so what’s the big deal? The other paper headed off that claim. He zoomed in on 49 of the most highly regarded research findings in medicine over the previous 13 years, as judged by the science community’s two standard measures: the papers had appeared in the journals most widely cited in research articles, and the 49 articles themselves were the most widely cited articles in these journals. These were articles that helped lead to the widespread popularity of treatments such as the use of hormone-replacement therapy for menopausal women, vitamin E to reduce the risk of heart disease, coronary stents to ward off heart attacks, and daily low-dose aspirin to control blood pressure and prevent heart attacks and strokes. Ioannidis was putting his contentions to the test not against run-of-the-mill research, or even merely well-accepted research, but against the absolute tip of the research pyramid. Of the 49 articles, 45 claimed to have uncovered effective interventions. Thirty-four of these claims had been retested, and 14 of these, or 41 percent, had been convincingly shown to be wrong or significantly exaggerated. If between a third and a half of the most acclaimed research in medicine was proving untrustworthy, the scope and impact of the problem were undeniable. That article was published in the Journal of the American Medical Association.


Driving me back to campus in his smallish SUV—after insisting, as he apparently does with all his visitors, on showing me a nearby lake and the six monasteries situated on an islet within it—Ioannidis apologized profusely for running a yellow light, explaining with a laugh that he didn’t trust the truck behind him to stop. Considering his willingness, even eagerness, to slap the face of the medical-research community, Ioannidis comes off as thoughtful, upbeat, and deeply civil. He’s a careful listener, and his frequent grin and semi-apologetic chuckle can make the sharp prodding of his arguments seem almost good-natured. He is as quick, if not quicker, to question his own motives and competence as anyone else’s. A neat and compact 45-year-old with a trim mustache, he presents as a sort of dashing nerd—Giancarlo Giannini with a bit of Mr. Bean.


The humility and graciousness seem to serve him well in getting across a message that is not easy to digest or, for that matter, believe: that even highly regarded researchers at prestigious institutions sometimes churn out attention-grabbing findings rather than findings likely to be right. But Ioannidis points out that obviously questionable findings cram the pages of top medical journals, not to mention the morning headlines. Consider, he says, the endless stream of results from nutritional studies in which researchers follow thousands of people for some number of years, tracking what they eat and what supplements they take, and how their health changes over the course of the study. “Then the researchers start asking, ‘What did vitamin E do? What did vitamin C or D or A do? What changed with calorie intake, or protein or fat intake? What happened to cholesterol levels? Who got what type of cancer?’” he says. “They run everything through the mill, one at a time, and they start finding associations, and eventually conclude that vitamin X lowers the risk of cancer Y, or this food helps with the risk of that disease.” In a single week this fall, Google’s news page offered these headlines: “More Omega-3 Fats Didn’t Aid Heart Patients”; “Fruits, Vegetables Cut Cancer Risk for Smokers”; “Soy May Ease Sleep Problems in Older Women”; and dozens of similar stories.


When a five-year study of 10,000 people finds that those who take more vitamin X are less likely to get cancer Y, you’d think you have pretty good reason to take more vitamin X, and physicians routinely pass these recommendations on to patients. But these studies often sharply conflict with one another. Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.


For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects—it’s a bit like combing through long, random strings of letters and claiming there’s an important message in any words that happen to turn up. But even if a study managed to highlight a genuine health connection to some nutrient, you’re unlikely to benefit much from taking more of it, because we consume thousands of nutrients that act together as a sort of network, and changing intake of just one of them is bound to cause ripples throughout the network that are far too complex for these studies to detect, and that may be as likely to harm you as help you. Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
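
The "random strings of letters" point is easy to demonstrate with fake data. In the sketch below, which is mine and not from the article, fifty made-up nutrients and twenty made-up health outcomes are generated as pure noise for a thousand people, and every nutrient-outcome pair is tested at the conventional p < 0.05 threshold.

```python
import numpy as np
from scipy import stats

# Pure-noise "nutritional database": no nutrient truly affects any outcome.
rng = np.random.default_rng(42)
n_people, n_nutrients, n_outcomes = 1000, 50, 20
nutrients = rng.normal(size=(n_people, n_nutrients))
outcomes = rng.normal(size=(n_people, n_outcomes))

flukes = 0
for i in range(n_nutrients):
    for j in range(n_outcomes):
        _, p_value = stats.pearsonr(nutrients[:, i], outcomes[:, j])
        if p_value < 0.05:   # counts as a "statistically significant" association
            flukes += 1

print(f"{flukes} 'significant' associations out of {n_nutrients * n_outcomes} "
      f"tests, even though none of the effects are real")
```

Roughly 5 percent of the thousand tests, about fifty associations, come up "significant" by chance alone; any one of them could, in principle, become next week's headline.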


On the relatively rare occasions when a study does go on long enough to track mortality, the findings frequently upend those of the shorter studies. (For example, though the vast majority of studies of overweight individuals link excess weight to ill health, the longest of them haven’t convincingly shown that overweight people are likely to die sooner, and a few of them have seemingly demonstrated that moderately overweight people are likely to live longer.) And these problems are aside from ubiquitous measurement errors (for example, people habitually misreport their diets in studies), routine misanalysis (researchers rely on complex software capable of juggling results in ways they don’t always understand), and the less common, but serious, problem of outright fraud (which has been revealed, in confidential surveys, to be much more widespread than scientists like to acknowledge).


If a study somehow avoids every one of these problems and finds a real connection to long-term changes in health, you’re still not guaranteed to benefit, because studies report average results that typically represent a vast range of individual outcomes. Should you be among the lucky minority that stands to benefit, don’t expect a noticeable improvement in your health, because studies usually detect only modest effects that merely tend to whittle your chances of succumbing to a particular disease from small to somewhat smaller. “The odds that anything useful will survive from any of these studies are poor,” says Ioannidis—dismissing in a breath a good chunk of the research into which we sink about $100 billion a year in the United States alone.
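
To make the "small to somewhat smaller" point concrete, here is a bit of arithmetic of my own; the baseline risk and the reported effect size are hypothetical, not drawn from any study in the article.

```python
# Hypothetical numbers chosen only to illustrate absolute versus relative risk.
baseline_risk = 0.02            # assumed ten-year risk of the disease
relative_risk_reduction = 0.20  # the kind of effect a study might trumpet

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_reduction

print(f"Risk falls from {baseline_risk:.1%} to {treated_risk:.1%}")
print(f"Absolute reduction: {absolute_reduction:.2%}")
print(f"People who must follow the advice for one to benefit: "
      f"{number_needed_to_treat:.0f}")
```

A 20 percent relative reduction sounds impressive, but on a 2 percent baseline it moves an individual's risk by less than half a percentage point, and some 250 people would have to follow the advice for one of them to benefit.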


And so it goes for all medical studies, he says. Indeed, nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest. The exciting links between genes and various diseases and traits that are relentlessly hyped in the press for heralding miraculous around-the-corner treatments for everything from colon cancer to schizophrenia have in the past proved so vulnerable to error and distortion, Ioannidis has found, that in some cases you’d have done about as well by throwing darts at a chart of the genome. (These studies seem to have improved somewhat in recent years, but whether they will hold up or be useful in treatment are still open questions.) Vioxx, Zelnorm, and Baycol were among the widely prescribed drugs found to be safe and effective in large randomized controlled trials before the drugs were yanked from the market as unsafe or not so effective, or both.


“Often the claims made by studies are so extravagant that you can immediately cross them out without needing to know much about the specific problems with the studies,” Ioannidis says. But of course it’s that very extravagance of claim (one large randomized controlled trial even proved that secret prayer by unknown parties can save the lives of heart-surgery patients, while another proved that secret prayer can harm them) that helps get these findings into journals and then into our treatments and lifestyles, especially when the claim builds on impressive-sounding evidence. “Even when the evidence shows that a particular research idea is wrong, if you have thousands of scientists who have invested their careers in it, they’ll continue to publish papers on it,” he says. “It’s like an epidemic, in the sense that they’re infected with these wrong ideas, and they’re spreading it to other researchers through journals.”


Though scientists and science journalists are constantly talking up the value of the peer-review process, researchers admit among themselves that biased, erroneous, and even blatantly fraudulent studies easily slip through it. Nature, the grande dame of science journals, stated in a 2006 editorial, “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public conception of peer review as a stamp of authentication is far from the truth.” What’s more, the peer-review process often pressures researchers to shy away from striking out in genuinely new directions, and instead to build on the findings of their colleagues (that is, their potential reviewers) in ways that only seem like breakthroughs—as with the exciting-sounding gene linkages (autism genes identified!) and nutritional findings (olive oil lowers blood pressure!) that are really just dubious and conflicting variations on a theme.


Most journal editors don’t even claim to protect against the problems that plague these studies. University and government research overseers rarely step in to directly enforce research quality, and when they do, the science community goes ballistic over the outside interference. The ultimate protection against research error and bias is supposed to come from the way scientists constantly retest each other’s results—except they don’t. Only the most prominent findings are likely to be put to the test, because there’s likely to be publication payoff in firming up the proof, or contradicting it.


But even for medicine’s most influential studies, the evidence sometimes remains surprisingly narrow. Of those 45 super-cited studies that Ioannidis focused on, 11 had never been retested. Perhaps worse, Ioannidis found that even when a research error is outed, it typically persists for years or even decades. He looked at three prominent health studies from the 1980s and 1990s that were each later soundly refuted, and discovered that researchers continued to cite the original results as correct more often than as flawed—in one case for at least 12 years after the results were discredited.


Doctors may notice that their patients don’t seem to fare as well with certain treatments as the literature would lead them to expect, but the field is appropriately conditioned to subjugate such anecdotal evidence to study findings. Yet much, perhaps even most, of what doctors do has never been formally put to the test in credible studies, given that the need to do so became obvious to the field only in the 1990s, leaving it playing catch-up with a century or more of non-evidence-based medicine, and contributing to Ioannidis’s shockingly high estimate of the degree to which medical knowledge is flawed. That we’re not routinely made seriously ill by this shortfall, he argues, is due largely to the fact that most medical interventions and advice don’t address life-and-death situations, but rather aim to leave us marginally healthier or less unhealthy, so we usually neither gain nor risk all that much.


Medical research is not especially plagued with wrongness. Other meta-research experts have confirmed that similar issues distort research in all fields of science, from physics to economics (where the highly regarded economists J. Bradford DeLong and Kevin Lang once showed how a remarkably consistent paucity of strong evidence in published economics studies made it unlikely that any of them were right). And needless to say, things only get worse when it comes to the pop expertise that endlessly spews at us from diet, relationship, investment, and parenting gurus and pundits. But we expect more of scientists, and especially of medical scientists, given that we believe we are staking our lives on their results. The public hardly recognizes how bad a bet this is. The medical community itself might still be largely oblivious to the scope of the problem, if Ioannidis hadn’t forced a confrontation when he published his studies in 2005.


Ioannidis initially thought the community might come out fighting. Instead, it seemed relieved, as if it had been guiltily waiting for someone to blow the whistle, and eager to hear more. David Gorski, a surgeon and researcher at Detroit’s Barbara Ann Karmanos Cancer Institute, noted in his prominent medical blog that when he presented Ioannidis’s paper on highly cited research at a professional meeting, “not a single one of my surgical colleagues was the least bit surprised or disturbed by its findings.” Ioannidis offers a theory for the relatively calm reception. “I think that people didn’t feel I was only trying to provoke them, because I showed that it was a community problem, instead of pointing fingers at individual examples of bad research,” he says. In a sense, he gave scientists an opportunity to cluck about the wrongness without having to acknowledge that they themselves succumb to it—it was something everyone else did.


To say that Ioannidis’s work has been embraced would be an understatement. His PLoS Medicine paper is the most downloaded in the journal’s history, and it’s not even Ioannidis’s most-cited work—that would be a paper he published in Nature Genetics on the problems with gene-link studies. Other researchers are eager to work with him: he has published papers with 1,328 different co-authors at 538 institutions in 43 countries, he says. Last year he received, by his estimate, invitations to speak at 1,000 conferences and institutions around the world, and he was accepting an average of about five invitations a month until a case last year of excessive-travel-induced vertigo led him to cut back. Even so, in the weeks before I visited him he had addressed an AIDS conference in San Francisco, the European Society for Clinical Investigation, Harvard’s School of Public Health, and the medical schools at Stanford and Tufts.


The irony of his having achieved this sort of success by accusing the medical-research community of chasing after success is not lost on him, and he notes that it ought to raise the question of whether he himself might be pumping up his findings. “If I did a study and the results showed that in fact there wasn’t really much bias in research, would I be willing to publish it?” he asks. “That would create a real psychological conflict for me.” But his bigger worry, he says, is that while his fellow researchers seem to be getting the message, he hasn’t necessarily forced anyone to do a better job. He fears he won’t in the end have done much to improve anyone’s health. “There may not be fierce objections to what I’m saying,” he explains. “But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”


As helter-skelter as the University of Ioannina Medical School campus looks, the hospital abutting it looks reassuringly stolid. Athina Tatsioni has offered to take me on a tour of the facility, but we make it only as far as the entrance when she is greeted—accosted, really—by a worried-looking older woman. Tatsioni, normally a bit reserved, is warm and animated with the woman, and the two have a brief but intense conversation before embracing and saying goodbye. Tatsioni explains to me that the woman and her husband were patients of hers years ago; now the husband has been admitted to the hospital with abdominal pains, and Tatsioni has promised she’ll stop by his room later to say hello. Recalling the appendicitis story, I prod a bit, and she confesses she plans to do her own exam. She needs to be circumspect, though, so she won’t appear to be second-guessing the other doctors.


Tatsioni doesn’t so much fear that someone will carve out the man’s healthy appendix. Rather, she’s concerned that, like many patients, he’ll end up with prescriptions for multiple drugs that will do little to help him, and may well harm him. “Usually what happens is that the doctor will ask for a suite of biochemical tests—liver fat, pancreas function, and so on,” she tells me. “The tests could turn up something, but they’re probably irrelevant. Just having a good talk with the patient and getting a close history is much more likely to tell me what’s wrong.” Of course, the doctors have all been trained to order these tests, she notes, and doing so is a lot quicker than a long bedside chat. They’re also trained to ply the patient with whatever drugs might help whack any errant test numbers back into line. What they’re not trained to do is to go back and look at the research papers that helped make these drugs the standard of care. “When you look the papers up, you often find the drugs didn’t even work better than a placebo. And no one tested how they worked in combination with the other drugs,” she says. “Just taking the patient off everything can improve their health right away.” But not only is checking out the research another time-consuming task, patients often don’t even like it when they’re taken off their drugs, she explains; they find their prescriptions reassuring.


Later, Ioannidis tells me he makes a point of having several clinicians on his team. “Researchers and physicians often don’t understand each other; they speak different languages,” he says. Knowing that some of his researchers are spending more than half their time seeing patients makes him feel the team is better positioned to bridge that gap; their experience informs the team’s research with firsthand knowledge, and helps the team shape its papers in a way more likely to hit home with physicians. It’s not that he envisions doctors making all their decisions based solely on solid evidence—there’s simply too much complexity in patient treatment to pin down every situation with a great study. “Doctors need to rely on instinct and judgment to make choices,” he says. “But these choices should be as informed as possible by the evidence. And if the evidence isn’t good, doctors should know that, too. And so should patients.”


In fact, the question of whether the problems with medical research should be broadcast to the public is a sticky one in the meta-research community. Already feeling that they’re fighting to keep patients from turning to alternative medical treatments such as homeopathy, or misdiagnosing themselves on the Internet, or simply neglecting medical treatment altogether, many researchers and physicians aren’t eager to provide even more reason to be skeptical of what doctors do—not to mention how public disenchantment with medicine could affect research funding. Ioannidis dismisses these concerns. “If we don’t tell the public about these problems, then we’re no better than nonscientists who falsely claim they can heal,” he says. “If the drugs don’t work and we’re not sure how to treat something, why should we claim differently? Some fear that there may be less funding because we stop claiming we can prove we have miraculous treatments. But if we can’t really provide those miracles, how long will we be able to fool the public anyway? The scientific enterprise is probably the most fantastic achievement in human history, but that doesn’t mean we have a right to overstate what we’re accomplishing.”


We could solve much of the wrongness problem, Ioannidis says, if the world simply stopped expecting scientists to be right. That’s because being wrong in science is fine, and even necessary—as long as scientists recognize that they blew it, report their mistake openly instead of disguising it as a success, and then move on to the next thing, until they come up with the very occasional genuine breakthrough. But as long as careers remain contingent on producing a stream of research that’s dressed up to seem more right than it is, scientists will keep delivering exactly that.
“Science is a noble endeavor, but it’s also a low-yield endeavor,” he says. “I’m not sure that more than a very small percentage of medical research is ever likely to lead to major improvements in clinical outcomes and quality of life. We should be very comfortable with that fact.”


This article is available online at:
http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/8269/

Thursday, October 14, 2010

The Rebbe by S. Hellman and M. Friedman



[This book is about Menachem Mendel Schneerson (1902-1994), the 7th Rebbe (a 'Prince' – tzadik, or righteous spiritual leader) of the 'ultra-orthodox' Chabad Lubavitch Hasidim. It is 'about' the rebbe but, in my estimation, should not be considered the comprehensive story (biography) of his life, because it does not cover his distinctive religious thinking in sufficient depth. Where does Menachem Mendel's life work stand in relation to Talmudic literature and to Jewish mysticism (Kabbalah) in general? What was so special about the way he dealt with the tales and maxims of the Midrash, the Baal Shem Tov and the Tanya in his talks, lectures and writings (described as 'bravura performances')? Maybe I'm being picky, but such intellectual concerns were a central focus of his life and teachings, and his accomplishments in such matters – his interpretations of what it meant to be Jewish, of the nature of the exile of the Jewish soul (and of G-d's own alienation from the world) – were obviously the main reason he became the 7th Rebbe in the first place, and why even today he is subject to a significant degree of occultation among the Lubavitcher Hasidim. Of course the authors rightly wished to avoid writing a mere hagiography of the rebbe, but perhaps they leaned a bit too far over to the other side, thus leaving goyim like myself too much in the dark!]


Menachem Mendel was directly related to the third rebbe and became, after an unusually long engagement, the son-in-law of the sixth rebbe; he was thus one of two presumptive heirs to the leadership of the Lubavitch Hasidic Court, which originated in Russia and Lithuania during the 18th century. Early in his adult life, however, together with his new wife and against the expressed wishes of her father, he detached himself to a large degree from the household and immediate community of the sixth rebbe in order to study to become an engineer, first in Berlin and then in Paris. This ambition could only be achieved by dint of great and prolonged effort, because he lacked secondary-school certification, had to audit many courses, and was thus forced to rely on testimonials rather than official grades to advance his academic career. He eventually succeeded in getting his degree but, as a result of the rising tide of fascism in Europe and anti-immigrant fervor in France, was unable to follow it up with a useful internship in his chosen field. He was then forced to flee to America for his very life. Very few opportunities to work as an engineer presented themselves to Menachem in his new country, whose language he did not even speak. Thus he settled into an intimate connection with the Court of the sixth rebbe.

Throughout his years as a student of the secular arts, Schneerson maintained his studies in Talmudic literature, the Kabbalah, the Midrash, the Baal Shem Tov and the Tanya. Insofar as possible – for the schedules of classes at the universities were very exacting, and the synagogues lay at great distances from his living quarters (which were sometimes in quite multicultural and artistic neighborhoods) – he maintained orthodox religious practices and returned to his father-in-law's household for important holidays when circumstances permitted. It is virtually certain that both he and his wife enjoyed a more cosmopolitan outlook than was generally acceptable among the Lubavitch Hasidim, and that they had other plans than the one that fate eventually provided.

The sixth rebbe – Rabbi Yosef Yitzchak – had been admired within Jewry as a whole for his advocacy on behalf of persecuted Soviet Jewry. After his arrival in the United States, however, during the last decade of his life, his increasingly insistent messianism embroiled him in controversy. That messianism carried within itself a powerful criticism of broad swaths of American Jewry, whose laxity in Jewish observance, he argued, had defiled the world and brought about catastrophe and ruin. Only immediate repentance and a return to ultra-orthodox observance would ensure the Messiah's coming and the redemption of the world (which he predicted would occur in his own lifetime). In August 1941, fifteen months after his arrival as a refugee in his adopted country – a rescue brought about by Jews who were not at all orthodox in their observance – he had argued that American Jews' “coldness and indifference... towards Torah and religion” were no less destructive than the fire in Europe that threatened “to annihilate two-thirds of the Jewish people.”

Many of Rabbi Yitzchak's contemporaries understood these words to be a criticism of other rabbis, religious leaders, and yeshiva heads who had clearly been unable to turn American Jewry towards greater religious observance. In words that stung much of the Orthodox rabbinate when they were published in 1941, at a time when Nazi general Erwin Rommel's tanks threatened to occupy the Holy Land, Yosef Yitzchak invited Jews to “imagine what would have happened if a few hundred rabbis... had appeared before the community and announced that the day of redemption is coming soon and that the tribulations of the Jewish people were simply the birth pangs of the Messiah; how powerful would the repentance of the Jewish people have been had they done so.” In the eyes of many Jews, he was blaming the victim.


Rabbi Yosef Yitzchak had hoped to offer a recipe for religious revival and Jewish survival, but instead he created conditions that led to the decline of his influence in America, a decline accelerated by his physical deterioration as his life neared its end. Such statements, as well as what many influential figures considered his radical messianism (and his aggressive fund-raising at their expense), did not endear him to many of the religious leaders in America, and probably accounted for their absence at his funeral in 1950. Nevertheless, the family and the Lubavitch community, centered at 770 Eastern Parkway in Crown Heights, Brooklyn, remained united in their bereavement and in their commitment to survival and growth.

This was the situation in which Menachem Mendel Schneerson stepped into the shoes of the 7th and last Lubavitch rebbe, claiming to channel the soul of Yosef Yitzchak and overwhelming the pretensions of the other candidate with his sensitive understanding of the Kabbalah texts and their adaptability to modern life. He breathed new life into the Chabad Lubavitch Hasidim. General accounts of his and the community's accomplishments, such as those provided in this book, are readily available online.

Sunday, October 10, 2010

E.M. Forster by Wendy Moffat



To John Lehmann and Christopher Isherwood, E.M. Forster was the “master” whom they called by his intimate name, Morgan. He was the only writer of the previous generation they admired without reservation. On the face of it he seemed like an odd literary mentor. Born in 1879, Forster was more than twenty years their senior. He made his name before the First World War, publishing a collection of short stories and four well-received novels: Where Angels Fear To Tread, The Longest Journey, A Room with a View, and Howards End. Compared to the great experimenters Joyce or Woolf, Forster's early novels seemed sedate. But to John and Christopher, these subtle satires of buttoned-up English life were revelatory and unpredictable. They admired Morgan's light touch, his razor balance of humor and wryness, insight and idealism. “Instead of trying to screw all his scenes to the highest possible pitch, he tones them down until they sound like mothers'-meeting gossip... There's actually less emphasis laid on the big scenes than on the unimportant ones.”


The novels looked at life from a complicated position – finding a dark vein of social comedy in the tragic blindness of British self-satisfaction. In spite of their sensitivity, they had a sinewy wit.


After the first four novels, there was silence. Morgan struggled for more than a decade to produce his last novel. A Passage to India came out in 1924. It had all the hallmarks of his earlier novels, but Morgan's insight was burnished into tragic wisdom. His complex and enlightened characters faced a world that seemed destined to break their wills and their hearts. But after A Passage to India, a curious silence. One of the most prominent novelists of his time appeared to simply cease writing fiction at the relatively young age of forty-five. Though he had almost fifty more years to live, there would be no more novels from Morgan.

Lehmann and Isherwood knew – or suspected – that by the time he published Howards End in 1910, Morgan had grown tired of the masquerade of propriety – the unspoiled countryside settings, the oh-so-English people in their white linen suits, the clever repartee – that generated his plots. As early as June 1911, he confided to his diary his “weariness of the only subject that I both can and may treat – the love of men for women & vice versa.”

But Forster forged on as a journalist, a reviewer, and an advocate for writers' freedom. Despite being “so shy it makes one feel embarrassed,” he became a pungent social critic. He argued that Western democracies misunderstood the third world. And he believed that democracy could be sustained only through tolerance and openness, especially when these qualities seemed to threaten national security. For more than fifty years Forster entered political fights from the position of the underdog. Almost every week one could read a pithy and pointed letter to the editor in his inimitable voice. He protested against fascism, against censorship, against communism, against “Jew-Consciousness,” against the British occupation of Egypt and India, against racism and jingoism and anything that smelled of John Bull. Morgan's public voice wasn't stentorian. He raised it, tremulously, often alone, against the edifice of conformity.

As self-proclaimed gay men, Isherwood and Lehmann adopted the same American neologism as the men who resisted police harassment at the Stonewall Inn in Sheridan Square, the men who embraced gay liberation, who eschewed the medical term homosexual, which had marked them for decades as a “species”. But the fact that they had lived through the sea change in attitudes and argot gave them fierce insight into the mystery of Morgan's strange broken-backed career.


Only weeks before Morgan died, Christopher made a pilgrimage to see him at King's College. On that spring morning in 1970, as always, Morgan looked impeccably ordinary, like “the man who comes to clean the clocks.” It was a canny disguise. In the 1920s, his college friend Lytton Strachey had nicknamed him the “Taupe,” a French word for “mole.” Though he was one of the great living men of letters, in a loose-fitting tweed suit and a cloth cap he slipped unnoticed into the crowd or sat quietly at the edge of the conversational circle. This mousy self-presentation was no accident. Forster came of age sexually in the shadow of the 1895 trials of Oscar Wilde, and he had learned his lessons well. Naturally quite shy, he consciously inverted Wilde's boldly effeminate persona. Where Wilde – and Strachey after him – cut flamboyant and dandified figures, Forster disappeared into the woodwork. Wilde's bon mots became famous epigrams; Forster instead chose to draw people out, letting them reveal themselves to him while he remained enigmatic.

To speak with Forster was to be seduced by an inverse charisma, a sense of being listened to with such intensity that you had to be your most honest, sharpest, and best self. Morgan's steadfast scrutiny tested his friends' nerves. Siegfried Sassoon found that it “always makes me into a chatterbox.” The attention made Christopher feel “false and tricky and embarrassed.” He always had to suppress the urge to act the clown, to “amuse” Morgan, to dispel the moral weight of his stillness and empathy.

All his life Morgan's friends struggled to put their finger on the ineffable quality that made him such an exceptional man. His pale blue eyes were terribly near-sighted, but everyone close to him noticed that they missed nothing. He had a “startlingly shrewd look of appraisal...behind the steel-framed spectacles... It was a curious feeling to be welcomed and judged at the same time.” To Christopher, Morgan's eyes made him look like “a baby who remembers his previous incarnation and is more amused than dismayed to find himself reborn in new surroundings.” In life and in writing, Morgan preferred to plumb the depths and to leave himself open to surprise. Even the most ordinary conversation could “tip a sentence into an unexpected direction and deliver a jolt.”

Forster conducted his life as if everyone lived in a novel, with the rich inner life of characters' motives and feelings operating as the rules of the world. Every occasion was carefully observed, and even the most clear-cut matters subjected to interpretation. His excessive insight made him seem hopeless about practicalities. One friend called him a “dreamer” and counseled that he should “face facts.” Morgan responded precisely: “It's impossible to face facts. They're like the walls of a room, all around you. If you face one wall, you must have your back to the other three.” His hyperprecision sometimes savored of the absurd: once when asked if it was raining, Forster slowly walked to the window and replied, “I will try to decide.”

The previous July, just after he arrived at King's College for his residency, Mark Lancaster found himself alone in an octagonal room where a tiny black-and-white television had been installed on a tea cart before the fireplace as a begrudging acknowledgment of the wider world. Next door was the Fellows' Senior Combination Room, on whose claret-colored walls the portraits of great Kingsmen – all friends of Morgan, all dead – gazed down: Rupert Brooke, a Roger Fry self-portrait, Duncan Grant's painting of Maynard Keynes. In contrast, the little room was barely big enough for two armchairs and the vitrines stuffed with ancient pottery that flanked the Gothic window. It was a nondescript time in the mid-morning, and the BBC was broadcasting coverage of the first moon landing. Decades later, Lancaster still remembered the scene clearly. Morgan “shuffled in, asked me what it was, settled down to watch” on the armchair beside him. He leaned forward conspiratorially towards Mark. “I'm not sure they should be doing that,” he said quietly...

The terms of E.M. Forster's will determined access to his unpublished writings, the “great unrecorded history” of his love for men that he had so carefully preserved. There were no restrictions placed on what readers could see, but Morgan forbade any sort of mechanical reproduction of manuscript material. From the beautiful glass box of the Ransom Center at the University of Texas in Austin to a friend's sitting room in Hampstead, from the cool majesty of the Huntington Library in Southern California to the modern hush of the Beinecke Library at Yale, and especially in the serene little room in King's that looks out across the lawn to the great Gothic chapel, you must touch the letters and notebooks, the photographs, the ticket stub from Mohammad's trolley car, and the baby Morgan's wispy lock of hair. And you must take the time. Penetrating and puzzling out the difficult, dense penmanship, copying out the relevant scraps by hand, phrase by phrase, engenders a trance, a feeling of automatic writing, a fleeting fantasy of complete connection to Morgan's remarkable mind and heart. So great and honest a writer and so humane a man, whose “defense at any last Judgment would be 'I was trying to connect up and use all the fragments I was born with.'”