
James Calvin Schaap

Wasicu at Chankpe Opi (a white man at Wounded Knee)


Think of it as a tawny ocean stopped in time, a vast landscape of grass, here and there mustache-like strips of trees darkening creek beds or running along the ridges like an old headdress unfurled in wind. Today, the place where the Wounded Knee Massacre took place looks very much as it did in the early winter of 1890: a featureless, shallow valley in a seemingly unending field of prairie grass that, on a gray day, weaves itself inconspicuously into the cloudy sky at its reaches.

On December 28, 1890, four Hotchkiss guns—the Sioux called them the guns that fire in the morning and kill the next day—stood on a small, whitecap hill amid this arid ocean, all four aimed down into the camp of a Minneconjou chief named Sitanka, or Big Foot. There, 300 men, women, and children were camped, hoping to reach Pine Ridge Agency the next day.

Today, more than a century later, a single battered billboard offers the only available outline of the story, the word “battle” crossed out and “massacre” scribbled in roughly above it. Otherwise, there is little to mark the spot. It is almost impossible to stand on that small hill and look down into the valley of Wounded Knee Creek and imagine what the place must have looked like so full of people.

But try. Try to imagine, within this yawning, empty space, a couple hundred Lakota just beneath the promontory where we’re standing, their worn and ripped tipis thrown up quickly, campfires floating thin plumes of smoke. These folks have been hungry for days—and tired, having marched hundreds of miles south toward Chief Red Cloud at the Pine Ridge Agency, where they thought they’d be safe.

But there’s more, far more. Across the ravine to the west, maybe a half mile away on another hill, sits a sprawling encampment of several hundred troops under the command of Col. James W. Forsyth, the largest military encampment since the Civil War. Picture a campground of nearly a thousand people in tents, then cut down all the trees in your mind’s eye to take in the sweep of the Dakota prairie.

Big Foot’s people were dancers, Ghost Dancers, strong believers in a frenetic, mystic ceremony, a hobgoblin of Christianity, Native ritual, and sheer desperation. If they would dance, they thought, Christ would return because he’d heard their prayers and felt their suffering. When he came he’d bring with him the old ones (hence, the Ghost Dance). And the buffalo would return. Once again the people could take up their beloved way of life. If they would dance, a cloud of dust from the new heaven and the new earth would swallow the Wasicu, all of them. If they would dance, their hunger would be satiated, desperation comforted.

There was no dancing here on the night before the massacre, December 28, 1890, but for almost a year leading up to it “the Messiah craze”—as the Wasicu called it—had spread throughout the newly sectioned reservations, as unstoppable as a prairie fire. A committee of Sioux holy men had returned from Nevada, where they’d met Wovoka, the Paiute who’d seen the original vision. They returned as disciples of a new religion.

Wovoka designed the ritual from his own visions. Erect a sapling in the middle of an open area, like the one in front of us now—the tree, a familiar symbol from rituals like the Sun Dance, then banned by reservation agents. Purge yourselves—enter sweat lodges, prostrate yourself before Wakan Tanka, the Great Mysterious. Show your humility—often warriors would cut out pieces of their own flesh and lay them at the base of that sapling to bear witness to their selflessness.

Then dance—women and men together, something rare in Sioux religious tradition. Dance around that sapling totem, dance and dance and dance and don’t stop until you fall from physical exhaustion and spiritual plenitude. Dance until the mind numbs and the spirit emerges. Dance into frenzy. Dance into ecstasy.

Now look back down into the valley, and imagine 300 men and women being slain by the spirit, most of them writhing in fine dust. Such mass frenzy made Wasicu of every denomination or political persuasion shudder. To them, it seemed madness on a cosmic scale.

The exultation of the Ghost Dance was the vision given to those who fell in frenzy. When they recovered their senses, each of them would reveal what he or she had seen, a collective vision: life would be good, rich, abundant, everything the coming of the Wasicu had ended. Jesus Christ, rejected by his own, had heard the voice of the people’s suffering and would bring them joy.

“The great underlying principle of the Ghost dance doctrine,” says James Mooney in his rich study published as early as 1896, “is that the whole Indian race, living and dead, will be reunited upon a regenerated earth, to live a life of aboriginal happiness, forever free from death, disease, and misery.” It was that simple and that compelling, a vision of heaven.

For me as a white man, and a Christian, it is not pleasant to admit that in the summer of 1890, the sheer desperation of Native people, fueled by poverty, malnutrition, and the near death of their culture, created a tragically false religion that played a significant role in what we’ve come to call, simply, Wounded Knee.

Throughout the West, the whole First Nation danced. What was peculiar to the Sioux, however, was this solitary tenet: those who wore the ghost shirt or ghost dress—the prescribed apparel of the faith—could assume themselves impervious to bluecoat bullets. Dancers could not die. They were holy.

It would be dead wrong to assume that that belief or any other created by the Messiah craze was the single cause for the horror that happened here in December 1890. Others are far more prominent: the disappearance of the buffalo, the unceasing trek of white settlers onto traditional Lakota land, a long history of broken treaties, distrust on every side, the searing memory of “Custer’s Last Stand,” and, perhaps most of all, the inability of two peoples to understand each other. When you look down on the shallow valley of the Wounded Knee, bear in mind that what happened here was the confluence of many motives, some of them even well-meaning, but all of them finally tragic.

Here we are. Look around. If you stand on this promontory in the summer, the heat can be oppressive; but on a good day you might be surrounded by a couple dozen tourists. That’s all. Wounded Knee doesn’t exactly border the Black Hills, and it’s not on the way to Yellowstone. It’s not on the way to anything, really. Right now you’re in the heart of flyover America; many millions of Americans never come closer to this shallow valley than, say, Chicago. Any time of year, the twisted vapor trails of jets on their way to LAX or LaGuardia float like ribbons in the genial sky.

Cars and trucks navigate the reservation roads that cross almost directly at the point of battle, but in the late fall or muddy spring or cold midwinter—like that December day in 1890—it’s likely you’ll stand very much alone at Wounded Knee. Gettysburg National Military Park offers an aging but impressive Cyclorama, a remarkable circular painting, 356 feet by 26 feet, that puts visitors at the heart of the battle. Little Big Horn’s visitors center sells helpful interpretive audio tapes to use as you tour several miles of battlefield from the air-conditioned comfort of your minivan. But if you want to know what you can about Wounded Knee, the only storyteller there, all year round, is the wind.

Just imagine the encampment before you, and keep in mind the despair and the frantic hope of the dancers. “To live was now no more than to endure / The purposeless indignity of breath,” says John G. Neihardt in The Twilight of the Sioux. Millions of buffalo once roamed here, the staple of existence for thousands of nomadic Native people, the soul of their culture and faith. By 1890, they were gone.

In North Dakota’s horrible winter of 1996, while thousands of cattle died in the monstrous cold, it was reported that only one bison perished. Once the buffalo ruled here. In the openness all around you, the Great Plains stretching out almost forever in every direction, try to imagine what it must have been like to stand on this promontory and look over herds so large you could see the mass ripple as they shifted slightly when detecting human scent, almost like watching wind on water. That’s what’s gone. To the Sioux, the hunt was not only manhood’s proving ground but a celebration for the family, often opened and closed with prayer. Few 19th-century Wasicu could understand that the disappearance of the buffalo seemed, to many Plains Indians, almost the death of God. I don’t believe I can, try as I might.

Today, right behind you, you’ll see a fenced-in enclosure where a granite monument, nine feet tall, lists the names of a few of those killed here. “Chief Big Foot,” it says, and then lists “Mr. Shading Bear, Long Bull, White American, Black Coyote, Ghost Horse, Living Bear, Afraid of Bear, Young Afraid of Bear, Yellow Robe, Wounded Hand, Red Eagle,” and just a handful more. Estimates vary on the number of dead buried where you’re standing, but most think 150 or so frozen bodies were dumped into a mass grave that now lies beneath the cordon of cement. No ceremony—Native or white. Just a dump.

On the other side of the stone there’s an inscription, still visible 70 years after the marker was placed where you’re standing:

This monument is erected by surviving relatives and other Ogallala and Cheyenne River Sioux Indians in Memory of the Chief Big Foot Massacre Dec. 29, 1890.

Col. Forsyth in command of U. S. Troops. Big Foot was a great chief to the Sioux Indians. He often said “I will stand in peace till my last day comes.” He did many good and brave deeds for the white man and the red man. Many innocent women and children who knew no wrong died here.

As Harry W. Paige says in Land of the Spotted Eagle, this isn’t the grammar, the syntax, or mechanics of an Oxford don. What it is, he says, is “writing that weeps.”

But what exactly did happen on the morning of December 29, 1890?

With nothing to stop it, sound travels easily on a landscape this barren. So imagine the bleat of reveille cutting through the morning cold. It’s eight o’clock, and the sun rises magnificently, albeit late, winter solstice just a few days behind. Many of the women, some of them singing, are packing for the 17-mile trip to Pine Ridge, where they anticipate meeting relatives and friends. Children play innocently around the ragged tipis and wagons, and for the first morning in many, most have eaten well.

By Indian messenger, Col. Forsyth, the commanding officer, calls the men of Big Foot’s band to come to parley directly southeast of us, at the spot where the chief’s tent stands, maybe 300 yards down the hill. Spread around the entire encampment like a huge lariat, even beyond the dozens of Indian ponies just west of Big Foot’s camp and the ravine behind it, 76 unmounted sentries, equally spaced, watch the movement. On the rise beyond the ravine and set against the horizon, a long line of mounted bluecoats waits menacingly; just in front of them, several dozen of the cavalry’s Indian scouts. From the vantage point of the soldiers, the field seems well in hand, the position geometrically arranged to prevent escape. There is no chaos, yet.

As they were commanded, something close to 100 men from Big Foot’s band take their places in the council circle. Behind them, those lines of bluecoats move quickly to separate the men from their women and children.

The command is given to disarm. In the face of such untoward odds, the Sioux men are wary—not only does the positioning all around them seem ominous, but to a culture created on institutional violence, giving up one’s means to fight is giving up oneself. What’s more, they’d been promised the day before that they could keep their arms until they arrived at Pine Ridge.

Troops are dispatched to search and seize what arms they can turn up in the encampment behind them. What happens is not pleasant. The women do not take kindly to the sometimes brutal ways of the bluecoats. When the soldiers return, they have more guns, but also axes, knives, bows and arrows, tent stakes, even beadwork awls.

It is early winter, remember, but there is more than enough emotion in the air to ignite the landscape. Fear, prejudice, a history of deception, mutually defiant cultural values, and nothing less than hate lie beneath us here like so much kindling, waiting for the pop of a flame; the whole place is combustible. What exactly happened next may be debated forever, but the trajectory of events is no more debatable than the outcome.

Somewhere on the peripheries of the council circle stands a man variously described as half-crazed or desperate. He was, by all accounts, a man of faith, a medicine man, who considered it his duty to advise the men in council circle of their dignity and their calling.

One account describes him this way: “a grand figure … with green-colored face and a yellow nose, terrifying to behold. He wore with pride his floating crown of eagle feathers, while his costume was a wonder of wild adornments.” Some name this man Yellow Bird, while others claim Yellow Bird was nowhere near Big Foot’s camp. Whatever his identity, his eccentric look and behavior call upon the dignity of Lakota history and culture. What he espouses is at least something of the doctrine of the Ghost Dance. He tells the men not to fear. As Crazy Horse, by legend, once exhorted his men before Little Big Horn, this man reportedly cried and sang to his people, told them this was a good day to fight and a good day to die. He promises eternal life.

For centuries, white performers have been instructed to sing from the diaphragm. The sound produced in Native songs and chants begins in the front of the throat; the difference is startling. To white folks unaccustomed to the keening, me among them, the sound produced seems more like a shriek than a hymn. As you stand there, those Hotchkiss guns poised just beneath you, listen to the medicine man’s seemingly mad music and try to stop your fists from tightening.

“The men are hiding guns,” an officer says.

It’s December, still early in the morning, and the Sioux men are wrapped in blankets. A search follows. In a pile in the middle, almost 70 old rifles lie over each other like fallen branches.

Then, something happens—nobody knows exactly what. The bluecoats draw their rifles and swords. Rifle magazines click open and close; guns are brought into position to fire.

A single soldier begins to wrestle with Black Coyote, one of the Sioux men. Some say he was deaf. At the same moment, the medicine man gets to his feet, picks up a handful of dust, and throws it at the soldiers, his shrieking exhortation continuing in the Sioux language. The soldier and Black Coyote struggle for possession of a rifle, while down the line another soldier begins struggling with another Sioux for a rifle wrapped in a blanket. The medicine man keeps telling his people white bullets will not harm them.

One shot. Whose was it? Did it come from Black Coyote in the struggle? No one knows for sure. But in a moment all hell breaks loose.

An old woman who used to live down our street claims that out here on the prairie we get only about ten sweet days a year. Prairie cold locks life in its frigid jaws; the heat wilts anything that grows; and always, the wind blows. In the summer, it’s capable of sand-blasting your face; in the winter, its bite is not only dangerous but deadly. But that morning, December 29, 1890, the wind stood still.

When you look down now, from the promontory where the First and Second Artillery have been firing those Hotchkiss guns into the horror beneath them, imagine a cloud of dust and smoke so thick as to stop breath. In seconds, in the very middle of the fray, combatants cannot see each other, but blindness doesn’t stop the killing. Bullets fly indiscriminately, killing not only many of the Sioux in the middle but also bluecoats on either side. (Indeed, that the cavalry could have avoided shooting each other at such close quarters seems impossible, despite claims to the contrary in military hearings conducted later.)

Seventeen miles away, at the Pine Ridge Agency, people claimed to hear the firing. In less than half an hour, the “battle” is over.

Just exactly who fired first might never be established, but there is no question whose rifles ended the massacre. With the first shots, scores of Lakota women and children run away into that ravine you see just beyond the fighting. With dozens of their own down in the middle of the madness, Forsyth’s men are in no mood to take prisoners, so for several hours after the bloody combat that began in front of Big Foot’s tent, scattered gunfire continues as far as three miles away, up and down the ravines that cut through the tawny prairie around the creek called Wounded Knee. What began in the intolerable heat of combat ended in cold-blooded murder.

If you’d like, perhaps you could walk down into those ravines, no more than a half mile from where we’re standing. There are no markers anywhere, like the ones at Little Big Horn, no whited stones to mark the spots where people fell. But even in their absence, ghosts linger.

That afternoon, when the shooting ended, Army personnel loaded 39 of their wounded into wagons, along with their 25 dead. Fifty-one wounded Sioux were located, 47 of them women and children, some of whom—like six of the cavalry survivors—would soon succumb to their injuries. The Sioux dead were left on the field and in the ravines, but exactly how many had been killed will never be known. Native people consider 300 a fairly just estimate. “And I can see that something else died there in the bloody mud, and was buried in the blizzard,” Black Elk said. “A people’s dream died there. It was a beautiful dream.”

That night, a blizzard came in on the wind and laid a gossamer veil over the carnage—some say mercifully; some think the hand of the white men’s God was simply covering their sin. Wounded Knee was the final military action in the Plains Indians Wars, the horrid, bloody conclusion of a cultural and religious confrontation that, from my vantage point as a white man at Wounded Knee, looks even today like something obscenely inevitable. Millions of white people—my own Dutch immigrant ancestors among them—went west for cheap land they assumed the Sioux didn’t value. After all, where were the improvements, the tree lines, the fences, the buildings, the cut sod? Millions of white people—my own ancestors among them—thought our holy book was a generous gift in exchange for the millions of acres those pagan people had once roamed in freedom.

There’s more. You must have noticed because you can’t have missed what’s right in front of us—what’s been there the whole time we’ve been watching what happened. Be careful as you walk around on that promontory because a block foundation, scattered with crumpled beer cans and trash, marks the outline of what was once a Catholic church, right there where those Hotchkiss guns rained death on the council circle. It’s crumbling, as things do that are not preserved.

The church that once stood here was destroyed in the 1973 Wounded Knee conflict, when, once more, violence occurred not far from where we’re standing. Men and women who held radically different views of Native dignity squared off against each other in this very valley. That dispute brought in U.S. marshals and turned deadly when armed Wasicu, here once again, dug in like the cavalry. For many, those government marshals were here to defend tribal leaders, men some thought violent and despotic, who’d long ago sold their souls for fools’ gold.

It isn’t pretty—this crumbling shell. There’s nothing to suggest that what once stood above ground here represented—even offered—the Prince of Peace.

In Coventry, England, you can walk within the skull-like remains of a cathedral destroyed by Nazi bombs during World War II, a remarkable memento of British suffering during relentless air strikes. Coventry Cathedral is what much of Europe looked like after Hitler. That foundation is immense; its walls rise and fall jaggedly. But its perimeters are festooned with plaques and flowers and all kinds of memorials neatly commemorating suffering and heroism.

No walls still stand on the foundation half-buried in the crest of the hill where we’re standing. No memorials—just graffiti—decorate what’s there. No one keeps the place up, so what’s left deteriorates in the abusive hands of changing prairie seasons. You can walk into that foundation, if you dare. The empty shell of the church that once looked over the field where hundreds died is nothing at all like the monument at Coventry.

And yet it is. It’s just not sanitized. But then, nothing is at Wounded Knee.

There is a circular visitors center down the hill to the east; the pit toilets stand just outside. The center itself is black, and it’s likely you’ve parked beside it before you walked up the hill to the burial monument. In the summer, the place is open. You can wander into its dark confines, where various displays will tell part of the story. But most of the year you’ll find a padlock on the door, which means you’re on your own at Wounded Knee.

Now look down at the sign where the reservation roads cross, 300 yards from where we’re standing. In summer, you might see a car or two. Go ahead. Walk down. People there beneath a brush arbor—Sioux people—will be happy to sell you some keepsake from your visit.

I have one—a little cowhide drum, two inches across, decorated with beaded fringe and hand-painted on both sides. On one side, the image of a red drum; on the other, the words “Wounded Knee” painted above a single eagle feather flanked by two dates: 1890 and 1973.

Cost me 20 dollars. I bought it from an angular man in a Western shirt who had three of them strung over his hand when he showed me his goods. His dark, expressionless face was pockmarked, his eyes blood-lit. I am sad to say he looked far too much like the image most of us have of reservation people today.

“My wife makes them,” he told me slowly, handing me the one that now hangs on my wall. He pointed to an old Ford parked ten feet away. I looked into the interior where she was sitting on the passenger’s side. She didn’t move, her head drooping as if she were asleep, or drunk.

I picked a crisp twenty out of my billfold and handed it to him. He took it and left. I suppose the next day he would return with the other two he’d shown me.

I don’t know that I can unpack the whole meaning of that 20-dollar transaction—what percentage of what I gave him may have come from pity, what percentage from blood guilt, what percentage from the very real desire to take some icon home to remember Wounded Knee. I honestly cannot interpret my motives, in part because I don’t know that I want to look that closely into my own heart.

But I’m happy that little cowhide drum is here beside me as I write these words. Not because it’s cute—it isn’t. I have no doubt that some enterprising Wasicu could create a kiosk and churn out Wounded Knee kitsch far more marketable—refrigerator magnets, pens with pinto ponies that run up and down the shaft. But there’s something about the people who sold it to me that I can’t forget, any more than the tawny prairie landscape all around or the entire awful story that gives the valley its ghostly life. Mystery and sadness are here in my little cowhide drum, a drum that doesn’t sound.

Mostly, at Wounded Knee, there is silence. When you visit, you won’t read or hear many words at all. If you’re white and you want to understand, you’ll have to look deeply into your own heart, stare into your deepest values, listen to the songs you sing, examine the history your family has lived and the faith you celebrate.

Maybe it’s best to simply stand in awe at Wounded Knee and pray with your silence. That’s not easy. We’re not good at lamentations. White folks would much rather see Wounded Knee as a battle than a massacre—as we have, officially, for more than a century.

Look up. Somewhere in that vast azure dome a jet will be cutting a swath across the openness. Inside, 300 people are sipping Cokes, reading Danielle Steel, watching a movie. Some are sleeping. Some are traveling home.

Do the math. Count them yourself—the thousands each day that only incidentally glance out from cornerless airplane windows as they pass over the spot we’re standing. Then look around and see how alone you are up here on the hill with four silent Hotchkiss guns.

Maybe we’d all rather not know. We’d all rather fly over Wounded Knee.

Visit sometime. Leave the kids at home. Welcome the silence. Stand here for an hour until the keening, the death songs, rise from the ravines as they once did. Look out over nearly a thousand ghosts assembled in space so open it’s almost frightening.

Stand here alone for a while, and I swear that what you’ll read in the flow of prairie grasses and hear in the spirit of the wind is that, really, despite the tracks of those jets in the skies above and the immensity of silence all around, once upon a time every last one of us was here.

James Calvin Schaap is professor of English at Dordt College. His most recent book is an edited collection, More Than Words: Contemporary Writers on the Works That Shaped Them, just published by Baker.

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.


Thomas Albert Howard

Where did the ideas that shape our world begin?


It might be a stretch, but perhaps only a slight one, to suggest that the century just past, so celebrated and reviled at the turn of the millennium, will ultimately be judged inconsequential when compared to the “long century” that, as historians reckon, began with the strident cries of “liberty, equality, and fraternity” in 1789 and ended with the bloodbath of 1914-18.

Consider, for example, the revolutions of 1989, the crumbling of the Soviet Bloc in Eastern Europe. That the events of that year signified the retreat of the long shadow cast by Karl Marx was almost universally acknowledged. But to grasp their full significance, Francis Fukuyama—who became one of the most influential interpreters of that year of revolutions—turned for inspiration to none other than Marx’s own mentor, G. W. F. Hegel, proclaiming 1989 as a sign of the necessary global triumph of liberalism and “the end of history.”

And this is but one instance out of many. In fact, much of the 20th century could be construed as the conflicted outworking of 19th-century thought. Nationalism, born in the experience of the French Revolution and the Napoleonic conquests, proved a dominant factor in the origins of the Great War and, subsequently, in the rise of Fascist Italy and Nazi Germany. The theories of Marx, refracted through Lenin, fueled the 1917 Russian Revolution, as well as other socialist experiments throughout the world. Western liberalism itself, though born in the Enlightenment, achieved coherence and direction only in the democratic developments of the early and mid-19th century and in the writings of liberals like Alexis de Tocqueville and J. S. Mill.

Indeed, wherever we look, the innovations and questions of 19th-century thinkers persist. Scientists today are still indebted to Charles Darwin as we embark upon the wonderful, horrible genetic future. Psychologists and psychiatrists remain preoccupied with exorcising the ghost of Freud, while some critics have hailed the notion of “the therapeutic” as the key to explaining contemporary culture. What are 20th-century existentialism and neo-orthodox theology if not evidence of the prescience and power of Søren Kierkegaard’s melancholy mind? Contemporary feminism traces its roots to writings like Mary Wollstonecraft’s Vindication of the Rights of Woman (1792) and Margaret Fuller’s Woman in the Nineteenth Century (1845). And aren’t the myriad moods of postmodernism ultimately footnotes to the scribblings of Friedrich Nietzsche, whose terrible originality seems slightly less so when one reads fellow 19th-century pessimists like Arthur Schopenhauer and Jacob Burckhardt? What is more, the above catalogue does not even glance at the influence of Friedrich Schleiermacher, Feodor Dostoevsky, Auguste Comte, Ralph Waldo Emerson, Leopold von Ranke, John Henry Newman, William James, Max Weber, Émile Durkheim, and many more. A century of dunderheads it was not.

The abiding significance of the 19th century’s intellectual legacy was not lost on the brightest lights of the century just past. One thinks of Karl Löwith’s From Hegel to Nietzsche: The Revolution in Nineteenth-Century Thought; Hayden White’s Metahistory: The Historical Imagination in Nineteenth-Century Europe; Karl Barth’s Protestant Theology in the Nineteenth Century; and Owen Chadwick’s The Secularization of the European Mind in the Nineteenth Century. Alasdair MacIntyre’s Three Rival Versions of Moral Enquiry, one of the most provocative works of contemporary philosophy, amounts to an extended meditation on divergent 19th-century intellectual currents.

But if we draw attention to the enormous legacy of 19th-century thought, we are also bound to acknowledge the forbidding and difficult character of many of its master texts. Anyone courageous enough to sit down with Hegel’s Phenomenology of Mind, for example, probably shares a special kinship with spelunkers overtaken by panic and fatigue in labyrinthine darkness. The best way to read Marx’s Das Kapital, someone once quipped, is to tear Marx out and read the introductory material. Personal experience confirms such frustration. As a teacher of modern European history, I find that students regularly gravitate to the bloody drama and ostensible relevance of more contemporary events, despite my repeated suggestion that historical perspective might reveal the origins of 20th-century events in 19th-century ideas.

It is for precisely this reason that Steve Wilkens and Alan Padgett’s survey of “Faith and Reason” in the 19th century is so valuable. Their book is the second in a series, complementing Colin Brown’s earlier one that covered the period from the classical world to the Enlightenment. Wilkens and Padgett provide an eminently readable “overview for students.” Beginning with Kant and ending with Freud, they leave few stones unturned as they touch upon German Idealism, Romanticism, Transcendentalism, Left Hegelianism, Positivism, Liberalism, Pragmatism, Confessionalism, Darwinism, and more. (One wonders if the century would be at all comprehensible without the fortuitous suffix “ism.”) Their method is straightforward, as befits a survey. For the most part, they proceed with a brief sketch of each thinker’s life before summarizing the most salient and influential aspects of his thought. A section on “significance” usually follows and, finally, a critical assessment that often includes comparisons and contrasts with previously discussed thinkers and movements.

Although most sections are compelling and informative, I was especially pleased by the authors’ ability to sort out the shape and significance of German Idealism. I was equally impressed by the lucidity of the chapter entitled “Confessionalism and Liberalism,” in which they discussed several tradition-based reactions to popular religion and to the century’s faith in rationality; here they consider John Henry Newman and the Oxford Movement, Neo-Lutheranism, Mercersburg Theology, and Charles Hodge and Princeton Theology. Under theological liberalism, they discuss major German figures like Albrecht Ritschl, Adolf von Harnack, and Wilhelm Herrmann; the Catholic Modernism of Alfred Loisy; and Walter Rauschenbusch and the Social Gospel. They conclude this chapter with a suggestive allusion to Karl Barth’s reaction to 19th-century theology, which includes the well-known quote that his Commentary on the Letter to the Romans (1918) “fell like a bomb on the playground of the theologians.” (This whets one’s appetite for a projected volume by the authors on 20th-century philosophy and theology.)

A volume like Wilkens and Padgett’s raises the question of what is the appropriate nature of a Christian survey of an important area of academic inquiry. What might a student at, say, Azusa Pacific University (where both authors were based when their book was published, though Padgett has since moved to Luther Seminary in Saint Paul, Minnesota) or at my own institution, Gordon College, take away from a course on 19th-century intellectual history that differs from such a course taught in a secular environment? The authors state explicitly that their faith “stimulated and guided” the work. Yet I was struck by how little a colleague who might not share the authors’ faith commitments would object to their assertions. The contours of Christian reasoning are apparent, but the book is remarkably evenhanded. And this is for the good, for all too often Christian scholars have abetted their ghettoization by using “insider” vocabulary or by making unwarranted appeals to divine insight.

Still, I wonder if the authors were, at times, too cautious in pursuing the implications of their faith. This is especially apparent in their criteria of selection. Bearing in mind that selection is always a daunting task for a survey and that the authors are primarily interested in philosophers, I was left with several concerns. With the exception of the aforementioned chapter on confessionalism and liberalism, the book rarely departs from what one might call the canonical “grand narrative” of modern Western intellectual history. But by adhering so closely to this narrative, Wilkens and Padgett omit several routinely overlooked but nonetheless significant Christian thinkers. For example, the University of Berlin’s E. W. Hengstenberg (1802-69) was a conservative Lutheran thinker of tremendous influence in Germany during the heyday of theological liberalism. While the authors mention him briefly, I cannot help but think that students might benefit from knowing how a mind like Hengstenberg’s responded to his Berlin colleagues Schleiermacher and Hegel and to radical biblical critics like D. F. Strauss and F. C. Baur, all of whom receive substantially more attention than Hengstenberg.

More significantly, I was struck by the paucity of Catholic thinkers. Besides gaining familiarity with John Henry Newman and Alfred Loisy, a reader might assume that Catholics made little or no contribution to 19th-century thought. Félicité de Lamennais, a pioneer of Catholic modernism, is not discussed. The authors refer to Leo XIII twice in passing, but his most influential encyclicals—such as Aeterni Patris (1879), which sparked a Thomist revival in philosophy still felt in John Paul II’s recent Fides et Ratio, and Rerum Novarum (1891), the Magna Charta of modern Catholic social thought—are not mentioned, let alone discussed. Other conspicuous Catholic omissions include René Chateaubriand, Alexis de Tocqueville, Lord Acton, and J. J. Döllinger. While I’m inclined to agree that 19th-century Protestant and secular thought was both more innovative and, for better or worse, influential, I’m also persuaded that all Christian students—perhaps especially evangelicals—would profit considerably from knowledge of the Catholic intellectual tradition from this period.

Yet as it stands, Christianity and Western Thought is a useful and much-needed book, which could be fittingly employed in a number of history, theology, and philosophy courses. It could also serve as a general introduction for anyone who desires to make sense of the significant and complex terrain of modern thought. And a worthwhile task this is, for as the violent convulsions of the 20th century recede in memory, the intellectual convulsions of the 19th century seem interminably with us.

Thomas Albert Howard is associate professor of history at Gordon College and author of Religion and the Rise of Historicism (Cambridge Univ. Press).

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.


Paul Willis

The virtues of walking


In the heady Oregon spring of my senior year of high school I embarked on a grand experiment: for 30 days, I would not ride in an automobile. I was curious to see if life could be lived, and what life might be like, on my own two feet (and occasionally on the two fat wheels of my one-speed Schwinn). School was six miles away by road, but four by shortcut through woods and fields, and that part I liked. A little extra time in transit was rewarded by a zesty hike and the scenic wonders of buttercups and barbed wire. I remember practicing my choir numbers in the forest. And I loved the sense of my home being connected to school in one, steady, bodily motion. For the last few yards I strode through a large and thoroughly needless parking lot; I was already there, with nothing to encumber me. For the 30 days I quite literally paced myself and managed to get to my various obligations in due time.

The only problems I remember were with the women in my life. My mother asked me to rush down to the grocery store to buy some eggs for a recipe she was making for some company that was soon to arrive. I calmly informed her I would ride my bicycle down to the store, but not drive. That, she said, made no sense at all. She was mad. And I seem to recall that my girlfriend and I may have broken up that fateful month. Her house was eight miles from mine, and I showed up less frequently and sometimes asked for a shower when I got there. Evening dates became difficult, as did making out in the nonexistent front seat of the nonexistent car.

As it turns out, I married a woman who likes to walk, though unfortunately her natural pace is about twice the speed of mine. I have heard that in some parts of India the husband walks 20 paces in front of his wife. In our own progressive but pedestrian marriage, this arrangement is reversed. Just a couple of summers ago we spent a week in Tuolumne Meadows to celebrate our 20th anniversary, scrambling up some of the peaks we had first climbed on our honeymoon. On the way down Mt. Dana, only just recovering from the aerobic trauma of our ascent, I told my wife I was thinking of writing a novel called In Her Steps. She was far enough ahead of me that she didn’t quite hear what I’d said.

“Isn’t that the one that asks, ‘What would Jesus do?’” she said.

“I think Jesus would walk more slowly,” I told her.

To say, as the warbling soloists of my childhood often did, that “I walked today where Jesus walked” is to equate walking with living. That we have in recent centuries invested walking with particular cultural significance is the fascinating thesis of Rebecca Solnit’s Wanderlust: A History of Walking. And that in relatively recent decades Americans have impoverished themselves by subtracting walking from their living is Solnit’s prophetic word. City planners, she informs us, now calculate “walking distance” as one quarter mile or less. Anything more is a hop in the car. Beyond a quarter mile, apparently, our legs, our patience, our time, and our imagination give out. I drove today where Jesus walked.

Solnit is eloquent in suggesting the advantages of engaging thought and place through our feet:

Walking, ideally, is a state in which the mind, the body, and the world are aligned, as though they were three characters finally in conversation together, three notes suddenly making a chord. Walking allows us to be in our bodies and in the world without being made busy by them. It leaves us free to think without being wholly lost in our thoughts.

For Solnit, “a certain kind of wanderlust can only be assuaged by the acts of the body itself in motion, not the motion of the car, boat, or plane. It is the movement as well as the sights going by that seems to make things happen in the mind.” One of the most alienating experiences to be had, she says, is watching a film on a jetliner at 35,000 feet—doubly removed from the earth. She also holds special contempt for the fashion of the exercise treadmill. Such simulated walking suggests that space itself has disappeared: “The treadmill is … a device with which to go nowhere in places where there is now nowhere to go. Or no desire to go.”

Automated motion, actual or virtual, creates a false urgency, an “insistence that travel is less important than arrival. I like walking,” says Solnit, “because it is slow, and I suspect that the mind, like the feet, works at about three miles an hour. If this is so, then modern life is moving faster than the speed of thought, or thoughtfulness.”

If this sounds romantic, Solnit would agree: Romantic with a capital R. She traces the roots of her sense of walking back to Rousseau and Wordsworth in particular. (They, in turn, draw upon the medieval tradition of pilgrimage.) Rousseau claimed, “I can only meditate when I am walking. When I stop, I cease to think; my mind only works with my legs.” Wordsworth, along with his immediate peers and predecessors, began the practice of walking for the pleasure of being in landscape. Rousseau was interested in the landscape of his own mind. So was Wordsworth, but he made a particular virtue of engaging his mind with the world that he walked through.

Solnit claims that perhaps Wordsworth’s most revolutionary act was walking through France to Switzerland with a Cambridge friend when they should have been studying for exams. In effect, he subverted the tradition of the Grand Tour, in which young Englishmen of privilege would travel by coach to the principal cities of Europe to meet other persons of their own class and view the established artworks and monuments. Instead, Wordsworth and his college chum traveled 2,000 miles by foot that summer, some 30 miles a day. Their goal was not the cultural treasures of Italy but the ruggedness of the Swiss Alps. They did not benefit from arranged introductions to persons of note but from chance encounters with country peasants.

I think now of the many off-campus programs offered by colleges today. Those in Europe often still emulate the old Grand Tour in a dizzying round of coaches, hotels, cathedrals, theaters, and museums—the students sometimes little more than FedEx parcels in transit. And I think of an old professor of mine expressing his indignation with those who refused to stray far from the bus: “You don’t know a land until you feel it in your feet!” Wordsworth remains countercultural, though he has spawned a large subculture, both in Europe and North America. The same students who gladly submit to mechanized McTours of Europe are also likely to feel the lure of a backpack trip in the Adirondacks or the Rockies.

Solnit observes that Wordsworth made walking central to his life and art to a degree almost unparalleled before or since. It was how he encountered the world and how he composed poetry, a mode not only of traveling but of being. It is tempting to read Wordsworth’s 1802 “Preface to Lyrical Ballads” in light of Solnit’s observations. Wordsworth regrets the desire of his contemporaries for “gross and violent” entertainment, their “degrading thirst after outrageous stimulation.” He was referring to gothic novels, sentimental melodrama, and the recent advent of daily newspapers. (One can only guess what he would have made of ten minutes of previews at the local cineplex.) The human mind is capable of being more gently moved, he says, through his own particular poetry—a poetry grounded in his own (iambic) feet. Walkers of the world, unite!

Which they have, occasionally, from Chaucer’s Canterbury pilgrims to protest marchers in Tallahassee. Solnit’s more speculative but nevertheless interesting chapters seek to interpret the religious and political meanings of various kinds of corporate walking. She also treats such diverse subjects as English gardens, traditional courtship, mountaineering, church labyrinths, city pedestrians, suburban disembodiment, and the politics of open spaces. Her approach is consistently and rather convincingly feminist, marred on occasion by personal indulgence. Antinuclear activists who walk in procession at the Nevada Test Site rank as her holiest heroes, and indeed it seems that almost every street movement of the past generation, whether it be for gay rights or against the Gulf War, has her implicit blessing. The assumption appears to be that if people are willing to band together and walk for a cause, it must be a worthy one. (Conveniently, walkers for more conservative causes are generally absent from her discussion.) But this pervasive bias is a small price to pay for her wide-ranging and spirited analysis of walking in general—a subject of such universal human significance that we tend to take it for granted.

If Rebecca Solnit provides us with a theory of walking, John Leax, in Out Walking: Reflections on Our Place in the Natural World, provides us with a reassuring record of particular instances. Solnit is a young woman at home in the urban wild of San Francisco; she admits to being “raised as nothing in particular by a lapsed Catholic and a nonpracticing Jew.” Leax is a graying but still middle-aged man at home with his wife and garden and five-acre woodlot in western New York; he teaches English at Houghton College and attends the local Wesleyan church. Where Solnit dazzles with erudition and eloquence, Leax settles the reader with a plainspoken profundity. Solnit is a roller-coaster; Leax is a good walk with a friend. I am reminded of the differences between Annie Dillard and Wendell Berry. One constantly seeks to astonish with her amazing juxtapositions of disparate materials, while the other goes about his sober business, plowing the page behind his horses. Dillard and Solnit are essentially metaphysical poets, after the manner of John Donne; Berry and Leax are heirs of the careful decorum of Ben Jonson.

Out Walking is a small but significant intermixture of essays and poems, most of them set close to home. Stewardship is a constant theme, as is gratitude to God. Many of the essays are marked by a Wordsworthian sense of encounter. Typically, Leax is walking, or paddling, or just standing on his porch, and he meets with something that must be reckoned with. It might be something physically immediate: a newt, a heron, or a triple-stemmed ash. It might be something in his mind: a remembered passage from Aldo Leopold or William Stafford, or a recollected experience from his childhood in rural Pennsylvania. His richest essays combine all of these elements: something alive in the natural world and something alive in the memory of his reading and his past. The result is a full but understated sense of connection, or wholeness, a “momentary stay against confusion” that the reader may share.

“Of Humans and Turtles” is a good example of Leax’s method. It begins with the memory of turtles kept by his grandfather, who drilled their shells and tethered them to trees with wires. His grandfather says there is one live turtle in the river with the date 1860 carved on its back. Then Leax recalls his contempt and admiration for a turtle that ate his goldfish in a pond he dug as a teenager. And then a more recent experience—watching a man aim his car at a painted turtle on the road: “Beside me he swerved. I heard the shell pop and felt the spatter of the turtle’s life on my bare arms.” In the climactic moment of the essay, Leax is fishing waist-deep in the Genesee River and hooks something powerful. It swims toward him:

About twenty feet away, it surfaced—a plate-sized snapping turtle, a nightcrawler hanging from its jaw. I knew what it could do; I knew its beak could slice to bone. In its domain I felt soft and vulnerable. It came on relentlessly. I stopped reeling, dug my knife from my pocket, and cut it loose. Still coming at me, it dived. Too shocked to move, I watched the dark disk beneath the water. Totally other, as beautiful as the wild itself, it bumped my leg, tilted, and swerved away into blackness. For more than a moment, standing in that water, I was afraid, capable of killing what I feared.

This gives rise to a meditation on his fear, how it could turn to either hatred or awe, separation or belonging, and he bases part of his reflection on remembered lines from a poem by Frederick Morgan. Finally, Leax provides a postscript on the man who ran over the turtle: “This winter everyone was shocked when he emptied his company’s bank account, abandoned his family, and skipped town for parts unknown. Somehow, I wasn’t surprised.”

That would have been a good place to end the essay, but Leax gives in, as he often does, to the temptation to be more didactic than necessary. He continues for a paragraph on the nature of sin, the self, and the fall. However, as I have said before in another context, that is a small price to pay for the art and pleasure of the entire work.

Part of that art and pleasure are the poems included in Leax’s book, a dozen earnest “psalms” and a dozen rather playful haiku. The psalms give thanks for God’s creation, sometimes in a way that is reminiscent of Wendell Berry’s Sabbath poems. I particularly like “The Clever Trout,” in which the fish “swims as he / was made to swim,” the popple “stands / as it was made to stand,” and the jay “cries / as he was made to cry.” Also, “To Christ the Creator”:

With your eyes I see
the six-inch snake,
green as mint, soft
as a baby’s hand, curled
about my finger,
and love it with your love.

In this poem, I think John Leax walks where Jesus did.

John Suiter’s well-crafted book, Poets on the Peaks, might aptly be subtitled “I sat today where Buddha sat.” For it is not so much about walking as perching, and all about finding dharma. With sensitive prose and exquisite black-and-white photographs, he chronicles the experience of the Beat writers Gary Snyder, Philip Whalen, and Jack Kerouac as fire lookouts in the 1950s on lonely summits of the North Cascades in Washington.

And lonely it was. Kerouac, in his 63 days atop Desolation Peak near the Canadian border, had nary a visitor; Snyder and Whalen, in their multiple seasons, saw only a handful. For all three of these writers, their times aloft in the North Cascades were immersions not only in alpine wilderness but also in Buddhist writings and practice. (“Ever, ever be on the lookout!” says a medieval Rinzai master.) For Kerouac, this was the end of a relatively brief fling; for Snyder and Whalen, an important beginning. Snyder went on for many years of Zen apprenticeship in Japan, and Whalen became a Buddhist monk.

Poets on the Peaks suffers from a hagiographic impulse. Himself a Buddhist convert, Suiter paints a glowing triptych of saints’ lives, of stylites, of holy hermits on mountaintops, treating the Beat writers—Kerouac in particular—with palpable reverence. (The alcohol abuse, the drug addiction, the womanizing, and the narcissism of Kerouac would seem to make him an unlikely object of adoration. But there it is. And many of my students share it.) In the course of his research, Suiter revisits each fire lookout as one might journey to a shrine.

I am reminded of another recent book, Real Matter, in which David Robertson, a professor of English at the University of California at Davis, accompanies Gary Snyder on a climb of Matterhorn Peak in Yosemite, carefully retracing the steps that Snyder took with Kerouac in 1955. This is the ascent that Kerouac describes in his novel Dharma Bums, the same novel that ends with his lookout experience on Desolation. To be fair to Suiter and Robertson, it must be said that misplaced pilgrim awe is a very common human failing, shared, for example, by many Christian biographers of C. S. Lewis. One can worship at a mountaintop or worship at a wardrobe; it makes no difference.

I have not hiked to the fire lookouts in the North Cascades that Suiter so beautifully describes, but in the 1970s I quite often climbed Matterhorn Peak when I was a guide in the Sierra. I went back with a friend last summer, and was surprised to find the summit register overflowing with tributes to Snyder and Kerouac. It was a little more than I could take, considering that Kerouac, hung over and out of shape, had not even made it to the top. So I divided my entry, first quoting a line from Snyder’s poem upon his own last climb of the Matterhorn: “I am still in love.” Then I added, “Jack Kerouac is a weenie.” According to Suiter, that is what most people born and bred in the Northwest think of Jack Kerouac. Still, I’m glad my wife wasn’t there to beat me back down the mountain.

Paul Willis, a poet and novelist, is professor of English at Westmont College.

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.


Christian Smith

Hostility and condescension toward religion in the university


On the surface, it seems that the university is slowly but surely getting religion. New courses on religion and politics, development, and culture are popping up here and there. Newly proposed programs and institutes in Islamic and Jewish studies are generating faculty discussion. Top university presses are expanding the number of titles they publish on religion. Everywhere one turns, there seems to be a newly funded initiative for scientific research on forgiveness, spiritual progress, public religion, or some such topic. Journal authors who once ignored religion are now entering religion variables in their regression models and often finding significant results. It seems as if there are a few more new jobs in the sociology of religion at decent schools than there used to be. And, in my own experience, senior colleagues are no longer suggesting to me that I should study in a higher-status field than the sociology of religion. Maybe the overtly anti-religious culture that has dominated the American university since the late 19th century is slowly but surely dissipating, lo these many years after the philosophical discrediting of logical positivism. At one level, it appears to be.

Then again, I have found that, just below the surface of this apparent ever-widening openness to and interest in things religious in the academy, there abides a tenacious anti-religious sensibility among many faculty that is, for me, both amusing and wearying. Here I recount only a few examples from my experience lately.

A fellow academic from another department, who had recently attended an explicitly religious wedding service for a mutual acquaintance, stopped me in the hallway to vent his astonishment that some of our colleagues who had also attended the wedding revealed in the way they had participated that they might actually believe in God and heaven. “Here are these well-educated people,” he marveled, eyes rolling, “who actually are sitting there talking about ‘Lord this, and Lord that,’ and singing hymns about heaven and ‘up there’! Can you believe that?!” This person knows that I am a churchgoing Anglican with interests in religion and theology. Thinking that I must be missing a nuance that would explain this interchange in a way that would not be insulting to me, I mumbled something about the believability of such things being rooted in narratives that are not directly empirically verifiable but that can make perfect sense to people who embrace the stories. By the time I fully realized what was actually going on, he too seemed to be catching on; he started to backpedal by saying that he guessed he was really only emotionally reacting against the religious practices in which he had been forced to engage in public school as a child. Okay, I said. Whatever.

In a recent small meeting of interdisciplinary faculty convened to evaluate a particular piece of scholarship on schools and education, the discussion turned to the lamentable activism of “the Christians” who were promoting an abstinence curriculum for local middle-school sex education. (This, of course, violated the 12-year-olds’ inherent right to fully explore and express their blossoming sexual desires.) I suggested that, to be precise, we should not really be talking about “the Christians” categorically, but rather a fairly small group of local, politically active Christians. The others conceded the obvious point, and then immediately proceeded to cheerfully note the good news that they had recently heard: “All of the Christians recently left the district schools after losing the sex education struggle, and went into home schooling.” Was it really worth it, I wondered, to point out that the 30 to 60 percent of district resident families who must be Christian could not possibly have left the public schools in the last year? That probably no more than a handful of families had actually recently begun home-schooling because of frustrations with local sex-education policies? Nah. Why even try? Getting them to distinguish between Pat Robertson and “the Christians” seemed hopeless.

Instead, I tried to point out the intellectual incoherence of the piece of scholarship we were evaluating, which advocated a libertine ethic of unlimited sexual freedom for 12-year-old boys and girls. How would this square with the piece’s feminist imperative to protect girls from the exploits of sexually predatory boys? Do we really want to give boys full freedom to pursue any and all of their sexual desires? But that suggestion didn’t go anywhere either.

Recently, in the lobby of my building, I ran into an academic acquaintance from the West Coast. Although I was in a rush to get somewhere, she had to tell me about her Jewish boyfriend, who, looking to earn a few extra dollars teaching a sabbatical-replacement class, had recently interviewed at a local Christian liberal arts college. Apparently, the dean of the college had nervously reminded her boyfriend before he signed the contract that his was a Christian school. Puzzled as to how Christianity could possibly make any difference in college education, the boyfriend asked what exactly that meant. The dean, my storytelling colleague related with scorn (however accurately, I do not know), replied that this meant that there was no smoking on campus and students should not cheat. How stupid, she laughed, to think that Christianity might affect higher education in any way, and to imply that all non-Christians were running around the world smoking and cheating. The Moron. But, oh, she was late and had to run. Goodbye.

And then there was the occasion, not long ago, when I found myself part of an informal discussion of faculty evaluating the merits of a professional talk we had all heard. The talk had included a significant religious analysis, which sought to explain some of the variance in a particular outcome with a religious explanatory variable. In other words, religion matters in shaping human action. But not all of the faculty in the discussion were satisfied, and to some degree for legitimate reasons, in my view. After a slight pause in the discussion, however, one colleague disclosed what he took to be an enlightening fact about the presenter of the talk: “In an earlier discussion with this person, I discovered that he himself was personally raised to be religious.” Then silence. Yeah, so? I thought. (I can be a little slow.) Then I realized the point: because the presenter had a personal religious history, he was probably biased in his interest in religion and perhaps in his analysis of religious effects, and so his research and findings were suspect. The discussion moved on. Afterward, I emailed the faculty member who made this comment. I asked whether, if a personal religious identity had this biasing effect, having a given social-class location, gender identity, or racial or ethnic heritage would likewise inevitably make suspect a scholar’s research on social inequality, sex and gender, or race and ethnic relations. To this question I received no reply.

Life in the university goes on. And in the academy, particularistic identities are often what authenticate scholars’ authority and insight in research. Who but a woman or African American, after all, could understand the reality of women or African Americans? Except for a religious identity, which is uniquely suspect for providing not insight or understanding, but bias and questionable findings.

There are many more similar stories to be told—they are a dime a dozen, in fact. But these suffice to remind us that anti-religion is still alive and well among the university professoriate. Particularly anti-Christianity, which disdains a faith neither exotic nor “subaltern” enough to merit the admiration of intellectuals.

Such anti-religion in American higher education was launched in full force in the late 19th and early 20th centuries by confident apostles of secularization who sought to popularize the doctrines of positivism, epistemological foundationalism, and scientific objectivity. Of course, each of these perspectives has been thoroughly dissected for decades now by all manner of philosophers, historians, theologians, and social theorists. The corpse of logical positivism is badly decomposed, but its ghost still haunts the halls and classrooms of the academy. Some kind of emotional and political energy seems to keep this anti-religious spirit alive among many university faculty, despite the fact that the key intellectual assumptions that once legitimated it are now largely discredited.

Perhaps, I have been thinking lately, it might help to consider the academy’s anti-religious spirit as a case of what the late, renowned French sociologist Pierre Bourdieu called habitus. Seeking to explain a cultural dynamic that mediates between the objective and the subjective, between social structures and lived practices, Bourdieu somewhat opaquely described habitus as “a system of durable, transposable dispositions, … principles which generate and organize practices and representations that can be objectively adapted to their outcomes without presupposing a conscious aiming at ends or an express mastery of the operations necessary in order to obtain them.” Habitus, in other words, involves persistent and deeply internalized mental schemes that correspond to and reinforce particular social conditions, and that operate often prereflectively through human actors. Bourdieu’s jargon aside, the idea of habitus can help to conceptualize and name the reality of pervasive, durable, habitual, prereflective, master mental schemes that culturally reproduce themselves, and the larger social conditions to which they correspond, through actors who may not be fully conscious about the process.

In particular, the notion of habitus helps to explain some curious features of academic anti-religion. One is that none of the anti-religious faculty I know as individuals are nasty people out to make religious believers feel bad. They’re smart, interesting, morally serious, and well-intentioned. I prize my relationships with them. They’re not aiming to be anti-religious, anti-Christian. They don’t have to try. It just comes naturally to them, almost automatically, as if from a fundamental predisposition. Nor can their attitude be explained simply as evidence of “hardened hearts,” for outside the academy, even among people who have little use for Christianity, such disdain for religious faith is noticeably less common and intense. But with the idea of habitus in mind, we can see that in their anti-religion, these faculty are expressing a deeply interiorized mental scheme that is more prereflective than conscious, more conventional than intentional—yet one that has an immense power to reproduce a pervasive institutional culture.

Second, my own university is not especially anti-religious in character. In fact, from what I hear, it is much less anti-religious than many others. Given our Southern whereabouts, it hardly could be otherwise. Indeed, many faculty and administrators I know are practicing Christians or observant Jews. And yet somehow, despite all that, a persistent and sometimes oppressive anti-religious spirit penetrates, and at times even seems to pervade, faculty culture. It is our academic habitus.

Bourdieu intended the concept of habitus to explain the social reproduction of economic and status inequalities, which actually fits well with this interpretation of the anti-religious spirit in the university. For at issue, finally, is not simply different individuals’ personal assumptions and preferences about religious faith, but larger cultural and institutional struggles around the control of socially legitimate knowledge—with implications for the distribution of status and resources. What will count as fundamental categories and classifications of knowledge that define and interpret reality, that are either so obvious as to be unquestionable, or that will stick when questioned? Knowledge informed by religious commitments once held authority in American higher education, but was cast out as irrelevant, if not pernicious, more than a century ago. Today’s anti-religious habitus in the university works to make sure religious knowledge remains excluded and irrelevant.

But what about the problem of the last many decades’ intellectual discrediting of the ideas that originally legitimated the secularization of American higher education—logical positivism, strong foundationalism, scientific objectivity, and their intellectual kin? Again, the answer lies in habitus. After the original rationales and ideologies forming a culture have passed, the culture continues in turn to form those humans whose lives it subsequently encompasses. Their lives are thus patterned in consciousness and practice, ironically, in ways still shaped by the expired rationales and ideologies that are often explicitly rejected.

This gives rise to interesting cultural contradictions and inconsistencies—which may help to explain some of the odd characters one often encounters in the university: self-professed relativists who are absolutely devoted to the rightness of their moral commitments; hard-core social constructionists who insist that there really are binding standards of justice and rights that apply to everyone; champions of diversity and equality who would like nothing more than to see religion—especially “conservative” religion—disempowered and marginalized from public life, if not eliminated altogether. Sometimes it is the biggest proponents of self-reflexivity who seem least aware of their own incongruities. Sometimes it is the most progressive faculty who act as if philosophically we’re still living in the 1880s or 1930s, as if Comte and Spencer and Carnap were still quite credible. I suppose academic secularists still have the doctrine of materialism to hold onto, although I have found they are unwilling to concede the philosophical point that materialism is no less a matter of faith than is theism. There are some days when I just want to pull out my hair (what little there is of it) over the pervasive intellectual incoherence and contradictions in the academy. But for the most part, I find it all merely annoying. (No doubt I also have a few contradictions of my own.)

The truth is, I love my department and my university. I am privileged to work with very impressive colleagues in a genuinely great institution. Still, I am persuaded that American higher education in general would benefit intellectually and be more honest by putting to rest the outdated cultural heritage of the 19th-century apostles of secularization, and more fully tolerating, if not respecting and embracing, the variety of religious perspectives on life and the world represented not only in society but in fact among university faculty. Only time will tell whether the notably increased interest and openness to religion at the level of university courses, programs, institutes, research, and publications will have any impact on higher education’s deeper anti-religious culture. Habitus by nature is resistant to change. In any case and meanwhile, university faculty of religious faith can continue on in their callings, riding the waves of amusement and weariness with as much grace as is given them.

Christian Smith is a professor of sociology at the University of North Carolina at Chapel Hill, and editor of and a contributor to the forthcoming book, The Secular Revolution: Power, Interest, and Conflict in the Secularization of American Public Life (University of California Press, 2003).

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.


Alan Jacobs

Computer Control, part 3


This is the concluding article in a three-part series.

Article 1: Computer Control

Article 2: Life Among the Cyber-Amish

Dr. Gelernter:

People with advanced degrees aren’t as smart as they think they are. If you’d had any brains you would have realized that there are a lot of people out there who resent bitterly the way techno-nerds like you are changing the world and you wouldn’t have been dumb enough to open an unexpected package from an unknown source.

In the epilog of your book, “Mirror Worlds,” you tried to justify your research by claiming that the developments you describe are inevitable, and that any college person can learn enough about computers to compete in a computer-dominated world. Apparently, people without a college degree don’t count. In any case, being informed about computers won’t enable anyone to prevent invasion of privacy (through computers), genetic engineering (to which computers make an important contribution), environmental degradation through excessive economic growth (computers make an important contribution to economic growth) and so forth.

As for the inevitability argument, if the developments you describe are inevitable, they are not inevitable in the way that old age and bad weather are inevitable. They are inevitable only because techno-nerds like you make them inevitable. If there were no computer scientists there would be no progress in computer science. If you claim you are justified in pursuing your research because the developments involved are inevitable, then you may as well say that theft is inevitable, therefore we shouldn’t blame thieves.

But we do not believe that progress and growth are inevitable.

We’ll have more to say about that later.

FC

1

“Dr. Gelernter” is David Gelernter, a computer scientist at Yale University, who received this letter on April 23, 1995. “FC,” other documents from the same author explained, stands for Freedom Club—but despite the use of plural pronouns in this letter and many others, one person wrote the message: Theodore Kaczynski, otherwise known as the Unabomber. On June 24, 1993, Gelernter had opened “an unexpected package” that immediately exploded, wounding him severely. In 1998, Theodore Kaczynski pled guilty to the charge of being the “unknown source” of the package that injured Gelernter.

It seemed strange to many that Kaczynski should single out Gelernter, who is distinctive among computer scientists for his aesthetic sensibilities and his lack of enthusiasm for technology as such. Indeed, some of Gelernter’s warnings about over-reliance on computers can sound oddly like statements in the Unabomber’s notorious Manifesto. (A more likely antagonist would be someone like Ted Nelson, inventor and promoter of “hypertext,” who in his 1974 book Computer Lib/Dream Machines exhorted, “You can and must understand computers NOW.”) But perhaps it was Gelernter’s very humaneness that, to Kaczynski, made him so dangerous: by striving, in several books, to demystify computer technology and usage; by designing hardware and software that would be comfortable, functional, and unintimidating to ordinary users; by insisting that people with no formal training in computer programming could nevertheless come to understand at least the basics of how computers work, Gelernter might actually do more to solidify the place of computers in our everyday lives than the real “techno-nerds” ever could.

Kaczynski’s arguments stand in direct contradiction to the thoughts and concerns that have motivated this series of essays. Like Gelernter, I have assumed that the continuing, indeed the increasing, centrality of computers to our culture is “inevitable.” I suspect that Kaczynski secretly thought so too: he was certainly smart enough to know that the use of computers is not curtailed by the bombing of a computer scientist. If he had real hopes of lessening our dependence on computers, he would have attacked the machines themselves—or the factories that made them—just as the 19th-century Luddites destroyed the knitting machines that were putting them out of work. Kaczynski’s resort to mail bombs is really an admission of futility.

But I do not believe that the inevitability of computers equals the inevitability of theft. Theft is a crime, the computer a technological product; and the problem with technology is always to find a way to put it to proper uses while avoiding putting it to dangerous, destructive, or immoral uses. True, any knowledge I gain about computers will do nothing to halt experiments in genetic engineering or slow “excessive economic growth,” though I can imagine ways in which computer-literate others might contribute to those causes; I also think it safe to say that my refraining from computer literacy, or even computer usage, won’t be of any help. But within my own daily sphere of action, I believe that increasing my ability to use computers can be helpful to me. (And it can surely help me to preserve my privacy, though that goal is not high on my list.)

I was encouraged, as I began this project in self-education, to discover the very comment from Gelernter’s Mirror Worlds that angered Kaczynski. I was likewise emboldened by this statement from the engineer Henry Petroski: “I believe that anyone today is capable of comprehending the essence of, if not of contributing to, even the latest high technology”—though I think I would have felt considerably more emboldened if this sentence had come at the beginning of a 400-page book about computers, instead of a 400-page book called The Pencil. (It’s a wonderful book, though.) I have tried to record, especially in the second essay in this series, some of the rewards (as well as some of the frustrations) that I have received in my plunge into the world of computer technology, especially my encounter with Linux and the world of open-source software. But I am faced now with certain important questions that I have not even begun to address.

2

Looking back over the reading I have done in preparing to write this essay, I notice a widespread tendency to speak of the concerns raised by the increasing prevalence of computers as technological concerns; the assumption shared by almost all parties is that any “problem” following from the cultural dominance of computers is but a special case of what the philosopher Martin Heidegger famously called “the question concerning technology.” Henry Petroski emphasizes the links between pencils and computers: both are technological products. Kaczynski sneers at “techno-nerds”; some years later, as I noted in the first essay in this series, Gelernter would tacitly respond by writing that “to hate technology is in the end to hate humanity, to hate yourself, because technology is what human beings do.” (Thus the title of another of Petroski’s books: To Engineer Is Human.)

But I have become convinced that technology as such is not the issue at all.

We come closer to the heart of the matter when we think of computers in terms of information technology. Here the work of the philosopher Albert Borgmann is important. In his seminal book Holding on to Reality: The Nature of Information at the Turn of the Millennium, Borgmann identifies three types of information:

  1. Information about reality. In this category Borgmann includes many forms of “reports and records,” from “medicine rings” constructed by the Blackfoot Indians of Montana, and the altar Abram built to the Lord at Hebron, to many forms of the written word.
  2. Information for reality, or “cultural information.” This includes recipes and instructions of all types: “there are plans, scores, and constitutions, information for erecting buildings, making music, and ordering society.”
  3. Information as reality. This is the peculiar province of certain, especially digital, technologies: in it, “the paradigms of report and recipe are succeeded by the paradigm of the recording. The technological information on a compact disc is so detailed and controlled that it addresses us virtually as reality.”

The power that we have achieved to produce so much of this third type of information, and produce it so skillfully, concerns Borgmann deeply. He believes that throughout most of human history we have managed a degree of balance between “signs and things,” but in these last days have achieved a technology of signs so masterful that it “steps forward as a rival of reality.”

Borgmann’s book is excellent in many ways, but in his complaints about the dangers of a world dominated by technologically produced signs he often descends into a metaphorical vagueness—the sort of vagueness that tends to get a writer called a Luddite. For instance, he is somewhat unhappy about the creation of enormous and sophisticated databases of ancient Greek and Roman texts because he believes that, in the use of such databases, “texts get flattened out, and scholars get detached from their work.” But what Borgmann means by “flattened” and “detached” never becomes clear, at least to me.

In more anecdotal passages, though, his argument takes on meaningful flesh, and does so in ways that illuminate the issues I am concerned with. Considering Microsoft’s virtual version of London’s National Gallery (a cd-rom from the mid-’90s), Borgmann comments,

No amount of commentary can substitute for the grandly bourgeois and British setting of Trafalgar Square whose center is marked by the monumental column that supports Lord Nelson, one of the protagonists in Britain’s rise to world power. But it is not simply a matter of perfecting technological information to the point where users of the Microsoft Art Gallery can have an interactive full motion video that furnishes them with the experience of strolling through the museum and ambling out on Trafalgar Square to admire the Nelson column. The highly impoverished quality of such walking aside, virtual reality, however expansive, is finally bounded and connects of itself to nothing while the actual Gallery borders on Trafalgar Square adjoining in turn St. Martin’s Church and neighboring Charing Cross and so on in the inexhaustible texture of streets and focal points that is London.

The virtual gallery necessarily lacks the surround of the real (historical and physical) world: it cannot provide the contexts, contrasts, and surprises that that world offers.

To be sure, the virtual world offers contexts, contrasts, and surprises of its own—after all, a virtual Louvre is available for purchase also, which makes it possible to compare the holdings of two great museums without having to take a train through the Chunnel. On the other hand, as long as I am sitting in front of my computer I can’t take the trip from London to Paris. I can’t experience the important feeling of disorientation, so striking to almost every American (even before trains connected the cities), that derives from experiencing the geographical proximity of these two dramatically different capitals. I can’t know the neighborhoods in which the great museums are situated. It would not even be possible for any Londoner or Parisian, no matter how eager they might be, to be rude to me.

These experiences would be unavailable because I would be sitting at my desk, looking at my computer, and scanning the images produced by software that I purchased—images that can inform me about, but not allow me to experience, the different sizes of the paintings, or their full dimensionality, since the textures produced by different brush techniques are often invisible even on the highest-resolution monitor. (Such problems, of course, also place limitations on books and indeed all forms of mechanical reproduction of the visual arts.)

I can easily imagine the responses advocates of this technology would make to the points Borgmann and I are raising. In fact, I do not need to imagine them: I can simply consult a book like Multimedia: From Wagner to Virtual Reality, and find on almost every page celebrations of the immense aesthetic and informational capabilities of computer technology. Scott Fisher enthuses: “The possibilities of virtual realities, it appears, are as limitless as the possibilities of reality.” Lynn Hershman claims that digital works of art allow people to replace “longing, nostalgia and emptiness with a sense of identity, purpose and hope.” Marcos Novak imagines “liquid architectures in cyberspace,” in which “the next room is always where I need it to be and what I need it to be.”

I do not wish to dispute any of these claims; they are often interesting and sometimes compelling. Rather, I merely wish to note that the conflict between Borgmann and the celebrants of multimedia centers on two issues: first, the relative value of different kinds of information, and second, the importance of wide accessibility of information. Borgmann makes a strong case for the depth of the losses incurred when we forsake information about and for reality in favor of information as reality; and he shows how the ready accessibility of an overwhelming range of technological information creates the temptation always to settle for the instantly available. After all, it takes a lot more trouble and money to buy tickets and drive to see the Angeles Quartet than to sample my collection of their cds—and the inertia can be hard to resist even if I know that the “context” and “surround” of the live performance offer me a quality and quantity of experiential information not available on compact disc.

What Borgmann does not adequately address is the compensatory value of technological information for those who do not, and cannot reasonably hope to, have access to the “real thing”; nor does he give judicious assessment of the claim that the marshaling of diverse kinds of information on a single computer enables the user to produce and control context in a way that has its own distinctive value. And so the argument goes on—indeed, I believe that it is in its early stages, because I believe that as yet we have no conceptual vocabulary adequate to assessing these various and often competing goods.

Therefore, I don’t claim that I can even begin to answer the questions raised by the technophiles and their critics. But I do believe that in raising and considering them, I am led back to the unique role of the computer as an information machine—to my claim that the lexicon of “technology” doesn’t help us very much as we try to think well about these things. Borgmann has clarified the situation considerably, but to get to the heart of things, we need to consider the intellectual origin of the modern computer, in a paper written by the English mathematician Alan Turing in 1936.

3

The paper is called “On Computable Numbers,” and its chief purpose was to work through a question (called the Entscheidungsproblem, or “decision problem”) that had been raised a few years earlier by the German mathematician David Hilbert, and had been complicated by the work of the mathematical logician Kurt Gödel. I cannot explain this problem, because I do not understand it; but for our purposes here what matters is a thought experiment Turing undertook in his pursuit of the problem: he imagined a certain machine, which he would call a “universal machine,” though later it became known as a “Turing machine.” He wrote: “It is possible to invent a single machine which can be used to compute any computable sequence,” and one could say—indeed, many have said—that in imagining it Turing did invent it. He did not build a computer at that time, but he showed that such a machine could be built, and that the key to it would be the simplicity of its basic functions: “Let us imagine the operations performed … to be split up into ‘simple operations’ which are so elementary that it is not easy to imagine them further divided.” In fact, today’s digital computer chips, based as they are on a binary system where the only possibilities are zero or one, on or off, work in a manner so simple that it cannot possibly be “further divided.”

How operations so basic can be multiplied and combined until they produce the extravagantly complex results that we see on our computers today is explained, with wonderful clarity, by W. Daniel Hillis in his book The Pattern on the Stone; but what is so extraordinary about Turing’s little paper is his ability to intuit, long before our current sciences of chaos and complexity, that the more simple his imagined machine was, the more powerful and indeed universal it could become.1 It is the very simplicity of the Turing machine’s organizing structure that enables it, as Turing demonstrated, to perfectly imitate any other machine organized on similar principles.
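(It may help, for readers who like to see the gears, to watch a few such “simple operations” at work. What follows is a toy sketch of my own, written in Python; the names and the sample machine are my illustration, not Turing’s notation or anything from Hillis’s book. A machine is reduced to a table of rules, each rule saying what to write, which way to move, and what state to enter next; and the same few lines of interpreter will run any table handed to them, which is the seed of Turing’s universality.)

# A toy Turing machine interpreter (an illustrative sketch, not Turing's notation).
# A rule maps (state, symbol read) to (symbol to write, head move, next state).

def run_turing_machine(table, tape, state, halt_state, max_steps=10000):
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read as blank '_'
    head = 0
    for _ in range(max_steps):
        if state == halt_state:
            break
        symbol = cells.get(head, '_')
        write, move, state = table[(state, symbol)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells)).strip('_')

# A two-rule machine: append a '1' to a block of 1s (unary increment).
increment_table = {
    ('scan', '1'): ('1', 'R', 'scan'),   # slide right across the 1s
    ('scan', '_'): ('1', 'R', 'halt'),   # at the first blank, write a 1 and halt
}

print(run_turing_machine(increment_table, '111', 'scan', 'halt'))   # prints 1111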

Today’s computers come remarkably close to being universal machines in practice as well as in theory. My laptop fulfills the functions that, when I was in high school, were fulfilled by the following “machines”: typewriter, radio, stereo, television, film projector, calculator, ledger, address book, mailbox, tape recorder, chessboard, games arcade, clock, newspaper, magazine, encyclopedia, dictionary, thesaurus, slide projector—even library and museum. And that is of course a very incomplete list. This comprehensive ability to imitate—what I will call the computer’s “mimeticism”—is what makes the computer so different from any other form of technology; it is also what makes the challenge of responding wisely to the machine’s enormous promise so formidable.

In daily practice, it seems to me, the most important consequences of the potent mimeticism of the computer are two: the constriction of spatial experience, and the reduction of the play of the human body. When my computer becomes the sole, or at least prime, source for a great deal of information that once I would have sought from many different machines, located in many different places in my house, my community, or beyond, the meaningful space of my daily life is more and more often reduced to the size of my screen. As a direct result, sometimes the only parts of my body that receive meaningful employment in my daily labors are my eyes and my fingers—I don’t even have to turn my head to find out what time it is, nor, if I wish to listen to music (for example), do I have to get up, cross the room, find a cd, insert it in my cd player, and turn it on. I need do no more than shift my eyeballs and tap a few keys.

Interestingly, fictional dreams of “virtual reality”—starting, perhaps, with Vernor Vinge’s 1981 story “True Names” and proceeding through William Gibson’s Neuromancer (1984) and Neal Stephenson’s Snow Crash (1992)—imagine realms of purely mental experience: one lives in a digitally generated world, possessing an equally digital “body.” One’s real, material corpus lies motionless at some insignificant point in “meatspace” while one’s mind explores the Metaverse (Stephenson) or the Other Plane (Vinge).

Such fantasies enact, as many commentators have noted, a classically Gnostic longing for liberation from the body. And even for those of us who have no interest in experiential games of that particular kind, if we feel that our most important work is done at our computers, then our bodies’ needs—food, sleep, exercise, urination, defecation—can seem irritatingly distracting or even embarrassing. As though bodily functions were signs of weakness; as though thought alone dignified us.

Hence one of Vinge’s characters, an elderly woman, wishes to record her whole being in the bits and bytes of the Other Plane so that, as she puts it, “when this body dies, I will still be”—transformed, Vinge suggests, into a more exciting, elegant, and powerful self than her embodied self ever was or could have been.

4

Perhaps what I am saying here is little more than a rephrasing of Borgmann’s distinction between information about and for reality (which I get by moving physically about in “meatspace”) and information as reality (which the computer, by miming so many machines and therefore encouraging me to stay in front of it, wants me to be content with). But I believe I am pointing to something that Borgmann does not address except, perhaps, by implication: the relation between thinking and embodied experience.

In order to elucidate this point, let’s revisit that fruitful period of 60 or so years ago during which our computerized world was launched. If the work of Turing and Shannon laid the theoretical groundwork for the rise to dominance of the computer, some of the key imaginative groundwork was laid by a man named Vannevar Bush, who during World War II (while Turing was building computers to break the codes created by the German Enigma machines) was the chief scientific adviser to President Roosevelt. As the combat drew to a close, and as the technological achievements of the war years filtered into civilian life to find new uses, Bush understood that one of the great problems of the coming decades would be the organization of information. He believed that what was needed, and what indeed could be built, was a “memory extender,” or a “Memex” for short.

Bush’s Memex, which he conceived in the form of a large desk with multiple hidden components, would be able to store information of many types, visual and aural—words, plans, drawings, diagrams, voice recordings, music—and would possess an elaborate mechanism to file, classify, and cross-reference all that it contained. In short, Bush imagined a personal computer with an Internet connection (though in his prospectus the Memex was mechanical in a Rube Goldbergish sort of way, rather than digital and electronic).

What I find especially telling is the title Bush gave to the essay—it appeared in The Atlantic Monthly in July 1945—in which he described the Memex: “As We May Think.” Bush’s argument is that the technologies of warfare can be converted into technologies of knowledge, and that the conversion will enable us to think differently and better.

It strikes me that the hidden, and vital, connection between these two technologies is the principle of action at a distance. After the horrific trench warfare of World War I, military minds spent much of the next 20 years engineering combat machines that would enable armies to inflict damage on enemies too far away to be seen, much less fought with hand-to-hand. From the expanded use of hand grenades, to the increase in the range of artillery, to the development of plans for extensive strategic bombing, the methods of warfare during the second world war sought to act against the enemy from long range. (Of course, all parties to the war developed similar methods and machines, so none got its wish of being able to fight from a position of safety.)

Vannevar Bush seems to have translated this principle to the struggle to acquire and organize information: he imagines people of the future conquering their enemies—Ignorance and Disorder—without ever leaving their Memexes. Military technology and information technology, in Bush’s vision, turn out to have the same goals: the maximizing of efficiency and the minimizing of both risk and the expense of energy. It is a vision prompted by a belief in the scarcity of resources and the imminence of danger; and it has become the vision of the Information Age.

Because we believe in this vision, because we think (against all the evidence) that we need to conserve our intellectual resources—or, perhaps, simply because we are lazy—we listen eagerly to those who offer us machines that are more and more truly universal; and we become increasingly deaf to the call of other voices from other rooms. In such a climate, one is tempted to believe that what the Universal Machine doesn’t offer can’t be of such value that it would be worthwhile to get up from one’s desk and seek it out. I recall a forecast Jean-François Lyotard made in The Postmodern Condition, almost 20 years ago: “We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translated into computer language.”

In 19th-century Oxford, a little poem circulated featuring as its purported speaker Benjamin Jowett, translator of Plato and Master of Balliol:

First come I, my name is Jowett;
There’s no knowledge but I know it.
I am the master of this College;
What I don’t know isn’t knowledge.

The personal computer is the Jowett of our time: what it doesn’t know isn’t knowledge.

5

It was, I now see, an intuited sense of the dangers posed by the Jowettization of the computer that led me to conduct the experiment with Linux that I described in my previous essay: I was seeking (with apologies to the prophet Isaiah) to make the straight paths crooked and the plain places rough. If David Gelernter—as I noted in the first essay of this series—wants software that will make the computer “transparent” to our desires, I craved opacity. I had become so used to my computer, so disposed to exploit its resources and explore its capabilities, that I had begun to wonder, like one of the travelers in Bunyan’s Pilgrim’s Progress, if perhaps this smooth, broad road were a little too inviting, a little too easy to traverse; a feeling that intensified at those points when the tiniest of difficulties presented itself and, lo, Bill Gates appeared at my elbow, saying, “Here, let me help you with that.”

Some years ago the novelist John Updike wrote this telling reflection on much art, especially visual art, of the 20th century: “we feel in each act not only a plenitude (ambition, intuition, expertise, delight, etc.) but an absence—a void that belongs to these creative acts. Nothing is preventing them.” In contrast, “works like Madame Bovary and Ulysses glow with the heat of resistance that the will to manipulate meets in banal, heavily actual subjects.”2

Precisely: resistance. The mind needs resistance in order to function properly; it understands itself and its surroundings through encountering boundaries, borders, limits—all that pushes back or refuses to yield. Now, Updike believes that artistic greatness is often achieved by those who overcome such resistance; but the resistance must be felt, and forcefully felt, for that overcoming to be artistically productive. I am no artist, and I doubt that Updike would feel plenitude in anything I do; but his notion seems immensely relevant to my condition nonetheless.

A curious feature of this resistance is that it can only happen when each party is exerting pressure on the other; and as my computing life became smoother and more featureless, I became increasingly unable to tell whether this was because my computer was yielding to my desires or I to its. The more confused and uncomfortable a computer user is, the more enthralled he or she becomes to the computer’s preferences; such a user offers little resistance to the “defaults.” The issue of resistance is significant for every computer user, though in different ways.

So I plunged into the world of open-source software precisely because, in the words of the aficionado I quoted in my previous essay, “nothing in Linux works the first time.” I wanted to be puzzled; I wanted to be at a loss sometimes. I wanted to have to get up and go to the library or bookstore; I wanted to need to call a friend for help. Linux user groups—there are hundreds of them across the country and thousands around the world—periodically stage “Installfests,” where users bring their computers and software and either help or get help from others.

In short, running Linux often involves moving one’s body, expanding one’s spatial environment, and interacting with other people in a kind of ad hoc community. The resistance offered by the collaborative and decentered development of Linux, and its consequent lack of immediate “user-friendliness,” may create frustrations, but it also encourages the cultivation of certain virtues—patience, humility, teachableness—and opens the user to a range of benefits. I have described this project of mine as a quest for control, but in some ways it would be more accurate to describe it as a quest for a situation in which control is always being negotiated; where the boundaries shift because the forces of resistance wax and wane, on both sides.

However, the Linux experiment, I must admit, is one that I now find hard to sustain. Like most people, I have daily responsibilities that do not allow me to spend an indefinite amount of time fiddling with configuration files, or solving whatever the Linux conundrum of the moment happens to be. Sometimes I have to go back to what I know, whether I want to or not. And in this context the new Unix-based Macintosh os x begins to feel like a rather insidious temptation: whenever I start to feel a longing for “resistance,” I can always fire up the Terminal and use old-fashioned text-based applications, like the Lynx web browser, Pine for email, emacs for text editing—though whenever these pleasures ebb I can immediately switch back to the inimitable Mac eye-candy. If using Linux is like moving into a log cabin, using os x is like visiting a dude ranch: you know that whenever “roughing it” grows tiresome or uncomfortable, all the comforts of capitalist modernity are ready and waiting to meet your needs.

But still, I think, my experiment has reminded me that the ways we use our computers could be other—there are models of organizing and deploying information other than those our computers supply by default, out of the box. Even when I set aside my Linux box and return to my Macintosh, I find myself using that computer in a more self-conscious way, with a clearer sense of its capabilities and its limitations. And I find it easier to spend time away from the computer, reacquainting myself with some of the nondigital ways of thinking and learning and writing with which I grew up. I value my computer as much as, or more than, I ever have; but I feel that in some sense I have put it in its place.

And what is its place? As a tool: an unprecedentedly resourceful and adaptable tool, to be sure, but no more. It lacks purposes of its own. Those it appears to have are the purposes of its makers, and these may or may not be our purposes; we should ask whether they are. Many years ago Jacques Ellul called us to “the measuring of technique by other criteria than those of technique itself,” and this obligation is all the more vital when the “technique” involved is a universal machine that shapes, or seeks to shape, how we may think. Ellul even goes so far as to call that task of measurement “the search for justice before God.”

Now, it is very difficult to think, as one sits down before one’s computer keyboard, that what the prophets of Israel call shalom could be at stake. The incongruity is striking: “the search for justice before God” seems so noble, even heroic an endeavor; mouse, keyboard, and screen seem thoroughly, insignificantly everyday by comparison. Yet we are accustomed, in other (generally more poetic and “humanistic”) contexts, to hearing and affirming that God makes his will and character known through the ordinary. It’s just hard to believe that we can hear the still small voice in the midst of the technological ordinary: can God make himself manifest through the binary logic of silicon chips?

The effort to think spiritually about computers meets a great deal of resistance, we might say—something is preventing it. (Maybe many things are.) But, as Updike teaches us, resistance can be enormously productive, if we neither ignore it nor are daunted by it. If I had to say what was the most important lesson I learned from my plunge into the strange world of computer technology and open-source software, it was that I need to start thinking in the way Ellul counsels: to pursue “computer control” not in order to repudiate those machines but in order to harness them and employ them in the search for justice before God.

Alan Jacobs is professor of English at Wheaton College. He is the author most recently of A Theology of Reading: The Hermeneutics of Love (Westview Press).

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.


Agnieszka Tennant

Why the defeater of communism finds himself defeated by ex-communists—and why he and the American public haven’t noticed.


There wasn’t a cloud in the sky on the day I met Lech Walesa in Gdansk earlier this year. A gentle breeze stirred. I extended my hand to the slayer of communism. He reciprocated humbly, I thought, without eye contact. Stocky and gray-haired, Walesa greeted me in a traditional Polish way:

“Quickly, quickly, let’s get this over with,” he said. “I don’t have much time.”

I had been warned. Taxi drivers, retirees, priests, doctors, an owner of a Christian radio station—most of them former Solidarity members—all got red in the face when I told them I was going to interview Lech Walesa. “He squandered his opportunity!” most of them said, as if they’d had a chance to rehearse before I crossed paths with them in various Polish cities. The ensuing litanies accused Walesa of making deals with the communists, not to mention megalomania, greed, pride, verbal gaffes, and stupidity.

But, before I met with Walesa, I was too Americanized to take my unscientific survey of Polish public opinion at face value. The one exception was the Walesaisms, Poland’s equivalent to Bushisms. Walesa has always been known in Poland for his entertaining malapropisms and other miscues—such as when he exclaimed “I am pro, and even con!” or when he talked about “positive and negative pluses.” As to the more serious indictments, I attributed them to jealousy and a complaining spirit, Poland’s lingering inheritance from communism. Is it Walesa’s fault that he banked a hefty sum for his Nobel prize? Is it his fault that he’s received 30 (and counting) honorary doctorates from institutions ranging from Harvard through St. Ambrose University in Davenport, Iowa, to Pontificia Universidad Catolica Madre Y Maestra in the Dominican Republic? Surely he deserved them all for freeing Poland from communism.

Another, more powerful, reason why I didn’t want to see Walesa’s darker side had to do with my family’s history. On June 28, 1956, my grandfather built his own glass ceiling by joining the first big anti-communist demonstration in Poland, on the streets of Poznan. The citizens’ militia (police in communist times) killed many protesters, including two of grandpa’s friends. Following the strike, grandpa was reprimanded and a communist crony began to shadow him. His pay and benefits were cut, and his opportunities for promotion curtailed. Years later, other family members and relatives joined Solidarity. My father, a regional head of Solidarity, was among those who voted for the spirited electrician from a Gdansk shipyard to become the head of the union.

I left Poland in the early ’90s, just in time for my idealized view of Walesa and Solidarity to remain intact. My ignorance of Walesa’s current status in Poland resembled that of the American public.

During the Cold War and shortly after the Soviet Union’s collapse, when Walesa became Poland’s first democratically elected president in 1990, the American media devoted an enormous amount of coverage to Eastern Europe. It was easy. Even those who didn’t believe the Soviet Union was an evil empire had a clear framework for disagreement. The rise of Solidarity was a tidy, morally unambiguous story, whose actors belonged to two camps: bad (communists) and good (anti-communists).

The story was so neat and thrilling—genuinely so—that it’s surprising that Hollywood hasn’t made a movie about Walesa yet, with De Niro cast as the protagonist. The script is already written. It could come straight from a Time magazine profile: “Sacked from his job” for his involvement with opposition groups, the “fly, feisty, mustachioed electrician,” son of peasant farmers, climbs over the fence of the Lenin Shipyard in Gdansk to join the occupation strike in August of 1980. He soon becomes the leader of the strike and then head of Solidarity, scaring the hell out of the bad guys.

A particularly—and legitimately—moving character in this blockbuster would be the man who today remains one of the few upright participants in the story. Here’s how Walesa explained to me the role of Pope John Paul II: “After the martial law, when I told [various foreign politicians] that in Poland we had begun to overturn communism and that we would overturn it, no one believed me. … But our compatriot had become the pope [in 1978]. At that time, I had maybe 10 people per 40 million citizens in Poland who wanted to fight communism. The rest didn’t believe, they didn’t want to fight. Some of them were afraid, some were part of the communist system. The pope arrived in Poland [in 1979]. A year after that, I multiplied those 10 people into 10 million. … People began to believe in themselves, they stopped being afraid, and this allowed for the party we called Solidarity. If not [for] the pope—and the pope is faith—if not [for] our compatriot, we would not have accomplished this.”

The script from Time continues the story: After prime minister Wojciech Jaruzelski imposes martial law in 1981, Solidarity is outlawed and Walesa is either under arrest, “watched closely by secret police, or harassed” for the following seven years. Eventually, the protagonist beats the odds. The happy ending is the ouster of totalitarianism and the establishment of what many thought was going to be democracy, with the electrician becoming Poland’s president. This is where the movie would have to end.

Alas, the “happily ever after” never follows. What does follow is material for an ambitious indie movie, with a much lower budget. In the last decade, ambiguity set in. The moral roles became gray, complicated, murky. The communists and their former foes exchanged their black hats and white hats for multicolored and gray ones.

Lech Walesa is a perfect example of this.

He has given, as he told me, “hundreds, many hundreds” of lectures at American colleges and universities. He speaks to Americans about democracy and globalization, proposing “that we make the United Nations into a global parliament and make the u.n. Security Council into a global government,” governed by new principles and laws. With the Warsaw Pact now obsolete, nato should become the Ministry of Global Defense, Walesa told me. “We’d give them topics that are uncomfortable for the United States to handle on their own—terrorism, Israel and Palestine, anti-Semitism, and racism.”

“Each one [of my American lectures],” Walesa pointed out, “has ended so far with standing ovations. So, my message gets to them and I am liked there.” He may have been exaggerating the numbers, but not the warmth of his reception. When it comes to Walesa, Americans still live in the early ’80s. The invitations, the effusive audiences, the ever-increasing number of honorary degrees, as well as about 50 awards that include six Man of the Year titles from publications such as Time (1981) and Saudi Gazette (1989)—they all inadvertently feed his delusions of grandeur. One wonders if Walesa was joking when he once retorted to someone, “I know the law better than that. I have, after all, honoris causa from many a university.”

Even insightful chroniclers such as Peggy Noonan still seem to see things from an outdated perspective. In April, her account of her interview with Walesa ran on the first page of the first issue of The New York Sun. Although he lost two bids for re-election to an ex-communist (receiving a meager 1.01 percent of the vote in the last election) and hasn’t made any substantial news since, Noonan gushed over him with the absolving gullibility of a high-school girl talking about a crush. Walesa, “who is a great man,” has gone “from being one kind of romantic to another.” Having once been an “Extremely Important Person,” he has turned into a “Guy You Don’t Look Twice at On the Street,” she says. Oh, how the Poles wish this were the case! The truth is, his megalomania won’t let him return to ordinary life. He’s already volunteered to become the president of the United States of Europe, “because long ago, it was my idea to form [such a federation],” as he told me.

The Lech Walesa I met in Gdansk seemed far from a romantic. It appeared, in fact, that he has gone from Someone at Whom You Don’t Look Twice on the Street to an Extremely Self-Important Person. In spite of being in a hurry, as he told me in his greeting and repeated thereafter, he kept interrupting the interview by answering the phone. Here’s a condensed record of one chat with a caller:

“Yes. Hello. Yes. … It depends how many people are going to be there—if there are only five, then of course I’m not going to come. If there will be over a hundred, then I’ll show up. … But why? … Dang, I don’t know, I don’t like it. … Well, because he is not even a president. He’s a former president. Two political corpses? Why do I need that? … Yes, but what do I … what’s in it for me? You know how it is with morality these days.

… I don’t know. … Yes, but you know—I’m a tired, sick man. Why should I drag myself there? In the name of what? … Listen, I already take changes of climate, travel, uncomfortable airplanes, uncomfortable cars. … Okay, I need an exact list of who’s going to be there. Then maybe really I should. … Come on! Albania? [He gasps and laughs.] Albania?! … They’re political corpses. … If there aren’t any corpses, then maybe I’ll be there, okay?”

When Walesa—who became a political corpse, as a Polish idiom has it, after the presidential election in 2000—wasn’t on the phone, he answered my questions. Given his initial hostility or tiredness or both, I tried hard to be like Diane Sawyer. I nodded politely, maintained eye contact, smiled empathetically, and respectfully asked open-ended questions. But from the very start, he rarely looked at me, and instead talked to the table on his left or to the lamp, gesticulating impulsively.

I upset him by asking why only 1.01 percent of the Polish electorate voted for him in the last election, instead choosing to reelect an ex-communist. Did the people expect too much of him—or did he promise too much?

“Madam, I don’t know what you consider a victory and a loss,” he said. “If names are important, then Lech Walesa lost. But if what a name represents is more important—what Lech Walesa represented or headed—then all of it won. So you have to pick: Are you asking about Lech Walesa? … Or are you asking about programs and solutions? If you’re asking about solutions, then Walesa won. Walesa had said that we needed … different economic solutions … in Poland. Today everyone confirms that I was right. So I didn’t lose anything; I won everything.”

I remembered a taxi driver telling me that Walesa was a leader for a time of war, not for a time of peace. It’s not that all the solutions he touted were bad. I especially appreciate his decision to split Solidarity into several parties because, as he told me, “a monopoly within Solidarity would have been worse than a communist monopoly.” But quite apart from his political goals—a very mixed bag—his demeanor got in the way. For example, when his ex-communist opponent and now Poland’s president, Aleksander Kwasniewski, offered a handshake to Walesa in public, saying, “I extend my hand to you,” Walesa snapped, “I extend my leg!” The same combativeness that helped topple communism is in part responsible for Walesa’s eviction from the political scene.

“But your philosophy is this,” Walesa continued. “When you fight with someone, then you beat your opponent and you sit down in his place. I see it differently: I beat communism—you don’t doubt it, do you?” I didn’t. “The communists are implementing my program. They have to go through gymnastics to do it all . …So I am a double winner, but you say that I’m a loser.”

My question about the role religion plays in his private life—”for example prayer, Mass, or the reading of Scripture”—was another opportunity for him to launch an attack.

“I am a man of faith,” he said. “I treat religion seriously. Without religion, without faith, I wouldn’t do all I did—I wouldn’t even be talking to you—because why? I did my thing, and you ask me tricky questions. Why do I need this? Why should I play a mentor or a teacher? But because I am a man of faith, I know that you will make this [interview] into something, people in America may be helped by it, you will make money on it, you will do a good job. That’s why—because of faith—I am meeting with you, I am talking with you, I am enduring this. … I do everything possible in order not to go to hell. If I weren’t working, I would surely end up there.”

After he opened his heart like this about his faith in God, I had to tell him that I hadn’t intended to ask him tricky questions. I explained that I wanted to give him a chance to respond to the criticisms that are circulating about him in Poland, ones that I was sure he’d heard. He responded with a non sequitur that might have been uttered by one of Dostoevsky’s minor characters:

“Madam, it’s democracy—it’s weakness, it’s wisdom, it’s stupidity, it’s normality, and I’m not surprised. It would be worse if it weren’t here. Then you could tell me, ‘You hadn’t built democracy!’ because you would have no doubt. So, that’s what makes me happy. But I don’t like being jerked around.”

The taxi driver was right: Walesa was a man of war for a time of war. He had the guts, the spunk, the ideas, the charisma. But he was poorly equipped to face the challenges of peace.

Today, when it seems that his 10 million supporters have shriveled into 10, he is probably a lonely man. Too bad he isn’t taking the way out of this political dead end, the way that begins with a question.

Why, if his democratic ideas are good, have the Poles, in a cruel irony, chosen his opponents—the former communists—to implement them?

Walesa’s quick temper and lack of refinement are not the only reasons. Andrzej Nowak, Polish historian and editor of Arcana, a bimonthly on politics and culture, writes that Walesa conducted his presidential campaign under the motto of abandoning any compromise with the communists. But he didn’t follow through. He disappointed the voters with his decision not to dissolve the compromised “contract” Sejm, the main chamber of the parliament (in which Solidarity members’ involvement was limited by a deal with the ex-communists), and with his symbolic visit to the communist-run daily Trybuna Ludu, writes Nowak. Walesa’s veto of a law intended to strip the retirement privileges of former Stalinist police operatives—the same gang that fired at my grandpa and his colleagues in 1956—was equally demoralizing. The forced resignation of Walesa’s prime minister, Jozef Oleksy, after he was accused of cooperation with Russian agents, didn’t help either. Nor did the failure of the first Solidarity-based government, headed by Jan Olszewski.

Marek Cypryk, a scientist and contributing editor of Pigulki magazine, sums it up this way: “While the ruling Leninist party, PZPR, transformed itself into a Western European-style social democratic party, Solidarity failed because it did not manage to work out a program of popularly supported reforms. The politicians lacked experience, capacity to prioritize their causes and … to make compromises. Too few activists from the era of struggle knew how to switch into a political style of building. Amateur politicians lost to professionals, to people who knew how to read expectations of the electorate and how to manipulate it.”

This situation isn’t unique to Poland. Former communists have retained varying degrees of control over government, the economy, and the media in Romania, Hungary, Lithuania, Belarus, Bulgaria, and Ukraine. For the present and for the near future, at least, we lack an overarching interpretive framework for understanding the new situation, not only in the post-communist countries but in Europe generally. In the meantime, there always will be—however convoluted—the facts.

Agnieszka Tennant is an associate editor at Christianity Today magazine.



Agnieszka Tennant

A conversation with poet Adam Zagajewski


When, after September 11, The New Yorker published a poem, “Try to Praise the Mutilated World,” on its back page—a rare departure from the cartoons and parodies that usually occupy that space—it resonated with many readers. They posted it on their refrigerators, on bulletin boards and websites. The author who helped America heal is Adam Zagajewski, a cosmopolitan poet and essayist who writes in Polish. His most recent collection of poems, in which irony is balanced by wonder, is called Without End. It was published this year by Farrar, Straus and Giroux. In the spring, when Zagajewski was teaching creative writing at the University of Houston, Agnieszka Tennant gave him a call.

You wrote “Try to Praise the Mutilated World” before the terrorist attacks. What occasion inspired it?

No particular occasion, no single event. For me, it’s the way I have always seen the world. When I was growing up I saw a lot of ruins in postwar Poland. This is my landscape. Somehow it stayed with me, this feeling that the world is wounded or mutilated. The poem reflects a philosophical conviction more than an event.

Let’s talk about this conviction. In the last lines of the poem you speak of “the gentle light that strays and vanishes / and returns,” a description that beautifully captures hope. Where does your hope—hope about anything—come from, and what makes you its advocate?

It’s a very interesting question—one I never ask myself but I’ll try to answer nevertheless. The experience of someone who tries to live and write is very rich and encompasses the register of ecstasy, of joy. Years ago I was with someone in a taxi, and he asked me, “Do you believe in happiness?” and I said, “No, I don’t believe in happiness, I believe in joy.” I don’t believe in happiness as a constant state, but I do believe in joy. Which I always think has to correspond to something.

What does it correspond to in your case?

I am a religious person—a bad Christian, a slightly lapsed Catholic. From the pope’s point of view, I’m probably a very bad Catholic.

But maybe not from God’s point of view.

[Laughter] This we cannot know. I believe that our mental states are not purely subjective. They correspond to something that’s transcendent. We don’t produce everything in us. On the other hand, probably not every despair and not every joy is produced by something outside us. Far from that: we can swallow a pill and become this way or another. So it’s not an absolute rule.

If you were to name a poetic manifesto—a poem that best captures your approach to life—which one of your poems would it be?

In a way this little poem “Try to Praise the Mutilated World” is a manifesto. But in a way you can say every poem is this kind of manifesto. It would be almost sad if one single poem functioned in this way, because our creeds are mostly multifaceted. So writing poems is perhaps continuous manifesto-writing.

The poems are different articles of the manifesto?

Yes, it’s like a constitution—there are many paragraphs.

You were born in Lvov after the Soviets took it from Poland following World War II. You lived and studied in Krakow, Poland. You moved to Paris in 1982. Was it for political reasons?

I was a dissident in my country, but the reason that propelled me to leave Poland was not a political one. I fell in love with a woman. It sounds a little bit goofy, because when I moved, it was a historical moment when many of my compatriots were moving to the West for political reasons.

Whatever happened with that woman?

I married her, and we’re still happy together.

Since 1988 you’ve spent your springs in Houston, where you teach creative writing. Lvov, Krakow, Paris, and Houston are settings for your poems, which one could almost categorize geographically. You often go to Germany for readings. What parts of Polish, French, American, and perhaps German culture have you adopted into yourself willingly, and what aspects of your Polish background and your immersion in French and American ways have you consciously rejected?

In a way, I consider myself a traveler against my will. I don’t like travel so much, to tell you the truth. I’ve told you honestly why I moved to Paris. And when I came to Houston it was out of dire financial necessity. It wasn’t the decision of an aesthete who wanted to taste life in Texas. I don’t regret it, and I think I have learned something, but I can imagine myself living in a village in Poland—with a good library of course—and being exactly the same person.

In your times of nostalgia—”When Europe is sound asleep at last, / America will keep watch // over the poor mute world / mistrustfully, like a younger sister”—what do you find yourself missing the most?

I see a dramatic split in the so-called developed world, between the United States and Europe. What I love in America is the human energy; this continent is incredibly energetic, and people are driven by this energy, which is mostly friendly. Europe is a little sleepy. Western Europe is a beautiful museum. But of course when I am here in Houston, I miss the old stones in the villages near Paris. In every village there’s a tiny, beautiful Romanesque church where you can almost see the human hands that made it 800 years ago—it gives you a feeling of solidarity with the old generations and tenderness for them. Here, where people think that a house should last maybe 40 years, not more, you cannot have it. So I miss this oldness of things and the tenderness of the oldness. But when I go to Europe, I miss the energy of America—and the people. I have many friends here.

What is home for you?

Home for me is Krakow, my university town. I grew up in Silesia, the unattractive industrial region in the south of Poland. Krakow was the enchantment of my youth. I pilgrimaged there as a young man, and I fell in love with the city. Even now, after so many years and continents, when I go back to Krakow, I still feel admiration for this old city, which has been miraculously preserved. Even the population seems ancient, unlike the people in Warsaw and the cities with German influences such as Wroclaw and Gdansk.

What do you think is the place of poetry in modern culture, so bombarded by images, sound bites, and other forms of ceaseless stimulation?

This is a delicate question. Poets have a tendency to magnify the importance of poetry and to close their eyes to its real situation. I try to be quite sober. My late friend, the great poet Joseph Brodsky, was a tireless defender of poetry. I try not to defend poetry too much. But when people compare poetry to modern classical music and say that it’s one of the dead arts, of course you cannot expect me to agree. By pure chance, last night I heard Carl Dennis, who won the Pulitzer Prize in poetry this year and is a friend of mine, defending poetry by saying it’s the voice of a human being. I think that’s a very good point of view. Unlike other arts, lyric poetry offers—well, “offers” is maybe too capitalistic a word—but in poetry you hear a voice, you hear someone speaking. It’s not a narrative, it’s not a cascade of images, as in a film. Voice is most important in poetry. But I would add to Dennis’ words that this voice has to say something important. The voice is just a vehicle for it.

It strikes me that your life resembles that of Polish filmmaker Krzysztof Kieslowski. When, in the ’70s, Kieslowski’s films were part of the so-called Cinema of Moral Anxiety, your poems likewise bucked against the abuses of communism. After you and Kieslowski left Poland, both his cinematography and your writing began addressing universal, human—and not so much specifically Polish—experiences. What took you in this direction?

First, I too see this parallel. I met Kieslowski in Paris. We became very good friends, and I was immensely sad when he died so soon. We recognized in each other these kinds of similarities, so there was a solidarity between us.

What brought me to this change was a feeling that there’s something universal in poetry, and that dwelling on very local subjects and answers is a kind of castration for a writer. Not that you have to totally move away from your local subject. Even when I write about Houston, which happens very rarely, I still react as a Polish poet and person. But there are some traditional subjects in poetry—transcendence, death, beauty, time—that are given to all of us. I don’t think we should avoid these traditionally given subjects, we should just treat them in a fresh way. Any art is a combination of the very old and the very new.

Another reason why you remind me of Kieslowski—and of course the comparison can only be taken so far—is that you employ understatement in your portrayal of spiritual matters. God, for example, is mentioned, hinted at, or implied in several of your poems. He’s there, but he’s marginal, a little like the lanky young man who appears in eight of the films in Kieslowski’s The Decalogue—always watching but not intervening. To what degree is the portrayal of God in your poems a reflection of how you personally perceive him?

My poetry is more a poetry of longing than a poetry of assurance. The reason for this is in my metaphysics. I’m not one of the lucky ones who have a very strong faith and a solid assurance. My religious experience expresses itself in a quest that is never resolved. So this understatement is not an artistic take, not a trick—it’s something that corresponds to the way I experience God. But also in reading Dante, for instance, I’m afraid we’ll never be able—and now I’m switching to the plural to be safer—we’ll never be able to have such a firm, strong vision, which in Dante’s case, of course, was helped by St. Thomas Aquinas. Still, maybe I shouldn’t use the plural: I’m sure there are poets who have a different kind of assurance and faith than I do.

In “A Quick Poem,” you contrast civilization and progress (you in a speeding car on a highway) and the enduring values of remembrance, ritual, place (represented by monks singing Gregorian chant). You say, “In place of walls—sheet metal. / Instead of a vigil—a flight. / Travel instead of remembrance.” Am I right in detecting a warning in these words?

Yes, but the message is ambivalent. The warning and anxiety are there because the way we live is very fragile. The car is such a good symbol of the fragility of our existence. Think of the difference between a commercial that shows a brand new shiny car and a glimpse of a wrecked car on the side of the road, a shell of something that you hardly recognize—you see how destructible it is. But, at the same time, I also like driving. I’m not a staunch conservative who, like Pascal, thinks that all disasters stem from leaving your room. This poem is also a way of claiming modernity—that you can drive and listen to Gregorian chant, that you can have a meaningful life in modernity. I see both readings as legitimate. You know, I live this modern life—I fly and I drive, and I find joy in it. I try to make it meaningful.

You’re fluent in Polish, English, French, and German, but you continue to write in Polish and have your works translated, very skillfully, into English. Why do you keep writing in Polish?

For a simple reason. It’s the only language in which the inspiration comes. It’s almost impossible for me to write poetry without the spark of inspiration. As much as I adore English—it’s such a wonderful, strong, sensual, and rich language—it’s not my language. I know all the words in Polish in a way I will never know the idioms in English. Writing a poem and having to look up words in a dictionary every other line—it’s impossible.

When I arrived in the States, I was surprised by the stereotype of the dumb Polak in so-called Polish jokes. At first, I couldn’t understand why the word “Polak” has a derogatory meaning here—whereas in Poland it simply means “a Polish man.” When did you first encounter this, and what was your reaction?

To tell you the truth, I never encountered it. I came to Houston, which doesn’t have many Polish people in it, and I have good friends who never treated me with Polish jokes. But someone told me an anecdote about another Polish poet, the late Zbigniew Herbert. He spent a year or two in this country in the late ’60s. A dean of a university approached him at a party and told him Polish jokes, one after another. And after a while, Herbert stopped him, and said, “Excuse me. My English is very poor, so I’m not quite sure whether the word I’m going to use is right. But you are—and correct me if I’m using the wrong word—you are an idiot, aren’t you?”



Rodney Clapp

A forgotten episode in American terrorism


For white, middle- and upper-class Americans such as myself, the acute threat of terrorism right here at home is something new. Indeed, the media we run and predominantly staff routinely described the 1995 bombing of Oklahoma City’s Murrah Federal Building as the first significant case of terrorism in the nation’s history. Of course, the horrifying events of a year ago—all the more shocking because they were played out on live national television—even more decisively made terrorism in the homeland a clear, present, and ongoing danger. Suddenly, we all believe it can happen here.

Black Americans are no less threatened by and no more enamored with the likes of Osama Bin Laden than other Americans are. But there is a difference. For blacks, homeland insecurity and the all too palpable danger of terrorism are nothing new.

As opposed to acts of war, meant to directly subdue and conquer an enemy, acts of terrorism are more immediately symbolic and even theatrical. Terrorists may dream of someday seeing an enemy under their boot. But their more proximate aim is intimidation of the spirit and toxic pollution of the imagination. They will settle for emotional and spiritual subjugation, short of a more comprehensive and physical subjugation by outright war. So the diabolically spectacular events of September 11 struck at the heart of American faith in this nation’s global economic and technological superiority. It was a superiority we tended to think of as invulnerable. But now the skyscrapers and airplanes that so vividly embodied this superiority are also signs of our vulnerability; they indicate not so much the armor of the national self-image as its exposed underbelly.

Similarly, African Americans know the period after Reconstruction and into the early 20th century as one marked by virulent terrorism. Whites who would officially and wholly subjugate blacks were militarily defeated in 1865. But in 1866, the Ku Klux Klan was founded in Pulaski, Tennessee. It was originally nothing more than a diversionary social club. Not for long, though. From its intuitively haunting name to its ghostly robes and hoods, with burning crosses and menacing midnight raids, the Klan soon developed into a homegrown terrorist organization. Its threatening theatricality turned especially deadly with public lynchings, often preceded by highly symbolic torture involving blinding or castration. Between 1890 and 1930, nearly 3,000 Americans (mostly blacks) were lynched—if not always by the Klan, usually in the tradition of its cruel theater of intimidation. Bodies were frequently left hanging for days. Hanging ropes were cut into pieces and sold. Even more gruesome “souvenirs” included victims’ fingers or knuckles. A market developed for postcards of satisfied executioners (and their wives and children) posed with hanging or burned black corpses. All this was meant to spiritually subjugate “uppity Negroes” and keep them “in their place.”

Such instances of terrorism are readily found in American history books. But there are others more effectively buried and hidden from public consciousness. Like every Oklahoma schoolchild, I had my share of lessons in state history and civics. And as an undergraduate at Oklahoma State University in the late 1970s, one of my most memorable courses was in black American history. But I never heard of the 1921 Tulsa Race Riot before stumbling across it in some readings of “radical” history in the early 1990s. More recently, the worldwide trend toward consideration of reparations for past wrongs against ethnic groups, and the tenacity of Oklahoma’s blacks, have decisively disinterred Tulsa’s shame.

Within the last year, three books on the Tulsa Riot have been issued by major publishers; an earlier, pioneering but brief chronicle was published in 1982. Whatever their differences in detail, and some are important, these four accounts establish one thing beyond dispute: In the annals of significant events of terrorism in America, to the April 19, 1995, Oklahoma City bombing and the September 11, 2001, fall of the World Trade Center must be added the June 1, 1921, destruction of Tulsa’s African American community.

To appreciate why a vast tract of Tulsa caught fire and burned in the spring of 1921, it is necessary to take account of racial passions and fears that began smoldering in the 19th century. Oklahoma did not attain statehood until 1907. Before then it was a territory not clearly claimed or regulated by any state or nation. In the 1830s, it was, frankly, a dumping ground. When the Five Civilized Tribes were driven from the southeast United States, they were herded along what came to be called the Trail of Tears and into the Oklahoma Territory.

The Five Civilized Tribes held black slaves. Treatment of slaves varied from tribe to tribe, but the Creeks and Seminoles allowed intermarriage and, after Emancipation, adopted African Americans into their tribes. Creek and Seminole freedmen voted, owned land, attended tribal schools, and received equal tribal justice. Predictably, these conditions drew other freed slaves to the Oklahoma Territory. And by the closing decades of the 19th century, blacks in the Territory enjoyed far more rights and privileges than blacks in the official U.S., North or South. As James Hirsch notes, the Territory became a “bastion for black nationalism” and, beginning in the 1880s, there were indeed hopes that Oklahoma would become the nation’s first black state. Blacks across the continent thought of the Oklahoma Territory as “the Promised Land.”

One people’s promise, however, can be perceived as another people’s threat. “If the black population could be distributed evenly over the United States,” The New York Times editorialized in 1890, “it would not constitute a social or political danger. But an exclusively or overwhelming negro settlement in any part of the country is, to all intents and purposes, a camp of savages.” With the nearing of Oklahoma statehood, “the Negro Question” grew hot in the Territory. Oklahoma Republicans of the day divided on voting and other equal rights for blacks. Democrats vigorously opposed civil rights for blacks and argued that “Republican success means African domination”—this despite the fact that, by 1900, Oklahoma Territory’s whites outnumbered blacks 10 to 1.

One man became emblematic of the majority sentiment. Democrat William H. “Alfalfa Bill” Murray was elected president of the Oklahoma state convention in 1906. Alfalfa Bill, who would continue for decades as a major force in Oklahoma politics, was notoriously racist. He thought blacks were inferior to whites because of their putatively lesser “brain weights” and said blacks could thrive as porters, bootblacks, and barbers but never as lawyers and doctors. He praised Hitler for “being right in his science” about the Jews. (Decades later an Oklahoma University professor stumbled across an autographed copy of Murray’s book The Negro’s Place in Call of Race. Alfalfa Bill had inscribed on its endpages, “I hate Indians too.”)

Such bigotry quickly translated into policy, so that the very first legislation passed by Oklahoma’s state senate segregated whites and blacks, and later Oklahoma would become the first state to segregate telephone booths. But Oklahoma’s blacks had enjoyed a few decades of comparatively decent levels of freedom, education, business establishment, and self-government. They were not easily put entirely back in “their place.” An oil boom in the first decade of the 20th century turned Tulsa from a backwater to a rich and bustling center of the petroleum industry (a 1909 city directory listed 126 oil companies headquartered there). Black Tulsa, known as Greenwood because its business district lined its namesake avenue, enjoyed a degree of the entire city’s prosperity. By 1921, Tulsa was a city of nearly 100,000 people; Greenwood’s population had grown to 11,000. Greenwood boasted two black schools, a hospital, two newspapers, 13 churches, three fraternal lodges, two theaters, and a public library. Its Stradford Hotel, with three stories and 54 rooms, was believed to be the nation’s largest hotel owned and operated by an African American—and its opulence matched that of any hotel in white Tulsa. There were also barbers, real estate agents, lawyers, and a surgeon with a national reputation. As far away as Chicago, Greenwood was seen as the peak of American achievement for blacks. It was sometimes referred to as the “Negro Wall Street.”

Educated, justifiably proud, articulate, and ambitious, Greenwood’s citizens were early leaders in struggles for civil rights. At the same time, the country as a whole was seeing a resurgence of its racism. Bigotry was intensified by World War I and its attendant insecurities. Nativist movements such as the KKK revived. Historian John Hope Franklin has called this “the greatest period of inter-racial strife the nation ever witnessed.” So, even as African Americans were meeting with a degree of success and getting more assertive in their call for civil rights, hotels and restaurants in Northern cities that had served blacks began to turn them away. Some Southern newspapers ran ads inviting the public to witness the burning of live Negro men. In East St. Louis, all too typically of the horrible violence across the country, marauding whites shot a black infant and tossed its body from a flaming building.

Such was the racial climate when, in the waning days of May 1921, a 17-year-old, white, female elevator operator in downtown Tulsa claimed that a black man named Dick Rowland had attacked and attempted to molest her. The scenario strained credulity—what sane black man would accost a young white woman at the most public spot of a busy office building, during peak business hours, in white Tulsa?—and later the woman would retract her accusation and Rowland would be acquitted. But at the time, Tulsa did not stop to reason: ugly passions and irrational desperation prevailed. Rowland was jailed. On May 31, newspaper accounts assumed his guilt. That evening, hundreds, and perhaps as many as 1,500, whites gathered outside the courthouse where Rowland was incarcerated.

Greenwood’s citizens feared the worst and vowed they would not passively allow one of their own to be lynched.

A delegation visited the jail and was assured by the sheriff that he would not release Rowland from protective custody. But as the evening deepened, the white crowd swelled and grew increasingly restive. A year before, a white man had been snatched from a Tulsa jail and lynched by a mob. By all accounts, jailers had given up their captive to the vigilantes without a struggle; by some accounts, policemen directed traffic as spectators flocked to witness the hanging. If even a white man was not safe from vigilante violence in a Tulsa jail, Greenwood’s leaders reasoned, how could they believe Rowland—a black man accused of trying to rape a white woman—was secure?

Late in the evening, around 75 black men, many of them veterans of World War I, took up weapons and marched in formation on the jail. Someone in the crowd tried to snatch a pistol from a would-be Rowland protector, and the gun discharged. The white crowd instantaneously transformed into a mob. Blacks were knocked down, hit, and stomped to death by knots of whites. Drawing on their military training, the black men broke into ranks that alternately fired on the pursuing mob and fell back toward Greenwood.

Facing weaponry wielded with some efficiency, the white mob diverted attention to its own arming, breaking into sporting goods and hardware stores and looting guns and ammunition. Greenwood men hunkered down on the border between white and black Tulsa, defending their community with gunfire, and nightfall stalled the white advance for several hours.

Through the night the white mob regrouped, gathering weaponry and ammunition and strategizing—at least partly under the direction of city officials. Tulsa’s police chief deputized many rioters and even armed some. A light-skinned NAACP official, masquerading as white, said that he was deputized with only three questions, then told he could “go out and shoot any nigger you see and the law’ll be behind you.”

Just after 5 a.m. on June 1, a siren sounded and the mob invaded Greenwood. Now armed, and with far superior numbers, the white vigilantes pressed into black Tulsa. They shot many blacks and incinerated the corpses of some. One body was tied to a car and trophy-dragged through the downtown. The mob forced all blacks out of their homes, looting clothing, jewelry, phonographs, curtains, and the like, then torching the houses. The Stradford Hotel and the Dreamland Theatre were only the most illustrious of the many businesses destroyed. As word of the invasion spread, thousands of blacks fled. Some would relocate with family or friends as far away as Kansas City; many were found in the countryside outside Tulsa over the first few days following the riot.

A last redoubt in Greenwood was Mt. Zion Baptist Church, a brick structure comparable in size and beauty to any church in white Tulsa. Mt. Zion’s construction had been completed only 57 days before. An effective defense was mounted with gunfire from its windows and belltower. By mid-morning National Guardsmen from Oklahoma City rather belatedly moved into Greenwood.

The Guard would later say it thought it could most effectively protect blacks by rapidly evacuating them. Black survivors say Guardsmen simply protected and assisted the vigilantes. By all accounts, Mt. Zion was vanquished only when Guardsmen backed a flatbed truck close to the church, then uncovered and began firing a belt-loaded machine gun. Shards of bricks and mortar flew from the belfry. The big gun cut jagged holes in the side of the building. Rioters were afforded cover to rush Mt. Zion with torches and kerosene. The defenders evacuated their breached fortress and the church burned.

Dozens more stories have been passed down from that terrible day. Tim Madigan’s account especially, and often devastatingly, draws from interviews with eyewitnesses. James Hirsch also interviewed many survivors, but he leans more cautiously on documented accounts. This juxtaposition in itself could engender profitable consideration of the merits and limits of oral and of documentary history. Black culture is orally gifted and may honor oral accounts more readily than much of white culture. In any event, Greenwood did not have its own police force or other official sources of documentation. Its newspaper offices were destroyed that day, and its journalists were literally running for their lives. So, no matter how fair historical researchers may be, the documentary account comes largely from white sources. And even here crucial police records and (white) newspaper accounts have been expunged or lost. Some significant points of contention will never be settled.

It is argued, for instance, that the Tulsa Tribune on May 31 ran an incendiary front-page editorial headlined “To Lynch a Negro Tonight.” But the editorial in question has been (quite suspiciously) clipped from all archived or otherwise found copies of that day’s paper. Certainly no less significantly, black witnesses said airplanes flew over the riot, firing on the community and even dropping firebombs. White authorities said the airplanes were unarmed and were merely surveying and assessing the scope of the conflagration. Airplanes bombing American citizens on their own soil would (as September 11 too graphically demonstrated) be especially heinous and terrifying. Hirsch’s careful weighing of the matter persuades me, at least, that airplanes probably did not bomb Greenwood. But some Greenwood survivors and their descendants are convinced they did.

Out of this climate of controversy and nondefinitive evidence, it is difficult, even impossible in some respects, to exactly assess the horror of June 1, 1921. What is known beyond debate is that an entire community was destroyed that day; 1,256 houses were burned in a 35-square-block area. The burned property, including businesses, was valued at $1.5 to $1.8 million (more than $14 million in 2000 dollars). We also know that about 6,000 Greenwood citizens were forced into detention camps in the days after the riot.

Much less certain is a more significant statistic: the number of people killed. Death estimates range from about 30 to 300 people, probably at least 75 percent of them black. The death count is in any case substantially higher when one takes into account those who died from disease and exposure as a direct consequence of the destruction. Immediately after the riot the Red Cross erected nearly 400 army tents for survivors, and from June 18 to June 28 heavy storms flooded the Arkansas River. Rain blew down tents and soaked bedding, clothing, and firewood; two feet of water stood in some streets. Pneumonia, typhoid fever, malnutrition, and smallpox were rampant, and resulted in uncounted additional deaths.

By even the most conservative estimates, however, the Tulsa Riot is clearly one of the most destructive non-wartime attacks on an American community in this country’s history. Indeed, the nature, enormity, and evil of the tragedy raise the question of what we should call it, now that we are belatedly remembering it. It is commonly denominated a “riot,” but that word fails to capture the intentionality and not altogether spontaneous demeanor of the event. We might recognize it as a “massacre,” but that may downplay the active resistance and defense of Greenwood by its very capable citizens. Arguably, what happened that day was a pogrom or even, to use a more up-to-date word, an act of ethnic cleansing. After all, there is evidence—documentary and otherwise—that many vigilantes intended to drive blacks from Tulsa. Surviving photographs of Greenwood under billowing black smoke include some with scrawled legends such as “Runing [sic] the Negro Out of Tulsa.”

Whatever we call June 1, 1921, surely it was terrorism. And it was an act of gross injustice, still awaiting anything like appropriate restitution. Only one man served jail time: a black sentenced to 30 days for carrying a concealed weapon on the night of the riot. Though some Tulsa officials, including a prominent judge and the head of the Chamber of Commerce, said the city should rebuild Greenwood, city officials soon adopted the tack that the event was an “unlawful uprising of Negroes” and the city was liable for no damages. (The federal government never entered the debate.) The day after Greenwood burned, Tulsa’s mayor ceded all responsibility for relief work to the Red Cross, and the city and county provided only a paltry $200,000 for relief efforts.

In the late 1990s, Oklahoma state legislators appointed a study committee. While the legislators ultimately disavowed the interpretation of June 1 as a “Negro uprising” and admitted in vague terms the harm of racism in Oklahoma’s history, they rejected the study committee’s recommendations for monetary and scholarship reparations. Instead, each survivor was given a gold-plated medal bearing the state seal. No other attempts at reparation are in view.

Nevertheless, Alfred Brophy, a legal historian, makes a compelling case that if any event justifies reparation, it is that of June 1, 1921. Brophy allows that reparation debates are often complicated because state or municipal liability cannot be resoundingly demonstrated. But, with such actions as deputizing vigilantes and the National Guard’s machine-gunning of a church, it is indisputable that there was official culpability for much of Greenwood’s destruction. Brophy also acknowledges that reparations often present difficulties in terms of who exactly deserves any monetary restitution. Opponents of reparations argue, for instance, that today’s descendants of Native Americans or African Americans were not directly harmed by events centuries prior. But in the Tulsa case, some 100 black victims of the riot are still alive.

Still, Brophy takes seriously the argument against reparations that wonders how fair it is to impose on a bygone era our contemporary, and so anachronistic, standards of right and wrong. But again the Tulsa case cannot be thus dismissed. White Tulsans, in newspapers and official documents, acknowledged the shame of the event mere days or weeks after Greenwood burned. The injustice was recognized as an injustice at the time it occurred.

Given the cogency of Brophy’s strictly legal arguments, it seems to me that the moral case for Greenwood reparations is even more impressive. With official liability demonstrable, with some who suffered damages still alive, and with standards applicable to the time indisputably relevant, I can imagine no compelling argument that serious, reasonable, and adequate restitution has been made. Leaving the events of June 1, 1921, unrestituted really amounts to an argument, in deed if not in word, for a nihilistic morality of might makes right. Greenwood’s survivors, as members of a minority, cannot legally or politically compel reparations. But how can other Oklahomans—and other Americans—not impair our credibility to speak on behalf of the rule of law, and against the gross wrongs of all terrorism, if we accept that the Tulsa Riot has been fittingly redressed by half-apologies and a passel of gold medallions?

Perhaps it is time for American churches to step up to the plate, to speak out, and even, if no governmental institution will act, to imagine ecclesial forms of restitution. In 1921, white Tulsa’s churches, with the exception of the women of First Presbyterian who nursed in detention camps, merely stood by at best. At worst they inveighed against “vicious” black “agitators” like W.E.B. DuBois, or sermonized that the stage for the riot was set by such immoralities as public dancing and uncensored movies.

Their witness may not be the best model for us. Instead, when it comes to Christian and moral responsibility, we might do better to look to the example of Greenwood’s Mt. Zion Baptist Church. As mentioned, the new building had stood for barely two months before it was destroyed. A $50,000 loan, made by a bank in white Tulsa, remained outstanding. After their building was leveled, Mt. Zion’s congregants covered the church basement and met and worshiped in their Sunday best on a dirt floor, over plank pews laid across sawhorses. There they also collected tithes to meet their debts. Twenty-one years later, they paid off the loan.

Rodney Clapp is editorial director of Brazos Press. Among his books are A Peculiar People: The Church as Culture in a Post-Christian Society (IVP) and Border Crossings: Christian Trespasses on Popular Culture and Public Affairs (Brazos).

Books Discussed in this Essay:

Alfred L. Brophy, Reconstructing the Dreamland: The Tulsa Riot of 1921/Race, Reparations, and Reconciliation (Oxford Univ. Press, 2002).

Scott Ellsworth, Death in a Promised Land: The Tulsa Race Riot of 1921 (Louisiana State Univ. Press, 1982).

James S. Hirsch, Riot and Remembrance: The Tulsa Race War and Its Legacy (Houghton Mifflin, 2002).

Tim Madigan, The Burning: Massacre, Destruction, and the Tulsa Race Riot of 1921 (St. Martin’s, 2001).



Jana Riess

The past and future of Christian Science


Christian Science has attracted a good deal of attention in the last decade, most of it negative. Alarming court cases have featured heart-wrenching stories of Christian Science children who died of preventable illnesses and treatable injuries while prayerful parents and practitioners looked on. Rumors of financial battles and personality conflicts at the Mother Church in the early 1990s did little to elevate Christian Science in the public eye.

Such controversy is not new to Christian Science, which has endured its naysayers ever since Mary Baker Eddy first declared herself healed after falling on the ice in Lynn, Massachusetts, in 1866. In the last five years, a number of books have attempted to plumb Christian Science’s colorful past and ascertain, from there, its hazy future. Gillian Gill offers fresh perspectives on the faith’s unconventional founder; Caroline Fraser exorcises personal demons in an acerbic exposé; and Barbara Wilson establishes a theological and literary standard with an autobiography of her loss of faith.

Gillian Gill’s Mary Baker Eddy is the best biography to date. In her preface, Gill suggests that both the hagiographic and the unflattering portrayals of Eddy “are implicitly agreed on one essential point—that she deserved no personal credit for anything important she did.” Sympathizers have depicted Eddy as the vehicle for the birth of Christian Science, while detractors consider her “merely very lucky and very unscrupulous.” Gill, in contrast, employs her 700-plus pages to evaluate Eddy as a leader whose “main problem was that she had an extraordinary talent for life in the public sphere but was barred from entering it.”

On the surface, Eddy’s rise to prominence is the quintessential Horatio Alger story, but Gill serves it up with some distinctly feminist twists. Indeed, Gill openly esteems the strength that other biographers censure, seeing Eddy’s lifelong struggle against feminine dependence (especially in the forms of frequent illness and destitution) as her most remarkable quality. Eddy palpably threatened 19th-century womanly ideals. She married three times, mostly unsuccessfully, and was forced by indigence and widowhood to give up custody of her son. She refused to retreat into the private sphere of hearth and home, instead becoming one of a handful of women in American history to found a lasting religious movement. She defied the Victorian female life sequence, Gill concludes, by being “conventional in her twenties, weak in her thirties, impoverished and sick in her forties, struggling in her fifties, exercising her talents at last in her sixties, famous in her seventies, [and] formidable in her eighties.”

Gill acknowledges Eddy’s tendency to rewrite her own past, suggesting that Eddy thereby refused to be regarded as “the passive victim of circumstances.” So, for example, Eddy exaggerated the thoroughness and speed of her healing from her famous 1866 fall on the ice. Although she later claimed she “rose from [her] bed and … commenced [her] usual avocations” on the third day after the fall, contemporaneous evidence shows her second-guessing her own healing two weeks later, and “slowly failing” in her health before her eventual full recovery.

On some points Gill is frankly critical. She considers Eddy’s poetry “dreadful” and concedes that Eddy managed, through her intense demands on her closest followers, to alienate many of them. (Her craving for genuine familial affection often caused her to make poor choices in her most intimate friends, especially in the last decade of her life.) Nevertheless, Gill’s biography ventures farther than any other to demonstrate that by removing herself from Boston and the day-to-day workings of the church, Eddy took great pains to discourage a “cult of personality” in Christian Science—a determination that sharply distinguished her from most of the figures who founded religious movements in the spiritual hothouse of 19th-century America.

Gill’s study of Eddy is not perfect. She could have placed Christian Science more accurately within the larger religious context, and in particular she might have traced the influence of New Thought, a diffuse 19th-century movement that emphasized spiritual healing. The work also comes uncomfortably close to the genre of psychobiography, with Gill speculating overly much about the motivations of her subject. At times, her undisguised admiration overcompensates for the polemical nature of most previous biographies, but she is extremely thorough with the historical record, allowing all available documents to come to light. Gill’s biography will stand the test of time as the first major study to mine the considerable scholarly possibilities that exist between church-sanctioned hagiography and muckraking exposé.

The virtues of Gill’s biography are perhaps best appreciated when it’s read alongside Caroline Fraser’s book, God’s Perfect Child: Living and Dying in the Christian Science Church, the most damning critique of Christian Science to appear in 90 years. The polymath Martin Gardner, who has himself authored a scathing biography of Eddy, applauded Fraser’s book in The Los Angeles Times Book Review as “a skillful account of Mary Baker Eddy’s deluded, discombobulated life” and “the most powerful and persuasive attack on Christian Science to have been written in this century.” In a rather contradictory review, Publishers Weekly acknowledged that Fraser’s study was “a rousing exposé” but also called it “an evenhanded historical analysis.”

Exposé, by its very nature, is never evenhanded. Fraser’s agenda is clear from the outset, in her early declaration that “Christian Science has killed and maimed and materially damaged people,” including, she relates, a childhood acquaintance who died of a ruptured appendix. Fraser learned of his death while in college, and the discovery forced her to admit that she “had never seen the healings that Christian Science promised. I had heard people talk about their healings, but I had never seen anyone healed at all, not anyone in our entire church.”

In the preface Fraser, an evocative and skillful writer, describes her Christian Science upbringing. She portrays her mother as “a classic fair-weather Christian Scientist” who took birth control pills and slipped Fraser orange-flavored children’s aspirins when the girl felt ill. Fraser’s father, however, was a stern and upright Scientist who refused to have a radio or a lifejacket on his sailboat, “because we knew we would never have an accident requiring the use of one.”

Fraser’s personal history is only made explicit in the book’s opening pages, though her negative experiences color all of the subsequent historical sections of the book. Her analysis of Eddy relies on secondary sources; she has borrowed liberally from copious negative biographies (including the infamous “Milmine” biography that is partially attributed to Willa Cather). Indeed, Fraser acknowledges in the book’s opening line that she never sought access to the church’s archives in Boston to conduct original research. Her 19th-century sections show all of the marks of a prejudiced historian; for example, she appropriately criticizes Eddy’s reminiscences of her early life, written more than half a century after the events in question, but then accepts wholesale the ex post facto testimonies of neighbors who claimed they had known Eddy as a child.

By drawing almost exclusively on anti-Science biographies (except the one written by Scientist Robert Peel, which she employs only to ridicule), Fraser unquestioningly accepts the familiar litany of charges laid against Eddy. In her early life, this story goes, Mary Baker Glover Patterson Eddy was the equivalent of a welfare queen, freeloading on unsuspecting friends and family members while she sipped lemonade, complained of malaise, and rocked on their front porches. She was a rotten mom, to boot—too wrapped up in her own hysterical hypochondria to care for her only son. She hated men, and was probably frigid, informing third husband Asa Eddy that theirs would be a chaste relationship. Late in life, her unchecked social ambition, lack of scruples, and “mesmeric” personal charm landed her in a position of power, which she used to enrich herself. She died a paranoid, embittered, lonely woman.

Fraser never analyzes these caricatures, or stops to consider how heavily they are gender-biased. In her portrait, no subtlety or complexity of character is permissible. It is almost as if, in losing her childhood model of Eddy as savior, she could only substitute another childhood Disney character in its stead: the villainess.

Still, Fraser’s book improves markedly as she enters the era she knows personally. The brightest sections deal with Christian Science and the law. Although Fraser’s doctorate is in literature, she reveals a keen legal mind here as she argues relentlessly for the prosecution. She raises valuable constitutional questions about the accepted practice of providing Medicare payment for Christian Science care facilities. (Scientists have argued that their form of healing is religious and should be exempt from government scrutiny on the basis of the Free Exercise Clause of the First Amendment; Fraser points out that Medicare payments for such “religious” healing violate the Establishment Clause of that same amendment.) Fraser also explores Christian Science’s vast (though shrinking) network of nursing homes and sanatoriums, relying heavily upon accounts in the popular press and on personal interviews, particularly with those who have left the faith. She is able to provide an insider’s view, however myopic, of a fascinating subculture.

Alas, these sections also suffer from Fraser’s tiresome vendetta against the church of her youth. In describing one 1980s trial, she depicts Christian Science officials as “startlingly smug” in their healing ability, “although they had no hard scientific or statistical evidence supporting them. They seemed confident in the power of their church and proud of their ignorance of the human body and disease.”

If Fraser’s book is surpassed as biography by Gill’s, it is outclassed as memoir by Barbara Wilson’s deeply poignant Blue Windows: A Christian Science Childhood. Wilson can sometimes be as critical as Fraser, but her censures replace vitriol with the wisdom of time’s passage. There are some unforgettable scenes. In 1956, Wilson lined up with the rest of her first grade class to receive the marvelous new Salk vaccine against polio. She clutched a note from her mother, explaining that the family was refusing the vaccination. As a six-year-old child, she was forced to defend ideals she did not yet fully understand to an angry teacher and school nurse.

Wilson’s childhood was a complex mixture of warm security and stifling insularity. Christian Science gave her the conviction that God was love and that she was God’s perfect Idea, but it also refused to acknowledge the realities of some of her childhood sufferings—not only the occasional illness, or even her mother’s mental breakdown and death from cancer when Wilson was in junior high, but also sexual abuse. Christian Science, she says, could not give her names for the body parts she knew her uncle had violated; Christian Science steadfastly maintained that evil was merely illusion. “Being touched down there, being forced to lie there while it happened, was so far out of my experience and the experience of a totally good universe, that I couldn’t assimilate it,” Wilson writes.

Like almost everyone who has been critical of Christian Science, Wilson discusses Eddy’s odd teaching about malicious animal magnetism (M.A.M.). Now downplayed by the church, M.A.M. involved the idea that practitioners could be mentally poisoned from a distance. (In one of her most notorious statements, Eddy claimed that her last husband had been “mentally murdered” by her enemies.) But unlike Fraser and other muckrakers of the Christian Science past, Wilson does not mock this particular teaching. Hers is a touching, deeply theological meditation on the human need to confront evil. “In some ways,” she muses, “Christian Science, which presents such a benign face today, perhaps was healthier when Mrs. Eddy was struggling openly with her reputed mental poisoners and those who sought to maliciously magnetize her.”

Indoctrinated by the irrepressible cheerfulness of a religion that denied the existence of evil and regarded death as an illusion, Wilson made it to high school before she’d ever heard of Hiroshima or Auschwitz. It was then that she lost her faith in the God of perfect love—”and I knew that once I let go of this God, the God of my childhood, I would not be able to believe in another one. And that meant I would cease to believe.”

Issues of Christian Science belief lie at the core of Mrs. Eddy’s textbook, Science and Health with Key to the Scriptures. It is at once a heady theological treatise and a pragmatic, down-to-earth handbook on spiritual healing. For various reasons, thousands of people—many of whom are not Christian Scientists—still turn to the book seeking comfort and the power to heal themselves and others.

Mark Twain wrote that “Christian Science, like Mohammedanism, is restricted to the unintelligent, the people who do not think.”1 (No fan of 19th-century sectarian revelations, Twain also dismissed the Book of Mormon as “chloroform in print.”) A century later, the church was promoting the new edition of Science and Health with a vigorous campaign, marketing the book “for people who aren’t afraid to think.” For the first time, the book is being carried by major chain bookstores, not just Christian Science Reading Rooms. After the trade edition was released in 1994, the book enjoyed total annual sales of 125,000 to 130,000 copies, more than doubling its previous distribution. Sales soon jumped to the next level: more than 200,000 copies of Science and Health were purchased in the year 2000, and in April of 2001 the church celebrated the purchase of the ten millionth copy of Eddy’s textbook.

The new edition of Science and Health has been the particular project of the church’s lively, highly visible chair of the board of directors, Virginia Harris. In promoting the book, Harris has appeared in some unlikely venues, including an improbable—and marvelous—1999 interview on Larry King Live. (It is perhaps an indication of the countercultural nature of Christian Science that during the hour-long interview, CNN chose to air national ads for Webmd.com, ginkgo biloba, and America’s Pharmaceutical Companies.) Most oddly, Harris is a regular participant in the Harvard Medical School’s Mind-Body Symposium. This last rapprochement is delightfully ironic, given the history of tension between Christian Scientists and the medical profession. “The medical schools would learn the state of man from matter instead of from Mind,” Eddy admonished in Science and Health.

The book itself is essentially unchanged from previous 20th-century editions, though the Committee on Publication has added a “Publisher’s Note” that addresses the reader in the second person and positions the book as an aid to spiritual seekers of all descriptions. In an interview Harris pointed out that when Eddy wrote it in 1875, there was no church. The denomination developed from the book, not the other way around.

Science and Health may help the church to capitalize on a trend now achieving cultural currency: the intersection of spirituality and health, as popularized by contemporary writers such as Deepak Chopra, Bernie Siegel, and Marianne Williamson. Spiritual healing, the cornerstone of Science and Health, could bring the book to a whole new generation of readers. Yet Christian Science’s approach to the human body is fundamentally at odds with the body-spirit fusion mantras of many recent health books. For Christian Scientists, the key to health does not lie in more fully integrating the body and the spirit. True health is possible only when individuals recognize that the body itself is mortal mind, or error. Eddy taught that when people realized that divine Mind was all, and that the material world, including the body, was a mirage, then spiritual victory was possible: “God is Mind, and God is infinite; hence all is Mind.”

Science and Health is a compelling book: a fascinating snapshot of 19th-century optimism, uncompromising in its commitment to spiritual wholeness, and liberally sprinkled with protofeminist overtones. Yet it is too early to say what effect, if any, rekindled outside interest in Eddy’s text might exert on the church as an institution. As Mark Twain put it, “Environment is the chief thing to be considered when one is proposing to predict the future of Christian Science.”2

Although the church enjoyed much of the prosperity of mainline Protestant denominations in the middle of the 20th century, preaching an enlightened gospel of culture and comfort, it has, like them, recently fallen on hard times. An aging and declining membership has weathered financial storms, protracted (and heavily publicized) legal disputes, and inner divisions. Fraser claims that branch churches are closing at an estimated rate of 2 percent annually, and that membership has likely fallen below the 100,000 mark. But like the persistent rumors of Eddy’s death in the late 19th century (she didn’t actually pass away until 1910, although the press chronicled her demise many times before that), the current forecasts of doom may be premature as Christian Science reinvents itself in a new millennium.

Most of the new readers who are discovering Science and Health and even incorporating its teachings into their own spiritual and medical practices are not becoming Christian Scientists. The same seekers who gravitate toward Christian Science’s radical spiritual perspectives appear to be embracing Eddy’s teachings as one more serving on the tray of cafeteria spirituality.

The new face of Christian Science has been concretized in a building: the Mary Baker Eddy Library for the Betterment of Humanity. Scheduled to open in Boston on September 29, this four-story, $23 million library—with state-of-the-art multimedia exhibits—will provide the public with unprecedented access to all of Eddy’s previously unpublished writings. The body of this work—which has not been made available to scholars, church members, or the public before now—consists of half a million pages of letters, scrapbooks, theological writings, and other documents.

The library will sponsor core programs in American religious history, women’s history, and medicine.

For the church it is a bold gamble. Almost certainly in that mass of papers there is material that will reignite old controversies and start new ones. But the library’s website takes the long view, celebrating “the power of ideas throughout history,” a potent if often underestimated force that all observers—adherents and critics alike—must reckon with in the story of Mary Baker Eddy and Christian Science.

Jana Riess, religion book review editor for Publishers Weekly, holds a doctorate in American religious history from Columbia University, as well as degrees from Wellesley College and Princeton Theological Seminary. She is the author of The Spiritual Traveler: Boston and New England, and recently wrote the foreword to a volume of Mary Baker Eddy’s autobiographies. Both books will be released in September.

1. Mark Twain, Christian Science (reprint, Oxford Univ. Press, 1996), p. 96.

2. Mark Twain, Christian Science, p. 93.

Books Considered in this Essay:

Mary Baker Eddy, Science and Health with Key to the Scriptures (The First Church of Christ, Scientist).

Caroline Fraser, God’s Perfect Child: Living and Dying in the Christian Science Church (Metropolitan, 1999).

Gillian Gill, Mary Baker Eddy (Perseus, 1998).

Barbara Wilson, Blue Windows: A Christian Science Childhood (Picador USA, 1997).

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.

David Lyle Jeffrey

The Beginning of Wisdom

There are people for whom news that evangelicals are reading The Chronicle of Higher Education might well be taken as a bona fide sign of the end times. For reasons of charitable deference, therefore, some of us sneak our peek after hours at the online version. In June there appeared an article by Robert J. Sternberg, director of Yale’s Center for the Psychology of Abilities, Competencies and Expertise, entitled “Teaching for Wisdom.” Unaccustomed to that particular noun in the Chronicle, I read on. All too soon, alas, I began to recognize the familiar and usually well-intended confusion which results when a sermon embraces too warmly the character of the folly it presumes to denounce. As anyone routinely subjected to rhetorical sabbaths can attest, the “rousements” which attend such enthusiasm tend less to resemble those of the biblical Dame Wisdom (Proverbs) than those of another Dame (same book).

The necessity which appears to have mothered invention in Prof. Sternberg’s case is a sharp decline in the authority of “Abilities, Competencies and Expertise” as he and others have been peddling it. Since market value tends to rise and fall with credibility in these matters also, the crisis occasioned for such a curriculum by the felonious demise of Enron (et al.) is as real as that which attacks the professorial pension plan. Perhaps considering this entailment, Sternberg’s observation is tart: “traditional education, and the intellectual and academic skills it provides, furnishes little protection against evil-doing, or, for that matter, plain foolishness.”

Pressed as we evangelicals are to overcome our well-earned reputation for anti-intellectualism, carrion comfort like this is hard to pass up. Here, it seems, is an Ivy League professor offering pronouncements more or less equivalent to those of a red-neck rural preacher of the 1950s. To be sure, the preacher might have said it more colorfully (e.g., “show me an educated Baptist and I’ll show you a backslider”), but the point of the criticism is surprisingly harmonious.

The learned Dr. Sternberg is a little more nuanced. Ardently advertising à la mode a book he has edited—Why Smart People Can Be So Stupid (Yale, 2002)—our author claims to have discovered four reasons for this pathology. Concisely, these are: (1) that smart [i.e., educated] people are self-centered; (2) they think themselves “omniscient”; (3) they act as if they are “omnipotent”; (4) they think they have overcome the problem of consequences. But this, too, sounds familiar—almost like plagiarism from an old-fashioned sermon on Romans 1. Until, that is, the altar call. No unpleasant denunciations of pride or “playing God” ensue here, and nothing quite so gauche as a call to repentance. The call is rather for a more general teaching of ethics—not in any such way as to suggest a hierarchy of values, mind you, or virtue (a similarly embarrassing term), but a dialogical approach to values clarification. Sternberg believes this exercise will produce the “wisdom” of “a socially desirable use of … knowledge.”

Thoughtful readers of the Bible are unlikely to disagree with Sternberg that wisdom is to be sought after, and that much good comes of it, personal as well as social. On the biblical account, wisdom always has ethical implications. Respective conclusions about how to find wisdom, however, are sharply divergent. Christian educators should carefully consider that the biblical prescription (when followed according to the Manufacturer’s directions) depends upon forthright acknowledgment that there is a God, and that all our usurpations of his prerogatives are at best unworkable.

That “the fear of the Lord is the beginning of wisdom” (Ps. 111:10) and also “the beginning of knowledge” (Prov. 1:7) is indicative of a necessary and ongoing reciprocity, moreover, between our acquisition of knowledge and the getting of wisdom. It is irrational, on this account, to blame universities for the moral blindness of some they have “educated.” Every educational project, including Christian ones, should declare a truth about its own limits: that no education will of itself preserve us from carnality, greed, and fraud. Moral intelligence does not follow from analytical intelligence; it precedes it, and, when married to it wittingly, as the second chapter of Proverbs is at pains to teach, the student is far likelier in the end “to understand righteousness and justice, equity and every good path.” But even then it requires an act of the will to fear God and keep his commandments such that this understanding is put into reliable practice. Kierkegaard was not wide of the mark when he quipped that it just isn’t the same thing to say to someone, “You should live accountably” as to say, “You should live accountably; there is a Last Judgment coming.”

Many are the educational theories that have imagined learning to be the sufficient condition of virtue. Hegel was not the first to assert that “education is the art of making men ethical” or to think that it “shows men the way to a second birth.” Even when the experimental results have been repeatedly disconfirming, there have been eager apologists for this wobbly hypothesis. Bertrand Russell thought that poor education makes us “lazy, cowardly, hard-hearted and stupid,” and that better methods “must give us the opposite virtues.” Thoughtfully diagnostic educational critics of the late 20th century (e.g., Allan Bloom, George Steiner) have clung to similar wan hopes.

The time is ripe for Christian educators to draw more directly upon the perdurable strengths of biblical tradition as we take up our own part in addressing both a problem and a diagnosis from which we do not in fact dissent—and to which we also remain susceptible. We need to practice a learning that is deep, and dedicated to wisdom as its goal, but discerning enough to know where wisdom and knowledge alike must begin if right action is to follow.

Copyright © 2002 by the author or Christianity Today/Books & Culture magazine.
