Progressivism, Postmodernism and Republicanism: The Relevance of James Conant to Educational Theory Today


Burton Weltman

Recovering a Long-Lost Era in Republicanism

I am writing this preface during the spring of 2016 in the midst of the Presidential primary election season.  In the context of the Republican Party’s policies and politics of the last eight years, and especially during this primary election cycle, James Conant was a Republican of a sort it is almost impossible to imagine today.  He was a hawk on foreign policy, a Cold Warrior during the 1950’s and 1960’s, a stance similar to that of most of the current crop of Republican Presidential candidates.  But he was a social democrat on domestic policy, taking positions not unlike those of the socialist Democratic Presidential candidate today, and a world away from those of the present-day Republican Party.  The gist of this essay is that Conant has a lot to teach self-styled progressives about education, and that progressive educators should acknowledge Conant as one of their own.  But a subordinate thesis is that Conant has a lot to teach Republicans about making humane public policy and behaving in a sane and sensible way.

Recovering James Conant

James Conant was one of the most prominent scientists, political figures and educational leaders of mid-twentieth century America.[i]  A life-long Republican who was seriously considered for the 1952 presidential nomination, Conant was also a precursor of postmodernism, an avowed social democrat, and a professed progressive educator.  A self-proclaimed member of the Power Elite that ostensibly ran the country, he was at the same time a devotee of John Dewey, exclaiming in a parody of Voltaire’s comment about God that “if John Dewey hadn’t existed, he would have had to be invented.”  A traditionalist in science, politics and education at the start of his career, Conant evolved into a self-styled radical for whom “conservative” was a dirty word and who combined liberalism and Republicanism in ways that might seem oxymoronic today.[ii]

Conant was an innovative thinker in science and education who has for too long been a lost figure in progressive educational theory.  Highly regarded by many progressives during the 1950’s and 1960’s for his defense of comprehensive high schools,[iii] Conant has fared poorly among progressives since.[iv]  Educationally, Conant is generally portrayed as an elitist who proposed tracking students according to the needs of the military-industrial complex.[v]  Politically, he is derided as either a one-time liberal turned conservative or a life-long conservative who sometimes pretended to liberalism.  Intellectually, he is discounted as an arch-empiricist whose statistical studies of high schools during the 1950’s and 1960’s have little theoretical value.  Conant, who died in 1978, is at this point almost routinely classified as a “conservative” who promoted “traditional schooling” based on disciplinary curricula and social control methods.[vi]

The thesis of this article is that Conant’s ideas have been widely misconstrued by critics who have focused on his later works written in the midst of the Cold War during the 1950’s and 1960’s, and who have failed to place those books in the context of his earlier works from the 1930’s and 1940’s.  Conant’s is a story about the interaction of politics, ambition and theory.  It is in part an all-too-common tale of progressive theory warped by conservative political pressures and personal ambitions.  It is also, however, an example of the importance of theory and the staying power of progressive educational theory.  It is my contention that in the midst of all his personal and political peregrinations, Conant’s core educational theories remained progressive.  The purpose of this article is to examine Conant’s educational theories in the context of his life and times with the goal of demonstrating their importance for educators today.

Conant and His Critics

Conant’s progressive critics have generally leveled three charges against him, which they think demonstrate his anti-progressive and anti-democratic tendencies.  First, they say, Conant was a petty-bureaucrat who sought to consolidate small community-based schools into centralized schools that reduce students and teachers to mere cogs in a giant machine.  Second, he was an elitist who promoted stratified schools that ignore slower students in favor of the faster.  Third, he was a Cold Warrior who favored repressing dissenters and subordinating schools to the military-industrial complex.  Conant’s response to these critics was to plead guilty to their premises – he was a bureaucrat, elitist and Cold Warrior – but to deny their conclusion that he was anti-progressive and anti-democratic.

With respect to school consolidation, Conant argued that community-based schools too often fostered racism and ethnic exclusion, and that small schools did not provide enough ethnic or intellectual diversity.  In his best known educational work, The American High School Today,[vii] published in 1959, Conant called for eliminating almost half of the nation’s high schools as part of what he considered a progressive defense of comprehensive high schools as the cornerstone of a democratic educational system.  Writing in reply to Admiral Hyman Rickover’s highly publicized call for the establishment of separate high schools for high-achieving students as a necessary means to fight the Cold War,[viii] Conant vehemently rejected Rickover’s proposal as undemocratic and unnecessary to achieve educational excellence.[ix]

Responding to Rickover’s attack on the intellectual deficiencies and administrative inefficiencies of the public schools, Conant contended that centralized schools would be more efficient and more likely to have diverse student bodies.  In turn, larger schools would be better able to offer more advanced courses for advanced students but would also be able to offer a greater variety of courses to meet the needs of a diverse student population.  Contrary to Rickover and other conservative critics of comprehensive high schools, Conant rejected any form of tracking students into separate programs according to ability or achievement and any ability grouping in general education classes.  Ability grouping was acceptable to Conant only in the most advanced and specialized courses and this would be achieved largely through self-selection by students for these courses.[x]

With respect to the charge of elitism, Conant contended that democracy required well-educated leaders, and that American schools were not providing enough leadership education.  Conant defined democracy as “government by and for the people” but not of the people,[xi] and believed in what could be called plebiscitary democracy or democracy from the top-down rather than the bottom-up.[xii]  Conant’s ideal was a society in which the best and brightest people propelled themselves to the fore and then pulled the masses along.  He rejected the idea of what today would be called participatory democracy in which leaders are ostensibly pushed to the fore and pushed along by the people.[xiii]  At the same time, he believed that democracy required well-educated followers who could check and balance, support but also critique, their leaders.[xiv]  As a result, he advocated not only advanced programs of specialized education for those who would become the scientific and political leaders of the country, but also rigorous programs of general education in which all students would participate together, hoping thereby to establish a common understanding and basis for communication between the elite and the masses.

With respect to the Cold War, Conant argued that public service was the foundation of democracy and the goal of progressive education, and that in times of national crisis, it is necessary for people to support their leaders even if they do not entirely agree with their policies.  Conant had doubts about the Cold War from its inception, but suppressed his doubts to support America’s leaders against what they defined as the menace of Communism.  In so doing, Conant admittedly subordinated some of his progressive ideas to Cold War imperatives.  But he did not abandon them and believed that in supporting some repressive aspects of the anti-Communist crusade, he was preventing it from becoming worse and was saving a place for progressive values.

The premises that led Conant to support centralized schools, elitist programs and Cold War repressions are very different from the more egalitarian, participatory democratic premises of most present-day progressives, including the author of this article.  It is my contention, however, that Conant’s primary concerns – with promoting ways that individuals and institutions can transcend and transform themselves – do not depend on these premises.  In turn, Conant’s core theories – models of science as social studies in a pluralistic universe, politics as social democracy in a multicultural world, and education as social problem-solving in a diverse community – remain relevant to theorists today.

Social Studies and Postmodern Science

Conant was a peripatetic polymorph who took on many different roles, and enjoyed a career that moved successfully from science to administration to politics to educational policy.[xv]  He was an openly ambitious person who sought power and status as a means of doing good for himself and for the world.  Born in 1893 to a middle class family in Dorchester, Massachusetts, Conant attended Roxbury Latin School and Harvard University, gaining a B.A. in 1913 and a Ph.D. in 1916.  He then worked as a chemistry professor at Harvard, becoming President of the University in 1933 and serving in that capacity until 1953.  From 1941 to 1946, Conant was also Chair of the National Defense Research Committee and overall coordinator of the effort to produce an A-bomb.  He continued as an advisor on atomic weapons until 1953.  From 1953 to 1957, he was first the U.S. High Commissioner and then Ambassador to Germany.  From 1957 until his death in 1978, he worked primarily on educational research and writing.

Conant was first and foremost a scientist and continued working on science even as he did other things.  During the 1910’s and 1920’s, he was a laboratory chemist doing pure research.  In the 1930’s, he turned to the history and theory of science.  During the 1940’s, he worked in applied science, mainly on building the first A-bombs and other nuclear weapons projects.  Finally, in the 1950’s and 1960’s, he returned to the history and theory of science.  During his career, Conant’s theories of science evolved from the positivist philosophy that characterized most of his colleagues to what could be described as a post-modern relativism.  Conant described the evolution of his philosophy as “a mixture of William James’ Pragmatism and the Logical Empiricism of the Vienna Circle with at least two jiggers of skepticism thrown in.”[xvi]  Becoming less sure about science as he became more powerful as a scientist, Conant eventually came to the conclusion that scientific theories were influenced by social circumstances as much as empirical evidence.  And he argued that studying social science was almost as important to understanding physical science as studying physical science itself.[xvii]  The development of Conant’s scientific ideas greatly influenced his educational theories.

Conant’s scientific interests began in his childhood home.  His parents were devotees of the eighteenth-century scientist and theologian Emanuel Swedenborg.[xviii]  Swedenborg argued that matter and mind are two sides of the same spiritual coin.  He sought to extend the physical theories of Newton and the psychological theories of Locke, and to solve the spirit/body problem posed by Descartes, through what was essentially a pantheistic explanation of the universe.  Although ostensibly a Christian, Swedenborg claimed that different cultures have different ways of explaining the universe and each may be valid in its own way.  Rather than demanding doctrinal purity or ritual uniformity, God, according to Swedenborg, wanted humankind to cooperate in socially useful work.[xix]  Although Conant eventually joined the Unitarian Church, his interdisciplinary approach to science, trying to consolidate the various physical sciences and combine the physical and social sciences, and his pluralistic and social democratic approach to society were similar to Swedenborg’s views.

Conant’s interest in science was given direction by his high school chemistry teacher, Mr. Black.  Conant greatly appreciated the teacher’s hands-on methods and personal concern for students, and Mr. Black often served as an example of a good teacher in Conant’s later educational writings.  At Harvard, Conant pursued a double undergraduate major and did a dual doctoral thesis in physical chemistry and organic chemistry, considered an innovative combination at that time.  Mentored by Theodore Richards, who was one of the most prominent chemists of the day and whose daughter Conant later married, Conant was initiated into the American scientific elite at Harvard.[xx]

Upon earning his doctorate, Conant was encouraged by Richards to work in what was then considered an area of vital national interest: developing poison gas.  Making his first foray into weapons of mass killing, Conant worked initially with colleagues on some private research and then spent World War I working for the Army.  He was highly praised for his work and was well regarded within high-level military circles.[xxi]

After the War, Conant returned to Harvard and during the 1920’s undertook “pioneering efforts to apply the techniques of physical chemistry to the study of organic reactions.”[xxii]  In 1933, he published a textbook, The Chemistry of Organic Compounds, which became the “standard work in the field” during the 1930’s and 1940’s.[xxiii]  He was frequently consulted by major corporate and government officials and thereby gained entrée to the industrial and political elites of the day.

Conant left his laboratory in 1933 to become President of Harvard and turned to working in science on a more theoretical level.  Influenced by his contact as President with scholars from the humanities and social sciences, Conant began to develop a humanistic approach to science, taking what was then the radical step of using case histories in teaching physical science courses.  Conant was worried during the 1930’s about what he saw as the debasement of science by ideologically driven scientists, and particularly the environmental genetics being promoted by the Soviet scientist Lysenko and the racist genetics of the Nazis.  He was also concerned that ignorance of how science and scientists actually work left ordinary people open to the pseudo-scientific charlatanism of ideologues such as Lysenko and the Nazis.  Conant hoped that a historical approach to science, one that examined the relationship between social and scientific developments, would help both budding scientists and the general public to appreciate the nature of science and the need for enlightened scientific leadership.[xxiv]

Conant’s work on scientific theory was cut short by the start of World War II and his work on the A-bomb.  It was a project Conant undertook with typical thoroughness but also characteristic ambivalence.  Fearing that nuclear weapons would lead to a nuclear holocaust, Conant fervently prayed until the moment the first A-bomb was successfully tested that it would not work.[xxv]  At the end of World War II, Conant hoped that he could atone for the A-bomb by working on peaceful uses of science and atomic energy.  But with the advent of the Cold War, he was asked to advise the government on building newer and better atomic weapons.  Although he answered what he saw as the call of duty, the nature of the work affected his thinking.  He could no longer accept the positivists’ beliefs in the inevitability of progress through science and science as progress, or in the neutrality of scientists.[xxvi]

Continuing his work on the history of science, Conant developed innovative ideas that anticipated and coincided with the theories of Thomas Kuhn, who worked with Conant as a graduate student at Harvard during the 1940’s and whose work became a foundation of much postmodern theory.  Postmodernism has been described as a revolt against the positivists’ doctrine of res ipsa loquitur, that facts speak for themselves and that the more facts you have the better your conclusions.[xxvii]  Like Kuhn later, Conant claimed that science develops through paradigm shifts rather than incremental changes and that these shifts result mostly from cultural changes rather than new evidence.  New evidence, Conant contended, leads merely to the amendment of old theories.  New theories result from new questions, questions that reflect changes in social structure, problems and philosophies.  In Conant’s view, scientists are “revolutionists” who arise out of the prevailing culture, transcend it, and then pull the culture up with them.  Scientific revolutions, in turn, require a high level of popular education so that the public can intelligently support the work of creative scientists.[xxviii]

In his historical case studies, Conant contrasted the social and intellectual circumstances under which scientists worked before a new scientific discovery and after, focusing particularly on the questions they asked.  Based on his method, the successful overthrow of Aristotle’s theory of motion (that things will stop moving unless force is continuously exerted on them) by Newton’s theory (that things will continue moving forever unless a force is exerted to stop them) could, for example, be explained in part as a result of differing social circumstances: Aristotle lived in a traditional society in which stasis was the norm and the primary question was how anything changed; Newton lived in a dynamic society in which change was the rule and the primary question was how anything stayed the same.  Ancient and modern scientists asked different questions and got different answers, but both were useful to the societies in which they were conceived.  Taking this argument even further, Conant contended that people no longer believe in Homer’s myths because Greek gods are not useful in answering present-day questions, not because the myths are untrue.  Conant claimed that there is no reason to think that Zeus and the other gods did not in some sense actually exist for the people for whom Homer’s myths were useful answers to important questions.[xxix]

Rejecting the idea of a universal science which is good for all people at all times, Conant tended toward what might be called a soft postmodernism grounded in relativism rather than nihilism.[xxx]  Picking up on William James’ notions of an open universe and the effects of theory on reality,[xxxi] he promoted a vision of scientists transcending their cultures and transforming the world thereby.  At the same time, continuing his debate with Lysenko, Conant insisted there is a vital difference between partisanship and objectivity – that while scientists cannot be neutral, they can be objective and need not be mere propagandists.  Scientists can and must fairly consider all of the relevant evidence and pertinent points of view on a subject.  They can and must consider opponents’ arguments in ways that the opponents would recognize, and not merely set up straw men to knock down.  In sum, while there may be more than one right answer to any important question, there are also wrong answers, answers that do not fit the evidence or meet opponents’ arguments.  While there may be no final Truth, scientists must cooperatively strive for the broadest working consensus on what may be right and what is wrong under the prevailing circumstances.[xxxii]  In a pluralistic universe, Conant concluded, the goal of science is not certainty but contingency, not merely answers to questions but also new questions to answer so that the quest for a better life can continue.[xxxiii]

Conant’s iconoclasm extended to rejecting the prevailing notion that the physical sciences were radically different from and inherently superior to the social sciences.  Conant indicated that they were essentially the same and that there were only two main differences between them.  The first difference lay in the range of choices that their subjects enjoy.  Physical scientists study things that have relatively little variation or choice as to what they will do.  An electron might, for example, at any given moment act like a particle or a wave, or go through one hole or another in a screen, so that the individual electron’s behavior cannot be predicted.  But electrons have relatively few options, so that their behavior is for the most part a matter of simple probabilities that can be accurately predicted in the aggregate.  Humans are not so simple and neither their individual nor group behavior is easily predictable.  The choices that humans can make and the variations in their behavior are enormous, making social science more complex but also more important to study than physical science.[xxxiv]

The second difference between the physical and social sciences lay in the role of politics in their workings.  While physical science is fraught with political issues, social science deals with political issues per se.  As a consequence, social scientists have historically been less willing and able than physical scientists to agree upon common frameworks for research and development, and to work cooperatively within those frameworks.  In turn, while clear-cut paradigm shifts have occurred in the physical sciences, so that it is possible, for example, to say that the Copernican view of the universe has replaced the Ptolemaic view, such shifts have not been as clear cut or conclusive in the social sciences.  This greater degree of cooperation among physical scientists – their willingness to work together and to accept each other’s findings and conclusions – is a major reason for the greater success and public acceptance of the physical sciences.  It is something that the social sciences need to develop and that schools could help foster with a more pro-social, social problem-centered curriculum.[xxxv]

Conant’s concerns about the interplay of the social and physical sciences, and the relationship between scientists and the general public, were not merely academic matters for him.  Although he was himself a member of the scientific policy elite, he worried about the tendency of scientists to become “exalted and isolated” to the detriment of democracy and their own best judgments.[xxxvi]  In the wake of the astonishing development of the A-bomb, Conant warned that science was being glorified as magic and scientists as demigods.  He fretted that lay people could not understand the scientific issues of the atomic age and that decisions involving science would by default be made by scientists alone.  Conant worried that unchecked power and popular adulation could corrupt science and scientists.  These concerns are reflected in his curricular proposals for a program of general education that connects scientists and laypeople.[xxxvii]

Conant’s concerns were exacerbated during the Cold War by the unparalleled secrecy imposed by the government on scientists who in any way worked in areas that might have some military application.[xxxviii]  The troublesome consequences were exemplified in Conant’s own flip-flopping positions on the H-bomb.  Conant initially opposed the production of an H-bomb on the moral grounds that it was a genocidal weapon.  He also initially supported his friend Robert Oppenheimer when Oppenheimer spoke out publicly against the H-bomb.  But Conant backed off when Edward Teller and other H-bomb proponents accused Oppenheimer of being a national security risk and effectively destroyed Oppenheimer’s career.[xxxix]  Conant refused to make public either his concerns about the H-bomb or his support for Oppenheimer, seemingly for fear of jeopardizing his own standing within the inner circles of power.

Conant has been accused of hypocrisy and cowardice for these actions,[xl] but I think the roots of his contradictions are more subtle.  On the one hand, Conant was genuinely concerned for the security of the United States if the Soviets forged ahead in the development of nuclear weapons.  On the other hand, he was fearful of what might happen to the world if nuclear militants such as Edward Teller ran things unchecked.  Given what had happened to Oppenheimer when he dared to speak publicly about nuclear policy, Conant decided that the best thing he could do for humankind was to stay silent and stay part of the Power Elite, hoping thereby to exert some salutary influence on American policy.  The exigencies of the Cold War plus his own top-down view of society arguably led him to choose power over principle in these matters.  A prime architect of atomic weapons, Conant could, nonetheless, sincerely yearn for the simpler world of his youth and exclaim: “I do not like the atomic age or any of its consequences.”[xli]

Social Democracy, Republicanism and the Cold War

Conant’s youth was spent in Massachusetts at the turn of the twentieth century, living in a region and household in which the Civil War was still a current event, Republicanism was synonymous with patriotism and progress, and Democrats were considered traitors and reactionaries.  To the young Conant, Republicans stood for enlightenment and industry, Democrats for racism and feudalism.[xlii]

For most of his career, Conant portrayed politics in fairly simple terms: liberals were the good guys and conservatives were not.  Conant rejected what he saw as the conservative ideal of a laissez-faire economy in which every person must fend for him/herself and government exists to protect private property.  He supported, instead, the liberal idea of a regulated economy in which government guarantees each person a decent job and standard of living.  Conant also decried what he saw as the conservative ideal of a traditional culture enforced through censorship to ensure that each generation follows blindly and blandly in the footsteps of the last.  He supported, instead, the liberal idea of a laissez-faire culture in which each generation develops its own way of life and the government encourages diversity and creativity.[xliii]

Citing Jefferson as his mentor, Conant combined meritocratic views of leadership with social democratic views of public policy.  Politics, Conant claimed, is social science in action, a process in which officials experiment with hypotheses as to what will best serve the public interest and the people register their support and dissent at the polls.  Democracy is a form of permanent revolution in which enlightened leaders with the support of educated followers continually transcend the status quo and continuously move the country toward a more creative and cooperative society.[xliv]

In the early stages of his career, Conant found a relatively comfortable home for these political views within a Republican Party that harbored such liberals as Robert La Follette, Sr., Robert La Follette, Jr. and Henry Wallace, Sr.  As time passed, liberals were more likely to be found in the Democratic Party, but Conant stayed a Republican.  He seemed more comfortable with the Republican constituency of business people and others who identified with the upper classes than with the Democrats’ primary constituency of small farmers, workers and those who identified with the lower classes.[xlv]  He sought to fight on behalf of the masses, but wanted to work primarily with his own kind within the elite.[xlvi]  As the Republican Party became more conservative, Conant tried to guide the party to the left while fighting the increasing power of the Right.[xlvii]

As a young man, Conant believed that Weimar Germany might provide a model for American development.  Supported by scientific and educational systems that were the best and most meritocratic in the world, German institutions during the 1920’s were governed by social democrats and led by a technocratic elite.[xlviii]  The rise of the Nazis in Germany was a great shock to Conant.  He reluctantly conceded that science and education do not guarantee political rationality, and concluded that fascism underscored the need for a pro-social democratic education for both the elite leaders and the masses in a liberal society.[xlix]

Conant’s revulsion toward the Nazis led him to buck the prevailing isolationism within the Republican Party during the late 1930’s, and he became a fervent advocate of military preparedness and militant action against fascism.[l]  With the coming of World War II, Conant supported the most vigorous prosecution of the war by any means available, even to the point of suppressing civil liberties at home and using weapons of mass killing abroad.  His view of Nazism as fundamentally evil led him to conclude that in times of war, liberal measures must be suspended: “All war is immoral” and, therefore, all is fair in war.[li]  He later applied this same doctrine to the Cold War.  At the same time, Conant saw the social cooperation required for prosecuting World War II as an opportunity to advance social democracy[lii] and he called for an upsurge of radicalism in the United States.[liii]

As World War II ended, Conant believed that conflict between the United States and the Soviet Union could be avoided.  While he thought the U.S. and U.S.S.R. would inevitably compete, he believed the Soviets wanted to win ideologically and economically, not militarily.  Once again bucking mainstream opinion within the Republican Party, Conant proposed sharing A-bomb secrets with the Russians to forestall a nuclear arms race,[liv] and as late as 1948, he was still forcefully arguing that “there is little or no analogy between the Nazi menace and the Soviet challenge.”[lv]  Conant similarly argued that the challenge of domestic Communism should be met through intellectual competition, rather than repression.  In Conant’s view, Communists were wrong but not evil, their methods misguided but their goals relatively benign.  Conant warned that “reactionaries” will try to use anti-Communism “as an excuse” to attack liberals.  Citing the attacks on Alger Hiss as an example of this tactic, Conant initially came out strongly in Hiss’ defense when Hiss was accused of being a Communist spy.[lvi]

But by the end of 1948, Conant was taking a very different stance.  Under intense pressure from his colleagues within the foreign policy elite, and under the pressure of events such as the Communist coup in Czechoslovakia and the Berlin Blockade, Conant joined the Cold War and anti-Communist crusades with the fervor of a convert.  Seemingly concerned with protecting his status as a member of the Power Elite, he acted like someone who felt the need to prove his loyalty.  Rewriting his own history, Conant retrospectively claimed that he was “one of the first of the Cold War warriors” when in fact he did not join their ranks until late 1948.[lvii]  He also retroactively chastised those who had in 1948 “still clung to the belief in cooperation” with the Soviets, when he had done so himself.[lviii]  In any case, Conant now portrayed Communism as a fundamental evil and significant threat.[lix]

Abandoning his previous analysis, Conant fell into line with the prevailing Cold War analogy between Nazi Germany and Soviet Russia.  While he still did not think the Soviets posed any immediate military threat, neither, he said, had Nazi Germany in the early 1930’s.  Conant concluded that it was the failure of the West to challenge the fascists militarily when they were weak during the 1930’s that had emboldened and enabled them to start World War II.  Elected officials in the West, faced with strong pacifist sentiments amongst the public, had lacked the will to undertake a military buildup, thereby encouraging the fascists.  Similarly, Conant believed, it was necessary to challenge the Soviets militarily before they could move toward conquest.  Toward that end, Conant founded in 1950 the Committee on the Present Danger, an organization of political, business and educational leaders, for the purpose of propagandizing the public and lobbying Congress in favor of a peacetime draft, an expanded nuclear arsenal, and a large-scale military buildup.  Abandoning objectivity as well as neutrality on this issue, Conant joined with other Cold Warriors in deliberately exaggerating the immediate threat from the Soviet Union.  He rationalized this deception on the grounds that scare tactics were necessary to build public support for a show of military force that would forestall the Soviets and prevent another world war in the long run.[lx]

Conant also fell into line with prevailing opinion on domestic Communism.  In the wake of Alger Hiss’ perjury conviction and Senator Joseph McCarthy’s assault on Communist sympathizers, Conant took a strong anti-Communist stance.  Contradicting his previous statements that attacking Communists would open the door to attacks on liberalism, he rationalized his cooperation with McCarthy and other anti-Communists as a means of protecting liberals from attack.  Despite his previous opposition to loyalty oaths as a violation of free speech, Conant became a firm supporter of them.  And, contrary to his previous support of academic freedom for all political opinions, Conant campaigned for a ban on Communist teachers in the public schools.  While privately dismissing any threat from domestic Communism, he publicly contended that Communists had abdicated their intellectual freedom in becoming mouthpieces for the Communist Party and agents of the Soviet Union, and therefore had no place in a free marketplace of ideas.[lxi]

The Cold War strained Conant’s liberal commitments more than any other crisis in his life.  He seemed at times during the 1950’s and 1960’s to abdicate his own judgment in favor of automatically rejecting anything Communists might support or that might be construed as pro-Communist.  While Conant predictably liked Ike but not Goldwater or Nixon, he remained a strong supporter of the Vietnam War long after most liberals, and even many Republicans, had turned against it.  Publicly condemning the anti-war movement and New Left as traitorous, Conant in effect practiced McCarthyism beyond the McCarthy era.[lxii]  Privately, however, he criticized the war and voted for the anti-war presidential candidate George McGovern in 1972.[lxiii]

Conant has been condemned as a hypocrite and coward for his political actions.  From issues of anti-Semitism and free speech for radicals at Harvard during the 1930’s to McCarthyism and the Cold War during the 1950’s to student radicalism and the Vietnam War in the 1970’s, Conant held private views that were more liberal than his public positions.[lxiv]  But Conant’s problem was neither cowardice nor hypocrisy but social theory.  Conant believed that if you were not a member of the Power Elite, your principles were impotent and irrelevant.  This was a pragmatic judgment from one who contended that liberal values could only be implemented from the top down.  So, when it came to risking his status within the elite for his principles, Conant generally found a way to rationalize them away.

Meritocracy, Democracy and Public Education

Consistent with his training as a scientist during the early 20th century, Conant began his career as a staunch traditionalist in education, favoring a strictly disciplinary curriculum, teacher-centered teaching methods, and rote learning and testing.  He came to the presidency of Harvard in 1933 with a low opinion of the university’s School of Education as a den of progressive anti-intellectuals.  In Conant’s view, teaching was something any well-educated person could do, and he initially hoped to abolish the School but was convinced otherwise because it was a moneymaker for the University.  He decided, instead, to try to reform the School and in the process was converted to progressivism.[lxv]

Conant described progressive education as a system of student-centered pedagogy with teaching methods that focus on students’ interests and activities; social-centered curricula based on interdisciplinary subjects that focus on social problems of concern to students; and practical forms of evaluation, or what today would be called authentic assessment.  Progressivism was a means of encouraging students to transcend their backgrounds, engage in critical and reflective thinking, and transform themselves and their society.  Consistent with his top-down vision of democracy, Conant promoted a top-down version of progressivism.  He projected four main educational goals: (1) a high level of civic education to prepare every student for the rights and duties of a social democracy; (2) a high level of specialized education for those who will be the elite scientists and leaders of tomorrow; (3) a high level of general education to prepare the masses to evaluate the work of their leaders; and (4) a high level of vocational education to prepare non-elite students for gainful and socially useful employment.[lxvi]

Although Conant is best known for his empirical studies of schools from the 1950’s and 1960’s, he claimed four “inventions” from the 1930’s and 1940’s as his primary contributions to the field of education.  These were: initiating the Master of Arts in Teaching (MAT) degree at Harvard in the mid-1930’s; supporting standardized testing in the late 1930’s, which led to the founding of the Educational Testing Service (ETS); organizing the Harvard report on General Education in a Free Society (GEFS) in the mid-1940’s; and participating in the National Education Association report on Education for ALL American Youth (EAAY) in the late 1940’s.[lxvii]  The MAT, GEFS and EAAY represent progressive innovations that have not yet had the impact for which Conant hoped.  ETS is an innovation that Conant thought would be progressive but later concluded was not, and it has had a far greater impact than he desired.

The MAT was for Conant a model of progressive teacher education.  Jointly developed and administered by academic and education professors, it divided prospective teachers’ coursework evenly between academic subjects and pedagogy.  Working on the MAT brought Conant a new respect for teaching as an art that needed to be taught by professional educators.[lxviii]

ETS was for Conant a vehicle for establishing what he thought would be progressive means of assessment.[lxix]  Standardized testing appealed to Conant’s democratic, meritocratic and scientific orientations.  Testing, he claimed, is democratic because it is the same for all.  It is meritocratic because it aims at identifying the best students.  And, it is scientific because it is quantifiable and ostensibly objective.  Conant hoped that standardized testing would undermine the advantages that wealth and cultural background give to students from upper class families, and would open the doors of higher education and higher social position to middle and lower class students.  He also hoped that standardized testing would encourage progressive methods of teaching.  Conant’s support for testing rested, however, on two assumptions that he later questioned: that standardized aptitude tests measure some sort of generalized intelligence common to everyone, and that standardized achievement tests measure genuine knowledge of a subject.[lxx]

By the 1950’s, Conant had concluded that there is no such thing as a singular intelligence or a singular measure of intelligence, but that people are endowed with what today would be called multiple intelligences, and that there is no universal way to measure these aptitudes.  He also seemed to conclude that achievement tests are self-defeating and self-invalidating, presaging present-day concerns about standardized testing.  To be valid, an achievement test must be based on a random sample of knowledge from a generalized subject.  Standardized testing, however, leads schools to teach to the test, narrowing their curricula to the questions that are most likely to be asked on the test.  The results are that students no longer get the benefit of a general education and standardized tests no longer measure the general education they were intended to evaluate.  Students end up merely learning how to take the test and the test merely measures that ability.  While continuing to support testing as an adjunct method of evaluation, Conant became a proponent of what would today be called authentic assessment – observing students perform real world activities – as the best measure of aptitude and achievement.[lxxi]

GEFS and EAAY were essays in social democratic curricula.[lxxii]  Although GEFS was produced by an elite corps of professors and EAAY was produced largely by a group of schoolteachers, Conant claimed that the core recommendations of the two reports were essentially similar.[lxxiii]  Both proposed that schools focus on “life education” rather than merely the academic disciplines.  Both proposed that schools develop diversified curricula to meet the needs of diverse students and diversified extra-curricular activities to encourage students toward progressive social change.  And both proposed that the primary goal of education be “cultural literacy,” defining that goal in pluralistic and pragmatic rather than mono-cultural and absolutist terms.[lxxiv]  Cultural literacy is the understanding of different cultures through comparing and contrasting each with the others, transcending your own culture, and working with others toward common social goals.[lxxv]

GEFS argued that schools should help students transcend their everyday experiences and environments, deal with a diverse and changing world, and transform themselves and their society.  The report recommended a curriculum based on the “five fingers of education:” Language Arts, or transcending oneself through communication; Fine Arts, or self-transcendence through self-expression; Mathematics and Science, or transcending common sense through scientific methods; Social Studies, or transcending the here and now through history, geography and the social sciences; and the Vocations, or “putting into practice the bookish theory of the classroom.”[lxxvi]  While rejecting any standardized national curriculum, the report recommended a common core curriculum for students within each school so that every student “should be able to talk with his fellows…above the level of casual conversation” and students will be better able to organize themselves for social action.[lxxvii]

EAAY proposed to supplement the traditional academic curriculum with courses that start with everyday problems and then proceed to more complex intellectual issues, serving as an introduction and inducement to academic work by adapting the academic disciplines to everyday life.  Under EAAY, all students would take a “common learnings core” consisting of cultural education dealing with issues of family life, health, consumerism, and leisure, and citizenship education dealing with social problems, human rights and civic responsibilities.  All students would participate in community service to develop pro-social attitudes, vocational work to explore career choices, and political campaigns to “develop competence in political action.”  EAAY rejected educational tracking and ability grouping of students, proposing, instead, that students be placed in heterogeneous classes in which they would work on individual and group projects that reflect their varying interests and abilities.[lxxviii]

EAAY was intended as a proposal for continuous educational reform.  Calling for a “grass roots approach to improving programs in local schools,” the report proposed an ongoing series of community-school surveys of parents, teachers, students and community members that would help determine how schools operated and what should be included in the school curriculum.[lxxix]  The surveys asked adults what they thought they needed to know to be successful as adults, and asked children what they thought they needed to know to be successful as children.  This procedure was used with success in many school districts during the late 1940’s and early 1950’s.  It was a method of creating what today could be considered an authentic curriculum.[lxxx]

From the mid 1940’s through the early 1950’s, Conant vigorously campaigned in support of the proposals in GEFS and EAAY.  With his friend Dwight Eisenhower as a member of the EAAY board that Conant chaired,[lxxxi] Conant argued that education must focus on “the study of the economic, political, and social problems of the day” and promote the principles of liberal democracy.  To develop a social democracy, Conant insisted, you must have a social democratic educational system.[lxxxii]

With the advent of the Cold War and the McCarthy era in the early 1950’s, progressive educators came under withering attack as part of an overall assault on liberals and liberalism,[lxxxiii] and progressive education was maligned as an anti-intellectual and even subversive scourge on American education.[lxxxiv]  During this period, Conant never went back on his support for GEFS and EAAY and repeatedly cited them as curricular models.  But, under the pressure of the Cold War, he subordinated these proposals to arguments that the first priority of American schools must be “the education of their gifted students,” those who will become the scientists and leaders needed to defeat the Soviets.[lxxxv]

It was in this context that Conant produced his three best known reports on education, The American High School Today in 1959, Slums and Suburbs in 1961, and The Education of American Teachers in 1963.  They are empirical studies of social problems affecting schools – inadequate staffing and curricula in small schools, poverty and racial segregation in inner city schools, and inadequately educated teachers – in which Conant ties social policy recommendations to the progressive educational theories of GEFS and EAAY.  While the reports are distinctly elitist in tone, they emphatically reject the even more elitist proposals that were popular at that time.

In The American High School Today, Conant was concerned that Americans, particularly those in the middle and upper classes, suffered from delusions of self-sufficiency, unable to see the connections between themselves and other people, especially the less affluent.  As a result, Americans were often unable to see why they should support social institutions that benefit others, especially at their own expense.  Conant thought that heterogeneous classes and general education courses in comprehensive high schools would help remedy this problem.[lxxxvi]  He naively underestimated the invidious effects of social class and academic competition on school life, and overestimated the democratizing effects of heterogeneous homerooms and general education classes.  While school consolidation was widely undertaken during the 1960’s and 1970’s, few of the newly consolidated schools promoted the pluralism or adhered to the restrictions on ability grouping that Conant proposed.[lxxxvii]

In Slums and Suburbs, Conant rejected the idea that black schoolchildren were genetically inferior to whites in intelligence and called for expanded jobs programs, better social service programs, and greater spending on inner city schools.[lxxxviii]  In The Education of American Teachers, Conant repeated arguments he had previously made in favor of the MAT program, proposing that prospective teachers take a relatively equal number of courses in pedagogy and academic subjects, and insisting that in order to obtain permanent certification, teachers should demonstrate their knowledge of the social and cultural backgrounds of the students in their schools.[lxxxix]

While the tone of Conant’s educational writings changed during the Cold War, the substance did not.  As a top-down democrat, he consistently throughout his career placed greater emphasis than would participatory democrats on the education of higher achieving students, and this emphasis was even greater during the 1950’s and 1960’s.  But Conant continued during this period to promote progressive principles of interdisciplinary and problem-solving curricula, student-centered teaching methods, pluralistic schools and heterogeneous classrooms, and greater equality in both educational opportunities and outcomes.

Conant’s Educational Legacy

Although Conant was widely considered to be successful at almost everything he did, he did not agree.  Commenting in 1977, a year before his death, on trends in politics and education, Conant complained that “Everything I’ve worked for has been rejected.”[xc]  He had good cause to lament.

Politically, liberal Republicans were a dying breed by the late 1970’s[xci] and Conant was having second thoughts about the Cold War.[xcii]  By his own standards, many of Conant’s actions during the Cold War were not exemplary.  He frequently said one thing privately while publicly doing the opposite, and orchestrated a massive campaign of deception in order to gain popular support for the government’s Cold War policies.  His behavior seems even more reprehensible to those, like the author of this article, who are old enough to have lived through the Cold War and who viewed it as Conant initially did in the 1940’s: that the Soviet Union did not pose a military threat sufficient to warrant an all-out arms race and the militarization of American foreign policy; and that domestic Communists were a relatively benign group – sometimes helpful in the labor and civil rights movements, sometimes harmful, mainly to themselves, in their blind support for the Soviet Union.  To those who see the breakup of the Soviet empire and the historical revelations of the last ten years as further confirmation of this view, the Cold War was a terrible mistake, a mistake for which Conant bears significant responsibility, as I think he began to see toward the end of his life.

In education, Conant was not doing much better.  Shopping mall high schools, racial segregation, standardized testing, traditional curricula and mechanical teaching methods – all of which he had opposed – were the norm in the late 1970’s, as they still are today.

In evaluating Conant’s failure to achieve his goals, or even sometimes to practice what he preached, I think that the major flaw in his theories and practices was his elitist concept of leadership.  Conant’s belief that social transformation comes from the top down, and his determination to stay within the policy elite at almost all costs, forced him into all sorts of theoretical and practical contradictions.  Conant expected from his Power Elite the sort of long-term thinking and pro-social consciousness that it may have been realistic to expect from the Bob LaFollettes and Henry Wallaces of his youth, but that was scarcely in evidence by the 1970’s and has not been since.  Conant’s peers let him down, but so did his premises: there is strong reason to believe, and I would so argue, that the sorts of progressive educational and social reforms Conant wanted can only be achieved from the bottom up.[xciii]

I conclude, nonetheless, that Conant’s ideas can be purged of their elitist undertones and still resonate with progressive theorists today.  Among other things, General Education in a Free Society and Education for ALL American Youth remain two of the most interesting and promising proposals of the last hundred years.  Moreover, Conant faced during the 1940’s and 1950’s the same sorts of questions about school choice, privatization, tracking, standardized curricula and standardized testing that educators are facing today.  At a time when standardized curricula and testing have become the rage, and technology and quantification have become the standards for all knowledge, having opinions to the contrary from someone like Conant – a founder of ETS, world-renowned scientist, and Republican stalwart – constitutes important support for those who would buck the current trends.

Finally, Conant represents a time, not so long ago, when progressive reform was at least on the left bank of the mainstream, and a broad coalition of educators rallied around a common program of reform, even if from somewhat different perspectives – top-down for those like Conant, bottom-up for others.  A reconsideration of James Conant recovers a time when “radical” was even for many Republicans an ideal, “liberal” a term of praise, “conservative” a dirty word, and social democracy the goal of education.  This should be considered a valuable legacy for educational theorists today.

[i] Gordon Swanson, “The Hall of Shame,” Phi Delta Kappan, 74, 10 (June 1993): 797.

[ii] James Conant, The Child, the Parent and the State (New York: McGraw Hill, 1959), 94; James Conant, Scientific Principles and Moral Conduct (Cambridge: Cambridge University Press, 1967), 37; James Conant, My Several Lives: Memoirs of a Social Inventor (New York: Harper & Row, 1970), 536; James Hershberg, James B. Conant: Harvard to Hiroshima and the Making of the Nuclear Age (New York: Alfred Knopf, 1993), 9, 13, 120, 512.

[iii] Daniel Tanner, Secondary Curriculum (New York: Macmillan Publishing Co., 1971), 17.

[iv] Diane Ravitch, The Revisionists Revisited: A Critique of the Radical Attack on the Schools (New York: Basic Books, 1978); Walter Feinberg, Harvey Kantor, Michael Katz & Paul Violas, Revisionists Respond to Ravitch (Washington, DC: National Academy of Education, 1980).

[v] For example, see Edgar Gumbert & Joel Spring, The Superschool & the Superstate: American Education in the Twentieth Century, 1918-1970 (New York: John Wiley & Sons, 1974), 40, 78-79, 137-139; David Tyack, The One Best System (Cambridge, MA: Harvard University Press, 1974), 276; Walter Feinberg, Reason and Rhetoric: The Intellectual Foundations of 20th Century Liberal Educational Policy (New York: John Wiley & Sons, 1975), 153-155; Paul Westmeyer, A History of American Higher Education (Springfield, IL: Charles Thomas, 1985), 102; Joel Spring, The American High School, 1642-1985 (New York: Longman, 1986), 287; Clarence Karier, The Individual, Society and Education (Urbana: University of Illinois Press, 1986), 255; Swanson, “The Hall of Shame,” 797-798; Peter Hlebowitsh & Kip Tellez, American Education: Purpose and Promise (Belmont, CA: West/Wadsworth, 1997), 257; Dean Webb, Arlene Metha & Forbes Jordon, Foundations of American Education (Upper Saddle River, NJ: Prentice-Hall, 2000), 220; but, to the contrary, some recent appreciations of Conant are: Fred Hechinger, “School for Teenagers: A Historic Dilemma,” Teachers College Record, 94, 3 (Spring 1993): 522-539; Jurgen Herbst, The Once and Future School: Three Hundred and Fifty Years of Secondary Education (New York: Routledge, 1996), 181; John Brubacher & Willis Rudy, Higher Education in Transition (New Brunswick, NJ: Transactions Press, 1997), 424-426.

[vi] Larry Cuban, “Managing the Dilemmas of High School Reform,” Curriculum Inquiry, 30, 1 (Winter 2000): 106.

[vii] James Conant, The American High School Today (New York: McGraw Hill, 1959).

[viii] Hyman Rickover, Education and Freedom (New York: E.P. Dutton & Co., 1959).

[ix] Conant, American High School Today, 37, 63.  He later raised his proposed minimum school population to 750 students in James Conant, The Comprehensive High School (New York: McGraw Hill, 1967), 2.

[x] Conant, American High School Today, 20, 37, 46, 48, 63.

[xi] Conant, Education in a Divided World, 234.

[xii] See Benjamin Barber, A Passion for Democracy (Princeton, NJ: Princeton University Press, 1998), 95-110; Benjamin Barber, Strong Democracy: Participatory Politics for a New Age (Berkeley, CA: University of California Press, 1984).

[xiii] Barber, Strong Democracy, XIV; Barber, A Passion for Democracy, 6, 10.

[xiv] James Conant, Slums and Suburbs (New York: McGraw Hill, 1961), 109.

[xv] Paul Bartlett, “James Bryant Conant,” Biographical Memoirs (Washington, DC: National Academy Press, 1983), 107.

[xvi] Quoted at Hershberg, James B. Conant, 578.

[xvii] James Conant, Two Modes of Thought: My Encounters with Science and Education (New York: Trident Press, 1964), 13-14.

[xviii] Conant, My Several Lives, 10; Hershberg, James B. Conant, 13.

[xix] Gregory Baker, Religion and Science: From Swedenborg to Chaotic Dynamics (New York: Solomon Press, 1992), 13-14, 21-25; Inge Jonsson, Emanuel Swedenborg (New York: Twayne Publishers, 1971), 14, 40, 72, 79.

[xx] Conant, My Several Lives, 15, 19; Hershberg, James B. Conant, 27.

[xxi] Conant, My Several Lives, 44; Hershberg, James B. Conant, 38-39, 48-49.

[xxii]  Martin Saltzman, “James Bryant Conant and the Development of Physical Organic Chemistry.” Journal of Chemical Education, 49, 6 (June 1972): 411; Hershberg, James B. Conant, 55-56.

[xxiii] Harry Passow, American Secondary Education: The Conant Influence (Reston, VA: National Association of Secondary School Administrators, 1977), 3; George Kistiakowsky, “James B. Conant, 1893-1978,” Nature, 273, 5665 (June 29, 1978): 793.

[xxiv] James Conant, Germany and Freedom (Cambridge, MA: Harvard University Press, 1958), 26-27; Conant, My Several Lives, 140-145, 373.

[xxv] James Conant, On Understanding Science: An Historical Approach (New Haven, CT: Yale University Press, 1947), XII; Conant, My Several Lives, 236, 242, 272, 274, 298; Hershberg, James B. Conant, 157, 170, 325.

[xxvi] James Conant, Anglo-American Relations in the Atomic Age (London: Oxford University Press, 1952), 17-18; James Conant, Modern Science and Modern Man (New York: Columbia University Press, 1952), 12-16.

[xxvii] Stefan Morawski, The Trouble with Postmodernism (London: Routledge, 1996), 2; Stanley Grenz, A Primer on Postmodernism (Grand Rapids, MI: William B. Eerdmans Publishing Co.), 7, 34, 40-46.

[xxviii] Conant, On Understanding Science, 25, 36, 91; Conant, Science and Common Sense, VIII; Hershberg, James B. Conant, 410, 860-footnote 84; Thomas Kuhn, The Copernican Revolution (New York: Vintage Books, 1957), IX.

[xxix] Conant, On Understanding Science, 11-12; Conant, Science and Common Sense, 8, 10, 15, 25-26; Conant, Modern Science and Modern Man, 19, 22, 23, 54, 58, 62; Conant, Two Modes of Thought, 13, 14, 15-17, 18, 83; Conant, Scientific Principles and Moral Conduct, 15-16, 25; Philip Kitcher, “A Plea for Science Studies,” in A House Built on Sand: Exposing Postmodernist Myths About Science, ed. Noretta Koertge (New York: Oxford University Press, 1998), 34, 36; Bruno Latour, Science in Action (Cambridge, MA: Harvard University Press, 1987), 2-4, 12.

[xxx]  S. Morawski, The Trouble with Postmodernism (London: Routledge, 1996), 2; Richard Rorty, Consequences of Pragmatism (Minneapolis: University of Minnesota Press, 1982), XXXIX; P. Feyerabend, Against Method (New York: Verso, 1988), 189.

[xxxi] William James, “The Will to Believe,” in Essays on Faith and Morals, ed. R.B. Perry  (Cleveland: World Publishing Co., 1962), 32-62.

[xxxii] Conant, On Understanding Science, 30; Conant, Science and Common Sense, 10, 17, 30-31; Conant, Modern Science and Modern Man, 82, 88; Conant, Two Modes of Thought, 33; Conant, Scientific Principles and Moral Conduct, 38.

[xxxiii] James Conant, Science and Common Sense (New Haven, CT: Yale University Press, 1951), 25-26; Conant, Modern Science and Modern Man, 54, 62; James Conant, Scientific Principles and Moral Conduct (Cambridge: Cambridge University Press, 1967), 8, 29.

[xxxiv] Conant, Anglo-American Relations in the Atomic Age, 32, 37-38; Conant, Scientific Principles and Moral Conduct, 34-35; John Gribbin, Almost Everyone’s Guide to Science (New Haven, CT: Yale University Press, 2000), 45-47.

[xxxv] Conant, On Understanding Science, 22; Conant, Science and Common Sense, 38-39; Conant, Two Modes of Thought, 82-83; Garvin McCain & Erwin Segal, The Game of Science (Belmont, CA: Brooks/Cole Publishing Co., 1969), 80.

[xxxvi] Conant, Modern Science and Modern Man, 66-67.

[xxxvii] Conant, On Understanding Science, 11; Conant, Modern Science and Modern Man, 66-67.

[xxxviii] Conant, Anglo-American Relations in the Atomic Age, 17-18, 23; Conant, Modern Science and Modern Man, 12-13, 16, 30.

[xxxix] Hershberg, James B. Conant, 466, 474, 482.

[xl] Hershberg, James B. Conant, 82, 93, 325, 404.

[xli] Conant, Modern Science and Modern Man, 6.

[xlii] Conant, My Several Lives, 11; Hershberg, James B. Conant, 14.

[xliii] James Conant, Education in a Divided World: The Function of Public Schools in Our Unique Society  (Cambridge, MA: Harvard University Press, 1948), 30-31, 172-173, 178.

[xliv] James Conant, “Wanted: American Radicals,” The Atlantic Monthly, 171, 5 (May 1943) 43; Conant, Education in a Divided World, 4-7; James Conant, Germany and Freedom (Cambridge: Harvard University Press, 1958), 67-69.

[xlv] Conant, “Wanted: American Radicals;” D.W. Brogan, Politics in America (Garden City, NY: Doubleday & Co., 1960), 37-54; Clinton Rossiter, Parties and Politics in America (Ithaca, NY: Cornell University Press, 1960), 107-151; Joyner, The Republican Dilemma; Rae, The Decline and Fall of the Liberal Republicans.

[xlvi]  Conant, The Child, the Parent and the State, 102.

[xlvii] Conrad Joyner, The Republican Dilemma: Conservatism or Progressivism (Tucson, AZ: University of Arizona Press, 1963); Nicol Rae, The Decline and Fall of the Liberal Republicans: From 1952 to the Present (New York: Oxford University Press, 1989).

[xlviii]  Conant, My Several Lives, 41, 68-69, 71; Hershberg, James B. Conant, 38, 42, 61.

[xlix]  Conant, Germany and Freedom, 4.

[l] Conant, My Several Lives, 212, 308, 320-322.

[li] Conant, My Several Lives, 49; Hershberg, James B. Conant, 120.

[lii] Conant, My Several Lives, 364, 374-381.

[liii] Conant, “Wanted: American Radicals;” James Conant, General Education in a Free Society (Cambridge, MA: Harvard University Press, 1945), 34.  Conant claimed for himself a somewhat larger role in GEFS than some historians have described for him.  For purposes of this article, the exact extent of Conant’s role in producing the report is not as important as the fact that he continuously thereafter supported the recommendations of the report.  Hershberg, James B. Conant, 236.

[liv] Conant, My Several Lives, 300.

[lv] Conant, Education in a Divided World, 21, 24, 218.

[lvi]  Conant, Education in a Divided World, 172-173; Hershberg, James B. Conant, 435.

[lvii]  Quoted at Hershberg, James B. Conant, 322.

[lviii] Conant, The Child, the Parent and the State, 33.

[lix]  Hershberg, James B. Conant, 360, 462.

[lx]  Conant, My Several Lives, 506, 509, 512; Hershberg, James B. Conant, 384, 390, 493, 498, 521, 674.

[lxi]  Conant, My Several Lives, 456; Hershberg, James B. Conant, 431, 435.

[lxii]  Conant, My Several Lives, 640-642; Hershberg, James B. Conant, 746, 751.

[lxiii] Hershberg, James B. Conant, 752.

[lxiv]  Hershberg, James B. Conant, 82, 89, 93, 276, 404.

[lxv] Conant, My Several Lives, 137, 189.

[lxvi]  James Conant, “Education for a Classless Society,” The Atlantic Monthly, 165, 5 (May 1940): 596.

[lxvii] Conant, My Several Lives, XV-XVI.

[lxviii] James Conant, The Education of American Teachers (New York: McGraw Hill, 1963),1-2; Conant, My Several Lives, 181, 185.

[lxix] Conant, My Several Lives, 417, 419.

[lxx] Conant, My Several Lives, 417, 424, 432; Nicholas Lemann, The Big Test: The Secret History of the American Meritocracy (New York: Farrar, Straus & Giroux, 1999), 3.  Conant claimed for himself a bigger role in the founding of the Educational Testing Service than described by Lemann in his seminal book.  For purposes of this article, the exact extent of Conant’s role is not as important as the fact that Conant initially supported standardized testing and then questioned it, citing what he considered to be progressive educational principles in both cases.

[lxxi]  Conant, The American High School Today, 62; Conant, My Several Lives, 419; Robert Hampel, “The American High School Today: James Bryant Conant’s Reservations and Reconsiderations,” Phi Delta Kappan (May 1983): 608-609; Lemann, The Big Test, 38, 78-79, 228.

[lxxii] James Conant, “America Remakes the University,” The Atlantic Monthly, 177, 5 (May 1946): 41-45; Patricia Graham (New York: Teachers College Press, 1967), 136.

[lxxiii] Conant, General Education in a Free Society; Educational Policies Commission, Education for ALL American Youth (Washington, DC: National Education Association, 1944); Paul Elicker, Planning for American Youth (Washington, DC: National Association of Secondary School Principals, 1951); Conant, Education in a Divided World, VII.  While some historians have characterized General Education in a Free Society as a conservative defense of the traditional academic disciplines – for example, Paul Westmeyer, A History of American Higher Education (Springfield, IL: Charles Thomas, 1985), 102 – most have described it as a progressive proposal for interdisciplinary and student-centered education – for example, Daniel Tanner & Laurel Tanner, Curriculum Development: Theory into Practice (New York: Macmillan, 1980), 445.

[lxxiv] Conant, General Education in a Free Society, IX, 135; Educational Policies Commission, Education for ALL American Youth, 21, 102, 225-226; Elicker, Planning for American Youth, 19.

[lxxv]  Conant, General Education in a Free Society, 4, 58; Conant, My Several Lives, 366, 368.

[lxxvi]  Conant, General Education in a Free Society, 10, 32, 33, 118, 128, 139, 153, 171.

[lxxvii] Conant, General Education in a Free Society,  33, 77, 114, 171, 192.

[lxxviii] Educational Policies Commission, Education for ALL American Youth, 71-71, 85-87, 234-238, 299; Elicker, Planning for American Youth, 8-9, 19; Harold Hand, “The World Our Pupils Face,” Science Education, 31, 2 (Summer 1947): 55-60; Harold Hand, “The Case for the Common Learnings Course,” Science Education, 32, 1 (Spring 1948): 5-11.

[lxxix] Educational Policies Commission, Education for ALL American Youth: A Further Look (Washington, DC: National Education Association, 1952), 88-89, 380.

[lxxx] Harold Hand, “Local Studies Lead to Curriculum Change,” Educational Leadership, 8 (January 1951): 240-243; Harold Hand, “Making the Public School Curriculum Public Property,” Educational Leadership, 10 (January 1953): 261-264.

[lxxxi] Educational Policies Commission, Education for ALL American Youth: A Further Look, V.

[lxxxii] Conant, Education in a Divided World, VII, 100, 106, 110.

[lxxxiii] Cremin, The Transformation of the School, 348-351.

[lxxxiv] Mortimer Smith, And Madly Teach (Chicago: Henry Regnery Co., 1949), 90; Albert Lynd, Quackery in the Public Schools (Boston: Little, Brown & Co., 1950), 35; Arthur Bestor, Educational Wastelands (Urbana, IL: University of Illinois Press, 1953), 81-100.

[lxxxv] James Conant, The Citadel of Learning (New Haven, CT: Yale University Press, 1956), V, 40, 42; Conant, The Child, the Parent and the State, 16, 34, 48, 76, 94; Conant, Slums and Suburbs, 136, 140; Conant, The Education of American Teachers, 6; James Conant, Shaping Educational Policy (New York: McGraw Hill, 1964), 4, 21-24.

[lxxxvi]  James Conant, Thomas Jefferson and the Development of American Public Education (Berkeley, CA: University of California Press, 1962), 61; Conant, The American High School Today, 7.

[lxxxvii] Landon Beyer, “The American High School Today: A First Report to American Citizens,” Educational Studies, 27, 4 (1996-1997): 319-337.

[lxxxviii]  Conant, Slums and Suburbs, 3, 4, 12, 36-37, 39; Hershberg, James B. Conant, 726-727.

[lxxxix] Conant, The Education of American Teachers, 7-8, 15, 71, 113.

[xc] Quoted at Hershberg, James B. Conant, 754.

[xci] Rae, The Decline and Fall of the Liberal Republicans, 155-156.

[xcii] Hershberg, James B. Conant, 752.

[xciii] For example, John Goodlad, Educational Renewal (San Francisco: Jossey-Bass, 1994). Goodlad, nonetheless, regards Conant as “one of my mentors” (29).  Also, Barber, Strong Democracy.



Strangers in an Estranged Land.
A Reexamination of Camus’ The Stranger
                  and Review of Daoud’s The Meursault Investigation               

Burton Weltman

Do not mistreat or oppress a stranger,

for you were strangers in Egypt.

Exodus 22:21

 A.  Dead Men Talking: Albert Camus’ Meursault and Kamel Daoud’s Musa.

“If the world were clear, art would not exist.”

Albert Camus. The Myth of Sisyphus.[1]

Does it matter if a literary work is widely misread in a way that is contrary to the intentions of its author and/or the plain meaning of the text?  It clearly matters if a legal text is misread.  The Second Amendment to the United States Constitution, for example, has recently been misread by a majority of the Justices on the United States Supreme Court as guaranteeing the right of people to keep guns in their homes and carry guns with them almost anywhere they want.  This is a misunderstanding of the intentions of the Second Amendment’s authors and a misreading of the plain language of the Amendment’s text that is so unreasonable and so contrary to the facts of the Amendment’s adoption as to be absurd.[2]  It is a misreading that has, however, contributed to the proliferation of guns and the epidemic of gun violence in the United States, and people are dying because of it.  It clearly matters.  But what about the misreading of a literary text?  Does that matter?

The premise of Kamel Daoud’s recent novel The Meursault Investigation[3] is that the misreading of a literary text does matter, and the narrator of Daoud’s book claims that Albert Camus’ novel The Stranger has almost invariably been misread for over seventy years since its publication in 1942.  The Stranger is the story of the murder of an Arab by a Frenchman in colonial Algeria.  The murderer’s name is Meursault and he is the narrator of the book.  Meursault has, much to his surprise, been found guilty of premeditated murder and sentenced to death for shooting the Arab.  He had assumed that he would be found guilty of the lesser offense of unpremeditated manslaughter or not guilty by reason of self-defense.  Although an appeal of his sentence is pending, Meursault tells his story while facing possible execution, and death envelops the book.  It opens with the death of Meursault’s mother, is punctuated by the Arab’s death, and closes with the prospect of Meursault’s death.  Meursault tells his story in deadpan language, and portrays himself as an emotionally deadened person who has endured life in a chronically depressed state.

For many reviewers over the years, Meursault has been seen as the ideal of an honest and dispassionate man, and an existentialist or absurdist hero.[4]  This seems also to be the view of the general reading public, based on comments provided on popular websites that can be taken as reflecting mainstream public opinion.  These websites include Wikipedia (Meursault is “often cited as an exemplar of Camus’ philosophy of the absurd and existentialism”)[5] and SparkNotes (Meursault represents “Camus’ philosophical notion of absurdity”).[6]  Amazon reports that The Stranger remains a best seller to the present day, as it is “a staple of U.S. high school literature courses.”[7]  It is, thus, a widely read and potentially influential book.

The narrator of The Meursault Investigation is an old man named Harun who seeks to dispel Meursault’s heroic image.  His argument is based on a critical rereading of The Stranger, and on providing a side-story to Meursault’s narrative, as well as a sequel to the events in the book up to the present day.  Harun is ostensibly the brother of the Arab murdered by Meursault, and he claims to speak for his dead brother.  Harun complains that decades of readers have failed to react to the fact that his brother (whose name is Musa) is not even named in The Stranger (he is merely called “the Arab”) and that nothing is told in the book about Musa or his family.

Since no one has previously spoken for Musa, Harun claims that readers have missed the underlying meaning of the events in The Stranger.  Only Meursault’s side of the story has been told, and Musa’s death has been seen only in the light of Meursault’s brilliant portrayal of his own pathetic life.  As a result, Harun argues, Meursault has effectively gotten away with murder in the public mind, and Meursault’s account of the killing has both trivialized murder and perpetuated racist views of Arab Algerians.  Although the story dates from 1942, Harun contends that people are still dying today because of the attitudes toward murder and toward Algerians presented by Meursault in the book.  In Harun’s mind, the public’s misunderstanding of the story clearly matters.

History is full of dead men talking, and the meaning of what they said and did is often important to us.  They help us to figure out who we are and what we ought to do.  That is why historians and Supreme Court Justices continually review and revise what they think the Founders meant to say in the Constitution.  The thesis of Daoud’s book is that it is also important to set the record straight as to the meaning and message of fictional dead men.  Fiction can influence people as fully as facts can.  There are, for example, lots of young people today who cite the wisdom of Professor Dumbledore from the Harry Potter books as though he were a real person.  Daoud has provided us with the novel case of a fictional character calling out another fictional character in order to get a fictional situation right.

Getting things right in a work of fiction is not, however, always easy.  It has been said that great books are those that can be reread over and over again with the reader getting something different each time.[8]  Great books, such as The Stranger, can legitimately be interpreted many different ways.  The same can be said for the United States Constitution.  One of the things that makes the Constitution great is that it is a living document that can be interpreted in different ways as circumstances change.  However, there are some interpretations of the Constitution that are just plain wrong, such as the Supreme Court’s recent ruling on the Second Amendment.  Similarly, there are some interpretations of a novel that are just wrong, and they can have consequences.

Both The Stranger and The Meursault Investigation are written with first-person narrators.  Interpretation is particularly tricky with first-person narration because it raises hard questions about the extent to which, and the ways in which, the narrator does and does not speak for the author.  It also raises questions as to the reliability of the narrator.  Conflating a first-person narrator with a book’s author, or assuming that the narrator is reliable, can lead to misunderstanding of a book.  In the cases of The Stranger and The Meursault Investigation, this problem has been exacerbated by the tendency of reviewers to focus on what they see as Camus’ philosophical views and Daoud’s social and religious views, and to ignore the psychological nuances and character development of the narrators in the course of the books.  The result is often a misunderstanding of both the authors’ views and the narrators’ characters.

It is my contention that neither Meursault nor Harun has been intended by their creators as a hero or a role model, and that neither of them can be taken as either reliable narrators or spokespersons for their authors.  Meursault’s narrative is essentially an exercise in what existentialists call “bad faith.”[9]  Jean-Paul Sartre, the existentialist-in-chief, described bad faith as dodging responsibility for the effects of one’s choices.  He claimed that one has to realize that when one chooses to do or not do something, one is choosing not only what one wants to be oneself, but also “choosing at the same time what humanity as a whole should be.”  People act in bad faith when they “believe their actions involve no one but themselves.”  For Sartre, “any man who takes refuge behind his passions, any man who fabricates some deterministic theory, is operating in bad faith.”[10]

Meursault fits this description.  He does not take responsibility for his actions or for the way his actions affect others, and he seeks to explain away the harms he has done to others.  The book begins with the excuse he gave when he asked his boss for time off to go to his mother’s funeral (“Sorry, sir, but it’s not my fault, you know”), and ends with him giving himself absolution for his actions (“I’d been right, I was still right, I was always right.”).[11]  His narrative is a sustained attempt to exonerate himself for his actions.

Harun’s story is essentially a guilt trip.  It is at first an attempt to avoid guilt, then a reluctant admission of guilt and, finally, an attempt to purge himself of guilt.  It is a circuitous narrative that starts with his blaming Meursault and the world at large for the death and indignity suffered by his brother, and the hardships suffered by him and his mother.  It ends with a confession and a mea culpa for committing the murder of a Frenchman.  He begins his story by distinguishing himself from Meursault and ends by identifying with him.  They are, he acknowledges, blood brothers under the skin.[12]

Harun is an alcoholic, a self-described blowhard, and a murderer.  He is no hero and he is not Daoud.  The consequences of misreading Daoud’s book have, however, been frightening.  As a result of things that Harun says about religion, a death sentence fatwa has been issued against Daoud by a radical Muslim cleric in Algeria.  Daoud has responded that “It was a fictional character in the novel who said those things, not me,” but to no avail thus far.[13]

The thesis of the present essay is that The Stranger has been widely misread and that The Meursault Investigation seems in danger of being similarly misunderstood.  With respect to The Stranger, I think that reviewers and readers often miss that Meursault is relating and reconstructing past events, not telling about things as they happen.  They also miss that Meursault is telling his story in the immediate aftermath of being condemned to death.  They mistakenly think that Meursault is speaking for Camus.  And, they mistakenly think that Meursault represents the absurd man that Camus promoted in his book The Myth of Sisyphus.

Critics often extol the at-best amoral Meursault as some kind of existentialist hero or romantic anti-hero.  This sort of misreading demeans the work of Camus who was, above all else, a passionate moralist.  In conflating Meursault with Camus, these critics have missed what seems to be Camus’ intent that readers empathize with Meursault and see something of themselves in him, even as they hopefully disagree with him and reject his behavior.  These critics effectively undermine the moral value of the book.

With respect to The Meursault Investigation, I think that reviewers are in danger of mistakenly treating Harun as a hero, a reliable narrator, and a spokesman for Daoud.  These mistakes would diminish the social and political meaning of the work.  And that matters.

B.  Meursault in the Face of Death: The Stages of Grief.

We live “as man condemned to death.”

Albert Camus. The Myth of Sisyphus[14]

“Aujourd’hui, maman est morte.”  These are the opening words of The Stranger.  They are generally translated as mother or mama died today.  The words seem to situate the narrator, Meursault, in the present, as though he is learning of his mother’s death at the time he is telling us about it.  The rest of that paragraph and the next also give the appearance that the narrator is describing what he is currently experiencing.  But then the narrative abruptly turns into what is clearly a description of the past, of thoughts, feelings, and events the narrator has previously experienced, and the narrative continues that way for the rest of the book.

Camus wrote The Stranger in the passé composé, the French compound past tense in which “être” or “avoir” is added to the past participle of a verb, just as “have” is added to the past participle in the English present perfect.[15]  The effect of using that tense is to produce the feeling of an indefinite past, as though the past continues into the present.  This seems to be part of what Camus is proposing in the book, that one cannot escape the past or responsibility for one’s actions.

Meursault’s story opens with a description of his mother’s death and her funeral.  These events are the alpha and omega of his story.  The facts he relates include that his mother died in a nursing home to which she had been sent by Meursault over her strenuous objections, and that he wandered about at her funeral without showing any interest or emotion.  The overwhelming importance of these facts to Meursault stems from his contention that the way he treated his mother and behaved at her funeral were the main reason he was convicted of first degree capital murder.

Meursault repeatedly complains that his murder trial seemed to be more about disparaging his character over the way he treated his mother and her death than about ferreting out the facts of the shooting.[16]  The prosecutor repeatedly railed against Meursault, insisting he was “morally guilty of his mother’s death,” and was “an inhuman monster wholly without a moral sense.”  The court was seemingly more concerned with Meursault’s mother’s death than with the Arab’s, and Meursault was apparently convicted of the premeditated murder of the Arab because he was found to have behaved badly toward his mother.[17]

Camus once facetiously said that the moral of The Stranger was that “In our society any man who does not weep at his mother’s funeral runs the risk of being sentenced to death.”[18]  And Camus’ narrator, Meursault, tries to use the absurdity of his trial to portray himself as the victim in his case.  Camus was not, however, justifying Meursault’s actions or criticizing Meursault’s conviction for murder.  Camus was criticizing a society that seemed more concerned with enforcing social conventions than with enforcing laws against murder, especially when the victims were Arabs.  And he was asking us to identify with Meursault, despite our objections to Meursault’s behavior.

Opinions of Meursault’s state of mind as a narrator, and as a character in his own story, have varied over the years.  To some reviewers, he is the soul of objectivity,[19] sensitivity,[20] and honesty.[21]  To others, he is “a clinical psychopath,”[22] who “cares about practically nothing.”[23]  But one thing these reviewers have had in common is that they treat Meursault as a reliable narrator and take his version of events at face value.  This is not plausible and does not seem to have been intended by Camus for at least two reasons.

First, Meursault is still in the process of appealing his death sentence as he is narrating his story.  He has a life-and-death interest in making himself look as sympathetic as possible.  We have to see his story as potentially self-serving, and as not necessarily reflecting events as they actually happened.  Camus portrays Meursault as an ingenious fellow, and Meursault tells what seems to be a tale designed to gain our sympathy and minimize our antipathy.  For example, he leaves out any account of what happened in the immediate aftermath of the shooting.[24]  The artfulness of his narrative is emphasized in The Meursault Investigation by Harun, who insists that Meursault’s story is a fiction designed to justify himself to posterity.[25]

Meursault comes across as a distressed person.  He repeatedly describes himself to the people around him, and portrays himself to us readers, as a person without deep emotions.  Most commentators take it for granted that Meursault was, in fact, that kind of person.  But we cannot take Meursault’s portrayal of himself as being the way he always was.  He may have been rendered emotionally numb by his recent experiences and his narrative may reflect that effect, or he may be dissembling for sympathy.

Second, Meursault has just been sentenced to death when he begins telling his story.  This is a key to his psychological state and his character development as he goes on.  His deadened picture of himself could be a result of shock.  He is seemingly in a state of shock as he begins the story, and his anxiety level increases toward the end as his execution date approaches.  As a result of his emotional wavering, Meursault’s story does not come out as well as he would have liked.  He does not make his best case for himself, either for his appeal or for posterity.  This is seemingly part of the story that Camus is telling us, through Meursault, about humans facing death.

Most reviewers treat Meursault’s narrative as being of a piece and his narrative tone as being uniform throughout.  This does not do justice to the psychological subtlety and complexity of Camus’ book.  The Stranger was Camus’ first published novel.  In his other works of fiction, the characters tend to be one-dimensional representatives of philosophical or social positions rather than complex persons. That is not the case with Meursault.   He is a complex character who morphs in the course of his tale.

Meursault’s narrative, in fact, seems to unroll in stages, almost like what have been described as the five stages of grief.[26]  He has just been told he is going to die, and his story seems to proceed from denial, which is ostensibly the first stage of grief, then to the next stages of anger, bargaining, depression and, finally, acceptance.  This is not to say that the book can be explained by some psychological formula, but that analyzing it in those terms can help illustrate the changes in Meursault’s narrative tone as he tells his story.

In the first stage of his story, Meursault essentially portrays himself as a victim of circumstances.  His mantra in this phase is “it’s not my fault.”[27]  He repeats this sentiment throughout the scenes of his mother’s funeral, which go on for many pages.  In a foreshadowing of his complaint about his trial, he complains that people at the funeral kept looking at him askance because he did not exhibit any emotion.  “I had an absurd impression,” he says, “that they had come to sit in judgment of me.”[28]  Meursault’s affect at this point is that of a pathetic person in a state of denial.

In the second part of his story and the second stage of grief, Meursault portrays himself as just an ordinary fellow who goes along to get along, and who follows the path of least resistance as he claims most people do.  He describes his relationships with his neighbors, his friend Raymond, and his girlfriend Marie in this segment.  Raymond is a pimp who beats up his Arab girlfriend, and who repeatedly says that he wants to be “pals” with Meursault.  Meursault claims that he does not know what that means.  But he hangs around with Raymond and helps him in his schemes, which eventually leads to Meursault shooting the Arab.  Meursault also repeatedly tells his girlfriend, Marie, that he does not love her, that the word love “had no meaning” for him.  But he also tells her that if she wants to marry him, “I didn’t mind.”[29]  The affect in this part of the story is defensive, as of a person who is upset at being picked on and just doesn’t want to be bothered.

It is at the close of this segment that Meursault commits the murder.  He, Raymond, and Marie were at the beach when they came upon “the Arab” and another Algerian Arab.  Meursault believed that the Arab was the brother of the girlfriend who Raymond had assaulted, and that the Arab had a knife and might be out for revenge.  Meursault was holding Raymond’s gun.  Meursault claims he was overpowered by the heat and confused by the glare of the sun so that, standing there with Raymond’s gun in his hand, “it crossed my mind that one might fire, or not fire – and it would come to absolutely the same thing.”  He had seemingly lost his sense of reality and self-control.  Later, when he shoots the Arab, he describes holding the gun in his hand and then “The trigger gave,” as though the shot just happened and he was not responsible for it.[30]  His attitude toward the murder is completely passive, a “things just happen” tone.  It is as though in describing the event, he is either still in a state of shock or he is trying to avoid responsibility for his action.

In the next stage of the story, Meursault describes the police interrogation and the trial, and the failure of his attempts to work things out with the authorities.  He begins to sound persecuted and even paranoid.  It is not only that the police and the prosecutor keep describing him as “callous” and “inhuman,”[31] but that “there seemed to be a conspiracy to exclude me from the proceedings.”  The lawyers, court officials, reporters, and spectators all seemed to know each other, and fraternized as though they were members of a club that excluded him.  He felt like “a gate crasher.”  He wanted to tell them that “I was just like everybody else, quite an ordinary person,” but they would not listen to him.  He says that he realized then “how all these people loathed me,” and were out to get him.[32]

Following the verdict and sentencing, Meursault describes going into a state of anxiety and depression.  He is desperate to find a way out of being executed.  He says that finding “a loophole [in the law] obsesses me.”  Although he still has an appeal pending, and repeatedly expresses hope that the appeal will be successful, his tone is increasingly agitated.  He is assured by a visiting priest that “my appeal would succeed,” but he is, nonetheless, admittedly possessed by fear, and he goes into a rage at the priest when the priest suggests that he repent.[33]

Finally, on the last page of the book, Meursault says he has become “emptied of hope” and that “for the first time, the first, I laid my heart open to the tendre indifference of the universe.”[34]  This statement is generally taken by reviewers to mean that he has come to accept his fate and has realized the absurdity of life.  But, of course, he still has at this point an appeal of his sentence in the works, so it is not clear that he has really given up hope.  In addition, the French word tendre can be translated as “benign” or “tender.”  In using the word tendre to describe the universe, Meursault has essentially contradicted the idea that the universe is indifferent or that he has given up hope.  A benign or tender indifference is not indifferent.  It is sympathetic, caring, and agreeable.  The universe will, he seems still to hope, help him.

In sum, Meursault is a cunning but not entirely consistent apologist for himself.  My purpose in analyzing The Stranger in this way is not to reduce Camus’ complex novel to a series of formulaic stages.  It is merely to demonstrate that the emotional tone of Meursault’s story evolves as he narrates it, and that the narrator is not to be taken as totally reliable.  It is also the case that he is not a spokesperson for Camus nor is he intended as an existential hero.

C.  Meursault and the Myth of Sisyphus: Apathy versus Absurdity.

“To an absurd mind reason is useless and there is nothing beyond reason.”

Albert Camus. The Myth of Sisyphus.[35]

Meursault has been seen by most commentators as a spokesman for Camus, and as an ideal exemplar of the absurd person that Camus promotes in his philosophical work The Myth of Sisyphus. They see The Stranger and The Myth of Sisyphus, which were both published in 1942, as companion pieces, with Meursault representing Camus’ philosophy of absurdism.  Some of these commentators admire Camus’ philosophy and extend this admiration to Meursault as its exemplar.[36]  Others are appalled by what they see as Meursault’s callous and inhuman behavior, and extend this negative opinion to Camus’ philosophy.  Some have even accused Camus of racism based on Meursault’s attitude toward “the Arab” he has killed.[37]

Conflating Meursault with Camus and The Stranger with The Myth of Sisyphus began with an influential review of The Stranger in the mid-1940’s by Camus’ then friend Jean-Paul Sartre.  Sartre, who was already a famous philosopher and novelist, gave the neophyte Camus and The Stranger a strangely ambivalent review.  In the review, Sartre repeatedly insists that The Stranger is a fictional rendering of the philosophy in The Myth of Sisyphus, with the message that life is absurd.  Along the way, he also comments that Camus “seems to pride himself on quoting” philosophers in The Myth of Sisyphus “whom he seems not to have always understood.”  And he says that Camus’ methods of writing can be best compared to those of Charles Maurras, who was a notorious anti-Semite and fascist.  Sartre concludes that The Stranger, as a novel about absurdity, “aims at being magnificently sterile,” and succeeds.  With friends like this…[38]

The problem with all of these opinions, from that of Sartre on down to the present, is that The Stranger and The Myth of Sisyphus do not function as companion pieces.  They deal with very different issues.  The Myth of Sisyphus opens with the declaration that “There is but one truly serious philosophical problem, and that is suicide.”[39]  The book is, thereafter, a sustained argument that although life is absurd, it is for that very reason worth living.  Life and living with others are all that we have for sure, so we ought to hang onto them.  The Stranger is not a novel about suicide.  It is about murder.  It deals with the reaction of a character to having committed a murder and to his impending execution.  In any case, Meursault is in no way an exemplar of Camus’ philosophy in The Myth of Sisyphus.  To the contrary, he is better seen as a negative foil to Camus’ ideal of the absurd person.

In The Myth of Sisyphus, Camus describes and prescribes a philosophy of absurdity.  Absurdity is a “feeling of strangeness in the world” that results from the contradiction between our attempts to find transcendent meaning in the universe and our inevitable failure to do so.  The absurd person recognizes that “I don’t know whether this world has a meaning that transcends it.  But I know that I do not know that meaning and that it is impossible for me just now to know it.”  As a result, the absurd person tries “to live without appeal” to any higher authority, which includes God, the gods, or any metaphysical concepts, and to live without hope for life after death.[40]  This is not an easy thing to do.

Absurdity, according to Camus, is not a stable or secure position.  We are forced to live in a state of “permanent revolution” against ourselves because what we can rationally establish as truth conflicts with what we feel ought to be the case.  We are perpetually caught up in a contradiction between the inescapable conclusion that we cannot reasonably find any final answers, and our incorrigible feeling that they must exist.  “There is so much stubborn hope in the human heart,” Camus warns, “that hope cannot be eluded forever and that it can beset even those who wanted to be free of it.”  He concludes that “Absurdity, hope and death carry on their dialogue” in the mind of an absurd person, all of which makes for an impossible situation, but it is one the absurd person has to live with.[41]

The absurd person is best exemplified for Camus by the mythological figure of Sisyphus.  Sisyphus is variously portrayed in ancient Greek mythology as a villain and a hero, but all accounts agree that he was the craftiest of mortals, and that he frequently defied and outwitted the gods.  At one point, he even succeeded in enchaining Hades, the god of death, and thereby put a halt to humans dying.  Sisyphus was eventually defeated by Zeus, so that Hades was able to go back to work, and he was sentenced by the gods to eternally push a rock up a hill, only to have it fall back again so that he would have to push it up again.

Camus presents Sisyphus’ situation as a metaphor for the human condition.  We are all engaged in what seems like pointless activity.  But, Camus claims, Sisyphus does not despair.  Having defied the gods and rebelled against death on behalf of humankind, Sisyphus is actually happy in his perpetual toil.  In “his scorn of the gods, his hatred of death, and his passion for life,” Sisyphus epitomizes “the absurd hero.”  He is physically chained but metaphysically free.  And even as Sisyphus knows that the rock will roll back down each time he gets it to the top of the hill, he can feel that maybe this time it won’t.[42]

The Myth of Sisyphus opens with the question of whether suicide is warranted given the opacity of the universe.  Camus’ answer is an emphatic “No.”  An absurd person does not despair of his/her hopeless condition but, instead, revels in “my revolt, my freedom, and my passion” for life.  This is a passion that must include others.  People, says Camus, have to make their own meaning in life, and that is a social and collective activity.  In an absurd world, he insists, there is one value that is certain and that is the value of “human relations,” “friendship,” and “fraternity.”  The isolated individual is an idiot and the isolated life is without value.  Meaning comes from solidarity.  We live with and for others, so that whatever the universe is, we are all in it and in for it together.[43]

Critics who portray Meursault as some sort of existentialist hero extol what they see as his honesty in admitting his indifference to the deaths of his mother and the Arab.  This, they contend, makes him a forthright nonconformist.[44]  They also admire what they claim is his sensitivity to those around him.  He does not deliberately offend anyone, with the exception of the dead Arab.[45]  And they commend his “emotional detachment” from the awful things he has experienced in his life.  He is in their eyes a genuine Stoic.[46]  In sum, they see his life story as a “tragedy of integrity” and a “tragedy of the ethical,” the story of a man who was vilified at trial and convicted of murder because he failed to proclaim grief for his dead mother or love for his girlfriend.[47]  Camus himself apparently once said that Meursault was condemned because “he does not play the game,” “refuses to lie,” and “agrees to die for the truth.”[48]  But none of these things make Meursault either an existentialist or an absurdist, let alone a hero.

Existentialism has been described as the doctrine that existence precedes essence, and that we are what we are not and are not what we are.  That is, it is a philosophy of becoming and change in which people are seen as having continually to go beyond themselves and make choices as to what they become next.  Existentialism insists that we must take responsibility for who we are and what we do.[49]  Given this description, Meursault is clearly not an existentialist because he continuously refuses to take responsibility for his actions, and particularly eschews responsibility for shooting the Arab.  He repeatedly describes his life as something that just came to pass, and describes the shooting as though the gun just went off almost by itself.  He also insists that he has always been the same, has never changed, and has rarely made a deliberate choice.

Meursault is also not an absurdist as Camus describes that doctrine.  Absurdism requires a person to be constantly at war with himself, looking for where and how he is starting to believe in transcendent ideas, and then rejecting them.  The absurd person has to be vigilantly self-reflective, watching what he/she thinks and feels, continually engaging in a vigorous internal dialogue.  Meursault, to the contrary, is completely and admittedly unreflective.[50]  He is a creature of impulse, which is epitomized by his shooting of the Arab.

Some readers have mistaken Meursault’s complete absorption in the present as a sign of his existentialist and absurdist leanings.  But his self-absorption is merely a sign of selfishness and self-centeredness, which are contrary to the emphases of both existentialism and absurdism on our need to work with others to define and develop ourselves.  Significantly, Meursault is capable of sympathizing with others — he even feels sorry sometimes for his neighbor’s annoying dog — but he is incapable of empathizing with them.  He is emotionally and intellectually isolated, from others and even from himself.

Some readers have also mistaken Meursault’s unconventionality for Camus’ absurdity, but Meursault represents the apathetic person rather than the absurd person.  As he describes his life, what looks like nonconformity is really just indifference.  Deliberate rebellion is foreign to Meursault’s personality, as is passion.  He repeatedly tells his girlfriend that he does not know what love means, and he repeatedly says about choices he has to make, including the choice to shoot the Arab, that it makes no difference what he does.  The passion for life, the feeling of solidarity with others, and the revolt against injustice that characterize Camus’ absurd person are not sentiments that one could plausibly ascribe to Meursault.

Finally, while Camus emphasizes that the absurd person is energized in the face of death, defying its inevitability and gaining from it a passion for life, Meursault is depressed by his impending death and his narrative is a depressing tale told in a depressed voice.  In sum, Meursault is the opposite of the absurd person Camus is describing in The Myth of Sisyphus.

D.  Meursault and Murder: A Rebel without a Cause.

“I rebel – therefore we exist.”

Albert Camus: The Rebel.[51]

Camus’ next philosophical book after The Myth of Sisyphus was The Rebel, which was published in 1951.  It is an essay on “whether or why we have the right to kill.”  Camus says that the book extends to a consideration of murder the “train of thought which began with suicide and the absurd” in The Myth of Sisyphus.[52]  If one must not kill oneself, may one kill others?  Reviewers have generally construed The Rebel in light of the breakup of the political alliance and friendship between Sartre and Camus over the former’s support for revolutionary Communism and the latter’s support for reformist socialism.[53]

Camus argues that revolution, which tries to impose all at once a final regime of justice on society, inevitably leads to oppression and murder.  Only a reformist movement that recognizes limits on what it can do can move toward genuine social justice.  The anti-revolutionary position Camus takes in The Rebel is generally seen as a function of the end of alliances between socialists and Communists that formed during World War II and that broke up with the beginning of the Cold War.

But there is also a continuity in Camus’ thinking that goes back to the composition of The Stranger during World War II.  Most reviewers, having already taken for granted that The Stranger is a companion piece to The Myth of Sisyphus, have not made a connection between The Rebel and The Stranger.  But The Myth of Sisyphus is a book about suicide.  The Rebel and The Stranger are both books about murder.  In this light, The Stranger can be best seen as a fictional prologue to Camus’ philosophical speculations in The Rebel, not as a companion piece to The Myth of Sisyphus.  And in light of the precepts promoted by Camus in The Rebel, Meursault comes across as a negative foil to the ideal rebel.

Camus reiterates in The Rebel many key concepts from The Myth of Sisyphus.  He insists that absurdism means that “human life is the only necessary good” and that, therefore, murder, which like suicide destroys life, is wrong.  Murder splits the soul in two, which is a good description of Meursault in The Stranger, a person living a half-life.  Camus acknowledges that “The absurd is, in itself, contradictory” because it denies value judgments but judges life to be of value, which is a value judgment.  Absurdism is a prescription for contradiction because it requires us to continually rebel against beliefs that we inevitably fall into.  But these contradictions are life-giving, Camus contends, because stagnation is death.  Rebellion, which is a “protest against death” and which was Sisyphus’ crime and his glory, is life.[54]    

Camus also insists in The Rebel, as he did in The Myth of Sisyphus, that humans are social creatures, not isolated individuals, and that “Human solidarity is metaphysical,” not merely conventional.  The person who does not engage in collective activity, either rebelling against social oppression or in favor of greater social justice, is a stranger to humanity and a foreigner in the world.  The stranger is the self-imposed outcast who does not recognize that “dignity is common to all men,” or acknowledge the ultimate truth that “I rebel – therefore we exist.”[55]  That is, Camus concludes, we only truly exist to the extent we engage in collective rebellion.

Meursault seems to think of himself as a rebel, and many critics have thought likewise, because he does not conform to social conventionalities.  But he is an unrepentant murderer who does not stand for anything or with anybody.  He has no cause to which he is dedicated.  He is merely an isolated individual, who is strange to others and strange to himself.  In Camus’ terms, a person like Meursault is not a rebel and has only a form of half-life.

E.  Brothers in Blood: Meursault and Harun.  Who is the Stranger of the two?

“The absurdity of my condition, which consisted in pushing a corpse to the top of a hill before it rolled back down again, endlessly.”

Harun, the narrator of The Meursault Investigation.[56]

The words L’Etranger, the French title of Camus’ novel, can be translated as the stranger, the outsider or the foreigner.  It is usually translated as The Stranger, and most commentators see Meursault as the stranger.  He is a man estranged from himself and society.  But the Arab he kills is also a stranger and a foreigner to Meursault, just as the Frenchman Meursault is a stranger and foreigner to the Arab.  So, who is the stranger?  Who is the foreigner?

That is a question that Harun, the narrator of Daoud’s novel The Meursault Investigation, repeatedly asks.  Both the French and the Arabs saw themselves as the genuine Algerians.  Each claimed the land was rightfully theirs, and saw the others as foreigners.  They also knew little about each other and were effectively strangers to each other in the same land.  With the independence of Algeria from France, Harun contends, this did not change.  “Independence only pushed people on both sides to switch roles,” with the oppressed becoming the oppressors and the oppressors becoming the oppressed.[57]

Harun tells his tale over the course of several days to an auditor in an Algerian bar.  He claims to be telling the story of his brother, Musa, and, thereby, reclaiming Musa’s dignity and the dignity of Arab Algerians as a whole.  His story is replete with critical comments about the French colonial regime and the current Algerian government and society.  He is himself an outsider or stranger to contemporary Algerian society.  Harun is particularly critical of the conservative Islam that has increasingly been dominating Algerian culture.  It is these latter comments that have sparked the enmity of conservative Muslims toward Daoud, as though Harun is speaking for Daoud.  Although Harun makes comments about society and religion with which apparently Daoud agrees, Harun is too unreliable and erratic a narrator to be considered Daoud’s spokesman.  He tends to discredit himself.

Harun’s narrative is more of a rant than a story, and the facts come out in dribs and drabs with lots of inconsistencies.  Ostensibly correcting Meursault’s narrative with the story of his brother, Harun’s narrative is actually a winding, whining, long-winded complaint about his own life.  His father abandoned the family when Harun was a small child.  Harun’s mother then favored Musa and neglected Harun.  When Musa was killed, Harun’s mother was inconsolable and, according to Harun, thereafter made him feel like she wished he had died rather than Musa.  Harun idolized his brother, but also feared him.  Musa seems to have been a bit of a brute who mistreated Harun.  Harun actually knows very little about Musa’s life except what his mother told him, and she was an unreliable narrator who constantly changed her stories and magnified Musa’s achievements.  She also was obsessed with getting revenge for Musa’s murder, and put the burden on Harun to achieve that.  In sum, he portrays his mother as a monster who has pushed him around all his life.

When the revolution of Arab Algerians against the French began, Harun did not join the rebels, and was subsequently scorned by his neighbors for being an outsider to their liberation struggle.  In the immediate aftermath of the revolution, at his mother’s instigation, Harun shot and killed a Frenchman who was seeking sanctuary in their shed.  This man was a member of a neighboring family that had previously gotten Harun a place in a French school, at which Harun gained the education that enabled him to get a good government job.

It is not clear exactly what the relationship was between the dead man and Harun’s mother; the man may even have had some sort of sexual relationship with her.[58]  Harun was arrested by the new Algerian government for shooting the man, but was released and, as he puts it, was condemned to live rather than condemned to die as Meursault had been.  Harun seems incapable of having close relationships with anyone.  He is a very old man, but in his long life he has had one girlfriend for one summer, and then she left him.

Although some reviewers have rushed to crown Harun as “an existential hero”[59] or the ideal of an honest man,[60] and others have proclaimed him a liberal social reformer,[61] Harun does not present himself as a social reformer.  While he continually complains about the way Algeria was under the French and the way it is now, he has never done anything to change things.  One reviewer has aptly called him “a barroom kvetcher.”[62]  Like Meursault, he has been wandering through life without purpose, seemingly looking after only himself.  He is no hero and he is not Daoud.

Harun parades his alienation from society.  He is an atheist and an alcoholic in a deeply religious and abstemious society.  “I detest religions and submission,” he declaims.[63]  He is a stranger in an estranged land.  But he is no existentialist.  Like Meursault, Harun refuses to take responsibility for his actions, blaming everything on his mother, the French, his Arab neighbors, and circumstances out of his control.  With respect to the murder, he says “I blame my mother, I lay the blame on her.  The truth is, she committed that crime.”[64]

The underlying theme of the book is Harun’s feelings of guilt, which he seemingly tries to pass on to his auditor in the book and to readers of the book.  Like The Stranger, The Meursault Investigation is divided into two parts.  The Stranger is formally divided, punctuated by the murder of the Arab; The Meursault Investigation is informally divided, with Harun’s admission that he murdered the Frenchman as the dividing point.  In the first part, Harun focuses on the murder of Musa and on his own survivor’s guilt.  In the second part, he focuses on his murder of the Frenchman and his efforts to deal with his feelings of guilt about that.

Harun’s diatribe has the superficial appearance of spontaneity, but seems really to be orchestrated.  He releases information in drips and in ways that seem calculated for maximum shock to the auditor, but also for maximum sympathy.  His strategy is ostensibly to admit the worst about himself as a show of honesty, but he is really being manipulative.  When, for example, Harun finally admits his murder of the Frenchman, he at first claims that he did not know the man.  Eventually, however, he admits that he did know the man and, in fact, knew him well.  Harun first gets his audience used to the fact that he killed someone, and then gradually lets us know how awful his act really was.

Harun is an admittedly unreliable narrator.  At the close of the book, he even hints that he may be “just a compulsive liar.”[65]  In discussing The Stranger, for example, he talks at one point about “when the murderer leaves prison,” as though Meursault got the reprieve he had been seeking and was not executed.  But later in talking about Meursault, Harun refers to “after his execution,” as though Meursault had been executed.[66]  Harun also claims that this is the first time he has ever told his story, but he seems to be such a compulsive talker that this is hard to believe.

Harun’s story is laced with references to The Stranger, The Myth of Sisyphus and, significantly, The Rebel, and the word “absurd” abounds throughout.  I think the main point of Harun’s story is proclaimed midway through the book when he paraphrases the theme of The Rebel, saying that “whether or not to commit murder is the only proper question for a philosopher.”[67]  That is, when faced with the slings and arrows of outrageous fortune, do we have the right to murder our way out of our troubles?  And I think that Harun’s answer is “No, because you can never live it down.”

Harun illustrates this in a paraphrase of an image from The Myth of Sisyphus, when he compares his situation to “pushing a corpse to the top of a hill before it rolled back down, endlessly.”[68]  Instead of the rock that Sisyphus had to push around, Harun has to deal with guilt for two corpses, those of his brother and the Frenchman.  He seems to need to tell his story as a way of relieving himself of his guilt feelings, and thereby getting the corpses to the top of the hill.  But the guilt feelings will inevitably return again, the corpses rolling back down upon him, so that he probably has been compulsively telling his story over and over again all his adult life.  The story ends with an almost complete identification of the murderer Harun with the murderer Meursault, and the last pages of the book consist of Harun telling about how he started yelling at an Imam just as Meursault did to a priest.  Harun repeats virtually the same words that Meursault said at the end of his story.[69]

The theme of The Meursault Investigation was aptly stated by one reviewer as the importance of “individual responsibility,” which is something Harun does not display, nor did Meursault.[70]  In Meursault and Harun, we have characters pushed to the extreme of facing death as isolated individuals, Meursault through execution and Harun through old age.  They make some cogent social criticisms, because self-centered people are often acutely sensitive to slights and slight social injustices to themselves.  But they are also both selfish and at best amoral.  They are not held up by their creators as model citizens.  The moral of both books seems to be the need for human solidarity as a basis for individual responsibility.  Camus once commented that the trajectory of his work from The Stranger on was toward calling more insistently for human solidarity.  Daoud seems to be furthering that trajectory.

[1] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. p.73.

[2] As though any sane person during the 1780’s would want to keep a musket (the standard  gun at that time) in his/her house along with a bag of volatile gunpowder (needed for loading a musket) which could explode with the slightest change in humidity.  The reason the British were marching on Lexington and Concord during April, 1775, and fought the battles that are seen as the start of the American Revolution, was to confiscate the muskets and gunpowder Americans had stored in their militia armories that were located a safe distance from their homes.

[3] Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015.

[4]  Scherr, Arthur. “Camus’ The Stranger.” Baltimore Polytechnic Institute.bpi.edu  9/5/08.  Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.” International Journal of English Literature and Culture, Vol.2(6). 6/6/14.  Chiaromonte, Nicola. “Albert Camus Thought That Life Is Meaningless.” The New Republic. newrepublic.com  11/7/14.  Powers, John. “Algerian Writer Kamel Daoud Stands Camus’ ‘The Stranger’ on Its Head.” NPR Book Reviews. NPR.org. 6/23/15.

[5]  “The Stranger (novel).” Wikipedia. 1/23/16.

[6]  “The Stranger.” Sparknotes.com. 1/23/16.

[7]   “The Stranger.” Amazon.com Review. 1/23/16.

[8]  Adler, Mortimer. How to Read a Book.

[9]  Although Camus worked with Sartre and other existentialists, he repeatedly rejected applying the label existentialist to himself.  Camus rejected what he saw as the radical skepticism bordering on nihilism of some existentialists.  But I think that some of the concepts developed by his one-time mentor and colleague Sartre can be legitimately used in analyzing The Stranger.

[10]  Sartre, Jean Paul. “Existentialism is a Humanism.”  Existentialism is a Humanism. New Haven, CT: Yale University Press, 2007. pp.25, 47.

[11]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.1, 151.

[12]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.137, 143.

[13]  Messud, Claire. “The Brother of ‘The Stranger.'” New York Review of Books. 10/22/15.

[14]  Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. p.

[15]  Sartre, Jean Paul. “A Commentary on The Stranger.” Existentialism is a Humanism. New Haven, CT: Yale University Press, 2007. p.94.

[16]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. p.123.

[17]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.121, 128.

[18]  Quoted in Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.” International Journal of English Literature and Culture, Vol.2(6). June 6, 2014.

[19]  Gwyn, Aaron. “Albert Camus’ Poker-faced ‘Stranger’ Became a Much Needed Friend.” NPR Books, WBEZ.  August 10, 2014.  Chiaromonte, Nicola. “Albert Camus Thought That Life Is Meaningless.” The New Republic. newrepublic.com  November 7, 2014.

[20]  Scherr, Arthur. “Camus’ The Stranger.” Baltimore Polytechnic Institute.bpi.edu  9/5/08.

[21]  Hudon, Louis. “The Stranger and the Critics.” Yale French Studies #25. New Haven: Yale University Press, 1960. pp.62-63.  Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.”  International Journal of English Literature and Culture, Vol.2(6). June 6, 2014.

[22]  Podhoretz, Norman. “Camus and his critics.”  The New Criterion. November, 1982. at newcriterion.com

[23]  Poore, Charles. “The Stranger.” Books of the Times. The New York Times, April 11, 1946.

[24]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.76, 89.

[25]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.2, 7-8, 53.

[26]  See grief.com/the-five-stages-of-grief

[27]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. p.1.

[28]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. p.11.

[29]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.44, 52.

[30]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.72, 75, 76.

[31]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.79, 109-112, 120, 125, 128.

[32]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.104-105, 112, 124, 130.

[33]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. pp.136, 141, 143, 146, 148.

[34]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. p.154.

[35] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. p.27.

[36] Hudon, Louis. “The Stranger and the Critics.” Yale French Studies #25. New Haven: Yale University Press, 1960. p.60.  Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.” International Journal of English Literature and Culture, Vol.2(6). June 6, 2014.

[37] Podhoretz, Norman. “Camus and his critics.”  The New Criterion. November, 1982. at newcriterion.com  Ulin, David. “Review ‘The Meursault Investigation’ re-imagines Camus’ ‘The Stranger.'” Los Angeles Times. 5/28/15.

[38]  Sartre, Jean Paul.  “A Commentary on The Stranger.” Existentialism is a Humanism. New Haven, CT: Yale University Press, 2007. pp.76, 80, 81, 82, 84, 85.

[39] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. p.3.

[40] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. pp.11, 38, 39.

[41] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. pp.8, 22, 40, 76, 83.

[42] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. pp.89, 90.

[43] Camus, Albert. The Myth of Sisyphus. New York: Vintage Books, 1955. pp.41, 47, 66.

[44] Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.” International Journal of English Literature and Culture, Vol.2(6). June 6, 2014.

[45] Scherr, Arthur. “Camus’ The Stranger.” Baltimore Polytechnic Institute.bpi.edu  9/5/08.

[46] Gwyn, Aaron. “Albert Camus’ Poker-faced ‘Stranger’ Became a Much Needed Friend.” NPR Books, WBEZ.  August 10, 2014.

[47] Chiaromonte, Nicola. “Albert Camus Thought That Life Is Meaningless.” The New Republic. newrepublic.com  November 7, 2014.

[48] Quoted in Gnanasekarau, R. “Psychological Interpretation of the novel ‘The Stranger’ by Camus.” International Journal of English Literature and Culture, Vol.2(6). June 6, 2014.

[49]  Sartre, Jean Paul. Existentialism is a Humanism. New Haven: Yale University Press, 2007.

[50]  Camus, Albert. The Stranger. New York: Vintage Books, 1946. p.127.

[51] Camus, Albert. The Rebel. New York: Vintage Books, 1956. p.22.

[52] Camus, Albert. The Rebel. New York: Vintage Books, 1956. pp.4-5.

[53]  “Albert Camus.” Stanford Encyclopedia of Philosophy. plato.stanford.edu  “The Rebel: Essay by Camus.” britannica.com.  “Camus: Portrait of a Rebel.” Socialist Standard. worldsocialism.org

[54]  Camus, Albert. The Rebel. New York: Vintage Books, 1956. pp.6, 8, 10, 281, 285.

[55]  Camus, Albert. The Rebel. New York: Vintage Books, 1956. pp.17, 22, 280, 297.

[56]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. p.47.

[57]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.11, 34, 60.

[58]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.119,122.

[59]  Yassin-Kassab, Robin. “The Meursault Investigation by Kamel Daoud review – an instant classic.” The Guardian. 6/24/15.

[60]  Messud, Claire. “The Brother of the ‘Stranger.'” New York Review of Books. 10/22/15.

[61]  Moaveni, Azadeh. “‘The Meursault Investigation’ by Kamel Daoud.” Financial Times. 6/10/15. Battersby, Ellen. “The Meursault Investigation by Kamel Daoud review: L’Estranger danger.” Irish Times. 6/27/15.

[62]  “The Meursault Investigation.” Kirkus Reviews.

[63]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. p.66.

[64]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.77, 84, 88, 89.

[65]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. p.143.

[66]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.53, 55.

[67]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. p.89.

[68]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. p.47.

[69]  Daoud, Kamel. The Meursault Investigation. New York: Other Press, 2015. pp.140-142.

[70]  Powers, John. “Algerian Writer Kamel Daoud Stands Camus’ ‘The Stranger’ on Its Head.” NPR Book Reviews. NPR.org  6/23/15.

Beckett’s “Waiting for Godot”: Why do we keep waiting? Hope among the Hopeless.


                                       Burton Weltman

A country road.  A tree.  Evening.

Estragon, sitting on a low mound, is trying to take off his boot.

He pulls at it with both hands, panting.

He gives up, exhausted, rests, tries again.

Setting and action at the beginning of Act I of Waiting for Godot.

A guy trying to take off his boots, and failing.  That is how Waiting for Godot opens, and it is a prime example of the sort of action that takes place during the play.  There is, in fact, very little dramatic action at the beginning of the play, and none at the end.  In between, two ragged men, Estragon and Vladimir (Gogo and Didi for short), wander back and forth on a bleak stage and talk at each other as they wait for the arrival of someone named Godot, whom they may never have met (it isn’t clear) and know almost nothing about.  They are briefly interrupted by four other characters, a poltroon named Pozzo with his slave Lucky, and two messenger boys sent by Godot.  That’s it.

Godot was completed in 1949 by Samuel Beckett in the wake of the Great Depression and World War II.  It was a time when many Europeans were suffering from what we might today call post-traumatic stress disorder.  They were still trying to figure out what had hit them and what they could do about it.  Godot was part of a flood of existentialist works produced during the 1940’s and 1950’s by Beckett, Jean Paul Sartre, Albert Camus, Simone de Beauvoir and other writers.  Sartre and Camus, the leading figures in the existentialist group, emphasized the helplessness, hopelessness, and pointlessness of human existence.  Godot has been compared with their works.  The setting of Godot is bleak, the main characters wander about to no obvious purpose, and the play has no obvious plot.  I intend to show, however, that Beckett makes a very different point than Sartre and Camus do.

Godot has also been compared in recent years with the television comedy show Seinfeld.  Seinfeld has been famously characterized and satirized by its own characters as a show about nothing.  And although Seinfeld is amusing, it really is pretty much about nothing.  Godot has been similarly characterized as being about nothing because the play seems so unfocused and nothing dramatic happens.  But this comparison is weak.  Godot is amusing, but there is also something to the play that has led critics to describe it as “mesmerizing,” and induced many to rate it as a great work of art.  One may ridicule Godot, it has been said, but one “cannot ignore it.”[1]  Very few people would say that about Seinfeld.  What is it about Godot that accounts for its hold on audiences?  I hope in this essay to show what that is.

A great work of art has been described as one that can be experienced repeatedly with something new gained each time.  A great book is, for example, one that can be read over and over, with the reader getting more and different things each time.  A great play is one that can be seen many times with new insights each time.  The more a work can be profitably reread or re-watched, the more there is to it and the greater it is.[2]

That is something we can do with the plays of William Shakespeare and the novels of Charles Dickens, and that is why people today frequently read and reread, watch and re-watch works by these authors.  It is not something that most people can do with the plays of Shakespeare’s contemporary and friend Christopher Marlowe or with the books of Dickens’ contemporary and friend Edward Bulwer-Lytton.  Marlowe and Bulwer-Lytton were considered innovative and widely popular authors in their day.  But they have not stood the test of time, and their works are not often performed or read.[3]  Bulwer-Lytton has even had the singular misfortune to have named after him an annual contest for the worst opening sentence for a novel, having opened one of his novels with the oft ridiculed line “It was a dark and stormy night.”[4]

Great literary works like those of Shakespeare and Dickens appeal to us to consider them carefully.  They connect with us in a way that says that there is more to them than meets the eye at our first glance, and that we are missing something important if we don’t try to find it.  Great works are also multidimensional, not merely one-dimensional, sentimental appeals to our emotions or didactic appeals to our intellect.  They appeal to us and challenge us in a variety of ways, intellectually, experientially, imaginatively, and emotionally.

A literary work is said, for example, to have intellectual appeal if it challenges our ideas about things.  It has experiential appeal if it relates to things with which we are familiar but focuses on things we have ignored.  A work has imaginative appeal if it is couched in imagery that opens our eyes to something we are capable of seeing but have not seen before.  It has emotional appeal if it evokes empathy and emotionally involves us in unexpected ways.  A great work makes the strange familiar and the familiar strange.[5]  Godot does just that.  As I hope to demonstrate in this essay, the play appeals to our intellects, personal experiences, imaginations and emotions, and provokes us to think and feel about things in new ways.  It can also be seen over and over without exhausting its appeal.  In sum, it is well worth waiting for Godot.

Estragon: Nothing to be done.

Vladimir: I am beginning to come around to that opinion.  All my life I’ve tried to put it from me, saying Vladimir, be reasonable, you haven’t tried everything.  And I resumed the struggle.

Opening lines of Act I of Waiting for Godot.

“What is to be done?” asked Vladimir Lenin in the title of his famous book of 1902.  The book was written at a low point in working class struggles in Europe, at a time when apathetic workers seemed to be adapting to their oppression under the capitalist system.  Lenin’s answer was to build a revolutionary movement led by a vanguard cadre of radicals who would energize workers and show them the way.  Estragon parodies and critiques Lenin with his “Nothing to be done” as the opening salvo of the debate between him and his comrade Vladimir, which largely constitutes Godot.  Vladimir responds in Leninist fashion that whenever he feels at a low point, he thinks of all the things he has not yet tried, and then he resumes the struggle.

But there are limits to Vladimir’s stamina.  He is beginning to despair.  His despair recalls that of his namesake Lenin, wasting away in exile in Switzerland during January, 1917.  Lenin told a group of visiting comrades that they must reconcile themselves to the fact that there would probably be no revolution in Russia during their lifetimes.  But, he adjured, they must keep the faith and wait things out.  Quite unexpectedly, revolution broke out the next month in Russia and Lenin returned to lead it.  One never knows what can be done if one has not tried everything.

What is to be done?  Estragon and Vladimir are continually asking this question.  How should they spend their time while they wait for God knows what?  So, they play with words and play verbal games, just as Beckett wrote plays and played with words.  They goad each other with seemingly intentional misunderstandings, a way of making something of a conversation out of nothing.  “Let’s contradict each other,” Estragon suggests, and later insists “Let’s ask each other questions.”  After one such episode, Estragon rejoices that “We always find something, eh Didi, to give us the impression we exist.”  “Yes, yes, we’re magicians,” Vladimir responds.

They sprinkle their conversation with allusions to books, events and ideas that they have difficulty recalling and construing, just as Beckett sprinkles Godot with allusions to things for us, the audience, to try to decipher and ponder.  Vladimir, for example, referring to the story that one of the two thieves who were to be crucified with Jesus was spared, notes that only one of the four Gospels mentions the story.  Estragon’s reply is “Well?  They don’t agree and that’s all there is to it.”  Vladimir’s response is “But all four were there and only one speaks of a thief being saved.  Why believe him rather than the others?”  This is not only a question about the New Testament; it is a question about evidence and testimony of all sorts, and about ethical choices.

“It is a game, everything is a game,” Beckett once supposedly said about Godot.[6]  There is an almost endless number of things in the play for Estragon and Vladimir to think about, and for us too.  The play has enormous intellectual appeal and appeal for intellectuals.  Philosophy, religion, politics, and ethics are just a few of the themes with which it deals, and which the characters discuss.  It is not clear that Estragon and Vladimir make any progress in their speculations, but they greet each day and each other with an embrace and a celebration.

Vladimir: It’s a scandal!

Pozzo: Are you alluding to anything in particular?                                                            

Vladimir: To treat a man…like that…I think that…no…a human being…no…it’s a scandal.

Estragon: A disgrace.

Vladimir and Estragon reacting to Pozzo’s treatment of his slave Lucky in Act I.

“When Adam delved and Eve span, who was then the gentleman?” asked the Lollard priest John Ball, one of the leaders of the English Peasants’ Revolt of 1381.  In fighting against the oppression of the peasants by their overlords, Ball exhorted his followers to return to the simplicity and social equality of the Garden of Eden, where there was no private property or social hierarchy.  Ball’s appeal tapped into a traditional Christian utopian dream of the sort that in modern times was voiced by John Lennon in his song Imagine.  “Imagine there’s no heaven…Imagine there’s no countries…Imagine no possessions,” Lennon asks us.  And then, he says, imagine the wonderful consequences, with everyone living in peace, sharing the world, and living for today.

Lennon’s words are a surprisingly plausible way of describing the situation of Estragon and Vladimir in Godot.  They own virtually no property, and share what they have.  They do not demonstrate any tribal loyalties or prejudices.  They bicker a lot, but they do not actually fight.  They sometimes envy the seemingly wealthy Pozzo and hope for riches for themselves, but they don’t do anything about it.  They live totally for the day.  So, is Godot intended as a description of utopia?  Or a portrait of dystopia?  Is it a parody of the Garden of Eden?

Godot has been called “a mystery wrapped in an enigma.”[7]  It has also been declared so ambiguous as to be “whatever you want it to be”: let your mind make of it what you will.[8]  Although I think that is an overstatement, the play does make a strong appeal to the imagination.  A big part of this appeal stems from its minimalism.  Godot has a minimalist script calling for a minimalist setting and a minimalist performance.  It strips life down to a bare minimum of things, and focuses on the moment-to-moment and day-to-day survival of its two main characters.  This minimalism makes for a maximum of interpretations.  Godot has been produced as a comedy, tragedy, tragicomedy, farce, and melodrama.  It has been interpreted as a psychological, political, sociological, metaphysical, and/or religious drama.

The setting is stark, and the play has been described as “about nowhere and therefore about everywhere.”[9]  The stage set consists essentially of a dying tree and a rock.  If it is Eden, it is a devastated garden.  Beckett sets his characters in a barren physical and psychological environment in which they are starving for stimulation.  They seem to suffer from sensory and intellectual deprivation and, as a result, they often imagine things.  Upon first meeting Pozzo, for example, they mistake him for Godot.  Estragon explains: “That is to say…you understand…the dusk…the strain…waiting…I confess…I imagined…for a second.”  We in the audience, too, thought that our waiting might be over, that Godot had arrived.  But no, we must wait further.

The imagery is haunting.  It is a post-apocalyptic setting befitting a Europe devastated by economic depression and war.  But the setting also befits a post-Holocaust and post-Hiroshima world that has been stripped of its moral veneer.  It is a world that needs an imaginative revival.  Beckett provides a structure for our imaginations, and forces us to think about the possibilities.

Estragon: Well, shall we go?

Vladimir:  Yes, let’s go.

They do not move.

End of Act I of Waiting for Godot.

“To be or not to be, that is the question,” Hamlet proclaims, as he contemplates suicide and ponders what he should be and how to be it.  Hamlet’s answer is essentially a cop-out.  He claims that killing oneself may not end one’s problems because there may be an afterlife in which one’s tribulations may continue and even increase.  But Hamlet then goes on to pontificate in terms that seem to negate taking action of any sort, not merely committing suicide:

Thus conscience doth make cowards of us all,

And thus the native hue of resolution

Is sicklied o’er with the pale cast of thought,

And enterprises of great pitch and moment

With this regard their currents turn awry

And lose the name of action.

This is an elaborate excuse for inaction.  Hamlet is a play about someone who does not want to choose, and does not want to act.  Godot is a play about people who are making choices and taking action.  This is the case even when the result looks like indecision and inaction.

In a seeming parody and rebuke of Hamlet, Vladimir claims that “What are we doing here? That is the question (emphasis in original).”  Suicide is not the question.  Action versus inaction is not the question.  The question is what should we do and why should we do it, since we are always doing something whether we like it or not.  This is the core question of the play and one that almost all of us ask ourselves at least sometimes, some of us a lot.  With this question, the play appeals to the personal experience of the audience, all of us wanderers in a time and place not of our choosing, searching for some meaning and for something meaningful to do with our lives.

Vladimir’s question is also arguably a response to Albert Camus’ influential book The Myth of Sisyphus.  Sisyphus was written in 1942, while France was under Nazi occupation and the struggle of the French underground against the Nazi occupiers seemed hopeless.  The opening words of Sisyphus are “There is but one truly serious philosophical problem, and that is suicide.”  As with Hamlet, suicide is the question.  For Camus, living without hope is the answer.[10]

Sisyphus was a character from Greek mythology who was condemned for eternity to push a rock up a mountain, only to have it roll back down, so that he would have to push it up again.  Camus claims that Sisyphus embraces this “futile and hopeless labor” because “There is no fate that cannot be surmounted by scorn,” and Sisyphus’ scorn for the gods sets him free.  “Sisyphus,” Camus claims, “teaches the higher fidelity that negates the gods and raises rocks.”  He concludes that “One must imagine Sisyphus happy.”[11]  Camus’ answer to the question of suicide is heroic endurance, an acceptance of hopelessness, and happiness through scorn for one’s oppressors.

Although Beckett’s main characters in Godot repeatedly consider killing themselves, boredom seems to be the main philosophic question for them, not suicide.  In contrast with Sisyphus, Godot was written at a time when economic depression and war were giving way to economic and political recovery, and the conformity of mass society had become a main worry among intellectuals.  Cultural critics such as Theodor Adorno and Max Horkheimer were warning about the coming loss of individuality in what was becoming a homogenized Western society.[12]

Adorno and Horkheimer were the advance guard of a legion of critics concerned that an age of coerced uniformity under fascist dictators was being succeeded by an era of voluntary conformity, and by the boredom that comes from a paucity of imagination, genuine choices and meaning in people’s lives.  Beckett was writing at the dawn of the age of David Riesman’s The Lonely Crowd,[13] succeeded by William Whyte’s The Organization Man,[14] which eventually became Herbert Marcuse’s One-Dimensional Man.[15]  Self-suppression and willful conformity were their main concerns.  Western culture, they complained, was becoming a domain of intellectual, experiential, imaginative, and emotional vacuity.

Physical suicide was not the problem for these intellectuals.  Psychological suicide was.  Both Act I and Act II of Godot end with Estragon and Vladimir saying they will kill themselves tomorrow.  But we know they won’t.  They are merely bored, and are entertaining themselves with speculations about committing suicide.  It is just one of the many things they think of doing, but don’t do.  Estragon and Vladimir are continually thinking about how to be, even when they are speculating about how not to be.  They seem to be Beckett’s response to the complaints of mass society theorists.  Beckett’s everymen are as shabby as they can be, but they are anything but conformists.  There is no “Keeping up with the Joneses” with them.  Beckett seems to be saying that a tawdry tedium should not be confused with a vacuous conformity.        

In a contrast with Hamlet, who does not really answer his own question about being, Vladimir answers his.  He says “And we are blessed in this, that we happen to know the answer. Yes, in this immense confusion one thing alone is clear.  We are waiting for Godot to come.”  Unlike Hamlet, Estragon and Vladimir are not dithering around in a quandary about whether or not to do something.  They are doing something, according to Vladimir, even if, like Lenin biding his time in Switzerland, it is only keeping the faith and keeping themselves together while they wait for things to unfold.

“We are not saints,” Vladimir concludes, “but we have kept our appointment” with Godot, and that is something to be proud of.  It is also something with which we in the audience can empathize.  “Eighty percent of success is showing up,” Woody Allen once said.  “I can’t go on like this,” Estragon complains at one point.  “That’s what you think,” Vladimir responds, and they go on.  Vladimir and Estragon show up every day to wait for Godot.  Most of us would do well to do the same in our own lives.

Estragon: Well, shall we go?                                                                                                        

Vladimir:  Yes, let’s go.

They do not move.

End of Act II of Waiting for Godot.

“It’s all symbiosis,” Beckett is supposed to have once said about Godot.[16]  Beckett was extremely reluctant to comment on the meaning of his plays, but he seems hereby to have acknowledged that Godot is above all a play about human relationships.  Strip life down to its bare bones and what you have left is relationships.  Godot is frequently paired with Jean-Paul Sartre’s No Exit as a play about people who are trapped physically and psychologically, and who cannot escape the vicious cycles in which their lives, or their afterlives in the case of No Exit, unhappily revolve.  No Exit portrays what Sartre saw as the contradiction between being metaphysically free and psychologically imprisoned, a frequent theme in existentialist writing.

As Camus did with Sisyphus, Sartre wrote No Exit in Paris, in 1944, while France was still under Nazi occupation.  It is a story about three dead people, a man and two women, who are locked in a room.  The room is ostensibly Hell.  In the beginning, they marvel at the idea that where they are is Hell, and they anticipate that they will be okay if being locked in a room is the worst they will suffer for their misdeeds in life.  But then their personalities start to come into play.

The man is chronically depressed and despondent.  One of the women increasingly lusts after him.  The other woman increasingly lusts after the first woman and scorns the man.  He, in turn, seeks the scornful woman’s approval.  The net result is a vicious circle in which each of them preys on the others.  Toward the end of the play, the door to the room opens so that they apparently could exit the room.  None of them, however, chooses to leave.  They seemingly want or need to be tortured.  Psychologically, there is no way out for them.

The man sums up what the play says about the human condition with the phrase “L’enfer, c’est les autres,” or “Hell is other people.”  He also voices the moral of the story in the last words of the play: “Eh bien, continuons,” that is, “Well, let’s continue” or “Let’s get on with it.”  Written in circumstances similar to those in which Camus wrote Sisyphus, Sartre’s moral in No Exit is similar to Camus’ in Sisyphus.  We must resign ourselves to a living hell.  The moral of Godot is different.

There are three sets of symbiotic relationships in Godot: Estragon and Vladimir, Pozzo and Lucky, and the two messenger boys and Godot.  As Pozzo appears in the first act of the play, he is a pompous braggart and a wealthy bully.  He drags his slave Lucky around with a rope and routinely denigrates him.  Although Pozzo looks down upon Estragon and Vladimir for their poverty and for hanging about waiting for Godot, he goes hither and yon without seeming to get anywhere.  In the second act, Pozzo shows up having been accidentally blinded.  Now the slave is pulling him around by the rope.  Pozzo has gone from bumptious to pathetic, but Lucky remains his slave and neither knows how to get away from the other.  Theirs is a symbiotic master-slave relationship that has enslaved and degraded them both, but with no way out.

The two boys have an ambiguous relationship with Godot.  One is a shepherd, the other a goatherd.  Godot apparently mistreats and beats one of them, but it is not clear which.  This is like the Cain and Abel story in the Bible, in which God favors the shepherd Abel over the farmer Cain for no apparent reason.  From passages such as this, many interpreters of the play claim that “It seems fairly certain that Godot stands for God.”[17]  In this view, waiting for Godot would seem like an act of religious faith.  This view is reinforced by Vladimir’s response to Estragon’s question about Godot.  “And if he comes?” asks Estragon.  “We’ll be saved,” answers Vladimir, with salvation generally regarded as a religious goal.  But Godot and salvation could stand for any number of things for which people hope, from God to Lenin’s revolution.  I do not think it matters to the moral of the play.

The moral of the play, I think, resides in the relationship between Estragon and Vladimir.  Most interpretations of the play focus on the dourness of the characters’ situation and the hopelessness of their enterprise.[18]  It has been said that the play has “a unique resonance during times of social and political crisis,” and that its appeal is as a catharsis for people’s despair.[19]  I do not see the play as a catharsis for despair.  I propose, instead, that the play is a success story with a happy ending, thus making for the strong emotional connection that we feel for the characters.

Waiting for the arrival of Godot is primarily an excuse for Estragon and Vladimir to stay together.  The real reason they sit and wait is that they complement each other, care about each other, and take care of each other.  They bicker constantly and repeatedly consider going their separate ways, but they don’t go and they don’t separate.  “It’d be better if we parted,” Estragon suggests for the nth time.  “You always say that,” Vladimir responds, “and you always come crawling back.”

Beckett has been quoted as saying that “Estragon and Vladimir are like a married couple who’ve been together too long.”[20]  They go nowhere, but they have each other.  They seem pathetic at first, but not later.  In the repetition of their daily tedium, Estragon and Vladimir encourage each other to assume a dignified posture, and they appeal to us in their striving for integrity and meaning in their lives.  As they struggle at one point with Estragon’s boots, he observes that “We don’t manage too badly, eh Didi, between the two of us.”  “Yes, yes,” Vladimir agrees, and the conclusion seems to apply to more than just the boots.

Pozzo looks down on Estragon and Vladimir in the first act when he is flying high, but envies them in the second act when he has fallen and they have stayed the same.  Vladimir asks Estragon at one point whether he thinks Pozzo and Lucky have changed.  “Very likely,” Estragon responds, “They all change.  Only we can’t.”  It has been said that the play mocks us, the audience.  We sit in the theater doing nothing while watching actors who do nothing.  We fill our meaningless time watching characters who fill their meaningless time waiting for a phantasm.[21]  I do not agree.

I think the play is in the end a love story, a story of endless love that abides through boredom and makes the tedium of daily life worthwhile.  “How long have we been together all the time now?” Estragon asks.  “I don’t know.  Fifty years maybe,” Vladimir answers.  Out of almost nothing, out of merely their meager selves, Estragon and Vladimir make meaningful lives through caring about each other and taking care of each other.  The hopefulness in their relationship belies the sparseness of their situation.  It does not matter whether Godot ever shows up.  And that, I believe, best explains the hold that the play has on audiences, and why people continue to sit time and again with Estragon and Vladimir, waiting for Godot.

[1] Atkinson, Brooks. “Beckett’s ‘Waiting for Godot.’”  The New York Times, 4/20/56.

[2] Adler, Mortimer. How to Read a Book. New York: Simon & Schuster, 1940.

[3] Bulwer-Lytton is even reportedly responsible for convincing Dickens to change the ending of Great Expectations to leave open the possibility that Pip and Estella will get together, a change that clearly weakened the ending.

[4] The contest has been held annually since 1982 by the English Department at San Jose State University.

[5] Bruner, Jerome. Actual Minds, Possible Worlds.  Cambridge, MA: Harvard University Press, 1986.

[6] Quoted in https://en.wikipedia.org/wiki/Waiting_for_Godot

[7] Atkinson, Brooks. “Beckett’s ‘Waiting for Godot.’”  The New York Times, 4/20/56.

[8] Smith, David; Imogen Carter; & Ally Carnwath.  “In Godot we trust.”  The Guardian, 3/7/09. at http://www.theguardian.com

[9] Smith, David; Imogen Carter; & Ally Carnwath.  “In Godot we trust.”  The Guardian, 3/7/09. at http://www.theguardian.com

[10] Camus, Albert.  The Myth of Sisyphus and Other Essays. New York: Vintage Books, 1955. p.3.

[11]  Camus, Albert.  The Myth of Sisyphus and Other Essays. New York: Vintage Books, 1955. pp.90-91.

[12] Adorno, Theodor & Max Horkheimer.  Dialectic of Enlightenment.

[13] Riesman, David, et al.  The Lonely Crowd.  New Haven: Yale University Press, 1950.

[14] Whyte, William H.  The Organization Man.  New York: Simon & Schuster, 1956.

[15] Marcuse, Herbert.  One-Dimensional Man.  Boston: Beacon Press, 1964.

[16] Quoted in https://en.wikipedia.org/wiki/Waiting_for_Godot

[17] Atkinson, Brooks. “Beckett’s ‘Waiting for Godot.’”  The New York Times, 4/20/56.

[18] Atkinson, Brooks. “Beckett’s ‘Waiting for Godot.’”  The New York Times, 4/20/56.

[19] Smith, David; Imogen Carter; & Ally Carnwath.  “In Godot we trust.”  The Guardian, 3/7/09. at http://www.theguardian.com

[20] Smith, David; Imogen Carter; & Ally Carnwath.  “In Godot we trust.”  The Guardian, 3/7/09. at http://www.theguardian.com

[21] Gardner, Lyn. “Waiting for Godot review – a dystopian Laurel and Hardy after an apocalypse.”  The Guardian, 6/7/15. at http://www.theguardian.com

Better Dead than Red: Shakespeare’s “Hamlet” and the Cold War against Catholicism in Elizabethan England

                                                        Better Dead than Red:

Shakespeare’s Hamlet and the Cold War against Catholicism in Elizabethan England

Burton Weltman

The Devil Made Me Do It: The Ghost from Hell.

“Who is there?”  These are the first words of Hamlet, and they pose the key question of the play.  The question is asked by a soldier nervously standing guard on a dark night, worried by ominous reports of a ghost on the prowl.  Understandably upset by the nightly appearance and disappearance of the ghost, the soldier poses the underlying problem of Hamlet, and then himself disappears from the play.  The problem he poses is that of who and what is a person’s self.   How can one distinguish a real self from one that is false, a good self from one that is evil?  How can one know who and what is Hamlet?  How can one know who and what are the other living characters in the play?  Most important, who and what is the ghost?  Who really is there?[1]

The ghost is the key to Hamlet.  The action in the play all stems from his demand that Hamlet kill Claudius, the king of Denmark.  The ghost claims to be Hamlet’s father, the previous king.  He says he was murdered by Claudius, and he has come from Purgatory to demand that Hamlet avenge his murder.  Hamlet’s friend Horatio doubts the identity and intentions of the ghost, and battles the ghost’s influence on Hamlet throughout the play.  Hamlet himself swings back and forth between believing in the bona fides of the ghost and doubting them, repeatedly asking himself whether the ghost might be from Hell.  “The spirit that I have seen may be a devil,” he worries, “and the devil hath power t’ assume a pleasing shape, yea, and perhaps out of my weakness and my melancholy, as he is very potent with such spirits, abuses me to damn me.”[2]

So, who and what is the ghost?  The thesis of this essay is that Shakespeare intended his audience to see the ghost as an agent of the Devil, an evil spirit whose mission was to use the truth about the murder of Hamlet’s father as a means of promoting unholy havoc in Denmark.   The evidence for this interpretation is the ghost’s reference to Purgatory and to other elements of Catholicism that were rejected as perverse doctrines by Protestants in the sixteenth century.  The ghost represents Catholicism.  Hamlet’s Denmark, like Shakespeare’s England, was a Protestant country.  Within the Protestant ideology of those countries, the Catholic Church was an agency of the Devil.  The ghost’s espousal of Catholic doctrines would make him an agent of the Devil.  This is a conclusion that Shakespeare would have expected his Elizabethan audience to reach.

There is a perverse influence that pervades Hamlet and overcomes most of the characters in the play.  It is the influence of the ghost.  The tragedy of Hamlet is that Hamlet does not follow his better judgment that the ghost is an agent of the Devil.  Instead, he makes the fateful and fatal error of keeping the ghost’s story secret and promising to undertake an act of murderous revenge at the ghost’s behest.  This is a conclusion that Shakespeare would also have expected his audience to reach based on the anti-Catholic prejudice that they shared.

The underlying anti-Catholicism is an aspect of the play that most interpreters either miss or slur over.  In a production of Hamlet that I recently saw at the Stratford Theatre Festival in Canada, the actors and the stage were festooned with Catholic symbols, as though Hamlet and the other Danes were Catholics.  The point of noting this is not to highlight or promote the anti-Catholicism in the play.  But if one does not take it into consideration, one can miss other key points in the play.

This was the case, for example, in that Stratford production, which was played essentially as melodrama, with Hamlet as a romantic hero, rather than as the tragedy Shakespeare intended.  My conclusion is that an understanding of what Shakespeare intended in his plays requires an appreciation of the cold war against Catholicism in Elizabethan England, and of the anti-Catholicism embedded within Shakespeare’s plays and the roles that his characters play.

Hamlet is a play about role playing, about the question of “Who is there?”  The main characters self-consciously play different roles at different times, and display different selves depending on their audiences.  This theme is accentuated by the play within the play that is staged by Hamlet, a fictional representation of the sort of murder that Claudius committed against Hamlet’s father.  Hamlet hopes that by showing Claudius a fictional version of his misdeeds, Claudius might be provoked into publicly revealing his evil self and his guilt.

Claudius does react in a way that confirms his guilt to Hamlet and Horatio, who already suspect him, but Claudius is able to put on an act that convinces the others at the performance that he is merely unwell.  This scene highlights the problem that is posed in Hamlet.  The characters in the play, the ghost included, are playing a form of “prisoners’ game” in which they must continually decide which truths about themselves to reveal or hide, and whether and to what extent they can believe the others.  Deception and hypocrisy abound in this game.

“To thine own self be true,” intones Polonius, Claudius’ chief advisor.  It is his penultimate piece of advice in a series of platitudinous admonitions with which he has been regaling his son Laertes and his daughter Ophelia in an early scene of Hamlet.  This last exhortation is generally treated by interpreters of the play as a serious piece of advice, unlike the platitudes Polonius has previously been spouting.  In the Stratford production, the actor playing Polonius paused and took on a portentously solemn tone when he came to this line.

But this last admonition is, in fact, as inane as the bromides that preceded it because it begs the question of “Which self?”  Everyone in this play has many selves.  To which self should one be true?  The hypocrisy of Polonius’ advice is also immediately revealed when a few moments later he orders Ophelia to pretend indifference to Hamlet, whom she clearly and dearly loves.  That is, Polonius insists that Ophelia play true to herself in her role as a dutiful daughter, but be untrue to herself and play false in her role as a lover.  Hamlet also loses himself in the multiple roles he is trying to play, and ends up playing the fool to the ghost, the Devil and the hated Catholic Church.

Catholicism, Protestantism and Shakespeare: Situating Hamlet in his place and time.

Most modern-day admirers of Shakespeare, of whom I am one, would like to acquit the Bard of the conventional prejudices of his era.  England in the late 1500’s and early 1600’s was rife with sexism, anti-Semitism, racism and anti-Catholicism.  Since Shakespeare’s plays, like those of any writer, inevitably reflect the society in which he lived, his plays are full of examples of these prejudices.  They include sexism in The Taming of the Shrew, anti-Semitism in The Merchant of Venice, racism in Othello, and anti-Catholicism in King John.  Shakespeare’s plays have historically usually been performed in ways that accept and even promote these prejudices.

In most productions of Taming, for example, Kate’s last speech, in which she professes abject obedience to her husband, has been played as the moral of the story.[3]  In productions of Merchant, Shylock has often been “played by a comedian as a repulsive clown or, alternatively, as a monster of unrelieved evil.”[4]  The play has often also been retitled as “The Jew of Venice,” thereby focusing on Shylock and his religion.[5]  Othello has often been portrayed in the past as a lascivious African, which played into racist stereotypes of blacks.  The play has frequently been retitled “The Moor of Venice,” thereby focusing on Othello’s supposed racial difference.[6]

Since sexism, anti-Semitism and racism are offensive to most present-day sensibilities, modern interpreters have tried to re-imagine what Shakespeare might have meant so as to remove the sting of prejudice from lines and scenes that have previously been performed in invidious ways.  One of the great things about Shakespeare’s plays is that the same words can be spoken and enacted in different ways.  He gives interpreters an opportunity to stay true to the scripts yet perform the plays with a variety of different characterizations and actions.  Given this latitude, I think one can reasonably interpret the instances of sexism, anti-Semitism and racism in plays such as Taming, Merchant, and Othello as ironic rather than prescriptive.  One can, thereby, place Shakespeare in the position of obliquely critiquing rather than promoting those biases.

One could, for example, play Kate in Taming as retreating at the end of the play in the face of overwhelming pressure, but ready to resume the battle against sexism at a later date.  One could portray Antonio, the merchant in Merchant, and his colleagues as hypocrites who condemn Shylock for holding to a materialistic ethos and engaging in sharp practices of which they are themselves more guilty.  One could cast Othello as a swarthy North African no darker than the Italians with whom he lives and who taunt him as black merely because of his immigrant origins, as Irish were similarly taunted in the United States during the nineteenth century.

I do not, however, think that the same ironical approach can be taken with the anti-Catholicism in Shakespeare’s plays.  It is too pervasive in the plays and in Elizabethan society.  There are limits to what one can legitimately do with Shakespeare’s plays without rewriting or deleting the offensive parts, as some interpreters do, so that the plays are no longer Shakespeare’s.  Nor can one just ignore the anti-Catholicism, as many do, and interpret the plays as though it were not there.  Shakespeare had ideas about things, and a legitimate interpretation of his work must stay within the range of his ideas.  A different strategy must be employed with Shakespeare’s anti-Catholicism to save the integrity of the scripts without promoting the prejudice.

 

Papism, Communism, and Paranoia: Cold Wars and their Cultural Consequences.

The Protestant Reformation of the early sixteenth century triggered violent religious conflicts between Catholics and Protestants in England and most of Europe, some of which continue to the present day in places such as Ireland.  These conflicts were very similar to the Cold War between Communism and capitalism that occurred during the last half of the twentieth century.  This Cold War is in the living memory of those of us in the older generation.  It is also, hopefully, within the historical memory of younger people who have studied it in school.  A comparison of the recent Cold War against Communism and the Elizabethan cold war against Catholicism will help elucidate the circumstances in which Shakespeare composed his plays.

In the capitalist United States during the Cold War, and especially at the height of tensions during the 1950’s and early 1960’s, Communist countries were widely portrayed by the government and mass media as totalitarian dictatorships in which people were brainwashed into zombies.  People in these countries supposedly suffered through gray lives in slavish subjugation to an all-powerful government.  American Communists were, in turn, portrayed as traitorous agents of a monolithic movement that was steadily and stealthily taking over the world, forcefully conquering countries that were weakly defended militarily, and subversively undermining countries that were weakly defended morally.[7]

Communism was condemned as an absolute evil, with Communists acting essentially as agents of the Devil, and identified with the Devil’s color as Reds.  Since Communists generally eschewed religion, they were condemned as godless by political and religious conservatives, many of whom took this identification with the Devil literally.[1]  It was widely believed that once Communists took over a country, they created an all-embracing godless tyranny from which people could never escape.  From this portrait of Communism emerged the war cry of many conservatives during this period: “Better dead than Red,” that is, better to have a nuclear war that kills all life on earth than let Communism take over America.  Any cooperation with a Communist or tolerance of Communism anywhere was deemed an act of treason to the United States, to American ideals of freedom and democracy, and to God.[2]

Political conservatives during this period used anti-Communism as a club against liberals.  Any criticism of American society, whether of racism, sexism, inequality, or poverty, was condemned as a form of aiding and abetting the Communist enemy, even if, and especially if, the criticism was accurate.  Communists, the conservatives claimed, would seize on any fault or flaw in American society to create discontent and disorder, to discredit the legitimate authorities, and in this way seduce people into supporting Communism.[3]

Congressional Committees and vigilante organizations worked to eliminate alleged Communists (Commies), radicals (Commie symps) and liberals (Commie dupes) from working in the government, the schools, the professions, and the entertainment industry.  Almost every industry was affected.  If a person was named as a Commie, Commie symp or Commie dupe, the person’s name would generally appear on a blacklist and employers would be warned not to hire the person upon penalty of being boycotted or possibly even prosecuted.[4]  As a result of this red-baiting, as it was called, many progressive social movements that had been active during the 1930’s and 1940’s died out.[5]

In the wake of the Cold War, we can see today that the fears of Communism and measures taken against it were clearly excessive.  Although Communist regimes were invariably oppressive, they were also frequently incompetent.  Even if the Soviet Union posed some threat to the United States during this period, the Soviets were never in any position to invade Western Europe, let alone the United States.  Communism was, in turn, not a monolithic movement.  It took different forms in the various countries in which Communists held power and among the Communist parties that operated within capitalist countries.  Communist countries were, in fact, in almost constant conflict with each other, as were Communist parties.   Nor were Communist regimes totalitarian, whatever might have been the aspirations of their rulers.  This is shown by the fact that Communism in the Soviet Union and almost all of Eastern Europe fell peacefully and as a result of internal revolts by people who had just had enough of it.  These people were clearly not brainwashed zombies.

It is also the case that very few American Communists were spies or traitors.  The Soviet Union actually preferred to use mercenary spies who worked for money rather than American Communists who might be motivated by idealism.  Mercenaries were more reliable than idealists who might object to doing something that harmed the United States.  Most American Communists were motivated primarily by patriotism, however misguided.[6]  Nonetheless, many people’s lives were ruined in this country by misdirected anti-Communist attacks, and social progress was stalled.  Abroad, unnecessary wars were fought, cruel dictators were supported, and money was wasted on unnecessary armaments.

Anti-Communism also had a constricting effect on American culture, especially during the 1950’s and early 1960’s.  Controversial issues and social problems were generally avoided, and anti-Communist themes were awkwardly interjected, as writers, producers and directors of plays, movies and television shows bowed to Cold War priorities.  Their works were distorted and diminished in ways that were sometimes blatant but often subtle.  Playing into the common understandings of people at that time, anti-Communist themes were inserted in their works in ways that would have been recognized by people then, even though they might not be understood by audiences today.  The result has been widely considered a gray era in American culture.[7]

The work of Elia Kazan, one of the greatest movie directors of all time, exemplifies this effect.  Because of Kazan’s membership in the Communist Party during the 1930’s, he was summoned to appear before the House Un-American Activities Committee (HUAC) in 1953.  He had two choices at that point.  He could either testify against friends and colleagues who had been Communists or had been otherwise politically active in progressive causes, or be blacklisted from working as a director.  He chose to testify against his friends (“naming names,” this sort of testimony was called) and thereby saved his career.  But he was thereafter roundly criticized and ostracized by many of his former associates, Communist and non-Communist alike.

Stung by this criticism, Kazan made the movie On the Waterfront (1954), which glorifies snitching on one’s friends and colleagues to a government committee.  Although Communists do not appear in the film, which is about gangsters, the movie was clearly a defense of Kazan’s finking on his friends and a testament to anti-Communism.  It is a great movie because of the performances of the actors and Kazan’s filming, but the plot is overblown and overly melodramatic as a result of Kazan’s desire to justify himself and pay homage to HUAC.  The movie was essentially a testimonial in support of the damage done to American culture by HUAC and other anti-Communist organizations.[8]

Kazan bowed even lower to the anti-Communist crusaders in the film Viva Zapata! (1952), which was made just prior to his HUAC testimony.  It is a portrayal of the early twentieth century Mexican revolutionary Emiliano Zapata.  The movie is a cautionary tale about how a revolution can become corrupt and dictatorial and, as such, was a clear reference to the Soviet Union.  Kazan also insisted that the script include a fictional character named Fernando Aguirre.  Aguirre is a vicious revolutionary who turns on Zapata when he thinks Zapata is getting too soft, and who is clearly modeled after the 1950’s anti-Communist stereotype of a Communist agent.  Aguirre is an anachronism and out of place in the film.  The purpose of his character was not, however, aesthetic.  It was specifically to enable Kazan to tell HUAC that “This is an anti-Communist picture.”  That is, even though Communism had nothing to do with the Mexican Revolution and is not mentioned in the film, Kazan felt the necessity to distort and diminish his movie in order to placate the anti-Communist sentiment in the country.[9]

A similar Cold War of Protestants against Catholics occurred in England during Shakespeare’s time with similar effects.  If one substitutes the words Catholicism and Catholics for the words Communism and Communists, one can use essentially the same language and descriptions of the Capitalist-Communist Cold War to describe the conflict between Protestants and Catholics.  Each side portrayed the other as the Devil’s disciples.  Savage wars were waged between Protestant and Catholic countries, and cruel tortures were inflicted, in the name of God and the true religion.  Ordinary people could not avoid the conflict.  Everyone was forced to own up to being either Protestant or Catholic and, thereby, forced to take sides and take the consequences.[10]

England went back and forth several times during the sixteenth century between being controlled by Catholic regimes and Protestant regimes, each of which savaged adherents of the opposing religion.  The changes were abrupt and left many people in limbo, unsure which way to turn because turning the wrong way could be fatal.  As during the Cold War in America, families were split over the issue.  Friends turned against friends.  Neighbors spied on neighbors and reported them to the authorities.  Paranoia and hysteria were always just around the corner.

Catholics were disparaged by Protestants as Papists.  Just as American Communists were considered to be loyal to the Communist government in the Soviet Union rather than to the United States, English Catholics were considered to be loyal to the Pope and the Church in Rome instead of their Queen and country.  Hence the term Papist, someone who supposedly worships the Pope.  Similar to the Communists, Catholics were believed to be part of a monolithic international conspiracy that aimed to control the world through force or subversion.  Jesuit priests, the supposed vanguard of this conspiracy, whose stock-in-trade was allegedly using tricks of logic to seduce people into converting to Catholicism (hence the pejorative term “Jesuitical”), were accused of trying to worm their way into English society in order to subvert and pervert it.

As with Communists during the Cold War, Catholics were portrayed by Protestant leaders as traitors who could not be trusted, subversives who had to be rooted out of public life, and spies who had to be caught and even killed.  In 1559, a year after Elizabeth’s accession to the throne and abrupt reconversion of England from Catholicism to Protestantism, being a practicing Catholic was made illegal and saying Mass was made a capital offense.  Although these laws were honored more in the breach, they were designed to keep Catholics on edge and in line.  As a result, Catholics were forced to hold Mass in secret, which only reinforced Protestant fears of a subversive Catholic conspiracy.

The trials, tribulations and murder of Shakespeare’s fellow playwright Christopher Marlowe, who was charged with heresy and was reputedly a Catholic-Protestant double agent, attest to the dangers of stepping out of line.  Shakespeare was, thus, writing at a time when Protestants and Catholics were at each other’s throats, and in a place where being caught practicing Catholicism could get you killed.  These circumstances are reflected in Shakespeare’s plays.[11]

As with the Cold War against Communism, Elizabethan anti-Catholicism appears in retrospect to have been both excessive and irrational.  Catholics and Catholic countries did not constitute a monolithic movement manipulated by the Pope.  To the contrary, Catholic countries often disobeyed and even attacked the Pope, and were almost as likely to go to war against each other as against Protestant countries.  Likewise, different orders within the Catholic Church (Jesuits, Dominicans, Franciscans, et al.) were almost as opposed to each other as to Protestants.  Anti-Catholicism had, nonetheless, a significant effect on Elizabethan culture and society.

There has been speculation that Shakespeare’s father, who was born Catholic, remained a closet Catholic after the English Reformation and that Shakespeare had Catholic sympathies.[12]  Although there were Catholics in Shakespeare’s extended family, there is no evidence that he was a Catholic.[13]  In any case, whatever Shakespeare’s sympathies, the key fact is that he was writing for an overwhelmingly Protestant audience and for theaters that were being closely monitored by a fiercely Protestant government.  This was a government that, according to historian Michael Wood, “employed a network of informers, spies and bounty hunters, who pried into every aspect of people’s business affairs, their religion, and even their sex life.”[14]  Shakespeare’s family’s connections to the Catholic Church might have made him even more careful to be seen on the Protestant side of things.  If he had not adhered to the Protestant line, the Bard would likely have been debarred from public life.

Shakespeare set many of his plays in England and Italy during times when those places were under the religious hegemony of the Catholic Church, and in each of these plays he portrays Catholic priests, officials and doctrines in negative ways.  While Shakespeare never uses the terms Catholic or Protestant and never attacks the Catholic Church by name, he plays into the understanding that his audiences would have had of the differences and disputes between the religions, and he invariably comes down against the Catholics.  Obvious examples of this include the reprehensible representative of the Pope in King John, the warmongering churchmen in Henry V, and the foolish priest in Romeo and Juliet.

The merchant Antonio and the other Catholics in The Merchant of Venice are less obvious examples of Shakespeare’s anti-Catholicism until you recognize that money lending was prohibited by the Catholic Church but allowed by Protestant churches, and that Shakespeare’s father was a moneylender who had been arrested at least twice for usury by Catholic authorities.  Given these facts, Shakespeare was not likely to intend Shylock as a villain based on his being a moneylender, nor Antonio as a hero based on his opposition to moneylending.  Since Antonio engages in business practices that are portrayed in the play as comparable to usury, it is even less likely that Shakespeare intended him to be viewed as a hero.  Although they are rarely played in this way, Antonio and his Catholic colleagues seem intended by Shakespeare to be played as bigoted hypocrites.

The conflict between Protestants and Catholics is a theme that I think is not sufficiently acknowledged in most interpretations and performances of Shakespeare’s plays.  Since the anti-Catholicism in the plays is pervasive and not easy to delete or dissolve, it is often just ignored and the plays are then performed in ways that I believe do not reflect the light in which Shakespeare intended audiences to see his characters.  Thus, the merchant Antonio is generally played as a good guy in Merchant and, as a result, no matter how sympathetically the actor playing Shylock says his lines — even weeping when he asks “If you prick us, do we not bleed?” — the play comes off as anti-Semitic.  This is, I think, a mistake.  The historian Christopher Hill has warned that “We should always take seriously the religious professions of sixteenth century men and women, for many of whom eternity might seem much more real than this brief and uncertain life on earth.”[15]  This would likely be true of many in Shakespeare’s audience and might even be true of Shakespeare himself.

At the same time, acknowledging the anti-Catholicism in Shakespeare’s plays does not require one to promote it.  His decision not to explicitly denote people and things in his plays as Catholic and Protestant is significant.  In this way, Shakespeare stands in sharp contrast with Marlowe who openly promoted the prejudices of his age.  Marlowe’s The Jew of Malta features a Jew who “becomes a greedy murderer.”  The play is explicitly anti-Semitic and presents a very different picture of Jews than Shakespeare’s Merchant.  Marlowe’s Massacre of Paris features a group of Catholics who want to slaughter Protestants.  “The basic message is that Catholics are murderous beasts.”[16]  This vicious portrait of Catholics is very different from Shakespeare’s oblique obeisance to the anti-Catholicism of his society.  Although Marlowe has his devotees, some of whom even claim that he wrote Shakespeare’s plays, his plays are rarely performed.  Their overt and overwhelming anti-Semitism and anti-Catholicism are a big part of the reason.

Shakespeare used code words and cues to express anti-Catholicism.  In so doing, he gave interpreters an opportunity to recognize the light in which he wanted characters and ideas to be portrayed without their having explicitly to engage in anti-Catholicism.  He may have done this deliberately.  In staging Merchant, for example, one does not have to attire Antonio with a cross or have him fingering Rosary beads, which would explicitly denote him as a Catholic.  One merely has to understand that Shakespeare did not intend to portray Antonio as a model citizen or damn Shylock for his being a moneylender.  This understanding sheds a whole new light on the play as compared with the way it is usually performed.[17]

Hamlet is set in a country, Denmark, that had abruptly converted from Catholicism to Protestantism during the 1530’s.  This setting provided Shakespeare with an opportunity to portray some of the confusion and controversies that had been experienced in England as a result of Henry VIII’s similarly abrupt conversion of England to Protestantism during the 1530’s and Elizabeth’s abrupt reconversion of the country during the 1550’s.

Medievalism Run Rampant: Better Dead than Dread.

“Something is rotten in the state of Denmark” concludes one of the soldiers who has seen the ghost in an early scene of Hamlet, and who then disappears from the play.  This line is usually interpreted as meaning that the appearance of the ghost indicates something is wrong with the country.  In this interpretation, the ghost is a sign of existing corruption in the state, which we soon understand as the murder of Hamlet’s father by Claudius.[18]  But the line could also mean that something bad is beginning and that the ghost is both a cause and an effect of it.  This latter interpretation is, I think, the better of the two.  The murder of Hamlet’s father may have begun the rot, but rot spreads.  The whole edifice can come tumbling down unless the spread is checked.  The soldier’s statement is, in this light, an ominous prediction of how murder can lead to murder, and a premonition about the effect that the ghost is going to have on the country.

The ghost dominates the play and essentially ruins the country.  The name of the play is Hamlet and Hamlet is the name of the young prince who runs riot through the play, but it is also the name of the prince’s dead father whom the ghost ostensibly represents.  It is that elder Hamlet who is the center of the action in the play.  Almost everything bad that happens is a result of the ghost’s insistence that young Hamlet avenge the death of his father.  And even though the ghost directly participates in only three scenes, he is a pervading evil influence throughout the play.

The effect of the ghost is often underplayed in performances of Hamlet.  To dramatize his effect on the action, I would arrange the stage lighting to indicate day versus night, and have the ghost lurking in the background unseen by the other characters during the nighttime scenes.  Hamlet’s most violent scenes would be played at night with the ghost lurking about.  At the very end of the final murderous scene, I would have the ghost leave the stage appearing to be satisfied at the outcome.  Elizabethans believed that the Devil could manipulate the truth in the service of evil.  The ghost should be seen as a demon from Hell who has been sent to undermine Protestant Denmark with the truth about the death of Hamlet’s father, and succeeds in this mission.

Hamlet is one of Shakespeare’s most pliant plays.  Hamlet, for example, can be characterized and played in a wide variety of ways.  Samuel Taylor Coleridge saw him as a dithering intellectual who knows that he must kill Claudius but gets caught up in “endless reasoning and hesitating.”  Coleridge viewed Hamlet as unmanly and a weakling.[19]  Mark Van Doren agreed that Hamlet is an intellectual but claimed that “Hamlet is an actor,” and a chronic dissembler.  “We cannot assume, indeed, that he believes what he says.”  Van Doren sees Hamlet as essentially a schizoid with multiple personalities.[20]  Fintan O’Toole sees Hamlet as a sociopath who is caught between medieval and modern ways of thinking, and does not know which way to turn.[21]  Harold Goddard saw Hamlet as a pacifist who tries everything he can to avoid killing Claudius.[22]  Each of these is a plausible and playable interpretation of the character.

Hamlet has been condemned as “a slob, a shirker, or a mother-fixated neurotic” with an Oedipus Complex.[23]  He has been “pronounced both a hero and a dreamer, hard and soft, cruel and gentle, brutal and angelic, like a lion and like a dove.”[24]  He has been seen as an existentialist (“To be or not to be…”), a moral relativist (“There is nothing good or bad but thinking makes it so.”), a skeptic (“What a piece of work is a man…what is this quintessence of dust?”), a determinist (“There’s a special providence in the fall of a sparrow.”), or some combination of the above.  The vast possibilities contribute to making Hamlet such an interesting play.

Whatever else Hamlet is, however, he is also a religiously perplexed person.  When we first meet him, he is arguing with Claudius about his desire to go back to school in Wittenberg, which is also the alma mater of Hamlet’s good friend Horatio.  Later, when the ghost first talks to Hamlet, the ghost says that he resides by day in Purgatory and walks abroad by night.  These two references, the one to Luther’s Protestant university in Wittenberg, and the other to the Catholic doctrine of Purgatory which Protestants rejected, would delineate for Shakespeare’s audience a conflict between Catholicism and Protestantism that confounds Hamlet and permeates the play.

Hamlet is clearly religious or he would not be attending Wittenberg University.  There were plenty of other less religious schools that a Danish prince could have attended.  So, how can a religiously Protestant Hamlet believe a ghost that says it resides in Purgatory, a place whose existence Protestants deny?  Belief in ghosts was common among Protestants and Renaissance philosophers, but not a ghost from Purgatory.  It stands to reason that Hamlet would be perplexed.  So when he tells a skeptical Horatio that “There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy,” i.e. in Horatio’s and Hamlet’s Protestant philosophy, these lines should probably be articulated in a tentative, quizzical way.  In most performances of Hamlet, the lines are said in an emphatic, declaratory manner, as though Hamlet is completely convinced of the reliability of the ghost.  But that reading of the lines does not fit the situation of a young man who has just had his whole universe turned upside down.

Hamlet’s reluctance to kill Claudius is generally interpreted in a negative way.  He is portrayed as overly intellectual or cowardly or depressive or passive.  But given the religious and intellectual shock Hamlet has just been given — Could Catholicism be the true religion? — his caution would seem well-founded.  Hamlet’s hemming and hawing back and forth during the play correspond to an internal battle between his Reformation/Renaissance self and the Medieval/Catholic memes that he inherited from his ancestors.  This conflict within Hamlet, between old and new ways of being and believing, was analogous to the contemporary social and religious conflict in Elizabethan England.

Hamlet’s father must have been sympathetic to the Renaissance or he would not have sent his son to a university, and he must have been a dedicated Protestant or he would not have sent Hamlet to Wittenberg.  Hamlet’s father was, however, likely born a Catholic and was a transitional figure between Medieval ways and Renaissance society.  The ghost’s claim that he resides in Purgatory reflects the father’s likely childhood Catholic beliefs.  The ghost’s appearance in Medieval armor,[25] which had been rendered virtually useless by the development of armor-penetrating guns during the Renaissance, represents the Medieval side of Hamlet’s father.  And the ghost’s insistence that Hamlet avenge his father’s murder also reflects a Medieval perspective.[26]

Revenge was a Medieval form of justice that the Catholic Church had criticized but ultimately tolerated.  European countries did not have well-developed criminal justice systems during the Middle Ages and did not have prison systems.  As a result, private justice and corporal punishment were the norms.  “Vengeance and feud were an essential part” of Medieval culture, and revenge was “both a right and a duty, and was legislated and regulated by social norms.”[27]  Renaissance reformers promoted a more rational system of justice in which the rule of law rather than the rule of the strongest would prevail.  Renaissance monarchs embraced these reforms as a means of centralizing the power of the justice system in their own hands.  Prisons were, likewise, a recent Renaissance development in Europe, as places where convicted wrongdoers could be punished through being incarcerated instead of being physically harmed.[28]

These were reforms that the fictional Hamlet and the author Shakespeare would likely have endorsed.  The ghost’s insistence on murderous revenge indicates that he is out of step with the times and not to be trusted by Hamlet.  The ghost represents a side of Hamlet’s father that Hamlet had seemingly wished to leave behind in going to Wittenberg, and a barbarous Medieval past that England was trying to get beyond.

So, how is it that the ghost succeeds in entrapping Hamlet with his wiles?  He is a cunningly manipulative ghost.  He arrives in Denmark at a time when people are beginning to have doubts about King Claudius.  The “Something is rotten” statement by a common soldier, who knows nothing of what the ghost is going to tell Hamlet about the death of his father, indicates that common people were uneasy about the state of affairs.  The ghost also arrives at a time when Hamlet is feeling renewed disgust about his mother’s marriage to Claudius, and when Hamlet is in turmoil about whether or not to abandon a rotting Denmark for school in Wittenberg.

The ghost describes the death of Hamlet’s father in terms most likely to inflame Hamlet.  He also disingenuously tells Hamlet not to “Taint thy mind” against his mother but then describes Hamlet’s mother as only “seeming virtuous” and says that she has made “the royal bed of Denmark…a couch for luxury and damned incest.”  Having had those same thoughts about his mother just before meeting the ghost, Hamlet exclaims “O my prophetic soul” in response to the ghost’s tale.  He very much wants at that point to believe the ghost.

Horatio is skeptical, as Shakespeare’s audience would have been.  People of that time would find it hard to believe that God “unleashed [the dead] back on earth to stir up revenge.”[29]  That was the Devil’s business.  Horatio had also noticed at the ghost’s previous appearance that when the cock crowed at the break of dawn, “it started, like a guilty thing upon a fearful summons,” which would seem to be a call from Hell.  Hamlet tries to assure Horatio that “It is an honest ghost, that let me tell you.”  But he is seemingly not so assured himself, because he concocts on the spot a method of testing the ghost’s honesty by feigning madness.  In this way, he can ask questions of people in the castle that might otherwise seem suspicious and thereby, he hopes, provoke responses that might be telling.[30]

It is a plan, however, that plays right into the hands of the ghost.  Encouraged by the ghost, Hamlet swears to secrecy his friends who have seen the ghost.  Hamlet then goes off to conduct his researches on his own, taking it upon himself to right the wrongs that have been done to the state of Denmark, as though the only wrong was to himself as the son of a murdered King and an ostensibly incestuous Queen.  Within the context of Elizabethan times and the play itself, this is wrong and a mistake.

The rottenness of the state is a matter of concern for everyone in Denmark, as indicated by the soldier’s comment.  The problem is not merely the murder of a father and the incest of a mother.  It is having a king who has murdered his way to the throne and who seems more interested in drinking and partying than in protecting Denmark from a potential invasion from Norway.  In turn, when Laertes is able to rouse public concern about his father’s death and, thereby, force an inquiry into the circumstances, Laertes demonstrates that it is possible to take political concerns to the people and get action that way.  Hamlet could seemingly have done something similar.  His anger and his arrogance, encouraged by the ghost, lead him to go off on his own, and wreak the havoc on Denmark that the ghost seemingly intended.

Although Hamlet’s enthusiasm for killing Claudius ebbs and flows in the course of the play, the ghost gradually extends his evil influence over Hamlet, and Hamlet loses his better self.  Hamlet, in turn, descends from feigning madness into actual madness as he goes from murder to murder: killing Polonius, driving Ophelia to suicide, arranging the murders of Rosencrantz and Guildenstern, and finally participating in the slaughter at the end of the play.

As a means of dramatizing the growing control of the ghost over Hamlet, one might have the ghost appear with a long mustache, beard and hair, groomed but in a style that would have been seen by Shakespeare’s audience as Medieval.  Hamlet would appear initially in the short-haired, highly groomed style of a Renaissance courtier.  As the play proceeds, Hamlet would gradually grow a long mustache, beard and hair in an unkempt manner befitting his madness, feigned and/or real.  In the next to last scene of the play with Horatio, Hamlet would groom himself into the Medieval likeness of his father as represented by the ghost, and then go off to the fatal duel.

In most modern productions, the character of Hamlet is played as being surprised by the murderous turn at the end of the play.  But this reaction does not seem plausible.  Hamlet knows at that point that Claudius is trying to kill him and that Laertes is outraged over Hamlet’s murder of Laertes’ father Polonius.  Hamlet must surmise that the supposedly harmless duel Claudius has arranged between Hamlet and Laertes is actually a setup for mortal combat between them.  Hamlet’s bantering with Horatio, Laertes and the others before the duel is just another bit of posing.  Hamlet would likely be on guard and might even be shown to have secreted a weapon on his person.  As he and Laertes duel, Hamlet would almost certainly see that Laertes’ sword is unbated, that is, not blunted, though he does not know its tip is poisoned.  When Hamlet exchanges swords with Laertes in the midst of their duel, he would know that he is grabbing a murderous weapon.  We cannot know what Hamlet has in mind or plans then to do, because events take an unexpected turn as a result of the various poisons taking effect at that point.

When Hamlet dies, the ghost is satisfied, but I think that we in the audience also feel relief.  Hamlet has morally descended under the influence of the ghost, and we feel it despite our sympathy for him and his predicament.  He has directly or indirectly been the cause of the deaths of seven people, not including himself.  And he has admitted that he has no qualms about having killed the innocent Polonius and caused the deaths of the hapless Rosencrantz and Guildenstern.  Having become a Papist dupe of the ghost, Hamlet has become a symbol of the Medieval violence that Elizabethans hoped to leave behind.  He has also essentially become as much of a villain as the man he had sworn to eliminate.  For this reason, Mark Van Doren concluded that “The world could not let so destructive a man live longer.”[31]

At the same time, members of Shakespeare’s contemporary audience would recognize that the ghost had just done to sixteenth-century Protestant Denmark what they believed the Pope was trying to do to Protestant England at the turn of the seventeenth century: bring down the state and leave the country vulnerable to invasion by its enemies.  The play served as a warning to them.

Cold Wars and their Cultural Consequences: Playing Down Paranoia.

Shakespeare incorporated anti-Catholic elements into his plays that probably were necessary and seemingly were sufficient to satisfy the prejudice and paranoia of his audiences and the authorities.  But one of the great things about Shakespeare is that he was able to do this without significantly diminishing or distorting his work.  The anti-Catholic intimations and implications in his plays were clear to people of his time.  But his indirection also allows us today to recognize the anti-Catholicism in his plays, and incorporate it into our analysis of them, without promoting the prejudice and paranoia of Elizabethan England that prompted it.

Shakespeare made it possible for us to perform his scripts without overt anti-Catholicism in our performances.  We don’t have to make the ghost wear a Catholic cross.  We don’t have to think of Hamlet as a Papist dupe, as Elizabethans might have.  But honoring Shakespeare’s scripts does require us to accept the evaluation of characters and events as he indicated them through his anti-Catholic references.  Those references are often keys to understanding the plays as he meant them.  As to Hamlet, those references mean the ghost is evil and Hamlet is a dupe.  We can avoid displaying the prejudice but not its implications for the meaning of the play.

[1]  “Cold War.” American History. ABC-CLIO, 2011.

[2]  Goldwater, Barry. Conscience of a Conservative. New York: Victor Pub. Co., 1960. pp.25, 71.

[3]  Foner, Eric. The Story of American Freedom. New York: W.W. Norton & Co., 1998. pp.256-258.

Lens, Sidney. Radicalism in America. New York: Thomas Y. Crowell Co., 1969. p.343.

[4]  Caute, David. The Great Fear: The Anti-Communist Purge under Truman and Eisenhower. New York: Simon & Schuster, 1979.

[5]  “Cold War.” American History. ABC-CLIO, 2011.

[6]  Lyons, Paul. Philadelphia Communists, 1936-1956. Philadelphia: Temple University Press, 1982.

[7]  Navasky, Victor. “The Social Costs of McCarthyism.” From Naming Names. New York: Viking Press, 1980. At english.illinois.edu/MAPS/McCarthy/navasky.

[8]  Ebert, Roger. rogerebert.com/review/great-movie-on-the-waterfront-1954. March 21, 1999.

[9]  Crowther, Bosley. “Viva Zapata.” New York Times Movie Review. February 8, 1952.

Rothman, Lily. “Art Imitates Life: 10 Movies Altered Due to Real-Life Events.” Time Magazine. At entertainment.time.com/2012/07/27-art-imitates-life.

Susman, Gary. “Viva Zapata’s 60th Anniversary.” news.moviefone.com/2012/02/06/viva-zapata-anniversary.

[10]  Kermode, Frank. The Age of Shakespeare. New York: Modern Library, 2005. pp.14-16.

[11]  Wills, Garry. Making Make-Believe Real: Politics as Theater in Shakespeare’s Time. New Haven: Yale University Press, 2014. pp.157-168.

[12]  Wood, Michael. Shakespeare. New York: Basic Books, 2003. pp.270-271.

[13]  Kermode, Frank. The Age of Shakespeare. New York: Modern Library, 2005. p.39.

[14]  Wood, Michael. Shakespeare. New York: Basic Books, 2003. p.39.

[15]  Hill, Christopher. The Pelican Economic History of Britain: Reformation to Industrial Revolution. Baltimore: Penguin Books, 1969. p.110.

[16]  Scott, Jeffrey. “The Influences of Elizabethan Society on the Writings of Christopher Marlowe.” The Marlowe Society Research Journal. Vol.05-2008. p.3. At http://www.marlowe-society.org.

[17]  I have written elsewhere an essay on Merchant outlining this view of the play.  The essay is entitled “Shakespeare, Shylock and History as Choice: A Protestant versus Catholic View of the Merchant of Venice.”

[18]  Hamlet. 1.4.90.

[19]  Coleridge, Samuel Taylor. Selected Poetry and Prose. Elizabeth Schneider, ed. “Lecture Series on Hamlet.” San Francisco: Rinehart Press, 1971. pp.461-462.

[20]  Van Doren, Mark. Shakespeare. New York: New York Review of Books, 2005. pp.167-168.

[21]  O’Toole, Fintan. Shakespeare is Hard, but so is Life. London: Granta Books, 2002. pp.45-54.

[22]  Goddard, Harold. The Meaning of Shakespeare. Chicago: University of Chicago Press, 1951. p.341.

[23]  O’Toole, Fintan. Shakespeare is Hard, but so is Life. London: Granta Books, 2002. p.40.

[24]  Goddard, Harold. The Meaning of Shakespeare. Chicago: University of Chicago Press, 1951. p.333.

[25]  Hamlet. 1.2.200. “Armed at point exactly, cap-a-pie.”

[26]  Matheson, Mark. “Hamlet and a Matter Tender and Dangerous.” 1995. At enotes.com/topics/hamlet/critical-essays/hamlet-and-matter-tender-and-dangerous.

[27]  Lampher, Ann. The Problem of Revenge in Medieval Literature. PhD thesis, University of Toronto, 2010. p.ii.

[28]  Prisons. mapoflondon.UVIC.ca/PRIS1.

Elizabethan Crime and Punishment. william-shakespeare.info/elizabethan-crime-punishment.

[29]  Wills, Garry. Making Make-Believe Real: Politics as Theater in Shakespeare’s Time. New Haven: Yale University Press, 2014. p.173.

[30]  Hamlet. 1.1.48, 1.5.40, 1.5.46, 1.5.81-86, 1.5.138.

[31]  Van Doren, Mark. Shakespeare. New York: New York Review of Books, 2005. p.172.


A Note on “History as Choice” Gone Wrong: Taking Choices Out of Context and Blaming the Victim.

Burton Weltman

The idea of looking at history as people making choices – i.e., history as choice – has gained some popularity in recent years as an alternative to conventional historical narratives.  Conventional historical narratives, and especially history textbooks, generally approach history as a process of causation, a series of causes and effects in which one thing seems inevitably to lead to the next and in which human choice seems to have little play.  In this view, people seem to be merely cogs in a big historical machine.

Promoters of history as choice, including myself, seek to humanize history by focusing on the role that people and their choices play.  In emphasizing the drama of people debating options and making decisions, this approach makes history more interesting to students.  In relating the decision-making processes of people in the past to the social problems and choices we face in the present, this approach also makes history more relevant.

But, as is often the case with intellectual and cultural developments when they are popularized, the idea of history as choice has been diluted and misdirected by some of its practitioners.  In turn, there has been some backlash against the idea of history as choice from people who are reacting against the misconceptions of those practitioners.  In particular, representatives of historically oppressed peoples have objected to history as choice as essentially a way of blaming the victims for their oppression.  They complain that this approach implies that, since history is a result of people making choices, oppressed peoples must have chosen to be oppressed or made the choices that led to their oppression, and are thereby responsible for it.

The idea that the poor and oppressed are responsible for their own problems dates back to ancient times and has recurred periodically throughout history.  In modern times, it resurfaced in the population theories of Thomas Malthus during the early nineteenth century and then in the Social Darwinian theories of Herbert Spencer and others during the late nineteenth century.  The idea regained impetus in the United States during the late twentieth century, largely through Edward Banfield’s influential book The Unheavenly City Revisited (1974) and the work of his successors.

Blaming the poor and the oppressed for their problems is not the purpose of approaching history as choice.   Properly understood, this method has the opposite effect.  Approaching history as choice requires one to delineate the feasible options that people had within the circumstances that those people faced.  Oppressed peoples do not have unlimited options and unlimited resources.  Like everyone else, they have to work within the circumstances in which they find themselves.  The key to approaching history as choice is to look at what people did with the options and the resources they had.  This is a means of humanizing them and seeing the oppressed as not merely victims of their circumstances but also creators of culture and history within their circumstances.

One of the most amazing stories in American history is the way in which African-American slaves created a thriving culture within the restricted circumstances in which they lived.  The moral of their story is not that African-Americans were to blame for their enslavement or that their oppression was somehow a good thing, but that they were able to make profound meaning and beauty in spite of their oppression.  They made great choices within the limited range of feasible options and with the limited resources they had.

So, the problem is not with approaching history as choice but rather with failing to consider the context within which people made their choices.  This has been the modus operandi of Social Darwinians and others who blame the poor and oppressed for being poor and oppressed.  And this has been the implication, often unintended, of some of those who have been promoting history as choice.  But this is not the method as it has been intended.  The solution to this problem is not to abandon the method of history as choice but to apply it properly.

Struggling to Raise the Norm: Essentialism, Progressivism and the Persistence of Common/Normal Schooling in America

Burton Weltman

Preface.

This essay is an attempt to clarify present-day debates about educational reform in the United States through painting a broad-stroked picture of how these debates came to be.  As the key to the portrait, I have compressed the educational controversies of the past century into three traditions – common/normal schooling, Essentialism, and Progressivism – that I depict as the main positions in the educational debates of the twentieth century into the twenty-first.  This is a simplified view of the past that will hopefully help to clarify the complexities of the present.

In describing the three traditions, I have knowingly assigned to each category people who might be uncomfortable with some of their classmates.  Important differences between people and programs have inevitably been blurred, but I hope that important connections between them have also been highlighted.  I believe these categories reflect important realities that affect the possibilities for educational reform today.

I write this article as a self-styled Progressive and long-time partisan for Progressive causes but my main argument is that Progressives should unite with Essentialists against what I contend is the prevailing common/normal schooling tradition in American education.  In arguing that Essentialists and Progressives should make common cause against the common/normal schooling tradition, I am not dismissing the significant issues that divide Essentialists from Progressives or that pit many of those I have described as Progressives against each other.  I am merely contending that Progressives and Essentialists have much in common with each other, and have much to gain toward improving education by working together against the common/normal schooling that predominates in our schools.

Henry Adams’ Challenge.

The philosopher George Santayana famously warned that those who don’t study history are doomed to repeat it.  Santayana didn’t promise that studying history would in itself prevent unwanted repetitions, but he thought it would help.  Henry Adams, a contemporary of Santayana, was not so sanguine.  Adams, the great-grandson of President John Adams and grandson of President John Quincy Adams, and a celebrated historian, litterateur and leading figure in the power elite of late nineteenth-century America, doubted the ability of most people to learn from history.  In The Education of Henry Adams, a stream-of-ideas autobiography that he wrote in 1905, Adams related his life-long effort to understand historical change in an increasingly dynamic world.

Adams claimed that Western history since the Middle Ages has been governed by a “law of acceleration,” such that people attempting to maintain traditional ways of life only warp past practices while improperly adapting to the present, and end up with the worst aspects of both.  He complained that most people study history to worship the past rather than prepare for the future.  Describing what people later characterized as postmodernism, Adams predicted that social and intellectual change would continue to accelerate during the twentieth century and that the concepts with which people of one generation understand the universe would be completely inadequate for the next.

Given this situation, Adams asked, how and what is a teacher to teach?  And what, if anything, does it mean to pursue education?  He concluded that the foolish and the frightened will cling futilely to old ways of thinking and teaching, but the wise will teach themselves and their students to react constructively to the constantly changing world.  Adams challenged educators to rise to the latter but expected they would succumb to the former (Adams 1918/1961, 493, 497-498; Adams 1919/1969, 305-306).

Public education at the turn of the twentieth century generally comported with Adams’ dour expectations.  Following patterns that had been established during the 1840’s in the so-called common schools for elementary students and normal schools for prospective teachers, education was geared primarily toward passing on to the next generation what was considered the wisdom of the previous one.  Adhering to what was considered the common sense of education, most schools focused on teaching the 4 R’s – reading, writing, arithmetic and religion – a program designed to acculturate students to the prevailing social system.  In a dynamically changing world, schools were expected to impose social and cultural law and order on students, and the common/normal schooling orthodoxy met these expectations (Butts & Cremin 1953, 545).

There were, however, groups of educators at the turn of the century who eschewed common sense and challenged the common/normal schooling orthodoxy with theories that embraced the dynamics of social and cultural change.  Two of these theories have remained particularly influential to the present day.  The first theory, which came to be known as Essentialism, promotes teaching the modern academic disciplines as a way of plugging students into currents of intellectual change.  Focusing on what students learn, Essentialists are subject matter mavens who want students to become scholars immersed in academic knowledge.

The second theory, Progressivism, focuses on teaching interdisciplinary, problem-solving techniques as a way for students to participate actively in their changing society.  Focusing on how students learn, Progressives are proselytizers for critical thinking who want students to learn how to use knowledge for socially constructive purposes.    Both theories were promoted by their founders as liberal, dynamic alternatives to what they condemned as the conservatism of common/normal schooling methods (Cremin 1961, IX, 328).

Essentialism and Progressivism quickly became the prevailing theories among educational reformers and education professors during the early twentieth century, and most of the prominent educational innovators of the last one hundred years can be identified with one or the other theory.  Essentialists, for example, include William Bagley, the leading curricularist of the early twentieth century (Kandel 1961, 9-11, 108); Arthur Bestor, whose Educational Wastelands has been called the most influential book on education in mid-century (Cremin 1961, 344); Jerome Bruner, whose theory of “the structures of the disciplines” underlay most movements for curriculum reform during the 1960’s and 1970’s (Jenness 1990, 129); and E.D. Hirsch, whose theories of “cultural literacy” have been highly influential since the 1980’s (Feinberg 1999).

Progressives include John Dewey, perhaps the preeminent educator of modern times (Church & Sedlak 1976, 200); William Kilpatrick, whose “project method” has been almost universally adapted by schools since the 1920’s (Tennenbaum 1951, 88, 108; Church & Sedlak 1976, 379); Benjamin Bloom, whose taxonomy of thinking and methods of “mastery learning” have been widely cited since the 1960’s (Pulliam & Van Patten 1999, 174); and John Goodlad, one of the leading practitioners of school reform and the reform of teacher education over the last half century (Wisniewski 1990).

Educational debate since the early 1900’s has consisted largely of arguments between Essentialists and Progressives in which each side blames the other for the problems of American education and extols its own methods as the solution.  It is a debate that has become almost ritualized in its accusations and responses, with each side blaming the other for the same things and neither side responding to the other’s arguments.  This was highlighted for me in articles by E.D. Hirsch and Walter Feinberg.

In the first article, Hirsch contrasts his proposals for “knowledge-based education,” which is another term for Essentialism, with Progressivism.  Progressivism, in his description, is an elitist system that caters to the best and brightest students at the expense of ordinary students, promotes mindless activity and fragmented learning, and focuses on easy and boring subjects.  Hirsch particularly condemns Kilpatrick’s project method as the epitome of anti-intellectualism.

In contrast, Hirsch describes his own educational program as egalitarian in that it promotes the same curriculum for all students and expects the same results from all students.  He also claims that, given its focus on the academic disciplines, his program is intellectually challenging and coherent.  Hirsch complains that Progressivism dominates American public schools, and blames Progressives for what he sees as the decline of American education since the 1960’s (Hirsch 2002; also, Hirsch 1996).

In the second article, Feinberg, a distinguished, long-time advocate of multicultural education, contrasts his proposals for a reflective thinking pedagogy, which is the core of Progressivism, with Hirsch’s ideas.  In Feinberg’s description, Hirsch is an elitist who caters to the best and brightest students at the expense of ordinary students, promotes mindless memorization of fragmented bits of academic knowledge instead of active learning, and focuses on boring academic abstractions instead of issues of interest and importance to students.  In contrast, Feinberg claims his proposals promote intellectual integrity and democratic citizenship.  Complaining that the methods promoted by Hirsch predominate in American schools, Feinberg blames Essentialists for the problems of American education (Feinberg 1999; also, Feinberg 1998).

Taking the two articles together, Hirsch and Feinberg extol their own positions in almost exactly the same terms, condemn the other’s in almost exactly the same terms, and claim the other’s adherents are running and ruining the American educational system.  Most importantly, neither addresses the common/normal schooling tradition or what I contend is the persistence of common/normal schooling methods in public education.

Studies over the last seventy years have repeatedly indicated that while Essentialists and Progressives dominate the public debate, common schooling methods still predominate in American elementary and secondary schools, and normal schooling methods still prevail in schools of education.  Researchers have estimated that up to ninety percent of teachers today teach ninety percent of their classes in the same way as teachers did in the nineteenth century, and conclude that the reform movements of the twentieth century have at best affected only the periphery of education.

Today, as in the common schools of the 1840’s, most public school curricula are standardized around age-graded textbooks and workbooks, and most classrooms are dominated by teacher-talk, textbook reading, recitation and review, seatwork from worksheets, and tests of recall and basic skills.  And while religion is no longer explicitly promoted, the conformist law and order ethics that underlay religious instruction in nineteenth century schools still prevail (Cuban 1991, 198-200; Sizer 1992, 6; Nelson, Polansky & Carlson 2000, 15).

There have been incremental changes around the edges of education, such as more projects and group work that reflect the influence of Progressivism, and more research assignments that reflect the influence of Essentialism, but these innovations are almost always standardized into routine exercises in basic skills.  Likewise, while there have been dramatic changes in educational technology, from the blackboard, first introduced in the 1840’s, to radios, films, televisions and computers successively introduced during the twentieth century, these technologies have almost always been used in the common school mode to drill facts and basic skills.

In sum, while reform movements have come and gone in almost every decade over the last century, they have generally left only superficial residue, while common school methods still predominate.  The trend in most states over the last fifteen years has been toward even more standardized curricula in the guise of curriculum standards, and more standardized testing and teaching to the test in the guise of academic accountability.  This trend has recently culminated in the federal “No Child Left Behind” act, which attempts to make common school methods the law of the land (Sizer 1992, 210-211; Goodlad 1984, 236, 264; Kliebard 1986, 121; Tyack & Cuban 1995, 7, 9, 121; Sarason 1996; Marshak 2003, 229).

Teacher education also remains basically the same today as in the nineteenth century.  Although the rhetoric of most programs is Essentialist and Progressive, the reality of teacher training is overwhelmingly in normal school methods of lecturing, drilling, standardized curricula and standardized testing.  Programs espousing innovative methods are almost invariably warped by social and political pressures and by the weight of tradition into normal schooling forms.  And few education programs are connected with innovative elementary and secondary schools in which student-teachers can practice creative methods.  As a result, most prospective teachers end up in field placements with common schooling supervisory teachers so that no matter what student-teachers have been taught in their education classes, they are almost invariably socialized into common schooling methods before they graduate (Goodlad 1982, 19; Goodlad 1989, A3; Morrison & Marashall 2003, 292-297).

There have been many efforts to reform teacher education but with little effect.  In what is almost a parody of Santayana’s warning, the same reform proposals come and go every few years, with different names for the same things, and with generally the same outcome that whatever is innovative is trimmed and tamed into a common/normal schooling format.  What is today called performance-based education, for example, used to be called outcome-based education and, before that, competency-based education.  What are today called education school-public school partnerships used to be called public school-education school alliances and, before that, school/university cooperatives.  In almost every generation, old reforms are proposed as radically new ideas, but after a hundred years of tumult, there has been little large-scale or long term change in teacher education (Goodlad 1990, 186-189; Lucas 1997, 84, 89-90).

Although most studies point to the persistence of common/normal schooling methods as the main reason educational reforms fail, Essentialists and Progressives still invariably blame each other.  When Essentialist reforms fail, Essentialists blame it on sabotage by Progressive educators.  When Progressive reforms fail, Progressives blame it on sabotage by Essentialists.  In most cases, however, it is either the successful resistance of the common school orthodoxy or the cooptation of the reforms into a common schooling regimen that has foiled the reformers.  The net result is that American schools have entered the twenty-first century still dominated by nineteenth century methods (Tyack & Cuban 1995, 7).

The purpose of this article is to examine the conflict between Essentialists and Progressives.  Although I am a self-styled Progressive, the article will suggest that the differences between Essentialists and Progressives are less significant than their mutual differences with the common/normal schooling methods of education that still predominate in public schools and schools of education.  I will also suggest that the differences between Essentialist educators and common/normal schooling educators are more significant than the differences between Progressives and Essentialists.  The conclusion of the article is that in order for educators finally to respond effectively to Henry Adams’ challenge, and get out from under the doom of Santayana’s prophecy, Essentialists and Progressives must work together to overcome the crippling legacy of common/normal schooling in American education.

Norms of the Normal Schools.

The 1840’s were a major turning point in American education.  Prior to the 1840’s, education was almost entirely private and even public schools required tuition from students.  School attendance was voluntary and generally of brief duration.  Most people worked on farms and it was widely felt that farmers did not require much, if any, formal education.  The industrial revolution changed this situation and common schools were an innovative response to the industrialization, urbanization and immigration of the mid-nineteenth century.

Common schools were free, publicly supported and mandatory, and were intended to provide a common or standardized education for common, ordinary working people.  Their curricula generally focused on the 4 R’s: reading, writing, arithmetic, and religion – Protestant religion.  Their goal was to instill in students, many of whom were the children of Catholic immigrants from peasant backgrounds, the basic skills that would enable them to live peacefully in cities and work productively in factories.  Common schools stressed what were considered the Protestant moral virtues of hard work, obedience, patriotism, temperance, cleanliness and thrift, and were intended to Americanize students into an Anglo-centric mono-culture.

Common schools reflected the innovations of the industrial era and were organized on a factory model, using assembly line teaching methods.  Education before the 1840’s was generally based on the premise that children are inherently wicked and need to be suppressed according to the principle of “spare the rod and spoil the child.”  Common schools were based on what was considered the more humane premise that children are inherently neither good nor bad, and should be treated as raw materials to be molded into good citizens as they moved from one grade to another.  Teachers were seen as skilled mechanics, molding children in specified ways as they passed by on the assembly line.  Rote memorization, routine drilling and recall testing were the most common teaching methods (Katz 1971, 28, 33-38; Tyack 1974, 33-35; Church & Sedlak 1976, 98-100).

Normal schools were established in the mid-nineteenth century to train teachers for the growing number of common schools.  Prior to the 1840’s, teaching had been done largely by college-trained tutors for rich children and semi-literate housewives for ordinary children.  The skills and methods of teachers varied widely.  Normal schools were designed to provide standardized training for teachers who would implement the standardized curricula and methods of the common schools, according to what were considered the “scientific rules” of teaching.

The normal school curriculum focused on the 4 R’s taught in the common schools, plus lesson planning and classroom management.  Normal school teaching methods emphasized rote and routine learning.  Breaking with the punitive nature of prior teaching methods, most normal schools advocated systems of reward rather than punishment as the best form of motivation for student achievement.  Most normal schools had their own practice schools in which prospective teachers could practice on students what they learned in class (Mann 1840/1989, 9, 21, 29; Harper 1939, 31-32; Lucas 1997, 4, 25, 30, 62).

Standardization was the watchword of most normal schools and innovative techniques were invariably reshaped into the common mold.  In the 1840’s and 1850’s, for example, many educators became proponents of Johann Pestalozzi’s object method in which teachers attempted to pique students’ natural curiosity by using real-world objects in their teaching (Pestalozzi 1898, 57, 60, 180, 324).  Although Pestalozzi proposed active and creative learning techniques similar to those later proposed by Progressives, Pestalozzi’s method was quickly degraded in most normal schools to a mechanical technique for rote learning.

Similarly, in the late nineteenth century, many normal schools adopted Johann Herbart’s method of teaching through literature.  Herbart wanted teachers to use literature to focus on the “meaning” of things and the “interests” of students in a manner that also presaged Progressivism (Herbart 1911, 16, 31, 72).  But what Herbart proposed as a creative method was soon reduced to a formulaic five steps of teaching – preparation, presentation, association, generalization, and application – that all teachers were expected to mechanically implement for any subject (Harper 1939, 124-136; Church & Sedlak 1976, 104).

Normal schools varied to some extent by region.  Although they developed first in the Northeast, Midwest schools were often the most innovative.  While Northeastern normal schools focused on the mechanics of teaching with the goal that “the teacher be a good technician,” Midwestern schools frequently emphasized teachers’ subject matter knowledge and, thereby, presaged Essentialism.  Likewise, Northeastern normal schools organized their practice schools on the model of existing common schools, thereby perpetuating the status quo in teaching, but Midwestern normal schools frequently founded laboratory schools in which teachers could experiment with innovative teaching methods as the Progressives later recommended (Harper 1939, 31-32; Levin 1994, 153-154; Lucas 1997, 29-31, 46).  In general, however, creative methods did not fare well or last long in most normal schools, wherein all things were eventually reduced to a lowest common denominator of curriculum and instruction.

College Forms and Academic Norms.

            Normal school standards generally followed common school norms during the nineteenth century.  Since the goal of the common schools was to transmit basic skills and rudimentary knowledge to children, normal schools trained prospective teachers in the rudimentary knowledge and basic skills they were expected to teach their students.  Normal schools generally attempted to educate teachers to one level above the children they would be teaching, that is, lower grade teachers should have an elementary school education, middle grade teachers a high school education, and high school teachers what we today would consider a junior college education.  This practice did not make for a very highly educated teacher corps, and normal schools struggled from their inception for academic respectability in the educational market (Harper 1939, 129; Lucas 1997, 33).

As high schools proliferated in the late nineteenth century and the demand for high school teachers rose, liberal arts colleges began to view teacher education as a potentially lucrative business.  At the same time, as colleges expanded and accepted increasingly more graduates of public high schools, they developed an interest in the quality of the education their prospective students were getting in the public schools.  As a means of both raising money and raising the educational level of high school graduates, colleges began setting up departments of education, sometimes incorporating already existing normal schools into their institutions.

In turn, faced with competition from liberal arts colleges, normal schools began upgrading themselves into teachers colleges.  As a result of these trends, there were very few avowed normal schools left by the 1930’s.  This process of institutional upgrading continued after World War II when teachers colleges began transforming themselves into full-service liberal arts colleges and universities, with the result that there are very few teachers colleges left today (Harper 1939, 113; Church & Sedlak 1976, 227; Lucas 1997, 35-38, 295).

The process of upgrading teacher education programs was neither smooth nor consistent.  There was considerable opposition from academic faculty within liberal arts colleges to the development of teacher education programs.  Academic professors complained that education professors and students of education were inferior and that education courses were sophomoric.  In the late nineteenth and early twentieth centuries, most education professors were themselves graduates of normal schools and did not have even bachelors’ degrees let alone doctorates.  And many prospective elementary school teachers had not graduated high school.  Hiring standards for education professors and entrance standards for education students steadily rose during the twentieth century but objections by academic professors to the quality of the students, professors and instruction in education programs continue to the present day, even in universities that started as normal schools and teachers colleges (Harper 1939, 102; Lucas 1997, 40, 44).

Essentialism and Progressivism began as efforts to reconcile universities’ academic departments and education programs, and thereby upgrade both teacher education and public school teaching.  Both theories derived from the seminar methods that were introduced from Germany into the academic departments of emerging American universities in the late nineteenth century.  American universities developed as institutions devoted to the practical study of modern academic disciplines, especially the physical and social sciences, as opposed to the classical curriculum of ancient languages, ancient history and philosophy that prevailed in most nineteenth century colleges.  Seminar methods encouraged critical thinking and in-depth discussion of academic subjects, as opposed to the lecture method of transmitting information and ideas that prevailed in most traditional colleges.  Essentialism and Progressivism attempted to translate seminar methods into elementary and secondary school teaching (Bestor 1953, 169-170; Westbrook 1991, 107).

Essentialism stems from the National Education Association’s Committee of Ten on Secondary School Studies, chaired by Charles Eliot, the president of Harvard University.  In a report issued in 1893, the Committee proposed a high school curriculum, and by inference an elementary curriculum and a program of teacher education, that focused on the modern academic disciplines.  This is generally considered the founding document of Essentialism and the program around which Essentialists have rallied to the present day.  The Committee rejected both the classical curriculum followed by most of the elite private schools of that day and the standardized materials, tests and teaching methods of the common schools and normal schools.  In promoting the social and physical sciences, the report proposed treating teachers and students as scholars who would conduct in-depth analyses of academic problems.

Although almost all public school curricula from the early 1900’s to the present day have followed the basic format recommended in the 1893 report, Essentialists have perennially complained that the content of school curricula has not matched their form, and generally attributed this disparity to the weakness of teacher education programs.  As a remedy, Essentialists have argued that education programs should be controlled by the academic departments of colleges and universities (Sizer 1964, 209, 264-265; Church & Sedlak 1976, 295, 298, 300; Tanner & Tanner 1980, 232-239; Ravitch 1985, 71; Hirsch 1987, 116).

Progressivism is exemplified by the 1918 National Education Association Report entitled “The Cardinal Principles of Secondary Education” which called for a social-centered curriculum and a problem-centered methodology for schools.  The report rejected the standardization and mechanization of common/normal school pedagogy in favor of treating teachers as professionals and students as citizens who engage in real-life decision-making activities.  The report proposed that teachers and students be allowed considerable leeway in their curricula and methodologies.  It intended teacher education programs to be jointly developed by academic departments and education schools.  From the 1920’s to the present day, Progressivism has been the professed theory, but not often the practice, of most schools of education (Cremin 1961, 4; Tanner & Tanner 1980, 276-278; Goodlad 1990, 187-189).

Essentialism and Progressivism share many key principles that distinguish them both from common/normal schooling.  Both are based on the premise that basic skills, the primary goal of common/normal schools, are necessary but not sufficient for either teachers or students.  Both contend that teachers can and should teach academic subject matter and higher level thinking skills to children starting in the earliest grades.  Both insist that most students are best taught through creative and critical thinking, rather than through recitation and drill.  And both contend that standardized textbooks, tests, workbooks and other standardized teaching methods should be used only as supplementary tools.  In sum, while common/normal schoolers regard creative activities as at most supplementary to their primary methods of recitation and drill, Essentialists and Progressives promote creative activities and critical thinking as their primary methods, supplemented with recitation and drill when necessary.

Essentialists and Progressives regard teaching as a creative exercise for teachers and students alike, unlike common/normal schoolers who view teaching through the lens of nineteenth century positivism.  In common/normal schooling practice, science is regarded primarily as a set of rules and results rather than an experimental process, and scientific teaching is seen as implementing formulaic methods.  Essentialists and Progressives generally regard teaching as a scientific method itself in which lesson plans are merely hypotheses, tentative proposals to solve pedagogical problems that can be modified as they are implemented.  Teaching is science in the making, not ready-made.

Essentialists and Progressives also insist that it is not enough for teachers to be merely an academic step ahead of their students but that all teachers, and especially lower grade teachers, must be academically well prepared.  The lower the grade level, and the less skilled and knowledgeable the students, the more teachers must know to be able to translate and discuss complex subjects with their fledgling students.  For Essentialists and Progressives, the primary purpose of a college education for teachers is to develop disciplinary and interdisciplinary expertise that will take them and their students beyond basic skills and rudimentary knowledge.  While the two theories differ somewhat in their emphases – Essentialists focusing on knowledge of the separate disciplines and Progressives focusing on interdisciplinary problem-solving – both stress academic learning and promote the integrity and independence of academic departments.  In sum, they would seem to have much in common and a common cause.

Parting of the Ways.

            Part 1: Politics.  Although there were differences between Essentialism and Progressivism from the start, the differences were not so great that a person could not be an adherent of both.  Essentialism was initially the product of college professors from elite universities and was intended for college-bound, middle-class students.  Hence its intellectual emphasis on scholarship and the academic disciplines.  Progressivism was largely a product of public school teachers and was initially intended primarily for working class students who would be lucky to graduate high school.  Hence its emphasis on problem-solving and practical thinking.  These intellectual differences were not, however, as divisive as they have become.

Charles Eliot, an early founder of what became the Essentialist program, was also the first chairperson of the Progressive Education Association in 1919 (Cremin 1961, 240).  John Dewey, who succeeded Eliot as head of the Progressive Education Association, spoke out forcefully about the importance of teaching the academic disciplines (Dewey 1938, 2, 12-13, 80, 83, 95, 99, 109).  So, what happened?  How can one explain the parting of the ways of these two movements?  I think that in addition to their intellectual differences, a variety of political and institutional factors drove them apart.

Politically, Progressives and Essentialists polarized as they competed for the attention of educators, and both sides succumbed to extremism, exaggeration and alliances based on the principle that my enemy’s enemy is my friend.  Extremists in both camps took control of the debate and defined each camp’s view of the other in the most pejorative terms.  This is exemplified in the characterization of Progressivism by E.D. Hirsch as “anti-knowledge” and by Diane Ravitch as “a cult” that appeals to “below-average students” (Hirsch 1996, 3; Ravitch 1983, 79-80).  In turn, Progressives have characterized Essentialism as “an academic utopia” that appeals to professors but denigrates students (Trow 1954, 21).  Each side sets up the other as a straw man and then knocks him down, which may be personally satisfying but resolves little.

Excessive partisanship has sometimes led otherwise reputable scholars to exaggerate and even misrepresent their evidence in order to make political points.  Arthur Bestor, for example, was a meticulous historian whose research methods were considered a model of thoroughness and objectivity (Clark 1950, 282).  In his polemical writings against Progressivism, however, he resorted to personal attacks and unsubstantiated claims which he was forced to admit were erroneous but then justified on the grounds that political debate did not have to meet the same high standards as historical scholarship (Bestor 1955, 438-447).

Similarly, Diane Ravitch, a disciple of Bestor, is a highly regarded historian who was for many years a vehement critic of Progressivism, and whose works are still cited by conservatives in their attacks on Progressives (Ravitch 1985, 74, 81).  In her polemical writings, she too seemed to abandon historical objectivity in favor of scoring political points.  For example, in attacks on the “language police” who she claimed were censoring textbooks, Ravitch railed against present-day Progressives based almost entirely on stale examples of a small group of overzealous feminists and civil rights activists during the 1960’s and 1970’s (Ravitch 2003, 14-16).

Although Progressives have been more sinned against than sinners, they have, nonetheless, been guilty of similar excesses.  Progressives, for example, tried to have articles by Bestor excluded from educational journals during the 1950’s on the spurious grounds that he was anti-education.  And Harold Hand, a noted Progressive education professor at the University of Illinois, tried to stop the University of Illinois Press from publishing one of Bestor’s books on the misbegotten grounds that Bestor was anti-democratic (Brickman 1953, 154; Hand 1954, 27).

Progressives and Essentialists promote their separate myths of a “golden age” of schooling in which the others play the spoiler role of serpents in the garden.  Successive generations of Essentialists, viewing their childhoods through rosy lenses and their middle age through a glass darkly, have complained about the downfall of public schooling since their youth and blamed Progressivism for the calamity.  Mortimer Adler, for example, bemoaned the degraded condition of public schooling in the 1930’s compared to the education he had received during the early 1900’s (Adler 1939/1988, 78).  But then Arthur Bestor complained in the 1950’s about the decline of public schools since what he claimed was their heyday in the 1930’s (Bestor 1955, 140).  And E.D. Hirsch complained in the 1980’s about the decline of the schools since what he saw as their high point in the 1950’s (Hirsch 1987, 1-4).

United only in blaming the decline of public education on Progressivism, each of these Essentialists identified the other’s low point as his high point, and each pointed to a different decade as the date of the alleged Progressive takeover of the schools – Adler the 1920’s and 1930’s, Bestor the 1940’s and 1950’s, and Hirsch the 1960’s and 1970’s.  Although these claims are inconsistent to the point of absurdity, successive generations of Progressives have similarly blamed Essentialists for all of the ills of the schools (Rugg 1926, 30, 39, 67-68; Burnett 1954, 74; Engle & Ochoa 1988, 107-108).

Essentialists and Progressives also have made political alliances with political conservatives that exacerbated their differences while weakening the integrity of their respective positions.  Historically, most prominent Essentialists have identified themselves as political liberals, including Charles Eliot, William Bagley, Arthur Bestor, Jerome Bruner, E.D. Hirsch and Diane Ravitch, and promoted Essentialism as part of their liberal agenda.  During the 1930’s, for example, Bagley advocated teaching the liberal academic disciplines in order to promote liberal social goals, including a cooperative economy and a comprehensive system of social welfare, and to combat the rise of fascism in America (Bagley 1934, 33, 120-122; Bagley 1937, 73).  Bestor, a disciple of Bagley, argued during the 1950’s that teaching the liberal disciplines would help foster social democracy and defeat McCarthyism (Bestor 1952, 4; Bestor 1953, 25-39; Bestor 1955a, 18).  Hirsch, a disciple of Bagley and Bestor, has argued since the 1980’s that studying the academic disciplines is the “only sure avenue of opportunity for disadvantaged children” and the best way to make America a more liberal and just society (Hirsch 1987, XIII; Hirsch 1996, 16).

These same Essentialists have, however, frequently joined with conservatives in their monomania to defeat Progressivism.  During the 1950’s, Bestor joined with Mortimer Smith, an avowed Social Darwinist and political reactionary, to form the Council on Basic Education, hoping to outflank Progressives through such an alliance (Smith 1949, 90-92; Cremin 1961, 546).  During the 1980’s, Ravitch worked in the Reagan administration and is currently a trustee of the arch-conservative Thomas B. Fordham Foundation.  Meanwhile, Hirsch proclaimed that “after six decades of anti-knowledge extremism” from Progressives, he was going to become an extremist himself and join with anybody who would oppose Progressivism in the education wars (Hirsch 1997, 7, 126).

Toward this end, both Hirsch and Ravitch supported the onerous testing provisions of the “No Child Left Behind” (NCLB) act.  Under NCLB, elementary students in grades three through eight have been tested every year in reading, math and, eventually, science, with schools penalized unless test scores rise substantially every year.  Although Hirsch and Ravitch generally concede that standardized tests do not reflect the kind of in-depth knowledge that Essentialists desire, and that Essentialism works best with the sort of low-stakes portfolios promoted by Progressives as an alternative to high-stakes tests, they, like their mentor Bestor, support standardized testing seemingly both as a means of measuring and, thereby, promoting students’ academic subject matter knowledge and as a means of thwarting Progressives who oppose standardized tests.

Political conservatives, such as William Bennett and Allan Bloom, took advantage of the Essentialist-Progressive conflict to promote their culture wars against what they claim is “the prevailing liberal orthodoxy” in America by supporting the Essentialist cause.  They hoped to use Essentialism as a vehicle for reinstating the system of elite schools for the few and common schools for the many that prevailed in the nineteenth century, and the Anglo-centered mono-culture that was taught in those schools.  Citing Bagley and Bestor as predecessors and claiming Hirsch and Ravitch as allies, Bennett and Bloom advocated an educational system in which ordinary students will be taught the 3 R’s plus moral education (essentially the 4 R’s of the common schools) while only the best and brightest will pursue academic subjects in depth.

Other conservatives have cited Essentialism as a rationale for privatizing schools and returning to the pre-common school system of the early nineteenth century, or cite Essentialist arguments in favor of greater academic content in the school curriculum as support for proposals for indoctrinating students with politically conservative ideas.  These are very different goals than those proclaimed by Bagley, Bestor, Ravitch and Hirsch (Bennett 1984, 6; Bennett 1991, 1-3; Bloom 1987, 25-43; Rochester 2003, 19, 21, 27).

Progressives have made similar alliances with conservatives, although not as frequently and rarely in recent decades.  While the most prominent Progressives have been politically liberal, from John Dewey to William Kilpatrick, Harold Rugg, George Counts, Theodore Brameld, Benjamin Bloom and John Goodlad, there have also been self-styled Progressives such as Edward Thorndike, whose support for intelligence testing and standardized achievement testing led him to elitist theories of society and education that contravene mainstream Progressivism.  Like conservative Essentialists, Thorndike advocated critical thinking for the best and brightest students and social control for ordinary children.  And he developed so-called scientific rules for teaching that were basically a more sophisticated version of the standardized methods promoted in nineteenth century normal schools (Church & Sedlak 1976, 334).  Allies such as Thorndike were worse than enemies for Progressives.  In characterizing themselves as Progressives, Thorndike and his followers merely provided ammunition for self-styled liberal Essentialists such as Ravitch and Hirsch (Ravitch 1983, 56; Ravitch 1985, 14).

Progressivism has also often been used by common/normal schoolers as a cover for their anti-intellectual practices, most frequently by citing Progressive “child-centered” methods as an excuse for adopting academically and intellectually empty curricula.  And there has been a tendency among Progressives to defend this sort of incompetence on the grounds that any criticism of schools or school teachers undermines support for public education, a tactic that has left Progressivism open to well-deserved ridicule (Eklund 1954, 350).

Arthur Bestor, for example, liked to tell the story of a junior high school principal who claimed on Progressive grounds that since most people work with their hands, not every child needs to learn how to read and write (Bestor 1953, 54-56).  E.D. Hirsch tells a similar story of a self-styled Progressive elementary school principal who claimed that since most people don’t travel, children don’t need to learn geography (Hirsch 1996, 55).  In recent years, we have heard so-called Progressives who, in the name of holistic learning, won’t teach their students the multiplication tables in math or the structure of words and sentences in reading.  In sum, the tendency of Essentialists to join with political reactionaries and Progressives to defend extremists has significantly exacerbated their differences with each other.

Part 2: Institutions.  The differences between Essentialism and Progressivism have also been exacerbated by institutional factors, especially the pervasiveness of common/normal schooling practices in public schools and schools of education which has warped both Essentialist and Progressive reforms and, thereby, lent support to each side’s criticisms of the other.  Among Progressive reforms, for example, Kilpatrick’s project method, which he intended as a vehicle for creativity among teachers and students, quickly devolved in most public schools into just another standardized routine, codified in textbooks and teaching packages as either a means of drilling students in basic skills or a meaningless activity about which Essentialists such as Hirsch have justifiably complained (Church & Sedlak 1976, 381).

Similarly, Benjamin Bloom’s taxonomy of cognitive skills, which he developed during the 1950’s as a means of promoting critical thinking and creative teaching, was soon reduced in most public schools and schools of education to a standardized lesson-planning format for teachers with nary a critical or creative piece.  While Bloom emphasized that critical thinking in students could only be taught through critical thinking by teachers, schools of education regularly misrepresent recitation and recall as critical thinking, and textbook publishers routinely supplement their textbooks and workbooks with so-called analytical and critical thinking questions that are merely recall by another name (Bloom et al. 1956; Brown 1998, 1, 4, 15; Marshall 2003, 195-196).

Bloom’s program of “mastery learning” suffered the same fate.  Developed during the 1960’s to help teachers in low income school districts who teach large classes of educationally disadvantaged children, mastery learning was designed as a whole-class method of teaching basic skills.  Bloom emphasized that mastery learning is not an educational panacea.  He cautioned that it is not applicable to creative subjects or critical thinking and it is not as effective as either tutoring students individually or teaching them in small groups.

Despite his warnings, Bloom’s proposal was quickly reduced to a seven-step formula by Madeline Hunter, who advertised her program as effective for all subjects and skills.  Reminiscent of the five-step formula to which Herbart’s theories of creative learning were reduced in the late nineteenth century, Hunter’s seven steps have often been adopted as a blueprint by public schools and schools of education and reduced to bullet-points in teaching textbooks and model lesson plans, a blueprint that has little room for creative or critical thinking (Bloom 1976, 5, 21, 41, 105, 200; Hunter 1973, 97; Hunter 1977, 100; Hunter 1985, 58; Brandt 1985, 61; Freer & Dawson 1987, 68; Gibboney 1987, 47-48).

In a similar fashion, John Goodlad’s experiments in whole-school reform during the 1970’s, predicated on bottom-up cooperative action by parents, students and teachers, have been misused to justify top-down, state-mandated reforms since the 1990’s, one of the most serious and ominous misuses of an erstwhile Progressive reform (Goodlad 1975, 5, 152, 177, 209; Goodlad 1997).  In the wake of NCLB, the language of whole-school reform and student/teacher empowerment was co-opted to promote the wholesale reorganization of schools to raise standardized test scores.  In a book that exemplifies this trend, Eugene Kennedy noted that the most difficult task is to convince skeptical students and teachers that teaching to the test is real learning.  His proposed reforms are common/normal schooling practices in participatory democratic wrappings (Kennedy 2003).

Essentialism has also been deformed by the hegemony of common/normal school practices, and is almost invariably reduced to a list of common facts and basic skills that ostensibly represent the core of the academic disciplines.  This conflict between Essentialist ideals and practices is exemplified by E.D. Hirsch’s writings.  In his best-reasoned theoretical statements, Hirsch has rejected what I have described as common/normal schooling and shares many key positions with Progressives.

He opposes drill and recitation as boring and rigid, and explicitly supports Progressive methods of active learning.  He rejects ethnocentric curricula and explicitly supports multicultural education.  He opposes emphasizing basic skills and rudimentary knowledge, and promotes a combination of skills, academic knowledge and problem-solving.  He promotes the idea of the teacher as a “guide” rather than dictator.  Most significant, the curricular guidebooks that Hirsch has prepared for elementary school teachers incorporate multicultural materials and multiple perspectives, and emphasize creative and critical thinking of sorts that are consistent with Progressive theories and practices (Hirsch 1987, 125; Hirsch 1993; Hirsch 1996, 102, 150, 174).

At the same time, in his polemical statements against Progressivism, Hirsch has essentially caricatured his own ideas, reducing his proposed curricula to arbitrary lists of facts and ideas that he claims everyone needs to be familiar with, even if they do not understand them, and promoting the rote memorization of these lists on the ostensible grounds that children like to memorize things.  Calling for nationally standardized lists and tests, and promoting NCLB, he would seemingly make common schooling the law of the land.  Hirsch’s is almost a Dr. Jekyll and Mr. Hyde performance (Hirsch 1987, 14, 30, 131, 141).

What is to be done?

It has frequently been said that education during the twentieth century was a battle between Dewey and Thorndike, and Thorndike won (Levin 1994, 6).  This is another way of saying that despite all the sound and fury of the Essentialists and Progressives, it was common/normal schooling practices that prevailed.  A variety of political and institutional factors have contributed to this outcome.

Politically, it is very hard to displace a long-time hegemonic theory such as common/normal schooling, especially when that hegemony is supported by powerful groups of educators who are satisfied with the status quo – what Arthur Bestor and other critics have called “the interlocking group” of education professors, school teachers and state education officials who set the standards and requirements for public schools and schools of education.  Trying to organize an opposition to these groups is an uphill struggle (Bestor 1953, 101; Ravitch 1985, 94).

Common/normal schooling also has political appeal to conservatives who are afraid of change and to reactionaries who want to go back to the nineteenth century.  Essentialism expects teachers and students to work on the frontiers of knowledge, with cutting edge ideas that will inherently foster change.  Progressivism expects teachers and students to work on solving social problems and making cultural innovations which may also lead to change.  As such, Essentialism and Progressivism seem dangerous to many people – parents, teachers, administrators, politicians – including many who say they are in favor of innovative methods but do not practice what they preach (Goodlad 1984, 236).

Institutionally, it is difficult to overcome the inertia of a longstanding set of practices such as common/normal schooling, which are to many the common sense of education.  To suggest any change is to risk getting the bewildered response “But we’ve always done it that way” or “But everybody does it that way.”  Trying to convince people to adopt alternative methods can seem a Sisyphean task (Sarason 1971, 4, 19).

Common/normal schooling also has the popular appeal of standardization, which is widely seen as the common sense of an industrial society and bureaucratic system.  Standardized curricula, teaching methods and testing seem the safe way to do things, to impose order on a situation that could otherwise be messy.  Standardization also responds to the imperial urge to impose what you see as the one best system on everyone else (Tyack 1974, 4, 197, 238).

But common schooling methods cripple students and teachers, and normal schooling methods warp schools of education and the universities that house them.  Common schooling cripples students because in a society as dynamic as ours, children cannot merely follow in their parents’ footsteps.  The most important skill they need to learn is how to think critically and reflectively about themselves and their world, so that they can creatively and effectively respond to change.  Although studies indicate that students will do as well on standardized tests if you teach them well – according to Essentialist and/or Progressive methods – as if you teach them to the test, most public school and school of education administrators scurry to the common schooling mode in the face of standardized testing requirements (Kohn 2000).

Common schooling cripples teachers by depriving them of the opportunity to make professional choices and by forcing them to use so-called teacher-proof materials and methods, the sorts of things that anyone can use without having to know very much.  The persistence of common schooling reflects a profound disparagement of teachers and their potential to act as professionals, as people capable of making informed decisions of their own.

Common schooling methods also contribute to the chronic problem of teacher drop-out which has plagued school systems since the nineteenth century.  From the 1840’s to the present day, some fifty percent of teachers regularly leave the profession within five years of entering.  Boredom has consistently been cited by ex-teachers as one of the main reasons they left education.  Using the same textbooks and workbooks, teaching the same basic skills in the same ways over and over, without any impetus and little opportunity for intellectual growth, can become very stale after a very few years.  And when teachers get bored, they generally get boring and then their students get bored, and that leads to trouble.  Although Essentialist and Progressive methods require somewhat more intellectual effort from teachers, creative and critical thinking are generally more interesting to teachers and students alike and, as such, less draining.  Using Essentialist and Progressive methods, teachers can spend more time teaching and less time disciplining their students – and less time ruing the day they decided to become teachers (Bagley 1937, 81; Bagley & Alexander 1937, 6; Notebook 2003, 3).

Normal schooling methods turn schools of education and the universities that house them into glorified trade schools churning out low-level technicians instead of educating scholars and professionals.  While Essentialists and Progressives seek to elevate school teachers closer to the status of college professors, normal school practices tend to reduce college professors to the status of elementary school teachers.  To the extent that standardization is the goal of teacher education programs, all professors, not merely the education professors, will be subject to petty bureaucratic controls over their courses and their teaching.

In most universities today, academic departments are expected to offer lower level versions of their courses and programs for prospective school teachers, or to support so-called general education degrees for teachers, which are usually smorgasbords of introductory courses that are neither in-depth in any discipline nor reflectively interdisciplinary, and in which students study a little bit about everything but all too often learn a lot about nothing.  The normal school rationale offered for degrading academic programs in this way is that teachers do not have to know much about anything.  They only need to know a bit more than their students, just enough to follow the directions in the teachers’ manual and stay a chapter ahead in the teachers’ edition of the textbook, the one with the answers in the back.  This is a demeaning program for academic professors as well as teacher educators (Rhodes 1998, 144).

So, where do we go from here?  As a self-styled Progressive, I have for many years regarded Essentialists as at best wrong and more generally wrong-headed.  At the same time, I have sometimes found myself secretly agreeing with some of their statements – academic knowledge is good, academic disciplines are productive ways of organizing knowledge, and knowledge of the disciplines can be useful.  I have usually kept these thoughts to myself but have finally decided that reconciliation between Essentialism and Progressivism is possible and necessary.  I believe that there is good reason and reasonable hope for Essentialists and Progressives to work together to meet Henry Adams’ challenge and finally end the persistence of common/normal schooling in America.

References

Adams, Henry, 1918/1961.  The Education of Henry Adams. Boston: Houghton Mifflin.

Adams, Henry, 1919/1969. The Degradation of the Democratic Dogma. New York: Harper & Row.

Adler, Mortimer, 1939/1988. “Tradition and Progress in Education.” In Reforming Education, G. Van Doren, Ed. New York: Macmillan.

Bagley, William, 1934.  Emergent Education and Emergent Man. New York: Thomas Nelson & Sons.

Bagley, William, 1937. A Century of the Universal School. New York: Macmillan.

Bagley, William & T. Alexander, 1937. The Teacher of the Social Studies. New York: Charles Scribner’s Sons.

Bennett, William, 1984. To Reclaim a Legacy. Washington, D.C.: National Endowment for the Humanities.

Bennett, William, 1991. The War Over Culture in Education. The Heritage Foundation.

Bestor, Arthur, 1952. “The Study of American Civilization: Jingoism or Scholarship?” William and Mary Quarterly, 9, 3rd Series.

Bestor, Arthur, 1953.  Educational Wastelands. Urbana, IL: University of Illinois Press.

Bestor, Arthur, 1955. The Restoration of Learning. New York: Alfred Knopf.

Bestor, Arthur, August 29, 1955a. “John Dewey and American Liberalism.” The New Republic.

Bloom, Allan, 1987. The Closing of the American Mind. New York: Simon & Schuster.

Bloom, Benjamin et al., 1956. Taxonomy of Educational Objectives: The Classification of Educational Goals, Handbook I – Cognitive Domain. New York: David McKay Co.

Bloom, Benjamin, 1976. Human Characteristics and School Learning. New York: McGraw-Hill.

Brandt, R., February 1985.  “On Teaching and Supervising: A Conversation with Madeline Hunter.” Educational Leadership, 42.

Brickman, W., 1953. “Criticism and Defense of American Education.” School and Society, Vol. 77, No. 9.

Brown, K., 1998. Education, Culture and Critical Thinking. Aldershot, Eng: Ashgate.

Burnett, R.W., 1954. “Mr. Bestor in the Land of the Philistines.” Progressive Education, Vol. 31, no. 3.

Butts, R.F. & L.A. Cremin, 1953.  A History of Education in American Culture.  New York: Henry Holt & Co.

Church, R.L. & M.W. Sedlak, 1976. Education in the United States. New York: Free Press.

Clark, S.D., November 1950.  “Review of the book Backwoods Utopias by Arthur Bestor, Jr.” American Journal of Sociology, 56.

Cremin, Lawrence, 1961. The Transformation of the School. New York: Vintage Books.

Cuban, Larry, 1991. “History of Teaching in Social Studies.”  In J. Shaver, Ed., Handbook of Research on Social Studies Teaching and Learning. New York: Macmillan.

Dewey, John, 1938. Experience and Education. New York: Macmillan.

Eklund, J., 1954. “We Must Fight Back.” In Public Education Under Criticism, C.W. Scott & C.M. Hill, Eds. New York: Prentice Hall.

Engle, Shirley & Anna Ochoa, Education for Democratic Citizenship. New York: Teachers College Press.

Feinberg, Walter, 1998. Common Schools/Uncommon Identities: National Unity and Cultural Difference. New Haven, CT: Yale University Press.

Feinberg, Walter, Spring 1999. “The Influential E.D. Hirsch.” Rethinking Schools, 13, 3.

Freer, M. & J. Dawson, “The Pudding’s the Proof.” Educational Leadership, 44.

Gibboney, R., February 1987. “A Critique of Madeline Hunter’s Teaching Model from Dewey’s Perspective.” Educational Leadership, 44.

Goodlad, John, 1975.  The Dynamics of Educational Change. New York: McGraw Hill.

Goodlad, John, 1982. “Let’s Get On With The Reconstruction.” Phi Delta Kappan 61, no.1.

Goodlad, John, 1984. A Place Called School. New York: McGraw Hill.

Goodlad, John, 1989. “Healing the fractured movement for educational reform.” The Chronicle of Higher Education 35, no.27.

Goodlad, John, November 1990.  “Better teachers for our nation’s schools.” Phi Delta Kappan.

Goodlad, John, 1997. In Praise of Education. New York: Teachers College Press.

Hand, Harold, January 1954.  “A Scholar’s Documentation.” Educational Theory, IV.

Harper, C.A., 1939. A Century of Public Teacher Education. Washington, D.C.: National Education Association.

Herbart, J.F., 1911.  Outlines of Educational Doctrine. New York: Macmillan.

Hirsch, E.D., 1987. Cultural Literacy. New York: Random House.

Hirsch, E.D., 1993. What Your Fifth Grader Needs to Know: Fundamentals of a Good Fifth-Grade Education. New York: Delta.

Hirsch, E.D., 1996. The Schools We Need and Why We Don’t Have Them. New York: Doubleday.

Hirsch, E.D., December 5, 2002.  “Traditional Education IS Progressive.” The American Enterprise.

Hunter, Madeline, November 1973.  “Make Each Five Minutes Count.” Instructor, 83.

Hunter, Madeline, April 1977. “Humanism vs. Behaviorism.” Instructor, 86.

Hunter, Madeline, February 1985. “What’s Wrong With Madeline Hunter?” Educational Leadership, 42.

Jenness, David, 1990.  Making Sense of Social Studies. New York: Macmillan.

Kandel, I.L., 1961. William Chandler Bagley: Stalwart Educator. New York: Teachers College Bureau of Publications.

Katz, Michael, 1971.  Class, Bureaucracy and Schools. New York: Praeger.

Kennedy, Eugene, 2003. Raising Test Scores for ALL Students: An Administrator’s Guide to Improving Standardized Test Performance. Thousand Oaks, CA: Corwin Press.

Kliebard, Herbert, 1986. The Struggle for the American Curriculum, 1893-1958. Boston: Routledge & Kegan Paul.

Kohn, Alfie, 2000. The Case Against Standardized Testing: Raising the Scores, Ruining the Schools. Portsmouth, N.H.: Heinemann.

Levin, Robert, 1994. Educating Elementary School Teachers: The Struggle for Coherent Visions, 1909-1978. Lanham, MD: University Press of America.

Lucas, C., 1997. Teacher Education in America: Reform Agenda for the Twenty-First Century. New York: St Martin’s Press.

Mann, Horace, 1840/1989. On the Art of Teaching. Boston: Applewood Books.

Marshak, D., 2003. “No Child Left Behind: A Foolish Race into the Past.” Phi Delta Kappan 85, no.1.

Marshall, J., November 2003. “Math Wars: Taking Sides.” Phi Delta Kappan, Vol. 85, no. 3.

Morrison, K.L. & C.S. Marshall, 2003. “Universities and Public Schools: Are We Disconnected?” Phi Delta Kappan 85, No.4.

Nelson, Jack, Stuart Polansky & Kenneth Carlson, 2000. Critical Issues in Education. New York: McGraw Hill.

Notebook, Summer 2003. “Attrition, Not Recruitment, Is Root of Teacher Shortage.” American Educator, Vol. 27, no.2.

Pestalozzi, J.H., 1898. How Gertrude Teaches Her Children. Syracuse, N.Y.: C.W. Bardeen.

Pulliam, J.D. & J.J. Van Patten, 1999. History of Education in America. Upper Saddle River, NJ: Prentice Hall.

Ravitch, Diane, 1983.  The Troubled Crusade. New York: Basic Books.

Ravitch, Diane, 1985. The Schools We Deserve: Reflections on the Educational Crisis of Our Times. New York: Basic Books.

Ravitch, Diane, Summer 2003. “Thin Gruel: How the Language Police Drain the Life and Content from Our Texts.” American Educator, Vol. 27, no. 2.

Rhodes, C., 1998. Structures of the Jazz Age: Mass Culture, Progressive Education, and Racial Discourse in American Modernism. New York: Verso.

Rochester, J.M., 2003.  “The Training of Idiots: Civics Education in America’s Schools.” In Where Did Social Studies Go Wrong? J. Leming, L. Ellington & K. Porter, Eds. Upper Marlboro, MD: Thomas B. Fordham Foundation.

Rugg, Harold, 1926. The 26th Yearbook of the National Society for the Study of Education: The Foundations and Technique of Curriculum-Construction, Part I: Curriculum Making: Past and Present. Bloomington, IL: Public School Publishing Co.

Sarason, Seymour, 1971. The Culture of the School and the Problem of Change. Boston: Allyn & Bacon.

Sarason, Seymour, 1996.  Revisiting “The Culture of the School and the Problem of Change.” New York: Teachers College Press.

Sizer, Theodore, 1964. Secondary Schools at the Turn of the Century. New Haven, CT: Yale University Press.

Sizer, Theodore, 1992. Horace’s Compromise. Boston: Houghton Mifflin.

Smith, Mortimer, 1949.  And Madly Teach. Chicago: Henry Regnery Co.

Tanner, Daniel & Laurel Tanner, 1980. Curriculum Development, Theory into Practice.  New York: Macmillan.

Tennenbaum, Samuel, 1951. William Heard Kilpatrick. New York: Harper & Bros.

Trow, W.C., January 1954. “Academic Utopia? An Evaluation of Educational Wastelands.” Educational Theory, IV.

Tyack, David, 1974. The One Best System. Cambridge, MA: Harvard University Press.

Tyack, David & Larry Cuban, 1995. Tinkering Toward Utopia: A Century of Public School Reform. Cambridge, MA: Harvard University Press.

Westbrook, Robert, 1991. John Dewey and American Democracy. Ithaca, NY: Cornell University Press.

Wisniewski, R., November 1990.  “Let’s get on with it.” Phi Delta Kappan, 195.

What to do about the Big Bad Wolf: Narrative Choices and the Moral of a Story


Burton Weltman

Once upon a time, there were three little pigs.  The pigs, having apparently reached adolescence, were forced by their mother to leave home and make their own way in the world.  So, each of them went off by himself to build a house.  Two of the pigs were foolish and lazy, and they built houses of straw and sticks respectively.  The third pig was wise and hardworking so he built a house of bricks.  A big, bad wolf came along and easily destroyed the houses of the two foolish pigs.  They barely escaped with their lives before he could eat them.  The wolf could not destroy the brick house, however, so he tried to trick the third pig into coming outside.  But the wise pig was not fooled.  Instead, he tricked the wolf into coming down the chimney of the house, at which point the wolf fell into a pot of boiling water and ran away with a scorched rear end.

This is the gist of Walt Disney’s version of the story of The Three Little Pigs, a traditional European folktale that Disney adapted and made popular in America during the 1930’s.  Appearing originally as a cartoon movie, the Disney story has since been continuously in publication as a very popular illustrated children’s book (Disney 1933; Disney 2001, 69-84).  Later variations of the story, such as those by Paul Galdone and Gavin Bishop, follow the gist of the Disney version but have the wolf eat the first two pigs and have the third pig then eat the wolf at the end (Galdone 1970; Bishop 1989).

Like all Disney stories, The Three Little Pigs is full of lessons.  The first lesson is that in this world it’s every pig for himself.  Significantly, the pig brothers did not work together to build a house but went off individually.  It is an eat-or-be-eaten world, according to Walt Disney, and you’ve got to take care of yourself first and foremost.  This lesson is even clearer in Galdone’s and Bishop’s versions.  A second lesson of the story is that difference is dangerous.  The sympathetic characters are all pinkish pigs.  The evil character is a black wolf.  In the context of the story, the pigs are right to be afraid of an animal that is not like them.

The racial implications of the Disney story, which are followed by Galdone and Bishop, are seemingly no accident, especially when you consider that most adolescent pigs are not pinkish and most wolves are not black.  The implicit racism reflects, among other things, the dramatic imagery of America in the 1930’s.  During the 1930’s, if you wanted to make something scary for mainstream, pinkish American audiences, you made it big and black.  The overall moral of Disney’s story is that we live in a world in which good is continually being confronted by evil, and the good characters must fight to the death against the bad ones.  These are significant lessons for children to learn from a story.

1. Coming to Terms with “The Three Little Pigs”: What to do about the Big Bad Wolf?

At the end of Into the Woods, a wonderful musical about children’s stories by Stephen Sondheim and James Lapine, the witch intones one of the play’s main themes: “Careful the things you say, children will listen…Careful the spell you cast…Sometimes the spell may last past what you can see…Careful the tale you tell. That is the spell.  Children will listen.”  Storytellers have long known of the influence their tales can have on children and many, like Disney, have deliberately tried to use this power for purposes of moral, social and political education.  “Writing for children is usually purposeful,” John Stephens has noted, “its intention to foster in the child reader a positive apperception of some socio-cultural values.”  In turn, Stephens says, “Every book has an implicit ideology” (Stephens 1992, 3, 9).  Children’s stories are, thus, a contested terrain over which storytellers of different political persuasions have fought for many years.

Disney’s version of The Three Little Pigs conveys a view of the world that most politically progressive people would not accept.  The selfish individualism, the genetic determinism, the tinge of racism, the Social Darwinism, and the inevitable violence in the story are contrary to views that most progressives would like to impart to children.  So, some have recently tried re-writing the popular story to better fit with their progressive ideals.

As one alternative to the Disney story, Jon Scieszka, a well-known author of children’s books, has written The True Story of the 3 Little Pigs (Scieszka 1989), in which the wolf tells his side of the story.  Scieszka picks up on the underlying racism of the Disney version and tries to counter it.  In Scieszka’s book, the wolf is portrayed as a member of a persecuted minority in a predominantly pig society, paralleling the racial story of blacks in predominantly white American society.  The wolf has been jailed for the murder of the first two little pigs and, in his defense, claims the two pigs died by accident and that he then ate them only because he did not want to let good meat go to waste.

Scieszka seems to hope we will come to sympathize with the good-natured, humorous wolf, and we do.  But there is an underlying moral to the story that Scieszka seems to have missed.  Even if you believe the wolf’s story, you still have to come to the conclusion that pigs and wolves cannot live together in the same society because wolves eat pigs, as the wolf admittedly did.  The only solution to the problem posed in this story is to segregate the wolves from the pigs.  Despite his liberal intentions, Scieszka has unwittingly written a story that seems to justify racial segregation.

Another alternative to Disney’s story is The Three Little Wolves and the Big Bad Pig (Trivizas 1993) by Eugene Trivizas, another highly regarded author of children’s books.  Trivizas seemingly tries to deal with the problem that Scieszka ran into by making the wolves weak and vulnerable and making the pig big and scary.  Trivizas has three little wolves – one black, one white and one grey – being chased by a big pink pig.  The pig claims to want to be friends with the wolves, but they are afraid of him and they work together to try to protect themselves from the pig.  The wolves build increasingly stronger houses that are successively knocked down by the pig.

In the end, the wolves decide to try being friendly to the pig and it works.  The wolves and the pig have a party and, according to Trivizas, “they all lived happily together ever after.”  But that is not plausible.  They cannot have “lived happily together ever after” because eventually the wolves will grow up and wolves eat pigs.  So when the three little wolves become three big wolves, the pig is likely to become lunch.  Again, despite the author’s multi-racial, multi-cultural, all-inclusive intentions, the story implicitly leads the reader to the conclusion that some sort of racial or species segregation is necessary.

The moral of these three versions of “The Three Little Pigs” is that if you want to write a story about the virtues of diversity and the peaceful reconciliation of differences, you should not choose wolves and pigs as your main characters.  The biological imperatives of pigs and wolves will defeat your intentions.  The underlying lesson is that the narrative choices an author makes in setting up a story can predetermine the moral outcome, regardless of the author’s overt intentions.  Walt Disney seemingly had a message he was trying to convey with his story and he made narrative choices on that basis. If you follow his narrative choices, as Scieszka and Trivizas did in accepting Disney’s cast of characters, you are likely to end up supporting his conclusions.

By contrast, in Click, Clack, Moo: Cows That Type (Cronin 2000), Doreen Cronin tells a story of farm animals – mainly dairy cows and egg-producing chickens – who successfully organize a strike against the farmer who owns them.  This is a story about strength through cooperation and diversity, as each type of animal is able to contribute to the group effort based on its particular characteristics.  It is also a story about the advantages of a peaceful resolution of differences, both among the animals and between the animals and the farmer.  A key to the success of this story is that none of the animals is a predator, and none of the animals is being used by the farmer for meat, so there are no biologically determined irreconcilable differences among them.  Cronin starts with different narrative choices than Disney and, as a result, is able to convey different moral conclusions.

The purpose of this essay is to discuss some of the narrative choices that storytellers make and the effect that these choices can have on the moral of their stories and the messages their audiences are likely to get.  Most scholarly analyses of the meaning and messages in children’s literature focus on the subject matter of the books and the political orientations of the authors (Bacon 1988; Clark 2003; Ellis 1968; Gillespie 1970; Hines 2004; Lehr 2001; Lucas 2003; Lurie 1990; MacLeod 1994; Moynihan 1988; Taxel 1988; Thaler 2003).  But, as Peter Hunt has noted, “What may be more important than what the story is about is the way in which it is shaped” and the way in which stories are shaped has been “the most important and neglected of literary features” (Hunt 1991, 73, 119).

The main thesis of this essay is that the narrative structure of a story can determine the moral of the story, irrespective of its subject matter and its author’s intentions, and that teachers must be particularly attuned to this fact in choosing both what things they have their students read and how they discuss things with their students.  The moral of a story is often determined by its structural medium and cannot be characterized solely through its subject matter and its author’s political orientation (Witherell et al 1995, 40; also Egan 1988; Egan 1992).

Whether an author or teacher is telling factual or fictional stories, discussing the news or fairytales, relating anecdotes of daily life or theories of society, writing history books or novels, teaching social studies or literature – that is, dealing with anything that has explicitly or implicitly a narrative form – the narrative structure can determine the meaning and effect of the story irrespective of the storyteller’s intent or the subject matter of his/her narrative.  Depending on their narrative structures, stories with essentially the same subject matter and political intentions can have very different moral, cultural, social and political messages (Stephens 1999, 74, 78).

The primary conclusion of the article is that when teachers choose things for their students to read or to discuss, it is important for them to know what messages are being conveyed through the narrative structure of the reading and/or discussion.  What a teacher thinks is being conveyed through the content of a book or a discussion may be contradicted by the underlying structural message of the book or discussion (Sarland 1999, 37, 39).  As such, in analyzing a book, it is important to focus on the relationship between the book’s content and structure so as to explore fully the meaning of the book and its impact on its readers.  Likewise, in preparing a class discussion, it is important for a teacher to match his/her subject matter content with his/her narrative structure so as to convey a consistent and coherent message.  There is a message in the medium of our expression that we and our students need to understand.

2. Defining Narrative Terms: The Message in the Medium

This article focuses on four aspects of narrative structure that have significant impact on the moral of a story: (1) the characterizations in the story and, in particular, where the story stands in the debate between “nature versus nurture” and whether or not the main characters in the story are able to learn and change; (2) the dramatic form of the story and, in particular, whether it can be characterized as primarily a melodrama, comedy or tragedy; (3) the agency of the story and whether the story moves primarily as a result of chance, causation or choice; and, (4) the perspective of the story and whether the perspective is primarily top-down or bottom-up.

Although most stories incorporate a mixture of different factors, almost all are structured primarily around a particular type of characterization, form, agency and perspective.  In turn, although these four elements interrelate, so that storytellers’ choices with respect to one will likely influence their choices as to the others, storytellers are not always consistent in their narrative choices, which can lead them unwittingly to send mixed moral messages.  In sum, an author’s or teacher’s narrative choices with respect to these factors will have a significant impact on the moral of the story being told.

(a) Nature/Nurture.  The moral of a story will depend in large part on the characterizations of the people in the story and whether people are seen as able or unable to change.  This has been the gist of the argument over whether nature or nurture, genetics or environment, inherited social class and culture or acquired social skills and character, are most important in the development of individuals and society.  It has also been a crucial element in the political debate between traditionalists and progressives.

Traditionalists have generally taken the “nature over nurture” side of this debate.  One of the elements of conservative social theory from ancient times to the present has been the idea that a person is born with a certain essence which forms his/her nature and that a person cannot significantly change his/her character.  This is essentially a classist or hierarchical theory of society that justifies the rule of the well-born few – well-born in character and culture as well as wealth and power – over the disadvantaged many, and the passing of wealth and power, and poverty and powerlessness, from parents to children.

In this theory, nature controls character and justice requires that “we must leave each class to have the share of happiness which their nature gives to each” (Plato 1956, 219; Banfield 1990).  The moral imperative for people is to discover their true natures and follow the predetermined course of their lives.  For most people, this will mean staying in the social class in which they were born and doing the things that their parents did, which is consistent with the goal of most traditionalists of a society in which children can and will grow up to follow in their parents’ footsteps.

This view of character and society tends to promote individual self-discovery and self-development and to discourage social activism and social change.  In this model, problems arise most often when characters attempt to step outside their predestined social roles or are unfairly evicted from their proper social roles, or when people stupidly misconceive the nature and character of themselves or others.  Some people are naturally good, smart and otherwise qualified to occupy positions in the upper level of the social hierarchy and others are naturally bad or stupid and need to be controlled by their betters.

Social reform in this model consists of the good/smart people defeating the bad/stupid people and either eliminating or subjugating them, as is the case in Disney’s The Three Little Pigs. Many traditional children’s fairy tales – especially those told by the Grimm brothers – take this “nature over nurture” side of the argument.  The dire consequences of denying biological imperatives and/or defying inherited social roles – for example, children disobeying their parents (Rapunzel); commoners pretending to powers they don’t naturally have (Rumpelstiltskin); workers trying to assume the roles of their bosses (The Sorcerer’s Apprentice); people welcoming monsters in disguise (Little Red Riding Hood) – are emphasized.

In contrast, progressives have generally taken the “nurture over nature” side of the argument.  One of the elements of progressive social theory has been the idea that a person will develop and change depending on his/her environment – on the nurturing and education that he/she receives – and that a person can, in turn, help change the world around him/her (Barber 1998).  In this model, the moral imperative is to figure out how best to develop oneself and help develop others so that the development of each person will encourage the development of all.  In many cases, this will mean leaving the place and the social class in which a person was born and doing different things than his/her parents.

This model tends to promote self-development through cooperative social activism with education as a primary means of self and social change.  In this model, problems arise when people are blocked from individual and social growth and when society is prevented from changing with changing circumstances.  The genre of bildungsroman in which, typically, an adolescent learns and grows and then changes himself and his social surroundings exemplifies the “nurture over nature” side of the argument.  The Harry Potter series is an example of this genre.

The “Three Little Pigs” stories of Disney, Scieska and Trivias demonstrate the effect that choices about characterization can have on the message conveyed by a story.  Disney’s The Three Little Pigs is an example of a “nature over nurture” characterization.  The main characters are biologically determined.  Wolves are by nature predators.  They inevitably attack pigs.  There is nothing anyone can do to change that and any pig who underestimates the biological imperative of wolves is likely to be eaten.  In turn, there is nothing anyone can do to change the brutally competitive, zero-sum society made up of wolves and pigs.

Scieska in The True Story of the Three Little Pigs tries to humanize the wolf by telling us the wolf’s side of the story, and it works to some extent.  The wolf seems to be an amiable character.  But the wolf is still a meat eater and his genetic characteristics override his pleasant personality.  Similarly, Trivias tries to resolve the conflict in The Three Little Wolves and the Big Bad Pig by reversing the roles of pig and wolf and by having the wolves learn that if they “make love, not war,” they can be friends and not enemies with the pig.  But this can only be a temporary peace because of the zoological imperative that wolves eat pigs.  Both Scieska and Trivias get caught in the “nature over nurture” side of the debate implied in Disney’s choice of wolves and pigs as the story’s main characters.

By contrast, in Click, Clack, Moo: Cows that Type, Cronin is able to tell a story of characters who plausibly achieve a cooperative and peaceable solution because she has chosen characters whose biological imperatives do not get in the way of that solution.  In her story, the animals and the farmer change themselves through education and cooperation and, in turn, change their society for the better.  As an educator who believes in the power of education as a means of self and social development and who tries to convince students to engage in self and social development, I prefer the “nurture over nature” side of this argument and try wherever possible to convey that message in the stories I tell and the discussions I lead.

(b) Melodrama/Comedy/Tragedy.  The dramatic form in which we couch a story and/or an explanation will also have a major effect on how we react to a given situation.  For purposes of this article, I have roughly categorized stories as melodramas, comedies or tragedies, or some combination of the three, because each of these dramatic forms conveys a different social message.

In defining melodrama, comedy and tragedy, I have relied on literary definitions of these terms that are largely derived from Aristotle.  Following the lead of Paul Goodman, I have, however, extended the terms to focus on the moral implications of narrative forms and their effect on the moral of a story (Goodman 1954).  Goodman was a poet, playwright and novelist as well as a social and educational reformer and he often framed his social and educational analyses within the narrative categories of melodrama, comedy and tragedy.

I define melodrama as a story of Good versus Evil, Good Guys versus Bad Guys.  It is a narrative form that, like the traditional epic, deals in extremes of emotion and action, and is based on an absolutist view of morality (Goodman 1954, 127-149).  Soap operas and crime shows are classic examples of melodrama.  In a melodrama, the problem in the story is created by the evil actions of evil people.  These are people who cannot be trusted and have to be eliminated.  Since there can be no compromise with Evil or evil people, melodrama portrays a world in which problems almost always must be settled by war or conflict of some sort (Burke 1961, 34).  A melodrama may have a happy or unhappy ending depending on whether the good or the evil prevails.  In a typical episode of the melodramatic television show “Law and Order,” for example, the murderer is usually convicted but sometimes goes free.

Melodrama is the predominant story form in our society and the form in which most people seem instinctively to react to adversity.  “Who is doing this to me and how can I defeat them” is the first reaction of most people to a problem.  Arguably, this melodramatic reaction has been programmed into us by evolutionary processes, “an aggression drive inherited [by man] from his anthropoid ancestors” (Lorenz 1966, 49), leaving us “hardwired to distinguish between ‘us’ and ‘them’ and to behave inhumanely toward ‘them’ at the slightest provocation” (Wilson 2007, 285).  It is essentially the story form of the “fright, then fight or flight” reaction of our piglet-like precursors who had to make their way in a world of giant carnivores.

The melodramatic reaction also seems to be a function of the brain stem, the earliest and least sophisticated portion of the human brain, which we inherited from those puny ancestors.  Comedy and tragedy are more complex reactions that apparently derive from the more developed areas of the cerebral cortex which evolved later in hominids.  Melodrama was seemingly a successful survival strategy for helpless mini-mammals, but it may not be as useful, and may often be counterproductive, in the world of modern humans in which shooting first and asking questions later can lead to unnecessary wars and suffering (Diamond 1993, 220-221, 276-310; also Wilson 2007, 51-57).

I define comedy as a story of wisdom versus folly, wise people versus foolish people (Aristotle 1961, 59).  In comedy, the problem is created by someone acting out of stupidity or ignorance, “the intervention of fools” (Burke 1961, 41).  It is a narrative form that promotes education and experimentation as the solution to problems, as the wise try to teach the fools or at least restrain them from further foolishness (Goodman 1954, 82-100).  When we think someone is acting foolishly, our reactions typically are either to correct the person, compete with the person to see who is correct, constrain and control the person so that he/she can do no further harm, or some combination of these three.

Comedy usually promotes a hierarchical world in which the knowledgeable people are empowered to control the stupid and ignorant people, educating them in proper behavior and belief when that is possible, and tricking, controlling or excluding them when that is not.  Comedy involves conflicts and struggles but the action generally stays peaceful or, at least, not fatal.  If, however, a fool refuses instruction, disdains competition, and rejects containment, comedy can descend into violent struggle and metamorphose into melodrama.  A comedy may have a happy or unhappy ending depending on whether the fools learn their lesson.  In a typical episode of the comedic television show “Seinfeld,” for example, the main characters are usually still enmeshed at the end of the show in some mess of their own foolish making.

I define tragedy as a story of too much of a good thing becoming bad.  Tragedy in this definition describes a character who pursues a too narrowly prescribed good too far until it turns on itself, becomes bad and precipitates a potential disaster.  The character’s “tragic flaw” is a lack of perspective, the failure to see things in a broader context – for example, failing to recognize that one person’s good may be someone else’s bad, that the world may contain competing goods, and that an individual’s good ultimately depends on the good of all (Goodman 1954, 35, 172).

Tragedy is a story of hubris versus humility, the failure of the tragic character to recognize his/her “personal limits” and reconcile contradictions within him/herself, within his/her society and/or between him/herself and society (Burke 1961, 37).  While the tragic character’s actions demonstrate his/her “moral purpose,” they also demonstrate “the necessary or probable outcome of his character,” which is a downfall as a result of his/her pride (Aristotle 1961, 81-83).  Tragedy “deals sympathetically with crime,” with the good intentions that can pave the way to hell (Burke 1961, 39), and, thereby, arouses pity and fear in the audience (Aristotle 1961, 61) – pity that a good person has tried to do a good deed and gone wrong, fear that but for the grace of the gods this could be any of us.

Although there is more to a great tragedy than a simple story-line, in the medieval society portrayed in Shakespeare’s tragedy Macbeth, for example, ambition is considered a good thing but Macbeth takes it too far and this excess of ambition brings his downfall (Van Doren 2005, 216).  Macbeth, a would-be self-made man living in a highly structured, hierarchical society, is fatally caught in “a struggle between [his] desire to make his own destiny … and the rule-bound order in which he lives,” and tries to bully his way through these contradictions (O’Toole 2002, 138).  In the Renaissance society of Hamlet, deliberation is a good thing but excessive deliberation produces a paralysis of the will and Hamlet’s downfall (Van Doren 2005, 161).  Hamlet is a “humanist” intellectual caught between medieval Gothic and modern rational social mores and modes of thought, and he fatally vacillates between the one and the other (O’Toole 2002, 46, 48).  Neither character is able to transcend his narrow focus and reconcile his contradictions until it is too late.

Tragedy, as I am using the term, is based on a relativistic view of morality and promotes negotiation and inclusion as the way to avoid the conflict and calamity that befall tragic figures such as Macbeth and Hamlet.  The goal of tragedy is for the tragic hero and the audience to recognize the narrowness of the hero’s perspective – “recognition” of the character’s flaw at the end of the story by the character and the audience is a key to this narrative form (Aristotle 1961, 84-86) – and reconcile his/her views with the views of others, thereby promoting compromise so that all can cooperate or, at least, peacefully co-exist.

The moral of a tragedy is to avoid the narrow-mindedness of the fallen characters and thereby avoid their fates.  When, however, people fail to recognize the tragic nature of a situation, they may act as though it is melodrama, pursue their own narrow ends to the bitter end, and fight, flee or fall to a fatal conclusion.  Although fictional tragedies, such as Macbeth and Hamlet, generally have unhappy endings, a tragedy, as I define the term, may have either a happy or unhappy ending depending on whether the main characters have recognized and then successfully reformed their narrow and short-sighted views of things.  While in the original version of Shakespeare’s King Lear, for example, Lear and his daughter Cordelia die at the end, in some later productions of the play, they live happily thereafter (Harbage 1970, 17).

The differences in the moral messages conveyed by melodrama, comedy and tragedy are significant.  If a person sees the world primarily in melodramatic terms, he/she will tend to see social problems as the result of the evil actions of evil people, to see enemies all around, and to see war or coercion of some sort as the solution to most social problems.  If a person sees the world in comic terms, he/she will tend to see social problems as the result of foolish people and to see education and/or containment as the solution to social problems.  If a person sees the world in tragic terms, he/she will tend to see social problems as the result of competing goods and competing good intentions, and to see negotiation as the solution.  In sum, the dramatic form in which a person tells the story of any particular social problem will largely determine his/her moral reaction and the nature of his/her ethical engagement.

In deciding which dramatic form to use for telling a story, my preference is to choose the tragic form whenever and to the greatest extent possible because it is the most peaceful approach to solving social problems and the one in which ordinary people can most actively engage.  The tragic mode asks you to put yourself in the shoes of the other person, broaden your perspective to include his/hers, and negotiate a compromise solution to your differences.  The tragic mode also encourages ordinary people such as our students to engage in the discussion and solution of social problems.

To the extent the facts of my story do not fit into the tragic mold, my preference is to choose the comic form as a potentially peaceful way of resolving a problem.  In comedy, you see your side as wise and the other as foolish, and you set your side up to help instruct or contain the fools.  This tactic has the potential for generating antagonism if the other side does not see itself as foolish, and resents and resists your efforts.  But it is the educational mode and it encourages students to think critically about social problems and try to develop rational solutions to them.  Properly done, the comic mode has the potential for a peaceful and mutually satisfactory resolution of differences.

Finally, to the extent the facts do not fit into either the tragic or comic modes, I describe the situation in melodramatic terms.  Melodrama is for me the form of last resort because it portrays a world in which differences can be settled only by fighting and/or war.  The more you use the melodramatic mode, the more you are telling your students that conflicts must be resolved through fighting and war.  The more you use the tragic and comic modes, the more likely your students may come to see the world in terms of peaceful resolutions and to act on that basis.

Disney’s Three Little Pigs is a melodrama, with the good pigs pitted against the evil wolf in a life-and-death struggle, and this is the moral world his story conveys to children.  In The True Story of the 3 Little Pigs, Scieska tries to turn the story of the pigs and the wolf into a comedy.  The story is comic, not only because it is funny, but also because Scieska is trying to wise the readers up to the possibility that things may not be as they seem at first glance and that the wolf is not really a villain.  But his efforts are ultimately not successful because in accepting Disney’s choice of animals as characters, Scieska is trapped into the logical consequences of that choice: wolves kill pigs and, therefore, his story has an underlying melodramatic message of Social Darwinian struggle.

In The Three Little Wolves and the Big Bad Pig, Trivias tries to turn the story into a tragedy.  In his story, the wolves are merely trying to protect themselves from a perceived danger and the pig is merely trying to be treated with respect.  Their mutual pig-headedness leads them all to misunderstand each other, which, in turn, leads to conflict.  Eventually they broaden their perspectives to include each other and the tragic consequences are abated, at least for the short run.  But because a fatal conflict between the wolves and the pigs is inherent and inevitable in the choice of wolves and pigs as main characters in the story, Trivias cannot keep these melodramatic consequences out of the moral of his story.

By contrast, in Click, Clack, Moo: Cows that Type, Cronin successfully combines tragedy and comedy, and avoids the underlying conflicts that fatally undermine the stories by Scieska and Trivias.  In her story, the animals initially misunderstand each other and the farmer initially misunderstands the animals, each promoting his/her perspective as the only one, which leads to conflict among the animals and a strike by the animals against the farmer.  But all parties have some right on their side and eventually they are able to negotiate their differences and resolve the problem in a plausible way.

(c) Chance/Causation/Choice. The agency of a story, and whether events happen primarily as a result of chance, causation or choice, also has a major effect on the story’s moral message.  Chance is pure luck, unpredictable and uncontrollable.  Causation is a chain of causes and effects or a series of forces that are inevitable and unavoidable.  Choice is people operating within a set of circumstances, evaluating the range of options permitted by the circumstances, and then making decisions and acting on those decisions, with consequences that become the circumstances within which they must make their next decision.  The explanation of events – chance, causation or choice – that a storyteller uses will largely determine the moral of his/her story.

If a story moves primarily either by chance or by causation, then the moral of the story is that the world is beyond our influence and we might as well sit back and do nothing.  If the story moves as a result of the characters’ choices, then the moral is that we can affect the world through our thoughts and actions.  The moral of portraying events as the result of chance and/or causation is that trying to change things and make the world better is useless because what will be, will be, regardless of our actions.  And the curricular message of portraying the world as chance and/or causation is that education is useless because what will be, will be, regardless of whether or not we know about it (Berlin 1954, 3, 20-21, 68).

If, instead, a story is told as a complex of circumstances, choices and consequences, students are empowered and education becomes worthwhile.  Education, thereby, becomes largely a process of putting oneself into the shoes of other people, understanding the problems that they faced and the circumstances that circumscribed their actions, evaluating the options they had, the choices they made and the consequences of their decisions, and relating them to our choices here and now.  In this way, things can be discussed in a way that helps students learn to use a story’s lessons to make their own decisions and encourage their social engagement.

Although most stories necessarily include elements of chance and causation, to the extent that a story allows a choice of explanations – the factual situation will determine “how wide the realm of possibility and alternatives freely choosable (sic)” is available to the characters (Berlin 1954, 29) – my preference is to focus on choice rather than chance or causation because it is the narrative form that best empowers people and encourages students to think in terms of social engagement.  By rephrasing and reframing what is often portrayed as causation (Carr 1967, 113-115) into the language of circumstances, choices and consequences, we can retain the explanatory power of our story while adding a clearer moral dimension.  Both storyteller and audience are thereby rewarded with “a broader awareness of the alternatives open to us and armed with a sharper perceptiveness with which to make our choices” in the world in which we live (Williams 1974, 8, 10).

In Disney’s The Three Little Pigs, causation in the form of biological determinism is the primary agency of the story.  Wolves are predators.  They eat pigs and there is nothing we can do to change that.  Scieska in The True Story of the Three Little Pigs tries to absolve the wolf of the deaths of the pig brothers by introducing an element of chance into the story, with the wolf’s claim that the pigs were killed by accident.  But his attempt does not work because we really don’t believe the wolf’s story – the succession of coincidences he relates is very funny but not plausible – and because the biological determinism that underlies the relationship of wolves and pigs overrides any explanation that the wolf could give.

Trivias tries to make choice the primary agency of The Three Little Wolves and the Big Bad Pig by having the wolves and the pig choose to be friends in the end.  But this ultimately does not work because of the zoological imperative that wolves eat pigs.  Both Scieska and Trivias get caught in the chain of causation wrought by Disney’s choice of wolves and pigs as the story’s main characters.  By contrast, in Click, Clack, Moo: Cows that Type, Cronin is able to tell a plausible story of characters making choices because she has not chosen characters whose biological imperatives get in the way of the choices she wants them to make.

(d) Top-Down/Bottom-Up.  Finally, it makes a big moral difference whether stories portray the world as being controlled by the few at the top or the many at the bottom.  The top-down perspective focuses on great people, extraordinary individuals, heroes and charismatic leaders.  Events are explained primarily in terms of the actions of these few top people.  The top-down perspective portrays social progress as the result of great leaders reaching down and pulling the masses of people up to a higher level.  Since most students do not see themselves as great or heroic or charismatic, top-down stories tend to demean and demoralize the majority of students and convey a message that they need do nothing but wait for their leaders to act.  The top-down approach tends to portray leaders as miraculous saviors who appear by chance and/or as heroic individualists whose choices are portrayed out of the context of the circumstances that made them possible (Lemish 1969, 5-6).

The bottom-up approach portrays events as the result of actions and movements of ordinary people (Levine et al. 1989, xi; Freeman et al. 1992, x).  Bottom-up stories explain leadership as a consequence of the masses of people pushing representative leaders to the fore, great individuals standing on the shoulders of their predecessors and colleagues.  The moral of a bottom-up story is for ordinary people to join together to effectuate necessary social changes so that “The people, then, can make their own history” (Lemish 1967, 5).  Top-down stories can demoralize children who do not see themselves as great or may inspire students toward self-centered social climbing toward personal greatness.  Bottom-up stories can help empower children from ordinary backgrounds and inspire them to work with their peers rather than away from them.  Although some stories may require some top-down orientation (Lemish 1967, 4), my preference is to emphasize a bottom-up approach whenever possible.

Disney’s The Three Little Pigs is a top-down story in which the superior pig gets the better of both his brothers and the wolf.  While it adjures children to be smart like the wise pig, it also tells them not to get bogged down in acting as their inferior brothers’ keeper or trying to deal peacefully with those who threaten them.  Scieska tries to reverse the moral direction of the story by having it told from the wolf’s point of view, the bottom-up view of a disadvantaged member of a minority group.  But we don’t believe the wolf, which only makes the situation worse because now we have additional reasons to distrust wolves and the minority groups he represents in the story.

In his story, Trivias tries to reverse the natural hierarchy by making the wolves little and the pig big – putting the wolves at the bottom and the pig at the top of the hierarchy – but this ultimately does not work because we know that the wolves will soon grow up to be predators of pigs.  Again, having accepted Disney’s main structural choices in setting up the story, Scieska and Trivias are condemned to Disney’s main conclusions.  By contrast, Cronin is able to tell a genuinely bottom-up story of ordinary characters rising together to great deeds because she has wisely chosen animals that are on essentially the same rank of the food chain hierarchy and are at least theoretically compatible with each other.

3. Diversity as Dangerous, Dispensable or Desirable: Out of the Fire and into the Pot

One of the main themes running through the various versions of “The Three Little Pigs,” and through children’s literature as a whole, is the question of how to think about and deal with diversity.  Do, for example, differences make a difference?  If so, is it for better or worse?  The narrative choices a storyteller makes – how he/she deals with characterization, dramatic form, agency, and perspective – can largely determine the message his/her story conveys about diversity.

The United States has from its inception been primarily a nation of immigrants, and what to do about diversity in our population has been an ongoing theme in American history.  One of the main concerns has been how to avoid the racial and ethnic conflicts and conflagrations that have periodically erupted.  Ever since Hector St. John de Crevecoeur referred to America as a “melting pot” in 1782, Americans have tended to frame the issue of diversity in terms of chemistry, as though cultural differences are chemical additives that people compound onto their otherwise common human nature.  Americans have, in turn, tended to respond to cultural differences in three main ways, portraying America as either what could be called a “smelting pot,” a “melting pot,” or a “stew pot,” depending on whether they see cultural diversity as dangerous, dispensable or desirable.

For most of the period from the founding of the first European colonies in the 1600’s until the early 1900’s, the predominant approach to cultural diversity in this country was the “smelting pot” view.  In this view, differences make a difference and they are deleterious.  Harkening back to the English origins of the colonies, White Anglo-Saxon Protestants, or WASPs, are generally seen in this view as the ideal Americans, and those who are different are seen to need to have those differences smelted away so that they can become like WASPs.  This view is still widely held by people who consider themselves politically and socially conservative.  The smelting pot view is based on a melodramatic and top-down history of America in which the ethnically and ethically pure are pitted against the degraded and degenerate who would pull America down if they weren’t defeated.

The “melting pot” view of America was popularized during the 1910’s in a play of that name by Israel Zangwill, a Russian Jewish immigrant.  In this view, differences don’t make a real difference and they should be either ignored in favor of our commonality or blended into the existing common mix to make a slightly new and better commonality.  This view is based on a comic narrative of the world in which people need to be taught either to ignore or relinquish unimportant differences.  The melting pot gradually became the predominant view of self-styled liberals during the course of the twentieth century.

What could be called the “stew pot” view was promoted in the early twentieth century by Horace Kallen, another Russian Jewish immigrant, and Randolph Bourne.  In this view, differences make a difference and they are generally desirable.  Cultural diversity provides a plethora of resources and perspectives with which to help solve the social problems we face.  In this view, diversity among people should be preserved even as they interact in a common democratic broth in which they solve common problems.  This view derives from a tragic and bottom-up perspective on the world in which people are able to recognize and negotiate their differences.  This has become the view of the multicultural movement among liberals in recent decades.

Depending on how a story deals with differences, the moral can be that differences are dangerous, dispensable or desirable, a lesson that can make a big difference in the way children approach the world.  These responses are illustrated in the three versions of “The Three Little Pigs” discussed herein.  Disney’s The Three Little Pigs portrays a world in which difference is dangerous.  His is a smelting pot view.  Scieska tries to counter that view by portraying the wolf as good-natured and as essentially a pig in wolf’s clothing – a melting pot view that differences don’t make a difference – but it does not work.  The zoological differences between wolves and pigs make a big difference and it is potentially a fatal one for the pigs.

Trivias tries to portray the differences between wolves and pigs as desirable, so long as they are able to recognize and negotiate their differences – a stew pot view – because then they are able to use their differences to have more fun together.  But, again, in the long run this view cannot hold given the biological imperatives that control wolves and pigs.  In choosing to promote and popularize a story about pigs and wolves, Disney has effectively controlled the message about diversity in the subsequent versions of “The Three Little Pigs” despite the liberal and multicultural intentions of Scieska and Trivias.  By contrast, Cronin is able to tell a story that demonstrates the stew pot view of diversity because she has chosen characters who are compatible and who make valuable contributions to the whole based on their differences.

4. Narrative Choices in Early Childhood Storytelling: Walt Disney versus Dr. Seuss.

Probably the two most popular and influential storytellers of the last half of the twentieth century were Walt Disney and Theodor Geisel, alias Dr. Seuss.  Both used their stories to educate children in the morals and manners they believed in.  Disney was politically and culturally conservative, and his stories are filled with conservative lessons that he hoped would influence children for the rest of their lives.  Dr. Seuss was politically and culturally liberal, and his stories are filled with liberal lessons that he hoped children would absorb.

Both authors conveyed their views through their subject matter and their narrative structures.  The coalescence of subject matter and narrative structure is one of the things that made their books so powerful, with their messages reflecting the narrative choices they made and, in turn, their narrative choices reflecting their political and cultural inclinations.  Disney’s stories were mostly melodramas with top-down perspectives, genetically generated characters, and crucial events occurring primarily through chance or causation.  Most of his best-known stories were adaptations of traditional folktales, including several from the Grimm brothers (Snow White, Cinderella, Sleeping Beauty).  Dr. Seuss’ stories were of his own invention, and were mainly comedies and tragedies with bottom-up perspectives, characters that learn and change themselves and their societies, and events that result from characters’ choices.  Comparing some of their stories can help illustrate my thesis.

Disney portrays life as primarily a competition among individuals and a melodramatic battle of good individuals against evil ones.  Disney, in turn, portrays cultural, economic, social and biological differences among characters as crucial causes of the characters’ good and evil behavior.  Such differences, and especially genetic differences, almost invariably determine the outcome of the story.  The wolf in The Three Little Pigs, for example, is by nature – by genetics – a big, bad, black character.  By contrast, Mickey Mouse, the star of many Disney cartoons, is by nature a happy-go-lucky, harmless, little black character.  In his original guise during the late 1920’s, Mickey looks and acts much like one of the minstrel performers – blackened-faced white men who mocked and caricatured black men – who were very popular at the time.  Both the wolf and Mickey reflect and perpetuate the racist stereotypes of black men as either ghouls or fools that were widespread in the period in which Disney was working.

Disney stories are almost always top-down in their perspective, generally with a genetic twist.  In Disney, class difference is generally biological difference, and biology will triumph irrespective of the environment.  Born a princess, end up with a prince.  Born a worker, end up a worker.  A typical Disney story is about a princess or prince who yearns for recognition as what she/he really is by birth and for her/his rightful place in the world.  In Cinderella (1999), for example, the heroine’s noble birth is evidenced by her petite feet, and her natural superiority is duly recognized in the end.  In Bambi (1942), a prince finds his rightful place.

Disney stories tend to disparage ordinary people as mere facilitators for the nobility.  In Snow White and the Seven Dwarfs (1999), Princess Snow White comes upon a bunch of diminutive miners who, despite producing prodigious quantities of precious jewels in their work, live poorly and like pigs.  She promptly cleans them up and civilizes them.  Then, although the dwarves generously take her in and protect her, she leaves them in the end in their hovel to continue slaving in the mines while she goes off to live with a prince in a castle.  The dwarves seemingly get what they deserve as mere workers and Snow White gets what she deserves as a princess.  Disney’s is essentially a “creationist’s” universe – as you were created so should you live.

The plot-lines of Disney stories typically move as a result of chance or causation.  Disney’s heroines, such as Cinderella, Snow White and Sleeping Beauty, are passive, waiting for a prince to discover them by chance or circumstance.  Independent and intelligent women in Disney’s stories are almost invariably evil witches and/or evil stepmothers.  In the face of twentieth-century feminism, Disney seems to be trying to put the genie back in the bottle.  His treatment of Cinderella exemplifies this.  There have been scores of Cinderella-type stories throughout history in cultures all over the world.  While these stories are inherently sexist and classist – a poor young woman seeking to marry a wealthy man – the Disney version is distinguished by the helplessness and passivity of his heroine.  In stories from other times and places, the heroine is active and intelligent in making her way in the world (Climo, 1989; Louie, 1982; Huck, 1989).  Not so in the Disney version.

Even the wise pig in The Three Little Pigs does nothing pro-active about the evils of the world but merely waits for the wolf to come to him, at which point he reacts.  In Disney stories, ordinary people are expected to do what they are told by their superiors, or else.  In Pinocchio (1948), the would-be boy is given a cricket to act as his conscience.  The cricket regularly counsels Pinocchio to follow the conventional straight and narrow path, from which Pinocchio deviates to his detriment in search of illicit fun.  The overall moral of Disney stories for children is to defer uncritically to established authority and accept uncritically the social status quo.

Dr. Seuss is the anti-Disney, and his liberalism is reflected in the narrative choices he makes.  In The Cat in the Hat (1957), for example, the children have a fish who, like the cricket in Pinocchio, acts as their conscience and counsels them to follow the conventional path.  But, unlike Pinocchio, when they deviate from that path to have illicit fun, they suffer no consequences.  And at the end of the story, when the issue is raised as to whether the kids should tell their mother about what they have done, the book merely closes with a question to the reader: “Well…What would you do if your mother asked you?”  This is a patently subversive question that raises the possibility of children rejecting parental authority.  In raising open-ended questions about right and wrong, and trying to portray things from a child’s point of view, Dr. Seuss has rejected the moralistic, melodramatic mode of Disney and adopted a comic-tragic narrative mode.

Dr. Seuss’s stories are invariably bottom-up in their perspective, emphasizing the ability of ordinary folks, the little people – children included – to change the world and make it a better place.  In Dr. Seuss’s world, class difference is usually environmental difference, and if the environment changes, people change, sometimes for the better and sometimes for the worse, depending on whether the new environment calls forth people’s better or worse selves.  Dr. Seuss is essentially an evolutionist.

In The Lorax (1971), a child is given the last Truffula seed and the job of saving the environment.  In Yertle the Turtle (1950), a “plain little turtle whose name was just Mack,” and who finds himself at the bottom of the social pile, is able to bring down the king and help establish freedom and democracy in his society.  In Green Eggs and Ham (1960), the childlike Sam-I-Am turns the tables on the adult character and harasses him into trying something that he does not want to try, a reversal of traditional roles in which adults try to teach children new things and force children to do things they do not want to do.

In Horton Hears a Who (1954), the main character is an elephant who, with his giant-sized ears, can hear the pleas for help of tiny people that live on a speck of dust.  Horton tries to help them save their tiny world from destruction but is mocked by other creatures in the forest that have smaller ears and cannot hear the “Who’s.”  Although Horton is by far the largest creature in the forest, he is eventually overpowered by his neighbors who think he is deranged and who want to get rid of the speck of dust.  Horton pleads with the “Who’s” to make enough noise so that the other animals will be able to hear them and, in the end, it is the added voice of the smallest “Who” child that makes the difference so that Horton is vindicated and the “Who’s” world is saved.

The lessons of the story include: that those like Horton with special strengths and abilities must help those without; that not even the mightiest individual, such as Horton, can prevail against the collective efforts of ordinary people; that only through the collective efforts of ordinary people, such as the “Who’s,” can good things get done; and, that even the smallest person, such as that last Who child, can make the difference.  These are empowering lessons for children that follow from Dr. Seuss’ decision to tell his story as a comedy-tragedy, from the bottom-up and as a function of characters’ choices.

Dr. Seuss insists in his stories that ordinary people can change the world for the better and that a changed world can make people happier.  Toward this end, he emphasizes the importance of nurture over nature and man-made environment over biology, to the point of sometimes even denying the scientific facts of genetics.  In Horton Hatches the Egg (1940), a lazy bird tricks the elephant Horton into sitting on her egg while she goes partying.  When, after many trials and tribulations which test Horton’s devotion to his task, the egg is finally hatched, the newborn creature is half bird and half elephant.  In Dr. Seuss’s moral world, Horton deserves some tangible credit for parenting the egg even though he is not a biological parent, and the newborn creature deserves some of Horton’s benevolent, beneficent characteristics rather than merely those of his/her absent father and selfish mother.  This is a very different moral universe from Disney’s genetically determined world.

Dr. Seuss generally focuses in his stories on differences among people that don’t make a difference and takes a “melting pot” view of cultural differences.  In The Sneetches (1961) and in The Butter Battle Book (1984), he comically satirizes the foolishness and potentially deadly consequences of fighting over superficial differences, such as having stars on your belly or eating your bread butter-side up or butter-side down.  This assimilationist approach to cultural differences was characteristic of liberals in the period of the 1950’s to 1970’s in which Dr. Seuss did most of his work, a time when liberals were promoting integration through the civil rights movement.

While Disney’s “smelting pot” and Dr. Seuss’ “melting pot” views of diversity largely dominated the discussion of diversity during most of the twentieth century, some authors of children’s literature have presented a “stew pot” view.  The Araboolies of Liberty Street (1989), by Sam Swope, is an attempt to portray differences in a multicultural “stew pot” way.  In this story, a large extended family of colorful but quirky and noisy people move onto a bourgeois, suburban street, much to the glee of the children and the consternation of an uptight couple who are the conservative culture-cops of the neighborhood.

The story is sympathetic to the gentle and kind countercultural Araboolies and hostile to the nasty culture-cops, but it also raises the question of whether the reader would like to live on the same block with people as noisy, sloppy and erratic as the Araboolies.  In adopting what could be considered a tragic or relativistic view of cultural differences, the book forces the reader to consider which differences among people actually make a difference and how one can deal amicably with those differences.  In this regard, Swope’s view is essentially an extension of Dr. Seuss’ and a rejection of Disney’s.

In sum, Disney typically made narrative choices in favor of melodramatic form, top-down perspective, genetic determinism, and plot-lines based on chance and causation.  Because of these narrative choices, his stories are generally disabling and disempowering to children.  Dr. Seuss made narrative choices in favor of comedy or tragedy, a bottom-up perspective, nurture over nature, and plot-lines based on choice.  Because of these narrative choices, his stories are generally enabling and empowering for children.

The nature and effect of an author’s narrative choices are easy to see in early childhood stories such as those by Disney and Dr. Seuss, but they are no less evident and important in literature for adolescents and adults.  Compare, for example, Madeleine L’Engle’s A Wrinkle in Time (1962) with J.K. Rowling’s Harry Potter and the Sorcerer’s Stone (1997), both stories about magic and magical children.  The conservative tone and lessons of L’Engle’s book are based on the melodramatic, top-down, genetic determinism of her narrative.

Rowling’s more liberal tone and lessons reflect the comedic mixture of top-down-bottom-up perspectives and nature-nurture influences in her narrative.  The lessons with which readers of any form of literature are left depend in large part on the narrative choices I have described.  In deciding which stories to use and how to use them, educators routinely look at the messages that the stories convey.  In looking for the message of a story, teachers should look at the narrative medium of the story and the narrative choices the author has made in setting up the story.  The message is often in the medium.

5. Making Narrative Choices in Historical Fiction: What to do about Hitler?

The same principles of storytelling that apply to fiction also apply to real world stories about history, current events and personal experiences – the way an author or a teacher tells the story will largely determine the message he/she conveys.  Real world stories, including historical fiction, must of course be firmly based on the best available evidence and conform to all the available facts.  While the author of historical fiction may invent characters and events that are characteristic of the times, he/she cannot change the known facts of the times.

Unlike the author of a purely fictional story, the author of historical fiction may not choose to talk about cows when the facts point to wolves.  Nonetheless, an author of historical fiction or non-fiction – or of any other factual story, for that matter – and a teacher discussing a historical or any other factual situation, has considerable leeway in presenting the facts, and the meaning and effect of those facts can vary considerably depending on the narrative choices he/she makes.

While some factual stories fit naturally into one narrative form or another, other stories can be told as melodramas, comedies or tragedies, and you have your choice of story forms.  The history of Adolf Hitler and the Holocaust, for example, naturally fits into melodrama.  Hitler was an evil man and the Holocaust was an evil event.  The history of the American Revolution, on the other hand, can legitimately be told in various forms.  It can be told as a melodrama in which the Good revolutionaries fought against the Bad British or, from the British point of view, the Good British against the Bad Americans.  It can also be told as a comedy in which the British foolishly thought they could keep the American colonies forever as dependencies, or as a comedy in which the Americans foolishly rebelled because they mistakenly thought the King intended to repress the colonies.  And it can be told as a tragedy in which the British government and the American revolutionaries each sought narrow goals that were good in and of themselves, and that could have and should have been peacefully reconciled, but were not to the detriment of both sides.

The history of the American Revolution can, in turn, be told from a top-down perspective as the result of actions by an elite group of American revolutionaries and/or British officials, or as a bottom-up movement of ordinary people, or some combination of the two.  It can be told as the chance result of a series of fortunate or unfortunate accidents, the inevitable result of a chain of causation, or the result of a series of choices that could have been otherwise.  The best available evidence on the American Revolution will support any of these versions, and the best historians differ in their approaches, so teachers are left with important choices as to the narrative forms for this story and, in turn, as to the moral of the story.

Choosing the narrative approach appropriate to a real world issue is sometimes simple and other times quite complex.  When the facts clearly dictate a particular narrative approach, that approach is the one you simply must take.  When the facts leave you with a choice, I think the choice should, and almost invariably will, be based on your educational goals.  Most authors of books for young people and most teachers rely primarily on melodrama and top-down perspectives.  This is in part because they think that presenting battles between good and evil, and focusing on larger-than-life heroes and villains, are the liveliest and most interesting ways to tell a story.  This is, I think, a mistake in at least two respects.

Authors and teachers who rely on top-down melodrama underestimate their audience, their materials, their message and themselves.  Their books and lessons are composed as though young people cannot understand and accept that heroes can have flaws and villains can have virtues.  As a result, the books and lessons are demoralizing to students because if heroes are perfectly good and villains are purely evil, then there is no useful explanation of why the good often fail and the bad often succeed.  How can students understand the rise of Hitler, for example, without reference to the qualities that made him appealing to ordinary people?  How can students understand the greatness of Lincoln without reference to his struggles with his own racism?

These books and lessons are also debilitating because if students do not understand that the good may not be perfectly good and the bad may not be entirely bad, they are not equipped to recognize good and bad people or good and bad ideas.  Without such understanding, how can they learn to recognize and respect the most important qualities in a person or idea, and avoid being unduly swayed by superficial flaws and superficial appeals?  And, perhaps most important, how can they learn to deal with their own internal contradictions and struggles between their better selves and worse selves?  Rather than portraying heroes in purely melodramatic terms, it would be better to present them in more comic and tragic terms.   Heroes could be seen as worth admiring for the ideals they represent and tried to fulfill, and worth studying for the ways they did not measure up to those ideals and, thereby, started a job that we should try to finish.

Real world stories, including historical fiction, can be successfully written in comic and tragic terms, from the bottom-up as well as the top-down, and with an emphasis on choice rather than chance or causation.  This is demonstrated by the popularity of historical fiction for young people by such award-winning authors as Kathryn Lasky (True North, 1996; A Journey to the New World, 1996; Dreams in the Golden Country, 1998), Katherine Paterson (Lyddie, 1991; Bread and Roses, Too, 2006), James and Christopher Collier (My Brother Sam is Dead, 1974), and Gary Paulsen (NightJohn, 1993; The Rifle, 1995), whose books are written for the most part in what I have defined as the tragic mode, with a bottom-up perspective and an emphasis on characters’ choices as the moving agency of their stories.  Teachers can do likewise in their lessons.

Authors and teachers who depend on melodrama, top-down narratives and causal explanations also frequently undercut their own intended message.  This point is exemplified in the similarities and differences between two historical novels about adolescent girls working in the Lowell textile mills during the 1840’s, So Far From Home: The Diary of Mary Driscoll, an Irish Mill Girl (1997) by Barry Denenberg and Lyddie (1991) by Katherine Paterson.  Both stories are written from a liberal political and social perspective.  Both are harshly critical of child labor and working conditions in the factories and express sympathy for immigrants and for the labor movement.  Both stories follow essentially the same factual pattern, as follows:

Family breakdown forces a young girl to leave home (Ireland for Mary; western Massachusetts for Lyddie) to seek work.  The girl endures a hard passage from home to Lowell but makes friends along the way.  The girl is at first excited about factory work, enjoying the independence and money, but the work soon becomes grueling and the life tedious.  The girl makes friends with some co-workers and struggles with others.  One friend is a union leader who involves the girl with a nascent union.  The girl’s original goal is to make enough money to reunite her family but this goal is foiled by deaths and dispersion of the family, which leaves the girl with new choices to make at the end.

Despite similarities in the stories’ subject matter and their authors’ political orientation, the stories leave very different impressions on the reader.  So Far From Home is a melodrama of good against evil.  The bad guys are the English landowners in Ireland, the English sailors on the boat Mary takes to America, and the mill owners and supervisors in Lowell.  These people are prejudiced against the Irish and have no qualms about exploiting poor people.  The good guys are Mary, her friends and several good-hearted adults.  The book is full of sensational events – heroic rescues and heartbreaking deaths – which it presents as normal everyday life, much like a soap opera.

The book is also highly sentimental, starting with an idealized version of rural family life in the good old days, and contrasting industrialization and urbanization in the nineteenth century with a romanticized past.  Mary has almost no choices to make in the book, and she and the story are driven by economic and social forces over which neither she nor the other characters have any control.  The story ends with no hope of collective social action against the bad guys, and the moral of the story is that a combination of self-help, mutual aid and family solidarity is the only way for an individual to survive.  This is a moral very similar to Disney’s in his Three Little Pigs.

Lyddie is a tragedy with characters straining against their personal limitations and situations that are full of internal contradictions.  Lyddie and other characters repeatedly act with narrow-minded good intentions that lead to bad ends, followed by negotiations among the characters that lead to new solutions.  The plot proceeds dialectically as Lyddie tries something, goes too far, then recovers and reconfigures her position to try something else.  The book highlights the importance of the choices that Lyddie and her colleagues make, for their own lives and for society.  The book projects a stoic view of life – hope for the best while expecting the worst – but also offers its characters and readers a utopian ideal of a cooperative society.

Lyddie has three main historical themes, each of which is connected to a social issue of today.  The first theme is family and the book essentially asks “What is a family?”  Lyddie begins with the goal of sustaining her biological nuclear family, a laudable goal.  But, unlike Mary’s family in So Far From Home, Lyddie’s biological family is almost totally dysfunctional – her father abandons the family, her mother becomes psychotic, her other relatives are uncaring, and her siblings scatter to foster homes.  Lyddie’s parents are wrong about almost everything.

Lyddie’s initial focus on reuniting her nuclear family leads her to frustration and isolation.  So, Lyddie has to create alternative families out of co-workers and friends, as do most of the main characters in the book.  Ultimately, the book’s answer to this question seems to be that any group of people that works together and supports its members is a family – a definition of family that eschews any sentimentalism about the supposedly ideal biologically-based nuclear family of olden days and is very relevant to current discussions about marriage and family life.

The book’s second theme is work and the purpose of work.  Lyddie initially wants to make money to help her nuclear family, a laudable goal, but then falls for the myth of the self-made person and the lure of money.  She becomes greedy and selfish, and harsh toward Irish immigrants like Mary in So Far From Home, who threaten the jobs and wage-levels of native-born workers.  But, in the end, Lyddie learns that work should be a satisfying way of life, not merely a means to make money for oneself, and that cooperation is the key to this.

The book’s third theme is the role of women in society.  Lyddie begins with traditional aspirations of getting married and becoming a housewife.  But her observation and experience lead her to abandon this notion.  The traditional role of the dependent wife is portrayed negatively – Lyddie’s father leaves his then helpless wife, Lyddie’s best friend in the mills is impregnated by a married man who abandons her, and Lyddie eventually decides not to get married until she can support herself intellectually as well as economically.  In the end, Lyddie goes off to Oberlin College to learn how to make a better contribution to society.

Although the Lowell labor union in Lyddie fails to achieve its goals, the message of the book is that collective social action is the best way to make your way in the world.  This is a moral very similar to Cronin’s in Click, Clack, Moo: Cows that Type.  Paterson’s recent book Bread and Roses, Too (2006) deals with the same themes as Lyddie in the context of the 1912 Lawrence textile strike, a situation in which Eastern European immigrants threatened the wage levels and jobs of the now-established Irish workers and bosses.  In this book, the union wins and the civic messages of feminism and cooperative social action are even clearer.

So Far From Home and Lyddie have the same basic subject matter and their authors have the same basic liberal intentions.  But the morals of their stories are, nonetheless, very different.  So Far From Home conveys a message of individual self-help and civic disengagement.  Lyddie conveys a message of collective action and civic engagement, and of using past decisions to help understand present-day choices.  It is primarily the respective structures of the two books – melodramatic versus tragic and causation versus choice – that make the difference.

At the end of the musical Into the Woods, the witch warns that “Children may not obey, but children will listen.  Children will look to you for which way to turn, to learn what to be,” so be careful of the stories you tell them.  And the chorus responds that “You just can’t act, you have to listen.  You just can’t act, you have to think.”  In choosing books for students to read, in discussing books and events with them, and in preparing lessons, we need to think about the narrative structure of the stories we are presenting to them.  We need to listen carefully to the messages being conveyed by the way we are saying things, and think not only about the substance of what we want to say but also about the form.  The narrative choices we make can determine the moral of our story.

References

Bacon, Betty. 1988. “Introduction.” Pp.1-14 in How Much Truth Do We Tell The Children: The Politics of Children’s Literature.  Edited by B. Bacon. Minneapolis, MN: Marxist Educational Press.

Banfield, Edward. 1990. The Unheavenly City Revisited. Long Grove, IL: Waveland Press.

Barber, Benjamin. 1998. A Passion for Democracy. Princeton: Princeton University Press.

Berlin, Isaiah. 1954. Historical Inevitability. London: Oxford University Press.

Bishop, Gavin. 1989.  The Three Little Pigs. New York: Scholastic, Inc.

Burke, Kenneth. 1961. Attitudes Toward History. Boston: Beacon Press.

Carr, Edward Hallett. 1967. What is History?  New York: Vintage Books.

Clark, Beverly. 2003. Kiddie Lit: The Cultural Construction of Children’s Literature in America. Baltimore, MD: Johns Hopkins University Press.

Climo, Shirley. 1989.  The Egyptian Cinderella. New York: HarperCollins.

Collier, James & Christopher Collier. 1974. My Brother Sam is Dead. New York: Scholastic, Inc.

Cronin, Doreen. 2000.  Click, Clack, Moo: Cows that Type.  New York: Simon & Schuster.

Denenberg, Barry. 1997. So Far from Home: The Diary of Mary Driscoll, an Irish Mill Girl, Lowell, Massachusetts, 1847. New York: Scholastic, Inc.

Diamond, Jared. 1993. The Third Chimpanzee: The Evolution and Future of the Human Animal.  New York: Harper Perennial.

Disney, Walt. 1933. The Three Little Pigs. Hollywood, CA: Walt Disney Pictures.

Disney, Walt. 1942. Bambi. New York: Walt Disney Productions.

Disney, Walt. 1948. Pinocchio. New York: Golden Book.

Disney, Walt. 1999. Snow White and the Seven Dwarfs. New York: Disney Enterprises.

Disney, Walt. 1999. Cinderella. New York: Disney Enterprises.

Disney, Walt. 2001. “The Three Little Pigs.” Pp.69-84 in Walt Disney’s Classic Storybook. New York: Disney Press.

Dorfman, Ariel & Armand Mattelart. 1988. “How to Read Donald Duck and Other Innocent Literature for Children.” Pp.22-31 in How Much Truth Do We Tell The Children: The Politics of Children’s Literature. Edited by B. Bacon. Minneapolis, MN: Marxist Educational Press.

Dr. Seuss. 1940. Horton Hatches the Egg. New York: Random House.

Dr. Seuss. 1950. Yertle the Turtle. New York: Random House.

Dr. Seuss. 1954. Horton Hears a Who. New York: Random House.

Dr. Seuss. 1957. The Cat in the Hat. New York: Random House.

Dr. Seuss. 1960. Green Eggs and Ham. New York: Random House.

Dr. Seuss. 1961. The Sneetches. New York: Random House.

Dr. Seuss. 1971. The Lorax. New York: Random House.

Dr. Seuss. 1984.  The Butter Battle Book. New York: Random House.

Egan, Kieran. 1988. Teaching as Storytelling. London: Routledge.

Egan, Kieran. 1992. Imagination in Teaching and Learning. Chicago: University of Chicago Press.

Ellis, Alec. 1968. A History of Children’s Reading and Literature. Oxford: Pergamon Press.

Freeman, Joshua et al. 1992. Who Built America? Working People & the Nation’s Economy, Politics, Culture & Society, Vol. Two: From the Gilded Age to the Present. New York: Pantheon Books.

Galdone, Paul. 1970. The Three Little Pigs. New York: Clarion Books.

Gillespie, Margaret. 1970. Literature for Children: History and Trends. Dubuque, IA: Wm. C. Brown Company.

Goodman, Paul. 1954. The Structure of Literature. Chicago: University of Chicago Press.

Harbage, Alfred. 1970. “Introduction.” Pp.14-27 in King Lear by William Shakespeare. Baltimore, Md: Penguin Books.

Hines, Maude. 2004. “He Made Us Very Much Like the Flowers.”  Pp. 16-30 in Wild Things: Children’s Culture and Ecocriticism. Edited by S. Dobrin & K. Kidd. Detroit, MI: Wayne State Press.

Huck, Charlotte. 1989.  Princess Furball. New York: Greenwillow Books.

Hunt, Peter. 1991. Criticism, Theory and Children’s Literature. Cambridge, MA: Basil Blackwell.

Jurich, Marilyn. 1988. “What is left out of biography for children.” Pp.206-216 in How Much Truth Do We Tell The Children: The Politics of Children’s Literature. Edited by B. Bacon. Minneapolis, MN: Marxist Educational Press.

Lasky, Kathryn. 1996. A Journey to the New World: The Diary of Remember Patience Whipple.  New York: Scholastic, Inc.

Lasky, Kathryn. 1996. True North. New York: Scholastic, Inc.

Lasky, Kathryn. 1998. Dreams in the Golden Country: The Diary of Zipporah  Feldman, a Jewish Immigrant Girl.  New York: Scholastic, Inc.

Lehr, Susan. 2001. “The Hidden Curriculum: Are We Teaching Young Girls to Wait for a Prince?” Pp.1-20 in Beauty, Brains and Brawn: The Construction of Gender in Children’s Literature. Edited by S. Lehr. Portsmouth, NH: Heinemann.

L’Engle, Madeleine. 1962. A Wrinkle in Time.  New York: Dell Publishing.

Lemisch, Jesse. 1967. Towards a Democratic History, A Radical Education Project Occasional Paper. Madison, WI: Radical Education Project.

Lemisch, Jesse. 1969. “The American Revolution seen from the Bottom Up.” Pp.3-45 in Toward a New Past: Dissenting Essays in American History. Edited by Bernstein. New York: Vintage Books.

Levine, Bruce et al. 1989. Who Built America? Working People & the Nation’s Economy, Politics, Culture & Society, Vol. One: From Conquest & Colonization through Reconstruction & the Great Uprising of 1877. New York: Pantheon Books.

Louie, Ai-Ling. 1982. Yeh Shen: A Cinderella Story from China. New York: Philomel Books.

Lorenz, Konrad. 1966. On Aggression. New York: Harcourt, Brace & World.

Lucas, Ann. 2003. “The Past in the Present of Children’s Literature.” Pp.XIII-XXI in The Presence of the Past in Children’s Literature. Edited by A. Lucas. Westport, CT: Praeger.

Lurie, Alison. 1990. Don’t Tell the Grownups: Subversive Children’s Literature.  Boston: Little Brown & Co.

MacLeod, Ann. 1994. American Childhood: Essays on Children’s Literature of the Nineteenth and Twentieth Centuries. Athens, GA: University of Georgia Press.

McEwan, Hunter & Kieren Egan. 1995. “Introduction.” Pp.VII-XV in Narrative in Teaching, Learning and Research. Edited by H. McEwan & K. Egan.  New York: Teachers College Press.

Moynihan, Ruth. 1988. “Ideologies in Children’s Literature: Some Preliminary Notes.” Pp.93-100 in How Much Truth Do We Tell The Children: The Politics of Children’s Literature.  Edited by B. Bacon.  Minneapolis, MN: Marxist Educational Press.

O’Toole, Fintan. 2002. Shakespeare is Hard, but so is Life: A Radical Guide to Shakespearian Tragedy. London: Granta Books.

Paterson, Katherine. 1991. Lyddie. New York: Puffin Books.

Paterson, Katherine. 2006.  Bread and Roses, Too.  New York: Clarion Books.

Paulsen, Gary. 1993.  NightJohn.  New York: Bantam Doubleday Dell.

Paulsen, Gary. 1995.  The Rifle.   New York: Bantam Doubleday Dell.

Plato. 1956. Great Dialogues of Plato. New York: Mentor Books.

Rowling, J.K. 1997. Harry Potter and the Sorcerer’s Stone. New York: Scholastic, Inc.

Sarland, Charles. 1999. “Critical Tradition and Ideological Positioning.” Pp.30-49 in Understanding Children’s Literature. Edited by P. Hunt. London: Routledge.

Scieszka, Jon. 1989. The True Story of the Three Little Pigs. New York: Scholastic, Inc.

Stephens, John. 1992. Language and Ideology in Children’s Fiction. London: Longman.

Stephens, John. 1999. “Linguistics and Stylistics.” Pp.73-85 in Understanding Children’s Literature. Edited by P. Hunt. London: Routledge.

Swope, Sam. 1989. The Araboolies of Liberty Street. New York: Farrar, Strauss & Giroux.

Taxel, Joel. 1988. “The American Revolution in Children’s Books: Issues of Race and Class.” Pp.157-172 in How Much Truth Do We Tell The Children: The Politics of Children’s Literature. Edited by B. Bacon.  Minneapolis, MN: Marxist Educational Press.

Thaler, Danielle. 2003. “Fiction vs. History: History’s Ghosts.” Pp.3-11. in The Presence of the Past in Children’s Literature. Edited by A. Lucas. Westport, CN: Praeger.

Trivias, Eugene. 1993.  The Three Little Wolves and the Big Bad Pig.  New York: Scholastic, Inc.

Van Doren, Mark. 2005. Shakespeare. New York: New York Review Books.

Williams, William Appleman. 1974. History as a Way of Learning. New York: New Viewpoints.

Wilson, David Sloane. 2007. Evolution of Everyone. New York: Delacorte Press.

Witherell, Carol et al. 1995. “Narrative Landscapes and the Moral Imagination.” Pp.VII-XV in Narrative in Teaching, Learning and Research. Edited by McEwan & K. Egan. New York: Teachers College Press.

Rethinking Descartes’ Cogito: “I think, therefore we are (not I am).” Part III: A Cross of Gold and the Golden Rule.

Burton Weltman

The Gold Standard versus the Golden Rule.

“You shall not press down upon the brow of labor this crown of thorns.  You shall not crucify mankind upon a cross of gold.”  William Jennings Bryan. Chicago: July 9, 1896.

In what is generally considered the greatest political convention speech in American history, William Jennings Bryan delivered his “Cross of Gold” speech to an ecstatic Democratic Party National Convention in 1896 and secured for himself the party’s presidential nomination.  Although Bryan lost that presidential election and two more after that, his words dramatized a recurring theme in American social and political thinking that still resonates today: That the Golden Rule should not mean that Gold Rules.

Conventional histories of that election generally focus on the specific issue that Bryan addressed in the speech, which was whether the United States should remain on the gold standard for its currency.  The gold standard was favored by bankers and creditors generally, and resulted in tighter credit and higher interest rates for farmers, workers and small businessmen.  The alternative promoted by Bryan was a bi-metal gold and silver standard, which would help debtors, farmers, small businessmen and the working classes.  But this was only part of his message.  The speech was rooted in a much broader debate that has recurred throughout American history between those who propound individualism and favor the hierarchical society of winners and losers that inevitably results, and those like Bryan who promote communalism and a more egalitarian society.  That was Bryan’s deeper message.

This debate has many different aspects and ramifications.  It can be encapsulated in the question of what comes first: “We” or “I”?  It can be characterized ontologically and psychologically in the difference between Descartes’ formulation of “I think, therefore I am” and the alternative formulation that I have been promoting of “I think, therefore we are.”

The debate has ethical dimensions that can be seen in the difference between “measure for measure” ethics, or “what you do unto me, I can do unto you,” and “reciprocity” ethics, or “do unto others as you would have others do unto you.”  The former is based on a contractual model of ethics.  Contracts are agreements between individuals in which each pledges to do something that the other wants.  Failure to fulfill the terms of a contract can result in retribution.  This retribution can be either compensatory — you pay damages that make me whole — or punitive — I take the Biblical “Eye for an eye and tooth for a tooth.”  It is an impersonal and individualistic ethics that treats others as means towards a person’s own ends.

Reciprocity ethics are based on the Golden Rule in which others are treated as an extension of ourselves.  “Love thy neighbor as thyself,” in Biblical terms.  It is a communal ethics of personal relationships and caring for others both in the here-and-now and in the future.  The closer the connection of the other to you, the more intense your care and caring will likely be.  But every interaction is based on an assumption of care.  Bryan spoke on behalf of reciprocity ethics.

The debate can be exemplified in legal terms by the question of who gets to keep stolen property sold by a thief to an innocent third party.  Under community law principles, the innocent buyer gets to keep the property because the goal is to promote trust among people.  If an innocent buyer were to lose the benefit of his/her good faith bargain with an apparently legitimate seller, it would breed mistrust among people and undermine the community.  The right of the innocent buyer was supported by Medieval Canon Law and by many early American jurisdictions.

Under modern American property law principles, the stolen property goes back to the original owner because the goal is to uphold the sacred rights of individual property.  Contract law in the nineteenth century operated almost entirely on the principle of “buyer beware,” which was consistent with the predominant individualism of the society, and had the effect of promoting distrust and disunity among people.  This was especially the case in the late nineteenth century.  Bryan spoke for the rights of people as a community over the rights of property owners.

Bryan was an imposing figure with a leonine mane of hair and a powerful voice.  The Golden Rule was his standard.  The Gold Standard was for him one of the ways in which the laissez-faire individualistic economic principles that characterized the Gilded Age of the late nineteenth century had led to the oppression of ordinary people and the ascension of the fortunate few — the 1% we might say today — to positions of Midas-like wealth and Ozymandias-like power.

Bryan warned the Democratic Convention that a war was being waged in this country, “a struggle between the idle holders of idle capital and the struggling masses who produce the wealth and pay the taxes of the country.”  And he challenged the delegates to declare which side they were on, “upon the side of the idle holders of idle capital, or upon the side of the struggling masses?”  Democrats chose him and his side in that war three times, as did some 46%, 45%, and 43% of the voters in his three presidential runs.  This was a substantial showing on behalf of the Golden Rule, albeit not enough to get him elected.

Bryan’s presidential runs coincided with two major reform movements in this country, the Populist Movement of the late nineteenth century and the Progressive Movement of the early twentieth century.  Both movements were primarily cooperative and anti-individualist in their main thrust, although both also had their individualist wings.  Although Bryan was not literally either a Populist or a Progressive, he was widely seen as a spokesperson for the communitarian side of both movements.  Bryan was a devout Christian and derived many of his communal political principles from his roots in the liberal Social Gospel religious movement of that period.

With the demise in the 1920’s of both Bryan’s political career and the social movements that had defined his career, he turned to defending Biblical creationism against the theory of evolution.  Political conservatives since the late nineteenth century had been using evolutionary theory as a rationalization of laissez-faire capitalism and rule by the rich.  They extolled dog-eat-dog competition and celebrated rich people as deserving winners in the evolutionary struggle, as the fittest in the evolutionary “survival of the fittest.”  They scorned the poor as deserving their hardship and opposed any efforts to help the working classes.  In the famous Scopes Trial and other forums, Bryan waged a battle between this so-called Social Darwinism and the doctrine of the Golden Rule.  It was for him the same struggle he had proclaimed in 1896.

Following the Golden Rule along the Yellow Brick Road.

“We have entreated, and our entreaties have been disregarded.  We have begged, and they have mocked when our calamity came.”  William Jennings Bryan. Chicago: July 9, 1896.

In The Wonderful Wizard of Oz, written by L. Frank Baum in 1900 and made into a movie in 1939, the heroine Dorothy finds herself stranded in the magical land of Oz and seeks a way to get back home.  She is advised by the inhabitants to follow a yellow brick road which will lead her to a wizard who will magically solve her problem.  Along the way, she makes three friends and finds a fellowship in their company that ultimately solves her problem and theirs as well.

The story has been interpreted by some readers as an allegory representing the struggles of the Populists against exploitive Eastern bankers, represented by the Wicked Witch of the East, and Western railroad interests, represented by the Wicked Witch of the West.  Dorothy’s three friends are a scarecrow, representing a farmer stuck in a spiral of debt that he cannot figure out how to end; a tin man, representing a worker laboring in a heartless mechanical factory system; and a cowardly lion, representing William Jennings Bryan who critics said roared like a lion but acted like a mouse when confronting the powers-that-be. Dorothy represented the American public caught up in the social and economic storm of that time and just wanting things to get back to normal.  Dorothy and her friends hope that the wizard, who represents a transcendent authority or all-powerful leader, will send Dorothy home and give the scarecrow brains, the tin man a heart, and the lion some courage.

The yellow brick road, which represents the gold standard, leads the four comrades to a so-called Emerald City which is emerald only because people are required to wear glasses with tinted green lenses.  It is a fake.  And the wizard turns out to be a fraud who has no magic.  But the four companions are able to solve their problems and to become their hoped-for best selves through helping each other, one for all and all for one, without any need of a transcendent authority.  That is, by seeing each other as extensions of themselves and treating each other as they would want to be treated if they were in the other’s situation, they are able to achieve their goals.  It is a story of the power of ordinary people acting as a community and following the Golden Rule.

The moral of the story of The Wonderful Wizard of Oz is that the Golden Rule is a sufficient ground for establishing our existential being, and for deriving ethical principles and enforcing them. A transcendent authority — a god, guru, wizard, or absolutist government — is not necessary to establish our identities as individuals, and to prescribe and enforce ethical rules.  Dostoevsky claimed that if God does not exist, then everything is permitted and nihilism prevails.  Voltaire said that if God did not exist, we would have to invent Him.  Not so, I would argue.  God may be sufficient for these purposes but He is not necessary.

The key to my argument is the contention that the Golden Rule is not merely an ethical adjuration to do better but is a statement of psychological and social fact.  We, in fact, live according to the principles of “Love thy neighbor as thyself” and “Do unto others as you would have them do unto you.”  It is part of our human condition.  As I contended in Part I of this essay, we cannot know ourselves without knowing others.  “I think, therefore we are” is an existential fact of life.  I know myself through knowing others and their knowing me.

The way we think of ourselves, therefore, depends on how we think of others.  If we think of the well-being of others as connected with our own well-being, loving our neighbors as though they are extensions of ourselves, then we are likely to think well of ourselves.  If we disregard others’ well-being, we are likely to think poorly of ourselves.  In turn, the way we expect others to treat us depends on how we treat them, on doing unto them as we would have them do unto us if we were in their situation.  If we treat others poorly, we are likely to expect them to treat us poorly, and they probably will.  Hostility breeds hostility.  If we treat others well, we are likely to expect the same from them and are more likely to be treated that way.

The Golden Rule is contained in one form or another within virtually every major existential and ethical philosophy and religion in the history of the world.  Many philosophical schools and religions around the world since ancient times have also contended that humans are happiest and healthiest when they are in close communal and cooperative relations with others and when they follow the Golden Rule.  Anthropologists have provided support for this contention in examining societies past and present.  Evolutionary biologists have contended that the ability of humans to cooperate is a key to our evolutionary success.  And research on human brains has recently supported this contention.  Humans are apparently hardwired to feel best when helping others and living in close communal relations with family, neighbors and co-workers.  The philosophy of ethics, which is often ridiculed as soft and fuzzy, is being supported by hard science.

That the Golden Rule is a psychological reality and possibly even a biological fact of human life does not mean that we do not need government or laws or means of enforcing those laws.  Although the Golden Rule explicitly describes how one individual person should treat another individual person — “Love thy neighbor as thyself,” for example — it implicitly requires a cooperative community and a government to fulfill its purposes.  Treating others as an extension of oneself requires institutions greater than oneself to fulfill their needs, which ultimately means government.  The Golden Rule points, however, toward a government of, by and for the people instead of a government that rules as a transcendent authority over people.  Ironically, it is individualism and anti-government libertarianism that almost inevitably lead to authoritarian government.  In the absence of communal ties and cooperative ethics, government must impose itself on isolated individuals in order to establish law and order.  Communalism can, instead, lead to the sort of participatory government that the Founders of our country intended.

Individualism, Individuality and Death.

“We do not come as individuals.”  William Jennings Bryan. Chicago: July 9, 1896.

The Founding Fathers were not wallflowers.  They were men with large egos who openly sought lasting fame.  That they promoted communalism does not mean that they did not also seek to assert their own individuality.  Individuality and communalism are complementary, not contradictory, values.  It is important in this regard to distinguish between individualism and individuality.  Individualism is an ideology that promotes a cult of self-development by the self-sufficient individual.  It places “I” before “We” and relates everything in the world to “Me.”  It also invariably places each individual in competition with other individuals, and leaves him/her in a perpetually precarious position facing potential attack from other individuals.  While stressing personal independence, individualism effectively makes a person dependent upon the willingness of others to leave that person alone.  With individualism as one’s starting point, the Golden Rule can seem silly, a pious ideal and ritual wish that one might recite on Sundays but that one knows is an impossible dream.

Individuality is the quality that distinguishes each individual from other people.  Individuality is a relative term that delineates how a person compares and contrasts with other persons. It is what makes a person unique.  Individuality can, as such, be developed only in communal relations with others and most securely in cooperation with them.  Individuality implies mutual interdependence rather than either independence or dependence, as each person makes his/her unique contribution to the communal whole.  The Golden Rule is a prescription for individuality and the quest for individuality leads to the Golden Rule.

But then there is death.  For advocates of individualism and advocates of a transcendent authority such as God, death is their strongest argument.  Humans are aware from an early age that they are going to die and each person dies his or her own death.  Exponents of individualism claim that both the contemplation of death and the experience of dying create an insuperable gulf that separates all of us and renders each of us an isolated individual.  In turn, religious advocates claim that awareness of death leads people to long for a transcendent authority such as God to whom they can attach themselves and who might grant them eternal life after death.  Without God, they say, life is short, mean and meaningless.  This is a powerful argument but many thinkers from David Hume to Thomas Mann have tried to counter it.

Thomas Mann, for example, turned the argument on its head and claimed that without death, life is mean and meaningless.  Through the character of Herr Settembrini in The Magic Mountain, Mann claimed that if life were eternal, then time would be valueless, effort would be worthless, and commitment would be silly.  Life would be profligate, something to squander because there would always be more to come.  And people would have no rhyme or reason to come together.  With or without God, eternal life would isolate individuals and render ethics meaningless.  With eternal life at stake, belief in God leads to the sort of cynical commitment represented by Pascal’s wager.  Pascal, a contemporary of Descartes, said that even if God does not exist, you have nothing to lose by believing in Him.  And if God does exist, then you have everything to gain by believing in Him and everything to lose by not believing in Him.  So, he concluded, believing in God is the practical thing to do.  But where is the dignity in life or in God with such a bargain?

Death, according to Mann, gives life dignity.  Rather than being the enemy of life, death makes life worth living.  And rather than isolating individuals from each other, death makes life a shared experience with others.  Death creates boundaries to life within which we are challenged to do our best and make a contribution to each other.  Commitment makes sense in this context because we have only so much time and we must make the best of it.  And since we are all in this together, and no one gets out alive, we have reason to see each other as an extension of ourselves and to cooperate with each other.  Rather than separating one from the other, death makes us part of a collective life in which the Golden Rule makes the best sense.  Following the Golden Rule allows one to live with dignity without God, but it can also help one to live with dignity with God.

Mann’s argument does not take the sting out of death.  But individualism, with its focus on “I” and the isolation of each individual, effectively focuses the individual on death and makes death a continual source of anxiety.  Communalism, with its focus on “We,” makes one part of something bigger than oneself without having to abjectly submit to a transcendent authority such as an absolute God or an authoritarian government.  It does not rule out the need for government or eliminate the desire in some to believe in God, but it places those attachments on a more dignified and secure footing.  Communalism provides better protection for individuality and relief of anxiety than individualism, and leaves one better able to pursue happiness in life.

John Locke versus Francis Hutcheson: Considering “The Pursuit of Happiness.”

“The man who is employed for wages is as much a businessman as his employer.”  William Jennings Bryan. Chicago: July 9, 1896.

Communalism has been a major part of social theory and practice throughout American history.  An important ingredient of most Native American societies, communalism was also integral to the first European American settlements.  The Puritans, for example, explicitly rejected individualism and sought to establish a communal society in Massachusetts during the early 1600’s.  They enacted maximum price and minimum wage laws so that no one could take advantage of another’s need for goods and services or for a sufficient income.  They established procedures for sharing and rotating land occupancy so that all would take turns farming the best land and no one could monopolize all of the best land.

John Winthrop, the Puritan leader, denounced the idea “That a man may sell as dear as he can and buy as cheap as he can” and “That a man may take advantage of his own skill or ability, so he may of another’s ignorance or necessity.”  The Reverend John Wise explained the Puritan theory of government, saying that it must “Use and Apply the strength and riches of Private Persons towards maintaining the Common Peace, Security, and Well-being of All.”  All must share in the common wealth of the commonwealth.

Individualism developed, however, as a competing orientation in America during the seventeenth and the eighteenth centuries.  The competition revolved in large part around differences between the philosophies of the Englishman John Locke and the Scotsman Francis Hutcheson.  Conventional American histories focus on the influence that Locke had on the colonists and often ignore Hutcheson completely.  But Hutcheson was extremely influential, especially with leading figures in the founding of the country such as Thomas Jefferson, James Madison, and Benjamin Franklin among many others.  Hutcheson is the originator of the phrase “pursuit of happiness” that is enshrined as an inalienable human right in the Declaration of Independence.

Locke was the most important philosopher of individualism during this time.  Echoing Descartes’ Cogito, Locke claimed that each human is born as a “tabula rasa,” that is, as a blank slate devoid of knowledge or personality.  Intellectual development consists of amassing facts to fill up the brain.  Personal development consists of amassing private property as a means of establishing one’s identity and one’s relations with others.  In Locke’s formula, you are what you own and the people you control through that ownership.  In Locke’s view, the primary purpose of government is to protect “life, liberty and property,” since life depended on owning property, and liberty consisted of being able to own and operate property.  His was a philosophy that stressed the self, selfishness, and self-aggrandizement.

Hutcheson was a leading figure in the Scottish Enlightenment of the eighteenth century and a primary originator of the Common Sense moral and social theories that were held by most of the American Founding Fathers.  Hutcheson developed theories of benevolence in explicit opposition to Locke’s theories of selfishness.  Contrary to Locke, Hutcheson claimed that the primary purpose of life was to make oneself happy by making others happy, and that the primary purpose of government was to protect “life, liberty, and the pursuit of happiness.”  This meant that government should encourage cooperation because that’s how people achieved happiness.

Hutcheson rejected Locke’s “tabula rasa” theory of personality and contended that each human is born with a common sense intellectual faculty, that is, a capacity for higher level thinking of the sort that we today would call critical thinking.  Critical thinking involves comparing and contrasting viewpoints.  It cannot be done in isolation because it requires others’ viewpoints for purposes of analysis. For Hutcheson, intellectual development consists of pursuing knowledge through critical thinking with others, not, as Locke would have it, through amassing facts and experiences by oneself.  Hutcheson also contended that humans are born with a common sense moral faculty, what we might call a conscience.  A person’s social development consists in exercising the person’s moral faculty in helping others, and pursuing happiness for oneself by bringing happiness to others.  A person gains an identity and develops his or her individuality not by controlling property and other people, but by working with others.

Although the Founders later injected Locke’s formula of “life, liberty and property” into the Fifth Amendment of the Constitution in defining one of the primary purposes of government, that did not mean they were abandoning the “pursuit of happiness” delineated earlier in the Declaration.  Nor, in protecting individual rights in the Bill of Rights, were they opting for individualism or so-called libertarianism.  Despite the unhistorical and hysterical contentions of libertarians and others on the political right wing, the Constitution is on the whole a communal document that was adopted by “We the People” in order to “promote the general Welfare” and that endows a government of the people with “Power… to make all laws necessary and proper” toward that end.

Libertarianism is essentially an anti-government version of Hobbes’ war of each against all.  Libertarians do not trust anyone and especially do not trust anyone in power in the government.  For them, government merely provides an opportunity for selfish individuals (like themselves) to get over on the rest of us.  So, they claim, government must be crippled and everyone must protect himself and his property as best he can, essentially on his own.  As with Locke, they believe that what you own defines who you are.  Consistent with this ideology, libertarians strongly support an unhistorical and illogical interpretation of the Second Amendment that allows everyone to have whatever guns and other weapons they might want and think they need to protect themselves and their property against each other and against the government.  Libertarians have thereby slid down a slippery slope of individualism from caution to paranoia.

In this context, it might be useful to compare and contrast libertarianism with anarchism.  Anarchism is essentially a utopian extension of the Golden Rule.  Like libertarians, anarchists distrust government and worry that power corrupts.  But whereas libertarians worry that government supports the unworthy masses against the deserving few, anarchists claim that government inevitably supports the rich and powerful against everyone else.  Anarchists want a world without government, but they base their hopes on a belief that people are essentially good and that if we only got rid of private property, we could also get rid of government and happily live communally ever after in peace and harmony.  Libertarians may be described as utopian (or perhaps dystopian) capitalists, and anarchists as utopian socialists.

Herbert Spencer versus Lester Frank Ward: Reconsidering “Survival of the Fittest.”

“There are two ideas of government.  There are those who believe that if you just legislate to make the well-to-do prosperous, that then their prosperity will leak through on those below.  The Democratic idea has been that if you legislate to make the masses prosperous, their prosperity will find its way up and through every class that rests upon it.”  William Jennings Bryan. Chicago: July 9, 1896.

Native American, African American, and European American social theories and practices were predominantly communal from the seventeenth through the early nineteenth century.  During the so-called Jacksonian era of the second quarter of the nineteenth century, the social paradigm among European Americans flipped so that individualism became the dominant ideology for them and communalism became a secondary principle.  And laissez-faire capitalism became the predominant economic theory in the United States, even if it was not uniformly the practice.

Even as laissez-faire capitalism was being trumpeted as the American way of life during the nineteenth century, and the courts regularly struck down regulations of big business and support for small farmers and workers as unconstitutional, state and federal governments routinely provided economic support for big business enterprises.  And the courts supported this disparate treatment on the grounds of protecting the sacred rights of property.  The Supreme Court, led by Justice Stephen Field, read laissez-faire principles into the Fifth and Fourteenth Amendments of the Constitution, claiming that almost any regulation of business, as opposed to support for business — including child labor laws, minimum wage laws, health and safety laws — was an unconstitutional taking of property under those Amendments.  It was in this context that the Supreme Court first ruled that corporations were “persons” under the Constitution.  Since the protections of property under the Fifth and Fourteenth Amendments applied only to persons, if corporations were not deemed persons, then state and federal governments could completely regulate them.  Such an outcome did not fit with the Court’s theory of individualism in which winners were supposed to form a ruling hierarchy of wealth over lower class losers.

In sum, while the predominant theory in this country as Bryan spoke in 1896 was laissez-faire individualism, the predominant practice was cutthroat competition for ordinary people but corporate monopolies for the rich.  This situation was often described as capitalism for the poor and socialism for the rich.  The social ideas of Thomas Malthus, the evolutionary theory of Social Darwinism, and a Hobbesian interpretation of the mantra “survival of the fittest” were widely used to justify this disparate treatment of economic winners and losers.  The use of these ideas for those purposes was decried by Darwin as well as Bryan.

Thomas Malthus was an early nineteenth century Protestant English clergyman who claimed that in the absence of strictly enforced population control measures, population inevitably outruns the supply of food and other necessities of life, and the result is famine and social dislocation.  The problem is the lower classes who have no self-control and tend to breed like rabbits.  Malthus claimed that without strict controls on the behavior of the lower classes, periodic famines, deadly epidemic diseases, and wars were necessary to rid the world of the excess lower class population.  He was particularly opposed to any sort of charity for the poor because that would encourage the poor to have more children.  The lower classes must essentially be starved into birth control.

Charles Darwin admitted to using Malthus’ population theories in developing his evolutionary theory of “natural selection,” according to which those species that are best adapted to a given environment will survive while others will perish.  But Darwin did not apply natural selection to the internal operations of human society.  That is, natural selection applied to competition among species and did not necessarily imply competition within species.  To the contrary, a species might thrive and survive on the basis of cooperation among its members rather than competition, and that, according to Darwin, includes humans.

It was Herbert Spencer who jumped on Darwinism as a justification for laissez-faire capitalist individualism.  He coined the slogan “survival of the fittest” and promoted the rule of the rich as an example of that principle.  Spencer is essentially the founder of the Social Darwinism that Bryan found so offensive.  But there was an alternative version of Social Darwinism in the late nineteenth century that took its cue from Darwin himself and that emphasized the inherently cooperative nature of human existence.  Lester Frank Ward, one of the pioneers of sociology, argued that “fittest” did not mean strongest, richest or most powerful.  Fitness is a function of adaptation to the environment and especially to changing environments.  The fitness of humans, said Ward, is a result of our adaptability and our ability to work together.  It is social cooperation and not laissez-faire competition that makes humans fit and has historically enabled humans to survive and thrive.  Laissez-faire individualism is a prescription for human catastrophe.

Ward’s message was overwhelmed in its time by the support given Spencer’s version of Social Darwinism by Andrew Carnegie and other wealthy and powerful people, all of whom preached competitive individualism for the masses while building huge monopolistic corporate empires for themselves.  It is ironic that opposition to evolutionary theory was led in the late nineteenth and early twentieth centuries by political and religious liberals, such as Bryan, who feared the conservative message of Social Darwinism, while in recent years it has been promoted by political and religious conservatives whose social views are essentially similar to those of the Social Darwinians whom Bryan opposed.

The essence of Ward’s liberal Social Darwinian message survived, however, in the theories of many of the Progressives, New Dealers, Great Society proponents, and other liberals of the twentieth and early twenty-first century.  America’s major contribution to world philosophy — the school of thought embodied in the pragmaticism of Charles Sanders Peirce, pragmatism of William James, and experimentalism of John Dewey — is rooted in evolutionary theory and in Golden Rule ethics.  It is the message of “I think, therefore we are,” and the idea of seeing others as extensions of ourselves in the here-and-now and in the future.

We see examples of this message all around us.  The teacher who sows seeds of learning in the hope they will grow in future ways and times the teacher may not see.  The grandparents who try to provide for grandchildren they may never see or see grow up.  The politician who enacts long-term policies that may not succeed until after the next election.  Caring for others now and in the future.  This sort of thinking flies in the face of the predominant selfish individualism fostered by our society, a society in which a right wing majority of the Supreme Court has ruled that corporations — legal fictions defined solely by money — have the same rights as humans and that under the free speech guarantees of the Constitution, money literally talks.  These are fantasies worthy of L. Frank Baum that would be comical if they were not so harmful.  But hope remains in the staying power and underlying reality of the cooperative message, a human and humane message that is increasingly being supported by science.

Egoism and its attendant evils will never completely go away, but maybe they can become the exception rather than the ruling ideal.  We can see every day the increasingly destructive effects on the environment of our current ways of thinking and acting.  We may be in the process of destroying the environment in which humans have survived and creating an environment in which we may no longer fit.  In the competition among species for survival of the fittest, cockroaches, just as one example, may be a better fit than humans for the new environment we are creating, and they may outlast us.  What we need is a paradigm change of the sort demanded by William Jennings Bryan, so that the Golden Rule, instead of the rule of gold, becomes the norm in our society rather than an exceptional ideal, and so that we no longer stumble and fall on a fool’s gold errand down the yellow brick road.

Rethinking Descartes’ Cogito: “I think, therefore we are (not I am).” Part II: The World According to Calvin and Hobbes.

Burton Weltman

I think, therefore I laugh… Parody as Reality.

Calvin: “Do you believe in the Devil?  You know, a supreme evil being dedicated to the temptation, corruption and destruction of man?”

Hobbes: “I’m not sure man needs help.”

This is a sample colloquy between the main characters of Calvin and Hobbes, a popular nationally syndicated comic strip produced by Bill Watterson from 1985 to 1995.  Calvin was a little boy.  Hobbes was his toy stuffed tiger who became a full-sized talking tiger whenever no one else was around.  The comic strip featured Calvin’s and Hobbes’ sardonic observations about the foibles and foolishness of adult humans and the hypocrisies and atrocities of human society.

The comic strip Calvin was named after John Calvin, a sixteenth century leader of the Protestant Reformation who founded a strict version of Protestantism. John Calvin and his followers focused on what they claimed is the lasting and pervasive effect of Adam’s Original Sin.  They insisted that humans are born in sin, inevitably live in sin whatever their best efforts to do good, and all deserve to be sent to Hell when they die.  That conclusion includes even newborn babies and children who have done little or nothing in their short lives.  Only God’s arbitrary forgiveness saves some few humans from the eternal damnation they all deserve.

John Calvin also insisted that it was up to each individual to read the Bible for him/herself, decide what it means, and determine at his/her peril how to lead a righteously God-dominated life.  This line of thought could sometimes lead to holier-than-thou competition among believers and vengeance-is-mine-on-behalf-of-the-Lord persecution of non-believers.  The comic strip Calvin often reflects the misanthropic John Calvin’s dark perspective.

The comic strip tiger was named Hobbes after Thomas Hobbes, a seventeenth century philosopher and contemporary of Descartes, who believed that humans were voracious and vicious by nature, and that life without a dictatorial government to control people would consist of a “war of everyone against everyone.”  People would be in “continual fear and danger of violent death” and their lives would be “solitary, poor, nasty, brutish, and short.”  Hobbes believed that people are inherently self-centered and selfish and that only an authoritarian government, “a common power to keep them in awe,” could save humans from themselves and society from chaos.  People were by nature free, independent and individualistic, but needed to be tightly leashed and dominated to survive.  Unlike Calvin, Hobbes did not believe in Divine Providence and insisted that only complete subservience to government, not God, could bring peace on earth.  The comic strip Hobbes often reflects the dour philosopher’s sour perspective.

In choosing to parody the perspectives of John Calvin and Thomas Hobbes in his comic strip, Bill Watterson hit on two of the main tributaries to the mainstream of Western social thought over the past four hundred years.  As with most social thinkers during that time, including most conservatives, liberals and radicals, Calvin and Hobbes took individualism as the starting point of their theories and came to some form of authoritarianism as their conclusion — God domination for Calvin, government domination for Hobbes.  Descartes’ Cogito has been popularly understood to provide philosophical support for this line of thought.

I think, therefore I am free… But not for long.

Hobbes: “Until you stalk and overrun, you cannot devour anyone.”

Descartes’ “I think, therefore I am” has been widely understood to demonstrate that each of us is an isolated individual and is the center of his/her own universe.  It has also been understood to establish that society is merely a conglomeration of independent individuals, and that without some transcendent authority to connect us to each other, dictate moral values to us, and enforce law and order on us, there is no feasible way for ethical human beings and ethical societies to survive.  Chaos, violence and a dog-eat-dog cycle of predatory behavior would prevail.

This intellectual transition from a premise of individual freedom to an authoritarian conclusion follows logically from Descartes’ Cogito and is reflected in the theories of Calvin, Hobbes and many others over the last four hundred years.  If each of us is an isolated individual, then each of us is theoretically a law unto him/herself.  The logical result of this sort of self-willed insularity would be chaos if it were allowed to play out.  While exceptional individuals might on their own behave morally and even heroically in accord with what they perceive as a common ethic, we cannot trust that others will do so.

Therefore, a transcendent authority, which is generally taken to be God or government or both, is required to establish a code of ethics that will be accepted by everyone.  And a transcendent enforcer, which can punish transgressors in this life and/or the next, is necessary to ensure that law and order prevails and that everyone obeys the code.  Although one can try to limit the ways and means, and the extent to which a transcendent authority can restrict individual freedoms and inflict punishments on transgressors — through constitutions, checks and balances, division of powers, and other vehicles — individualism is a theory and practice that cannot sustain itself, and authoritarianism seems to emerge inevitably from it even as the two theories contradict each other.

Authoritarianism is, however, also a fragile system that almost invariably produces the chaos and violence it is intended to prevent.  The logic of this outcome was described by Hegel in his discussion of the master-slave relationship.  Individualism generates a war of each against all for security and supremacy.  This war will invariably result in might making right and the stronger imposing their will as masters on the weaker.  Peace will be imposed on the populace but it will be a fragile peace.  Any victory in this war will be inherently precarious and fraught with peril for the masters because the self-esteem and the security of the masters depend on the slaves accepting their subordination.  If the slaves insist on their equality with the masters, either through outright rebellion or even just insubordination, then the masters’ self-esteem is undermined and their safety is threatened.  The pride of the masters goeth before their fall.  The result, says Hegel, is that masters are as much enslaved by their domination of their slaves as the slaves are by their masters, and masters are inevitably insecure.

Hegel’s conclusions were exemplified in the ante-bellum American South by the exaggerated fears of slave masters of slave rebellions.  Most slave owners lived in constant fear of slave uprisings and imposed all sorts of onerous restrictions on their own activities out of fear of their slaves.  Slave owners also frequently undertook preemptive violence against their slaves, thereby coarsening their own lives as a means of protecting themselves, and for the same reason often made their plantations less efficient and profitable than they otherwise would have been.

Hegel’s master-slave analysis has been applied to other unequal relationships and winner/loser outcomes that result from individualistic struggles for supremacy and survival.  Winner/loser outcomes and have/have-not relationships almost inevitably generate self-fulfilling vicious cycles of fear and violence as the winners and the haves try to protect their dominant positions.  We can see this vicious cycle over the last one hundred years in the United States in the exaggerated fears of the upper classes of uprisings by workers, blacks, immigrants, Muslims, and other have-not groups.  These fears have led to preemptive repression and violence by ruling class winners against downtrodden losers, with the result that the rulers have often brought about the violence that they feared and also generated cycles of seemingly endless conflict with many of those they rule.

I think, therefore I cry: The Cogito and Social Theory

Calvin: “There’s no problem so awful that you can’t add some guilt to it and make it even worse.”

Individualism and its consequences have preoccupied Western thinkers since the sixteenth century, albeit with a tremendous amount of ambivalence and guilt built into the theory and practice of it.  Both in theory and in practice people have been whipsawed back and forth, for example, between commitment to absolute individual freedom and the necessity of law and order discipline, between belief in individual responsibility and belief in charity toward the have-nots, and between the integrity of the isolated individual and the warmth of a shared community.  Whichever way people have turned, they seem to experience loss and guilt.

The Protestant Reformation of the sixteenth century, and especially Calvinism, significantly changed the way in which believers related to each other and to their God.  The Catholic Church, which had dominated religious life during the European Middle Ages, portrayed itself as a community of believers and a society for mutual aid toward salvation.  In this view, the Church acted as an intermediary between people and God to help them get right with the Lord.

In contrast, Protestants promoted individual self-help toward salvation.  Protestants proclaimed the responsibility of each individual to get straight with God on his/her own.  Each person was free to do as he/she pleased but would be condemned to eternal torment in Hell if he/she did not strictly follow the dictates of God in the Bible.  Protestant ministers, except for a few preachers who occasionally claimed to be inspired by God or in some kind of direct contact with Him, did not provide the help that Catholic priests offered to mediate between people and God.

Protestantism has, however, been full of conflicts, contradictions and agonizing ambivalence that are a consequence of its individualistic premises.  Can, for example, someone who has had a salvation experience do whatever he/she wants without any of his/her acts being deemed sins, as Anne Hutchinson and other seventeenth century antinomian Protestants claimed?  Is absolute individual freedom the meaning of Jesus’ death for our sins and Paul’s repudiation of the Law?  Or must everyone still obey the letter of Biblical Law, and even then possibly be damned to Hell?  Are some of us innocent at least sometimes, or are all of us always guilty?  Is it fair that newborn babies are considered sinners and are damned if they die without baptism?  The history of Protestantism has from its inception been replete with conflicts between individualism and authoritarianism, and recurring cycles of antinomian radicalism and orthodox conservatism.

Like Protestantism, most political philosophy over the last four hundred years has started with the isolated individual and then tried to explain and justify society, social control and government.  Most thinkers have come to some sort of unstable compromise between freedom and authority in which government is seen as a necessary evil.  This conclusion has been reached and preached by conservative theorists such as Thomas Hobbes, liberal theorists such as John Locke, who heavily influenced the American Founding Fathers, and more recent radical thinkers such as John Rawls.  It is a conclusion that inevitably breeds distrust of government as a repressive institution that threatens to swallow up individuals and their freedoms.

Political theorizing during this time has typically started with a “state of nature” (Hobbes and Locke) or an “original position” (Rawls) in which isolated individuals face each other and face a choice of how to establish social relations with each other.  This choice is typically accomplished through some sort of social contract in which individuals agree to relinquish some of their rights (Locke and Rawls) or all of their freedom (Hobbes) to the government in return for protection and social services.  This is a deal that tends to be portrayed as a devil’s bargain that makes government a constant threat to people.

Even most socialistic theories, such as those of the influential nineteenth century radicals Robert Owen and Charles Fourier, have been premised on individualism.  This starting point puts cooperative and communitarian programs at a disadvantage, and it led Owen and Fourier to make compromises with authoritarianism that contradicted many of their socialistic goals.

Mainstream economic theories in the modern era, such as those of David Ricardo, Alfred Marshall, and Milton Friedman, have similarly started with the individual producer and consumer who is then required to submit him/herself to domination by so-called market-place principles and to let the “invisible hand” of competition determine his/her life choices.  These theories insist that individuals be free to make their own decisions but also insist that people make those decisions according to the demands of the marketplace, thereby mixing freedom with submission.  The theories also generally insist as a matter of personal responsibility and economic rationality that the haves deserve to enjoy their wealth and the have-nots deserve to suffer their poverty.  Charity is often lauded in individual cases but discouraged as a general practice because it breeds sloth and it wastes resources that might otherwise be productively and profitably invested.  In these theories, government involvement in the economy is seen as at best an occasional necessary evil and government is invariably blamed for anything that goes wrong.

Finally, ethical theories over the last four hundred years as diverse as those of Descartes, Kant, Herbert Spencer, Nietzsche, and Sartre have all started with isolated individuals and ended by either applauding or denouncing the imposition of moral and social codes on people.  The conclusion of Calvin and Hobbes that individualism requires repression has been reflected in most social and political theory and practice since their time, and the idea that people might willingly behave according to the Golden Rule or some other cooperative ethic has generally been rejected and even ridiculed.

I think, therefore I choose: Individualism in History

Calvin: “It’s hard to be religious when certain people are never incinerated by bolts of lightning.”

Conventional thought tends to portray individualism and individual freedom as the central issue in history, as though arguments and struggles for and against individualism were the most important thing for everyone.  This was not, however, the case in most earlier societies, in which Descartes’ Cogito and the idea of individualism would not have made sense to people.

In ancient Greece, for example, Aristotle reflected the general sentiment when he claimed that man is a social animal who gains an identity from those around him, not from himself.  The concept of “We” came before the idea of “I.”  For Aristotle and most other Greek thinkers, an isolated individual was literally an idiot.  Reality derives from society, not from the individual.  While the Greeks recognized and focused on the problematic relationship of the one to the many, and the difficulty of integrating and interrelating individuals and society, the idea of “one for all and all for one” was for most Greeks not an ideal largely honored in the breach but a statement of fact that explained how they saw themselves and justified their actions, even when those actions were not ideal.

Politics and participation in government were considered by Aristotle and most other Greeks to be the highest forms of activity.  Freedom was exercised through government, not against it, and government was the expression of freedom, not its enemy.  Exile and exclusion from society were widely considered fates worse than death, as exemplified by Socrates’ choice of death rather than escape from Athens when he was convicted of impiety.  Human sociability, the willingness and need of people to be with other people and to get along with them, rather than authoritarian imposition of law and order, was considered by most Greek thinkers to be the best source of ethics and ethical enforcement.

A similar communal emphasis permeated mainstream thought in the European Middle Ages, during which intellectual and ethical debate was dominated by the Catholic Church.  Reiterating the idea that man is a social animal, Thomas Aquinas, the Church’s preeminent philosopher, claimed that natural law was innate in humans and coexisted with divine law.  Even an atheist, he argued, was capable of ethical behavior in fulfillment of his/her best self and most enlightened self-interest.  Monastic communalism was the highest ideal of medieval society and the model for Thomas More’s critique in his Utopia of the emerging individualism in Western society during the 1500’s.  Exemplifying this communalism, artistic and architectural creations in medieval Europe were done anonymously and were considered collective creations of the community not the product of individuals. “We” came before “I.”

The Renaissance and the Protestant Reformation changed this.  We know today who did most of the creative work during that time because Renaissance artists aggressively sought individual fame.  Reflecting the consequences of the emerging individualism in society, most Western societies starting in the 1500s developed both dictatorial churches and dictatorial governments, ruled by absolute monarchs, so-called divine right kings, in conjunction with government-controlled churches, pray-my-way-or-die institutions.

This too began to pass starting in the late eighteenth century.  Since then, most Western societies have gradually become less authoritarian in government, albeit with totalitarian spasms such as the fascist and Nazi regimes of the twentieth century, and most have also moved toward religious tolerance and even indifference.  In turn, most countries have replaced extreme forms of individualism with modest forms of social democracy and have been able to adopt cooperative ethical codes without resorting to religious coercion or political oppression.  The reasons for this change are complex but they include the collectivism that is inherent in the industrialization and urbanization that have characterized this period, and the labor unions and socialistic movements that emerged.

These changes have, nonetheless, been largely understood and undertaken as modifications of the individualistic logic that stems from Descartes’ Cogito rather than a rejection of individualism.  The continued prevalence of an individualist ethos has inhibited the more humane theory and practice that would follow from recognizing that “We” precedes “I” and enacting the Golden Rule.  Individualism has been particularly persistent in the United States, where fear of the authoritarian consequences of their own laissez-faire individualism has led so-called libertarians such as Ron and Rand Paul and other far-right wingers to resent and reject almost all government programs.  Projecting onto government the fears generated by the imagined consequences of their own individualistic premises, they warn that any and every government program is the beginning of authoritarianism and the end of freedom.  They are hoisted on their own petards and aim to hoist the rest of us in the same way.

I think, therefore what would the Founding Fathers do?

Calvin: Today at school I tried to decide whether to cheat on my test or not…I wondered whether it is better to do the right thing and fail…or is it better to do the wrong thing and succeed.

Hobbes: So what did you decide?

Calvin: Nothing.  I ran out of time and I had to turn in a blank paper.

Hobbes: Anyway, simply acknowledging the issue is a moral victory.

Calvin: Well, it just seemed wrong to cheat on an ethics test.

 

But it does not have to be this way and the American Founding Fathers knew it.  Most of the Founders were not adherents of individualism and they rejected both a dictatorial God and a dictatorial government as the guarantor of an ethical society.  Theirs was a social theory that was based on communalism and that incorporated an ethic based on the Golden Rule.  They believed that there can be worlds other than the world according to Calvin and Hobbes, and their belief still has validity.  I will discuss this contention in the third part of this essay: Rethinking Descartes’ Cogito: “I think, therefore we are (not I am).”  Part III: A Cross of Gold and the Golden Rule.

Rethinking Descartes’ Cogito: “I think, therefore we are (not I am).” Part I: Resolving the Popeye Perplex.

Burton Weltman

“I’m Popeye the Sailor Man.”

“I am what I am, and that’s all I am.”  So sayeth Popeye the Sailor Man before he downs a can of spinach and goes forth to pummel some bad guys and save some innocent people from harm.  Popeye was a popular comic strip superhero before there were superheroes, with bulging biceps and enormous strength that he derived from consuming spinach.  A precursor from the 1920’s of the age of superheroes that emerged during the Depression years of the 1930’s, Popeye shared a key trait with Superman and most other superheroes from then to the present day: a belief in himself as a miraculously conceived individual with unique characteristics that he derived from no one else.  Popeye was his own man, an independent individual, and there was no one to whom he owed a debt of gratitude for his specialness.  “I yam what I yam,” Popeye would repeatedly declaim in a slurred expression that was alternately and ambivalently humble – he did not claim to be more than he actually was – and proud – because what he actually was, was plenty good.

“I am that I am.”  So sayeth God to Moses before He goes forth to pummel the wicked Pharaoh with plagues and enables Moses to lead the Hebrews out of slavery.  From ancient times in Western society through the Middle Ages, it was generally held that no person could claim to be self-sufficient or the author of his/her own powers, and no one could claim to be unique.  Among Jews, Christians and Muslims, it was generally believed that only God could say that He was what He was and that it was the sin of pride for a person to say that of him/herself.  Pride, the belief that one was the author of his/her own virtues and accomplishments without the support of other people or God, was seen as the root of all evils.

But things changed.  Beginning with the Protestant Reformation and the emergence of capitalist economic systems in the 1500’s, and then with the Enlightenment and the rise of liberal and democratic social and political theories and practices in the 1700’s, an ideology of individualism developed that has permeated Western societies to the present day.  Pride, personal independence and self-sufficiency became virtues.  We routinely praise people whom we identify as being self-made and independent, and who have pride in their individual selves.  And we criticize people who do not claim to be independent and who seem not to have pride in themselves or their work.

We are, however, ambivalent about our pride. So, for example, expressions of pride in a personal achievement, such as scoring the winning touchdown in a football game, are often accompanied by “Thank the Lord” statements which ostensibly denote humility.  But even these statements sometimes seem to connote some special relationship of the speaker with God and some favoritism from the Lord, as though He actually cares who wins a football game.  Pride seems thereby to emerge even from within a statement expressing humility.  The upshot is that we tend in our society to display an ambivalence and an internalized contradiction between pride and humility that could be termed a Popeye Perplex.

Philosophical support for the ideology of individualism was supplied in the early 1600’s by Rene Descartes through his formulation of Cogito, ergo sum or “I think, therefore I am.”  Popeye was a disciple of Descartes.  But I do not think that either Descartes or Popeye got things quite right, and I think that a reformulation of the Cogito could be a way of resolving the Perplex.

The Cogito: If I yam what I yam, what are you?

We humans seem to be among the few creatures on earth who are aware of ourselves.  We are, in turn, plagued by persistent existential questions about who and what we are.  Hence Popeye’s almost obsessive concern to reassure himself and others that he was what he was, whatever that was.  Descartes’ claim that “I think, therefore I am” represents the predominant answer in our society to these existential questions.

Descartes’ formulation has been widely interpreted to mean that we humans are thinking creatures who can know only one thing for sure — that each of us exists as an isolated individual.  This conclusion is reflected in Popeye’s mantra of “I yam what I yam.”  Descartes’ Cogito is also popularly taken to mean that each individual is the center of his/her own universe and can rely only on his/her own observations and conclusions in deciding how life should be lived.  Based on these interpretations, Descartes has often been viewed as the godfather of individualism.

But Descartes’ actual intention was quite different.  His intention was to establish God as the center of the universe, as the central point of meaning for humankind, and to connect us to each other through God.  Descartes is often credited with allowing us to turn Adam’s Original Sin of pride and personal independence into a high virtue.  In this misreading of Descartes, we fail to understand that he actually sought to promote a collective communion with the Lord and a humble recognition of humans’ dependence on God.

Descartes begins his reasoning with an attempt to find something that a person can know for sure.  He expresses concern that some Evil Genius might be feeding him misleading perceptions which would lead him to false thoughts.  He then hits on the fact that he is thinking, a fact that is indubitable even if an Evil Genius is otherwise misleading him.  From this fact, he proceeds to the conclusion that a person can know for sure that he/she exists.  A person can ostensibly know this because each person is aware of his/her own thoughts and is, therefore, aware of him/herself as an existent being.

But Descartes does not rest with “I think, therefore I am.”  That was only a preliminary conclusion.  For if the only thing I can know for sure by myself is myself, how is it that I can successfully act in the world outside of myself?  We all act and operate successfully as though we know things outside of ourselves.  We communicate with each other, work with each other, and manipulate all sorts of other things as though we know about them.  How can this be?

Descartes’ answer is God: “When I turn my mind’s eye on myself, I understand that I am incomplete and dependent on another,” he says.  That other is God.  After considering all that he thinks he knows about the universe, Descartes comes to the conclusion that “all these attributes are such that, the more carefully I concentrate on them, the less possible it seems they could have originated from me alone [even from an Evil Genius].”  They must, instead, come from God and “it must be concluded that God necessarily exists.”

God brings things together and holds them together for us so that we can function in a world about which we cannot really know anything outside of our individual selves.  It is through our common connection with God that we can connect with each other and the outside world.  God is the deus ex machina who makes the machinery of the universe work.  In sum, instead of celebrating independence and promoting individualism, the Cogito functioned for Descartes as a proof of God’s existence and our communal dependence on Him.

Despite Descartes’ intentions, his Cogito has been used to justify the individualism that has permeated Western societies over the last four hundred years.  The ethical, political and economic theories developed during this time have almost invariably started with the isolated, independent individual and then tried to explain and justify society.  This has been true of even most socialistic and communitarian thinking.

Social and political practice has followed from this same starting point.  It is a starting point that puts most cooperative and communitarian theories and practices at a disadvantage.  It can lead to the extreme conclusion of Margaret Thatcher, England’s longtime conservative Prime Minister, that there is no such thing as society, only a bunch of individuals.  It therefore behooves anyone who wants to attack the prevailing individualism, and who hopes to replace it with a more cooperative social theory and practice, to address the Cogito and see if there is an alternative to its insistent focus on “I.”

Cognition and the Cogito: We before Me.

Descartes’ Cogito has been repeatedly criticized by philosophers even as it has become a popular mantra.  Criticism multiplied during the mid to late nineteenth century as philosophers increasingly rejected the dualism — mind versus body, self versus the world — of Descartes and his disciples and promoted, instead, more integrated and dialectical philosophies.

Kierkegaard, for example, complained that the Cogito was a circular argument that presupposes “I” and then uses it to prove the existence of “I.”  Nietzsche claimed that in phenomenologically examining one’s thoughts, one could at most say that “It thinks” but that one has no basis for saying “I.”  William James, following a suggestion from Georg Lichtenberg, went a step further and concluded that the most one could say is that “Thinking is occurring” but not that “I” think.  Following the lead of these critics, I think there are at least two problems with Descartes’ Cogito that lead me to conclude that a better formulation would be “I think, therefore we are,” a formulation that would provide philosophical and psychological support for a cooperative and communal social theory and practice.

First, Descartes confounds the difference between consciousness and self-consciousness.  The first “I” in his Cogito is not the same as the second “I” and there is no logical connection between the two.  Consciousness is an awareness of things outside of ourselves which is generally demonstrated by a responsiveness to those outside things.  Arguably, any living creature has consciousness of some sort because all living creatures, even amoebas, respond to outside influences and seem able to process information they receive from the outside world to reach conclusions upon which they act or at least react.  Self-consciousness, in contrast with simple consciousness, is an awareness of our awareness of things.  Seemingly, only the so-called higher life-forms, which do not include amoebas, have this second sort of awareness.

Descartes seems to think that because I – the first “I” in his Cogito – have an awareness of things, I have an awareness of my thinking about these things and, therefore, an awareness of myself.  That logic is flawed.  All Descartes has proved by saying “I think” is that he is comparable to an amoeba.  By the mere fact of thinking, Descartes has not established knowledge of himself or knowledge of his own existence.  He has merely established what William James called “a stream of consciousness,” a blur of perceptions and thoughts, the sort of thing that James Joyce portrayed brilliantly in Ulysses and confoundedly in Finnegans Wake.

The second problem with Descartes’ Cogito is that as a matter of philosophical logic and psychological fact he proceeds backwards.  It is not from my awareness of myself that I then gain awareness of others; it is from an awareness of others that I gain an awareness of myself.  Ontologically and psychologically, “We,” or at least “You,” comes before “Me.”  I cannot say “I” without an awareness of other people with whom I interact and with whom I can compare and contrast myself.  As such, a reformulated Cogito might better be “I think, therefore we are.”

There is an ethical dimension to this critique of the Cogito.  In order to get past an amoeba-like awareness of others as merely stimuli which require a response and reach a self-conscious awareness of myself as one among many beings, I must recognize other people as essentially the same as me and equal to me.  That is, in order to see myself, I must see others as beings to whom I can compare myself and against whom I can contrast myself.  If these others are completely foreign and unlike me, then I cannot see myself in them.  If they are completely like me, then I cannot see myself as distinguished from them.  In any case, I must first see others in order to then see myself.

I must also see other people as essentially equal to me in order to trust the evidence about myself that I receive from my interactions with them.  My self-awareness stems in large part from other people’s reactions to me, including their judgments of me, and from my reactions to them, including my judgments of them.  In order to trust their reactions to me and their judgments of me, I must respect them as people essentially equal to me.  In turn, in order to rely on my judgments of them, I must see them as like me and not so different as to be beyond comparison with me.  In sum, self-consciousness, an ability to say “I” and actually know what you are talking about, requires respect for others.  So too does self-respect.  Your respect for others is a catalyst for and a measure of your respect for yourself.

It is from this circumstance that I believe the Golden Rule emerges as a statement of fact as well as an ethical ideal.  “Do unto others as you would have them do unto you” and “Love thy neighbor as thyself” are descriptions of reality and not merely ideality, because the way we think of ourselves depends on how we think of others.  If we think of the well-being of others as connected with our own well-being, loving our neighbors as though they are extensions of ourselves, then we are likely to think well of ourselves.  If we disregard others’ well-being, we are likely to think poorly of ourselves.  In turn, the way we expect others to treat us depends on how we treat them, on doing unto them as we would have them do unto us if we were in their situation.  If we treat others poorly, we are likely to expect them to treat us poorly, and they probably will.  If we treat others well, we are likely to expect the same from them and are more likely to be treated that way.

Recognizing the reality of a reformulated Cogito and the Golden Rule is a way to solve the Popeye Perplex and resolve our chronic ambivalence and alternation between pride and humility.  This recognition means acknowledging that others are an extension of ourselves and that the way we treat others is a reflection of what we think of ourselves, so that thinking well of ourselves does not involve promoting ourselves above others, denigrating them, and seeing ourselves as self-made successes.  In turn, we do not have to humiliate ourselves in order to accept that we are part of a common humanity.

This acknowledgement that “We” precedes “Me” means accepting the contention of John Dewey and other pragmatic and progressive social thinkers that our self-development as individuals starts with our actual experience as social beings, and that theories and practices which reflect that fact protect individuals and promote individuality better than abstract formulations which start with isolated individuals.  Or as Karl Marx claimed, the free development of each is the basis for the development of all, and vice versa: the development of the collective is the foundation of the development of the individual.  Dewey and Marx were thereby contending that the world actually works according to the Golden Rule and were encouraging recognition of that fact as a first step toward realizing the benefits thereof.  I will elaborate on these contentions in the second part of this essay: Rethinking Descartes’ Cogito: “I think, therefore we are (not I am).”  Part II: The World According to Calvin and Hobbes.