FOUR CARDINAL ERRORS THAT ALMOST DESTROYED AMERICA
October 17, 2009
Cardinal Error Three. This error in a nutshell:
Americans—intelligentsia first, consciously, but eventually our so-called leaders and much of the public tacitly—abandoned the inherent religiosity of the Founders and embraced a naturalistic materialism also imported from Europe. This has had serious consequences for our basic moral convictions, which became more hedonistic and utilitarian with every decade. Hedonism sees pleasure as the fundamental good. Utilitarianism holds that an amorphous “greater good” is what counts. Ultimately, lives are expendable and may be sacrificed, even involuntarily, in the name of this “greater good,” always defined by those with money and power.
Early Americans may not have seen eye to eye on all theological specifics, but almost all had an inherent religiosity, as I call it. That is, they believed in a transcendent realm and a transcendent God who had created the world and from whose perfect character had come eternal standards of moral truth. Such notions were the key to a free and responsible people, as John Adams understood: “We have no government armed with the power capable of contending with human passions unbridled by morality and religion…. Our Constitution was made only for a moral and a religious people. It is wholly inadequate to the government of any other” (1798).
Many others of the time recognized the indispensability of Christian theology in the body politic, whatever their specific doctrinal differences. They realized, as have many wise men and women down through the ages, that life in this world only acquires a stable and non-ephemeral meaning through insight into a world beyond this one, and that this alone can provide the basis for a system of morals based on something other than enculturation—and when that fails, intimidation.
Religion is invariably a contentious subject, of course, because it deals with ultimates in ways no other area of human life touches. Spiritual convictions have, for some, infused lives immersed in a daily struggle for survival with ultimate meaning, and given their pain significance. (See Viktor Frankl, Man’s Search for Meaning, 1946.) Religion has had its dark side, too. Religious institutions and their leaders are as vulnerable as anyone to the lure of power. They founded the Spanish Inquisition, for instance. They held heads of state in thrall prior to the rise of dynasties such as the Rothschilds. Governments had generally been more than willing to sponsor national churches such as the Church of England. When governments sponsored specific bodies of religious doctrine, as they always had in Europe, it was invariably a recipe for repression. Thus Thomas Jefferson’s call, however often abused, for a “wall of separation between church and state” in his letter to the Danbury Baptist Association (1802).
So again Americans blazed a different trail, one set out in our First Amendment, which promised “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof;…” Seas of ink have been spilled over what this establishment clause means—in recent years, ignoring everything after the comma. The point is: the American Republic was of necessity Christianity-friendly but did not aim to establish any denomination as a national church akin to the Church of England. There was to be room for many denominations, many ways of worship. These were left up to the people themselves, without interference from the state and without their interfering with the state. The assumption here is that a society can be infused with a bottom-up commitment to one or more variations on a Christian worldview without allowing that worldview to be transformed into a top-down theocracy. If society is decentralized and potential instruments for the abuse of power are dispersed as widely as possible, we can prevent national theocracy even if local theocracies occasionally develop here and there. So why has the “establishment” clause been so troublesome? Why have numerous Supreme Court cases, one by one, forced Christianity out of education and out of the public square? The Adamses and Jeffersons of the period 1798–1802 never in their wildest imaginings thought that in two centuries’ time the federal government would completely overwhelm the states, that federal power would be everywhere, dictating terms to every institution. Since federal dollars going to any institution that openly acknowledged a Creator would be subject to legal challenge, and since by the twentieth century that meant most institutions, those institutions found themselves legally unable to acknowledge the Creator.
There were other factors curtailing the influence of Christianity. It is well known that Christianity was losing the allegiance of the European intelligentsia before the American founding. What went wrong? Historians and other scholars are bound to disagree on many of the specifics. Please allow me to offer some of my own thoughts—on how philosophy and philosophical theology overstepped their boundaries and set themselves up for a fall, so to speak. The story is longer than we can tell here. In a nutshell: major voices in medieval philosophical theology and early modern philosophy opened the door to skepticism and thence to humanism. They insisted (as in St. Anselm of Canterbury, St. Thomas Aquinas and René Descartes) that rigorous deductive proofs be given for God’s existence. These proofs employed the kinds of logical devices familiar from Aristotelian logic and Euclidean geometry. In the hands of fallible, finite minds, they turned out to have insurmountable weaknesses on their own terms. Soon, those who identified reasonability of belief with rigorous proof or at least decisive evidence found themselves forced to conclude that belief in the Christian God was not rational. We saw this attitude exemplified in (for example) the eighteenth century Scottish philosopher David Hume’s castigations of “natural religion,” and the emergence of Enlightenment humanism. In the face of formulations of, say, the “problem of evil and suffering,” moreover, Christianity’s problems seemed even worse. Given these human-centered criteria, belief in God seemed impossible to the “enlightened” human mind.
What was—and is—Enlightenment humanism? While hardly a unified school of thought, in all its forms it represented a man-centered view of existence with roots also going back to the ancient Greeks (especially Plato). It drew heavily on the work of social philosophers, the most famous of whom was Jean-Jacques Rousseau, author of Du Contrat Social (1762). It was optimistic and idealistic. Its primary tenets: (1) human nature is inherently good but has been corrupted; (2) what has corrupted human nature are society’s institutions—especially religious ones but also monarchies; (3) we therefore need to abolish these institutions and work for the transformation of society in toto; and (4) doing so will bring about indefinite social improvement and, one day, possibly even perfection. Liberté! Égalité! Fraternité!
It dawned on some observers (Edmund Burke, for example) that something was wrong when the French Revolution turned into a bloodbath.
Christians, of course, believe human nature is sinful because of the Fall and so not inherently good (see Rom. 3:23; Job 5:7; Isaiah 64:6, etc.), that this explains the corruption of institutions which were designed after all by human beings, and that therefore every transformational agenda rests on a false premise. Attempts to place a culture on the path of universal progress towards Utopia are therefore bound to turn tyrannical when human nature refuses to cooperate. Sin, for the Christian, explains the lust for power itself—even when manifested in supposedly Christian institutions. All of us recall Lord Acton’s adage that “power tends to corrupt, and absolute power corrupts absolutely.” According to a Christian worldview this doesn’t get matters quite right. Human nature is already corrupted by sin; concentrations of power just play to this corruption and make it more dangerous.
All of this explains, of course, why the humanist stance, in the hands of those who knew how to exploit it, eventually gave us tyrannies that made 1790s France look like a walk in the park by comparison: the Soviet Union under first Lenin and then Stalin, Nazi Germany under Hitler, China under Mao, and so on. Utopia looks like paradise to the intellectual with a vision—but actual flesh-and-blood human beings never really fit into his vision. In practice, they become cogwheels in the state machinery, their lives meaningless otherwise. Those who resist or who otherwise do not fit the Plan are crushed like insects by the state machinery’s police force. There are, of course, more “modest” forms of humanism, as we’ll see in the next and final installment of this series. These forms of humanism are not totalitarian, and in fact their purveyors would prefer to avoid totalitarianism using some of the behavior-modification techniques we saw in Part 2. What humanist systems all have in common, though, is that in practice the individual person is never an end in himself, or an entity with intrinsic value, owned only by himself and his God. He becomes a subject, if not to a bloody tyranny then to the corporate powers that have come to own the political class, a development that followed the direction British-American capitalism took after naturalistic materialism became the dominant theory of reality.
Even before Rousseau, physician-philosopher Julien Offray de la Mettrie penned L’Homme Machine (Man, a Machine) (1748). The stage was set for the rise of a materialism which added: (5) the universe is self-existent, and not created; (6) it is comprised exclusively of entities that obey the laws of physics and chemistry, or entities whose behaviors can be explained ultimately in terms of physical causality; and therefore (7) the totality of humanity—human nature, action, society—is ultimately subject to physical causality and biological explanation. The originators of the scientific revolution had all been essentially Christian, even if influenced by various ancient Greek schools (Pythagoras, for example). That is to say, they believed a rational, law-governed natural order had been created by a rational God, and thus was capable of being apprehended by beings who were rational because they had been created in God’s image (that’s us). If the leading intellects of a culture do not believe, a priori, that the universe is intelligible to the human mind, then for them science is pointless. Many ancient cultures never developed any sciences because they believed that nature was controlled by whimsical, irrational entities, not the law-governed providence of the Christian deity.
But as a radical empiricism took over European thought (especially in Great Britain), what couldn’t be observed through the senses or their extensions (scientific instruments) was dismissed as unreal. This extended to metaphysics, which Hume dismissed at the end of his famous Enquiry as “sophistry and illusion” (also in 1748). The views of a Wilhelm Wundt, discussed last week, would simply apply radical empiricism to children, as would his descendants, the behaviorists, to human beings generally. No one had ever seen a human soul (psyche), or experimented on it in the laboratory. Therefore it did not exist, and probably neither did a transcendent God. As for Jesus Christ dying on the cross to pay the price for our sins and then rising from the dead? Forget it!
By the early 1800s, other things being equal, an open clash between the two worldviews, the Christian one and the materialist one, seemed inevitable. Around this time a new development would change the rules of the game, making it difficult to talk about worldviews at all. Hume’s dismissal of metaphysics foreshadowed the new development. The pivotal figure would be the so-called father of sociology, Auguste Comte. Comte’s writings, from the multi-volume Cours de philosophie positive (1830–1842) through the Système de politique positive (1851–1854), would unleash on the world the doctrine known as positivism. Positivism formalized the idea that scientific-empirical methods alone held the key to knowledge of factual truth, that all religions were by nature superstitions, and that traditional philosophy could be set aside as a pointless exercise in building intellectual air-castles.
Comte argued that a civilization underwent three stages of development. He called this the “law of the three stages.” During the first stage, the theological one, supernatural explanations of the world prevail. Since the gods are whimsical at best, irrational and malevolent at worst, no explanation of the world in the scientific sense is possible. Comte granted that monotheism is the highest form of the first stage. One fickle deity makes for a more orderly universe than a legion of them!
During the second stage, the metaphysical one, philosophers spin grand systems out of their imaginations to explain the world—examples range from Aristotle’s cosmology, Aquinas and his doctrine of Natural Law, the dualism of Descartes the “father of modern philosophy,” down through Hegel and “idealism.” They speak of such notions as “natural law” and “natural rights.” Since in the last analysis none of these systems or notions are empirically testable, systemic philosophy can be nothing other than a clash of systems, with each new philosopher crossing out the system developed by his predecessor and substituting his own. Philosophical explanations of the world are more rational than supernatural ones, Comte granted, but they hold out no hope for genuine cognitive progress.
During the third stage, the scientific one, investigators of the world put forth hypotheses and test them step by step against observation, empirical testing, and data collection. Those that survive the best empirical tests deserve to be called knowledge. The general idea caught on, and soon dominated scholarly thought. Science, after all, seemed to be advancing by leaps and bounds. Physics was the quintessential science. With Darwin publishing not long after Comte’s ideas circulated, biology began slowly to coalesce around the idea of evolution by natural selection. Darwinism offered a materialistic theory of the origin of life and of all species including humanity. With Thomas Henry Huxley, “Darwin’s bulldog,” evolution soon dominated. Evidence that support for evolution owed more to a materialist worldview than to empirical evidence can be found, surprisingly, in textbooks themselves. My copy of paleontologist J. Marvin Weller’s The Course of Evolution states openly: “Darwin was particularly fortunate in his timing because the intellectual atmosphere in England was favorable for the consideration of a new materialistic theory of evolution, and he promptly gained the active support of several able and aggressive young biologists” (1969, p. 2).
Positivism, like any humanism, was optimistic about human nature, focusing on the possibilities of science and technique. Its advocates soon combined it with powerful analytic tools nineteenth-century logicians had developed, and in the early decades of the twentieth century it was transformed into logical positivism. Logical positivism relied on an idea going back to Kant (versions of it are found in Hume and Leibniz), that of a cleavage between two kinds of propositions—analytic and synthetic. The former were matters of logical truth, mathematical truth, or definition, and were empty of empirical content (that is, they said nothing about the world). The latter were matters of empirical truth (or falsity) because they could be tested against experience and experiment and would either meet or fail such tests. The propositions of religion (also ethics) seemed to be neither; so, to the logical positivist, they weren’t ‘real’ propositions at all. Perhaps they had emotive significance. In the legal realm, this translated into “legal positivism,” first given voice by utilitarian thinkers such as Jeremy Bentham, who dismissed all talk of natural rights as “nonsense upon stilts.”
In this view, propositions about worldviews are as meaningless as those of religion since they can’t be tested and validated by empirical science. The practical consequence of this was that a specific worldview such as materialism could win the allegiance of intellectuals without much in the way of criticism or opposition from within the intellectual community. The would-be critic or opponent could not formulate his objections in acceptable language! Schools of thought drawing attention to the metaphysical commitments of modern science failed to gain traction.
With the positivist Comte founding sociology, Wundt and Freud dominant in psychology, and Darwin in biology, by the early twentieth century materialism had won the day in Europe, and would soon be making extensive inroads in the U.S., working through the educational networks being set up and funded by, e.g., the Rockefellers. It would enter the mainstream of American education through John Dewey, also encountered in Part 2. Soon the positivist-materialist mindset would overwhelm American culture itself.
By the middle of the twentieth century, America was ready for Alfred C. Kinsey’s divorce of sexual activity from morality. Kinsey had been an entomologist—a biologist specializing in insect behavior; but most importantly, he was a hard-core materialist whose early interest in the sexual behavior of insects grew into an interest in sexuality generally. The Rockefeller Foundation bankrolled his Institute for Sex Research at Indiana University (known today as the Kinsey Institute for Research in Sex, Gender, and Reproduction), and the result was two volumes that caused the greatest stir since Darwin’s Origin of Species: Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953). One of Kinsey’s key theses was that children were sexual from infancy. At the time no one seemed too curious about how Kinsey and his team had obtained some of their data sets. In fact, Kinsey and his team were most likely conducting torturous experiments with children and even infants that would have been considered criminal offenses. Kinsey also interviewed sex criminals (some of them in prison); several members of his team were almost undoubtedly pedophiles. (See Judith Reisman, Kinsey: Crimes and Consequences, 2000.)
Kinsey became an academic celebrity whose ideas filtered into the cultural mainstream. There can be no doubt that the sex drive is one of our most powerful, and entirely nonrational in the intellectual’s sense of rationality—which is why cultures either keep it on a short leash or perish. The first highly visible Kinseyite was Hugh Hefner, founder of what became the Playboy empire. The sexual revolution followed in the 1960s. Hedonism (whether specifically sexual or not) was becoming America’s reigning personal ethic, as utilitarianism had already become its reigning social ethic. Religion was retreating to Sunday social events having little effect on a culture which less and less looked to it as a source of morality and meaning (see Harvey Cox, The Secular City, 1965). The extended family had already been replaced by the nuclear family as children left the nest in search of employment. Now, the nuclear family was jeopardized as cultural forces set hormone-driven teenagers against their more traditional parents; it was jeopardized further by economic forces of the sort we considered in Part 1: the devaluation of the dollar, resulting in declining actual incomes that sent mothers into the workplace as a matter of necessity.
Some would argue that the “religious right” began to restore “traditional values,” at least in part, during the 1980s. This movement, in retrospect, barely even scratched the surface of what had been a seismic cultural shift; and it never even seemed aware of the economic forces threatening the family. Neither the political nor the popular cultures were much swayed by the pretenses of the “religious right”; not helping matters was the penchant many of its supposed leaders seemed to have for shooting themselves in the foot with their own sexual misdeeds.
We should note that by this time Western philosophy (like history, as seen last week) had long been transformed into a professional academic guild and effectively neutered. The “trained” philosopher was invariably warehoused in an academic department as a professor; in a civilization built on bankers’ fractional money and being taken in a specific direction, he or she had few viable alternatives other than law school, cab driving, burger flipping, or—later—computer programming. Logical positivism, even though largely supplanted by even more specialized “analytic” academic fashions, had been the perfect vehicle for a “philosophy” that would never challenge the rapidly growing alliances of governmental bureaucracies, globalist bankers and the industrial capitalists who would grow up around them, or the cultural forces being bankrolled by huge foundations such as Rockefeller and Ford. Its tools of analysis would not allow it to ask the “big questions.” Logical positivism was really more of an antiphilosophy—suited to a societal power network that had just eliminated a potential threat. The discipline whose ancestor had set out the premises on which Western civilization—and ultimately the natural philosophical and moral bases for the founding of the American Republic—rested was now incapable of producing effective critiques of power. It had quietly and almost ashamedly retreated into the recesses of academic decoration, behind the colleges of business and centers for technical training that the land-grant system had originally arranged for.
So-called “academic radicals” would be permitted to exist. They would apply a kind of baby Marxism to the production of tomes that were just as specialized as the works of logical positivism and analysis, even less intelligible, and just as oblivious to the international banking cartel at the core of the capitalist system they sought to expose. They would eventually go crackers over race, gender, homosexuality, and so on, as if those with real power cared about minority groups, or were interested in gays beyond their capacity to disrupt “traditional morality.” This was academic philosophy by the end of the twentieth century: divided into dozens of micro-specialties, feminized, politically correct, and utterly unable to affect the real power system in Western civilization.
Those who as students refused to cooperate were refused admission to the guild (stable academic employment, that is). Such individuals—with way too much education for today’s dumbed down marketplace—pay a steep price. I know of cases of individuals with doctorates now engaged in a daily struggle to keep the wolves at bay, since neither government nor corporations will hire them. The job market for professors collapsed in the early 1970s. A situation where there are more jobseekers than jobs always invites abuses: conformists are hired while dissidents are weeded out. A system unable to affect power thus perpetuates itself in multiple academic guilds through the intellectual equivalent of inbreeding as ideologues hire their own in the name of “diversity.”
There were a few exceptions to the rule that a philosopher must be a professor. Ayn Rand comes to mind. A self-taught Russian immigrant, she would gain a substantial following in the 1960s and 1970s, especially among college students (much to the chagrin of their philosophy professors). Unfortunately, Rand, too, was essentially a materialist who accepted the Enlightenment view of human nature and made autonomous Reason into the equivalent of a deity, capable of solving all human problems. Rand’s “unknown ideal” was (what else?) capitalism, to which she sought to supply the philosophical foundation she maintained capitalism never had. Eventually, though, one had to notice that the perfect capitalist heroes of her novels, such as Howard Roark of The Fountainhead (1943) or John Galt and Hank Rearden in Atlas Shrugged (1957), simply have no counterparts in real life. Ultimately, “Objectivism” also rests on a false premise, and doubtless this was clear to those in power who allowed thousands of people to be diverted in a direction no more effective than academic Marxism. Likewise Misesians in economics, members of other free-market schools, and Libertarians generally, tending (with rare exceptions) to rest their views on the same Enlightenment premises, were allowed to believe that capitalists really wanted competition in markets free of state regulation, as opposed to dominance over markets achieved with the help of the state. To show the short-sightedness of intellectual defenses of capitalism today, all one has to do is look at the rise of the multi-billion dollar pharmaceutical industry (Big Pharma), the insurance industry (think AIG), the food industry (think of Monsanto), or any of a dozen other industries where corporate behemoths grew to dominance with the help of government regulation they embraced because it limited competition.
There were thinkers of various stripes who saw the cracks in this edifice and set out to expose them to the light of day.
Return to Comte. Comte—and those in the academic guilds who built on his work—simply assumed the metaphysical neutrality of modern science. That is, they assumed that science makes no substantive metaphysical propositions about the world. Just the facts, ma’am. This, too, is a false premise, as countless writers eventually pointed out. By the middle of the twentieth century several historians of ideas had shown the falsity of this image of science; two examples are Edwin Arthur Burtt’s The Metaphysical Foundations of Modern Physical Science (1924; rev. ed. 1932) and Alexander Koyré’s From the Closed World to the Infinite Universe (1957). No less a scientist than Einstein himself rejected the empiricist image of science, having repeatedly spoken (sometimes with great awe) of the comprehensibility of nature that must be presupposed a priori by physics.
The most visible rebel against the positivist image was Thomas S. Kuhn, in The Structure of Scientific Revolutions (1962). Kuhn argued in great detail, with a wealth of historical examples, that a mature science both does and must make substantial non-empirical presuppositions about its subject matter; and that these presuppositions change over time (scientific revolutions) in ways incompatible with the radical empiricism of logical positivism. British philosopher Nicholas Maxwell went further, arguing in his books From Knowledge to Wisdom (1984) and The Comprehensibility of the Universe (1998) and in many articles, that underlying the diversity of the sciences is a single metaphysical proposition which scientists presuppose a priori: the universe is intelligible or comprehensible to the human mind (does this sound familiar by now?).
Positivism and radical empiricism are, in fact, false—in the embarrassing position of being undermined by their own inner logic. The central thesis of positivism, that all valid knowledge rests on and must be tested against sense experience or scientific experiment, cannot itself be tested against sense experience or scientific experiment. It is therefore invalid by its own standard of validity. Empiricism faces a very similar predicament. It maintains that our senses and their extensions (plus inductive reasoning) are the sole sources of knowledge. Is this a knowledge claim, or isn’t it? It is not something we learn through the senses, through scientific instruments, or through inductive reasoning. Empiricism therefore also fails by its own standards.
Christian philosophy offers a metaphysics, epistemology, ethics, and so on. One of its pinnacles was expressed in the philosophical theology of St. Thomas Aquinas. Aquinas, despite his error in believing that God’s existence could be “demonstrated,” believed that a perfectly rational and benevolent God had created a rational world order, and that those created in His image had been “imprinted” with a finite version of the Divine Reason (Logos). God’s infinite and nonspatiotemporal perception of the Creation was that of Eternal Law; our finite perception was of Natural Law, made possible by that spark of Eternal Law in us. Divine Law then consisted of God’s direct commands to humanity; Human Law consisted of laws passed by governments, intended as subordinate to the first three. The point: God’s cosmos is “governed” by providential patterns inherent in all created things, and these patterns are intelligible (perhaps up to a point) to rational beings applying correct methods of inquiry.
This, I submit, provided the original basis for the rise of Western science, which, again, did not arise elsewhere precisely because the gods of other faiths were not perceived as rational or as having created rational beings capable of grasping a comprehensible world. This is what was abandoned: Enlightenment philosophers embraced radical empiricism in the 1700s, science embraced materialism in the 1800s, philosophy forgot how to talk about worldviews in that same century, and eventually Western culture itself nudged its Christian roots aside in the 1900s. We are now in the 2000s and already paying a very steep price!
Many intellectuals no longer grasp how we can speak of an intelligible world outside our linguistic and cultural constructs. These schools go by such names as post-structuralism, deconstructionism, and so on. Their advocates are fascinated by power. They believe that assertions of truth conceal longstanding structures of domination (usually by white men over everyone else). They utterly fail to see the real institutions of domination, of course: those of the international banking cartel and the semi-secret organizations that have grown up alongside it, often in mutual penetration and permeation.
One of Comte’s core interests was in the future of humanity and in the possibility of a “scientifically planned” society. Such visions would capture the imaginations of major writers of the 1900s. Examples include H.G. Wells and Bertrand Russell. Both, at different times, were members of the British Fabian Society, for whom the doors had been left wide open. This organization’s role in the near-destruction of America will be our next and final topic.
© 2009 Steven Yates - All Rights Reserved