
Michael Oakeshott Exposes Originalism’s Puerile Rationalistic Pretension to Jurisprudential Profundity

Last week in my post about Popperian Falsificationism, I quoted at length from Michael Oakeshott’s essay “Rationalism in Politics.” Rereading Oakeshott’s essay reminded me that Oakeshott’s work also casts an unflattering light on the faux-conservative jurisprudential doctrine of Originalism, of which right-wing pretend-populists masquerading as conservatives have become so enamored under the expert tutelage of their idol, Justice Scalia.

The faux-conservative nature of Originalism was nowhere made so obvious as in Scalia’s own Tanner Lectures, delivered at Princeton University, “Common-Law Courts in a Civil-Law System,” in which Scalia made plain his utter contempt for the common-law jurisprudence upon which the American legal system is founded. Here is that contempt on display in his mocking description of how law is taught in American law schools.

It is difficult to convey to someone who has not attended law school the enormous impact of the first year of study. Many students remark upon the phenomenon: It is like a mental rebirth, the acquisition of what seems like a whole new mode of perceiving and thinking. Thereafter, even if one does not yet know much law, he – as the expression goes – “thinks like a lawyer.”

The overwhelming majority of the courses taught in that first year of law school, and surely the ones that have the most impact, are courses that teach the substance, and the methodology, of the common law – torts, for example; contracts; property; criminal law. We lawyers cut our teeth upon the common law. To understand what an effect that must have, you must appreciate that the common law is not really common law, except insofar as judges can be regarded as common. That is to say, it is not “customary law,” or a reflection of the people’s practices, but is rather law developed by the judges. Perhaps in the very infancy of the common law it could have been thought that the courts were mere expositors of generally accepted social practices; and certainly, even in the full maturity of the common law, a well established commercial or social practice could form the basis for a court’s decision. But from an early time – as early as the Year Books, which record English judicial decisions from the end of the thirteenth century to the beginning of the sixteenth – any equivalence between custom and common law had ceased to exist, except in the sense that the doctrine of stare decisis rendered prior judicial decisions “custom.” The issues coming before the courts involved, more and more, refined questions that customary practice gave no answer to.

Oliver Wendell Holmes’s influential book The Common Law – which is still suggested reading for entering law students – talks a little bit about Germanic and early English custom. . . . Holmes’s book is a paean to reason, and to the men who brought that faculty to bear in order to create Anglo-American law. This is the image of the law – the common law – to which an aspiring lawyer is first exposed, even if he hasn’t read Holmes over the previous summer as he was supposed to. (pp. 79-80)

What intellectual fun all of this is! I describe it to you, not – please believe me – to induce those of you in the audience who are not yet lawyers to go to law school. But rather, to explain why first-year law school is so exhilarating: because it consists of playing common-law judge. Which in turn consists of playing king – devising, out of the brilliance of one’s own mind, those laws that ought to govern mankind. What a thrill! And no wonder so many lawyers, having tasted this heady brew, aspire to be judges!

Besides learning how to think about, and devise, the “best” legal rule, there is another skill imparted in the first year of law school that is essential to the making of a good common-law judge. It is the technique of what is called “distinguishing” cases. It is a necessary skill, because an absolute prerequisite to common-law lawmaking is the doctrine of stare decisis – that is, the principle that a decision made in one case will be followed in the next. Quite obviously, without such a principle common-law courts would not be making any “law”; they would just be resolving the particular dispute before them. It is the requirement that future courts adhere to the principle underlying a judicial decision which causes that decision to be a legal rule. (There is no such requirement in the civil-law system, where it is the text of the law rather than any prior judicial interpretation of that text which is authoritative. Prior judicial opinions are consulted for their persuasive effect, much as academic commentary would be; but they are not binding.)

Within such a precedent-bound common-law system, it is obviously critical for the lawyer, or the judge, to establish whether the case at hand falls within a principle that has already been decided. Hence the technique – or the art, or the game – of “distinguishing” earlier cases. A whole series of lectures could be devoted to this subject, and I do not want to get into it too deeply here. Suffice to say that there is a good deal of wiggle-room as to what an earlier case “holds.” In the strictest sense, the holding of a decision cannot go beyond the facts that were before the court. . . .

As I have described, this system of making law by judicial opinion, and making law by distinguishing earlier cases, is what every American law student, what every newborn American lawyer, first sees when he opens his eyes. And the impression remains with him for life. His image of the great judge — the Holmes, the Cardozo — is the man (or woman) who has the intelligence to know what is the best rule of law to govern the case at hand, and then the skill to perform the broken-field running through earlier cases that leaves him free to impose that rule — distinguishing one prior case on his left, straight-arming another one on his right, high-stepping away from another precedent about to tackle him from the rear, until (bravo!) he reaches his goal: good law. That image of the great judge remains with the former law student when he himself becomes a judge, and thus the common-law tradition is passed on and on. (pp. 83-85)

In place of common law judging, Scalia argues that the judicial function should be confined to the parsing of statutory or Constitutional texts to find their meaning, contrasting that limited undertaking to the anything-goes practice of common-law judging.

[T]he subject of statutory interpretation deserves study and attention in its own right, as the principal business of lawyers and judges. It will not do to treat the enterprise as simply an inconvenient modern add-on to the judges’ primary role of common-law lawmaking. Indeed, attacking the enterprise with the Mr. Fix-it mentality of the common-law judge is a sure recipe for incompetence and usurpation.

The state of the science of statutory interpretation in American law is accurately described by Professors Henry Hart and Albert Sacks (or by Professors William Eskridge and Philip Frickey, editors of the famous often-taught-but-never-published Hart-Sacks materials on the legal process) as follows:

Do not expect anybody’s theory of statutory interpretation, whether it is your own or somebody else’s, to be an accurate statement of what courts actually do with statutes. The hard truth of the matter is that American courts have no intelligible, generally accepted, and consistently applied theory of statutory interpretation.

Surely this is a sad commentary: We American judges have no intelligible theory of what we do most. (pp. 89-90)

But the Great Divide with regard to constitutional interpretation is not that between Framers’ intent and objective meaning; but rather that between original meaning (whether derived from Framers’ intent or not) and current meaning. The ascendant school of constitutional interpretation affirms the existence of what is called the “living Constitution,” a body of law that (unlike normal statutes) grows and changes from age to age, in order to meet the needs of a changing society. And it is the judges who determine those needs and “find” that changing law. Seems familiar, doesn’t it? Yes, it is the common law returned, but infinitely more powerful than what the old common law ever pretended to be, for now it trumps even the statutes of democratic legislatures.

If you go into a constitutional law class, or study a constitutional-law casebook, or read a brief filed in a constitutional-law case, you will rarely find the discussion addressed to the text of the constitutional provision that is at issue, or to the question of what was the originally understood or even the originally intended meaning of that text. Judges simply ask themselves (as a good common-law judge would) what ought the result to be, and then proceed to the task of distinguishing (or, if necessary, overruling) any prior Supreme Court cases that stand in the way. Should there be (to take one of the less controversial examples) a constitutional right to die? If so, there is. Should there be a constitutional right to reclaim a biological child put out for adoption by the other parent? Again, if so, there is. If it is good, it is so. Never mind the text that we are supposedly construing; we will smuggle these in, if all else fails, under the Due Process Clause (which, as I have described, is textually incapable of containing them). Moreover, what the Constitution meant yesterday it does not necessarily mean today. As our opinions say in the context of our Eighth Amendment jurisprudence (the Cruel and Unusual Punishments Clause), its meaning changes to reflect “the evolving standards of decency that mark the progress of a maturing society.”

This is preeminently a common-law way of making law, and not the way of construing a democratically adopted text. . . . The Constitution, however, even though a democratically adopted text, we formally treat like the common law. What, it is fair to ask, is our justification for doing so? (pp. 112-14)

Aside from engaging in the most ridiculous caricature of how common-law judging is conducted by actual courts, Scalia, in describing statutory interpretation as a science, either deliberately misrepresents or simply betrays his own misunderstanding of what science is all about. Scientists seek to uncover anomalies, contradictions, and gaps within a received body of conjectural knowledge, to devise solutions for those anomalies and contradictions, and to frame new hypotheses to fill the gaps in knowledge. And they evaluate their work by criticizing the logic of their solutions and hypotheses and by testing those solutions and hypotheses against empirical evidence.

What Scalia calls a science of statutory interpretation seems to be nothing more than a set of exegetical or hermeneutic rules, passively and mechanically applied to arrive at a supposedly authoritative reading of the statute, without regard to the substantive meaning or practical implications of the statute once those rules have been faithfully applied. In other words, the role of the judge is to read and interpret legal texts skillfully, not to render a just verdict or decision, unless, that is, justice is tautologically defined as the outcome of the Scalia-sanctioned exegetical/hermeneutic exercise. Scalia fraudulently attempts to endow this purely formal approach to textual exegesis with scientific authority, as if by so doing he could invoke the authority of science to override, or annihilate, the authority of judging.

Here is where I want to invite Michael Oakeshott into the conversation. I quote from his essay “Political Education” reprinted as chapter two of his Rationalism in Politics and Other Essays.

[A] tradition of behaviour is a tricky thing to get to know. Indeed, it may even appear to be essentially unintelligible. It is neither fixed nor finished; it has no changeless centre to which understanding can anchor itself; there is no sovereign purpose to be perceived or inevitable direction to be detected; there is no model to be copied, idea to be realized, or rule to be followed. Some parts of it may change more slowly than others, but none is immune from change. Everything is temporary. Nevertheless, though a tradition of behaviour is flimsy and elusive, it is not without identity, and what makes it a possible object of knowledge is the fact that all its parts do not change at the same time and that the changes it undergoes are potential within it. Its principle is a principle of continuity: authority is diffused between past, present, and future; between the old, the new, and what is to come. It is steady because, though it moves, it is never wholly in motion; and though it is tranquil, it is never wholly at rest. Nothing that ever belonged to it is completely lost; we are always swerving back to recover and make something topical out of even its remotest moments; and nothing for long remains unmodified. Everything is temporary, but nothing is arbitrary. Everything figures by comparison, not with what stands next to it, but with the whole. And since a tradition of behaviour is not susceptible of the distinction between essence and accident, knowledge of it is unavoidably knowledge of its detail: to know only the gist is to know nothing. What has to be learned is not an abstract idea, or a set of tricks, not even a ritual, but a concrete, coherent manner of living in all its intricateness. (pp. 61-62)

In a footnote to this passage, Oakeshott added the following comment.

The critic who found “some mystical qualities” in this passage leaves me puzzled: it seems to me an exceedingly matter-of-fact description of the characteristics of any tradition — the Common Law of England, for example, the so-called British Constitution, the Christian religion, modern physics, the game of cricket, shipbuilding.

I will close with another passage from Oakeshott, this time from his essay “Rationalism in Politics,” but with certain of Oakeshott’s terms retained in parentheses and my corresponding substitute terms supplied in brackets.

The heart of the matter is the pre-occupation of the [Originalist] (Rationalist) with certainty. Technique and certainty are, for him, inseparably joined because certain knowledge is, for him, knowledge which does not require to look beyond itself for its certainty; knowledge, that is, which not only ends with certainty but begins with certainty and is certain throughout. And this is precisely what [textual exegesis] (technical knowledge) appears to be. It seems to be a self-complete sort of knowledge because it seems to range between an identifiable initial point (where it breaks in upon sheer ignorance) and an identifiable terminal point, where it is complete, as in learning the rules of a new game. It has the aspect of knowledge that can be contained wholly between the covers of a [written statutory code], whose application is, as nearly as possible, purely mechanical, and which does not assume knowledge not itself provided in the [exegetical] technique. For example, the superiority of an ideology over a tradition of thought lies in the appearance of being self-contained. It can be taught best to those whose minds are empty: and if it is to be taught to one who already believes something, the first step of the teacher must be to administer a purge, to make certain that all prejudices and preconceptions are removed, to lay his foundation upon the unshakeable rock of absolute ignorance. In short, [textual exegesis] (technical knowledge) appears to be the only kind of knowledge which satisfies the standard of certainty which the [Originalist] (Rationalist) has chosen. (p. 16)

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making the prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, the criticisms. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of their theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement of being able to test independently any ad hoc assumption would have meant accepting the refutation of the Newtonian theory rather than positing the untestable – but true – ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are ok, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth seems flat and the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could reconcile the heliocentric theory with the apparent – and obvious – flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available – one predicting correctly whenever the theory under consideration predicts correctly, and predicting more accurately when the latter fails to predict correctly – the alternative theory is surely preferable; but that simply underscores the point that evaluating any theory in isolation is not a very meaningful exercise. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (Oakeshott’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of the rules of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of economists’ differing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were long and widely viewed as too extreme for probability theory to serve as an acceptable basis for empirical inference. However, those early negative attitudes toward accepting probability theory as the basis for making statistical inferences from data were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the attitude of the economics profession about how inferences should be drawn from data about empirical relationships is strikingly similar to Oakeshott’s discussion, and depressing in its implications for the decline of expert judgment among economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition against the kinds of expert judgments that statistically knowledgeable economists, including Nobel Prize winners like Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that they have almost totally displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake of which even Popper was never guilty.
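To make concrete the convention being criticized here, consider a minimal sketch of the classical significance test that became obligatory. The data and numbers below are entirely hypothetical, chosen only for illustration: a regression slope is estimated, its standard error computed, and the estimate is “credited” only if the t-statistic exceeds the conventional critical value.

```python
import math

# Hypothetical toy data, chosen only to illustrate the convention:
# the estimated slope is positive, yet fails the conventional test.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.0, 2.0, 1.0, 3.0]

n = len(x)
mx = sum(x) / n
my = sum(y) / n

# Ordinary least squares slope and intercept
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = my - slope * mx

# Residual sum of squares and the standard error of the slope (n - 2 df)
sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
se_slope = math.sqrt(sse / (n - 2) / sxx)

# The classical rule: credit the estimate only if |t| exceeds the
# critical value of Student's t with n - 2 = 2 degrees of freedom
# at the conventional 5% level (about 4.303 for 2 df).
t_stat = slope / se_slope
significant = abs(t_stat) > 4.303

print(slope, t_stat, significant)
```

Here the estimated slope is positive (0.8), but the t-statistic (about 1.89) falls short of the critical value, so under the post-1970s convention the sample would simply not be accepted as evidence of a positive relationship, whatever an expert’s judgment of the data’s provenance might suggest. The 5% threshold applied in the last step is precisely the arbitrary convention at issue.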

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only add that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they unwittingly hoist themselves with their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves – on the Positivists’ own criterion of meaning – meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, which was the basis on which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism,” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann, and had actually been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely – and in my view mistakenly – regarded as an essential pillar of the nineteenth-century liberal order – the international gold standard – collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler — less relentlessly laissez-faire – version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two young ambitious Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, who was then a first-term Senator from New Jersey, having graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980, which were subsequently enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making and improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy; neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism that favored highly progressive income taxation to redistribute income from rich to poor and price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported with little evidence of embarrassment by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and the decade-long rotting residue of Nixon’s controls — controls on crude oil and gasoline prices — finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform, and establishing Bradley and Gephardt as major figures in the Democratic Party, neoliberalism 2.0 was never embraced by the Democratic grassroots. Gephardt himself abandoned the neo-liberal banner in 1988 when he ran for President as a protectionist, pro-Labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse for leftists to impugn the motives of their ideological and political opponents in exactly the same way that right-wingers use “liberal” as a term of abuse — there are, of course, many others — with which to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing to tolerate the outcomes of market forces than they are.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that they were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward – pardon the cliché – a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end, not merely as a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily, not because you require him to do so. If you want that assistance you must secure it not by command but by persuasion. Persuasion can be secured in two ways: either by argument — persuading the other person to share your objective — or, if you can’t, or won’t, persuade the person to share your objective, by securing his or her agreement to help you through some form of compensation that induces the person to provide you the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like rules of grammar. Just as rules of grammar don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that the rules allow sufficient space to agents to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life was as much a legal philosopher as an economist, but it is a view that John Rawls, Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by the laws, because the ruler can change the law to permit any action that the ruler wants to take. How, then, is the rule of law consistent with a ruler that is empowered to make the law to which he is supposedly subject? That is the dilemma that every liberal state must cope with. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling when reducing the top rates would likely entail little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions in the lower tax brackets, which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income tax rates as inherently discriminatory, imposing a disproportionate burden on a minority — the wealthy — of the population. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could allow to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for other reasons unrelated to income, but even a highly progressive income tax can’t be discriminatory on its face, as Hayek alleged, when the progressivity is embedded in a schedule of rates applicable to everyone whose income reaches the specified thresholds.
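The point that a progressive schedule applies uniformly can be made concrete with a minimal sketch. The bracket thresholds and rates below are hypothetical, chosen purely for illustration; the point is that every taxpayer’s income passes through the same schedule, so no one is singled out on the face of the law, even though effective rates rise with income.

```python
# Hypothetical progressive schedule: (lower threshold, marginal rate).
# Every taxpayer's income is run through the same brackets.
BRACKETS = [(0, 0.10), (50_000, 0.25), (200_000, 0.40)]

def tax_due(income: float) -> float:
    """Apply the same schedule of marginal rates to any taxpayer's income."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            # Only the slice of income falling inside this bracket is taxed at its rate.
            tax += (min(income, upper) - lower) * rate
    return tax
```

Under this toy schedule, someone earning 50,000 owes 5,000 (an effective rate of 10%), while someone earning 250,000 owes 62,500 (an effective rate of 25%): the burden is progressive, yet the rule itself is identical for both.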

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are unnecessary to call forth the effort required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort or output. We also know, as a result of a classic 1971 paper by Jack Hirshleifer, that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.
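The Hirshleifer point can be illustrated with a toy calculation. The numbers here are mine, purely illustrative and not drawn from his paper: when trading gains are transfers between traders, the winner’s income looks like a contribution to output, yet the resources both sides spend on forecasting are a net social loss.

```python
# Toy illustration (hypothetical numbers) of private income diverging from
# net social output when trading gains are transfers, not production.
research_cost = 10.0   # resources each of two traders spends on forecasting
transfer = 100.0       # gain captured by the better-informed trader, lost by the other

winner_income = transfer - research_cost        # looks like "earned income" of 90
loser_income = -transfer - research_cost        # the other side's loss: -110

# The transfer nets out; only the forecasting resources are consumed.
net_social_output = winner_income + loser_income  # -20: a net social loss
```

The winner’s 90 of “earned income” corresponds to no addition to output at all; society as a whole is 20 poorer, which is exactly why taxing such incomes need not reduce, and may even raise, aggregate output.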

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. The taxation of high incomes having been reduced so drastically, the number of people earning very high incomes since 1986 has grown very rapidly. Does that increase reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, the occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.
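The size of the incentive change is worth spelling out. A simple back-of-the-envelope calculation shows why the 1981–1986 rate cuts so sharply increased the reward to earning a marginal dollar at the top:

```python
def after_tax_share(marginal_rate: float) -> float:
    """Fraction of a marginal dollar of income kept after tax."""
    return 1.0 - marginal_rate

kept_before = after_tax_share(0.70)  # at a 70% top rate: 30 cents kept per marginal dollar
kept_after = after_tax_share(0.28)   # at a 28% top rate: 72 cents kept per marginal dollar

increase = kept_after / kept_before  # the marginal reward rose 2.4-fold
```

A 2.4-fold increase in the after-tax reward at the margin is easily large enough to redirect career choices toward the highest-paying occupations, which is the mechanism conjectured above.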

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes has risen, as has the social acceptability of conspicuous — even brazen — public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.

Two Cheers (Well, Maybe Only One and a Half) for Falsificationism

Noah Smith recently wrote a defense (sort of) of falsificationism in response to Sean Carroll’s suggestion that the time has come for scientists to throw falsificationism overboard as a guide for scientific practice. While Noah isn’t ready to throw out falsification as a scientific ideal, he does acknowledge that not everything that scientists do is really falsifiable.

But, as Carroll himself seems to understand in arguing against falsificationism, even though a particular concept or entity may itself be unobservable (and thus unfalsifiable), the larger theory of which it is a part may still have implications that are falsifiable. This is the case in economics. A utility function or a preference ordering is not observable, but by imposing certain conditions on that utility function, one can derive some (weakly) testable implications. This is exactly what Karl Popper, who introduced and popularized the idea of falsificationism, meant when he said that the aim of science is to explain the known by the unknown. To posit an unobservable utility function or an unobservable string is not necessarily to engage in purely metaphysical speculation, but to do exactly what scientists have always done, to propose explanations that would somehow account for some problematic phenomenon that they had already observed. The explanations always (or at least frequently) involve positing something unobservable (e.g., gravitation) whose existence can only be indirectly perceived by comparing the implications (predictions) inferred from the existence of the unobservable entity with what we can actually observe. Here’s how Popper once put it:

Science is valued for its liberalizing influence as one of the greatest of the forces that make for human freedom.

According to the view of science which I am trying to defend here, this is due to the fact that scientists have dared (since Thales, Democritus, Plato’s Timaeus, and Aristarchus) to create myths, or conjectures, or theories, which are in striking contrast to the everyday world of common experience, yet able to explain some aspects of this world of common experience. Galileo pays homage to Aristarchus and Copernicus precisely because they dared to go beyond this known world of our senses: “I cannot,” he writes, “express strongly enough my unbounded admiration for the greatness of mind of these men who conceived [the heliocentric system] and held it to be true […], in violent opposition to the evidence of their own senses.” This is Galileo’s testimony to the liberalizing force of science. Such theories would be important even if they were no more than exercises for our imagination. But they are more than this, as can be seen from the fact that we submit them to severe tests by trying to deduce from them some of the regularities of the known world of common experience by trying to explain these regularities. And these attempts to explain the known by the unknown (as I have described them elsewhere) have immeasurably extended the realm of the known. They have added to the facts of our everyday world the invisible air, the antipodes, the circulation of the blood, the worlds of the telescope and the microscope, of electricity, and of tracer atoms showing us in detail the movements of matter within living bodies.  All these things are far from being mere instruments: they are witness to the intellectual conquest of our world by our minds.

So I think that Sean Carroll, rather than arguing against falsificationism, is really thinking of falsificationism in the broader terms that Popper himself laid out a long time ago. And I think that Noah’s shrug-ability suggestion is also, with appropriate adjustments for changes in expository style, entirely in the spirit of Popper’s view of falsificationism. But to make that point clear, one needs to understand what motivated Popper to propose falsifiability as a criterion for distinguishing between science and non-science. Popper’s aim was to overturn logical positivism, a philosophical doctrine associated with the group of eminent philosophers who made up what was known as the Vienna Circle in the 1920s and 1930s. Building on the British empiricist tradition in science and philosophy, the logical positivists argued that our knowledge of the external world is based on sensory experience, and that apart from the tautological truths of pure logic (of which mathematics is a part) there is no other knowledge. Furthermore, no meaning could be attached to any statement whose validity could not be checked either by examining its logical validity as an inference from explicit premises or by verifying it through sensory experience. According to this criterion, much of human discourse about ethics, morals, aesthetics, religion and much of philosophy was simply meaningless, aka metaphysics.

Popper, who grew up in Vienna and was on the periphery of the Vienna Circle, rejected the idea that logical tautologies and statements potentially verifiable by observation are the only conveyors of meaning between human beings. Metaphysical statements can be meaningful even if they can’t be confirmed by observation; they are meaningful if they are coherent and not nonsensical. If there is a problem with metaphysical statements, the problem is not that they lack meaning. In making this argument, Popper suggested an alternative criterion of demarcation to that between meaning and non-meaning: a criterion of demarcation between science and metaphysics. Science is indeed different from metaphysics, but the difference is not that science is meaningful and metaphysics is not. The difference is that scientific statements can be refuted (or falsified) by observations while metaphysical statements cannot. As a matter of logic, the only way to refute a proposition by an observation is for the proposition to assert that the observation was not possible. Unless you can say what observation would refute what you are saying, you are engaging in metaphysical, not scientific, talk. This gave rise to Popper’s then very surprising result. If you positively assert the existence of something – an assertion potentially verifiable by observation, and hence for logical positivists the quintessential scientific statement — you are making a metaphysical, not a scientific, statement. The statement that something (e.g., God, a string, or a utility function) exists cannot be refuted by any observation. However, the unobservable phenomenon may be part of a theory with implications that could be refuted by some observation. But in that case it would be the theory, not the posited object, that was refuted.

In fact, Popper thought that metaphysical statements not only could be meaningful, but could even be extremely useful, coining the term “metaphysical research programs,” because a metaphysical, unfalsifiable idea or theory could be the impetus for further research, possibly becoming scientifically fruitful in the way that evolutionary biology eventually sprang from the possibly unfalsifiable idea of survival of the fittest. That sounds to me pretty much like Noah’s idea of shrug-ability.

Popper was largely successful in overthrowing logical positivism, though whether it was entirely his doing (as he liked to claim) and whether it was fully overthrown are not so clear. One reason to think that it was not all his doing is that there is still a lot of confusion about what the falsification criterion actually means. Reading Noah Smith and Sean Carroll, I almost get the impression that they think the falsification criterion distinguishes not just between science and non-science but between meaning and non-meaning. Otherwise, why would anyone think that there is any problem with introducing an unfalsifiable concept into scientific discussion? When Popper argued that science should aim at proposing and testing falsifiable theories, he meant that one should not design a theory so that it can’t be tested, or adopt stratagems — ad hoc hypotheses — that serve only to account for otherwise falsifying observations. But if someone comes up with a creative new idea, and the idea can’t be tested, at least given the current observational technology, that is not a reason to reject the theory, especially if the new theory accounts for otherwise unexplained observations.

Another manifestation of Popper’s imperfect success in overthrowing logical positivism is that Paul Samuelson in his classic The Foundations of Economic Analysis chose to call the falsifiable implications of economic theory “meaningful theorems.” By naming those implications “meaningful theorems,” Samuelson clearly was operating under the positivist presumption that only a proposition that could (at least in principle) be falsified by observation was meaningful. However, that formulation reflected an untenable compromise between Popper’s criterion for distinguishing science from metaphysics and the logical positivist criterion for distinguishing meaningful from meaningless statements. Instead of referring to meaningful theorems, Samuelson should have called them, more modestly, testable or scientific theorems.

So, at least as I read Popper, Noah Smith and Sean Carroll are only discovering what Popper already understood a long time ago.

At this point, some readers may be wondering why, having said all that, I seem to have trouble giving falsificationism (and Popper) even two cheers. So I am afraid that I will have to close this post on a somewhat critical note. The problem with Popper is that his rhetoric suggests that scientific methodology is a lot more important than it really is. Apart from some egregious examples like Marxism and Freudianism, which were deliberately formulated to exclude the possibility of refutation, there really aren’t that many theories entertained by scientists that can be ruled out of order on strictly methodological grounds. Popper can occasionally provide some methodological reminders to scientists to avoid relying on ad hoc theorizing — at least when a non-ad-hoc alternative is handy — but beyond that I don’t think methodology counts for very much in the day-to-day work of scientists. Many theories are difficult to falsify, but the difficulty is not necessarily the result of deliberate choices by the theorists; it is the result of the nature of the problem and the nature of the evidence that could potentially refute the theory. The evidence is what it is. It is nice to come up with a theory that predicts a novel fact that can be observed, but nature is not always so accommodating to our theories.

There is a kind of rationalistic (I am using “rationalistic” in the pejorative sense of Michael Oakeshott) faith that following the methodological rules that Popper worked so hard to formulate will guarantee scientific progress. Those rules tend to encourage an unrealistic focus on making theories testable (especially in economics) when by their nature the phenomena are too complex for theories to be formulated in ways that are susceptible to decisive testing. And although Popper recognized that empirical testing of a theory has very limited usefulness unless the theory is being compared to some alternative theory, too often discussions of theory testing are in the context of testing a single theory in isolation. Kuhn and others have pointed out that science is not routinely carried out in the way that Popper suggested it should be. To some extent, Popper acknowledged the truth of that observation, though he liked to cite examples from the history of science to illustrate his thesis; his response was that he was offering a normative, not a positive, theory of scientific discovery. But why should we assume that Popper had more insight into the process of discovery for particular sciences than the practitioners of those sciences actually doing the research? That is the nub of the criticism of Popper that I take away from Oakeshott’s work. Life and any form of endeavor involve the transmission of ways of doing things, traditions, that cannot be reduced to a set of rules, but require education, training, practice and experience. That’s what Kuhn called normal science. Normal science can go off the tracks too, but it is naïve to think that a list of methodological rules is what will keep science moving constantly in the right direction. Why should Popper’s rules necessarily trump the lessons that practitioners have absorbed from the scientific traditions in which they have been trained? I don’t believe that there is any surefire recipe for scientific progress.

Nevertheless, when I look at the way economics is now being practiced and taught, I can’t help but think that a dose of Popperianism might not be the worst thing that could be administered to modern economics. But that’s a discussion for another day.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
