
Justice Scalia and the Original Meaning of Originalism


(I almost regret writing this post because it took a lot longer to write than I expected and I am afraid that I have ventured too deeply into unfamiliar territory. But having expended so much time and effort on this post, I must admit to being curious about what people will think of it.)

I resist the temptation to comment on Justice Scalia’s character beyond one observation: a steady stream of irate outbursts may have secured his status as a right-wing icon and burnished his reputation as a minor literary stylist, but his eruptions brought no credit to him or to the honorable Court on which he served.

But I will comment at greater length on the judicial philosophy, originalism, which he espoused so tirelessly. The first point to make, in discussing originalism, is that there are at least two concepts of originalism that have been advanced. The first and older concept is that the provisions of the US Constitution should be understood and interpreted as the framers of the Constitution intended those provisions to be understood and interpreted. The task of the judge, in interpreting the Constitution, would then be to reconstruct the collective or shared state of mind of the framers and, having ascertained that state of mind, to interpret the provisions of the Constitution in accord with that collective or shared state of mind.

A favorite originalist example is the “cruel and unusual punishment” provision of the Eighth Amendment to the Constitution. Originalists dismiss all arguments that capital punishment is cruel and unusual, because the authors of the Eighth Amendment could not have believed capital punishment to be cruel and unusual. If that’s what they believed then, why, having passed the Eighth Amendment, did the first Congress proceed to impose the death penalty for treason, counterfeiting, and other offenses in 1790? So it seems obvious that the authors of the Eighth Amendment did not intend to ban capital punishment. If so, originalists argue, the “cruel and unusual” provision of the Eighth Amendment can provide no ground for ruling that capital punishment violates the Eighth Amendment.

There are a lot of problems with the original-intent version of originalism, the most obvious being the impossibility of attributing an unambiguous intention to the 39 delegates to the Constitutional Convention who signed the final document. The Constitutional text that emerged from the Convention was a compromise among many competing views and interests, and it did not necessarily conform to the intentions of any of the delegates, much less all of them. True, James Madison was the acknowledged author of the Bill of Rights, so if we are parsing the Eighth Amendment, we might, in theory, focus exclusively on what he understood the Eighth Amendment to mean. But focusing on Madison alone would be problematic, because Madison had actually opposed adding a Bill of Rights to the original Constitution; he introduced the Bill of Rights as amendments in the first Congress only because the Constitution would not have been approved without an understanding that the Bill of Rights he had opposed would be adopted as amendments to the Constitution. The inherent ambiguity in the notion of intention, even in the case of a single individual acting out of mixed, if not conflicting, motives – an ambiguity compounded when action is undertaken collectively – causes the notion of original intent to dissolve into nothingness when one tries to apply it in practice.

Realizing that trying to determine the original intent of the authors of the Constitution (including the Amendments thereto) is a fool’s errand, many originalists, including Justice Scalia, tried to salvage the doctrine by shifting its focus from the inscrutable intent of the Framers to the objective meaning that a reasonable person would have attached to the provisions of the Constitution when it was ratified. Because the provisions of the Constitution are expressed in either ordinary words or legal terms of art, the meaning that would reasonably have been attached to those provisions can supposedly be ascertained by consulting contemporary sources – dictionaries or legal treatises – in which those words and terms were defined. It is this original meaning that, according to Scalia, must remain forever inviolable, because to change the meaning of provisions of the Constitution would allow unelected judges to amend the Constitution covertly, evading the amendment process spelled out in Article V and thereby nullifying the principle of a written constitution that constrains the authority and powers of all branches of government. Instead of being limited by the Constitution, judges not bound by the original meaning would arrogate to themselves an unchecked power to impose their own values on the rest of the country.

To return to the Eighth Amendment, Scalia would say that the meaning attached to the term “cruel and unusual” when the Eighth Amendment was passed was clearly not so broad that it prohibited capital punishment. Otherwise, how could Congress, having voted to adopt the Eighth Amendment, proceed to make counterfeiting and treason and several other federal offenses capital crimes? Of course, that’s a weak argument, because Congress, like any other representative assembly, is under no obligation or constraint to act consistently. It’s well known that democratic decision-making need not be consistent, and just because a general principle is accepted doesn’t mean that the principle will not be violated in specific cases. A written Constitution is supposed to impose some discipline on democratic decision-making for just that reason. But there was no mechanism in place to prevent such inconsistency, judicial review of Congressional enactments not having become part of the Constitutional fabric until John Marshall’s 1803 opinion in Marbury v. Madison made judicial review, quite contrary to the intention of many of the Framers, an organic part of the American system of governance.

Indeed, in 1798, less than ten years after the Bill of Rights was adopted, Congress enacted the Alien and Sedition Acts, which, I am sure even Justice Scalia would have acknowledged, violated the First Amendment prohibition against abridging the freedom of speech and the press. To be sure, the Congress that passed the Alien and Sedition Acts was not the same Congress that passed the Bill of Rights, but one would hardly think that the original meaning of abridging freedom of speech and the press had been forgotten in the intervening decade. Nevertheless, to uphold his version of originalism, Justice Scalia would have to either argue that the original meaning of the First Amendment had been forgotten or acknowledge that one can’t simply infer from the actions of a contemporaneous or nearly contemporaneous Congress what the original meaning of a provision of the Constitution was, because it is clearly possible that the actions of Congress were contrary to some supposed original meaning of the provisions of the Constitution.

Be that as it may, for purposes of the following discussion, I will stipulate that we can ascertain an objective meaning that a reasonable person would have attached to the provisions of the Constitution at the time it was ratified. What I want to examine is Scalia’s idea that it is an abuse of judicial discretion for a judge to assign a meaning to any Constitutional term or provision that is different from that original meaning. To show what is wrong with Scalia’s doctrine, I must first explain that it is based on the legal philosophy known as legal positivism. Whether Scalia realized that he was a legal positivist I don’t know, but it is clear that he took the view that the validity and legitimacy of a law or a legal provision or a legal decision (including a Constitutional provision or decision) derive from an authority empowered to make law, and that no one other than an authorized law-maker or sovereign is empowered to make law.

According to legal positivism, all law, including Constitutional law, is understood as an exercise of will – a command. What distinguishes a legal command from, say, a mugger’s command to a victim to turn over his wallet is that the mugger is not a sovereign. Not only does the sovereign get what he wants, the sovereign, by definition, gets it legally; we are not only forced — compelled — to obey, but, to add insult to injury, we are legally obligated to obey. And morality has nothing to do with law or legal obligation. That’s the philosophical basis of legal positivism to which Scalia, wittingly or unwittingly, subscribed.

Luckily for us, we Americans live in a country in which the people are sovereign, but the power of the people to exercise their will collectively was delimited and circumscribed by the Constitution ratified in 1788. Under positivist doctrine, the sovereign people in creating the government of the United States of America laid down a system of rules whereby the valid and authoritative expressions of the will of the people would be given the force of law and would be carried out accordingly. The rule by which the legally valid, authoritative, command of the sovereign can be distinguished from the command of a mere thug or bully is what the legal philosopher H. L. A. Hart called a rule of recognition. In the originalist view, the rule of recognition requires that any judicial judgment accord with the presumed original understanding of the provisions of the Constitution when the Constitution was ratified, thereby becoming the authoritative expression of the sovereign will of the people, unless that original understanding has subsequently been altered by way of the amendment process spelled out in Article V of the Constitution. What Scalia and other originalists are saying is that any interpretation of a provision of the Constitution that conflicts with the original meaning of that provision violates the rule of recognition and is therefore illegitimate. Hence, Scalia’s simmering anger at decisions of the court that he regarded as illegitimate departures from the original meaning of the Constitution.

But legal positivism is not the only theory of law. F. A. Hayek, who, despite his good manners, somehow became a conservative and libertarian icon a generation before Scalia, subjected legal positivism to withering criticism in volume one of Law, Legislation and Liberty. But the classic critique of legal positivism was written a little over a half century ago by Ronald Dworkin, in his essay “Is Law a System of Rules?” (aka “The Model of Rules”). Dworkin’s main argument was that no system of rules can be sufficiently explicit and detailed to cover all the possible fact patterns that a judge may have to adjudicate. Legal positivists view the exercise of discretion by judges as an exercise of personal will, authorized by the sovereign in cases in which no legal rule exactly fits the facts of a case. Dworkin argued that, rather than an imposition of judicial will authorized by the sovereign, the exercise of judicial discretion is an application of the deeper principles relevant to the case, allowing the judge to determine which, among the many possible rules that could be applied to the facts, best fits the totality of the circumstances, including the prior judicial decisions that the judge must take into account. According to Dworkin, law and the legal system as a whole are not an expression of sovereign will, but a continuing articulation of principles in terms of which specific rules of law must be understood, interpreted, and applied.

The meaning of a legal or Constitutional provision can’t be fixed at a single moment, because meaning, like all social institutions, evolves and develops organically. Not being an expression of the sovereign will, the meaning of a legal term or provision cannot be identified by a putative rule of recognition – e.g., the original-meaning doctrine – that freezes the meaning of the term at a particular moment in time. It is not true, as Scalia and originalists argue, that conceding that the meaning of Constitutional terms and provisions can change and evolve allows unelected judges to substitute their will for the sovereign will enshrined when the Constitution was ratified. When a judge acknowledges that the meaning of a term has changed, the judge does so because that new meaning has already been foreshadowed in earlier cases with which his decision in the case at hand must comport. There is always a danger that the reasoning of a judge is faulty, but faulty reasoning can beset judges claiming to apply the original meaning of a term, as Chief Justice Taney did in his infamous Dred Scott opinion, in which Taney argued that the original meaning of the term “property” included property in human beings.

Here is an example of how a change in meaning may be required by a change in our understanding of a concept. It may not be the best example to shed light on the legal issues, but it is the one that occurs to me as I write this. About a hundred years ago, Bertrand Russell and Alfred North Whitehead were writing one of the great philosophical works of the twentieth century, Principia Mathematica. Their objective was to prove that all of mathematics could be reduced to pure logic. It was a grand and heroic effort that they undertook, and their work will remain a milestone in the history of philosophy. If Russell and Whitehead had succeeded in their effort of reducing mathematics to logic, it could properly be said that mathematics is really the same as logic, and the meaning of the word “mathematics” would be no different from the meaning of the word “logic.” But if the meaning of mathematics were indeed the same as that of logic, it would not be the result of Russell and Whitehead having willed “mathematics” and “logic” to mean the same thing, Russell and Whitehead being possessed of no sovereign power to determine the meaning of “mathematics.” Whether mathematics is really the same as logic depends on whether all of mathematics can be logically deduced from a set of axioms. No matter how much Russell and Whitehead wanted mathematics to be reducible to logic, the factual question of whether mathematics can be reduced to logic has an answer, and the answer is completely independent of what they wanted it to be.

Unfortunately for Russell and Whitehead, the Viennese mathematician Kurt Gödel came along nearly two decades after they completed the third and final volume of their masterpiece and proved an “incompleteness theorem” showing that mathematics could not be reduced to logic – mathematics is therefore not the same as logic – because in any consistent axiomatic system rich enough to express arithmetic, some true propositions of arithmetic will be logically unprovable. The meaning of mathematics is therefore demonstrably not the same as the meaning of logic. This difference in meaning had to be discovered; it could not be willed.
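For readers who want the claim in its modern schematic form (the notation is a present-day gloss, not Gödel’s 1931 formulation, and certainly not Russell and Whitehead’s):

\[
T \ \text{consistent, recursively axiomatizable, and containing basic arithmetic} \;\Longrightarrow\; \text{there is a sentence } G_T, \text{ true in } \mathbb{N}, \text{ with } T \nvdash G_T .
\]

That is, some sentence of arithmetic is true but unprovable from the axioms of T, so no such axiom system captures all arithmetical truth.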

Actually, it was Humpty Dumpty who famously anticipated the originalist theory that meaning is conferred by an act of will.

“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

In Humpty Dumpty’s doctrine, meaning is determined by a sovereign master. In originalist doctrine, the sovereign master is the presumed will of the people when the Constitution and the subsequent Amendments were ratified.

So the question whether capital punishment is “cruel and unusual” can’t be answered, as Scalia insisted, simply by invoking a rule of recognition that freezes the meaning of “cruel and unusual” at the presumed meaning it had in 1789, because the point of a rule of recognition is to identify the sovereign will that is given the force of law, while the meaning of “cruel and unusual” does not depend on anyone’s will. If a judge reaches a decision based on a meaning of “cruel and unusual” different from the supposed original meaning, the judge is not abusing his discretion; the judge is engaged in judicial reasoning. The reasoning may be good or bad, right or wrong, but judicial reasoning is not rendered illegitimate just because it assigns a meaning to a term different from the supposed original meaning. The test of judicial reasoning is how well it accords with the totality of judicial opinions and relevant principles from which the judge can draw in supporting his reasoning. Invoking a supposed original meaning of what “cruel and unusual” meant to Americans in 1789 does not tell us how to understand the meaning of “cruel and unusual,” just as the question whether logic and mathematics are synonymous cannot be answered by insisting that Russell and Whitehead were right in thinking that mathematics and logic are the same thing. (I note for the record that I personally have no opinion about whether capital punishment violates the Eighth Amendment.)

One reason meanings change is that circumstances change. The meaning of freedom of the press and freedom of speech may have been perfectly clear in 1789, but our conception of what is protected by the First Amendment has certainly expanded since the First Amendment was ratified. As new media for conveying speech have been introduced, the courts have brought those media under the protection of the First Amendment. Scalia made a big deal of joining the majority in Texas v. Johnson, a 1989 case in which the conviction of a flag burner was overturned. Scalia liked to cite that case as proof of his fidelity to the text of the Constitution; while pouring scorn on the flag burner, Scalia announced that despite his righteous desire to exact a terrible retribution from the bearded weirdo who burned the flag, he had no choice but to follow – heroically, in his estimation – the text of the Constitution.

But flag-burning is certainly a form of symbolic expression, and it is far from obvious that the original meaning of the First Amendment included symbolic expression. To be sure, some forms of symbolic speech were recognized as speech in the eighteenth century, but it could be argued that the original meaning of freedom of speech and the press in the First Amendment was a narrow one. The compelling reason for affording flag-burning First Amendment protection is not that flag-burning was covered by the original meaning of the First Amendment, but that a line of cases has gradually expanded the notion of what activities are included under what the First Amendment calls “speech.” That is the normal process by which law changes and meanings change: incremental adjustments taking into account unforeseen circumstances, eventually leading judges to expand the meanings ascribed to old terms, because the expanded meanings comport better with an accumulation of precedents and the relevant principles on which judges have relied in earlier cases.

But perhaps the best example of how changes in meaning emerge organically from our efforts to cope with changing and unforeseen circumstances, rather than being the willful impositions of a higher authority, is provided by originalism itself, because “originalism” was originally about the original intention of the Framers of the Constitution. It was only when it became widely accepted that the original intention of the Framers was not something that could be ascertained that people like Antonin Scalia decided to change the meaning of “originalism,” so that it was no longer about the original intention of the Framers, but about the original meaning of the Constitution when it was ratified. So what we have here is a perfect example of how the meaning of a well-understood term came to be changed, because the original meaning of the term was found to be problematic. And who was responsible for this change in meaning? Why, the very same people who insist that it is forbidden to tamper with the original meaning of the terms and provisions of the Constitution. But they had no problem in changing the meaning of their own doctrine of Constitutional interpretation. Do I blame them for changing the meaning of the originalist doctrine? Not one bit. But if originalists were only marginally more introspective than they seem to be, they might have realized that changes in meaning are perfectly normal and legitimate, especially when trying to give concrete meaning to abstract terms in a way that best fits in with the entire tradition of judicial interpretation embodied in the totality of all previous judicial decisions. That is the true task of a judge, not a pointless quest for original meaning.

Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I have often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, or even to be considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz), and led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But about twenty years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, could be derived from any consistent system of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6).

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.
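To make concrete what the object of these existence proofs is, here is a minimal numerical sketch – my own toy construction, nothing from Debreu’s or Arrow and Hahn’s books – of a two-good, two-consumer exchange economy with Cobb-Douglas preferences, in which the equilibrium relative price is found as the zero of the aggregate excess-demand function:

```python
# A toy numerical sketch (my construction, not Debreu's argument): the
# competitive equilibrium of a two-good, two-consumer exchange economy
# with Cobb-Douglas preferences, found as the zero of excess demand.

def excess_demand_good1(p1, agents, p2=1.0):
    """Aggregate excess demand for good 1 at prices (p1, p2)."""
    total = 0.0
    for alpha, (e1, e2) in agents:
        wealth = p1 * e1 + p2 * e2
        demand = alpha * wealth / p1  # Cobb-Douglas demand for good 1
        total += demand - e1
    return total

def find_equilibrium(agents, lo=1e-6, hi=1e6, tol=1e-12):
    """Bisection on the relative price p1/p2: here excess demand for good 1
    is continuous and decreasing in p1, so a sign change brackets the
    (unique) equilibrium."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if excess_demand_good1(mid, agents) > 0:
            lo = mid  # good 1 over-demanded: its relative price must rise
        else:
            hi = mid
        if hi - lo < tol * max(1.0, hi):
            break
    return 0.5 * (lo + hi)

# Each agent: (Cobb-Douglas weight on good 1, endowment of goods 1 and 2)
agents = [(0.7, (1.0, 0.0)), (0.3, (0.0, 1.0))]
p1 = find_equilibrium(agents)
print(f"equilibrium relative price p1/p2 = {p1:.6f}")   # -> 1.000000
print(f"residual excess demand: {excess_demand_good1(p1, agents):.2e}")
```

In this toy economy the equilibrium relative price is exactly 1, and Walras’s law guarantees that clearing the market for good 1 clears the market for good 2 as well. Nothing in the little exercise requires the axiomatic apparatus; the existence question becomes delicate only in the general many-good, many-agent setting that Debreu analyzed.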

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Again, back to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum, Popper provides an enlightening discussion of the historical development of the calculus despite its lack of a solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
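For reference, the modern “precise” definition at issue in what follows is the familiar limit of difference quotients:

\[
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h)-f(x)}{h},
\]

the derivative being said to exist whenever the limit exists. As Popper goes on to argue, this tells us how to calculate the derivative; it does not tell us what, intuitively, a velocity or a slope is.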

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their ‘dogmatic slumbers’ when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for his is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning”: they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples, as it seemed most relevant to Athreya’s discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other, far more authoritative, critics than I, like Mark Blaug and Richard Lipsey (also here), have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

Uneasy Money Marks the Centenary of Hawtrey’s Good and Bad Trade

As promised, I am beginning a series of posts about R. G. Hawtrey’s book Good and Bad Trade, published 100 years ago in 1913. Good and Bad Trade was not only Hawtrey’s first book on economics, it was his first publication of any kind on economics, and only his second publication of any kind, the first having been an article on naval strategy written even before his arrival at Cambridge as an undergraduate. Perhaps on the strength of that youthful publication, Hawtrey’s first position, after having been accepted into the British Civil Service, was in the Admiralty, but he was soon transferred to the Treasury, where he remained for over forty years, till 1947.

Though a Cambridge man, Hawtrey studied mathematics and philosophy, not economics, at Cambridge. He was deeply influenced by the Cambridge philosopher G. E. Moore, an influence most clearly evident in one of Hawtrey’s few works of economics not primarily concerned with monetary theory, history, or policy, The Economic Problem. Hawtrey’s mathematical interests led him into a correspondence with another Cambridge man, Bertrand Russell, which Russell refers to in Principia Mathematica. However, Hawtrey seems to have had no contact with Alfred Marshall or any other Cambridge economist. Indeed, the only economist mentioned by Hawtrey in Good and Bad Trade was none other than Irving Fisher, whose distinction between the real and nominal rates of interest Hawtrey invokes in chapter 5. So Hawtrey was clearly an autodidact in economics. It is likely that Hawtrey’s self-education in economics started after his graduation from Cambridge, when he was studying for the Civil Service entrance examination, but he evidently continued an intensive study of economics even afterwards, for although Hawtrey was not in the habit of engaging in lengthy discussions of earlier economists, his sophisticated familiarity with the history of economics and with economic history is quite unmistakable. Nevertheless, it is a puzzle that Hawtrey uses the term “natural rate of interest” to signify more or less the same idea that Wicksell had when he used the term, but without mentioning Wicksell.

In his introductory chapter, Hawtrey lays out the following objective:

My present purpose is to examine certain elements in the modern economic organization of the world, which appear to be intimately connected with [cyclical] fluctuations. I shall not attempt to work back from a precise statistical analysis of the fluctuations which the world has experienced to the causes of all the phenomena disclosed by such analysis. But I shall endeavor to show what the effects of certain assumed economic causes would be, and it will, I think, be found that these calculated effects correspond very closely with the observed features of the fluctuations.

The general result up to which I hope to work is that the fluctuations are due to disturbances in the available stock of “money” – the term “money” being taken to cover every species of purchasing power available for immediate use, both legal tender money and credit money, whether in the form of coin, notes, or deposits at banks. (p. 3)

In the remainder of this post, I will present a quick overview of the entire book, and, then, as a kind of postscript to my earlier series of posts on Hawtrey and Keynes, I will comment on the fact that it seems quite clear that it was Hawtrey who invented the term “effective demand,” defining it in a way that does not appear significantly different from the meaning that Keynes attached to it.

Hawtrey posits that the chief problem associated with the business cycle is that workers are unable to earn an income with which to sustain themselves during business-cycle contractions. The source of this problem in Hawtrey’s view is some sort of malfunction in the monetary system, even though money, when considered from the point of view of an equilibrium, seems unimportant, inasmuch as any set of absolute prices would work just as well as another, provided that relative prices were consistent with equilibrium.

In chapter 2, Hawtrey explains the idea of a demand for money and how this demand for money, together with any fixed amount of inconvertible paper money, will determine the absolute level of prices and the relationship between the total amount of money in nominal terms and the total amount of income.
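In modern textbook notation – mine, not Hawtrey’s – the cash-balance reasoning of this chapter can be compressed into a single relation:

\[
M \;=\; k\,P\,y \qquad\Longrightarrow\qquad P \;=\; \frac{M}{k\,y},
\]

where M is the fixed stock of inconvertible paper money, P the price level, y real income, and k the fraction of income the public chooses to hold as ready purchasing power. With M fixed and k and y given, the absolute price level is determined, which is the sense in which the demand for money, together with the money stock, pins down absolute prices.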

In chapter 3, Hawtrey introduces the idea of credit money and banks, and the role of a central bank.

In chapter 4, Hawtrey discusses the organization of production, the accumulation of capital, and the employment of labor, explaining the matching circular flows: expenditure on goods and services, the output of goods and services, and the incomes accruing from that output.

Having laid the groundwork for his analysis, Hawtrey in chapter 5 provides an initial simplified analysis of the effects of a monetary disturbance in an isolated economy with no banking system.

Hawtrey continues the analysis in chapter 6 with a discussion of a monetary disturbance in an isolated economy with a banking system.

In chapter 7, Hawtrey discusses how a monetary disturbance might actually come about in an isolated community.

In chapter 8, Hawtrey extends the discussion of the previous three chapters to an open economy connected to an international system.

In chapter 9, Hawtrey drops the assumption of an inconvertible paper money and introduces an international metallic system (corresponding to the international gold standard then in operation).

Having completed his basic model of the business cycle, Hawtrey, in chapter 10, introduces other sources of change, e.g., population growth and technological progress, and changes in the supply of gold.

In chapter 11, Hawtrey drops the assumption of the previous chapters that there are no forces leading to change in relative prices among commodities.

In chapter 12, Hawtrey enters into a more detailed analysis of money, credit and banking, and, in chapter 13, he describes international differences in money and banking institutions.

In chapters 14 and 15, Hawtrey traces out the sources and effects of international cyclical disturbances.

In chapter 16, Hawtrey considers financial crises and their relationship to cyclical phenomena.

In chapter 17, Hawtrey discusses banking and currency legislation and their effects on the business cycle.

Chapters 18 and 19 are devoted to taxation and public finance.

Finally in chapter 20, Hawtrey poses the question whether cyclical fluctuations can be prevented.

After my series on Hawtrey and Keynes, I condensed those posts into a paper which, after further revision, I hope will eventually appear in the forthcoming Elgar Companion to Keynes. After I sent it to David Laidler for comments, he pointed out to me that I had failed to note that it was actually Hawtrey who, in Good and Bad Trade, introduced the term “effective demand.”

The term makes its first appearance in chapter 1 (p. 4).

The producers of commodities depend, for their profits and for the means of paying wages and other expenses, upon the money which they receive for the finished commodities. They supply in response to a demand, but only to an effective demand. A want becomes an effective demand when the person who experiences the want possesses (and can spare) the purchasing power necessary to meet the price of the thing which will satisfy it. A man may want a hat, but if he has no money [i.e., income or wealth] he cannot buy it, and his want does not contribute to the effective demand for hats.

Then at the outset of chapter 2 (p. 6), Hawtrey continues:

The total effective demand for all finished commodities in any community is simply the aggregate of all money incomes. The same aggregate represents also the total cost of production of all finished commodities.

Once again, Hawtrey, in chapter 4 (pp. 32-33), returns to the concept of effective demand:

It was laid down that the total effective demand for all commodities is simply the aggregate of all incomes, and that the same aggregate represents the total cost of production of all commodities.

Hawtrey attributed fluctuations in employment to fluctuations in effective demand inasmuch as wages and prices would not adjust immediately to a change in total spending.
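Schematically – again my notation, not Hawtrey’s – if effective demand E is the aggregate of money incomes, θ the share of that aggregate paid out as wages, and w the money wage, then employment is approximately

\[
N \;\approx\; \frac{\theta\,E}{w},
\]

so that when E falls and w adjusts only sluggishly, the burden of adjustment falls, in the first instance, on employment N rather than on wages – which captures the mechanism Hawtrey describes.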

Here is how Keynes defines effective demand in the General Theory (p. 55):

[T]he effective demand is simply the aggregate income (or proceeds) which the entrepreneurs expect to receive, inclusive of the income which they will hand on to the other factors of production, from the amount of current employment which they decide to give. The aggregate demand function relates various hypothetical quantities of employment to the proceeds which their outputs are expected to yield; and the effective demand is the point on the aggregate demand function which becomes effective because, taken in conjunction with the conditions of supply, it corresponds to the level of employment which maximizes the entrepreneur’s expectation of profit.

So Keynes in the General Theory obviously presented an analytically more sophisticated version of the concept of effective demand than Hawtrey did over two decades earlier, having expressed the idea in terms of entrepreneurial expectations of income and expenditure and specifying a general functional relationship (aggregate demand) between employment and expected income. Nevertheless, the basic idea is still very close to Hawtrey’s. Interestingly, Hawtrey never asserted a claim of priority to the concept; whether that was because of his natural reticence, or because he was unhappy with how Keynes made use of the idea, or for some other reason, I would not venture to say. But perhaps others would like to weigh in with some speculations of their own.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

