Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could, be disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making the prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of those theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement of being able to test independently any ad hoc assumption would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are ok; it is that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by direct sensory observation: the earth seems flat, and the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (Oakeshott was his colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice intended to ensure the successful outcome of an undertaking that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of economists’ differing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were widely viewed as too extreme for probability theory to serve as an acceptable basis for empirical inference. However, these early negative attitudes toward accepting probability theory as the basis for making statistical inferences were gradually overcome (or disregarded). Even as late as the 1960s, when econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships is strikingly parallel to Oakeshott’s discussion, and depressing in its implications for the decline of expert judgment among economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data; it is to challenge the methodological prohibition against the kinds of expert judgments that many statistically knowledgeable economists, including Nobel Prize winners Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that they have almost totally displaced. This is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake of which even Popper was never guilty.
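A small simulation can illustrate why treating statistical significance as a mechanical gatekeeper is arbitrary in the way suggested above. The sketch below (my own illustration, not anything from Biddle’s paper) estimates the same underlying relationship from a small sample and a large one: the economic magnitude of the estimated slope is comparable in both, but the t-statistic, and hence the verdict of “significance,” is driven largely by sample size.

```python
import numpy as np

def ols_slope_tstat(x, y):
    """OLS slope and its t-statistic for the simple regression y = a + b*x + e."""
    n = len(x)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    b = np.sum((x - xm) * (y - ym)) / sxx      # slope estimate
    a = ym - b * xm                             # intercept estimate
    resid = y - (a + b * x)
    s2 = np.sum(resid ** 2) / (n - 2)           # residual variance
    se_b = np.sqrt(s2 / sxx)                    # standard error of the slope
    return b, b / se_b

rng = np.random.default_rng(0)
true_slope = 0.5  # the same "economic" relationship in both samples
for n in (10, 1000):
    x = rng.normal(size=n)
    y = true_slope * x + rng.normal(scale=2.0, size=n)
    b, t = ols_slope_tstat(x, y)
    # With noisy data the t-statistic grows roughly with sqrt(n), so the
    # verdict "significant at 5%" (|t| > ~2) tracks sample size as much
    # as it tracks the size of the estimated effect.
    print(f"n={n:5d}  slope estimate={b:+.2f}  t={t:+.2f}")
```

The 5% threshold itself is, of course, just a convention; nothing in the arithmetic privileges |t| > 2 over any other cutoff, which is precisely the sense in which the significance test is an arbitrary rule rather than a substitute for judgment about the data and the circumstances that generated them.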

Methodological Prescriptions Go from Bad to Worse

The methodological requirement that formal tests of statistical significance be applied before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit microfoundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only add that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves – on the Positivists’ own criterion of meaning – meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless. It was on just that basis that Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.


Cleaning Up After Burns’s Mess

In my two recent posts (here and here) about Arthur Burns’s lamentable tenure as Chairman of the Federal Reserve System from 1970 to 1978, my main criticism of Burns has been that, apart from his willingness to subordinate monetary policy to the political interests of the man who appointed him, Burns failed to understand that an incomes policy to restrain wages, thereby minimizing the tendency of disinflation to reduce employment, could not, in principle, reduce inflation if monetary restraint did not correspondingly reduce the growth of total spending and income. Inflationary (or employment-reducing) wage increases can’t be prevented by an incomes policy if the rate of increase in total spending, and hence total income, isn’t controlled. King Canute couldn’t prevent the tide from coming in, and neither Arthur Burns nor the Wage and Price Council could slow the increase in wages when total spending was increasing at a rate faster than was consistent with the 3% inflation rate that Burns was aiming for.

In this post, I’m going to discuss how the mess Burns left behind him upon leaving the Fed in 1978 had to be cleaned up. The mess got even worse under Burns’s successor, G. William Miller. The clean up did not begin until Carter appointed Paul Volcker in 1979 when it became obvious that the monetary policy of the Fed had failed to cope with problems left behind by Burns. After unleashing powerful inflationary forces under the cover of the wage-and-price controls he had persuaded Nixon to impose in 1971 as a precondition for delivering the monetary stimulus so desperately desired by Nixon to ensure his reelection, Burns continued providing that stimulus even after Nixon’s reelection, when it might still have been possible to taper off the stimulus before inflation flared up, and without aborting the expansion then under way. In his arrogance or ignorance, Burns chose not to adjust the policy that had so splendidly accomplished its intended result.

Not until the end of 1973, after crude oil prices quadrupled owing to a cutback in OPEC oil output, driving inflation above 10% in 1974, did Burns withdraw the monetary stimulus that had been administered in increasing doses since early 1971. Shocked out of his complacency by the outcry against 10% inflation, Burns shifted monetary policy toward restraint, bringing down the growth in nominal spending and income from over 11% in Q4 1973 to only 8% in Q1 1974.

After prolonging monetary stimulus unnecessarily for a year, Burns erred grievously by applying monetary restraint in response to the rise in oil prices. The largely exogenous rise in oil prices would most likely have caused a recession even with no change in monetary policy. By subjecting the economy to the added shock of reducing aggregate demand, Burns turned a mild recession into the worst recession since the 1937-38 recession at the end of the Great Depression, with unemployment peaking at 8.8% in Q2 1975. Nor did the reduction in aggregate demand have much anti-inflationary effect, because the incremental reduction in total spending occasioned by the monetary tightening was reflected mainly in reduced output and employment rather than in reduced inflation.

But even with unemployment reaching the highest level in almost 40 years, inflation did not fall below 5% – and then only briefly – until a year after the bottom of the recession. When President Carter took office in 1977, Burns, hoping to be reappointed to another term, provided Carter with a monetary expansion to hasten the reduction in unemployment that Carter had promised in his Presidential campaign. However, Burns’s accommodative policy did not sufficiently endear him to Carter to secure the coveted reappointment.

The short and unhappy tenure of Carter’s first appointee, G. William Miller, during which inflation rose from 6.5% to 10%, ended abruptly when Carter, with his Administration in crisis, sacked his Treasury Secretary, replacing him with Miller. Under pressure from the financial community to address the intractable inflation that seemed to be accelerating in the wake of a second oil shock following the Iranian Revolution and hostage taking, Carter felt constrained to appoint Volcker, formerly a high official in the Treasury in both the Kennedy and Nixon administrations, then serving as President of the New York Federal Reserve Bank, who was known to be the favored choice of the financial community.

A year after leaving the Fed, Burns gave the annual Per Jacobsson Lecture to the International Monetary Fund. Calling his lecture “The Anguish of Central Banking,” Burns offered a defense of his tenure, arguing, in effect, that he should not be blamed for his poor performance, because the job of central banking is so very hard. Central bankers could control inflation, but only by inflicting unacceptably high unemployment. The political authorities and the public to whom central bankers are ultimately accountable would simply not tolerate the high unemployment that would be necessary for inflation to be controlled.

Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted money supply and created sufficient strains in the financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture.

Burns’s framing of the choices facing a central bank was tendentious; no policy maker had suggested that, after years of inflation had convinced the public to expect inflation to continue indefinitely, the Fed should “terminate inflation with little delay.” And Burns was hardly a disinterested actor as Fed chairman, having orchestrated a monetary expansion to promote the re-election chances of his benefactor Richard Nixon after securing, in return for that service, Nixon’s agreement to implement an incomes policy to limit the growth of wages, a policy that Burns believed would contain the inflationary consequences of the monetary expansion.

However, as I explained in my post on Hawtrey and Burns, the conceptual rationale for an incomes policy was not to allow monetary expansion to increase total spending, output and employment without causing increased inflation, but to allow monetary restraint to be administered without increasing unemployment. But under the circumstances in the summer of 1971, when a recovery from the 1970 recession was just starting, and unemployment was still high, monetary expansion might have hastened a recovery in output and employment; the resulting increase in total spending and income might still have increased output and employment rather than being absorbed in higher wages and prices.

But using controls over wages and prices to speed the return to full employment could succeed only while substantial unemployment and unused capacity allowed output and employment to increase; the faster the recovery, the sooner increased spending would show up in rising prices and wages, or in supply shortages, rather than in increased output. So an incomes policy to enable monetary expansion to speed the recovery from recession and restore full employment might theoretically be successful, but only if the monetary stimulus were promptly tapered off before driving up inflation.

Thus, if Burns wanted an incomes policy to be able to hasten the recovery through monetary expansion and maximize the political benefit to Nixon in time for the 1972 election, he ought to have recognized the need to withdraw the stimulus after the election. But for a year after Nixon’s reelection, Burns continued the monetary expansion without letup. Burns’s expression of anguish at the dilemma foisted upon him by circumstances beyond his control hardly evokes sympathy, sounding more like an attempt to deflect responsibility for his own mistakes or malfeasance in serving as an instrument of the criminal Committee to Re-elect the President without bothering to alter that politically motivated policy after accomplishing his dishonorable mission.

But it was not until Burns’s successor, G. William Miller, was succeeded by Paul Volcker in August 1979 that the Fed was willing to adopt — and maintain — an anti-inflationary policy. In his recently published memoir Volcker recounts how, responding to President Carter’s request in July 1979 that he accept appointment as Fed chairman, he told Mr. Carter that, to bring down inflation, he would adopt a tighter monetary policy than had been followed by his predecessor. He also writes that, although he did not regard himself as a Friedmanite Monetarist, he had become convinced that to control inflation it was necessary to control the quantity of money, though he did not agree with Friedman that a rigid rule was required to keep the quantity of money growing at a constant rate. To what extent the Fed would set its policy in terms of a fixed target rate of growth in the quantity of money became the dominant issue in Fed policy during Volcker’s first term as Fed chairman.

In a review of Volcker’s memoir widely cited in the econ blogosphere, Tim Barker decried Volcker’s tenure, especially his determination to control inflation even at the cost of spilling blood — other people’s blood – if that was necessary to eradicate the inflationary psychology of the 1970s, which had become a seemingly permanent feature of the economic environment at the time of Volcker’s appointment.

If someone were to make a movie about neoliberalism, there would need to be a starring role for the character of Paul Volcker. As chair of the Federal Reserve from 1979 to 1987, Volcker was the most powerful central banker in the world. These were the years when the industrial workers movement was defeated in the United States and United Kingdom, and third world debt crises exploded. Both of these owe something to Volcker. On October 6, 1979, after an unscheduled meeting of the Fed’s Open Market Committee, Volcker announced that he would start limiting the growth of the nation’s money supply. This would be accomplished by limiting the growth of bank reserves, which the Fed influenced by buying and selling government securities to member banks. As money became more scarce, banks would raise interest rates, limiting the amount of liquidity available in the overall economy. Though the interest rates were a result of Fed policy, the money supply target let Volcker avoid the politically explosive appearance of directly raising rates himself. The experiment—known as the Volcker Shock—lasted until 1982, inducing what remains the worst unemployment since the Great Depression and finally ending the inflation that had troubled the world economy since the late 1960s. To catalog all the results of the Volcker Shock—shuttered factories, broken unions, dizzying financialization—is to describe the whirlwind we are still reaping in 2019. . . .

Barker is correct that Volcker had been persuaded that to tighten monetary policy the quantity of reserves that the Fed was providing to the banking system had to be controlled. But making the quantity of bank reserves the policy instrument was a technical change. Monetary policy had been — and could still have been — conducted using an interest-rate instrument, and it would have been entirely possible for Volcker to tighten monetary policy using the traditional interest-rate instrument. It is possible that, as Barker asserts, it was politically easier to tighten policy using a quantity instrument than an interest-rate instrument.

But even if so, the real difficulty was not the instrument used, but the economic and political consequences of a tight monetary policy. The choice of the instrument to carry out the policy could hardly have made more than a marginal difference on the balance of political forces favoring or opposing that policy. The real issue was whether a tight monetary policy aimed at reducing inflation was more effectively conducted using the traditional interest-rate instrument or the quantity-instrument that Volcker adopted. More on this point below.

Those who praise Volcker like to say he “broke the back” of inflation. Nancy Teeters, the lone dissenter on the Fed Board of Governors, had a different metaphor: “I told them, ‘You are pulling the financial fabric of this country so tight that it’s going to rip. You should understand that once you tear a piece of fabric, it’s very difficult, almost impossible, to put it back together again.’” (Teeters, also the first woman on the Fed board, told journalist William Greider that “None of these guys has ever sewn anything in his life.”) Fabric or backbone: both images convey violence. In any case, a price index doesn’t have a spine or a seam; the broken bodies and rent garments of the early 1980s belonged to people. Reagan economic adviser Michael Mussa was nearer the truth when he said that “to establish its credibility, the Federal Reserve had to demonstrate its willingness to spill blood, lots of blood, other people’s blood.”

Did Volcker consciously see unemployment as the instrument of price stability? A Rhode Island representative asked him “Is it a necessary result to have a large increase in unemployment?” Volcker responded, “I don’t know what policies you would have to follow to avoid that result in the short run . . . We can’t undertake a policy now that will cure that problem [unemployment] in 1981.” Call this the necessary byproduct view: defeating inflation is the number one priority, and any action to put people back to work would raise inflationary expectations. Growth and full employment could be pursued once inflation was licked. But there was more to it than that. Even after prices stabilized, full employment would not mean what it once had. As late as 1986, unemployment was still 6.6 percent, the Reagan boom notwithstanding. This was the practical embodiment of Milton Friedman’s idea that there was a natural rate of unemployment, and attempts to go below it would always cause inflation (for this reason, the concept is known as NAIRU or non-accelerating inflation rate of unemployment). The logic here is plain: there need to be millions of unemployed workers for the economy to work as it should.

I want to make two points about Volcker’s policy. The first, which I made in my book Free Banking and Monetary Reform over 30 years ago, and which I have reiterated in several posts on this blog and which I discussed in my recent paper “Rules versus Discretion in Monetary Policy Historically Contemplated” (for an ungated version click here) is that using a quantity instrument to tighten monetary policy, as advocated by Milton Friedman, and acquiesced in by Volcker, induces expectations about the future actions of the monetary authority that undermine the policy and render it untenable. Volcker eventually realized the perverse expectational consequences of trying to implement a monetary policy using a fixed rule for the quantity instrument, but his learning experience in following Friedman’s advice needlessly exacerbated and prolonged the agony of the 1982 downturn for months after inflationary expectations had been broken.

The problem was well-known in the nineteenth century thanks to British experience under the Bank Charter Act that imposed a fixed quantity limit on the total quantity of banknotes issued by the Bank of England. When the total of banknotes approached the legal maximum, a precautionary demand for banknotes was immediately induced by those who feared that they might not later be able to obtain credit if it were needed because the Bank of England would be barred from making additional credit available.

Here is how I described Volcker’s Monetarist experiment in my book.

The danger lurking in any Monetarist rule has been perhaps best summarized by F. A. Hayek, who wrote:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if under such a provision it ever became known that the amount of cash in circulation was approaching the upper limit and therefore a need for increased liquidity could not be met.

Hayek’s warnings were subsequently borne out after the Federal Reserve Board shifted its policy from targeting interest rates to targeting the monetary aggregates. The apparent shift toward a less inflationary monetary policy, reinforced by the election of a conservative, antiinflationary president in 1980, induced an international shift from other currencies into the dollar. That shift caused the dollar to appreciate by almost 30 percent against other major currencies.

At the same time the domestic demand for deposits was increasing as deregulation of the banking system reduced the cost of holding deposits. But instead of accommodating the increase in the foreign and domestic demands for dollars, the Fed tightened monetary policy. . . . The deflationary impact of that tightening overwhelmed the fiscal stimulus of tax cuts and defense buildup, which, many had predicted, would cause inflation to speed up. Instead the economy fell into the deepest recession since the 1930s, while inflation, by 1982, was brought down to the lowest levels since the early 1960s. The contraction, which began in July 1981, accelerated in the fourth quarter of 1981 and the first quarter of 1982.

The rapid disinflation was bringing interest rates down from the record high levels of mid-1981 and the economy seemed to bottom out in the second quarter, showing a slight rise in real GNP over the first quarter. Sticking to its Monetarist strategy, the Fed reduced its targets for monetary growth in 1982 to between 2.5 and 5.5 percent. But in January and February, the money supply increased at a rapid rate, perhaps in anticipation of an incipient expansion. Whatever its cause, the early burst of the money supply pushed M-1 way over its target range.

For the next several months, as M-1 remained above its target, financial and commodity markets were preoccupied with what the Fed was going to do next. The fear that the Fed would tighten further to bring M-1 back within its target range reversed the slide in interest rates that began in the fall of 1981. A striking feature of the behavior of interest rates at that time was that credit markets seemed to be heavily influenced by the announcements every week of the change in M-1 during the previous week. Unexpectedly large increases in the money supply put upward pressure on interest rates.

The Monetarist explanation was that the announcements caused people to raise their expectations of inflation. But if the increase in interest rates had been associated with a rising inflation premium, the announcements should have been associated with weakness in the dollar on foreign exchange markets and rising commodities prices. In fact, the dollar was rising and commodities prices were falling consistently throughout this period – even immediately after an unexpectedly large jump in M-1 was announced. . . . (pp. 218-19)

I pause in my own earlier narrative to add the further comment that the increase in interest rates in early 1982 clearly reflected an increasing liquidity premium, caused by the reduced availability of bank reserves, making cash more desirable to hold than real assets, thereby inducing further declines in asset values.

However, increases in M-1 during July turned out to be far smaller than anticipated, relieving some of the pressure on credit and commodities markets and allowing interest rates to begin to fall again. The decline in interest rates may have been eased slightly by . . . Volcker’s statement to Congress on July 20 that monetary growth at the upper range of the Fed’s targets would be acceptable. More important, he added that the Fed was willing to let M-1 remain above its target range for a while if the reason seemed to be a precautionary demand for liquidity. By August, M-1 had actually fallen back within its target range. As fears of further tightening by the Fed subsided, the stage was set for the decline in interest rates to accelerate, [and] the great stock market rally began on August 17, when the Dow . . . rose over 38 points [almost 5%].

But anticipation of an incipient recovery again fed monetary growth. From the middle of August through the end of September, M-1 grew at an annual rate of over 15 percent. Fears that rapid monetary growth would induce the Fed to tighten monetary policy slowed down the decline in interest rates and led to renewed declines in commodities prices and the stock market, while pushing up the dollar to new highs. On October 5 . . . the Wall Street Journal reported that bond prices had fallen amid fears that the Fed might tighten credit conditions to slow the recent strong growth in the money supply. But on the very next day it was reported that the Fed expected inflation to stay low and would therefore allow M-1 to exceed its targets. The report sparked a major decline in interest rates and the Dow . . . soared another 37 points. (pp. 219-20)

The subsequent recovery, which began at the end of 1982, quickly became very powerful, but persistent fears that the Fed would backslide, at the urging of Milton Friedman and his Monetarist followers, into its bad old Monetarist habits periodically caused interest-rate spikes reflecting rising liquidity premiums as the public built up precautionary cash balances. Luckily, Volcker was astute enough to shrug off the overwrought warnings of Friedman and other Monetarists that rapid increases in the monetary aggregates foreshadowed the imminent return of double-digit inflation.

Thus, the Monetarist obsession with controlling the monetary aggregates senselessly prolonged an already deep recession that, by Q1 1982, had already slain the inflationary dragon, inflation having fallen to less than half its 1981 peak while GDP actually contracted in nominal terms. But because the money supply was expanding at a faster rate than was acceptable to Monetarist ideology, the Fed continued in its futile but destructive campaign to keep the monetary aggregates from overshooting their arbitrary Monetarist target range. It was not until the summer of 1982 that Volcker finally and belatedly decided that enough was enough, announcing that the Fed would declare victory over inflation and call off its Monetarist campaign, even if doing so meant incurring Friedman’s wrath and condemnation for abandoning the true Monetarist doctrine.

Which brings me to my second point about Volcker’s policy. While it’s clear that Volcker’s decision to adopt control over the monetary aggregates as the focus of monetary policy was disastrously misguided, monetary policy can’t be conducted without some target. Although the Fed’s interest rate can serve as a policy instrument, it is not a plausible policy target. The preferred policy target is generally thought to be the rate of inflation. The Fed after all is mandated to achieve price stability, which is usually understood to mean targeting a rate of inflation of about 2%. A more sophisticated alternative would be to aim at a suitable price level, thereby allowing some upward movement, say, at a 2% annual rate. The difference between an inflation target and a moving price-level target is that an inflation target is unaffected by past deviations of actual from targeted inflation, while a moving price-level target would require some catch-up inflation to make up for past below-target inflation, and reduced inflation to compensate for past above-target inflation.
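The arithmetic of the distinction can be made concrete with a small sketch. The numbers here are purely illustrative (a 2% target and a one-year 1% undershoot are assumed, not drawn from any actual policy episode):

```python
# Illustrative contrast: inflation targeting vs. a moving price-level target.
# An inflation targeter lets bygones be bygones; a price-level targeter must
# make up past deviations from the targeted path.

TARGET_INFLATION = 0.02  # assumed 2% annual target

def inflation_target_rate(past_inflation):
    """An inflation targeter aims at 2% next year regardless of past misses."""
    return TARGET_INFLATION

def price_level_target_rate(price_level, base_level, years_elapsed):
    """A price-level targeter aims to return to the 2% growth path next year."""
    desired_next = base_level * (1 + TARGET_INFLATION) ** (years_elapsed + 1)
    return desired_next / price_level - 1

# Suppose inflation ran at only 1% in the first year, below the 2% target.
base = 100.0
p = base * 1.01  # actual price level after one year

print(inflation_target_rate(0.01))                     # still 2%: the miss is ignored
print(round(price_level_target_rate(p, base, 1), 4))   # about 3%: catch-up inflation
```

The asymmetry runs the other way after an overshoot: the price-level targeter must then aim below 2%, while the inflation targeter again simply aims at 2%.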

However, the 1981-82 recession shows exactly why an inflation target and even a moving price level target is a bad idea. By almost any comprehensive measure, inflation was still positive throughout the 1981-82 recession, though the producer price index was nearly flat. Thus, inflation targeting during the 1981-82 recession would have been almost as bad a target for monetary policy as the monetary aggregates, with most measures of inflation showing that inflation was then between 3 and 5 percent even at the depth of the recession. Inflation targeting is thus, on its face, an unreliable basis for conducting monetary policy.

But the deeper problem with targeting inflation is that seeking to achieve an inflation target during a recession, when the very existence of a recession is presumptive evidence of the need for monetary stimulus, is actually a recipe for disaster, or, at the very least, for needlessly prolonging a recession. In a recession, the goal of monetary policy should be to stabilize the rate of increase in nominal spending along a time path consistent with the desired rate of inflation. Thus, as long as output is contracting or increasing very slowly, the desired rate of inflation should be higher than the desired rate over the long term. The appropriate strategy for achieving an inflation target ought to be to let inflation be reduced by the accelerating expansion of output and employment characteristic of most recoveries relative to a stable expansion of nominal spending.

The true goal of monetary policy should always be to maintain a time path of total spending consistent with a desired price-level path over time. But it should not be the objective of monetary policy always to be as close as possible to the desired path, because trying to stay on that path would likely destabilize the real economy. Market monetarists argue that the goal of monetary policy ought to be to keep nominal GDP expanding at whatever rate is consistent with maintaining the desired long-run price-level path. That is certainly a reasonable practical rule for monetary policy, but the policy criterion I have discussed here would, at least in principle, be consistent with a more activist approach in which the monetary authority would seek to hasten the restoration of full employment during recessions by temporarily increasing the rate of monetary expansion, and of growth in nominal GDP, as long as real output and employment remained below the maximum levels consistent with the desired price-level path over time. But such a strategy would require the monetary authority to be able to fine tune its monetary expansion so that it was tapered off just as the economy was reaching its maximum sustainable output and employment path. Whether such fine-tuning would be possible in practice is a question to which I don’t think we now know the answer.
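The market-monetarist rule described above — keeping nominal GDP on a level path, so that shortfalls are made up rather than ignored — can be sketched in a few lines. All numbers here are hypothetical, chosen only to illustrate the catch-up logic:

```python
# Sketch of a nominal-GDP level target (hypothetical numbers).
# The monetary authority aims to keep total spending (NGDP) on a fixed growth
# path; a shortfall this year calls for faster nominal growth next year,
# delivering temporary stimulus while output remains below capacity.

TARGET_NGDP_GROWTH = 0.05  # assumed trend: e.g. 3% real growth + 2% inflation

def ngdp_path(base_ngdp, year):
    """The desired level path of nominal GDP in a given year."""
    return base_ngdp * (1 + TARGET_NGDP_GROWTH) ** year

def required_growth(current_ngdp, base_ngdp, year):
    """Nominal growth needed next year to return to the level path."""
    return ngdp_path(base_ngdp, year + 1) / current_ngdp - 1

base = 1000.0
# After a recession year, NGDP grew only 1% instead of the trend 5%.
ngdp = base * 1.01

# Catch-up growth well above the 5% trend is called for next year.
print(round(required_growth(ngdp, base, 1), 4))
```

A pure inflation (growth-rate) targeter would instead aim at the 5% trend rate every year, which is precisely why, as the paragraph notes, the level-path criterion is more expansionary during a recovery and requires the stimulus to be tapered as the economy regains its sustainable path.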


Friedman and Schwartz, Eichengreen and Temin, Hawtrey and Cassel

Barry Eichengreen and Peter Temin are two of the great economic historians of our time, writing, in the splendid tradition of Charles Kindleberger, profound and economically acute studies of the economic and financial history of the nineteenth and early twentieth centuries. Most notably they have focused on periods of panic, crisis and depression, of which by far the best-known and most important episode is the Great Depression that started late in 1929, bottomed out early in 1933, but lingered on for most of the 1930s, and they are rightly acclaimed for having emphasized and highlighted the critical role of the gold standard in the Great Depression, a role largely overlooked in the early Keynesian accounts of the Great Depression. Those accounts identified a variety of specific shocks, amplified by the volatile entrepreneurial expectations and animal spirits that drive, or dampen, business investment, and further exacerbated by inherent instabilities in market economies that lack self-stabilizing mechanisms for maintaining or restoring full employment.

That Keynesian vision of an unstable market economy vulnerable to episodic, but prolonged, lapses from full-employment was vigorously, but at first unsuccessfully, disputed by advocates of free-market economics. It wasn’t until Milton Friedman provided an alternative narrative explaining the depth and duration of the Great Depression that the post-war dominance of Keynesian theory among academic economists was seriously challenged. Friedman’s alternative narrative of the Great Depression was first laid out in the longest chapter (“The Great Contraction”) of his magnum opus, co-authored with Anna Schwartz, A Monetary History of the United States. In Friedman’s telling, the decline in the US money stock was the critical independent causal factor that directly led to the decline in prices, output, and employment. The contraction in the quantity of money was not caused by the inherent instability of free-market capitalism, but, owing to a combination of incompetence and dereliction of duty, by the Federal Reserve.

In the Monetary History of the United States, all the heavy lifting necessary to account for both secular and cyclical movements in the price level, output and employment is done by supposedly exogenous changes in the nominal quantity of money, Friedman having considered it to be of the utmost significance that the largest movements in both the quantity of money, and in prices, output and employment occurred during the Great Depression. The narrative arc of the Monetary History was designed to impress on the mind of the reader the axiomatic premise that the monetary authority has virtually absolute control over the quantity of money, which served as the basis for inferring that changes in the quantity of money are what cause changes in prices, output and employment.

Friedman’s treatment of the gold standard (which I have discussed here, here and here) was both perfunctory and theoretically confused. Unable to reconcile the notion that the monetary authority has absolute control over the nominal quantity of money with the proposition that the price level in any country on the gold standard cannot deviate from the price levels of other gold standard countries without triggering arbitrage transactions that restore the equality between the price levels of all gold standard countries, Friedman dodged the inconsistency by repeatedly invoking his favorite fudge factor: long and variable lags between changes in the quantity of money and changes in prices, output and employment. Despite its vacuity, the long-and-variable-lag dodge allowed Friedman to ignore the inconvenient fact that the US price level in the Great Depression did not and could not vary independently of the price levels of all other countries then on the gold standard.

I’ll note parenthetically that Keynes himself was also responsible for this unnecessary and distracting detour, because the General Theory was written almost entirely in the context of a closed economy model with an exogenously determined quantity of money, thereby unwittingly providing Friedman with a useful tool with which to propagate his Monetarist narrative. The difference of course is that Keynes, as demonstrated in his brilliant early works, Indian Currency and Finance, A Tract on Monetary Reform, and The Economic Consequences of Mr. Churchill, had a correct understanding of the basic theory of the gold standard, an understanding that, owing to his obsessive fixation on the nominal quantity of money, eluded Friedman over his whole career. Why Keynes, who had a perfectly good theory of what was happening in the Great Depression available to him, as it was to others, was diverted to an unnecessary, but not uninteresting, new theory is a topic that I wrote about a very long time ago here, though I’m not so sure that I came up with a good or even adequate explanation.

So it does not speak well of the economics profession that it took nearly a quarter of a century before the basic internal inconsistency underlying Friedman’s account of the Great Depression was sufficiently recognized to call for an alternative theoretical account of the Great Depression that placed the gold standard at the heart of the narrative. It was Peter Temin and Barry Eichengreen who, both in their own separate works (e.g., Lessons of the Great Depression by Temin and Golden Fetters by Eichengreen) and in an important paper they co-authored and published in 2000, reminded both economists and historians how important a role the gold standard must play in any historical account of the Great Depression.

All credit is due to Temin and Eichengreen for having brought the critical role of the gold standard in the Great Depression to the attention of economists who had largely derived their understanding of what had caused the Great Depression from either some variant of the Keynesian narrative or of Friedman’s Monetarist indictment of the Federal Reserve System. But it’s unfortunate that neither Temin nor Eichengreen gave sufficient credit to either R. G. Hawtrey or to Gustav Cassel for having anticipated almost all of their key findings about the causes of the Great Depression. And I think that what prevented Eichengreen and Temin from realizing that Hawtrey in particular had anticipated their explanation of the Great Depression by more than half a century was that they did not fully grasp the key theoretical insight underlying Hawtrey’s explanation of the Great Depression.

That insight was that the key to understanding the common world price level in terms of gold under a gold standard is to think in terms of a given world stock of gold and to think of total world demand to hold gold as consisting of real demands to hold gold for commercial, industrial and decorative uses, the private demand to hold gold as an asset, and the monetary demand for gold to be held either as currency or as a reserve for currency. The combined demand to hold gold for all such purposes, given the existing stock of gold, determines a real relative price of gold in terms of all other commodities. This relative price when expressed in terms of a currency unit that is convertible into gold corresponds to an equivalent set of commodity prices in terms of those convertible currency units.
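Hawtrey's way of thinking can be sketched as a simple market-clearing condition for the world gold market. The notation below is my own illustrative shorthand, not Hawtrey's:

```latex
% Equilibrium in the world gold market (illustrative notation, not Hawtrey's)
% G    : existing world stock of gold (given in the short run)
% D_r  : real demand for gold (commercial, industrial, decorative uses)
% D_a  : private demand to hold gold as an asset
% D_m  : monetary demand for gold (currency and currency reserves)
% p_g  : real relative price of gold in terms of all other commodities
G = D_r(p_g) + D_a(p_g) + D_m(p_g)
% With G fixed, any increase in D_m must raise p_g. Under convertibility
% at a fixed parity, a higher real value of gold implies a lower
% commodity price level P in every gold standard country:
P \propto \frac{1}{p_g}
```

On this reading, the simultaneous increases in the monetary demand for gold by France and the United States in 1928-29 raised the real value of gold and thereby forced the price level down in every gold standard country at once.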

This way of thinking about the world price level under the gold standard was what underlay Hawtrey’s monetary analysis and his application of that analysis in explaining the Great Depression. Given that the world output of gold in any year is generally only about 2 or 3 percent of the existing stock of gold, it is fluctuations in the demand for gold, of which the monetary demand for gold in the period after the outbreak of World War I was clearly the least stable, that cause short-term fluctuations in the value of gold. Hawtrey’s efforts after the end of World War I were therefore focused on the need to stabilize the world’s monetary demand for gold in order to avoid fluctuations in the value of gold as the world moved toward the restoration of the gold standard that then seemed, to most monetary and financial experts and most monetary authorities and political leaders, to be both inevitable and desirable.

In the opening pages of Golden Fetters, Eichengreen beautifully describes the backdrop against which the attempt to reconstitute the gold standard was about to be made after World War I.

For more than a quarter of a century before World War I, the gold standard provided the framework for domestic and international monetary relations. . .  The gold standard had been a remarkably efficient mechanism for organizing financial affairs. No global crises comparable to the one that began in 1929 had disrupted the operation of financial markets. No economic slump had so depressed output and employment.

The central elements of this system were shattered by . . . World War I. More than a decade was required to complete their reconstruction. Quickly it became evident that the reconstructed gold standard was less resilient than its prewar predecessor. As early as 1929 the new international monetary system began to crumble. Rapid deflation forced countries producing primary commodities to suspend gold convertibility and depreciate their currencies. Payments problems spread next to the industrialized world. . . Britain, along with the United States and France, one of the countries at the center of the international monetary system, was next to experience a crisis, abandoning the gold standard in the autumn of 1931. Some two dozen countries followed suit. The United States dropped the gold standard in 1933; France hung on till the bitter end, which came in 1936.

The collapse of the international monetary system is commonly indicted for triggering the financial crisis that transformed a modest economic downturn into an unprecedented slump. So long as the gold standard was maintained, it is argued, the post-1929 recession remained just another cyclical contraction. But the collapse of the gold standard destroyed confidence in financial stability, prompting capital flight which undermined the solvency of financial institutions. . . Removing the gold standard, the argument continues, further intensified the crisis. Having suspended gold convertibility, policymakers manipulated currencies, engaging in beggar thy neighbor depreciations that purportedly did nothing to stimulate economic recovery at home while only worsening the Depression abroad.

The gold standard, then, is conventionally portrayed as synonymous with financial stability. Its downfall starting in 1929 is implicated in the global financial crisis and the worldwide depression. A central message of this book is that precisely the opposite was true. (Golden Fetters, pp. 3-4).

That is about as clear and succinct and accurate a description of the basic facts leading up to and surrounding the Great Depression as one could ask for, save for the omission of one important causal factor: the world monetary demand for gold.

Eichengreen was certainly not unaware of the importance of the monetary demand for gold, and in the pages that immediately follow, he attempts to fill in that part of the story, adding to our understanding of how the gold standard worked by penetrating deeply into the nature and role of the expectations that supported the gold standard, during its heyday, and the difficulty of restoring those stabilizing expectations after the havoc of World War I and the unexpected post-war inflation and subsequent deep 1920-21 depression. Those stabilizing expectations, Eichengreen argued, were the result of the credibility of the commitment to the gold standard and the international cooperation between governments and monetary authorities to ensure that the international gold standard would be maintained notwithstanding the occasional stresses and strains to which a complex institution would inevitably be subjected.

The stability of the prewar gold standard was instead the result of two very different factors: credibility and cooperation. Credibility is the confidence invested by the public in the government’s commitment to a policy. The credibility of the gold standard derived from the priority attached by governments to the maintenance of balance-of-payments equilibrium. In the core countries – Britain, France and Germany – there was little doubt that the authorities would take whatever steps were required to defend the central bank’s gold reserves and maintain the convertibility of the currency into gold. If one of these central banks lost gold reserves and its exchange rate weakened, funds would flow in from abroad in anticipation of the capital gains investors in domestic assets would reap once the authorities adopted measures to stem reserve losses and strengthen the exchange rate. . . The exchange rate consequently strengthened on its own, and stabilizing capital flows minimized the need for government intervention. The very credibility of the official commitment to gold meant that this commitment was rarely tested. (p. 5)

But credibility also required cooperation among the various countries on the gold standard, especially the major countries at its center, of which Britain was the most important.

Ultimately, however, the credibility of the prewar gold standard rested on international cooperation. When the stabilizing speculation and domestic intervention proved incapable of accommodating a disturbance, the system was stabilized through cooperation among governments and central banks. Minor problems could be solved by tacit cooperation, generally achieved without open communication among the parties involved. . .  Under such circumstances, the most prominent central bank, the Bank of England, signaled the need for coordinated action. When it lowered its discount rate, other central banks usually responded in kind. In effect, the Bank of England provided a focal point for the harmonization of national monetary policies. . .

Major crises in contrast typically required different responses from different countries. The country losing gold and threatened by a convertibility crisis had to raise interest rates to attract funds from abroad; other countries had to loosen domestic credit conditions to make funds available to the central bank experiencing difficulties. The follow-the-leader approach did not suffice. . . . Such crises were instead contained through overt, conscious cooperation among central banks and governments. . . Consequently, the resources any one country could draw on when its gold parity was under attack far exceeded its own reserves; they included the resources of the other gold standard countries. . . .

What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was achieved through international cooperation. (pp. 7-8)

Eichengreen uses this excellent conceptual framework to explain the dysfunction of the newly restored gold standard in the 1920s. Because of the monetary dislocation and demonetization of gold during World War I, the value of gold had fallen to about half of its prewar level, so that reestablishing the gold standard required not only restoring gold as a currency standard but also readjusting, sometimes massively, the prewar relative values of the various national currency units. And to prevent the natural tendency of gold to revert to its prewar value as gold was remonetized would require an unprecedented level of international cooperation among the various countries as they restored the gold standard. Thus, the gold standard was being restored in the 1920s under conditions in which neither the credibility of the prewar commitment to the gold standard nor the level of international cooperation among countries necessary to sustain that commitment was restored.

An important further contribution that Eichengreen, following Temin, brings to the historical narrative of the Great Depression is to incorporate the political forces that affected and often determined the decisions of policy makers directly into the narrative rather than treat those decisions as being somehow exogenous to the purely economic forces that were controlling the unfolding catastrophe.

The connection between domestic politics and international economics is at the center of this book. The stability of the prewar gold standard was attributable to a particular constellation of political as well as economic forces. Similarly, the instability of the interwar gold standard is explicable in terms of political as well as economic changes. Politics enters at two levels. First, domestic political pressures influence governments’ choices of international economic policies. Second, domestic political pressures influence the credibility of governments’ commitments to policies and hence their economic effects. . . (p. 10)

The argument, in a nutshell, is that credibility and cooperation were central to the smooth operation of the classical gold standard. The scope for both declined abruptly with the intervention of World War I. The instability of the interwar gold standard was the inevitable result. (p. 11)

Having explained and focused attention on the necessity for credibility and cooperation for a gold standard to function smoothly, Eichengreen then begins his introductory account of how the lack of credibility and cooperation led to the breakdown of the gold standard that precipitated the Great Depression, starting with the structural shift after World War I that made the rest of the world highly dependent on the US as a source of goods and services and as a source of credit, rendering the rest of the world chronically disposed to run balance-of-payments deficits with the US, deficits that could be financed only by the extension of credit by the US.

[I]f U.S. lending were interrupted, the underlying weakness of other countries’ external positions . . . would be revealed. As they lost gold and foreign exchange reserves, the convertibility of their currencies into gold would be threatened. Their central banks would be forced to restrict domestic credit, their fiscal authorities to compress public spending, even if doing so threatened to plunge their economies into recession.

This is what happened when U.S. lending was curtailed in the summer of 1928 as a result of increasingly stringent Federal Reserve monetary policy. Inauspiciously, the monetary contraction in the United States coincided with a massive flow of gold to France, where monetary policy was tight for independent reasons. Thus, gold and financial capital were drained by the United States and France from other parts of the world. Superimposed on already weak foreign balances of payments, these events provoked a greatly magnified monetary contraction abroad. In addition they caused a tightening of fiscal policies in parts of Europe and much of Latin America. This shift in policy worldwide, and not merely the relatively modest shift in the United States, provided the contractionary impulse that set the stage for the 1929 downturn. The minor shift in American policy had such dramatic effects because of the foreign reaction it provoked through its interactions with existing imbalances in the pattern of international settlements and with the gold standard constraints. (pp. 12-13)

Eichengreen then makes a rather bold statement to which, despite my agreement with, and admiration for, everything he has written to this point, I would take exception.

This explanation for the onset of the Depression, which emphasizes concurrent shifts in economic policy in the United States and abroad, the gold standard as the connection between them, and the combined impact of U.S. and foreign economic policies on the level of activity, has not previously appeared in the literature. Its elements are familiar, but they have not been fit together into a coherent account of the causes of the 1929 downturn. (p. 13)

I don’t think that Eichengreen’s claim of priority for his explanation of the onset of the 1929 downturn can be defended, though I certainly wouldn’t suggest that he did not arrive at his understanding of what caused the Great Depression largely on his own. But it is abundantly clear from reading the writings of Hawtrey and Cassel starting as early as 1919, that the basic scenario outlined by Eichengreen was clearly spelled out by Hawtrey and Cassel well before the Great Depression started, as papers by Ron Batchelder and me and by Doug Irwin have thoroughly documented. Undoubtedly Eichengreen has added a great deal of additional insight and depth and done important quantitative and documentary empirical research to buttress his narrative account of the causes of the Great Depression, but the basic underlying theory has not changed.

Eichengreen is not unaware of Hawtrey’s contribution and in a footnote to the last quoted paragraph, Eichengreen writes as follows.

The closest precedents lie in the work of the British economists Lionel Robbins and Ralph Hawtrey, in the writings of German historians concerned with the causes of their economy’s precocious slump, and in Temin (1989). Robbins (1934) hinted at many of the mechanisms emphasized here but failed to develop the argument fully. Hawtrey emphasized how the contractionary shift in U.S. monetary policy, superimposed on an already weak British balance of payments position, forced a draconian contraction on the Bank of England, plunging the world into recession. See Hawtrey (1933), especially chapter 2. But Hawtrey’s account focused almost entirely on the United States and the United Kingdom, neglecting the reaction of other central banks, notably the Bank of France, whose role was equally important. (p. 13, n. 17)

Unfortunately, this footnote neither clarifies nor supports Eichengreen’s claim of priority for his account of the role of the gold standard in the Great Depression. First, the bare citation of Robbins’s 1934 book The Great Depression is confusing at best, because Robbins’s explanation of the cause of the Great Depression, which he himself later disavowed, is largely a recapitulation of the Austrian business-cycle theory that attributed the downturn to a crisis caused by monetary expansion by the Fed and the Bank of England. Eichengreen correctly credits Hawtrey for attributing the Great Depression, in almost diametric opposition to Robbins, to contractionary monetary policy by the Fed and the Bank of England, but then seeks to distinguish Hawtrey’s explanation from his own by suggesting that Hawtrey neglected the role of the Bank of France.

Eichengreen mentions Hawtrey’s account of the Great Depression in his 1933 book, Trade Depression and the Way Out, 2nd edition. I no longer have a copy of that work accessible to me, but in the first edition of this work published in 1931, Hawtrey included a brief section under the heading “The Demand for Gold as Money since 1914.”

[S]ince 1914 arbitrary changes in monetary policy and in the demand for gold as money have been greater and more numerous than ever before. First came the general abandonment of the gold standard by the belligerent countries in favour of inconvertible paper, and the release of hundreds of millions of gold. By 1920 the wealth value of gold had fallen to two-fifths of what it had been in 1913. The United States, which was almost alone at that time in maintaining a gold standard, thereupon started contracting credit and absorbing gold on a vast scale. In June 1924 the wealth value of gold was seventy per cent higher than at its lowest point in 1920, and the amount of gold held for monetary purposes in the United States had grown from $2,840,000,000 in 1920 to $4,488,000,000.

Other countries were then beginning to return to the gold standard, Germany in 1924, England in 1925, besides several of the smaller countries of Europe. In the years 1924-8 Germany absorbed over £100,000,000 of gold. France stabilized her currency in 1927 and re-established the gold standard in 1928, and absorbed over £60,000,000 in 1927-8. But meanwhile, the United States had been parting with gold freely and her holding had fallen to $4,109,000,000 in June 1928. Large as these movements had been, they had not seriously disturbed the world value of gold. . . .

But from 1929 to the present time has been a period of immense and disastrous instability. France has added more than £200,000,000 to her gold holding, and the United States more than $800,000,000. In the two and a half years the world’s gold output has been a little over £200,000,000, but a part of this has been required for the normal demands of industry. The gold absorbed by France and America has exceeded the fresh supply of gold for monetary purposes by some £200,000,000.

This has had to be wrung from other countries, and much of it has come from new countries such as Australia, Argentina and Brazil, which have been driven off the gold standard and have used their gold reserves to pay their external liabilities, such as interest on loans payable in foreign currencies. (pp. 20-21)

The idea that Hawtrey neglected the role of the Bank of France is clearly inconsistent with the work that Eichengreen himself cites as evidence for that neglect. Moreover, in Hawtrey’s 1932 work, The Art of Central Banking, his first chapter is entitled “French Monetary Policy,” which directly addresses the issues supposedly neglected by Hawtrey. Here is an example.

I am inclined therefore to say that while the French absorption of gold in the period from January 1929 to May 1931 was in fact one of the most powerful causes of the world depression, that is only because it was allowed to react to an unnecessary degree upon the monetary policy of other countries. (p. 38)

In his foreword to the 1962 reprinting of his volume, Hawtrey mentions his chapter on French Monetary Policy in a section under the heading “Gold and the Great Depression.”

Conspicuous among countries accumulating reserves of foreign exchange was France. Chapter 1 of this book records how, in the course of stabilizing the franc in the years 1926-8, the Bank of France accumulated a vast holding of foreign exchange [i.e., foreign bank liabilities payable in gold], and in the ensuing years proceeded to liquidate it [for gold]. Chapter IV . . . shows the bearing of the French absorption of gold upon the starting of the great depression of the 1930s. . . . The catastrophe foreseen in 1922 [!] had come to pass, and the moment had come to point to the moral. The disaster was due to the restoration of the gold standard without any provision for international cooperation to prevent undue fluctuations in the purchasing power of gold. (pp. xiv-xv)

Moreover, on p. 254 of Golden Fetters, Eichengreen himself cites Hawtrey as one of the “foreign critics” of Emile Moreau, Governor of the Bank of France during the 1920s and 1930s, who faulted the French “for failing to build ‘a structure of credit’ on their gold imports. By failing to expand domestic credit and to repel gold inflows, they argued, the French had violated the rules of the gold standard game.” In the same paragraph Eichengreen also cites Hawtrey’s recommendation that the Bank of France change its statutes to allow for the creation of domestically supplied money and credit that would have obviated the need for continuing imports of gold.

Finally, writers such as Clark Johnson and Kenneth Mouré, who have written widely respected works on French monetary policy during the 1920s and 1930s, cite Hawtrey extensively as one of the leading contemporary critics of French monetary policy.

PS I showed Barry Eichengreen a draft of this post a short while ago, and he agrees with my conclusion that Hawtrey, and presumably Cassel also, had anticipated the key elements of his explanation of how the breakdown of the gold standard, resulting largely from the breakdown of international cooperation, was the primary cause of the Great Depression. I am grateful to Barry for his quick and generous response to my query.

Milton Friedman’s Rabble-Rousing Case for Abolishing the Fed

I recently came across this excerpt from a longer interview of Milton Friedman conducted by Brian Lamb on Cspan in 1994. In this excerpt Lamb asks Friedman what he thinks of the Fed, and Friedman, barely able to contain his ideological fervor, quickly rattles off his version of the history of the Fed, blaming the Fed, at least by implication, for all the bad monetary and macroeconomic events that happened between 1914, when the Fed came into existence, and the 1970s.

Here’s a rough summary of Friedman’s tirade:

I have long been in favor of abolishing [the Fed]. There is no institution in the United States that has such a high public standing and such a poor record of performance. . . . The Federal Reserve began operations in 1914 and presided over a doubling of prices during World War I. It produced a major collapse in 1921. It had a good period from about 1922 to 1928. It took actions in 1928 and 1929 that led to a major recession in 1929 and 1930, and it converted that recession by its actions into the Great Depression. The major villain in the Great Depression in my opinion was unquestionably the Federal Reserve System. Since that time, it presided over a doubling of prices in World War II. It financed the inflation of the 1970s. On the whole it has a very poor record. It’s done far more harm than good.

Let’s go through Friedman’s complaints one at a time.

World War I inflation.

Friedman blames World War I inflation on the Fed. Friedman, as I have shown in many previous posts, had a very shaky understanding of how the gold standard worked. His remark about the Fed’s “presiding over a doubling of prices” during World War I is likely yet another example of Friedman’s incomprehension, though his use of the weasel words “presided over” rather than the straightforward “caused” does suggest that Friedman was merely trying to insinuate that the Fed was blameworthy when he actually understood that the Fed had almost no control over inflation in World War I. The US remained formally on the gold standard until April 6, 1917, when the US declared war on Germany and entered World War I, formally suspending the convertibility of the dollar into gold.

As long as the US remained on a gold standard, the value of the dollar was determined by the value of gold. The US was importing lots of gold during the first two and a half years of World War I as the belligerents used their gold reserves and demonetized their gold coins to finance imports of war material from the US. The massive demonetization of gold caused gold to depreciate on world markets. Another neutral country, Sweden, actually left the gold standard during World War I to avoid the inevitable inflation associated with the wartime depreciation of gold. So it was either ignorant or disingenuous for Friedman to attribute the World War I inflation to the actions of the Federal Reserve. No country could have remained on the gold standard during World War I without accepting inflation, and the Federal Reserve had no legal authority to abrogate or suspend the legal convertibility of the dollar into a fixed weight of gold.

The Post-War Collapse of 1921

Friedman correctly blames the 1921 collapse on the Fed. However, after a rapid wartime and postwar inflation, the US was trying to recreate a gold standard while holding 40% of the world’s gold reserves. The Fed therefore took steps to stabilize the value of gold, which meant raising interest rates, thereby inducing a further inflow of gold into the US to stop the real value of gold from falling in international markets. The problem was that the Fed went overboard, causing a really steep, and probably unnecessary, deflation.

The Great Depression

Friedman is right that the Fed helped cause the Great Depression by its actions in 1928 and 1929, raising interest rates to try to quell rapidly rising stock prices. But the concerns about rising stock-market prices were probably misplaced, and the Fed’s raising of interest rates caused an inflow of gold into the US just when a gold outflow from the US was needed to accommodate the rising demand for gold on the part of the Bank of France and other central banks rejoining the gold standard and accumulating gold reserves. It was the sudden tightening of the world gold market, with the US and France and other countries rejoining the gold standard simultaneously trying to increase their gold holdings, that caused the value of gold to rise (and nominal prices to fall) in 1929 starting the Great Depression. Friedman totally ignored the international context in which the Fed was operating, failing to see that the US price level under the newly established gold standard, being determined by the international value of gold, was beyond the control of the Fed.

World War II Inflation

As with World War I, Friedman blamed the Fed for “presiding over” a doubling of prices in World War II. But unlike World War I, when rising US prices reflected a falling real value of gold caused by events outside the US and beyond the control of the Fed, in World War II rising US prices reflected the falling value of an inconvertible US dollar caused by Fed “money printing” at the behest of the President and the Treasury. But why did Friedman consider Fed money printing in World War II to have been a blameworthy act on the part of the Fed? The US was then engaged in a total war against the Axis powers. Under those circumstances, was the primary duty of the Fed to keep prices stable or to use its control over the “printing press” to ensure that the US government had sufficient funds to win the war against Nazi totalitarianism and allied fascist forces, thereby preserving American liberties and values even more fundamental than keeping inflation low and enabling creditors to extract what was owed to them by their debtors in dollars of undiminished real purchasing power?

Now it’s true that many of Friedman’s libertarian allies were appalled by US participation in World War II, but Friedman, to his credit, did not share their disapproval. Given his support for the war, however, Friedman should have at least acknowledged the obvious role of inflationary finance in emergency war financing, a role which, as Earl Thompson and I and others have argued, rationalizes the historic legal monopoly on money printing maintained by almost all sovereign states. To condemn the Fed for inflationary policies during World War II without recognizing the critical role of the “printing press” in war finance was a remarkably uninformed and biased judgment on Friedman’s part.

1970s Inflation

The Fed certainly played a major role in the inflation of the 1970s, which, as early as 1966, was already starting to creep up from the 1-2% rates that had prevailed from 1953 to 1965. The rise in inflation was again triggered by war-related expenditures, owing to the growing combat role of the US in Vietnam starting in 1965. The Fed’s part in rising inflation in the late 1960s and early 1970s was hardly its finest hour, but again, it is unrealistic to expect a public institution like the Fed to withhold the financing necessary to support a military action undertaken by the national government. Certainly, the role of Arthur Burns, appointed by Nixon in 1970 as Fed Chairman, in encouraging Nixon to impose wage-and-price controls as an anti-inflationary measure was one of the most disreputable chapters in the Fed’s history, and the cluelessness of Carter’s first Fed Chairman, G. William Miller, appointed to succeed Burns, is almost legendary. But given the huge oil-price increases of 1973-74 and 1978-79, a policy of accommodating those supply-side shocks by allowing a temporary increase in inflation was probably optimal. So, given the difficult circumstances under which the Fed was operating, the increased inflation of the 1970s was not entirely undesirable.

But although Friedman was often sensitive to the subtleties and nuances of policy making when rendering scholarly historical and empirical judgments, he rarely allowed subtleties and nuances to encroach on his denunciations when he was operating in full rabble-rousing mode.

Pedantry and Mastery in Following Rules

From George Polya’s classic How to Solve It (p. 148).

To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and cases where it does not fit, is pedantry. Some pedants are poor fools; they never did understand the rule which they apply so conscientiously and so indiscriminately. Some pedants are quite successful; they understood their rule, at least in the beginning (before they became pedants), and chose a good one that fits in many cases and fails only occasionally.

To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery.

Polya, of course, was distinguishing between pedantry and mastery in applying rules for problem solving, but his distinction can be applied more generally: a distinction between following rules using judgment (aka discretion) and following rules mechanically without exercising judgment (i.e., without using discretion). Following rules by rote need not be dangerous when circumstances are more or less those envisioned when the rules were originally articulated, but, when unforeseen circumstances arise, making the rule unsuitable to the new circumstances, following rules mindlessly can lead to really bad outcomes.

In the real world, the rules that we live by have to be revised and reinterpreted constantly in the light of experience and of new circumstances and changing values. Rules are supposed to conform to deeper principles, but the specific rules that we try to articulate to guide our actions are in need of periodic revision and adjustment to changing circumstances.

In deciding cases, judges change the legal rules that they apply by recognizing subtle — and relevant — distinctions that need to be taken into account in rendering decisions. They do not adjust rules willfully and arbitrarily. Instead, relying on deeper principles of justice and humanity, they adjust or bend the rules to temper the injustices that would result from a mechanical and unthinking application of the rules. By exercising judgment — in other words, by doing what judges are supposed to do — they uphold, rather than subvert, the rule of law in the process of modifying the existing rules. The modern fetish for depriving judges of the discretion to exercise judgment in rendering decisions is antithetical to the concept of the rule of law.

A similar fetish for rules-based monetary policy, i.e., a monetary system requiring the monetary authority to follow some numerical rule mechanically, is an equally outlandish misapplication of the idea that law is nothing more than a system of rules and that judges should do no more than select the relevant rule to be applied and render a decision based on that rule, without considering whether the decision is consistent with the deeper underlying principles of justice on which the legal system as a whole is based.

Because judges exercise coercive power over the lives and property of individuals, the rule of law requires their decisions to be justified in terms of the explicit rules and implicit and explicit principles of the legal system judges apply. And litigants have a right to appeal judgments rendered if they can argue that the judge misapplied the relevant legal rules. Having no coercive power over the lives or property of individuals, the monetary authority need not be bound by the kind of legal constraints to which judges are subject in rendering decisions that directly affect the lives and property of individuals.

The apotheosis of the fetish for blindly following rules in monetary policy was the ideal expressed by Henry Simons in his famous essay “Rules versus Authorities in Monetary Policy” in which he pleaded for a monetary rule that “would work mechanically, with the chips falling where they may. We need to design and establish a system good enough so that, hereafter, we may hold to it unrationally — on faith — as a religion, if you please.”

However, Simons, recovering from this momentary lapse into irrationality, quickly conceded that his plea for a monetary system good enough to be held on faith was impractical, abandoning it in favor of the more modest goal of stabilizing the price level. But Simons’s student Milton Friedman surpassed his teacher in pedantry, inventing what came to be known as his k-percent rule, under which the Federal Reserve would be required to make the total quantity of money in the economy increase continuously at an annual rate of growth equal to k percent. Friedman actually believed that his rule could be implemented by a computer, so that he confidently — and foolishly — recommended abolishing the Fed.
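The rule itself is trivially mechanical, which was the point. Under the quantity theory, with velocity assumed constant, a k-percent money-growth rule implies trend inflation of roughly k minus the trend rate of real growth. Here is a minimal sketch with hypothetical numbers (the constant-velocity assumption is the one that later proved unreliable):

```python
# Illustrative sketch of the k-percent rule under the quantity theory
# MV = PY.  All numbers are hypothetical, and velocity V is assumed
# constant -- the assumption on which the rule's inflation forecasts
# depended.

def money_path(m0, k, years):
    """Money stock growing mechanically at k percent per year."""
    return [m0 * (1 + k) ** t for t in range(years + 1)]

def implied_inflation(k, real_growth):
    """With constant velocity, inflation is roughly money growth
    minus real output growth."""
    return k - real_growth

m = money_path(100.0, 0.04, 10)      # a 4% rule followed for 10 years
pi = implied_inflation(0.04, 0.03)   # 3% trend real growth -> ~1% inflation
```

Nothing in the rule requires judgment, which is exactly why Friedman thought a computer could run it; but the implied inflation rate is only as stable as velocity itself.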

Eventually, after erroneously forecasting the return of double-digit inflation for nearly two decades, Friedman, a fervent ideologue but also a superb empirical economist, reluctantly allowed his ideological predispositions to give way in the face of contradictory empirical evidence and abandoned his k-percent rule. That was a good, if long overdue, call on Friedman’s part, and it should serve as a lesson and a warning to advocates of imposing overly rigid rules on the monetary authorities.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year’s, in contrast to the more recent practice of holding the convention in the week after New Year’s. That’s why the fiftieth anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemorating that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president Olivier Blanchard, who also wrote one of the papers discussed at the session. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Emi Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved and the economy would proceed exactly – and optimally – as before as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, but at least one, and potentially more than one, other market would be disequilibrated as well. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that simply reflects that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.
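Friedman’s argument — a downward-sloping short-run Phillips Curve that becomes vertical once expectations adjust — can be illustrated with a toy simulation. The specification below (an expectations-augmented Phillips Curve with adaptive expectations, and every coefficient value in it) is my illustrative assumption, not Friedman’s own model:

```python
# Toy expectations-augmented Phillips Curve:
#   u_t = u* - a * (pi_t - pi_e_t)      (unemployment falls below its
#                                        natural rate u* only when actual
#                                        inflation exceeds expected)
#   pi_e_{t+1} = pi_e_t + lam * (pi_t - pi_e_t)   (adaptive expectations)
# All coefficient values are illustrative assumptions.

def simulate(pi_actual, u_star=0.05, a=0.5, lam=0.5, pi_e0=0.0, periods=30):
    """Hold actual inflation fixed at pi_actual and let expectations
    catch up; return the path of unemployment."""
    u_path, pi_e = [], pi_e0
    for _ in range(periods):
        u = u_star - a * (pi_actual - pi_e)
        u_path.append(u)
        pi_e += lam * (pi_actual - pi_e)  # expectations adjust toward actual
    return u_path

path = simulate(pi_actual=0.05)
# Surprise inflation initially pushes u below u*, but as expectations
# adjust, unemployment climbs back to its natural rate: the long-run
# curve is vertical.
```

Holding the inflation surprise constant, the employment gain decays period by period, which is the sense in which only ever-accelerating, higher-than-expected inflation could keep unemployment below its natural rate.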

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to a discussion of Prices and Production by Alvin Hansen and Herbert Tout in their 1933 article reviewing recent business-cycle literature in Econometrica in which they criticized Hayek’s assertion that a monetary expansion that financed investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2%, and let (Y – Y*)/Y* be the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy), expressed as a percentage of potential GDP. We could then define the following Taylor Rule for the inflation target:

IT = α(2%) – β((Y – Y*)/Y*).

This equation says that the inflation target in any period would be the default Inflation Target of 2% times an adjustment coefficient α, designed to keep successively chosen Inflation Targets from deviating from the long-term price-level path corresponding to 2% annual inflation, minus some fraction β of the output gap expressed as a percentage of potential GDP, so that a negative gap raises the target. Thus, for example, if the output gap were –5% and β were 0.5, the short-term Inflation Target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert back to a target price-level corresponding to a 2% annual rate.
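The rule can be written out directly. A minimal sketch, using the sign convention that a negative output gap (output below potential) raises the target; α and β here are just the illustrative policy parameters, not estimated values:

```python
# Sketch of the proposed inflation-target rule:
#   IT = alpha * (2%) - beta * (Y - Y*)/Y*
# A negative output gap (Y below potential) raises the target;
# alpha and beta are policy parameters to be chosen.

def inflation_target(output_gap, alpha=1.0, beta=0.5, base=0.02):
    """output_gap = (Y - Y*)/Y*, e.g. -0.05 when output is 5% below potential."""
    return alpha * base - beta * output_gap

it_slump = inflation_target(-0.05)  # 5% shortfall: target raised to 4.5%
it_boom = inflation_target(0.05)    # 5% overshoot: target pushed below 2%
```

With α chosen below 1 when gaps are on average negative, successive targets would revert toward the 2% price-level path, as described above.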

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price-level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would promote the goal of maximum employment, raising the inflation target when unemployment is high as a countercyclical spur to recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively high inflation in periods of high unemployment would be compensated for by relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious – and a disturbing – question, because it suggests that free-market ideology – a powerful, though not necessarily the most powerful, force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country – is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A very broad range of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both Thatcher and Reagan to lose popular support. But the growing prestige of free-market economics and economists provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close as the Nobel Prize winner was quickly ushered out of the shouting range of White House reporters. On the whole, however, Republican politicians have not lacked for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, but which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent of decontrol described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.
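The entitlements arithmetic can be made concrete with a small sketch. All numbers here are hypothetical; the point is only that tying entitlements to cheap domestic crude to a refiner’s import volume lowers the net marginal cost of imported crude below the world price and mutes the pass-through of world-price changes, which is what reduced the elasticity of refiners’ import demand and raised OPEC’s profit-maximizing price:

```python
# Illustrative sketch of the crude-oil "entitlements" mechanism
# (all numbers hypothetical).  Each imported barrel increased a
# refiner's entitlement to price-controlled domestic crude, so the
# net cost of an imported barrel fell below the world price.

def net_import_cost(world_price, controlled_price, entitlement_share):
    """Net marginal cost of an imported barrel: world price minus the
    per-barrel entitlement subsidy (a share of the gap between the
    world price and the controlled domestic price)."""
    subsidy = entitlement_share * (world_price - controlled_price)
    return world_price - subsidy

c1 = net_import_cost(34.0, 6.0, 0.25)     # 34 - 0.25*28 = 27.0
c2 = net_import_cost(40.0, 6.0, 0.25)     # 40 - 0.25*34 = 31.5
pass_through = (c2 - c1) / (40.0 - 34.0)  # 0.75 < 1: muted price response
```

A $6 rise in the world price raises refiners’ net cost by only $4.50 in this example, so refiners’ demand responds less to OPEC price increases than it would without the controls.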

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the implicit assumption underlying the idea that freely determined market prices optimally guide the decentralized decisions of economic agents is that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy, sell, and produce closely correspond to the social costs and benefits that an omniscient central planner – if such a being actually did exist – would take into account in making his plans. But in the real world, the private costs and benefits considered by individual agents when making their plans and decisions often don’t reflect all relevant costs and benefits, so the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad. We – i.e., those of us who are not philosophical anarchists – all acknowledge as much, in practice and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those that are subject to, and heavily – even mainly – influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat, because insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek the kinds of knowledge that can be exploited in trades with less-well informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society from the acquisition of such knowledge, inasmuch as gains accruing to the exploiter are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted on various forms of financial research that aim to gain, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on and exploit individuals’ tendency to underestimate the probability of adverse events (e.g., late repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity, by enabling patent-protected monopolies to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, its counterparty could easily be multiplied. Because much, if not most, of the highest income earned is associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have shifted resources from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which private benefits and costs differ greatly from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite holding a Harvard Ph.D. when he arrived back at UCLA, where he had been an undergraduate student of Armen Alchian, as an assistant professor in the early 1960s, he too started out as a pro-free-market Friedman acolyte. But as he gradually adopted the Buchanan public-choice paradigm – Nancy Maclean, please take note – of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions, to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously increasingly ideological and increasingly dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds – as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causality assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic, starting from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causality assumptions, has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

Milton Friedman and How not to Think about the Gold Standard, France, Sterilization and the Great Depression

Last week I listened to David Beckworth on his excellent podcast Macro Musings, interviewing Douglas Irwin. I don’t think I’ve ever met Doug, but we’ve been in touch a number of times via email. Doug is one of our leading economic historians, perhaps the foremost expert on the history of US foreign-trade policy, and he has just published a new book on the history of US trade policy, Clashing over Commerce. As you would expect, most of the podcast is devoted to providing an overview of the history of US trade policy, but toward the end of the podcast, David shifts gears and asks Doug about his work on the Great Depression, questioning Doug about two of his papers, one on the origins of the Great Depression (“Did France Cause the Great Depression?”), the other on the 1937-38 relapse into depression (“Gold Sterilization and the Recession of 1937-1938“), just as it seemed that the US was finally going to recover fully from the catastrophic 1929-33 downturn.

Regular readers of this blog probably know that I hold the Bank of France – and its insane gold accumulation policy after rejoining the gold standard in 1928 – primarily responsible for the deflation that inevitably led to the Great Depression. In his paper on France and the Great Depression, Doug makes essentially the same argument pointing out that the gold reserves of the Bank of France increased from about 7% of the world stock of gold reserves to about 27% of the world total in 1932. So on the substance, Doug and I are in nearly complete agreement that the Bank of France was the chief culprit in this sad story. Of course, the Federal Reserve in late 1928 and 1929 also played a key supporting role, attempting to dampen what it regarded as reckless stock-market speculation by raising interest rates, and, as a result, accumulating gold even as the Bank of France was rapidly accumulating gold, thereby dangerously amplifying the deflationary pressure created by the insane gold-accumulation policy of the Bank of France.

Now I would not have taken the time to write about this podcast just to say that I agreed with what Doug and David were saying about the Bank of France and the Great Depression. What prompted me to comment about the podcast were two specific remarks that Doug made. The first was that his explanation of how France caused the Great Depression was not original, but had already been provided by Milton Friedman, Clark Johnson, and Scott Sumner. I agree completely that Clark Johnson and Scott Sumner wrote very valuable and important books on the Great Depression and provided important new empirical findings confirming that the Bank of France played a very malign role in creating the deflationary downward spiral that was the chief characteristic of the Great Depression. But I was very disappointed by Doug’s remark that Friedman had been the first to identify the malign role played by the Bank of France in precipitating the Great Depression. Doug refers to the foreword that Friedman wrote for the English translation of the memoirs of Emile Moreau, the Governor of the Bank of France from 1926 to 1930 (The Golden Franc: Memoirs of a Governor of the Bank of France: The Stabilization of the Franc (1926-1928)). Moreau was a key figure in the stabilization of the French franc in 1926, after its exchange rate had fallen by about 80% against the dollar between 1923 and 1926, particularly in determining the legal exchange rate at which the franc would be pegged to gold and the dollar when France officially rejoined the gold standard in 1928.

That Doug credits Friedman for having – albeit belatedly – grasped the role of the Bank of France in causing the Great Depression, almost 30 years after attributing the Depression, in his Monetary History of the United States, almost entirely to policy mistakes by the Federal Reserve in late 1930 and early 1931, is problematic for two reasons. First, Doug knows very well that both Gustav Cassel and Ralph Hawtrey correctly diagnosed the causes of the Great Depression and the role of the Bank of France during – and even before – the Great Depression. I know that Doug knows this well, because he wrote this paper about Gustav Cassel’s diagnosis of the Great Depression, in which he notes that Hawtrey made essentially the same diagnosis of the Depression as Cassel did. So not only did Friedman’s supposed discovery of the role of the Bank of France come almost 30 years after publication of the Monetary History, it came over 60 years after Hawtrey and Cassel had provided a far more coherent account of what happened in the Great Depression, and of the role of the Bank of France, than Friedman provided either in the Monetary History or in his brief foreword to the translation of Moreau’s memoirs.

That would have been bad enough, but a close reading of Friedman’s foreword shows that even though, by 1991 when he wrote that foreword, he had gained some insight into the disruptive and deflationary influence exerted by the Bank of France, he had an imperfect and confused understanding of the transmission mechanism by which the actions of the Bank of France affected the rest of the world, especially the countries on the gold standard. I have previously discussed, in a 2015 post, what I called Friedman’s cluelessness about the insane policy of the Bank of France. So I will now quote extensively from my earlier post and supplement it with some further comments:

Friedman’s foreword to Moreau’s memoir is sometimes cited as evidence that he backtracked from his denial in the Monetary History that the Great Depression had been caused by international forces, a denial resting on his insistence that there was actually nothing special about the initial 1929 downturn and that the situation only got out of hand in December 1930, when the Fed foolishly (or maliciously) allowed the Bank of United States to fail, triggering a wave of bank runs and bank failures that caused a sharp decline in the US money stock. According to Friedman, it was only at that point that what had been a typical business-cycle downturn degenerated into what he liked to call the Great Contraction. Let me now quote Friedman’s 1991 acknowledgment that the Bank of France played some role in causing the Great Depression.

Rereading the memoirs of this splendid translation . . . has impressed me with important subtleties that I missed when I read the memoirs in a language not my own and in which I am far from completely fluent. Had I fully appreciated those subtleties when Anna Schwartz and I were writing our A Monetary History of the United States, we would likely have assessed responsibility for the international character of the Great Depression somewhat differently. We attributed responsibility for the initiation of a worldwide contraction to the United States and I would not alter that judgment now. However, we also remarked, “The international effects were severe and the transmission rapid, not only because the gold-exchange standard had rendered the international financial system more vulnerable to disturbances, but also because the United States did not follow gold-standard rules.” Were I writing that sentence today, I would say “because the United States and France did not follow gold-standard rules.”

I find this minimal adjustment by Friedman of his earlier position in the Monetary History totally unsatisfactory. Why do I find it unsatisfactory? To begin with, Friedman makes vague references to unnamed but “important subtleties” in Moreau’s memoir that he was unable to appreciate before reading the 1991 translation. There was nothing subtle about the gold accumulation being undertaken by the Bank of France; it was massive and relentless. The table below is constructed from data on official holdings of monetary gold reserves from December 1926 to June 1932 provided by Clark Johnson in his important book Gold, France, and the Great Depression, pp. 190-93. In December 1926 France held $711 million in gold or 7.7% of the world total of official gold reserves; in June 1932, French gold holdings were $3.218 billion or 28.4% of the world total. [I omit a table of world monetary gold reserves from December 1926 to June 1932 included in my earlier post.]
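The two endpoint observations quoted from Johnson already tell the story. A quick back-of-the-envelope check (the implied world totals here are my own arithmetic from the quoted figures, not numbers taken from Johnson’s table):

```python
# Back-of-the-envelope check of the Bank of France's gold accumulation,
# using the two data points quoted above from Clark Johnson (pp. 190-93).
# Gold holdings are in millions of US dollars; shares are of world
# official gold reserves.

french_gold = {"Dec 1926": 711, "June 1932": 3218}
french_share = {"Dec 1926": 0.077, "June 1932": 0.284}

# World totals implied by the quoted French shares
world_total = {k: french_gold[k] / french_share[k] for k in french_gold}

growth_factor = french_gold["June 1932"] / french_gold["Dec 1926"]
share_growth = french_share["June 1932"] / french_share["Dec 1926"]

print(f"Implied world total, Dec 1926: ${world_total['Dec 1926']:,.0f}M")
print(f"Implied world total, June 1932: ${world_total['June 1932']:,.0f}M")
print(f"French holdings grew {growth_factor:.1f}x; French share grew {share_growth:.1f}x")
```

The implied world totals (roughly $9.2 billion and $11.3 billion) show that French holdings more than quadrupled while the world stock grew by less than a quarter: massive and relentless indeed.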

What was it about that policy that Friedman didn’t get? He doesn’t say. What he does say is that he would not alter his previous judgment that the US was responsible “for the initiation of a worldwide contraction.” The only change he would make would be to say that France, as well as the US, contributed to the vulnerability of the international financial system to unspecified disturbances, because of a failure to follow “gold-standard rules.” I will just note that, as I have mentioned many times on this blog, references to alleged “gold standard rules” are generally not only unhelpful, but confusing, because there were never any rules as such to the gold standard, and what are termed “gold-standard rules” are largely based on a misconception, derived from the price-specie-flow fallacy, of how the gold standard actually worked.

New Comment. And I would further add that references to the supposed gold-standard rules are confusing because, in the misguided tradition of the money multiplier, the idea of gold-standard rules of the game mistakenly assumes that the direction of causality between monetary reserves and bank money (either banknotes or bank deposits, created either by central banks or by commercial banks) runs from reserves to money. But bank reserves are held because banks have created liabilities (banknotes and deposits) which, under the gold standard, could be redeemed, directly or indirectly, for “base money,” i.e., gold. For prudential reasons, or because of legal reserve requirements, national monetary authorities operating under a gold standard held gold reserves in amounts related – in some more or less systematic fashion, but also depending on various legal, psychological and economic considerations – to the quantity of liabilities (in the form of banknotes and bank deposits) that the national banking systems had created. I will come back to, and elaborate on, this point below. So the causality runs from money to reserves, not, as the price-specie-flow mechanism and the rules-of-the-game idea presume, from reserves to money. Back to my earlier post:

So let’s examine another passage from Friedman’s foreword, and see where that takes us.

Another feature of Moreau’s book that is most fascinating . . . is the story it tells of the changing relations between the French and British central banks. At the beginning, with France in desperate straits seeking to stabilize its currency, [Montagu] Norman [Governor of the Bank of England] was contemptuous of France and regarded it as very much of a junior partner. Through the accident that the French currency was revalued at a level that stimulated gold imports, France started to accumulate gold reserves and sterling reserves and gradually came into the position where at any time Moreau could have forced the British off gold by withdrawing the funds he had on deposit at the Bank of England. The result was that Norman changed from being a proud boss and very much the senior partner to being almost a supplicant at the mercy of Moreau.

What’s wrong with this passage? Well, Friedman was correct about the change in the relative positions of Norman and Moreau from 1926 to 1928, but to say that it was an accident that the French currency was revalued at a level that stimulated gold imports is completely — and in this case embarrassingly — wrong, and wrong in two different senses: one strictly factual, the other theoretical. First, and most obviously, the level at which the French franc was stabilized — 125 francs per pound — was hardly an accident. Indeed, it was precisely the choice of the rate at which to stabilize the franc that was a central point of Moreau’s narrative in his memoir . . . , the struggle between Moreau and his boss, the French Premier, Raymond Poincaré, over whether the franc would be stabilized at that rate, the rate insisted upon by Moreau, or at the prewar parity of 25 francs per pound. So inquiring minds can’t help but wonder what exactly Friedman thought he was reading.

The second sense in which Friedman’s statement was wrong is that the amount of gold that France was importing depended on a lot more than just its exchange rate; it was also a function of a) the monetary policy chosen by the Bank of France, which determined the total foreign-exchange holdings held by the Bank of France, and b) the portfolio decisions of the Bank of France about how, given the exchange rate of the franc and given the monetary policy it adopted, the resulting quantity of foreign-exchange reserves would be held.

I referred to Friedman’s foreword, in which he quoted from his own essay “Should There Be an Independent Monetary Authority?” contrasting the personal weakness of W. P. G. Harding, Governor of the Federal Reserve in 1919-20, with the personal strength of Moreau. Friedman quotes from Harding’s memoirs, in which Harding acknowledged that his acquiescence in the U.S. Treasury’s desire to borrow at “reasonable” interest rates caused the Board to follow monetary policies that ultimately caused a rapid postwar inflation:

Almost every student of the period is agreed that the great mistake of the Reserve System in postwar monetary policy was to permit the money stock to expand very rapidly in 1919 and then to step very hard on the brakes in 1920. This policy was almost surely responsible for both the sharp postwar rise in prices and the sharp subsequent decline. It is amusing to read Harding’s answer in his memoirs to criticism that was later made of the policies followed. He does not question that alternative policies might well have been preferable for the economy as a whole, but emphasizes the treasury’s desire to float securities at a reasonable rate of interest, and calls attention to a then-existing law under which the treasury could replace the head of the Reserve System. Essentially he was saying the same thing that I heard another member of the Reserve Board say shortly after World War II when the bond-support program was in question. In response to the view expressed by some of my colleagues and myself that the bond-support program should be dropped, he largely agreed but said ‘Do you want us to lose our jobs?’

The importance of personality is strikingly revealed by the contrast between Harding’s behavior and that of Emile Moreau in France under much more difficult circumstances. Moreau formally had no independence whatsoever from the central government. He was named by the premier, and could be discharged at any time by the premier. But when he was asked by the premier to provide the treasury with funds in a manner that he considered inappropriate and undesirable, he flatly refused to do so. Of course, what happened was that Moreau was not discharged, that he did not do what the premier had asked him to, and that stabilization was rather more successful.

Now, if you didn’t read this passage carefully, in particular the part about Moreau’s threat to resign, as I did not the first three or four times that I read it, you might not have noticed what a peculiar description Friedman gives of the incident in which Moreau threatened to resign following a request “by the premier to provide the treasury with funds in a manner that he considered inappropriate and undesirable.” That sounds like a very strange request for the premier to make to the Governor of the Bank of France. The Bank of France doesn’t just “provide funds” to the Treasury. What exactly was the request? And what exactly was “inappropriate and undesirable” about that request?

I have to say again that I have not read Moreau’s memoir, so I can’t state flatly that there is no incident in it corresponding to Friedman’s strange account. However, Jacques Rueff, in his preface to the 1954 French edition (translated as well in the 1991 English edition), quotes from Moreau’s own journal entries describing how the final decision to stabilize the French franc at the new official parity of 125 per pound was reached. And Friedman actually refers to Rueff’s preface in his foreword! Let’s read what Rueff has to say:

The page for May 30, 1928, on which Mr. Moreau set out the problem of legal stabilization, is an admirable lesson in financial wisdom and political courage. I reproduce it here in its entirety with the hope that it will be constantly present in the minds of those who will be obliged in the future to cope with French monetary problems.

“The word drama may sound surprising when it is applied to an event which was inevitable, given the financial and monetary recovery achieved in the past two years. Since July 1926 a balanced budget has been assured, the National Treasury has achieved a surplus and the cleaning up of the balance sheet of the Bank of France has been completed. The April 1928 elections have confirmed the triumph of Mr. Poincaré and the wisdom of the ideas which he represents. . . . Under such conditions there is nothing more natural than to stabilize the currency, which has in fact already been pegged at the same level for the last eighteen months.

“But things are not quite that simple. The 1926-28 recovery restored confidence to those who had actually begun to give up hope for their country and its capacity to recover from the dark hours of July 1926. . . . perhaps too much confidence.

“Distinguished minds maintained that it was possible to return the franc to its prewar parity, in the same way as was done with the pound sterling. And how tempting it would be to thereby cancel the effects of the war and postwar periods and to pay back in the same currency those who had lent the state funds which for them often represented an entire lifetime of unremitting labor.

“International speculation seemed to prove them right, because it kept changing its dollars and pounds for francs, hoping that the franc would be finally revalued.

“Raymond Poincaré, who was honesty itself and who, unlike most politicians, was truly devoted to the public interest and the glory of France, did, deep in his heart, agree with those awaiting a revaluation.

“But I myself had to play the ungrateful role of representative of the technicians who knew that after the financial bloodletting of the past years it was impossible to regain the original parity of the franc.

“I was aware, as had already been determined by the Committee of Experts in 1926, that it was impossible to revalue the franc beyond certain limits without subjecting the national economy to a particularly painful re-adaptation. If we were to sacrifice the vital force of the nation to its acquired wealth, we would put at risk the recovery we had already accomplished. We would be, in effect, preparing a counter-speculation against our currency that would come within a rather short time.

“Since the parity of 125 francs to one pound has held for long months and the national economy seems to have adapted itself to it, it should be at this rate that we stabilize without further delay.

“This is what I had to tell Mr. Poincaré at the beginning of June 1928, tipping the scales of his judgment with the threat of my resignation.” [my emphasis, DG]

So what this tells me is that the very act of personal strength that so impressed Friedman . . . was not about some imaginary “inappropriate” request made by Poincaré (“who was honesty itself”) for the Bank to provide funds to the treasury, but about whether the franc should be stabilized at 125 francs per pound, a peg that Friedman asserts was “accidental.” Obviously, it was not “accidental” at all, but . . . based on the judgment of Moreau and his advisers . . . as attested to by Rueff in his preface.

Just to avoid misunderstanding, I would just say here that I am not suggesting that Friedman was intentionally misrepresenting any facts. I think that he was just being very sloppy in assuming that the facts actually were what he rather cluelessly imagined them to be.

Before concluding, I will quote again from Friedman’s foreword:

Benjamin Strong and Emile Moreau were admirable characters of personal force and integrity. But in my view, the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community. We stressed that the U.S. “did not permit the inflow of gold to expand the U.S. money stock. We not only sterilized it, we went much further. Our money stock moved perversely, going down as the gold stock went up” from 1929 to 1931. France did the same, both before and after 1929.

Strong and Moreau tried to reconcile two ultimately incompatible objectives: fixed exchange rates and internal price stability. Thanks to the level at which Britain returned to gold in 1925, the U.S. dollar was undervalued, and thanks to the level at which France returned to gold at the end of 1926, so was the French franc. Both countries as a result experienced substantial gold inflows.

New Comment. Actually, between December 1926 and December 1928, US gold reserves decreased by almost $350 million while French gold reserves increased by almost $550 million, suggesting that factors other than whether the currency peg was under- or over-valued determined the direction in which gold was flowing.

Gold-standard rules called for letting the stock of money rise in response to the gold inflows and for price inflation in the U.S. and France, and deflation in Britain, to end the over-and under-valuations. But both Strong and Moreau were determined to prevent inflation and accordingly both sterilized the gold inflows, preventing them from providing the required increase in the quantity of money. The result was to drain the other central banks of the world of their gold reserves, so that they became excessively vulnerable to reserve drains. France’s contribution to this process was, I now realize, much greater than we treated it as being in our History.

New Comment. I pause here to insert the following diatribe about the mutually supporting fallacies of the price-specie-flow mechanism, the rules of the game under the gold standard, and central-bank sterilization expounded by Friedman, and, to my surprise and dismay, assented to by Irwin and Beckworth. Inflation rates under a gold standard are, to a first approximation, governed by international price arbitrage, so that price differences between the same tradeable commodities in different locations cannot exceed the cost of transporting those commodities between those locations. Even if not all goods are tradeable, the prices of non-tradeables are subject to forces pushing them toward an equilibrium relationship with the prices of tradeables, which are tightly pinned down by arbitrage. Given those constraints, monetary policy at the national level can have only a second-order effect on national inflation rates, because the prices of non-tradeables that might conceivably be sensitive to localized monetary effects are simultaneously being driven toward equilibrium relationships with tradeable-goods prices.
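The arbitrage constraint just described can be made concrete with a toy calculation (my own illustration; the prices and transport cost below are hypothetical, not historical): under a common metallic standard, the local price of a tradeable good can deviate from its price elsewhere by at most the cost of shipping the good between the two locations.

```python
# Spatial arbitrage bound for a tradeable commodity under a common (gold) standard:
# if the price gap between two locations exceeds the transport cost t, shipping
# the good from the cheap location to the dear one is profitable, which drives
# the gap back inside the band. Numbers are illustrative only.

def arbitrage_band(p_other: float, transport_cost: float) -> tuple[float, float]:
    """Range within which the local price can sit without triggering arbitrage."""
    return (p_other - transport_cost, p_other + transport_cost)

def arbitrage_profit(p_local: float, p_other: float, transport_cost: float) -> float:
    """Per-unit profit from arbitraging any gap wider than transport cost (0 if none)."""
    gap = abs(p_local - p_other)
    return max(0.0, gap - transport_cost)

lo, hi = arbitrage_band(100.0, 3.0)          # world price 100, transport cost 3
print(lo, hi)                                # local price is pinned to [97, 103]
print(arbitrage_profit(106.0, 100.0, 3.0))   # a 6-unit gap leaves 3 of profit
```

This is why national monetary policy could move national price levels only to a second order: tradeable-goods prices were already pinned inside these bands.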

The idea that the supposed sterilization policies about which Friedman complains had anything to do with the pursuit of national price-level targets is simply inconsistent with a theoretically sound understanding of how national price levels were determined under the gold standard. The sterilization idea mistakenly assumes that, under the gold standard, the quantity of money in any country is what determines national price levels and that monetary policy in each country has to operate to adjust the quantity of money in each country to a level consistent with the fixed-exchange-rate target set by the gold standard.

Again, the causality runs in the opposite direction: under a gold standard, national price levels are, as a first approximation, determined by convertibility, and the quantity of money in a country is whatever amount of money the people in that country want to hold given the price level. If the quantity of money that the people in a country want to hold is supplied by the national monetary authority or by the local banking system, the public can obtain the additional money they demand by exchanging their own liabilities for the liabilities of the monetary authority or the local banks, without having to reduce their own spending in order to import the gold necessary to obtain additional banknotes from the central bank. And if people want to get rid of excess cash, they can dispose of it through the banking system without having to dispose of it via a net increase in total spending involving an import surplus. The role of gold imports is to fill in for any deficiency in the amount of money supplied by the monetary authority and the local banks, while gold exports are a means of disposing of excess cash that people are unwilling to hold. France was continually importing gold after the franc was stabilized in 1926, not because the franc was undervalued, but because the French monetary system was such that the additional cash demanded by the public could not be created without obtaining gold to be deposited in the vaults of the Bank of France. To describe the Bank of France as sterilizing gold imports betrays a failure to understand that the imports of gold were not an accidental event that should have triggered a compensatory policy response to increase the French money supply correspondingly. The inflow of gold was itself the policy and the result that the Bank of France deliberately set out to implement. If the policy was to import gold, then calling the policy gold sterilization makes no sense, because the quantity of money held by the French public would have been, as a first approximation, about the same whatever policy the Bank of France followed. What would have been different was the quantity of gold reserves held by the Bank of France.
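To make the money-to-reserves direction of causality concrete, here is a deliberately stripped-down sketch of my own (the Cambridge-style money-demand function, the numbers, and the reserve ratios are all hypothetical assumptions for illustration, not anything drawn from Hawtrey or from the post itself): the internationally pinned-down price level determines how much money the public wants to hold, and gold flows in only to the extent that domestic money creation falls short of that demand.

```python
# Toy illustration of money -> reserves causality under a gold standard.
# The price level is pinned internationally; money demand follows from it;
# gold flows in to whatever extent domestic money creation falls short.
# The functional form, numbers, and ratios are hypothetical.

def money_demand(price_level: float, real_income: float, k: float = 0.25) -> float:
    """Cambridge-style money demand: M* = k * P * y."""
    return k * price_level * real_income

def required_gold_inflow(price_level: float, real_income: float,
                         domestically_supplied: float,
                         reserve_ratio: float) -> float:
    """Gold that must be imported to back the money the public demands."""
    m_star = money_demand(price_level, real_income)
    shortfall = max(0.0, m_star - domestically_supplied)
    return shortfall * reserve_ratio

# Same price level (hence same money demand), two monetary regimes:
# a system that supplies money against domestic assets pulls in little gold...
print(required_gold_inflow(100.0, 400.0,
                           domestically_supplied=9000.0, reserve_ratio=0.35))
# ...while a system (like France's) that creates cash only against gold
# pulls gold in continually.
print(required_gold_inflow(100.0, 400.0,
                           domestically_supplied=4000.0, reserve_ratio=1.0))
```

In both regimes the public ends up holding the same quantity of money; what differs is how much gold sits in the central bank’s vaults, which is exactly the point about the Bank of France.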

To think that sterilization describes a policy in which the Bank of France kept the French money stock from growing as much as it ought to have grown is just an absurd way to think about how the quantity of money was determined under the gold standard. But it is an absurdity that has pervaded discussion of the gold standard for almost two centuries. Hawtrey, and, two or three generations later, Earl Thompson, and, independently, Harry Johnson and associates (most notably Donald McCloskey and Richard Zecher in their two important papers on the gold standard), explained the right way to think about how the gold standard worked. But the old absurdities, reiterated and propagated by Friedman in his Monetary History, have proven remarkably resistant to basic economic analysis and to straightforward empirical evidence. Now back to my critique of Friedman’s foreword.

These two paragraphs are full of misconceptions; I will try to clarify and correct them. First, Friedman refers to “the U.S. shock to the international community.” What is he talking about? I don’t know. Is he talking about the crash of 1929, which he dismissed as being of little consequence for the subsequent course of the Great Depression, its importance in Friedman’s view being certainly far less than that of the failure of the Bank of United States? But from December 1926 to December 1929, total monetary gold holdings in the world increased by about $1 billion; while US gold holdings declined by nearly $200 million, French holdings increased by $922 million, over 90% of the increase in total world official gold reserves. So for Friedman to have even suggested that the shock to the system came from the US and not from France is simply astonishing.
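A quick check of the arithmetic just cited, using the figures quoted above:

```python
# France's share of the growth in world monetary gold, Dec 1926 - Dec 1929,
# from the figures quoted in the text (millions of US dollars).

world_increase = 1000   # "about $1 billion"
us_change = -200        # US holdings declined by nearly $200 million
french_change = 922     # French holdings increased by $922 million

french_share_of_increase = french_change / world_increase
print(f"France absorbed {french_share_of_increase:.0%} of the increase")
rest_of_world = world_increase - french_change - us_change
print(f"Rest of world (net of the US decline): ${rest_of_world}M")
```

France alone absorbed over nine-tenths of the growth in the world’s monetary gold, which is the whole basis for asking why the “shock” should be attributed to the US.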

Friedman’s discussion of sterilization lacks any coherent theoretical foundation, because, working with the most naïve version of the price-specie-flow mechanism, he imagines that flows of gold are entirely passive, and that the job of the monetary authority under a gold standard was to ensure that the domestic money stock would vary proportionately with the total stock of gold. But that view of the world ignores the possibility that the demand to hold money in any country could change. Thus, Friedman, in asserting that the US money stock moved perversely from 1929 to 1931, going down as the gold stock went up, misunderstands the dynamic operating in that period. The gold stock went up because, with the banking system faltering, the public was shifting its holdings of money balances from demand deposits to currency. Legal reserves were required against currency, but not against demand deposits, so the shift from deposits to currency necessitated an increase in gold reserves. To be sure, the US increase in the demand for gold, driving up its value, was an amplifying factor in the worldwide deflation, but total US holdings of gold from December 1929 to December 1931 rose by $150 million compared with an increase of $1.06 billion in French holdings of gold over the same period. So the US contribution to world deflation at that stage of the Depression was small relative to that of France.
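The relative magnitudes of the gold flows cited above can be confirmed with a quick back-of-the-envelope calculation, using only the dollar figures given in the text:

```python
# Back-of-the-envelope check of the gold-flow figures cited above
# (all figures in millions of dollars, as given in the text).

# Dec 1926 - Dec 1929: world monetary gold holdings rose by ~$1,000M
world_increase_26_29 = 1_000
us_change_26_29 = -200        # US holdings actually *declined*
french_increase_26_29 = 922   # French holdings increased

french_share = french_increase_26_29 / world_increase_26_29
print(f"French share of world increase, 1926-29: {french_share:.0%}")  # 92%

# Dec 1929 - Dec 1931: US holdings rose $150M vs. $1,060M for France
us_increase_29_31 = 150
french_increase_29_31 = 1_060
ratio = french_increase_29_31 / us_increase_29_31
print(f"French/US ratio of gold accumulation, 1929-31: {ratio:.1f}x")  # 7.1x
```

The arithmetic bears out the point: French accumulation dwarfed the US contribution in both sub-periods.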

Friedman is correct that fixed exchange rates and internal price stability are incompatible, but he contradicts himself a few sentences later by asserting that Strong and Moreau violated gold-standard rules in order to stabilize their domestic price levels, as if it were the gold-standard rules rather than market forces that would force domestic price levels into correspondence with a common international level. Friedman asserts that the US dollar was undervalued after 1925 because the British pound was overvalued, presuming, with no apparent basis, that the US balance of payments was determined entirely by its trade with Great Britain. As I observed above, the exchange rate is just one of the determinants of the direction and magnitude of gold flows under the gold standard, and, as also pointed out above, gold was generally flowing out of the US after 1926 until the ferocious tightening of Fed policy at the end of 1928 and in 1929 caused a sizable inflow of gold into the US in 1929.

However, when, in the aggregate, central banks were tightening their policies, thereby tending to accumulate gold, the international gold market would come under pressure, driving up the value of gold relative to goods, thereby causing deflationary pressure among all the gold standard countries. That is what happened in 1929, when the US started to accumulate gold even as the insane Bank of France was acting as a giant international vacuum cleaner sucking in gold from everywhere else in the world. Friedman, even as he was acknowledging that he had underestimated the importance of the Bank of France in the Monetary History, never figured this out. He was obsessed, instead, with the relatively trivial effects of overvaluation of the pound, and undervaluation of the franc and the dollar. Talk about missing the forest for the trees.

Milton Friedman and the Chicago School of Debating

I had planned to follow up my previous post, about Milton Friedman and the price of money, with a clarification and further explanation of my assertion that Friedman failed to understand that there is both a purchase price of money – roughly corresponding to the inverse of the price level – and a rental price of money – roughly corresponding, but not necessarily equal, to the rate of interest. The basic clarification and extension were prompted by a comment/question from Bob Murphy to which I responded with a comment of my own. I thought that it would be worth a separate post to elaborate on that point (and perhaps I’ll get around to writing it), but in the meantime I have been captivated by several intertwined Twitter threads – triggered by the recent scandal over the deplorable, abusive and sexist putdowns that infest so many of the interactions on the now infamous Economics Job Market Rumors website – about the historical role of the economics workshops in fostering a culture of rudeness in academic economic interactions and whether such rudeness has discouraged young women entering the economics profession.

Rather than run through the Twitter threads here, I will just focus on an excellent post by Carolyn Sissoko who recognizes the value of the aggressive debating fostered by the Chicago workshops in honing the critical skills that young economists need to make real contributions to the advancement of knowledge. The truth is that being overly kind and solicitous toward the feelings of a scientific researcher doesn’t do the researcher a favor, nor does it promote the advancement of science, or, for that matter, of any intellectual discipline. The only way that knowledge really advances is by rooting out error, not an easy task, and critical skills – the skills to tease out the implications of an argument and to check its consistency with other propositions that we believe or that seem reasonable, or with the empirical facts that we already know or that we might be able to discover – are essential to performing the task well.

I think Carolyn was aiming at a similar point in her blogpost. Here’s how she puts it:

Claudia Sahm writes about “the toll that our profession’s aggressive, status-obsessed culture can take” and references specific dismissive criticism that is particularly content-free and therefore non-constructive. Matthew Kahn follows up with some ideas about improving mutual respect noting that “researchers are very tough on each other in public seminars (the “Chicago seminar” style).” This is followed up by prominent economists’ tweets about economics’ hyper-aggressiveness and rudeness.

I think it’s important to distinguish between the consequences of “status-obsession,” dismissiveness of women’s work and an “aggressive” seminar-style.

First, a properly run “Chicago-style” seminar requires senior economists who set the right tone. The most harshly criticized economists are senior colleagues and the point is that the resultant debate about the nature of economic knowledge is instructive and constructive for all. Yes, everyone is criticized, but students have been shown many techniques for responding to criticism by the time they are presenting. Crucial is the focus on advancing economic knowledge and an emphasis on argument rather than “status-obsession”.

The simple fact is that “Chicago-style” seminars when they are conducted by “status-obsessed” economists are likely to go catastrophically wrong. One cannot mix a kiss up-kick down culture with a “Chicago-style” seminar. They are like oil and water.

Carolyn is totally right to stress the importance of debate and criticism, and she is equally right to point out the need for the right kind of balance in the workshop environment, so that criticism and debate are focused on ideas and concepts and evidence, not on advancing oneself socially by trying to look good at someone else’s expense, and even more so not on using an unavoidably adversarial social situation as an opportunity to make someone look bad or foolish. And setting the right tone is necessarily the responsibility of the leader(s) of the workshop.

In a tweet responding to Carolyn’s post, Beatrice Cherrier quoted an excerpt from a 2007 paper by Ross Emmett about the origins of the Chicago workshops which grew out of the somewhat contentious environment at Chicago where the Cowles Commission was housed in the 1940s and early 1950s before moving to Yale. The first formal workshop at Chicago – the money workshop – was introduced by Milton Friedman in the early 1950s when he took over responsibility for teaching the graduate course in monetary theory. However, Emmett, who draws on extensive interviews with former Chicago graduate students, singles out the Industrial Organization workshop presided over by George Stigler, a pricklier character than Friedman, and the Law and Economics workshop in the Law School as “the most notorious, and [having given] Chicago workshops a reputation for chewing up visitors.” But Emmett notes that “most workshop debate was intense without being insulting.”

That characterization brought to mind the encounter at the money workshop at Chicago in the early 1970s between Milton Friedman and a young assistant professor recently arrived at Chicago by the name of Fischer Black. The incident is recounted in chapter six (“The Money Wars”) of Perry Mehrling’s wonderful biography of Black (Fischer Black and the Revolutionary Idea of Finance). Here is how Mehrling describes the encounter.

Friedman’s Workshop in Money and Banking was the most famous workshop at Chicago, and special rules applied. You had to have Friedman’s permission to attend, and one of the requirements for attendance was to offer work of your own for discussion by the other members of the workshop. Furthermore, in Friedman’s workshop presentation was limited to just a few minutes at the beginning. Everyone was expected to have read the paper already, and to have come prepared to discuss it. Friedman himself always led off the discussion, framing the issues that he thought most needed attention.

Into the lion’s den went Fischer, with the very paper that Friedman had dismissed as fallacious (Fischer arguing that inflationary overissue of money by banks is impossible because of the law of reflux). Jim Lorie recalls, “It was like an infidel going to St. Peter’s and announcing that all this stuff about Jesus was wrong.” Friedman led off the discussion: “Fisher Black will be presenting his paper today on money in a two-sector model. We all know that the paper is wrong. We have two hours to work out why it is wrong.” And so it began. But after two hours of defending the indefensible, Fischer emerged bloodied but unbowed. As one participant remembers, the final score was Fisher Black 10, Monetary Workshop 0.

And the next week, Fischer was back again, now forcing others to defend themselves against his own criticisms. If it was a theoretical paper, he would point out the profit opportunity implied for anyone who understood the model. If it was an empirical paper, he would point out how the correlations were consistent with his own theory as well as the quantity theory. “But, Fischer, there is a ton of evidence that money causes prices!” Friedman would insist. “Name one piece,” Fischer would respond. The fact that the measured money supply moves in tandem with nominal income and the price level could mean that an increase in money causes prices to rise, as Friedman insisted, but it could also mean that an increase in prices causes the quantity of money to rise, as Fischer thought more reasonable. Empirical evidence could not decide the issue. (pp. 159-60)

So here was a case in which Friedman, the senior economist responsible for the seminar, engaged in some blatant intimidation tactics against a junior colleague with whom he happened to disagree on a fundamental theoretical point. Against most junior colleagues, and almost all graduate students, such tactics would likely have succeeded in cowing the insubordinate upstart. But Fischer Black, who relished the maverick role, was not one to be intimidated. The question is what lesson graduate students took away from the Friedman/Black encounter: that you could survive a battle with Friedman, or that, if you dissented from orthodoxy, Friedman would try to crush you?

Milton Friedman Says that the Rate of Interest Is NOT the Price of Money: Don’t Listen to Him!

In the comments to Scott Sumner’s post asking for a definition of currency manipulation, one of Scott’s regular commenters, Patrick Sullivan, wrote the following in reply to an earlier comment by Bob Murphy:

‘For example, if Fed officials take some actions during the day and we see interest rates go up, surely that’s all we need to know if we’re going to classify it as “tight” or “loose” money, right?’

As I was saying just a day or so ago, until the economics profession grasps that interest rates are NOT the price[s] of money, there’s no hope that journalists or the general public will.

Bob Murphy, you might want to reread ‘Monetary Policy v. Fiscal Policy.’ The transcript of the famous NYU debate in 1968 between Walter Heller and Milton Friedman. You’ve just made the same freshman error Heller made back then. Look for Friedman’s correction of that error in his rebuttal.

Friedman’s repeated claims that the rate of interest is not the price of money have been echoed by his many acolytes so often that it is evidently now taken as clear evidence of economic illiteracy (or “a freshman error,” as Patrick Sullivan describes it) to suggest that the rate of interest is the price of money. It was good of Sullivan to provide an exact reference to this statement of Friedman, not that similar references are hard to find, Friedman never having been one who was loath to repeat himself. He did so often, and not without eloquence. Even though I usually quote Friedman to criticize him, I would never dream of questioning his brilliance or his skill as an economic analyst, but he was a much better price theorist than a monetary theorist, and he was a tad too self-confident, which made him disinclined to be self-critical or to admit error, or even entertain such a remote possibility.

So I took Sullivan’s advice, found the debate transcript, and looked up the passage in which Friedman chided Heller for saying that the rate of interest is the price of money. Here is what Friedman said in responding to Heller:

Let me turn to some of the specific issues that Walter raised in his first discussion and see if I can clarify a few points that came up.

First of all, the question is, Why do we look only at the money stock? Why don’t we also look at interest rates? Don’t you have to look at both quantity and price? The answer is yes, but the interest rate is not the price of money in the sense of the money stock. The interest rate is the price of credit. The price of money is how much goods and services you have to give up to get a dollar. You can have big changes in the quantity of money without any changes in credit. Consider for a moment the 1848-58 period in the United States. We had a big increase in the quantity of money because of the discovery of gold. This increase didn’t, in the first instance, impinge on the credit markets at all. You must sharply distinguish between money in the sense of the money or credit market, and money in the sense of the quantity of money. And the price of money in that second sense is the inverse of the price level—not the interest rate. The interest rate is the price of credit. As I mentioned earlier, the tax increase we had would tend to reduce the price of credit because it reduces the demand for credit, even though it didn’t affect the money supply at all.

So I do think you have to look at both price and quantity. But the price you have to look at from this point of view is the price level, not the interest rate.

What is wrong with Friedman’s argument? Simply this: any asset has two prices, a purchase price and a rental price. The purchase price is the price one pays (or receives) to buy (or to sell) the asset; the rental price is the price one pays to derive services from the asset for a fixed period of time. The purchase price of a unit of currency is what one has to give up in order to gain ownership of that unit. The purchase price of money, as Friedman observed, can be expressed as the inverse of the price level, but because money is the medium of exchange, there will actually be a vector of distinct purchase prices of a unit of currency depending on what good or service is being exchanged for money.

But there is also a rental price for money, and that rental price represents what you have to give up in order to hold a unit of currency in your pocket or in your bank account. What you sacrifice is the interest you pay to the one who lends you the unit of currency, or, if you already own the unit of currency, it is the interest you forego by not lending that unit of currency to someone else who would be willing to pay to have that additional unit of currency in his pocket or in his bank account instead of in yours. So although the interest rate is in some sense the price of credit, it is, indeed, also the price that one has to pay (or whose opportunity cost one has to bear) in order to derive the liquidity services provided by that unit of currency.

It therefore makes perfect sense to speak about the rate of interest as the price of money. It is this price – the rate of interest – that is the cost of holding money and governs how much money people are willing to keep in their pockets and in their bank accounts. The rate of interest is also the revenue per unit of currency per unit of time derived by suppliers of money for as long as the unit of money is held by the public. Money issued by the government generates a return to the government equal to the interest that the government would have had to pay had it borrowed the additional money instead of printing the money itself. That flow of revenue is called seignorage or, alternatively, the inflation tax (which is actually a misnomer, because if nominal interest rates are positive, the government derives revenue from printing money even if inflation is zero or negative).
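The point about seignorage can be made concrete with a small illustration. The numbers below are hypothetical, chosen only to show that, as argued above, the government's flow revenue from issuing non-interest-bearing money equals the interest it avoids paying, and is positive even when inflation is zero:

```python
# Hypothetical seignorage illustration (numbers are invented, not from the text).
# Flow revenue from issuing non-interest-bearing money is i * M per unit of time:
# the interest the government would have paid had it borrowed instead of printing.

M = 1_000_000_000   # stock of government-issued money ($), hypothetical
i = 0.04            # nominal interest rate (4% per year), hypothetical
inflation = 0.0     # seignorage remains positive even with zero inflation

seignorage_per_year = i * M
print(f"Annual seignorage: ${seignorage_per_year:,.0f}")  # $40,000,000
```

The calculation shows why "inflation tax" is a misnomer: the revenue depends on the nominal interest rate, not on the inflation rate itself.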

Similarly, banks, by supplying deposits, collect revenue per unit of time equal to the interest collected per unit of time from borrowers. But all depositors, not just borrowers, bear that interest cost, because anyone holding deposits is either paying interest to the bank — in this theoretical exposition I ignore the reprehensible fees and charges that banks routinely exact from their customers — or foregoing interest that could have been earned by exchanging the money for an interest-bearing instrument.

Now if banking is a competitive industry, banks compete to gain market share by paying depositors interest on deposits held in their institutions, thereby driving down the cost of holding money in the form of deposits rather than in the form of currency. In an ideal competitive banking system, banks would pay depositors interest nearly equal to the interest charged to borrowers, making it almost costless to hold money, so that the liquidity premium (the difference between the lending rate and the deposit rate) would be driven close to zero.
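The compression of the liquidity premium described above can be sketched numerically. The rates below are hypothetical, meant only to show how competition for deposits narrows the spread that constitutes the cost of holding money:

```python
# Hypothetical illustration: as deposit-rate competition intensifies, the
# liquidity premium (lending rate minus deposit rate) shrinks toward zero.

lending_rate = 0.05  # hypothetical rate charged to borrowers

for deposit_rate in (0.00, 0.02, 0.045):
    liquidity_premium = lending_rate - deposit_rate
    print(f"deposit rate {deposit_rate:.1%} -> "
          f"cost of holding deposits {liquidity_premium:.1%} per year")
```

At a deposit rate of zero the full lending rate is the cost of holding money; as the deposit rate approaches the lending rate, holding money becomes nearly costless.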

Friedman’s failure to understand why the rate of interest is indeed a price of money was an unfortunate blind spot in his thinking which led him into a variety of theoretical and policy errors over the course of his long, remarkable, but far from faultless career.

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.

