Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation, and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe; trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounted for the positive reception of Popper’s work among them.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories presumed valid for purposes of making the prediction, is necessary. Any prediction therefore relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed such criticisms, pointing to his various acknowledgments and even anticipations of, and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of those theories, and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 because, rather than conclude that Newtonian theory was falsified by the failure of Uranus to follow its predicted orbital path, the French astronomer Urbain Le Verrier posited (ad hoc) the existence of another planet that would account for the path Uranus actually followed. In this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement that any ad hoc assumption be independently testable would have meant accepting the refutation of Newtonian theory rather than positing the untestable — but true — other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are unobjectionable, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that a theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth seems flat and the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (Oakeshott was his colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of the rules of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of the differing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were widely viewed as too extreme for probability theory to serve as an acceptable basis for empirical inference. These early negative attitudes were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships is strikingly similar to Oakeshott’s discussion, and depressing in its implications for the decline of expert judgment by economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.
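Biddle’s notion of mechanical objectivity can be made concrete with a small example. The sketch below is my own illustration, not anything from Biddle’s paper, and all the numbers are hypothetical: it runs an ordinary least squares regression on a small noisy sample and applies the fixed significance threshold, so that anyone following the same rule on the same data reaches the same verdict, with no room for judgment about the data’s quality or provenance.

```python
# A toy illustration (mine, not Biddle's; all numbers hypothetical) of the
# "mechanically objective" rule: anyone applying the same threshold to the
# same data reaches the same verdict, with no role for expert judgment.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical small sample with a genuinely positive relationship and noise.
n = 12
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(scale=2.0, size=n)  # true slope is 0.8

# Ordinary least squares by hand: slope, standard error, t-statistic.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
se_slope = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se_slope

# The mechanical rule: |t| below roughly 2 means the estimate is "not
# evidence," regardless of data quality, sample peculiarities, or priors.
print(f"estimated slope = {beta[1]:.2f}, t-statistic = {t_stat:.2f}")
print("verdict under the mechanical rule:",
      "evidence" if abs(t_stat) > 2.0 else "no evidence")
```

Whether a sizable slope estimated from a dozen noisy observations should count as evidence is exactly the kind of question that, on Biddle’s account, was once answered by expert judgment and is now answered by the threshold alone.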

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition against the kinds of expert judgments that statistically knowledgeable economists, including Nobel Prize winners such as Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense of scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that have almost totally been displaced. This is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake of which even Popper was never guilty.

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit microfoundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only add that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoist by their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves – on the Positivists’ own criterion of meaning – meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, a judgment whereby Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.


Cleaning Up After Burns’s Mess

In my two recent posts (here and here) about Arthur Burns’s lamentable tenure as Chairman of the Federal Reserve System from 1970 to 1978, my main criticism of Burns has been that, apart from his willingness to subordinate monetary policy to the political interests of the president who appointed him, Burns failed to understand that an incomes policy to restrain wages, thereby minimizing the tendency of disinflation to reduce employment, could not, in principle, reduce inflation if monetary restraint did not correspondingly reduce the growth of total spending and income. Inflationary (or employment-reducing) wage increases can’t be prevented by an incomes policy if the rate of increase in total spending, and hence total income, isn’t controlled. King Canute couldn’t prevent the tide from coming in, and neither Arthur Burns nor the Wage and Price Council could slow the increase in wages when total spending was increasing at a rate faster than was consistent with the 3% inflation rate that Burns was aiming for.

In this post, I’m going to discuss how the mess Burns left behind upon leaving the Fed in 1978 had to be cleaned up. The mess got even worse under Burns’s successor, G. William Miller. The cleanup did not begin until Carter appointed Paul Volcker in 1979, when it became obvious that the monetary policy of the Fed had failed to cope with the problems Burns left behind. After unleashing powerful inflationary forces under the cover of the wage-and-price controls he had persuaded Nixon to impose in 1971 as a precondition for delivering the monetary stimulus so desperately desired by Nixon to ensure his reelection, Burns continued providing that stimulus even after Nixon’s reelection, when it might still have been possible to taper off the stimulus before inflation flared up, and without aborting the expansion then under way. In his arrogance or ignorance, Burns chose not to adjust the policy that had so splendidly accomplished its intended result.

Not until the end of 1973, after crude oil prices quadrupled owing to a cutback in OPEC oil output, driving inflation above 10% in 1974, did Burns withdraw the monetary stimulus that had been administered in increasing doses since early 1971. Shocked out of his complacency by the outcry against 10% inflation, Burns shifted monetary policy toward restraint, bringing down the growth in nominal spending and income from over 11% in Q4 1973 to only 8% in Q1 1974.

After prolonging monetary stimulus unnecessarily for a year, Burns erred grievously by applying monetary restraint in response to the rise in oil prices. The largely exogenous rise in oil prices would most likely have caused a recession even with no change in monetary policy. By subjecting the economy to the added shock of reducing aggregate demand, Burns turned a mild recession into the worst recession since the 1937-38 recession at the end of the Great Depression, with unemployment peaking at 8.8% in Q2 1975. Nor did the reduction in aggregate demand have much anti-inflationary effect, because the incremental reduction in total spending occasioned by the monetary tightening was reflected mainly in reduced output and employment rather than in reduced inflation.

But even with unemployment reaching its highest level in almost 40 years, inflation did not fall below 5% – and then only briefly – until a year after the bottom of the recession. When President Carter took office in 1977, Burns, hoping to be reappointed to another term, provided Carter with a monetary expansion to hasten the reduction in unemployment that Carter had promised in his presidential campaign. However, Burns’s accommodative policy did not sufficiently endear him to Carter to secure the coveted reappointment.

The short and unhappy tenure of Carter’s first appointee, G. William Miller, during which inflation rose from 6.5% to 10%, ended abruptly when Carter, with his Administration in crisis, sacked his Treasury Secretary, replacing him with Miller. Under pressure from the financial community to address the seemingly intractable inflation that seemed to be accelerating in the wake of a second oil shock following the Iranian Revolution and hostage taking, Carter felt constrained to appoint Volcker, formerly a high official in the Treasury in both the Kennedy and Nixon administrations, then serving as President of the New York Federal Reserve Bank, who was known to be the favored choice of the financial community.

A year after leaving the Fed, Burns gave the annual Per Jacobsson Lecture to the International Monetary Fund. Calling his lecture “The Anguish of Central Banking,” Burns offered a defense of his tenure, arguing, in effect, that he should not be blamed for his poor performance, because the job of central banking is so very hard. Central bankers could control inflation, but only by inflicting unacceptably high unemployment. The political authorities and the public to whom central bankers are ultimately accountable would simply not tolerate the high unemployment that would be necessary for inflation to be controlled.

Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted money supply and created sufficient strains in the financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture.

Burns’s framing of the choices facing a central bank was tendentious; no policy maker had suggested that, after years of inflation had convinced the public to expect inflation to continue indefinitely, the Fed should “terminate inflation with little delay.” And Burns was hardly a disinterested actor as Fed chairman, having orchestrated a monetary expansion to promote the re-election chances of his benefactor Richard Nixon after securing, in return for that service, Nixon’s agreement to implement an incomes policy to limit the growth of wages, a policy that Burns believed would contain the inflationary consequences of the monetary expansion.

However, as I explained in my post on Hawtrey and Burns, the conceptual rationale for an incomes policy was not to allow monetary expansion to increase total spending, output and employment without causing increased inflation, but to allow monetary restraint to be administered without increasing unemployment. But under the circumstances in the summer of 1971, when a recovery from the 1970 recession was just starting, and unemployment was still high, monetary expansion might have hastened a recovery in output and employment, because the resulting increase in total spending and income might still have increased output and employment rather than being absorbed in higher wages and prices.

But using controls over wages and prices to speed the return to full employment could succeed only while substantial unemployment and unused capacity allowed output and employment to increase; the faster the recovery, the sooner increased spending would show up in rising prices and wages, or in supply shortages, rather than in increased output. So an incomes policy to enable monetary expansion to speed the recovery from recession and restore full employment might theoretically be successful, but only if the monetary stimulus were promptly tapered off before driving up inflation.

Thus, if Burns wanted an incomes policy to be able to hasten the recovery through monetary expansion and maximize the political benefit to Nixon in time for the 1972 election, he ought to have recognized the need to withdraw the stimulus after the election. But for a year after Nixon’s reelection, Burns continued the monetary expansion without letup. Burns’s expression of anguish at the dilemma foisted upon him by circumstances beyond his control hardly evokes sympathy. It sounds more like an attempt to deflect responsibility for his own mistakes or malfeasance in serving as an instrument of the criminal Committee to Re-elect the President without bothering to alter that politically motivated policy after accomplishing his dishonorable mission.

But it was not until Burns’s successor, G. William Miller, was succeeded by Paul Volcker in August 1979 that the Fed was willing to adopt — and maintain — an anti-inflationary policy. In his recently published memoir Volcker recounts how, responding to President Carter’s request in July 1979 that he accept appointment as Fed chairman, he told Mr. Carter that, to bring down inflation, he would adopt a tighter monetary policy than had been followed by his predecessor. He also writes that, although he did not regard himself as a Friedmanite Monetarist, he had become convinced that to control inflation it was necessary to control the quantity of money, though he did not agree with Friedman that a rigid rule was required to keep the quantity of money growing at a constant rate. To what extent the Fed would set its policy in terms of a fixed target rate of growth in the quantity of money became the dominant issue in Fed policy during Volcker’s first term as Fed chairman.

In a review of Volcker’s memoir widely cited in the econ blogosphere, Tim Barker decried Volcker’s tenure, especially his determination to control inflation even at the cost of spilling blood — other people’s blood — if that was necessary to eradicate the inflationary psychology of the 1970s, which had become a seemingly permanent feature of the economic environment by the time of Volcker’s appointment.

If someone were to make a movie about neoliberalism, there would need to be a starring role for the character of Paul Volcker. As chair of the Federal Reserve from 1979 to 1987, Volcker was the most powerful central banker in the world. These were the years when the industrial workers movement was defeated in the United States and United Kingdom, and third world debt crises exploded. Both of these owe something to Volcker. On October 6, 1979, after an unscheduled meeting of the Fed’s Open Market Committee, Volcker announced that he would start limiting the growth of the nation’s money supply. This would be accomplished by limiting the growth of bank reserves, which the Fed influenced by buying and selling government securities to member banks. As money became more scarce, banks would raise interest rates, limiting the amount of liquidity available in the overall economy. Though the interest rates were a result of Fed policy, the money supply target let Volcker avoid the politically explosive appearance of directly raising rates himself. The experiment—known as the Volcker Shock—lasted until 1982, inducing what remains the worst unemployment since the Great Depression and finally ending the inflation that had troubled the world economy since the late 1960s. To catalog all the results of the Volcker Shock—shuttered factories, broken unions, dizzying financialization—is to describe the whirlwind we are still reaping in 2019. . . .

Barker is correct that Volcker had been persuaded that to tighten monetary policy the quantity of reserves that the Fed was providing to the banking system had to be controlled. But making the quantity of bank reserves the policy instrument was a technical change. Monetary policy had been — and could still have been — conducted using an interest-rate instrument, and it would have been entirely possible for Volcker to tighten monetary policy using the traditional interest-rate instrument. It is possible that, as Barker asserts, it was politically easier to tighten policy using a quantity instrument than an interest-rate instrument.

But even if so, the real difficulty was not the instrument used, but the economic and political consequences of a tight monetary policy. The choice of the instrument to carry out the policy could hardly have made more than a marginal difference on the balance of political forces favoring or opposing that policy. The real issue was whether a tight monetary policy aimed at reducing inflation was more effectively conducted using the traditional interest-rate instrument or the quantity-instrument that Volcker adopted. More on this point below.

Those who praise Volcker like to say he “broke the back” of inflation. Nancy Teeters, the lone dissenter on the Fed Board of Governors, had a different metaphor: “I told them, ‘You are pulling the financial fabric of this country so tight that it’s going to rip. You should understand that once you tear a piece of fabric, it’s very difficult, almost impossible, to put it back together again.’” (Teeters, also the first woman on the Fed board, told journalist William Greider that “None of these guys has ever sewn anything in his life.”) Fabric or backbone: both images convey violence. In any case, a price index doesn’t have a spine or a seam; the broken bodies and rent garments of the early 1980s belonged to people. Reagan economic adviser Michael Mussa was nearer the truth when he said that “to establish its credibility, the Federal Reserve had to demonstrate its willingness to spill blood, lots of blood, other people’s blood.”

Did Volcker consciously see unemployment as the instrument of price stability? A Rhode Island representative asked him “Is it a necessary result to have a large increase in unemployment?” Volcker responded, “I don’t know what policies you would have to follow to avoid that result in the short run . . . We can’t undertake a policy now that will cure that problem [unemployment] in 1981.” Call this the necessary byproduct view: defeating inflation is the number one priority, and any action to put people back to work would raise inflationary expectations. Growth and full employment could be pursued once inflation was licked. But there was more to it than that. Even after prices stabilized, full employment would not mean what it once had. As late as 1986, unemployment was still 6.6 percent, the Reagan boom notwithstanding. This was the practical embodiment of Milton Friedman’s idea that there was a natural rate of unemployment, and attempts to go below it would always cause inflation (for this reason, the concept is known as NAIRU or non-accelerating inflation rate of unemployment). The logic here is plain: there need to be millions of unemployed workers for the economy to work as it should.

I want to make two points about Volcker’s policy. The first, which I made in my book Free Banking and Monetary Reform over 30 years ago, which I have reiterated in several posts on this blog, and which I discussed in my recent paper “Rules versus Discretion in Monetary Policy Historically Contemplated” (for an ungated version click here), is that using a quantity instrument to tighten monetary policy, as advocated by Milton Friedman and acquiesced in by Volcker, induces expectations about the future actions of the monetary authority that undermine the policy and render it untenable. Volcker eventually realized the perverse expectational consequences of trying to implement a monetary policy using a fixed rule for the quantity instrument, but his learning experience in following Friedman’s advice needlessly exacerbated and prolonged the agony of the 1982 downturn for months after inflationary expectations had been broken.

The problem was well-known in the nineteenth century, thanks to British experience under the Bank Charter Act, which imposed a fixed limit on the total quantity of banknotes issued by the Bank of England. When the total of banknotes approached the legal maximum, a precautionary demand for banknotes was immediately induced by those who feared that they might not later be able to obtain credit if it were needed, because the Bank of England would be barred from making additional credit available.
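The mechanism can be caricatured in a few lines. The sketch below is my own stylized illustration, with an assumed functional form for the precautionary demand; none of it is historical data. It shows how a fixed ceiling on the note issue can itself generate the demand that presses against the ceiling.

```python
# A stylized sketch (my own, purely illustrative) of the Bank Charter Act
# dynamic described above: as the note issue nears a fixed legal ceiling,
# fear of being unable to obtain credit later adds a precautionary demand
# that consumes the remaining headroom ever faster. The functional form of
# the precautionary demand is an assumption, not historical data.
ceiling = 100.0      # legal maximum note issue
issue = 80.0         # current note issue
normal_demand = 1.0  # steady growth in notes demanded per period

for period in range(1, 9):
    headroom = ceiling - issue
    precautionary = 5.0 / max(headroom, 1e-9)  # grows as headroom shrinks
    issue = min(ceiling, issue + normal_demand + precautionary)
    print(f"period {period}: issue {issue:6.2f}, headroom {ceiling - issue:5.2f}")
```

The point of the toy dynamic is only that the binding constraint feeds on itself: the closer the issue gets to the legal maximum, the stronger the incentive to hold notes now rather than risk being unable to obtain them later.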

Here is how I described Volcker’s Monetarist experiment in my book.

The danger lurking in any Monetarist rule has been perhaps best summarized by F. A. Hayek, who wrote:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if under such a provision it ever became known that the amount of cash in circulation was approaching the upper limit and therefore a need for increased liquidity could not be met.

Hayek’s warnings were subsequently borne out after the Federal Reserve Board shifted its policy from targeting interest rates to targeting the monetary aggregates. The apparent shift toward a less inflationary monetary policy, reinforced by the election of a conservative, antiinflationary president in 1980, induced an international shift from other currencies into the dollar. That shift caused the dollar to appreciate by almost 30 percent against other major currencies.

At the same time the domestic demand for deposits was increasing as deregulation of the banking system reduced the cost of holding deposits. But instead of accommodating the increase in the foreign and domestic demands for dollars, the Fed tightened monetary policy. . . . The deflationary impact of that tightening overwhelmed the fiscal stimulus of tax cuts and defense buildup, which, many had predicted, would cause inflation to speed up. Instead the economy fell into the deepest recession since the 1930s, while inflation, by 1982, was brought down to the lowest levels since the early 1960s. The contraction, which began in July 1981, accelerated in the fourth quarter of 1981 and the first quarter of 1982.

The rapid disinflation was bringing interest rates down from the record high levels of mid-1981 and the economy seemed to bottom out in the second quarter, showing a slight rise in real GNP over the first quarter. Sticking to its Monetarist strategy, the Fed reduced its targets for monetary growth in 1982 to between 2.5 and 5.5 percent. But in January and February, the money supply increased at a rapid rate, perhaps in anticipation of an incipient expansion. Whatever its cause, the early burst of the money supply pushed M-1 way over its target range.

For the next several months, as M-1 remained above its target, financial and commodity markets were preoccupied with what the Fed was going to do next. The fear that the Fed would tighten further to bring M-1 back within its target range reversed the slide in interest rates that began in the fall of 1981. A striking feature of the behavior of interest rates at that time was that credit markets seemed to be heavily influenced by the announcements every week of the change in M-1 during the previous week. Unexpectedly large increases in the money supply put upward pressure on interest rates.

The Monetarist explanation was that the announcements caused people to raise their expectations of inflation. But if the increase in interest rates had been associated with a rising inflation premium, the announcements should have been associated with weakness in the dollar on foreign exchange markets and rising commodities prices. In fact, the dollar was rising and commodities prices were falling consistently throughout this period – even immediately after an unexpectedly large jump in M-1 was announced. . . . (pp. 218-19)

I pause in my own earlier narrative to add the further comment that the increase in interest rates in early 1982 clearly reflected an increasing liquidity premium, caused by the reduced availability of bank reserves, which made cash more desirable to hold than real assets, thereby inducing further declines in asset values.

However, increases in M-1 during July turned out to be far smaller than anticipated, relieving some of the pressure on credit and commodities markets and allowing interest rates to begin to fall again. The decline in interest rates may have been eased slightly by . . . Volcker’s statement to Congress on July 20 that monetary growth at the upper range of the Fed’s targets would be acceptable. More important, he added that the Fed was willing to let M-1 remain above its target range for a while if the reason seemed to be a precautionary demand for liquidity. By August, M-1 had actually fallen back within its target range. As fears of further tightening by the Fed subsided, the stage was set for the decline in interest rates to accelerate, [and] the great stock market rally began on August 17, when the Dow . . . rose over 38 points [almost 5%].

But anticipation of an incipient recovery again fed monetary growth. From the middle of August through the end of September, M-1 grew at an annual rate of over 15 percent. Fears that rapid monetary growth would induce the Fed to tighten monetary policy slowed down the decline in interest rates and led to renewed declines in commodities prices and the stock market, while pushing up the dollar to new highs. On October 5 . . . the Wall Street Journal reported that bond prices had fallen amid fears that the Fed might tighten credit conditions to slow the recent strong growth in the money supply. But on the very next day it was reported that the Fed expected inflation to stay low and would therefore allow M-1 to exceed its targets. The report sparked a major decline in interest rates and the Dow . . . soared another 37 points. (pp. 219-20)

The subsequent recovery, which began at the end of 1982, quickly became very powerful, but persistent fears that the Fed would backslide, at the urging of Milton Friedman and his Monetarist followers, into its bad old Monetarist habits periodically caused interest-rate spikes reflecting rising liquidity premiums as the public built up precautionary cash balances. Luckily, Volcker was astute enough to shrug off the overwrought warnings of Friedman and other Monetarists that rapid increases in the monetary aggregates foreshadowed the imminent return of double-digit inflation.

Thus, the Monetarist obsession with controlling the monetary aggregates senselessly prolonged an already deep recession that, by Q1 1982, had already slain the inflationary dragon, inflation having fallen to less than half its 1981 peak while GDP actually contracted in nominal terms. But because the money supply was expanding at a faster rate than was acceptable to Monetarist ideology, the Fed continued its futile but destructive campaign to keep the monetary aggregates from overshooting their arbitrary Monetarist target range. Not until the summer of 1982 did Volcker finally and belatedly decide that enough was enough, announcing that the Fed would declare victory over inflation and call off its Monetarist campaign, even if doing so meant incurring Friedman's wrath and condemnation for abandoning the true Monetarist doctrine.

Which brings me to my second point about Volcker's policy. While it's clear that Volcker's decision to adopt control over the monetary aggregates as the focus of monetary policy was disastrously misguided, monetary policy can't be conducted without some target. Although the Fed's interest rate can serve as a policy instrument, it is not a plausible policy target. The preferred policy target is generally thought to be the rate of inflation. The Fed, after all, is mandated to achieve price stability, which is usually understood to mean targeting a rate of inflation of about 2%. A more sophisticated alternative would be to aim at a price-level path allowing some upward movement, say, at a 2% annual rate. The difference between the two is that an inflation target is unaffected by past deviations of actual from targeted inflation, while a moving price-level target would require some catch-up inflation to make up for past below-target inflation and reduced inflation to compensate for past above-target inflation.
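To make the difference concrete, here is a minimal numerical sketch in Python (my own illustration; the 2% target and the one-year undershoot are assumed numbers, not drawn from any source discussed here) of how the two targeting rules respond to a year of below-target inflation:

p0 = 100.0            # initial price level
target = 0.02         # 2% annual target rate
actual_year1 = 0.00   # suppose inflation undershoots to 0% in year 1

# Inflation targeting: past misses are ignored; the year-2 aim is simply 2%.
inflation_target_year2 = target

# Moving price-level targeting: the aim is to return to the 2% price path,
# so year 2 must also make up the year-1 shortfall ("catch-up" inflation).
p_actual = p0 * (1 + actual_year1)        # 100.00
p_path_year2 = p0 * (1 + target) ** 2     # 104.04
catch_up_year2 = p_path_year2 / p_actual - 1

print(f"inflation target for year 2:   {inflation_target_year2:.2%}")  # 2.00%
print(f"price-level target for year 2: {catch_up_year2:.2%}")          # 4.04%

Under the price-level rule, a year of zero inflation obliges the central bank to aim for roughly 4% inflation the following year; under the inflation rule, bygones are bygones.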

However, the 1981-82 recession shows exactly why an inflation target, and even a moving price-level target, is a bad idea. By almost any comprehensive measure, inflation was still positive throughout the 1981-82 recession, though the producer price index was nearly flat. Thus, an inflation target during the 1981-82 recession would have been almost as bad a guide for monetary policy as the monetary aggregates, with most measures of inflation showing that inflation was then between 3 and 5 percent even at the depth of the recession. Inflation targeting is thus, on its face, an unreliable basis for conducting monetary policy.

But the deeper problem with targeting inflation is that seeking to achieve an inflation target during a recession, when the very existence of a recession is presumptive evidence of the need for monetary stimulus, is actually a recipe for disaster, or, at the very least, for needlessly prolonging a recession. In a recession, the goal of monetary policy should be to stabilize the rate of increase in nominal spending along a time path consistent with the desired rate of inflation. Thus, as long as output is contracting or increasing very slowly, the desired rate of inflation should be higher than the desired long-term rate. The appropriate strategy for achieving an inflation target ought to be to hold the expansion of nominal spending stable and allow inflation to be brought down by the accelerating expansion of output and employment characteristic of most recoveries.

The true goal of monetary policy should always be to maintain a time path of total spending consistent with a desired price-level path over time. But it should not be the objective of monetary policy always to be as close as possible to the desired path, because trying to stay on that path would likely destabilize the real economy. Market monetarists argue that the goal of monetary policy ought to be to keep nominal GDP expanding at whatever rate is consistent with maintaining the desired long-run price-level path. That is certainly a reasonable practical rule for monetary policy, but the policy criterion I have discussed here would, at least in principle, be consistent with a more activist approach in which the monetary authority would seek to hasten the restoration of full employment during recessions by temporarily increasing the rate of monetary expansion and of nominal GDP growth as long as real output and employment remained below the maximum levels consistent with the desired price-level path over time. But such a strategy would require the monetary authority to be able to fine-tune its monetary expansion so that it was tapered off just as the economy was reaching its maximum sustainable output and employment path. Whether such fine-tuning would be possible in practice is a question to which I don't think we now know the answer.


Judy Shelton Speaks Up for the Gold Standard

I have been working on a third installment in my series on how, with a huge assist from Arthur Burns, things fell apart in the 1970s. In my third installment, I will discuss the sad denouement of Burns's misunderstandings and mistakes, when Paul Volcker administered a brutal dose of tight money that caused, in the severe recession of 1981-82, the worst downturn and highest unemployment since the Great Depression. But having seen another one of Judy Shelton's less than enlightening op-eds arguing for a gold standard in the formerly respectable editorial section of the Wall Street Journal, I am going to pause from my account of Volcker's monetary policy in the early 1980s to give Dr. Shelton my undivided attention.

The opening paragraph of Dr. Shelton’s op-ed is a less than auspicious start.

Since President Trump announced his intention to nominate Herman Cain and Stephen Moore to serve on the Federal Reserve’s board of governors, mainstream commentators have made a point of dismissing anyone sympathetic to a gold standard as crankish or unqualified.

That is a totally false charge. Since Herman Cain and Stephen Moore were nominated, they have been exposed as incompetent and unqualified to serve on the Board of Governors of the world's most important central bank. It is not support for reestablishing the gold standard that demonstrates their incompetence and lack of qualifications. It is true that most economists, myself included, oppose restoring the gold standard. It is also true that most supporters of the gold standard, like, say, to choose a name more or less at random, Ron Paul, are indeed cranks and unqualified to hold high office. But there is a minority of economists, including some outstanding ones like Larry White, George Selgin, Richard Timberlake and Nobel Laureate Robert Mundell, who do favor restoring the gold standard, at least under certain conditions.

But Cain and Moore are so unqualified and so incompetent, that they are incapable of doing more than mouthing platitudes about how wonderful it would be to have a dollar as good as gold by restoring some unspecified link between the dollar and gold. Because of their manifest ignorance about how a gold standard would work now or how it did work when it was in operation, they were unprepared to defend their support of a gold standard when called upon to do so by inquisitive reporters. So they just lied and denied that they had ever supported returning to the gold standard. Thus, in addition to being ignorant, incompetent and unqualified to serve on the Board of Governors of the Federal Reserve, Cain and Moore exposed their own foolishness and stupidity, because it was easy for reporters to dig up multiple statements by both aspiring central bankers explicitly calling for a gold standard to be restored and muddled utterances bearing at least vague resemblance to support for the gold standard.

So Dr. Shelton, in accusing mainstream commentators of dismissing anyone sympathetic to a gold standard as crankish or unqualified, is accusing mainstream commentators of a level of intolerance and closed-mindedness for which she supplies not a shred of evidence.

After making a defamatory accusation with no basis in fact, Dr. Shelton turns her attention to a strawman whom she slays mercilessly.

But it is wholly legitimate, and entirely prudent, to question the infallibility of the Federal Reserve in calibrating the money supply to the needs of the economy. No other government institution had more influence over the creation of money and credit in the lead-up to the devastating 2008 global meltdown.

Where to begin? The Federal Reserve has not targeted the quantity of money in the economy as a policy instrument since the early 1980s, when the Fed misguidedly used the quantity of money as the policy target of its anti-inflation strategy. After acknowledging that mistake, the Fed has ever since eschewed attempts to conduct monetary policy by targeting any monetary aggregate. It is through the independent choices and decisions of individual agents and of many competing private banking institutions, not the dictate of the Federal Reserve, that the quantity of money in the economy at any given time is determined. It is true that the Federal Reserve played a large role in the run-up to the 2008 financial crisis, but its mistake had nothing to do with the amount of money being created. Rather, the problem was that the Fed set its policy interest rate at too high a level throughout 2008, because misplaced inflation fears, fueled by a temporary increase in commodity prices, deterred the Fed from providing the monetary stimulus needed to counter a rapidly deepening recession.

But guess who was urging the Fed to raise its interest rate in 2008 exactly when a cut in interest rates was what the economy needed? None other than the Wall Street Journal editorial page. And guess who was the lead editorial writer on the Wall Street Journal in 2008 for economic policy? None other than Stephen Moore himself. Isn’t that special?

I will forbear from discussing Dr. Shelton’s comments on the Fed’s policy of paying interest on reserves, because I actually agree with her criticism of the policy. But I do want to say a word about her discussion of currency manipulation and the supposed role of the gold standard in minimizing such currency manipulation.

The classical gold standard established an international benchmark for currency values, consistent with free-trade principles. Today’s arrangements permit governments to manipulate their currencies to gain an export advantage.

Having previously explained to Dr. Shelton that currency manipulation to gain an export advantage depends not just on the exchange rate, but also on the monetary policy associated with that exchange rate, I have to admit some disappointment that my previous efforts to instruct her don't seem to have improved her understanding of the ABCs of currency manipulation. But I will try again. Let me just quote from my last attempt to educate her.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard.
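The mechanism described in the quoted passage can be caricatured in a few lines of Python. This is only a toy sketch under assumed numbers (the 20% undervaluation and the speed of price adjustment are arbitrary): an unsterilized money inflow raises domestic prices until the real exchange rate, and with it the competitive advantage, returns to its old level.

nominal_rate = 0.8   # devalued nominal exchange rate (old parity normalized to 1)
p_domestic = 100.0   # domestic price level
p_foreign = 100.0    # foreign price level, held fixed for simplicity

for period in range(30):
    # real exchange rate faced by foreign buyers of domestic goods
    real_rate = nominal_rate * p_domestic / p_foreign
    surplus = max(0.0, 1.0 - real_rate)  # undervaluation generates an export surplus
    if surplus < 0.001:
        break
    # the unsterilized cash inflow financing the surplus bids up domestic prices
    p_domestic *= 1 + 0.5 * surplus
    print(f"period {period}: real rate {real_rate:.3f}, domestic prices {p_domestic:.1f}")

# the loop stops once domestic prices have risen enough (to about 125) to
# restore the old real exchange rate, eliminating the transitory advantage

Nothing hangs on the particular adjustment speed; the point is only that, without continual accumulation of reserves or a chronic excess demand for money, the process converges and the competitive advantage is transitory.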

Dr. Shelton believes that restoring a gold standard would usher in a period of economic growth like the one that followed World War II under the Bretton Woods System. Well, Dr. Shelton might want to reconsider how well the Bretton Woods system worked to the advantage of the United States.

The fact is that, as Ralph Hawtrey pointed out in his Incomes and Money, the US dollar was overvalued relative to the currencies of most of its European trading partners, which is why unemployment in the US was chronically above 5% from 1954 to 1965. With undervalued currencies, West Germany, Italy, Belgium, Britain, France and Japan all had much lower unemployment than the US. It was only after John Kennedy became President in 1961, when the Federal Reserve systematically loosened monetary policy, forcing Germany and other countries to revalue their currencies upward to avoid importing US inflation, that the US was able to redress the overvaluation of the dollar. But in doing so, the US also gradually rendered the $35/ounce price of gold, at which it maintained a kind of semi-convertibility of the dollar, unsustainable, leading a decade later to the final abandonment of the gold-dollar peg.

Dr. Shelton is obviously dedicated to restoring the gold standard, but she really ought to study up on how the gold standard actually worked in its previous incarnations and semi-incarnations, before she opines any further about how it might work in the future. At present, she doesn’t seem to be knowledgeable about how the gold standard worked in the past, and her confidence that it would work well in the future is entirely misplaced.

Ralph Hawtrey Wrote the Book that Arthur Burns Should Have Read — but Didn’t

In my previous post I wrote about the mistakes made by Arthur Burns after Nixon appointed him Chairman of the Federal Reserve Board. Here are the critical missteps of Burns’s unfortunate tenure.

1 Upon becoming chairman in January 1970, with inflation running at over 5% despite a modest tightening by his predecessor in 1969, Burns further tightened monetary policy, causing a downturn and a recession lasting the whole of 1970. The recession was politically damaging to Nixon, leading to sizable Republican losses in the November midterm elections, and causing Nixon to panic about losing his re-election bid in 1972. In his agitation, Nixon then began badgering Burns to loosen monetary policy.

2 Yielding to Nixon’s demands for an easing of monetary policy, Burns eased monetary policy sufficiently to allow a modest recovery to get under way in 1971. But the recovery was too tepid to suit Nixon. Fearing the inflationary implications of a further monetary loosening, Burns began publicly lobbying for the adoption of an incomes policy to limit the increase of wages set by collective bargaining between labor unions and major businesses.

3 Burns’s unwillingness to provide the powerful stimulus desired by Nixon until an incomes policy was in place to hold down inflation led Nixon to abandon his earlier opposition to wage-and-price controls. On August 15, 1971 Nixon imposed a 90-day freeze on all wages and prices to be followed by comprehensive wage-and-price controls. With controls in place, Burns felt secure in accelerating the rate of monetary expansion, leaving it to those controlling wages and prices to keep inflation within acceptable bounds.

4 With controls in place, monetary expansion at first fueled rapid growth of output, but as time passed, the increase in spending was increasingly reflected in inflation rather than output growth. By Q4 1973, inflation rose to 7%, a rate only marginally affected by the Arab oil embargo on oil shipments to the United States and a general reduction in oil output, which led to a quadrupling of oil prices by early 1974.

5 The sharp oil-price increase simultaneously caused inflation to rise sharply above the 7% rate it had reached at the end of 1973 even as it caused a deep downturn and recession in the first quarter of 1974. Rather than accommodate the increase in oil prices by tolerating a temporary increase in inflation, Burns sharply tightened monetary policy, reducing the rate of monetary expansion so that the rate of growth of total spending dropped precipitously. Given the increase in oil prices, the drop in total spending caused a major contraction in output and employment, resulting in the deepest recession since 1937-38.

These mistakes all stemmed from a failure by Burns to understand the rationale of an incomes policy. Burns was not alone in that failure, which was actually widespread at the time. But the rationale for such a policy and the key to its implementation had already been spelled out cogently by Ralph Hawtrey in his 1967 diagnosis of the persistent failures of British monetary policy and macroeconomic performance in the post-World War II period, failures that had also been deeply tied up in the misunderstanding of the rationale for, and the implementation of, an incomes policy. Unlike Burns, Hawtrey did not view an incomes policy as a substitute for, or an alternative to, monetary policy to reduce inflation. Rather, an incomes policy was precisely the use of monetary policy to achieve a rate of growth in total spending and income that could be compatible with full employment, provided the rate of growth of wages was consistent with full employment.

In Burns's understanding, the role of an incomes policy was to prevent wage increases from driving up production costs so high that businesses could not operate profitably at maximum capacity unless the Federal Reserve accommodated a further increase in inflation. If the wage increases negotiated by the unions exceeded the level compatible with full employment at the targeted rate of inflation, businesses would reduce output and lay off workers. Faced with the choice between tolerating higher unemployment and accommodating higher inflation, the Fed or any monetary authority would be caught in the dreaded straits of Scylla and Charybdis (aka between a rock and a hard place).

What Burns evidently didn’t understand, or chose to ignore, was that adopting an incomes policy to restrain wage increases did not allow the monetary authority to implement a monetary policy that would cause nominal GDP to rise at a rate faster than was consistent with full employment at the target rate of inflation. If, for example, the growth of the labor force and the expected increase in productivity was consistent with a 4% rate of real GDP growth over time and the monetary authority was aiming for an inflation rate no greater than 3%, the monetary authority could not allow nominal GDP to grow at a rate above 7%.
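The arithmetic here is simple enough to check in a few lines (a trivial sketch using the same assumed 4% and 3% figures as the example above); the 7% ceiling in the text is the usual additive approximation, with exact compounding giving a slightly higher figure:

real_growth = 0.04        # posited sustainable real GDP growth
inflation_target = 0.03   # posited inflation ceiling

additive_cap = real_growth + inflation_target               # 7.00%
exact_cap = (1 + real_growth) * (1 + inflation_target) - 1  # ~7.12%

print(f"additive approximation: {additive_cap:.2%}")
print(f"exact (compounded):     {exact_cap:.2%}")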

This conclusion is subject to the following qualification. During a transition from high unemployment to full employment, a faster rate of nominal GDP growth than the posited 7% rate could hasten the restoration of full employment. But temporarily speeding nominal GDP growth would also require that, as a state of full employment was approached, the growth of nominal GDP be tapered off and brought down to a sustainable rate.

But what if an incomes policy does keep the rate of increase in wages below the rate consistent with 3% inflation? Could the monetary authority then safely conduct a monetary policy that increased the rate of nominal GDP growth in order to accelerate real economic growth without breaching the 3% inflation target? Once again, the answer is that real GDP growth can be accelerated only as long as sufficient slack remains in an economy with less than full employment so that accelerating spending growth does not result in shortages of labor or intermediate products. Once shortages emerge, wages or prices of products in short supply must be raised to allocate resources efficiently and to prevent shortages from causing production breakdowns.

Burns might have pulled off a remarkable feat by ensuring Nixon's re-election in 1972 with a massive monetary stimulus, causing the fastest increase in nominal GDP since the Korean War in Q4 of 1972, while wage-and-price controls ensured that the monetary stimulus would be channeled into increased output rather than accelerating inflation. But that strategy was viable only while sufficient slack remained to allow additional spending to call forth further increases in output rather than cause either price increases or, if wages and prices are subject to binding controls, shortages of supply. Early in 1973, as inflation began to increase and real GDP growth began to diminish, the time to slow down monetary expansion had arrived. But Burns was insensible to the obvious change in conditions.

Here is where we need to focus the discussion directly on Hawtrey's book Incomes and Money. By the time Hawtrey wrote this book, his last, at the age of 87, he had long been eclipsed not only in the public eye, but in the economics profession, by his slightly younger contemporary and fellow Cambridge graduate, J. M. Keynes. For a while in the 1920s, Hawtrey might have been the more influential of the two, but after The General Theory was published, Hawtrey was increasingly marginalized as new students no longer studied his writing, while older economists, who still remembered Hawtrey and were familiar with his work, gradually left the scene. Moreover, as a civil servant for most of his career, Hawtrey never collected around himself a group of disciples who, because they themselves had a personal stake in the ideas of their mentor, would carry on and propagate those ideas. By the end of World War II, Hawtrey was largely unknown to younger economists.

As a graduate student in the early 1970s, I encountered Hawtrey's name only occasionally, mostly in the context of his having been a notable pre-Keynesian monetary theorist whose ideas were of interest mainly to historians of thought. My most notable recollection relating to Hawtrey is of a conversation with Hayek, whose specific context I no longer recall, in which Hayek mentioned Hawtrey to me as an economist whose work had been unduly neglected and whose importance was insufficiently recognized, even while acknowledging that he himself had written critically about what he regarded as Hawtrey's overemphasis on changes in the value of money as the chief cause of business-cycle fluctuations.

It was probably because I remembered that recommendation that when I was in Manhattan years later and happened upon a brand new copy of Incomes and Money on sale in a Barnes and Noble bookstore, I picked it up and bought it. But buying it on the strength of Hayek’s recommendation didn’t lead me to actually read it. I actually can’t remember when I finally did read the book, but it was likely not until after I discovered that Hawtrey had anticipated the gold-appreciation theory of the Great Depression that I had first heard, as a graduate student, from Earl Thompson.

In Incomes and Money, Hawtrey focused not on the Great Depression, which he notably had discussed in earlier books like The Gold Standard and The Art of Central Banking, but on the experience of Great Britain after World War II. That experience was conditioned by the transition from the wartime controls under which Britain had operated in World War II to the partial peacetime decontrol under the Labour government that assumed power at the close of the war. One feature of wartime controls was that, owing to the shortages and rationing caused by price controls, substantial unwanted holdings of cash accumulated in the hands of individuals unable to use their cash to purchase desired goods and services.

The US dollar and the British pound were then the two primary currencies used in international trade, but as long as products were in short supply because of price controls, neither currency could serve as an effective medium of exchange for international transactions, which were largely conducted via managed exchange or barter between governments. After the war, the US moved quickly to decontrol prices, allowing prices to rise sufficiently to eliminate excess cash, thereby enabling the dollar to again function as an international medium of exchange and creating a ready demand to hold dollar balances outside the US. The Labour government being ideologically unwilling to scrap price controls, excess holdings of pounds within Britain could only be disposed of insofar as they could be exchanged for dollars with which products could be procured from abroad.

There was therefore intense British demand for dollars but little or no American demand for pounds, an imbalance reflected in a mounting balance-of-payments deficit. The balance-of-payments deficit was misunderstood and misinterpreted as an indication that British products were uncompetitive, British production costs (owing to excessive British wages) supposedly being too high to allow the British products to be competitive in international markets. If British production costs were excessive, then the appropriate remedy was either to cut British wages or to devalue the pound to reduce the real wages paid to British workers. But Hawtrey maintained that the balance-of-payments deficit was a purely monetary phenomenon — an excess supply of pounds and an excess demand for dollars — that could properly be remedied either by withdrawing excess pounds from the holdings of the British public or by decontrolling prices so that excess pounds could be used to buy desired goods and services at market-clearing prices.

Thus, almost two decades before the Monetary Approach to the Balance of Payments was developed by Harry Johnson, Robert Mundell and associates, Hawtrey had already in the 1940s anticipated its principal conclusion that a chronic balance-of-payments disequilibrium results from a monetary policy that creates either more or less cash than the public wishes to hold rather than a disequilibrium in its exchange rate. If so, the remedy for the disequilibrium is not a change in the exchange rate, but a change in monetary policy.

In his preface to Incomes and Money, Hawtrey set forth the main outlines of his argument.

This book is primarily a criticism of British monetary policy since 1945, along with an application of the criticism to questions of future policy.

The aims of policy were indicated to the Radcliffe Committee in 1957 in a paper on Monetary Policy and the Control of Economic Conditions: "The primary object of policy has been to combine a high and stable level of employment with a satisfactory state of the balance of payments". When Sir Robert Hall was giving oral evidence on behalf of the Treasury, Lord Radcliffe asked, "Where does sound money as an objective stand?" The reply was that "there may well be a conflict between the objective of high employment and the objective of sound money", a dilemma which the Treasury did not claim to have solved.

Sound money here meant price stability, and Sir Robert Hall admitted that "there has been a practically continuous rise in the price level". The rise in prices of manufactures since 1949 had in fact been 40 percent. The wage level had risen 70 percent.

Government pronouncements ever since 1944 had repeatedly insisted that wages ought not to rise more than in proportion to productivity. This formula, meaning in effect a stable price level of home production, embodies the incomes policy which is now professed by all parties. But it has never been enforced through monetary policy. It has only been enjoined by exhortation and persuasion. (p. ix)

The lack of commitment to a policy of stabilizing the price level was the key point for Hawtrey. If policy makers desired to control the rise in the price level by controlling the increase in incomes, they could, in Hawtrey’s view, only do so by way of a monetary policy whose goal was to keep total spending (and hence total income) at a level – or on a path – that was consistent with the price-level objective that policy-makers were aiming for. If there was also a goal of full employment, then the full-employment goal could be achieved only insofar as the wage rates arrived at in bargaining between labor and management were consistent with the targeted level of spending and income.

Incomes policy and monetary policy cannot be separated. Monetary policy includes all those measures by which the flow of money can be accelerated or retarded, and it is by them that the money value of a given structure of incomes is determined. If monetary policy is directed by some other criterion than the desired incomes policy, the income policy gives way to the other criterion. In particular, if monetary policy is directed to maintaining the money unit at a prescribed exchange rate parity, the level of incomes will adapt itself to this parity and not to the desired policy.

When the exchange parity of sterling was fixed in 1949 at $2.80, the pound had already been undervalued at the previous rate of $4.03. The British wage level was tied by the rate of exchange to the American. The level of incomes was predetermined, and there was no way for an incomes policy to depart from it. Economic forces came into operation to correct the undervaluation by an increase in the wage level. . . .

It was a paradox that the devaluation, which had been intended as a remedy for an adverse balance of payments, induced an inflation which was liable itself to cause an adverse balance. The undervaluation did indeed swell the demand for British exports, but when production passed the limit of capacity, and output could not be further increased, the monetary expansion continued by its own momentum. Demand expanded beyond output and attracted an excess of imports. There was no dilemma, because the employment situation and the balance of payments situation both required the same treatment, a monetary contraction. The contraction would not cause unemployment, provided it went no further than to eliminate over-employment.

The White Paper of 1956 on the Economic Implications of Full Employment, while confirming the Incomes Policy of price stabilization, placed definitely on the Government the responsibility for regulating the pressure of demand through “fiscal, monetary and social policies”. The Radcliffe Committee obtained from the Treasury the admission that this was not being done. No measures other than persuasion and exhortation were being taken to give effect to the incomes policy. Reluctant as the authorities were to resort to deflation, they nevertheless imposed a Bank rate of 7 per cent and other contractive measures to cope with a balance of payments crisis at the very moment when the Treasury representative were appearing before the Committee. But that did not mean that they were prepared to pursue a contractive policy in support of the incomes policy. The crises of 1957 and 1961 were no more than episodes, temporarily interfering with the policy of easy credit and expansion. The crisis of 1964-6 has been more than an episode, only because the deflationary measures were long delayed, and when taken, were half-hearted.

It would be unfair to impute the entire responsibility for these faults of policy to Ministers. They are guided by their advisers, and they can plead in their defence that their misconceptions have been shared by the vast majority of economists. . . .

The fault of traditional monetary theory has been that it is static, and that is still true of Keynes’s theory. But a peculiarity of monetary policy is that, whenever practical measures have to be taken, the situation is always one of transition, when the conditions of static equilibrium have been departed from. The task of policy is to decide the best way to get back to equilibrium, and very likely to choose which of several alternative equilibrium positions to aim at. . . .

An incomes policy, or a wages policy, is the indispensable means of stabilizing the money unit when an independent metallic standard has failed us. Such a policy can only be given effect by a regulation of credit. The world has had long experience of the regulation of credit for the maintenance of a metallic standard. Maintenance of a wages standard requires the same instruments but will be more exacting because it will be guided by many symptoms instead of exclusively by movements of gold, and because it will require unremitting vigilance instead of occasional interference. (pp. ix-xii)

The confusion identified by Hawtrey, between an incomes policy aiming, through the appropriate conduct of monetary policy, at a level of income consistent with full employment at a given level of wages, and an incomes policy aiming at the direct control of wages, was precisely the confusion that led to the consistent failure of British monetary policy after World War II and to the failure of Arthur Burns. The essence of an incomes policy was to control total spending by way of monetary policy while gaining the cooperation of labor unions and business to prevent wage increases that would be inconsistent with full employment at the targeted level of income. Only monetary policy could determine the level of income, and the only role of exhortation and persuasion or direct controls was to prevent excessive wage increases that would prevent full employment from being achieved at the targeted income level.

After the 1949 devaluation, the Labour government appealed to the labour unions, its chief constituency, not to demand wage increases larger than productivity increases, so that British exporters could maintain the competitive advantage provided them by devaluation. Understanding that the protectionist motive for devaluation was to undervalue the pound with a view to promoting exports and discouraging imports, Hawtrey also explained why the protectionist goal had been subverted by the low interest-rate, expansionary monetary policy adopted by the Labour government to keep unemployment well below 2 percent.

British wages rose therefore not only because the pound was undervalued, but because monetary expansion increased aggregate demand faster than British productive capacity was increasing, adding further upward pressure on British wages and labor costs. Excess aggregate demand in Britain also meant that domestic output that might have been exported was instead sold to domestic customers, while drawing in imports to satisfy the unmet demands of domestic consumers, so that the British trade balance showed little improvement notwithstanding a devaluation of over 30%.

In this analysis, Hawtrey anticipated Max Corden's theory of exchange-rate protection, identifying the essential mechanism by which a nominal exchange rate is manipulated so as to subsidize the tradable-goods sector (domestic export industries and domestic import-competing industries): a tight-money policy that creates an excess demand for cash, thereby forcing the public to reduce spending as they try to accumulate the desired increases in cash holdings. The reduced demand for home production, as spending is reduced, results in a shift of productive resources from the non-tradable- to the tradable-goods sector.

To sum up, what Burns might have learned from Hawtrey was that even if some form of control of wages was essential for maintaining full employment in an economic environment in which strong labor unions could bargain effectively with employers, that control over wages did not — and could not — free the central bank from its responsibility to control aggregate demand and the growth of total spending and income.

Arthur Burns and How Things Fell Apart in the 1970s

Back in 2013 Karl Smith offered a startling rehabilitation of Arthur Burns’s calamitous tenure as Fed Chairman, first under Richard Nixon who appointed him, later under Gerald Ford who reappointed him, and finally, though briefly, under Jimmy Carter who did not reappoint him. Relying on an academic study of Burns by Fed economist Robert Hetzel drawing extensively from Burns’s papers at the Fed, Smith argued that Burns had a more coherent and sophisticated view of how the economy works and of the limitations of monetary policy than normally acknowledged by the standard, and almost uniformly negative, accounts of Burns’s tenure, which portray Burns either as a willing, or as a possibly reluctant, and even browbeaten, accomplice of Nixon in deploying Fed powers to rev up the economy and drive down unemployment to ensure Nixon’s re-election in 1972, in willful disregard of the consequences of an overdose of monetary stimulus.

According to Smith, Burns held a theory of inflation in which the rate of inflation corresponds to the average, or median, expected rate of inflation held by the public. (I actually don't disagree with this at all, and it's important, but I don't think it's enough to rationalize Burns's conduct and policies as Fed chairman.) When, as was true in the 1970s, wages were determined through collective bargaining between big corporations and big labor unions, the incentive of every union was to negotiate contracts providing members with wage increases not less than the average rate of wage increase being negotiated by other unions.

Given the pressure on all unions to negotiate higher-than-average wage increases, using monetary policy to reduce inflation would inevitably cause aggregate spending to fall short of the level needed to secure full employment, but without substantially moderating the rate of increase in wages and prices. As long as the unions were driven to negotiate increasing rates of wage increase for their members, increasing rates of wage inflation could be accommodated only by ever-increasing growth rates in the economy or by progressive declines in the profit share of business. But without accelerating real economic growth or a declining profit share, union demands for accelerating wage increases could be accommodated only by accelerating inflation and corresponding increases in total spending.

But rising inflation triggers political demands for countermeasures to curb inflation. Believing the Fed incapable of controlling inflation through monetary policy, because restrictive monetary policy affects output and employment rather than wages and prices, Burns concluded that inflation could be controlled only by limiting the wage increases negotiated between employers and unions. Control over wages, Burns argued, would cause inflation expectations to moderate, thereby allowing monetary policy to reduce aggregate spending without reducing output and employment.

This, at any rate, was the lesson that Burns drew from the short and relatively mild recession of 1970, after he assumed the Fed chairmanship, in which unemployment rose to 6 percent from less than 4 percent, with only a marginal reduction in inflation from the pre-recession rate of 4-5%. Nixon, fearing his bid for re-election would fail, berated Burns, blaming him for a weak recovery that, Nixon believed, had resulted in substantial Republican losses in the 1970 midterm elections, just as a Fed-engineered recession in 1960 had led to his own loss to John Kennedy in the 1960 Presidential election. Here is how Burns described the limited power of monetary policy to reduce inflation.

The hard fact is that market forces no longer can be counted on to check the upward course of wages and prices even when the aggregate demand for goods and services declines in the course of a business recession. During the recession of 1970 and the weak recovery of early 1971, the pace of wage increases did not at all abate as unemployment rose….The rate of inflation was almost as high in the first half of 1971, when unemployment averaged 6 percent of the labor force, as it was in 1969, when the unemployment rate averaged 3 1/2 percent….Cost-push inflation, while a comparatively new phenomenon on the American scene, has been altering the economic environment in fundamental ways….If some form of effective control over wages and prices were not retained in 1973, major collective bargaining settlements and business efforts to increase profits could reinforce the pressures on costs and prices that normally come into play when the economy is advancing briskly, and thus generate a new wave of inflation. If monetary and fiscal policy became sufficiently restrictive to deal with the situation by choking off growth in aggregate demand, the cost in terms of rising unemployment, lost output, and shattered confidence would be enormous.

So in 1971 Burns began advocating for what was then called an incomes policy whose objective was to slow the rate of increase in wages being negotiated by employers and unions so that full employment could be maintained while inflation was reduced. Burns declared the textbook rules of economics obsolete, because big labor and big business had become impervious to the market forces that, in textbook theory, were supposed to discipline wage demands and price increases in the face of declining demand. The ability of business and labor to continue to raise prices and wages even in a recession made it impossible to control inflation by just reducing the rate of growth in total spending. As Burns wrote:

. . . the present inflation in the midst of substantial unemployment poses a problem that traditional monetary and fiscal policy remedies cannot solve as quickly as the national interest demands. That is what has led me…to urge additional governmental actions involving wages and prices….The problem of cost-push inflation, in which escalating wages lead to escalating prices in a never-ending circle, is the most difficult economic issue of our time.

As for excessive power on the part of some of our corporations and our trade unions, I think it is high time we talked about that in a candid way. We will have to step on some toes in the process. But I think the problem is too serious to be handled quietly and politely….we live in a time when there are abuses of economic power by private groups, and abuses by some of our corporations, and abuses by some of our trade unions.

Relying on statements like these, Karl Smith described Burns's strategy as Fed Chairman as a sophisticated approach to the inflation and unemployment problems facing the US in the early 1970s, when organized labor exercised substantial market power, making it impossible for monetary policy to control inflation without bearing an unacceptable cost in lost output and employment, with producers unable to sell the output that could be produced at prices sufficient to cover their costs (largely determined by union contracts already agreed to). But the rub is that even if unions recognized that their wage demands would result in unemployment, they would still find it in their self-interest not to moderate their wage demands.

[T]he story here is pretty sophisticated and well beyond the simplistic tale of wage-price spirals I heard as an econ student. The core idea is that while unions and corporations are nominally negotiating with each other, the real action is an implicit game between various unions.

One union, say the autoworkers, pushes for higher wages. The auto industry will consent and then the logic of profit maximization dictates that industry push at least some, if not all, of that cost on to their customers as high beer prices, and the rest on to their investors as a lower dividends and the government as lower taxes (since profits are lower.)

Higher prices for cars, increases the cost of living for most workers in the economy and thus lowers their real wages. In response, those workers will ask for a raise. Its straightforward how this will echo through the economy raising all prices. The really sexy part, however, is yet to come. The autoworkers union understands that all of this is going to happen, and so they push for even higher wages, to compensate them for the loss they know they are going to experience through the resulting ripple of price increases throughout the country.

Now, one might say – shouldn’t the self-defeating nature of this exercise be obvious and lead union leaders to give up? Oh [sic] contraire! The self-defeating nature of the enterprise demands that they participate. Suppose all unions except one stopped demanding excessive wage increases. Then the general increase in prices would stop and that one union would receive a huge windfall. Thus, there is a prisoners dilemma encouraging all unions to seek unreasonably high wage increases.

Yet, the plot thickens still. This upward push in prices factors into expectations throughout the entire economy, so that interest rates, asset prices, etc. are all set on the assumption that the upward push will continue. At that point the upward push must continue or else there will be major dislocations in financial markets. And, in order to accommodate that push the Fed must print more money. . . .

So, casting Burn’s view in our modern context would go something like this. Unemployment rises when inflation falls short of expected inflation. Expected inflation is determined by how much consumers think major corporations will raises their prices. Corporations plan price raises based on what they expect their unions to demand. Unions set their demands based on what they expect other unions to do. “Other unions” are always expected to make unreasonable demands because the unions are locked in prisoners dilemma. Actual inflation tends towards expected inflation unless the Fed curtails money growth.

Thus the Federal Reserve could only halt inflation by refusing to play along. . . In general, high unemployment would persist for however long it took to breakdown this entire chain of expectations. Moreover, unless the power of unions was broken the cycle would simply start back immediately after the disinflation.

No, instead the government had to find a way to get all participants in the economy to expect low inflation. How to do this? Outlaw inflation. Then unemployment need not rise since everyone expects the law to be followed. At that point the Federal Reserve could slow money creation without doing damage to the economy. Wage and price controls are thus a means of coordinating expectations.

Burns's increasingly outspoken advocacy of an incomes policy, which intensified after Nixon began pressing him to ease monetary policy in time to ensure Nixon's re-election, bore fruit in August 1971, when Nixon announced a 90-day wage-and-price freeze to be followed by continuing wage-and-price controls intended to keep inflation below 3% thereafter. Relieved of responsibility for controlling inflation, Burns was liberated to provide the monetary stimulus on which Nixon was insisting.

I pause here to note that Nixon had no doubt about the capacity of the Fed to deliver the monetary stimulus he was demanding, and there is no evidence that I am aware of to suggest that Burns told Nixon that the Fed was not in a position to provide the desired stimulus.

To place the upsurge in total spending presided over by Burns in context, the figure below shows the year-over-year increase in nominal GDP from the first quarter of 1960, in the last year of the Eisenhower administration, when the economy was in recession, through the second quarter of 1975, when the economy had just begun to recover from the 1974-75 recession. That recession followed the Yom Kippur War between Israel and its Arab neighbors, which triggered an Arab embargo of oil shipments to the United States and a cutback in total world output of oil, resulting in a quadrupling of crude oil prices within a few months.


https://fred.stlouisfed.org/graph/?graph_id=552628

As Hetzel documents, Burns sought to minimize the magnitude of the monetary stimulus provided after August 15, 1971, instead attributing inflation to special factors like increasing commodity prices and devaluation of the dollar, as if rising commodity prices were an independent cause of inflation, rather than a manifestation of it, and as if devaluation of the dollar were some sort of random non-monetary event. Rising commodity prices, such as the increase in oil prices after the Arab oil embargo, and even devaluation of the dollar could, indeed, be the result of external non-monetary forces. But the Arab oil embargo did not take place until late in 1973, when inflation, wage-and-price controls notwithstanding, had already surged well beyond acceptable limits, and commodity prices were rising rapidly, largely because of high demand, not because of supply disruptions. And it strains credulity to suppose that devaluation of the dollar was not primarily the result of the cumulative effects of monetary policy over a long period of time rather than a sudden shift in the terms of trade between the US and its trading partners.

Burns also blamed loose fiscal policy in the 1960s for the inflation that started rising in the late 1960s. But the notion that loose fiscal policy significantly affected inflation is inconsistent with the fact that the federal budget deficit exceeded 2% of GDP only once (in 1968) between 1960 and 1974.

It's also clear that the fluctuations in the growth rate of nominal GDP in the figure above were quite closely related to changes in Fed policy. The rise in nominal GDP growth after the 1960 recession followed an easing of Fed policy, while the dip in nominal GDP growth in 1966-67 was induced by a deliberate tightening by the Fed as a preemptive move against inflation, a move abandoned because of a credit crunch that adversely affected mortgage lending and the home-building industry.

When Nixon took office, NGDP in Q1 1969 was 9.3% higher than in Q1 1968, the fourth consecutive quarter in which the rate of NGDP increase was between 9 and 10%. Since Nixon had pledged to reduce inflation without wage-and-price controls or tax increases, the only anti-inflation tool left in his quiver was monetary policy. The Fed, then under the leadership of William McChesney Martin, first appointed by President Truman, was thus expected to tighten monetary policy.

A moderate tightening, reflected in modestly slower increases in NGDP in the remainder of 1969 (8% in Q2, 8.3% in Q3 and 7.2% in Q4), began almost immediately. The slowdown in the growth of spending did little to subdue inflation, leading instead to a slowing of real GDP growth, but without increasing unemployment. Not until 1970, after Burns replaced Martin at the Fed and further tightened monetary policy, causing nominal spending growth to slow further (5.8% in Q1 and Q2, 5.4% in Q3 and 4.9% in Q4), did real GDP growth stall, with unemployment rising sharply from less than 4% to just over 6%. The economy having expanded and unemployment having fallen almost continuously since 1961, the sharp rise in unemployment provoked a strong outcry and political reaction, spurring big Democratic gains in the 1970 midterm elections.

After presiding over the first recession in almost a decade, Burns, under pressure from Nixon, reversed course, easing monetary policy to fuel a modest recovery in 1971, with nominal GDP growth increasing to rates higher than in 1969 and almost as high as in 1968 (8% in Q1, 8.3% in Q2 and 8.4% in Q3). It was at the midpoint of Q3 (August 15) that Nixon imposed a 90-day wage-and-price freeze, and nominal GDP growth accelerated to 9.3% in Q4 (the highest rate since Q4 1968). With costs held in check by the wage-and-price freeze, the increase in nominal spending induced a surge in output and employment.

In 1972, nominal GDP growth, after a slight deceleration in Q1, accelerated to 9.5% in Q2, to 9.6% in Q3 and to 11.6% in Q4, a growth rate maintained during 1973. So Burns’s attempt to disclaim responsibility for the acceleration of inflation associated with accelerating growth in nominal spending and income between 1971 and 1973 was obviously disingenuous and utterly lacking in credibility.

Hetzel summed up Burns’s position after the imposition of wage-and-price controls as follows:

More than anyone else, Burns had created widespread public support for the wage and price controls imposed on August 15, 1971. For Burns, controls were the prerequisite for the expansionary monetary policy desired by the political system—both Congress and the Nixon Administration. Given the imposition of the controls that he had promoted, Burns was effectively committed to an expansionary monetary policy. Moreover, with controls, he did not believe that expansionary monetary policy in 1972 would be inflationary.

Perhaps Burns really did believe that an expansionary monetary policy would not be inflationary with wage-and-price controls in place. But if that's what Burns believed, he was in a state of utter confusion. An expansionary monetary policy conducted under cover of wage-and-price controls could contain inflation only as long as there was sufficient excess capacity and unemployment for increased aggregate spending to induce increased output and employment rather than create shortages of products and resources that would drive up costs and prices. In suppressing the pressure of rising costs and prices, wage-and-price controls would inevitably distort relative prices and create shortages, leading to ever-increasing and cascading waste and inefficiency, and eventually to declining output. That's what began to happen in 1973, making it politically impossible, to Burns's chagrin, to re-authorize continuation of those controls after the initial grant of authority expired in April 1974.

I discussed the horrible legacy of Nixon's wage-and-price freeze and the subsequent controls in one of my first posts on this blog, so I needn't repeat myself here about the damage done by controls; the point I do want to emphasize, Karl Smith to the contrary notwithstanding, is how incoherent Burns's thinking was in assuming that a monetary policy leading aggregate spending to rise at a rate exceeding 11% for four consecutive quarters wasn't seriously inflationary.

If monetary policy is such that nominal GDP is growing at an 11% rate, while real GDP grows at a 4% rate, the difference between those two numbers will necessarily manifest itself in 7% inflation. If wage-and-price controls suppress inflation, the suppressed inflation will be manifested in shortages and other economic dislocations, reducing the growth of real GDP and causing an unwanted accumulation of cash balances, which is what eventually happened under wage-and-price controls in late 1973 and 1974. Once an economy is operating at full capacity, as it surely was by the end of 1973, there could have been no basis for thinking that real GDP could increase at substantially more than a 4% rate, which is why real GDP growth diminished quarter by quarter in 1973 from 7.6% in Q1 to 6.3% in Q2 to 4.8% in Q3 and 4% in Q4.
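The same identity can be run in the other direction (again a back-of-the-envelope check using the figures in the paragraph above, not data): given 11% nominal spending growth and roughly 4% sustainable real growth, the implied inflation rate is approximately the 7% difference.

ngdp_growth = 0.11            # observed growth of nominal spending
real_capacity_growth = 0.04   # plausible ceiling on real GDP growth

implied_inflation = (1 + ngdp_growth) / (1 + real_capacity_growth) - 1
print(f"implied inflation: {implied_inflation:.2%}")  # 6.73%, roughly 7%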

Thus, in 1973, even without the oil shock in late 1973 that Burns used as an excuse with which to deflect the blame for rising inflation from himself to uncontrollable external forces, Burns's monetary policy was inexorably on track to raise inflation to 7%. Bad as the situation was before the oil shock, Burns chose to make it worse by tightening monetary policy just as oil prices were quadrupling. It was the worst possible time to tighten policy, because the negative supply shock associated with the rise in oil and other energy prices would likely have led the economy into a recession even if monetary policy had not been tightened.

I am planning to write another couple of posts on what happened in the 1970s, actually going back to the late sixties and forward to the early eighties. The next post will be about Ralph Hawtrey's last book, Incomes and Money, in which he discussed the logic of incomes policies. Arthur Burns would have done well to study that book, which could have provided him with a better approach to monetary policy than his incoherent embrace of an incomes policy divorced from any notion of the connection between monetary policy and aggregate spending and nominal income. So stay tuned, but it may take a couple of weeks before the next installment.

James Buchanan Calling the Kettle Black

In the wake of the tragic death of Alan Krueger, attention has been drawn to an implicitly defamatory statement by James Buchanan about those who, like Krueger, dared question the orthodox position taken by most economists that minimum-wage laws increase unemployment among low-wage, low-skilled workers whose productivity, at the margin, is less than the minimum wage that employers are required to pay employees.

Here is Buchanan’s statement:

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the presupposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teachings of two centuries; we have not yet become a bevy of camp-following whores.

Wholly apart from its odious metaphorical characterization of those he was criticizing, Buchanan’s assertion was substantively problematic in two respects. The first, which is straightforward and well-known, and which Buchanan was obviously wrong not to acknowledge, is that there are obvious circumstances in which a minimum-wage law could simultaneously raise wages and reduce unemployment without contradicting the inverse relationship between quantity demanded and price. Such circumstances obtain whenever employers exercise monopsony power in the market for unskilled labor. If employers realize that hiring additional low-skilled workers drives up the wage paid to all the low-skilled workers that they employ, not just the additional ones hired, the wage paid by employers will be less than the value of the marginal product of labor. If employers exercise monopsony power, then the divergence between the wage and the marginal product is not a violation, but an implication, of the inverse relationship between quantity demanded and price. If Buchanan had written on his price-theory preliminary exam for a Ph.D. at Chicago that support for a minimum wage could be rationalized only by denying the inverse relationship between quantity demanded and price, he would have been flunked.
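To see the monopsony logic concretely, here is a minimal sketch with purely hypothetical linear supply and marginal-revenue-product schedules; none of these numbers come from any actual labor market. Because the monopsonist’s marginal cost of labor exceeds the wage, a wage floor set between the monopsony wage and the competitive wage raises both the wage and employment:

```python
# Illustrative monopsony arithmetic; all schedules and numbers are hypothetical.
# Labor supply: w(L) = a + b*L  (wage needed to attract L workers)
# Marginal revenue product of labor: MRP(L) = c - d*L

a, b = 5.0, 0.5    # supply intercept and slope
c, d = 20.0, 0.5   # MRP intercept and slope

# Monopsonist: total labor cost is (a + b*L)*L, so marginal labor cost is
# a + 2*b*L, which exceeds the wage. It hires where MRP = marginal labor cost.
L_monopsony = (c - a) / (2 * b + d)
w_monopsony = a + b * L_monopsony       # wage read off the supply curve

# Competitive benchmark: hiring where MRP equals the wage on the supply curve.
L_competitive = (c - a) / (b + d)
w_competitive = a + b * L_competitive

# A binding wage floor between the two makes labor cost flat at w_min, so the
# employer hires until MRP falls to w_min (or until supply at w_min runs out).
w_min = 11.0                            # assumed floor, between the two wages
L_minwage = min((w_min - a) / b,        # workers willing to work at w_min
                (c - w_min) / d)        # workers demanded at w_min

print(f"monopsony:    L = {L_monopsony:.0f}, w = {w_monopsony:.2f}")   # L=10, w=10.00
print(f"competitive:  L = {L_competitive:.0f}, w = {w_competitive:.2f}")  # L=15, w=12.50
print(f"wage floor:   L = {L_minwage:.0f}, w = {w_min:.2f}")           # L=12, w=11.00
```

In this stylized example the floor raises employment from 10 to 12 while raising the wage from 10 to 11, exactly the possibility Buchanan refused to acknowledge.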

The second problem with Buchanan’s position is less straightforward and less well-known, but more important, than the first. The inverse relationship by which Buchanan set such great store is valid only if qualified by a ceteris paribus condition. Demand is a function of many variables of which price is only one. So the inverse relationship between price and quantity demanded is premised on the assumption that all the other variables affecting demand are held (at least approximately) constant.

Now it’s true that even the law of gravity is subject to a ceteris paribus condition; the law of gravity will not control the movement of objects in a magnetic field. And it would be absurd to call a physicist an advocate for ideological interests just because he recognized that possibility.

Of course, the presence or absence of a magnetic field is a circumstance that can be easily ascertained, thereby enabling a physicist to alter his prediction of the movement of an object according as the relevant field for predicting the motion of the object under consideration is gravitational or magnetic. But the magnitude and relevance of other factors affecting demand are not so easily taken into account by economists. That’s why applied economists try to focus on markets in which the effects of “other factors” are small or on markets in which “other factors” can easily be identified and measured or treated qualitatively as fixed effects.

But in some markets the factors affecting demand are themselves interrelated, so that the ceteris paribus assumption can’t be maintained. Such markets can’t be analyzed in isolation; they can only be analyzed as a system in which all the variables are jointly determined. Economists call the analysis of an isolated market partial-equilibrium analysis. And it is partial-equilibrium analysis that constitutes the core of price theory and microeconomics. The ceteris paribus assumption has to be maintained either by assuming that changes in the variables other than price affecting demand and supply are inconsequential or by identifying other variables whose changes could affect demand and supply and either measuring them quantitatively or at least accounting for them qualitatively.

But labor markets, except at a granular level, when the focus is on an isolated region or a specialized occupation, cannot be modeled usefully with the standard partial-equilibrium techniques of price theory, because income effects and interactions between related markets cannot appropriately be excluded from the partial-equilibrium analysis of supply and demand in a broadly defined market for labor. The determination of the equilibrium price in a market that encompasses a substantial share of economic activity cannot be isolated from the determination of the equilibrium prices in other markets.

Moreover, the idea that the equilibration of any labor market can be understood within a partial-equilibrium framework in which the wage responds to excess demands for, or excess supplies of, labor just as the price of a standardized commodity adjusts to excess demands for, or excess supplies of, that commodity, reflects a gross misunderstanding of the incentives of employers and workers in reaching wage bargains for the differentiated services provided by individual workers. Those incentives are in no way comparable to the incentives of businesses to adjust the prices of their products in response to excess supplies of or excess demands for those products.

Buchanan was implicitly applying an inappropriate paradigm of price adjustment in a single market to the analysis of how wages adjust in the real world. The truth is we don’t have a good understanding of how wages adjust, and so we don’t have a good understanding of the effects of minimum wages. But in arrogantly and insultingly dismissing Krueger’s empirical research on the effects of minimum wage laws, Buchanan was unwittingly exposing not Krueger’s ideological advocacy but his own.

There They Go Again (And Now They’re Back!)

Note: On August 5, 2011, one month after I started blogging, I wrote the following post responding to an op-ed in the Wall Street Journal by David Malpass, an op-ed remarkable for its garbled syntax, analytical incoherence, and factual misrepresentations. All in all, quite a performance. Today, exactly seven and a half years later, we learn that the estimable Mr. Malpass, currently serving as Undersecretary for International Affairs in the U.S. Treasury Department, is about to be nominated to become the next President of the World Bank.

In today’s Wall Street Journal, David Malpass, who, according to the bio, used to be a deputy assistant undersecretary of the Treasury in the Reagan administration, and is now President of something called Encima Global LLC (his position as Chief Economist at Bear Stearns was somehow omitted) carries on about the terrible damage inflicted by the Fed on the American economy.

The U.S. is practically alone in the world in pursuing a near-zero interest rate and letting its central bank leverage to the hilt to buy up the national debt. By choosing to pay savers nearly nothing, the Fed’s policy discourages thrift and is directly connected to the weakness in personal income.

Where Mr. Malpass gets his information, I haven’t a clue, but looking at the table of financial and trade statistics on the back page of the July 16 edition of the Economist, I see that in addition to the United States, Japan, Switzerland, Hong Kong, and Singapore, had 3-month rates less than 0.5%.  Britain, Canada, and Saudi Arabia had rates between 0.5 and 1%.  The official rate of the Swedish Riksbank is now 2.5%, but it held the rate at 0.5% until economic conditions improved.

As for Malpass’s next sentence, where to begin? I won’t dwell on the garbled syntax, but, even if that were its intention, the Fed is obviously not succeeding in discouraging thrift, as private indebtedness has been falling consistently over the past three years. The question is whether it would be good for the economy if people were saving even more than they are now, and the answer to that, clearly, is: not unless there was a great deal more demand by private business to invest than there is now. Why is business not investing? Despite repeated declamations about the regulatory overkill and anti-business rhetoric of the Obama administration, no serious observer doubts that the main obstacle to increased business investment is that expected demand does not warrant investments aimed at increasing capacity when existing capacity is not being fully utilized. And for the life of me I cannot tell what it is that Mr. Malpass thinks is connected to the weakness in personal income. Nor am I so sure that I know what “weakness in personal income” even means.

From here Malpass meanders into the main theme of his tirade which is how terrible it is that we have a weak dollar.

One of the fastest, most decisive ways to restart U.S. private-sector job growth would be to end the Fed’s near-zero interest rate and the Bush-Obama weak-dollar policy. As Presidents Reagan and Clinton showed, sound money is a core growth strategy—the fastest and most effective way to tell world capital that the U.S. is back in business.

Mr. Malpass served in the Reagan administration, so I would have expected him to know something about what happened in that administration. Obviously, my expectations were too high. According to the Federal Reserve’s index of the trade-weighted dollar exchange rate, the dollar stood at 95.66 when Reagan took office in January 1981 and at 90.82 when Reagan left office 8 years later. Now it is true that the dollar rose rapidly in Reagan’s first term, reaching about 141 in May 1985, but it fell even faster for the remainder of Reagan’s second term. So what exactly is the lesson that Mr. Malpass thinks the Reagan administration taught us? Certainly the reduction in the dollar exchange rate in Reagan’s second term was much greater than the reduction in the exchange rate so far under Mr. Obama, from about 83 to 68.

Then going in for the kill, Mr. Malpass warns us not to repeat Japan’s mistakes.

Only Japan, after the bursting of its real-estate bubble in 1990, has tried anything similar to U.S. policy. For close to a decade, Tokyo pursued a policy of amped-up government spending, high tax rates, zero-interest rates and mega-trillion yen central-bank buying of government debt. The weak recovery became a deep malaise, with Japan’s own monetary officials warning the U.S. not to follow their lead.

Funny, Mr. Malpass seems to forget that Japan also pursued the sound money policy that he extols.  Consider the foreign exchange value of the yen.   In April 1990, the yen stood at 159 to the dollar.  Last week it was at 77 to the dollar.  Sounds like a strong yen policy to me.  Is that the example Mr. Malpass wants us to follow?

Actually the Wall Street Journal in its editorial today summed up its approach to economic policy making rather well.

The Keynesians have fired all their ammo, and here we are, going south.  Maybe now President Obama should consider everything he’s done to revive the American economy — and do the opposite.

That’s what it comes down to for the Journal.  If Obama is for it, we’re against it.  Simple as that.  Leave your brain at the door.

Friedman and Schwartz, Eichengreen and Temin, Hawtrey and Cassel

Barry Eichengreen and Peter Temin are two of the great economic historians of our time, writing, in the splendid tradition of Charles Kindleberger, profound and economically acute studies of the economic and financial history of the nineteenth and early twentieth centuries. Most notably they have focused on periods of panic, crisis and depression, of which by far the best-known and most important episode is the Great Depression that started late in 1929, bottomed out early in 1933, but lingered on for most of the 1930s. They are rightly acclaimed for having emphasized and highlighted the critical role of the gold standard in the Great Depression, a role largely overlooked in the early Keynesian accounts of the Great Depression. Those accounts identified a variety of specific shocks, amplified by the volatile entrepreneurial expectations and animal spirits that drive, or dampen, business investment, and further exacerbated by inherent instabilities in market economies that lack self-stabilizing mechanisms for maintaining or restoring full employment.

That Keynesian vision of an unstable market economy vulnerable to episodic, but prolonged, lapses from full employment was vigorously, but at first unsuccessfully, disputed by advocates of free-market economics. It wasn’t until Milton Friedman provided an alternative narrative explaining the depth and duration of the Great Depression that the post-war dominance of Keynesian theory among academic economists was seriously challenged. Friedman’s alternative narrative of the Great Depression was first laid out in the longest chapter (“The Great Contraction”) of his magnum opus, co-authored with Anna Schwartz, A Monetary History of the United States. In Friedman’s telling, the decline in the US money stock was the critical independent causal factor that directly led to the decline in prices, output, and employment. The contraction in the quantity of money was not caused by the inherent instability of free-market capitalism, but, owing to a combination of incompetence and dereliction of duty, by the Federal Reserve.

In the Monetary History of the United States, all the heavy lifting necessary to account for both secular and cyclical movements in the price level, output and employment is done by supposedly exogenous changes in the nominal quantity of money, Friedman having considered it to be of the utmost significance that the largest movements in the quantity of money, and in prices, output and employment, occurred during the Great Depression. The narrative arc of the Monetary History was designed to impress on the mind of the reader the axiomatic premise that the monetary authority has virtually absolute control over the quantity of money, which served as the basis for inferring that changes in the quantity of money are what cause changes in prices, output and employment.

Friedman’s treatment of the gold standard (which I have discussed here, here and here) was both perfunctory and theoretically confused. Unable to reconcile the notion that the monetary authority has absolute control over the nominal quantity of money with the proposition that the price level in any country on the gold standard cannot deviate from the price levels of other gold standard countries without triggering arbitrage transactions that restore the equality between the price levels of all gold standard countries, Friedman dodged the inconsistency by repeatedly invoking his favorite fudge factor: long and variable lags between changes in the quantity of money and changes in prices, output and employment. Despite its vacuity, the long-and-variable-lag dodge allowed Friedman to ignore the inconvenient fact that the US price level in the Great Depression did not and could not vary independently of the price levels of all the other countries then on the gold standard.

I’ll note parenthetically that Keynes himself was also responsible for this unnecessary and distracting detour, because the General Theory was written almost entirely in the context of a closed economy model with an exogenously determined quantity of money, thereby unwittingly providing Friedman with a useful tool with which to propagate his Monetarist narrative. The difference of course is that Keynes, as demonstrated in his brilliant early works, Indian Currency and Finance, A Tract on Monetary Reform, and The Economic Consequences of Mr. Churchill, had a correct understanding of the basic theory of the gold standard, an understanding that, owing to his obsessive fixation on the nominal quantity of money, eluded Friedman over his whole career. Why Keynes, who had a perfectly good theory of what was happening in the Great Depression available to him, as it was to others, was diverted to an unnecessary, but not uninteresting, new theory is a topic that I wrote about a very long time ago here, though I’m not so sure that I came up with a good or even adequate explanation.

So it does not speak well of the economics profession that it took nearly a quarter of a century before the basic internal inconsistency underlying Friedman’s account of the Great Depression was sufficiently recognized to call for an alternative theoretical account of the Great Depression that placed the gold standard at the heart of the narrative. It was Peter Temin and Barry Eichengreen who, both in their own separate works (e.g., Lessons of the Great Depression by Temin and Golden Fetters by Eichengreen) and in an important paper they co-authored and published in 2000, reminded both economists and historians how important a role the gold standard must play in any historical account of the Great Depression.

All credit is due to Temin and Eichengreen for having brought the critical role of the gold standard in the Great Depression to the attention of economists who had largely derived their understanding of what had caused the Great Depression from either some variant of the Keynesian narrative or from Friedman’s Monetarist indictment of the Federal Reserve System. But it’s unfortunate that neither Temin nor Eichengreen gave sufficient credit to either R. G. Hawtrey or to Gustav Cassel for having anticipated almost all of their key findings about the causes of the Great Depression. And I think that what prevented Eichengreen and Temin from realizing that Hawtrey in particular had anticipated their explanation of the Great Depression by more than half a century was that they did not fully grasp the key theoretical insight underlying Hawtrey’s explanation of the Great Depression.

That insight was that the key to understanding the common world price level in terms of gold under a gold standard is to think in terms of a given world stock of gold and to think of the total world demand to hold gold as consisting of the real demands to hold gold for commercial, industrial and decorative uses, the private demand to hold gold as an asset, and the monetary demand for gold to be held either as a currency or as a reserve for currency. The combined demand to hold gold for all such purposes, given the existing stock of gold, determines a real relative price of gold in terms of all other commodities. This relative price, when expressed in terms of a currency unit that is convertible into gold, corresponds to an equivalent set of commodity prices in terms of those convertible currency units.

This way of thinking about the world price level under the gold standard was what underlay Hawtrey’s monetary analysis and his application of that analysis in explaining the Great Depression. Given that the world output of gold in any year is generally only about 2 or 3 percent of the existing stock of gold, it is fluctuations in the demand for gold, of which the monetary demand for gold in the period after the outbreak of World War I was clearly the least stable, that cause short-term fluctuations in the value of gold. Hawtrey’s efforts after the end of World War I were therefore focused on the necessity to stabilize the world’s monetary demands for gold in order to avoid fluctuations in the value of gold as the world moved toward the restoration of the gold standard that then seemed, to most monetary and financial experts and most monetary authorities and political leaders, to be both inevitable and desirable.
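The stock-and-demand framework just described can be put in schematic form. The functional form and all the numbers below are illustrative assumptions of mine, not anything estimated by Hawtrey; the point is only that, with the stock of gold fixed in the short run, an increased monetary demand for gold raises gold’s real value, which, at a fixed gold parity, is the same thing as a falling price level:

```python
# Schematic version of Hawtrey's stock-and-demand analysis of the value of
# gold. The functional form and all numbers are illustrative assumptions.

gold_stock = 100.0  # world stock of gold (index units); new output adds ~2-3%/yr

def gold_demand(v, monetary_shift=0.0):
    """Total desired holdings of gold (industrial + asset + monetary) as a
    decreasing function of gold's real value v; monetary_shift raises it."""
    return (150.0 + monetary_shift) / v

def equilibrium_value(monetary_shift=0.0):
    # Stock equals desired holdings: (150 + shift) / v = gold_stock.
    return (150.0 + monetary_shift) / gold_stock

v_before = equilibrium_value()      # before central banks accumulate gold
v_after = equilibrium_value(50.0)   # after the monetary demand for gold rises

# Consistency check: at v_after, desired holdings equal the fixed stock.
assert abs(gold_demand(v_after, 50.0) - gold_stock) < 1e-9

# At a fixed gold parity, commodity prices vary inversely with gold's value.
print(f"real value of gold: {v_before:.2f} -> {v_after:.2f}")       # 1.50 -> 2.00
print(f"implied fall in the price level: {1 - v_before / v_after:.0%}")  # 25%
```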

In the opening pages of Golden Fetters, Eichengreen beautifully describes the backdrop against which the attempt to reconstitute the gold standard was about to be made after World War I.

For more than a quarter of a century before World War I, the gold standard provided the framework for domestic and international monetary relations. . .  The gold standard had been a remarkably efficient mechanism for organizing financial affairs. No global crises comparable to the one that began in 1929 had disrupted the operation of financial markets. No economic slump had so depressed output and employment.

The central elements of this system were shattered by . . . World War I. More than a decade was required to complete their reconstruction. Quickly it became evident that the reconstructed gold standard was less resilient than its prewar predecessor. As early as 1929 the new international monetary system began to crumble. Rapid deflation forced countries producing primary commodities to suspend gold convertibility and depreciate their currencies. Payments problems spread next to the industrialized world. . . Britain, along with the United States and France, one of the countries at the center of the international monetary system, was next to experience a crisis, abandoning the gold standard in the autumn of 1931. Some two dozen countries followed suit. The United States dropped the gold standard in 1933; France hung on till the bitter end, which came in 1936.

The collapse of the international monetary system is commonly indicted for triggering the financial crisis that transformed a modest economic downturn into an unprecedented slump. So long as the gold standard was maintained, it is argued, the post-1929 recession remained just another cyclical contraction. But the collapse of the gold standard destroyed confidence in financial stability, prompting capital flight which undermined the solvency of financial institutions. . . Removing the gold standard, the argument continues, further intensified the crisis. Having suspended gold convertibility, policymakers manipulated currencies, engaging in beggar-thy-neighbor depreciations that purportedly did nothing to stimulate economic recovery at home while only worsening the Depression abroad.

The gold standard, then, is conventionally portrayed as synonymous with financial stability. Its downfall starting in 1929 is implicated in the global financial crisis and the worldwide depression. A central message of this book is that precisely the opposite was true. (Golden Fetters, pp. 3-4).

That is about as clear and succinct and accurate a description of the basic facts leading up to and surrounding the Great Depression as one could ask for, save for the omission of one important causal factor: the world monetary demand for gold.

Eichengreen was certainly not unaware of the importance of the monetary demand for gold, and in the pages that immediately follow, he attempts to fill in that part of the story, adding to our understanding of how the gold standard worked by penetrating deeply into the nature and role of the expectations that supported the gold standard, during its heyday, and the difficulty of restoring those stabilizing expectations after the havoc of World War I and the unexpected post-war inflation and subsequent deep 1920-21 depression. Those stabilizing expectations, Eichengreen argued, were the result of the credibility of the commitment to the gold standard and the international cooperation between governments and monetary authorities to ensure that the international gold standard would be maintained notwithstanding the occasional stresses and strains to which a complex institution would inevitably be subjected.

The stability of the prewar gold standard was instead the result of two very different factors: credibility and cooperation. Credibility is the confidence invested by the public in the government’s commitment to a policy. The credibility of the gold standard derived from the priority attached by governments to the maintenance of balance-of-payments equilibrium. In the core countries – Britain, France and Germany – there was little doubt that the authorities would take whatever steps were required to defend the central bank’s gold reserves and maintain the convertibility of the currency into gold. If one of these central banks lost gold reserves and its exchange rate weakened, funds would flow in from abroad in anticipation of the capital gains investors in domestic assets would reap once the authorities adopted measures to stem reserve losses and strengthen the exchange rate. . . The exchange rate consequently strengthened on its own, and stabilizing capital flows minimized the need for government intervention. The very credibility of the official commitment to gold meant that this commitment was rarely tested. (p. 5)

But credibility also required cooperation among the various countries on the gold standard, especially the major countries at its center, of which Britain was the most important.

Ultimately, however, the credibility of the prewar gold standard rested on international cooperation. When the stabilizing speculation and domestic intervention proved incapable of accommodating a disturbance, the system was stabilized through cooperation among governments and central banks. Minor problems could be solved by tacit cooperation, generally achieved without open communication among the parties involved. . .  Under such circumstances, the most prominent central bank, the Bank of England, signaled the need for coordinated action. When it lowered its discount rate, other central banks usually responded in kind. In effect, the Bank of England provided a focal point for the harmonization of national monetary policies. . .

Major crises in contrast typically required different responses from different countries. The country losing gold and threatened by a convertibility crisis had to raise interest rates to attract funds from abroad; other countries had to loosen domestic credit conditions to make funds available to the central bank experiencing difficulties. The follow-the-leader approach did not suffice. . . . Such crises were instead contained through overt, conscious cooperation among central banks and governments. . . Consequently, the resources any one country could draw on when its gold parity was under attack far exceeded its own reserves; they included the resources of the other gold standard countries. . . .

What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was achieved through international cooperation. (pp. 7-8)

Eichengreen uses this excellent conceptual framework to explain the dysfunction of the newly restored gold standard in the 1920s. Because of the monetary dislocation and demonetization of gold during World War I, the value of gold had fallen to about half of its prewar level, so that reestablishing the gold standard required not only restoring gold as a currency standard but also readjusting – sometimes massively – the prewar relative values of the various national currency units. And preventing the natural tendency of gold to revert to its prewar value as gold was remonetized would require an unprecedented level of international cooperation among the various countries as they restored the gold standard. Thus, the gold standard was being restored in the 1920s under conditions in which neither the credibility of the prewar commitment to the gold standard nor the level of international cooperation among countries necessary to sustain that commitment was restored.

An important further contribution that Eichengreen, following Temin, brings to the historical narrative of the Great Depression is to incorporate the political forces that affected and often determined the decisions of policy makers directly into the narrative rather than treat those decisions as being somehow exogenous to the purely economic forces that were controlling the unfolding catastrophe.

The connection between domestic politics and international economics is at the center of this book. The stability of the prewar gold standard was attributable to a particular constellation of political as well as economic forces. Similarly, the instability of the interwar gold standard is explicable in terms of political as well as economic changes. Politics enters at two levels. First, domestic political pressures influence governments’ choices of international economic policies. Second, domestic political pressures influence the credibility of governments’ commitments to policies and hence their economic effects. . . (p. 10)

The argument, in a nutshell, is that credibility and cooperation were central to the smooth operation of the classical gold standard. The scope for both declined abruptly with the intervention of World War I. The instability of the interwar gold standard was the inevitable result. (p. 11)

Having explained and focused attention on the necessity for credibility and cooperation for a gold standard to function smoothly, Eichengreen then begins his introductory account of how the lack of credibility and cooperation led to the breakdown of the gold standard that precipitated the Great Depression, starting with the structural shift after World War I that made the rest of the world highly dependent on the US as a source of goods and services and as a source of credit, rendering the rest of the world chronically disposed to run balance-of-payments deficits with the US, deficits that could be financed only by the extension of credit by the US.

[I]f U.S. lending were interrupted, the underlying weakness of other countries’ external positions . . . would be revealed. As they lost gold and foreign exchange reserves, the convertibility of their currencies into gold would be threatened. Their central banks would be forced to  restrict domestic credit, their fiscal authorities to compress public spending, even if doing so threatened to plunge their economies into recession.

This is what happened when U.S. lending was curtailed in the summer of 1928 as a result of increasingly stringent Federal Reserve monetary policy. Inauspiciously, the monetary contraction in the United States coincided with a massive flow of gold to France, where monetary policy was tight for independent reasons. Thus, gold and financial capital were drained by the United States and France from other parts of the world. Superimposed on already weak foreign balances of payments, these events provoked a greatly magnified monetary contraction abroad. In addition they caused a tightening of fiscal policies in parts of Europe and much of Latin America. This shift in policy worldwide, and not merely the relatively modest shift in the United States, provided the contractionary impulse that set the stage for the 1929 downturn. The minor shift in American policy had such dramatic effects because of the foreign reaction it provoked through its interactions with existing imbalances in the pattern of international settlements and with the gold standard constraints. (pp. 12-13)

Eichengreen then makes a rather bold statement, with which, despite my agreement with, and admiration for, everything he has written to this point, I would take exception.

This explanation for the onset of the Depression, which emphasizes concurrent shifts in economic policy in the United States and abroad, the gold standard as the connection between them, and the combined impact of U.S. and foreign economic policies on the level of activity, has not previously appeared in the literature. Its elements are familiar, but they have not been fit together into a coherent account of the causes of the 1929 downturn. (p. 13)

I don’t think that Eichengreen’s claim of priority for his explanation of the onset of the 1929 downturn can be defended, though I certainly wouldn’t suggest that he did not arrive at his understanding of what caused the Great Depression largely on his own. But it is abundantly clear from reading the writings of Hawtrey and Cassel starting as early as 1919, that the basic scenario outlined by Eichengreen was clearly spelled out by Hawtrey and Cassel well before the Great Depression started, as papers by Ron Batchelder and me and by Doug Irwin have thoroughly documented. Undoubtedly Eichengreen has added a great deal of additional insight and depth and done important quantitative and documentary empirical research to buttress his narrative account of the causes of the Great Depression, but the basic underlying theory has not changed.

Eichengreen is not unaware of Hawtrey’s contribution and in a footnote to the last quoted paragraph, Eichengreen writes as follows.

The closest precedents lie in the work of the British economists Lionel Robbins and Ralph Hawtrey, in the writings of German historians concerned with the causes of their economy’s precocious slump, and in Temin (1989). Robbins (1934) hinted at many of the mechanisms emphasized here but failed to develop the argument fully. Hawtrey emphasized how the contractionary shift in U.S. monetary policy, superimposed on an already weak British balance of payments position, forced a draconian contraction on the Bank of England, plunging the world into recession. See Hawtrey (1933), especially chapter 2. But Hawtrey’s account focused almost entirely on the United States and the United Kingdom, neglecting the reaction of other central banks, notably the Bank of France, whose role was equally important. (p. 13, n. 17)

Unfortunately, this footnote neither clarifies nor supports Eichengreen’s claim of priority for his account of the role of the gold standard in the Great Depression. First, the bare citation of Robbins’s 1934 book The Great Depression is confusing at best, because Robbins’s explanation of the cause of the Great Depression, which he himself later disavowed, is largely a recapitulation of the Austrian business-cycle theory that attributed the downturn to a crisis caused by monetary expansion by the Fed and the Bank of England. Eichengreen correctly credits Hawtrey for attributing the Great Depression, in almost diametric opposition to Robbins, to contractionary monetary policy by the Fed and the Bank of England, but then seeks to distinguish Hawtrey’s explanation from his own by suggesting that Hawtrey neglected the role of the Bank of France.

Eichengreen mentions Hawtrey’s account of the Great Depression in his 1933 book, Trade Depression and the Way Out, 2nd edition. I no longer have a copy of that work accessible to me, but in the first edition of this work published in 1931, Hawtrey included a brief section under the heading “The Demand for Gold as Money since 1914.”

[S]ince 1914 arbitrary changes in monetary policy and in the demand for gold as money have been greater and more numerous than ever before. First came the general abandonment of the gold standard by the belligerent countries in favour of inconvertible paper, and the release of hundreds of millions of gold. By 1920 the wealth value of gold had fallen to two-fifths of what it had been in 1913. The United States, which was almost alone at that time in maintaining a gold standard, thereupon started contracting credit and absorbing gold on a vast scale. In June 1924 the wealth value of gold was seventy per cent higher than at its lowest point in 1920, and the amount of gold held for monetary purposes in the United States had grown from $2,840,000,000 in 1920 to $4,488,000,000.

Other countries were then beginning to return to the gold standard, Germany in 1924, England in 1925, besides several of the smaller countries of Europe. In the years 1924-8 Germany absorbed over £100,000,000 of gold. France stabilized her currency in 1927 and re-established the gold standard in 1928, and absorbed over £60,000,000 in 1927-8. But meanwhile, the United States had been parting with gold freely and her holding had fallen to $4,109,000,000 in June 1928. Large as these movements had been, they had not seriously disturbed the world value of gold. . . .

But from 1929 to the present time has been a period of immense and disastrous instability. France has added more than £200,000,000 to her gold holding, and the United States more than $800,000,000. In the two and a half years the world’s gold output has been a little over £200,000,000, but a part of this has been required for the normal demands of industry. The gold absorbed by France and America has exceeded the fresh supply of gold for monetary purposes by some £200,000,000.

This has had to be wrung from other countries, and much of it has come from new countries such as Australia, Argentina and Brazil, which have been driven off the gold standard and have used their gold reserves to pay their external liabilities, such as interest on loans payable in foreign currencies. (pp. 20-21)

The idea that Hawtrey neglected the role of the Bank of France is clearly inconsistent with the work that Eichengreen himself cites as evidence for that neglect. Moreover, in Hawtrey’s 1932 work, The Art of Central Banking, the first chapter, entitled “French Monetary Policy,” directly addresses the issues supposedly neglected by Hawtrey. Here is an example.

I am inclined therefore to say that while the French absorption of gold in the period from January 1929 to May 1931 was in fact one of the most powerful causes of the world depression, that is only because it was allowed to react to an unnecessary degree upon the monetary policy of other countries. (p. 38)

In his foreword to the 1962 reprinting of the volume, Hawtrey mentions his chapter on French monetary policy in a section under the heading “Gold and the Great Depression.”

Conspicuous among countries accumulating reserves of foreign exchange was France. Chapter 1 of this book records how, in the course of stabilizing the franc in the years 1926-8, the Bank of France accumulated a vast holding of foreign exchange [i.e., foreign bank liabilities payable in gold], and in the ensuing years proceeded to liquidate it [for gold]. Chapter IV . . . shows the bearing of the French absorption of gold upon the starting of the great depression of the 1930s. . . . The catastrophe foreseen in 1922 [!] had come to pass, and the moment had come to point to the moral. The disaster was due to the restoration of the gold standard without any provision for international cooperation to prevent undue fluctuations in the purchasing power of gold. (pp. xiv-xv)

Moreover, on p. 254 of Golden Fetters, Eichengreen himself cites Hawtrey as one of the “foreign critics” of Emile Moreau, Governor of the Bank of France during the 1920s and 1930s, who faulted the French “for failing to build ‘a structure of credit’ on their gold imports. By failing to expand domestic credit and to repel gold inflows, they argued, the French had violated the rules of the gold standard game.” In the same paragraph Eichengreen also cites Hawtrey’s recommendation that the Bank of France change its statutes to allow for the creation of domestically supplied money and credit that would have obviated the need for continuing imports of gold.

Finally, writers such as Clark Johnson and Kenneth Mouré, who have written widely respected works on French monetary policy during the 1920s and 1930s, cite Hawtrey extensively as one of the leading contemporary critics of French monetary policy.

PS I showed Barry Eichengreen a draft of this post a short while ago, and he agrees with my conclusion that Hawtrey, and presumably Cassel also, had anticipated the key elements of his explanation of how the breakdown of the gold standard, resulting largely from the breakdown of international cooperation, was the primary cause of the Great Depression. I am grateful to Barry for his quick and generous response to my query.

Was There a Blue Wave?

In the 2018 midterm elections two weeks ago, on November 6, Democrats gained about 38 seats in the House of Representatives, with results for a few seats still incomplete. Polls and special elections for vacancies in the House and Senate and state legislatures had indicated that a swing toward the Democrats was likely, raising hopes among Democrats that a blue wave would sweep Democrats into control of the House of Representatives and possibly, despite an unfavorable election map with many more Democratic Senate seats at stake than Republican seats, even the Senate.

On election night, when results in the Florida Senate and Governor races suddenly swung toward the Republicans, the high hopes for a blue wave began to ebb, especially as results from Indiana, Missouri, and South Dakota showed Democratic incumbent Senators trailing by substantial margins. Other results seemed like a mixed bag, with some Democratic gains, but hardly providing clear signs of a blue wave. The mood was not lifted when the incumbent Democratic Senator from Montana fell behind his Republican challenger, Ted Cruz seemed to be maintaining a slim lead over his charismatic opponent Beto O’Rourke, and the Republican candidate for the open Senate seat held by the retiring Jeff Flake of Arizona was leading the Democratic candidate.

As the night wore on, although it seemed that the Democrats would gain a majority in the House of Representatives, estimates of the number of seats gained were only in the high twenties or low thirties, while it appeared that Republicans might gain as many as five Senate seats. President Trump was able to claim, almost credibly, the next morning at his White House news conference that the election results had been an almost total victory for himself and his party.

It was not till later the next day that it became clear that the Democratic gains in the House would not be just barely enough (23) to gain a majority but would likely be closer to 40 than to 30. The apparent loss of the Montana seat was reversed by late results, and the delayed results from Nevada showed that a Democrat had defeated the Republican incumbent, while the Democratic candidate in Arizona had substantially cut into the lead built up by the Republican candidate, with most of the uncounted votes in Democratic strongholds. Instead of winning 56 Senate seats, a pickup of 5, as seemed likely on Tuesday night, the Republicans’ gains were cut to no more than 2, and the apparent defeat of an incumbent in the Florida election was thrown into doubt, as late returns showed a steadily shrinking Republican margin, sending Republicans into an almost hysterical panic at the prospect of gaining no more than one seat rather than the five they had been expecting on Tuesday night.

So, within a day or two after the election, the narrative of a Democratic wave began to reemerge. Many commentators accepted the narrative of a covert Democratic wave, but others disagreed. For example, Sean Trende at Real Clear Politics argues that there really wasn’t a Blue Wave, even though Democratic House gains of nearly 40 seats, taken in isolation, might qualify for that designation. Trende thinks the Democratic losses in the Senate, though not as large as they seemed originally, are inconsistent with a wave election, as are the comparatively modest Democratic gains in governorships and state legislatures.

However, a pickup of seven governorships, while not spectacular, is hardly to be sneezed at, and Democratic gains in state legislative seats would have been substantially greater than they were had it not been for extremely effective gerrymandering that kept Democratic gains in state legislatures well below what their share of the vote would have produced, even though the effect of gerrymandering on races for the House was fairly minimal. So I think that the best measure of the wave-like character of the 2018 elections is provided by the results for the House of Representatives.

Now the problem with judging whether the House results were a wave or were not a wave is that midterm election results are sensitive to economic conditions, so before you can compare results you need to adjust for how well or poorly the economy was performing. You also need to adjust for how many seats the President’s party has going into the election. The more seats the President’s Party has to defend, the greater its potential loss in the election.

To test this idea, I estimated a simple regression model with the change in the number of seats held by the President’s party at the midterm elections as the dependent variable, the number of seats held by the President’s party as one independent variable, and the ratio of real GDP in the year of the midterm election to real GDP in the year of the previous Presidential election as the other independent variable. One would expect the President’s party to perform better in the midterm elections the higher the ratio of real GDP in the midterm year to real GDP in the year of the previous Presidential election.

My regression equation is thus ΔSeats = C + aSeats + bRGDPratio + ε,

where ΔSeats is the change in the number of seats held by the President’s party after the midterm election, Seats is the number of seats held before the midterm, RGDPratio is the ratio of real GDP in the midterm election year to the real GDP in the previous Presidential election year, C is a constant reflecting the average change in the number of seats of the President’s party in the midterm elections, and a and b are the coefficients reflecting the marginal effect of a change in the corresponding independent variables on the dependent variable, with the other independent variable held constant.

I estimated this equation using data in the 18 midterm elections from 1946 through 2014. The estimated regression equation was the following:

ΔSeats = 24.63 – .26Seats + 184.48RGDPratio

The t values for Seats and RGDPratio are both slightly greater than 2 in absolute value, indicating that they are statistically significant at the 10% level and nearly significant at the 5% level. But given the small number of observations, I wouldn’t put much store on the significance levels except as an indication of plausibility. The assumption that Seats is linearly related to ΔSeats doesn’t seem right, but I haven’t tried alternative specifications. The R-squared and adjusted R-squared statistics are .31 and .22, which seem pretty high.
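For readers who want to experiment with this kind of regression, here is a minimal sketch using Python’s statsmodels. The handful of data rows below are placeholders standing in for the actual 18-election dataset described above, which is not reproduced here:

```python
# Sketch of the midterm regression: change in House seats held by the
# President's party regressed on seats held going into the midterm and the
# ratio of real GDP in the midterm year to real GDP in the year of the
# preceding presidential election. The data rows are hypothetical placeholders.
import numpy as np
import statsmodels.api as sm

seats = np.array([242, 295, 256, 241, 176, 233])             # seats held before midterm
rgdp_ratio = np.array([1.05, 1.08, 1.03, 1.06, 1.04, 1.07])  # RGDP(midterm)/RGDP(pres. year)
delta_seats = np.array([-47, -48, -63, -40, -26, -13])       # seat change at the midterm

X = sm.add_constant(np.column_stack([seats, rgdp_ratio]))
fit = sm.OLS(delta_seats, X).fit()

print(fit.params)    # constant, coefficient on Seats, coefficient on RGDPratio
print(fit.tvalues)   # t-statistics like those discussed in the text
print(fit.rsquared, fit.rsquared_adj)

# Predicted seat change for a new midterm (inputs assumed for illustration):
x_new = np.array([1.0, 241.0, 1.05])  # [constant, seats held, RGDP ratio]
print(fit.predict(x_new))
```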

At any rate when I plotted the predicted changes in the number of seats against the actual number of seats changed in the elections from 1946 to 2018 I came up with the following chart:

[Chart: actual vs. predicted midterm seat changes for the President’s party, 1946–2018]

The blue line in the chart represents the actual number of seats gained or lost in each midterm election since 1946 and the orange line represents the change in the number of seats predicted by the model. One can see that the President’s party did substantially better than expected in the 1962, 1978, 1998, and 2002 elections, while the President’s party did substantially worse than expected in the 1958, 1966, 1974, 1994, 2006, 2010 and 2018 elections.

In 2018, the Democrats gained approximately 38 seats compared to the 22 seats the model predicted, so the Democrats overperformed by about 16 seats. In 2010 the Republicans gained 63 seats compared to a predicted gain of 35. In 2006, the Democrats gained 32 seats compared to a predicted gain of 22. In 1994 Republicans gained 54 seats compared to a predicted gain of 26 seats. In 1974, Democrats gained 48 seats compared to a predicted gain of 20 seats. In 1966, Republicans gained 47 seats compared to a predicted gain of 26 seats. And in 1958, Democrats gained 48 seats compared to a predicted gain of 20 seats.

So the Democrats in 2018 did not over-perform as much as they did in 1958 and 1974, or as much as the Republicans did in 1966, 1994, and 2010. But the Democrats overperformed by more in 2018 than they did in 2006 when Mrs. Pelosi became Speaker of the House the first time, and actually came close to the Republicans’ overperformance of 1966. So, my tentative conclusion is yes, there was a blue wave in 2018, but it was a light blue wave.


More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

I made a few preliminary points about wage stickiness before getting to my point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So, the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place while workers have jobs, whereas search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are offering and mistakenly turn down job offers that they later regret not having accepted, which is a very skewed way of interpreting what happens in recessions; it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And requiring a workforce of many workers, the employer cannot generally avoid the morale-damaging effects of a wage cut on his workforce by replacing current workers with another set of workers at a lower wage than the old workers were getting.

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, but whose importance has since come to appear increasingly significant to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions about whether to accept a wage cut, would it be rational for you or any of them to accept a wage cut? Whether it would or wouldn’t might depend on what each worker was expecting other workers to do. But certainly given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer to accept a wage cut? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would a worker who offered to accept a wage cut that no one else was offering to accept appear more desirable to the employer than the workers who wouldn’t accept one? One reaction by the employer might be: what’s this guy’s problem?

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I am now inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because of an overhang of unsold labor flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire, but the situation will not change until wage expectations change. And the reason that wage expectations change is not because the excess demand for workers causes any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. It is, I suppose, theoretically possible that if enough workers immediately offered to accept sufficiently large wage cuts, some employers might forgo plans to lay off their workers. But there are no obvious market signals that would lead to such a response, because the response would be contingent on a level of coordination between workers and employers, and a convergence of expectations about future outcomes, that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.
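A toy Monte Carlo makes the same point about uncoordinated decisions: even if each worker independently leans toward accepting a cut, the probability that enough of them do so simultaneously to change the employer’s plans can remain small. The threshold and acceptance probabilities below are, again, purely hypothetical.

```python
import random

def prob_enough_accept(p_accept, n_workers=100, threshold=60, trials=10_000):
    """Estimate the probability that independent decisions produce enough
    acceptances to make maintaining output profitable for the employer."""
    hits = 0
    for _ in range(trials):
        n_accepting = sum(random.random() < p_accept for _ in range(n_workers))
        if n_accepting >= threshold:
            hits += 1
    return hits / trials

for p in (0.3, 0.5, 0.55):
    print(p, prob_enough_accept(p))   # roughly 0.00, 0.03, 0.18
```

There is no market signal pushing the individual probabilities toward the threshold; crossing it would require a coordination mechanism that the market does not supply.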

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money wages will cet. par. stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money-wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities . . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money-wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate effective demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money-wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct, but I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved through the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly – though too often without recognizing the limiting implications – assumes that prices and incomes in all other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am assuming here that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have a sound instinct for when it is, and is not, OK to make the assumption.)
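For concreteness, here is what the partial-equilibrium exercise amounts to in code, with purely illustrative linear demand and supply schedules. The ceteris paribus assumption is buried in the fixed intercepts, which freeze incomes and all other prices while the one price adjusts.

```python
# Partial equilibrium in one market: Qd = a - b*P, Qs = c + d*P.
# All parameters are hypothetical; the point is what they hold constant.

def market_clearing_price(a, b, c, d):
    """Solve a - b*P = c + d*P for the single market-clearing price."""
    return (a - c) / (b + d)

# Incomes and prices in every other market are frozen inside a and c.
p_star = market_clearing_price(a=100, b=2, c=10, d=1)
q_star = 100 - 2 * p_star
print(p_star, q_star)   # 30.0 40.0
```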

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand, cost, and supply curves stay where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that shift the very curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, they can be analyzed only in terms of broader systems of simultaneous equations, not in terms of the simplified solutions that we like to draw in two-dimensional space as intersections of stable demand curves with stable supply curves.
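A small sketch of what “open borders” implies: once demands depend on prices in other markets, the equilibrium prices must be found jointly, by solving the system, not market by market. The cross-price coefficients below are hypothetical.

```python
import numpy as np

# Two interdependent markets with cross-price effects in demand:
#   Qd1 = 100 - 2*p1 + 0.5*p2,   Qs1 = 10 + p1
#   Qd2 =  80 + 0.5*p1 - 2*p2,   Qs2 =  5 + p2
# Setting Qd = Qs in both markets gives a linear system A @ p = b:
A = np.array([[-3.0, 0.5],
              [0.5, -3.0]])
b = np.array([-90.0, -75.0])

p1, p2 = np.linalg.solve(A, b)       # both prices determined at once
print(round(p1, 2), round(p2, 2))    # approximately 35.14 30.86
```

Hold p2 fixed and market 1 looks like the familiar two-curve diagram; let p2 move and the “curves” in market 1 shift with every change in market 2.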

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.
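Keynes’s point can be put in a toy calculation (the functional form and numbers are entirely hypothetical): at the industry level the labor-demand curve is drawn holding aggregate spending fixed, but economy-wide a wage cut moves the wage bill, and with it the very spending that the curve took as given.

```python
# Toy illustration of the fallacy of composition in labor demand.
# Functional form and numbers are hypothetical.

def labor_demand(w, aggregate_spending):
    """Employment demanded at wage w, for given aggregate spending."""
    return aggregate_spending / (2 * w)

AD = 1000.0
w_old, w_new = 10.0, 8.0

# Industry-level reasoning: cut the wage, hold spending fixed.
print(labor_demand(w_old, AD), labor_demand(w_new, AD))  # 50.0 62.5

# Economy-wide: if spending falls roughly in proportion to the wage bill,
# the wage cut buys no extra employment at all.
AD_new = AD * (w_new / w_old)   # 800.0
print(labor_demand(w_new, AD_new))                       # 50.0
```

Whether aggregate spending falls in full proportion to the wage cut is, as Keynes says, precisely the question at issue; the sketch only shows why it cannot be assumed away.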

But the problem is even deeper than the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another is simply untenable: not only is the assumption that other variables are held constant self-contradictory at the economy-wide level, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy starts from, and always remains in, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial-equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances, in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another is what has come to be known as the demand for microfoundations in macroeconomic models, now the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. One of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all markets but one are in equilibrium, the discipline imposed by the assumptions of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.


