Archive for the 'Uncategorized' Category

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won't attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper's most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you're not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you're playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper's work among them.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making a prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or even because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of those theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper's methodological requirement of being able to test independently any ad hoc assumption would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are ok; it is that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth is flat and the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay "Rationalism in Politics," which, though not directed at Popper himself (Popper was Oakeshott's colleague at the London School of Economics), can be read as a critique of Popper's attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott's view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, "Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice," published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of the differing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were long viewed as too extreme to make probability theory an acceptable basis for empirical inference. However, the early negative attitudes toward accepting probability theory as the basis for making statistical inferences from data were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle's account of the change in the economics profession's attitude about how inferences should be drawn from data about empirical relationships is strikingly similar to Oakeshott's discussion, and depressing in its implications for the decline of expert judgment by economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data; it is to challenge the methodological prohibition against the kinds of expert judgments that statistically knowledgeable economists, including Nobel Prize winners like Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman's theoretical misunderstandings have nothing to do with the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that have almost totally been displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake even Popper was never guilty of.

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only add that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper's demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That's the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves – on the Positivists' own criterion of meaning – meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn't mean that an economic theory with no falsifiable implications is useless, the judgment by which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don't think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.


Judy Shelton Speaks Up for the Gold Standard

I have been working on a third installment in my series on how, with a huge assist from Arthur Burns, things fell apart in the 1970s. In it, I will discuss the sad denouement of Burns's misunderstandings and mistakes when Paul Volcker administered a brutal dose of tight money that caused the worst downturn and highest unemployment since the Great Depression in the deep recession of 1981-82. But having seen another one of Judy Shelton's less than enlightening op-eds arguing for a gold standard in the formerly respectable editorial section of the Wall Street Journal, I am going to pause from my account of Volcker's monetary policy in the early 1980s to give Dr. Shelton my undivided attention.

The opening paragraph of Dr. Shelton’s op-ed is a less than auspicious start.

Since President Trump announced his intention to nominate Herman Cain and Stephen Moore to serve on the Federal Reserve’s board of governors, mainstream commentators have made a point of dismissing anyone sympathetic to a gold standard as crankish or unqualified.

That is a totally false charge. Since Herman Cain and Stephen Moore were nominated, they have been exposed as incompetent and unqualified to serve on the Board of Governors of the world's most important central bank. It is not support for reestablishing the gold standard that demonstrates their incompetence and lack of qualifications. It is true that most economists, myself included, oppose restoring the gold standard. It is also true that most supporters of the gold standard, like, say — to choose a name more or less at random — Ron Paul, are indeed cranks who are unqualified to hold high office, but there is a minority of economists, including some outstanding ones like Larry White, George Selgin, Richard Timberlake and Nobel Laureate Robert Mundell, who do favor restoring the gold standard, at least under certain conditions.

But Cain and Moore are so unqualified and so incompetent that they are incapable of doing more than mouthing platitudes about how wonderful it would be to have a dollar as good as gold by restoring some unspecified link between the dollar and gold. Because of their manifest ignorance about how a gold standard would work now or how it did work when it was in operation, they were unprepared to defend their support of a gold standard when called upon to do so by inquisitive reporters. So they just lied and denied that they had ever supported returning to the gold standard. Thus, in addition to being ignorant, incompetent and unqualified to serve on the Board of Governors of the Federal Reserve, Cain and Moore exposed their own foolishness and stupidity, because it was easy for reporters to dig up multiple statements by both aspiring central bankers explicitly calling for a gold standard to be restored, as well as muddled utterances bearing at least a vague resemblance to support for the gold standard.

So Dr. Shelton, in accusing mainstream commentators of dismissing anyone sympathetic to a gold standard as crankish or unqualified, is charging them with a level of intolerance and closed-mindedness for which she supplies not a shred of evidence.

After making a defamatory accusation with no basis in fact, Dr. Shelton turns her attention to a strawman whom she slays mercilessly.

But it is wholly legitimate, and entirely prudent, to question the infallibility of the Federal Reserve in calibrating the money supply to the needs of the economy. No other government institution had more influence over the creation of money and credit in the lead-up to the devastating 2008 global meltdown.

Where to begin? The Federal Reserve has not been targeting the quantity of money in the economy as a policy instrument since the early 1980s, when the Fed misguidedly used the quantity of money as the policy target in its anti-inflation strategy. After acknowledging that mistake, the Fed has ever since eschewed attempts to conduct monetary policy by targeting any monetary aggregate. It is through the independent choices and decisions of individual agents and of many competing private banking institutions, not the dictate of the Federal Reserve, that the quantity of money in the economy at any given time is determined. It is true that the Federal Reserve played a great role in the run-up to the 2008 financial crisis, but its mistake had nothing to do with the amount of money being created. Rather, the problem was that the Fed set its policy interest rate at too high a level throughout 2008 because of misplaced inflation fears, fueled by temporary increases in commodity prices, that deterred the Fed from providing the monetary stimulus needed to counter a rapidly deepening recession.

But guess who was urging the Fed to raise its interest rate in 2008 exactly when a cut in interest rates was what the economy needed? None other than the Wall Street Journal editorial page. And guess who was the lead editorial writer on the Wall Street Journal in 2008 for economic policy? None other than Stephen Moore himself. Isn’t that special?

I will forbear from discussing Dr. Shelton’s comments on the Fed’s policy of paying interest on reserves, because I actually agree with her criticism of the policy. But I do want to say a word about her discussion of currency manipulation and the supposed role of the gold standard in minimizing such currency manipulation.

The classical gold standard established an international benchmark for currency values, consistent with free-trade principles. Today’s arrangements permit governments to manipulate their currencies to gain an export advantage.

Having previously explained to Dr. Shelton that currency manipulation to gain an export advantage depends not just on the exchange rate, but also on the monetary policy associated with that exchange rate, I have to admit some disappointment that my previous efforts to instruct her don't seem to have improved her understanding of the ABCs of currency manipulation. But I will try again. Let me just quote from my last attempt to educate her.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.
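As an illustration of the mechanism described in the quoted passage, here is a toy simulation of my own (every functional form and parameter value is hypothetical, not anything from the original post): a one-time 10% depreciation generates an export surplus, and unless the central bank sterilizes the resulting cash inflow by accumulating reserves without limit, the monetary expansion raises domestic prices until the competitive advantage disappears.

```python
# Toy simulation of the adjustment mechanism described in the quoted passage.
# Every functional form and parameter value here is hypothetical.

def simulate(periods=40, depreciation=0.10, sterilize=False):
    """One-time nominal depreciation under a crude quantity-theory price level."""
    e = 1.0 + depreciation   # nominal exchange rate (domestic currency per unit of foreign)
    p_foreign = 1.0          # foreign price level, held fixed
    p_domestic = 1.0         # domestic price level
    money = 100.0            # domestic money stock
    reserves = 0.0           # central bank's foreign-exchange reserves
    k = 0.5                  # export surplus generated per unit of real undervaluation
    path = []
    for t in range(periods):
        real_rate = e * p_foreign / p_domestic   # real exchange rate (>1 means undervalued)
        surplus = k * (real_rate - 1.0)          # export surplus driven by the undervaluation
        reserves += surplus                      # the surplus is settled through the central bank
        if not sterilize:
            money += surplus                     # unsterilized inflow expands the money stock
            p_domestic = money / 100.0           # quantity-theory price level (toy version)
        path.append((t, round(real_rate, 4), round(surplus, 4), round(reserves, 2)))
    return path

if __name__ == "__main__":
    print("Unsterilized: prices rise and the competitive advantage erodes")
    for row in simulate(sterilize=False)[::10]:
        print(row)
    print("Sterilized: the advantage persists, but only with ever-growing reserves")
    for row in simulate(sterilize=True)[::10]:
        print(row)
```

In the unsterilized run the real undervaluation shrinks toward zero as domestic prices catch up; in the sterilized run it persists, but only because the central bank keeps adding to its reserve holdings, which is exactly the trade-off the quoted passage describes.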

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard.

Dr. Shelton believes that restoring a gold standard would usher in a period of economic growth like the one that followed World War II under the Bretton Woods System. Well, Dr. Shelton might want to reconsider how well the Bretton Woods system worked to the advantage of the United States.

The fact is that, as Ralph Hawtrey pointed out in his Incomes and Money, the US dollar was overvalued relative to the currencies of most of its European trading partners, which is why unemployment in the US was chronically above 5% from 1954 until 1965. With undervalued currencies, West Germany, Italy, Belgium, Britain, France and Japan all had much lower unemployment than the US. It was only in 1961, after John Kennedy became President, when the Federal Reserve systematically loosened monetary policy, forcing Germany and other countries to revalue their currencies upward to avoid importing US inflation, that the US was able to redress the overvaluation of the dollar. But in doing so, the US also gradually rendered the $35/ounce price of gold, at which it maintained a kind of semi-convertibility of the dollar, unsustainable, leading a decade later to the final abandonment of the gold-dollar peg.

Dr. Shelton is obviously dedicated to restoring the gold standard, but she really ought to study up on how the gold standard actually worked in its previous incarnations and semi-incarnations, before she opines any further about how it might work in the future. At present, she doesn’t seem to be knowledgeable about how the gold standard worked in the past, and her confidence that it would work well in the future is entirely misplaced.

James Buchanan Calling the Kettle Black

In the wake of the tragic death of Alan Krueger, attention has been drawn to an implicitly defamatory statement by James Buchanan about those who, like Krueger, dared question the orthodox position taken by most economists that minimum-wage laws increase unemployment among low-wage, low-skilled workers whose productivity, at the margin, is less than the minimum wage that employers are required to pay employees.

Here is Buchanan’s statement:

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the presupposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that "water runs uphill," no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teachings of two centuries; we have not yet become a bevy of camp-following whores.

Wholly apart from its odious metaphorical characterization of those he was criticizing, Buchanan's assertion was substantively problematic in two respects. The first, which is straightforward and well-known, and which Buchanan was obviously wrong not to acknowledge, is that there are obvious circumstances in which a minimum-wage law could simultaneously raise wages and reduce unemployment without contradicting the inverse relationship between quantity demanded and price. Such circumstances obtain whenever employers exercise monopsony power in the market for unskilled labor. If employers realize that hiring additional low-skilled workers drives up the wage paid to all the low-skilled workers that they employ, not just the additional ones hired, the wage paid by employers will be less than the value of the marginal product of labor. If employers exercise monopsony power, then the divergence between the wage and the marginal product is not a violation, but an implication, of the inverse relationship between quantity demanded and price. If Buchanan had written on his price-theory preliminary exam for a Ph.D. at Chicago that support for a minimum wage could be rationalized only by denying the inverse relationship between quantity demanded and price, he would have been flunked.
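To see how the monopsony argument works arithmetically, here is a minimal sketch in Python. The labor-supply schedule, the value of the marginal product, and the $10 minimum wage are hypothetical numbers of my own choosing, not anything drawn from Krueger's work; the point is only that, when the employer faces an upward-sloping supply curve, a suitably placed minimum wage raises both the wage and employment.

```python
# Hypothetical monopsony illustration: all numbers are made up for the example.

VMP = 12.0  # value of the marginal product of labor, assumed constant for simplicity

def supply_wage(n):
    """Wage needed to attract n workers (an assumed upward-sloping labor supply curve)."""
    return 4.0 + 0.5 * n

def monopsony_outcome(max_workers=40):
    """A single employer picks employment to maximize profit, paying the supply wage to everyone."""
    n_star = max(range(max_workers + 1), key=lambda n: n * (VMP - supply_wage(n)))
    return n_star, supply_wage(n_star)

def minimum_wage_outcome(w_min, max_workers=40):
    """With a binding minimum wage, the employer hires as long as VMP covers the wage,
    limited by the number of workers willing to work at that wage."""
    willing = max(n for n in range(max_workers + 1) if supply_wage(n) <= w_min)
    hired = willing if VMP >= w_min else 0
    return hired, w_min

if __name__ == "__main__":
    n0, w0 = monopsony_outcome()
    n1, w1 = minimum_wage_outcome(10.0)
    print(f"No minimum wage:  employment = {n0}, wage = {w0}")   # 8 workers at a wage of 8
    print(f"$10 minimum wage: employment = {n1}, wage = {w1}")   # 12 workers at a wage of 10
```

With these assumed numbers, the employer hires 8 workers at a wage of 8 without the minimum wage and 12 workers at a wage of 10 with it; the competitive benchmark, where the wage equals the marginal product, would be 16 workers at a wage of 12. Nothing in the example contradicts the inverse relationship between quantity demanded and price.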

The second problem with Buchanan’s position is less straightforward and less well-known, but more important, than the first. The inverse relationship by which Buchanan set such great store is valid only if qualified by a ceteris paribus condition. Demand is a function of many variables of which price is only one. So the inverse relationship between price and quantity demanded is premised on the assumption that all the other variables affecting demand are held (at least approximately) constant.

Now it's true that even the law of gravity is subject to a ceteris paribus condition; the law of gravity alone will not determine the movement of objects in a magnetic field. And it would be absurd to call a physicist an advocate for ideological interests just because he recognized that possibility.

Of course, the presence or absence of a magnetic field is a circumstance that can be easily ascertained, thereby enabling a physicist to alter his prediction of the movement of an object according to whether the relevant field for predicting the motion of the object under consideration is gravitational or magnetic. But the magnitude and relevance of other factors affecting demand are not so easily taken into account by economists. That's why applied economists try to focus on markets in which the effects of "other factors" are small or on markets in which "other factors" can easily be identified and measured or treated qualitatively as fixed effects.

But in some markets the factors affecting demand are themselves interrelated, so that the ceteris paribus assumption can't be maintained. Such markets can't be analyzed in isolation; they can only be analyzed as a system in which all the variables are jointly determined. Economists call the analysis of an isolated market partial-equilibrium analysis. And it is partial-equilibrium analysis that constitutes the core of price theory and microeconomics. The ceteris paribus assumption has to be maintained either by assuming that changes in the variables other than price affecting demand and supply are inconsequential or by identifying the other variables whose changes could affect demand and supply and either measuring them quantitatively or at least accounting for them qualitatively.

But labor markets, except at a granular level, when the focus is on an isolated region or a specialized occupation, cannot be modeled usefully with the standard partial-equilibrium techniques of price theory, because income effects and interactions between related markets cannot appropriately be excluded from the partial-equilibrium analysis of supply and demand in a broadly defined market for labor. The determination of the equilibrium price in a market that encompasses a substantial share of economic activity cannot be isolated from the determination of the equilibrium prices in other markets.

Moreover, the idea that the equilibration of any labor market can be understood within a partial-equilibrium framework in which the wage responds to excess demands for, or excess supplies of, labor just as the price of a standardized commodity adjusts to excess demands for, or excess supplies of, that commodity, reflects a gross misunderstanding of the incentives of employers and workers in reaching wage bargains for the differentiated services provided by individual workers. Those incentives are in no way comparable to the incentives of businesses to adjust the prices of their products in response to excess supplies of or excess demands for those products.

Buchanan was implicitly applying an inappropriate paradigm of price adjustment in a single market to the analysis of how wages adjust in the real world. The truth is we don’t have a good understanding of how wages adjust, and so we don’t have a good understanding of the effects of minimum wages. But in arrogantly and insultingly dismissing Krueger’s empirical research on the effects of minimum wage laws, Buchanan was unwittingly exposing not Krueger’s ideological advocacy but his own.

Was There a Blue Wave?

In the 2018 midterm elections two weeks ago, on November 6, Democrats gained about 38 seats in the House of Representatives, with results for a few seats still incomplete. Polls and special elections for vacancies in the House and Senate and state legislatures had indicated that a swing toward the Democrats was likely, raising hopes among Democrats that a blue wave would sweep Democrats into control of the House of Representatives and possibly, despite an unfavorable election map with many more Democratic than Republican Senate seats at stake, even the Senate.

On election night, when results in the Florida Senate and Governor races suddenly swung toward the Republicans, the high hopes for a blue wave began to ebb, especially as results from Indiana, Missouri, and North Dakota showed Democratic incumbent Senators trailing by substantial margins. Other results seemed like a mixed bag, with some Democratic gains, but hardly providing clear signs of a blue wave. The mood was not lifted when the incumbent Democratic Senator from Montana fell behind his Republican challenger, Ted Cruz seemed to be maintaining a slim lead over his charismatic opponent Beto O'Rourke, and the Republican candidate for the open Senate seat held by the retiring Jeff Flake of Arizona was leading the Democratic candidate.

As the night wore on, although it seemed that the Democrats would gain a majority in the House of Representatives, estimates of the number of seats gained were only in the high twenties or low thirties, while it appeared that Republicans might gain as many as five Senate Seats. President Trump was able to claim, almost credibly, the next morning at his White House news conference that the election results had been an almost total victory for himself and his party.

It was not till later the next day that it became clear that the Democratic gains in the House would not be just barely enough (23) to gain a majority but would likely be closer to 40 than to 30. The apparent loss of the Montana seat was reversed by late results, and the delayed results from Nevada showed that a Democrat had defeated the Republican incumbent, while the Democratic candidate in Arizona had substantially cut into the lead built up by the Republican candidate, with most of the uncounted votes in Democratic strongholds. Instead of winning 56 Senate seats, a pickup of 5, as seemed likely on Tuesday night, the Republicans' gains were cut to no more than 2, and the apparent defeat of an incumbent in the Florida election was thrown into doubt, as late returns showed a steadily shrinking Republican margin, sending Republicans into an almost hysterical panic at the prospect of gaining no more than one seat rather than the five they had been expecting on Tuesday night.

So, within a day or two after the election, the narrative of a Democratic wave began to reemerge. Many commentators accepted the narrative of a covert Democratic wave, but others disagreed. For example, Sean Trende at Real Clear Politics argues that there really wasn't a Blue Wave, even though Democratic House gains of nearly 40 seats, taken in isolation, might qualify for that designation. Trende thinks the Democratic losses in the Senate, though not as large as they seemed originally, are inconsistent with a wave election, as were Democratic gains in governorships and state legislatures.

However, a pickup of seven governorships, while not spectacular, is hardly to be sneezed at, and Democratic gains in state-legislative seats would have been substantially greater than they were had it not been for extremely effective gerrymandering that kept Democratic gains in state legislatures well below the Democratic share of the vote, even though the effect of gerrymandering on races for the House was fairly minimal. So I think that the best measure of the wave-like character of the 2018 elections is provided by the results for the House of Representatives.

Now the problem with judging whether the House results were a wave or were not a wave is that midterm election results are sensitive to economic conditions, so before you can compare results you need to adjust for how well or poorly the economy was performing. You also need to adjust for how many seats the President’s party has going into the election. The more seats the President’s Party has to defend, the greater its potential loss in the election.

To test this idea, I estimated a simple regression model with the number of seats lost by the President’s party in the midterm elections as the dependent variable and the number of seats held by the President’s party as one independent variable and the ratio of real GDP in the year of the midterm election to real GDP in the year of the previous Presidential election as the other independent variable. One would expect the President’s party to perform better in the midterm elections the higher the ratio of real GDP in the midterm year to real GDP in the year of the previous Presidential election.

My regression equation is thus ΔSeats = C + aSeats + bRGDPratio + ε,

where ΔSeats is the change in the number of seats held by the President’s party after the midterm election, Seats is the number of seats held before the midterm, RGDPratio is the ratio of real GDP in the midterm election year to the real GDP in the previous Presidential election year, C is a constant reflecting the average change in the number of seats of the President’s party in the midterm elections, and a and b are the coefficients reflecting the marginal effect of a change in the corresponding independent variables on the dependent variable, with the other independent variable held constant.

I estimated this equation using data from the 18 midterm elections from 1946 through 2014. The estimated regression equation was the following:

ΔSeats = 24.63 – .26Seats + 184.48RGDPratio

The t-values for Seats and RGDPratio are both slightly greater than 2 in absolute value, indicating that they are statistically significant at the 10% level and nearly significant at the 5% level. But given the small number of observations, I wouldn’t put much store in the significance levels except as an indication of plausibility. The assumption that Seats is linearly related to ΔSeats doesn’t seem right, but I haven’t tried alternative specifications. The R-squared and adjusted R-squared statistics are .31 and .22, which seem pretty high.
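For readers who want to tinker with this kind of specification, here is a minimal sketch of how such a regression might be estimated in Python with statsmodels. The file name and column names are hypothetical placeholders of my own, not the dataset actually used above.

import pandas as pd
import statsmodels.api as sm

# Hypothetical input: one row per midterm election, with columns
#   year          - midterm election year
#   seats_before  - House seats held by the President's party before the midterm
#   rgdp_ratio    - real GDP in the midterm year divided by real GDP in the
#                   preceding Presidential-election year
#   seat_change   - change in the President's party's House seats at the midterm
df = pd.read_csv("midterms.csv")  # placeholder file name

X = sm.add_constant(df[["seats_before", "rgdp_ratio"]])
y = df["seat_change"]

model = sm.OLS(y, X).fit()
print(model.summary())  # coefficients, t-values, R-squared, etc.

# Predicted seat changes, for comparison with actual outcomes
df["predicted_change"] = model.predict(X)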

At any rate, when I plotted the predicted changes in the number of seats against the actual changes in the elections from 1946 to 2018, I came up with the following chart:

[Chart: actual vs. predicted change in House seats held by the President’s party, midterm elections 1946–2018]

The blue line in the chart represents the actual number of seats gained or lost in each midterm election since 1946, and the orange line represents the change in the number of seats predicted by the model. One can see that the President’s party did substantially better than expected in the 1962, 1978, 1998, and 2002 elections, while it did substantially worse than expected in the 1958, 1966, 1974, 1994, 2006, 2010, and 2018 elections.
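A chart along these lines could be generated from the same hypothetical dataframe (extended through 2018) with a few lines of matplotlib; again, this is just an illustrative sketch, not the code used to produce the chart above.

import matplotlib.pyplot as plt

# df is the hypothetical dataframe from the sketch above, extended through 2018
plt.plot(df["year"], df["seat_change"], label="Actual seat change")
plt.plot(df["year"], df["predicted_change"], label="Predicted seat change")
plt.xlabel("Midterm election year")
plt.ylabel("Change in House seats of the President's party")
plt.legend()
plt.show()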

In 2018, the Democrats gained approximately 38 seats compared to the 22 seats the model predicted, so the Democrats overperformed by about 16 seats. In 2010, the Republicans gained 63 seats compared to a predicted gain of 35. In 2006, the Democrats gained 32 seats compared to a predicted gain of 22. In 1994, Republicans gained 54 seats compared to a predicted gain of 26 seats. In 1974, Democrats gained 48 seats compared to a predicted gain of 20 seats. In 1966, Republicans gained 47 seats compared to a predicted gain of 26 seats. And in 1958, Democrats gained 48 seats compared to a predicted gain of 20 seats.
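The overperformance figures cited above are just the regression residuals, i.e., the differences between actual and predicted seat changes. Continuing the hypothetical sketch:

# Residuals: negative values mean the President's party lost more seats than
# predicted, i.e., the opposition over-performed relative to the model
df["residual"] = df["seat_change"] - df["predicted_change"]
print(df[["year", "seat_change", "predicted_change", "residual"]])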

So the Democrats in 2018 did not overperform as much as they did in 1958 and 1974, or as much as the Republicans did in 1966, 1994, and 2010. But the Democrats overperformed by more in 2018 than they did in 2006, when Mrs. Pelosi became Speaker of the House the first time, and they came close to matching the Republicans’ overperformance of 1966. So my tentative conclusion is yes, there was a blue wave in 2018, but it was a light blue wave.

 

More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

I made a few preliminary points about wage stickiness before getting to my main point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even the medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place while workers have jobs, whereas search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are making and turn down job offers that they later regret not having accepted, which is a very skewed way of interpreting what happens in recessions; it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And requiring a workforce of many workers, the employer cannot generally avoid the morale-damaging effects of a wage cut on his workforce by replacing current workers with another set of workers at a lower wage than the old workers were getting.

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, one that has since come to seem increasingly important to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions about whether to accept a wage cut, would it be rational for you or any of them to accept a wage cut? Whether it would or wouldn’t might depend on what each worker was expecting other workers to do. But certainly given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer to accept a wage cut? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would offering to accept a wage cut that no one else was offering to accept make the worker willing to work for less appear more desirable to the employer than the workers who wouldn’t accept a wage cut? One reaction by the employer might be: what’s this guy’s problem?

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I am now inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because of an overhang of unsold labor flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire, but the situation will not change until wage expectations change. And the reason that wage expectations change is not because the excess demand for workers causes any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers decided to immediately offer to accept sufficiently large wage cuts, some employers might forego plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would be contingent on a level of coordination between workers and employers and a convergence of expectations about future outcomes that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities. .  . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct. But I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded in that market with the amount supplied in that market.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly – though too often without recognizing its limiting implications – assumes that prices and incomes in other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am now assuming that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have an instinctual sense for when it is OK to make the assumption and when it is not.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand curves, cost curves, and supply curves end up where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that move the very demand, cost, and supply curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, the markets can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space corresponding to intersections of stable demand curves with stable supply curves.

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another point on the curve is simply untenable, because not only is the assumption that other variables are being held constant untenable and self-contradictory, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy is starting from, and always remains in, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all markets but one are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.

Only Idiots Think that Judges Are Umpires and Only Cads Say that They Think So

It now seems beside the point, but I want to go back and consider something Judge Kavanaugh said in his initial testimony three weeks ago before the Senate Judiciary Committee, now largely, and deservedly, forgotten.

In his earlier testimony, Judge Kavanaugh made the following ludicrous statement, echoing a similar statement by (God help us) Chief Justice Roberts at his confirmation hearing before the Senate Judiciary Committee:

A good judge must be an umpire, a neutral and impartial arbiter who favors no litigant or policy. As Justice Kennedy explained in Texas versus Johnson, one of his greatest opinions, judges do not make decisions to reach a preferred result. Judges make decisions because “the law and the Constitution, as we see them, compel the result.”

I don’t decide cases based on personal or policy preferences.

Kavanaugh’s former law professor Akhil Amar offered an embarrassingly feeble defense of Kavanaugh’s laughable comparison, a touching gesture of loyalty to a former student being the most generous possible gloss one could put on his deeply inappropriate defense of an indefensible trivialization of what judging is all about.

According to the Chief Justice and to Judge Kavanaugh, judges, like umpires, are there to call balls and strikes. An umpire calls balls and strikes with no concern for the consequences of calling a ball or a strike on the outcome of the game. Think about it: do judges reach decisions about cases, make their rulings, write their opinions, with no concern for the consequences of their decisions?

Umpires make their calls based on split-second responses to their visual perceptions of what happens in front of their eyes, with no reflection on what implications their decisions have for anyone else, or the expectations held by the players whom they are watching. Think about it: would you want a judge to decide a case without considering the effects of his decision on the litigants and on the society at large?

Umpires make their decisions without hearing arguments from the players before rendering their decisions. Players, coaches, managers, or their spokesmen do not submit written briefs, or make oral arguments, to umpires in an effort to explain to umpires why justice requires that a decision be rendered in their favor. Umpires don’t study briefs or do research on decisions rendered by earlier umpires in previous contests. Think about it: would you want a judge to decide a case within the time that an umpire takes to call balls and strikes and do so with no input from the litigants?

Umpires never write opinions in which they explain (or at least try to explain) why their decisions are right and just, after having taken into account all the arguments advanced by the opposing sides and any other relevant considerations that might properly be taken into account in reaching a decision. Think about it: would you want a judge to decide a case without having to write an opinion explaining why his or her decision is the right and just one?

Umpires call balls and strikes instinctively, unreflectively, and without hesitation. But to judge means to think, to reflect, to consider both (or all) sides, to consider the consequences of the decision for the litigants and for society, and for future judges in future cases who will be guided by the decision being rendered in the case at hand. Judging — especially appellate judging — is a deeply intellectual and reflective vocation requiring knowledge, erudition, insight, wisdom, temperament, and, quite often, empathy and creativity.

To reduce this venerable vocation to the mere calling of balls and strikes is deeply dishonorable, and, coming from a judge who presumes to be worthy of sitting on the highest court in the land, supremely offensive.

What could possibly possess a judge — and a judge presumably neither an idiot nor so lacking in self-awareness as to not understand what he is actually doing — to engage in such obvious sophistry? The answer, I think, is that it has come to be in the obvious political and ideological self-interest of many lawyers and judges to deliberately adopt a pretense that judging is — or should be — a mechanical activity that can be reduced to simply looking up and following already existing rules that have already been written down somewhere, and that applying those rules requires nothing more than knowing how to read them properly. That idea can be summed up in two eight-letter words, one of which is nonsense, and those who knowingly propagate it are just, well, dare I say it, deplorable.

Why Judge Kavanaugh Shamefully Refused to Reject Chae Chan Ping v. United States (AKA Chinese Exclusion Case) as Precedent

Senator Kamala Harris asked Judge Kavanaugh if he considered the infamous Supreme Court decision in Chae Chan Ping v. United States (AKA the Chinese Exclusion Case) a valid precedent. Judge Kavanaugh disgraced himself by refusing to say that the case was in error from the moment it was rendered, no less, and perhaps even more, than was Plessy v. Ferguson, which was overturned by the Supreme Court in Brown v. Board of Education.

The question is why he would not want to distance himself from a racist abomination of a decision that remains a stain on the Supreme Court to this day. After all, Judge Kavanaugh, in his fastidiousness, kept explaining to Senators that he wouldn’t want to get within three zipcodes of a political controversy. But, although obviously uncomfortable in his refusal to do so, he could not bring himself to say that Chae Chan Ping belongs in the garbage can along with Dred Scott and Plessy.

Here’s the reason: Chae Chan Ping is still an important precedent that has been, and continues to be, relied on by the government and the Supreme Court to uphold the power of the President to keep out foreigners whenever he wants to.

In a post in March 2017, I quoted from Justice Marshall’s magnificent dissent in Kleindienst v. Mandel, a horrible decision in which the Court upheld the exclusion of a Marxist scholar from the United States based on, among other precedents, the execrable Chae Chan Ping decision. Here is a brief excerpt from Justice Marshall’s opinion, which I discuss at greater length in my 2017 post.

The heart of appellants’ position in this case . . . is that the Government’s power is distinctively broad and unreviewable because “the regulation in question is directed at the admission of aliens.” Brief for Appellants 33. Thus, in the appellants’ view, this case is no different from a long line of cases holding that the power to exclude aliens is left exclusively to the “political” branches of Government, Congress, and the Executive.

These cases are not the strongest precedents in the United States Reports, and the majority’s baroque approach reveals its reluctance to rely on them completely. They include such milestones as The Chinese Exclusion Case, 130 U.S. 581 (1889), and Fong Yue Ting v. United States, 149 U.S. 698 (1893), in which this Court upheld the Government’s power to exclude and expel Chinese aliens from our midst.

Kleindienst has become the main modern precedent affirming the nearly unchecked power of the government to arbitrarily exclude foreigners from entering the United States on whatever whim the government chooses to act upon, so long as it can come up with an excuse, however pretextual, that the exclusion has a national security rationale.

And because Judge Kavanaugh will be a solid vote in favor of affirming the kind of monumentally dishonest decision made by Chief Justice Roberts in the Muslim Travel Ban case, he can’t disavow Chae Chan Ping without undermining Kleindienst, which, in turn, would undermine the Muslim Travel Ban.

Aside from being a great coach of his daughter’s basketball team and a superb carpool driver, Judge Kavanaugh, I’m sure, appreciates and understands how I feel.

Whatta guy.

Hayek v. Rawls on Social Justice: Correcting the False Narrative

Matt Yglesias, citing an article (“John Rawls, Socialist?“) by Ed Quish in the Jacobin arguing that Rawls, in his later years, drifted from his welfare-state liberalism to democratic socialism, tweeted about it a little while ago.

I’m an admirer of, but no expert on, Rawls, so I won’t weigh in on where to pigeon-hole Rawls on the ideological spectrum. In general, I think such pigeon-holing is as likely to mislead as to clarify, because it tends to obscure the individuality of the thinker being pigeon-holed. Rawls was above all a Rawlsian, and to reduce his complex and nuanced philosophy to a simple catch-phrase like “socialism” or even “welfare-state liberalism” cannot possibly do his rich philosophical contributions justice (no pun intended).

A good way to illustrate both the complexity of Rawls’s philosophy and that of someone like F. A. Hayek, often regarded as standing on the opposite end of the philosophical spectrum from Rawls, is to quote from two passages of volume 2 of Law, Legislation and Liberty. Hayek entitled this volume The Mirage of Social Justice, and its main thesis is that the term “justice” is meaningful only in the context of the foreseen or foreseeable consequences of deliberate decisions taken by responsible individual agents. Social justice, because it refers to the outcomes of complex social processes that no one is deliberately aiming at, is not a meaningful concept.

Because Rawls argued in favor of the difference principle, which says that unequal outcomes are only justifiable insofar as they promote the absolute (though not the relative) well-being of the least well-off individuals in society, most libertarians, including famously Robert Nozick whose book Anarchy, State and Utopia was a kind of rejoinder to Rawls’s book A Theory of Justice, viewed Rawls as an ideological opponent.

Hayek, however, had a very different take on Rawls. At the end of his preface to volume 2, explaining why he had not discussed various recent philosophical contributions on the subject of social justice, Hayek wrote:

[A]fter careful consideration I have come to the conclusion that what I might have to say about John Rawls’ A theory of Justice would not assist in the pursuit of my immediate object because the differences between us seemed more verbal than substantial. Though the first impression of readers may be different, Rawls’ statement which I quote later in this volume (p. 100) seems to me to show that we agree on what is to me the essential point. Indeed, as I indicate in a note to that passage, it appears to me that Rawls has been widely misunderstood on this central issue. (pp. xii-xiii)

Here is what Hayek says about Rawls in the cited passage.

Before leaving this subject I want to point out once more that the recognition that in such combinations as “social”, “economic”, “distributive”, or “retributive” justice the term “justice” is wholly empty should not lead us to throw the baby out with the bath water. Not only as the basis of the legal rules of just conduct is the justice which the courts of justice administer exceedingly important; there unquestionably also exists a genuine problem of justice in connection with the deliberate design of political institutions, the problem to which Professor John Rawls has recently devoted an important book. The fact which I regret and regard as confusing is merely that in this connection he employs the term “social justice”. But I have no basic quarrel with an author who, before he proceeds to that problem, acknowledges that “the task of selecting specific systems or distributions of desired things as just must be abandoned as mistaken in principle and it is, in any case, not capable of a definite answer. Rather, the principles of justice define the crucial constraints which institutions and joint activities must satisfy if persons engaging in them are to have no complaints against them. If these constraints are satisfied, the resulting distribution, whatever it is, may be accepted as just (or at least not unjust).” This is more or less what I have been trying to argue in this chapter.

In the footnote at the end of the quotation, Hayek cites the source from which he takes the quotation and then continues:

John Rawls, “Constitutional Liberty and the Concept of Justice,” Nomos IV, Justice (New York, 1963), p. 102, where the passage quoted is preceded by the statement that “It is the system of institutions which has to be judged and judged from a general point of view.” I am not aware that Professor Rawls’ later more widely read work A Theory of Justice contains a comparatively clear statement of the main point, which may explain why this work seems often, but as it appears to me wrongly, to have been interpreted as lending support to socialist demands, e.g., by Daniel Bell, “On Meritocracy and Equality”, Public Interest, Autumn 1972, p. 72, who describes Rawls’ theory as “the most comprehensive effort in modern philosophy to justify a socialist ethic.”

My Paper (with Sean Sullivan) on Defining Relevant Antitrust Markets Now Available on SSRN

Antitrust aficionados may want to have a look at this new paper (“The Logic of Market Definition”) that I have co-authored with Sean Sullivan of the University of Iowa School of Law about defining relevant antitrust markets. The paper is now posted on SSRN.

Here is the abstract:

Despite the voluminous commentary that the topic has attracted in recent years, much confusion still surrounds the proper definition of antitrust markets. This paper seeks to clarify market definition, partly by explaining what should not factor into the exercise. Specifically, we identify and describe three common errors in how courts and advocates approach market definition. The first error is what we call the natural market fallacy: the mistake of treating market boundaries as preexisting features of competition, rather than the purely conceptual abstractions of a particular analytical process. The second is the independent market fallacy: the failure to recognize that antitrust markets must always be defined to reflect a theory of harm, and do not exist independent of a theory of harm. The third is the single market fallacy: the tendency of courts and advocates to seek some single, best relevant market, when in reality there will typically be many relevant markets, all of which could be appropriately drawn to aid in competitive effects analysis. In the process of dispelling these common fallacies, this paper offers a clarifying framework for understanding the fundamental logic of market definition.

Martin Wolf Reviews Adam Tooze on the 2008 Financial Crisis

The eminent Martin Wolf, a fine economist and the foremost financial journalist of his generation, has written an admiring review of a new book (Crashed: How a Decade of Financial Crises Changed the World) about the financial crisis of 2008 and the ensuing decade of aftershocks and turmoil and upheaval by the distinguished historian Adam Tooze. This is not the first time I have written a post commenting on a review of a book by Tooze; in 2015, I wrote a post about David Frum’s review of Tooze’s book on World War I and its aftermath (Deluge: The Great War, America and the Remaking of the World Order 1916-1931). No need to dwell on the obvious similarities between these two impressive volumes.

Let me admit at the outset that I haven’t read either book. Unquestionably my loss, but I hope at some point to redeem myself by reading both of them. But in this post I don’t intend to comment at length about Tooze’s argument. Judging from Martin Wolf’s review, I fully expect that I will agree with most of what Tooze has to say about the crisis.

My criticism – and I hesitate even to use that word – will be directed toward what, judging from Wolf’s review, Tooze seems to have left out of his book. I am referring to the role of tight monetary policy, motivated by an excessive concern with inflation, when what was causing inflation was a persistent rise in energy and commodity prices that had little to do with monetary policy. Certainly, the failure to fully understand the role of monetary policy during the 2006 to 2008 period in the run-up to the financial crisis doesn’t negate all the excellent qualities that the book undoubtedly has; nevertheless, leaving out that essential part of the story is like watching Hamlet without the prince.

Let me just offer a few examples from Wolf’s review. Early in the review, Wolf provides a clear overview of the nature of the crisis, its scope and the response.

As Tooze explains, the book examines “the struggle to contain the crisis in three interlocking zones of deep private financial integration: the transatlantic dollar-based financial system, the eurozone and the post-Soviet sphere of eastern Europe”. This implosion “entangled both public and private finances in a doom loop”. The failures of banks forced “scandalous government intervention to rescue private oligopolists”. The Federal Reserve even acted to provide liquidity to banks in other countries.

Such a huge crisis, Tooze points out, has inevitably deeply affected international affairs: relations between Germany and Greece, the UK and the eurozone, the US and the EU and the west and Russia were all affected. In all, he adds, the challenges were “mind-bogglingly technical and complex. They were vast in scale. They were fast moving. Between 2007 and 2012, the pressure was relentless.”

Tooze concludes this description of events with the judgment that “In its own terms, . . . the response patched together by the US Treasury and the Fed was remarkably successful.” Yet the success of these technocrats, first with support from the Democratic Congress at the end of the administration of George W Bush, and then under a Democratic president, brought the Democrats no political benefits.

This is all very insightful, and I have no quarrel with any of it. But it mentions not a word about the role of monetary policy. Last month I wrote a post about the implications of a flat or inverted yield curve. The yield curve usually has an upward slope because short-term interest rates tend to be lower than long-term rates. Over the past year the yield curve has been steadily flattening, as short-term rates have been increasing while long-term rates have risen only slightly, if at all. Many analysts are voicing concern that the yield curve may go flat or become inverted once again. And one reason they worry is that the last time the yield curve became flat was late in 2006. Here’s how I described what happened to the yield curve in 2006 after the Fed, starting in June 2004, began mechanically raising its Fed Funds target interest rate by 25 basis points every six weeks.

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Despite the weakening economy, the Fed remained focused primarily on inflation. The Fed did begin cutting its Fed Funds target from 5.25% in late 2007 once the downturn began, but it was reluctant to move aggressively to counter a recession that worsened rapidly in the spring and summer of 2008, because it remained fixated on headline inflation, which was consistently higher than the Fed’s 2% target. But inflation was staying above the 2% target simply because of an ongoing supply shock: the price of oil, just over $50 a barrel in early 2006, rose steadily, with a short dip in late 2006 and early 2007, climbing above $100 a barrel in the summer of 2007 and peaking at over $140 a barrel in July 2008.

The mistake of tightening monetary policy in response to a supply shock in the midst of a recession would have been egregious under any circumstances, but in the context of a seriously weakened and fragile financial system, the mistake was simply calamitous. And, indeed, the calamitous consequences of that decision are plain. But somehow the connection between the Fed’s focus on inflation, while the economy was contracting and the financial system was in peril, and the crisis that followed has never been fully recognized by most observers, and certainly not by the Federal Reserve officials who made those decisions. A few paragraphs later, Wolf observes:

Furthermore, because the banking systems had become so huge and intertwined, this became, in the words of Ben Bernanke — Fed chairman throughout the worst days of the crisis and a noted academic expert — the “worst financial crisis in global history, including the Great Depression”. The fact that the people who had been running the system had so little notion of these risks inevitably destroyed their claim to competence and, for some, even probity.

I will not agree or disagree with Bernanke that the 2008 crisis was worse than the 1929-30, 1931, or 1933 crises, but it appears that the people who were running the system still have not fully understood their own role in precipitating the crisis. That is a story that remains to be told. I hope we don’t have to wait too much longer.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
