Archive for the 'expectations' Category

You Say Potato, I Say Potahto; You Say Tomato, I Say Tomahto; You Say Distribution, I Say Expectation

Once again, the estimable Olivier Blanchard is weighing in on the question of inflation, expressing fears about an impending wage-price spiral that cannot be controlled by conventional monetary policy unless the monetary authority is prepared to impose monetary conditions tight enough to entail substantially higher unemployment than we have experienced since the aftermath of the 2008 financial crisis and Little Depression (aka Great Recession). Several months ago, when Blanchard supported Larry Summers’s warnings that rising inflation was not likely to be transitory but would likely remain high and possibly increase over time, I tried to explain why his fears of high and rising inflation were likely exaggerated. Unpersuaded, he now returns to provide a deeper explanation of his belief that, unless the deep systemic forces that cause inflation are addressed politically rather than left, by default, to be handled by the monetary authority, inflation will remain a persistent and vexing problem.

I’m sorry to say that Professor Blanchard starts off with a massive overstatement. While I don’t discount the possibility — even the reality — that inflation may sometimes be triggered by the attempt of a particular sector of the economy to increase the relative price of the goods or services that it provides, thereby increasing its share of total income at the expense of other sectors, I seriously question whether this is a typical, or even frequent, source of inflation. For example, oil-price increases in the 1970s and wage increases in France after the May 1968 student uprisings did trigger substantial inflation. In those episodes, inflation served as a method of (1) mitigating the adverse macroeconomic effects on output and employment and (2) diluting the size of the resulting wealth transfer from other sectors.

Blanchard continues:

2/8. The source of the conflict may be too hot an economy: In the labor market, workers may be in a stronger position to bargain for higher wages given prices. But, in the goods market, firms may also be in a stronger position to increase prices given wages. And, on, it goes.


Again, I’m sorry to say that I find this remark incomprehensible. Blanchard says “the source of the conflict may be too hot an economy,” and in the very next breath says that in the labor market (as if it made sense to view the labor market, accounting for more than half the nominal income of the economy, as a single homogeneous market with stable supply and demand curves), “workers may be in a strong position to bargain for higher wages given prices,” while in the goods market firms may be in a strong position to bargain for higher prices relative to wages. What kind of bargaining position is Blanchard talking about? Is it real, reflecting underlying economic conditions, or is it nominal, reflecting macroeconomic conditions? He doesn’t seem to know. And if he does know, he’s not saying. But he began by saying that the source of the conflict “may be too hot an economy,” suggesting that the source of the conflict is macroeconomic, not a conflict over shares. So I’m confused. We can only follow him a bit further to see what he may be thinking.

3/8. The source of the conflict may be in too high prices of commodities, such as energy. Firms want to increase prices given wages, to reflect the higher cost of intermediate inputs. Workers want to resist the decrease in the real wage, and ask for higher wages. And on it goes.

Now Blanchard seems to be attributing the conflict to an exogenous — and unexplained — increase in commodity prices. One sector presumably enjoys an improvement in its bargaining position relative to the rest of the economy, thereby being enabled to increase its share of total income. Rather than consider the appropriate response to such an exercise of raw market power, Blanchard simply assumes, without explaining how, that this increase in share triggers a vicious cycle of compensating increases in the prices and wages of other sectors, rather than a one-off distributional change to reflect a new balance of economic power. This is a complicated story with interesting macroeconomic implications, but Blanchard doesn’t bother to do more than assert that the disturbance sets in motion an ongoing, possibly unlimited, cycle of price and wage increases.

4/8. The state can play various roles. Through fiscal policy, it can slow down the economy and eliminate the overheating. It can subsidize the cost of energy, limiting the decrease in the real wage and the pressure on nominal wages.

5/8. It can finance the subsidies by increasing taxes on some current taxpayers, say exceptional profit taxes, or through deficits and eventual taxes on future taxpayers (who have little say in the process…)

These two statements from the thread are largely innocuous and contribute little or nothing to an understanding of the cause or causes of inflation or of the policies that might mitigate inflation or its effects.

6/8. But, in the end, forcing the players to accept the outcome, and thus stabilizing inflation, is typically left to the central bank. By slowing down the economy, it can force firms to accept lower prices given wages, and workers to accept lower wages given prices.

It’s not clear to me what constitutes “acceptance” of the outcome. Under any circumstance, the players will presumably still seek to choose and execute what, given the situation in which they find themselves, they regard as an optimal plan. Whether the players can execute the plan that they choose will depend on the plans chosen by other players and on the policies adopted by the central bank and other policy makers. If policy makers adopt a consistent set of policies that are feasible and are aligned with the outcomes expected by the players, then the players will likely succeed in implementing what they regard as optimal plans. If the policies that are followed are not consistent and not feasible, then those policies will not be aligned with the outcomes expected by the players. In the latter case, matters will likely get worse not better.

7/8. It is a highly inefficient way to deal with distributional conflicts. One can/should dream of a negotiation between workers, firms, and the state, in which the outcome is achieved without triggering inflation and requiring a painful slowdown.

I can’t help but observe the vagueness associated with the pronoun “it” and its unidentified antecedent. The outcome of a complex economic process cannot be achieved by a negotiation between workers, firms, and the state. Things don’t work that way. Whatever negotiation Professor Blanchard is dreaming about, no negotiation can possibly determine the details of an outcome. What is possible is some agreement on policy goals or targets for inflation and some feasible set of policies aimed at achieving, or coming close to, a target rate of inflation. The key variable over which policy makers have some control is total aggregate demand for the economy, measured either as a rate of nominal spending and nominal income over a year or as a rate of growth in spending and income compared to the previous year. Since inflation is itself a rate of change, presumably the relevant target should be a rate of change in total nominal spending and nominal income. Thus, the appropriate target for policy makers to aim for is the yearly rate of growth in total nominal spending and total nominal income.

Given some reasonable expectation about the rate of technical progress (labor productivity) and the rate of increase in the labor force, a target rate of inflation implies a target rate of increase in total nominal spending and total nominal income, as sketched below. Given expectations about the increase in labor productivity, there is an implied rate of increase in nominal wages that is broadly consistent with the inflation target. But that average rate of increase in nominal wages can hardly be expected to be uniform for all industries, all firms, and all workers, and it would be folly, for purely technical reasons, to attempt to enforce such a target for average nominal-wage growth. And for legal and political reasons, it would be an even greater folly to try to do so.
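
To make the arithmetic concrete, here is a back-of-the-envelope sketch of the implied targets; the symbols and the illustrative numbers are mine, not drawn from the post or from any actual policy.

```latex
% Nominal-income growth decomposes (approximately) into inflation plus
% real growth, and real growth into productivity growth plus labor-force
% growth:
g_{\text{NGDP}} \approx \pi^{*} + g_{\text{real}}
              \approx \pi^{*} + g_{\text{productivity}} + g_{\text{labor force}}

% The average nominal-wage growth broadly consistent with the inflation
% target is the target plus productivity growth:
g_{\text{wages}} \approx \pi^{*} + g_{\text{productivity}}

% Illustrative numbers: \pi^{*} = 2\%, g_{\text{productivity}} = 1.5\%,
% g_{\text{labor force}} = 0.5\%, implying an NGDP-growth target of about
% 4\% and average nominal-wage growth of about 3.5\%.
```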

Besides announcing the underlying basis for the targeted rate of nominal income growth, and encouraging workers and firms to take those targets seriously when negotiating wage contracts and setting prices, while recognizing that deviations from those targets are often reasonable and appropriate in light of the specific circumstances in which particular firms, industries, and labor unions are operating, policy makers have no constructive role to play in the setting of prices or wages for individual firms, industries, or labor contracts. But providing useful benchmarks for private agents to use in forming their own expectations and guiding their plans for the future is entirely appropriate and useful.

I should acknowledge, as I have done previously, that the approach to policy making summarized here is based on the analysis developed by Ralph Hawtrey over the course of more than a half century as both a monetary theorist and a policy advisor, and, especially, as Hawtrey explained over a half-century ago in his final book, Incomes and Money.

8/8. But, unfortunately, this requires more trust than can be hoped for and just does not happen. Still, this way of thinking inflation shows what the problem is, and how to think of the least painful solution.

Insofar as policymakers can show that they are coming reasonably close to meeting their announced targets, they will encourage private actors to take those announced targets seriously when forming their own expectations and when negotiating with counterparties on the terms of their economic relationships. The least painful solutions are those in which economic agents align their expectations with the policy targets announced — and achieved — by policy makers.

Originally tweeted by Olivier Blanchard (@ojblanchard1) on December 30, 2022.

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence as a sophomore: micro (utility and cost theory, econ 101a), general equilibrium theory (101b), and macro theory (102). It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, for those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, though his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as was then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement, highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory,” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful, so that the macroeconomic debate was often framed as a dispute about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
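
For readers who want the skeleton of the model, here is a minimal linear IS-LM sketch; the notation is mine, not Axel’s, and the polar cases correspond to the elasticity assumptions just described.

```latex
% IS: goods-market equilibrium, with investment a negative function of
% the interest rate r:
Y = c_0 + c_1 Y + I_0 - b\,r + G, \qquad 0 < c_1 < 1, \; b > 0

% LM: money-market equilibrium, with a fixed money stock M and money
% demand increasing in income and decreasing in the interest rate:
M/P = k\,Y - h\,r, \qquad k, h > 0

% Keynesian-cross polar case: h \to \infty (money demand perfectly
% interest-elastic, a flat LM curve). Monetary policy is ineffective and
% the fiscal multiplier takes its full Keynesian-cross value 1/(1 - c_1).

% Monetarist (classical) polar case: h \to 0 (money demand completely
% interest-inelastic, a vertical LM curve). Then Y = M/(P\,k), so fiscal
% policy cannot change output and monetary policy is all-powerful.
```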

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics, like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own “Keynes and the Keynesians: A Suggested Interpretation,” a preview of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in the fall of 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servomechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war, and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights into informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index to Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different path, following the one blazed by Robert Lucas and his followers of insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that, if the real wage in an economy is somehow stuck above the market-clearing level, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing money wage by raising the price level enough to reduce the real wage to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment would occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions. Keynes therefore believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

In Axel’s view, the great contribution of Keynes was his attention to the implications of the absence of any actual coordinating mechanism, a mechanism merely assumed by neoclassical economic theory (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption). Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: a) insisting that all agents be treated as equilibrium optimizers; b) assuming that those agents therefore form identical rational expectations of all future prices using the same common knowledge; so that c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers, in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection or ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks that dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis but with little claim to any actual analytical insight.
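
A stylized way to see the persistence problem mentioned above is the following compressed version of a Lucasian supply relation; the notation is mine and abstracts from the details of Lucas’s island model.

```latex
% Output deviates from its natural level only insofar as the price level
% differs from its rationally expected value:
y_t = \theta \left( p_t - E_{t-1}\,p_t \right) + \varepsilon_t

% Under rational expectations the forecast error (p_t - E_{t-1} p_t) is
% uncorrelated with anything known at t-1, including its own past values,
% so y_t can be no more persistent than \varepsilon_t. Observed output
% and unemployment series, however, are strongly serially correlated,
% which is the deficiency described above.
```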

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl, and Hayek. Earl explained the method as resting on the assumption that markets for current delivery clear, but that those market-clearing prices may differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

On the Labor Supply Function

The bread and butter of economics is demand and supply. The basic idea of a demand function (or a demand curve) is to describe a relationship between the price at which a given product, commodity or service can be bought and the quantity that will be bought by some individual. The standard assumption is that the quantity demanded increases as the price falls, so that the demand curve is downward-sloping, but not much more can be said about the shape of a demand curve unless special assumptions are made about the individual’s preferences.

Demand curves aren’t natural phenomena with concrete existence; they are hypothetical or notional constructs pertaining to individual preferences. To pass from individual demands to a market demand for a product, commodity or service requires another conceptual process summing the quantities demanded by each individual at any given price. The conceptual process is never actually performed, so the downward-sloping market demand curve is just presumed, not observed as a fact of nature.

The summation process required to pass from individual demands to a market demand implies that the quantity demanded at any price is the quantity demanded when each individual pays exactly the same price that every other demander pays. At a price of $10/widget, the widget demand curve tells us how many widgets would be purchased if every purchaser in the market can buy as much as desired at $10/widget. If some customers can buy at $10/widget while others have to pay $20/widget or some can’t buy any widgets at any price, then the quantity of widgets actually bought will not equal the quantity on the hypothetical widget demand curve corresponding to $10/widget.
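
As a minimal sketch of that summation, here is a small numerical example; the demand functions and the prices are hypothetical, chosen only to illustrate the point.

```python
# Minimal sketch of the summation behind a market demand curve, and of why
# it presupposes a single uniform price. The demand functions are hypothetical.

def demand_a(p):
    # Individual demand of consumer A (widgets demanded at price p)
    return max(0.0, 20 - 1.5 * p)

def demand_b(p):
    # Individual demand of consumer B
    return max(0.0, 10 - 0.5 * p)

def market_demand(p):
    # Horizontal summation: quantities demanded at the SAME price are added
    return demand_a(p) + demand_b(p)

print(market_demand(10))            # 10.0 widgets if everyone pays $10
# If A can buy at $10 but B must pay $20, actual purchases fall short of
# the market demand curve's value at $10:
print(demand_a(10) + demand_b(20))  # 5.0 widgets
```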

Similar reasoning underlies the supply function or supply curve for any product, commodity or service. The market supply curve is built up from the preferences and costs of individuals and firms and represents the amount of a product, commodity or service that they would be willing to offer for sale at different prices. The market supply curve is the result of a conceptual summation process that adds up the amounts that would hypothetically be offered for sale by every agent at different prices.

The point of this pedantry is to emphasize that the demand and supply curves we use are drawn on the assumption that a single uniform market price prevails in every market and that all demanders and suppliers can trade without limit at those prices, so that their trading plans are fully executed. This is the equilibrium paradigm underlying the supply-demand analysis of econ 101.

Economists quite unself-consciously deploy supply-demand concepts to analyze labor markets in a variety of settings. Sometimes, if the labor market under analysis is limited to a particular trade or a particular skill or a particular geographic area, the supply-demand framework is reasonable and appropriate. But when applied to the aggregate labor market of the whole economy, the supply-demand framework is inappropriate, because the ceteris-paribus proviso (all prices other than the price of the product, commodity or service in question are held constant) attached to every supply-demand model is obviously violated.

Thoughtlessly applying a simple supply-demand model to analyze the labor market of an entire economy leads to the conclusion that widespread unemployment, a situation in which some workers are unemployed but would have accepted employment at wages that comparably skilled workers are actually receiving, implies that wages are above the market-clearing level consistent with full employment.

The attached diagram shows the simplest version of this analysis. The market wage (W1) is higher than the equilibrium wage (We) at which all workers willing to accept that wage could be employed. The difference between the number of workers seeking employment at the market wage (LS) and the number of workers that employers seek to hire (LD) measures the amount of unemployment. According to this analysis, unemployment would be eliminated if the market wage fell from W1 to We.
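
Since the diagram itself isn’t reproduced here, a small numerical stand-in, with hypothetical linear curves, may help.

```python
# Numerical stand-in for the diagram described above; the linear supply
# and demand curves are hypothetical.

def labor_supply(w):
    # Workers seeking employment at wage w (LS)
    return 100 + 10 * w

def labor_demand(w):
    # Workers employers seek to hire at wage w (LD)
    return 300 - 10 * w

w_e = 10.0  # equilibrium wage We: labor_supply(10) == labor_demand(10) == 200
w_1 = 12.0  # market wage W1 stuck above We

unemployment = labor_supply(w_1) - labor_demand(w_1)
print(unemployment)  # 40.0 workers unemployed: LS(W1) - LD(W1)
```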

Applying supply-demand analysis to aggregate unemployment fails on two levels. First, workers clearly are unable to execute their plans to offer their labor services at the wage at which other workers are employed, so individual workers are off their supply curves. Second, it is impossible to assume, as supply-demand analysis requires, that all other prices and incomes remain constant, so that the demand and supply curves do not move as wages and employment change. When multiple variables are mutually interdependent and simultaneously determined, the analysis of just two variables (wages and employment) cannot be isolated from the rest of the system. Focusing on the wage as the variable that needs to change to restore full employment is an example of tunnel vision.

Keynes rejected the idea that economy-wide unemployment could be eliminated by cutting wages. Although Keynes’s argument against wage cuts as a cure for unemployment was flawed, he did have at least an intuitive grasp of the basic weakness in the argument for wage cuts: that high aggregate unemployment is not usefully analyzed as a symptom of excessive wages. To explain why wage cuts aren’t the cure for high unemployment, Keynes introduced a distinction between voluntary and involuntary unemployment.

Forty years later, Robert Lucas began his effort — not the first such effort, but by far the most successful — to discredit the concept of involuntary unemployment. Here’s an early example:

Keynes [hypothesized] that measured unemployment can be decomposed into two distinct components: ‘voluntary’ (or frictional) and ‘involuntary’, with full employment then identified as the level prevailing when involuntary unemployment equals zero. It seems appropriate, then, to begin by reviewing Keynes’ reasons for introducing this distinction in the first place. . . .

Accepting the necessity of a distinction between explanations for normal and cyclical unemployment does not, however, compel one to identify the first as voluntary and the second as involuntary, as Keynes goes on to do. This terminology suggests that the key to the distinction lies in some difference in the way two different types of unemployment are perceived by workers. Now in the first place, the distinction we are after concerns sources of unemployment, not differentiated types. . . .[O]ne may classify motives for holding money without imagining that anyone can subdivide his own cash holdings into “transactions balances,” “precautionary balances”, and so forth. The recognition that one needs to distinguish among sources of unemployment does not in any way imply that one needs to distinguish among types.

Nor is there any evident reason why one would want to draw this distinction. Certainly the more one thinks about the decision problem facing individual workers and firms the less sense this distinction makes. The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesirable capital loss. Nevertheless, the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

Lucas, Studies in Business Cycle Theory, pp. 241-43

Consider this revision of Lucas’s argument:

The expressway driver who is slowed down in a traffic jam does not volunteer to be in this situation; he has suffered a waste of his time. Nevertheless, the driver can get off the expressway at the next exit to find an alternate route. Thus, there is an involuntary element in every traffic jam, in the sense that no one chooses to waste time; there is also a voluntary element in all traffic jams, in the sense that however stuck one is in traffic, one can always take the next exit on the expressway.

What is lost on Lucas is that, for an individual worker, taking a wage cut to avoid being laid off accomplishes nothing, because the willingness of a single worker to accept a wage cut would not induce the employer to increase output and employment. Unless all workers agreed to take wage cuts, a wage cut to one employee would not cause the employer to reconsider its plan to reduce output in the face of declining demand for its product. Only the collective offer of all workers to accept a wage cut would induce an output response by the employer and a decision not to lay off part of its work force.

But even a collective offer by all workers to accept a wage cut would be unlikely to avoid an output reduction and layoffs. Consider a simple case in which the demand for the employer’s output declines by a third. Suppose the employer’s marginal cost of output is half the selling price (implying a demand elasticity of -2). Assume that demand is linear. With no change in its marginal cost, the firm would reduce output by a third, presumably laying off up to a third of its employees. Could workers avoid the layoffs by accepting lower wages to enable the firm to reduce its price? Or asked in another way, how much would marginal cost have to fall for the firm not to reduce output after the demand reduction?

Working out the algebra, one finds that for the firm to keep producing as much after a one-third reduction in demand, the firm’s marginal cost would have to fall by two-thirds, a decline that could only be achieved by a radical reduction in labor costs. This is surely an oversimplified view of the alternatives available to workers and employers, but the point is that workers facing a layoff after a decline in the demand for the product they produce have almost no ability to remain employed even by collectively accepting a wage cut.
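
The post doesn’t spell out the algebra, but here is one reconstruction (my own, under the stated assumptions) that reproduces both numbers, treating the one-third decline in demand as a parallel downward shift of the linear demand curve by one-third of the initial price:

```latex
% Linear inverse demand with constant marginal cost:
p = a - b\,q, \qquad MC = c

% Profit maximization (MR = MC):
a - 2b\,q = c \;\Rightarrow\; q^{*} = \frac{a - c}{2b}, \qquad
p^{*} = \frac{a + c}{2}

% The assumption MC = p^{*}/2 gives a = 3c, so p^{*} = 2c, q^{*} = c/b,
% and the elasticity at the optimum is -2, as in the text.

% A one-third decline in demand, modeled as a parallel downward shift of
% the demand curve by p^{*}/3 = 2c/3:
p = \left( a - \tfrac{2c}{3} \right) - b\,q

% With marginal cost unchanged, the new optimum is
q = \frac{a - \tfrac{2c}{3} - c}{2b} = \frac{2c}{3b} = \tfrac{2}{3}\,q^{*}
% so output falls by a third, as stated.

% To keep output at q^{*}, marginal cost must fall to the new marginal
% revenue at q^{*}:
MR_{\text{new}}(q^{*}) = a - \tfrac{2c}{3} - 2c = \tfrac{c}{3}
% i.e., from c to c/3: a two-thirds decline, as stated.
```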

That conclusion applies a fortiori when decisions whether to accept a wage cut are left to individual workers, because the willingness of workers individually to accept a wage cut is irrelevant to their chances of retaining their jobs. Being laid off because of a decline in the demand for the product a worker is producing is a very different situation from being laid off because a worker’s employer is shifting to a new technology for which the worker lacks the requisite skills and can remain employed only by accepting reassignment to a lower-paying job.

Let’s follow Lucas a bit further:

Keynes, in chapter 2, deals with the situation facing an individual unemployed worker by evasion and wordplay only. Sentences like “more labor would, as a rule, be forthcoming at the existing money wage if it were demanded” are used again and again as though, from the point of view of a jobless worker, it is unambiguous what is meant by “the existing money wage.” Unless we define an individual’s wage rate as the price someone else is willing to pay him for his labor (in which case Keynes’s assertion is defined to be false), what is it?

Lucas, Id.

I must admit that, reading this passage again perhaps 30 or more years after my first reading, I’m astonished that I could have once read it without astonishment. Lucas gives the game away by accusing Keynes of engaging in evasion and wordplay before embarking himself on sustained evasion and wordplay. The meaning of the “existing money wage” is hardly ambiguous: it is the money wage the unemployed worker was receiving before losing his job and the wage that his fellow workers, who remain employed, continue to receive.

Is Lucas suggesting that the reason that the worker lost his job while his fellow workers did not lose theirs is that the value of his marginal product fell but the value of his co-workers’ marginal product did not? Perhaps, but that would only add to my astonishment. At the current wage, employers had to reduce the number of workers employed until the marginal product of those remaining was high enough for the employer to continue employing them. That was not necessarily, and certainly not primarily, because some workers were more capable than those that were laid off.

The fact is, I think, that Keynes wanted to get labor markets out of the way in chapter 2 so that he could get on to the demand theory which really interested him.

More wordplay. Is it fact or opinion? Well, he says that he thinks it’s a fact. In other words, it’s really an opinion.

This is surely understandable, but what is the excuse for letting his carelessly drawn distinction between voluntary and involuntary unemployment dominate aggregative thinking on labor markets for the forty years following?

Mr. Keynes, really, what is your excuse for being such an awful human being?

[I]nvoluntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to ‘explain’ the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Lucas, Id.

Let’s rewrite this paragraph with a few strategic word substitutions:

Heliocentrism is not a fact or phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Copernicus introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: the observed movement of the planets in the heavens. Is it the task of modern theoretical physics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Copernicus died in 1543, shortly before his work on heliocentrism was published. Galileo’s works on heliocentrism were not published until 1610, almost 70 years after Copernicus published his work. So, under Lucas’s forty-year time limit, Galileo had no business trying to explain Copernican heliocentrism, which had still not proven fruitful. Moreover, even after Galileo had published his works, geocentric models were providing predictions of planetary motion as good as, if not better than, those of the heliocentric models, so decisive empirical evidence in favor of heliocentrism was still lacking. Not until Newton published his great work, roughly 70 years after Galileo and 140 years after Copernicus, was heliocentrism finally accepted as fact.

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problem they face. One cannot, even conceptually, arrive at a usable definition of full employment.

Lucas, Id.

Belying his claim to be introducing scientific rigor into macroeconomics, Lucas resorts to an extended scholastic inquiry into whether an unemployed worker can really ever be unemployed involuntarily. Based on his scholastic inquiry into the nature of voluntariness, Lucas declares that Keynes was mistaken because he would not accept the discipline of optimization and equilibrium. But Lucas’s insistence on the discipline of optimization and equilibrium is misplaced unless he can provide an actual mechanism whereby the notional optimization of a single agent can be reconciled with the notional optimization of other individuals.

It was his inability to provide any explanation of the mechanism whereby the notional optimization of individual agents can be reconciled with the notional optimizations of other individual agents that led Lucas to resort to rational expectations to circumvent the need for such a mechanism. He successfully persuaded the economics profession that, by evading the need to explain such a reconciliation mechanism, the profession would not be shirking its explanatory duty, but would merely be fulfilling its methodological obligation to uphold the neoclassical axioms of rationality and optimization neatly subsumed under the heading of microfoundations.

Rational expectations and microfoundations provided the pretext that could justify or at least excuse the absence of any explanation of how an equilibrium is reached and maintained by assuming that the rational expectations assumption is an adequate substitute for the Walrasian auctioneer, so that each and every agent, using the common knowledge (and only the common knowledge) available to all agents, would reliably anticipate the equilibrium price vector prevailing throughout their infinite lives, thereby guaranteeing continuous equilibrium and consistency of all optimal plans. That feat having been securely accomplished, it was but a small and convenient step to collapse the multitude of individual agents into a single representative agent, so that the virtue of submitting to the discipline of optimization could find its just and fitting reward.

Wherein I Try to Calm Professor Blanchard’s Nerves

Olivier Blanchard is rightly counted among the most eminent macroeconomists of our time, and his pronouncements on macroeconomic matters should not be dismissed casually. So his commentary yesterday for the Peterson Institute for International Economics, responding to a previous policy brief by David Reifschneider and David Wilcox arguing that the recent burst of inflation is likely to recede, bears close attention.

Blanchard does not reject the analysis of Reifschneider and Wilcox outright, but he argues that they overlook factors that could cause inflation to remain high unless policy makers take more aggressive action to bring inflation down than is recommended by Reifschneider and Wilcox. Rather than go through the details of Blanchard’s argument, I address the two primary concerns he identifies: (1) the potential for inflation expectations to become unanchored, as they were in the 1970s and early 1980s, by persistent high inflation, and (2) the potential inflationary implications of wage catchup after the erosion of real wages by the recent burst of inflation.

Unanchored Inflation Expectations and the Added Cost of a Delayed Response to Inflation

Blanchard cites a forthcoming book by Alan Blinder on soft and hard landings from inflation in which Blinder examines nine Fed tightening episodes in which tightening was the primary cause of a slowdown or a recession. Based on the historical record, Blinder is optimistic that the Fed can manage a soft landing if it needs to reduce inflation. Blanchard doesn’t share Blinder’s confidence.

[I]n most of the episodes Blinder has identified, the movements in inflation to which the Fed reacted were too small to be of direct relevance to the current situation, and the only comparable episode to today, if any, is the episode that ended with the Volcker disinflation of the early 1980s.

I find that a scary comparison. . . .

[I]t shows what happened when the Fed got seriously “behind the curve” in 1974–75. . . . It then took 8 years, from 1975 to 1983, to reduce inflation to 4 percent.

And I find Blanchard’s comparison of the 1975-1983 period with the current situation problematic. First, he ignores the fact that the 1975-1983 episode did not display a steady rate of inflation or a uniform increase in inflation from 1975 until Volcker finally tamed it by way of the brutal 1981-82 recession. As I’ve explained previously in posts on the 1970s and 1980s (here, here, and here), and in chapters 7 and 8 of my book Studies in the History of Monetary Theory, the 1970s inflation was the product of a series of inflationary demand-side and supply-side shocks and misguided policy responses by the Fed, guided by politically motivated misconceptions, with little comprehension of the consequences of its actions.

It would be unwise to assume that the Fed will never embark on a similar march of folly, but it would be at least as unwise to adopt a proposed policy on the assumption that the alternative to that policy would be a repetition of the earlier march. What commentary on the 1970s largely overlooks is that there was an enormous expansion of the US labor force in that period as baby boomers came of age and as women began seeking and finding employment in steadily increasing numbers. The labor-force participation rate in the 1950s and 1960s fluctuated between about 58% and about 60%, mirroring fluctuations in the unemployment rate. Between 1970 and 1980 the labor-force participation rate rose from just over 60% to just over 64% even as the unemployment rate rose from about 5% to over 7%. The 1970s were not, for the most part, a period of stagflation, but a period of inflation and strong growth interrupted by one deep recession (1974-75) and bookended by two minor recessions (1969-70 and 1979-80). But the rising trend of unemployment during the decade was largely attributable not to stagnation but to a rapidly expanding labor force and a rising labor-force participation rate.

The rapid increase in inflation in 1973 was largely the result of a policy error: the Nixon/Burns collaboration to stimulate the economy and ensure Nixon’s re-election in 1972, without bothering to taper the stimulus in 1973 after full employment had been restored just in time for the election. The oil shock of 1973-74 would have justified allowing a transitory period of increased inflation to cushion the negative effect of the increase in energy prices and to dilute the real magnitude of the nominal increase in oil prices. But the combined effect of excess aggregate demand and a negative supply shock led to an exaggerated compensatory tightening of monetary policy that caused the unnecessarily deep and prolonged recession of 1974-75.

A strong recovery ensued after the recession, accompanied, not surprisingly, by declining inflation, which fell below 5% in 1976. However, owing to the historically high rate of unemployment, only partially attributable to the previous recession, the incoming Carter administration promoted expansionary fiscal and monetary policies, which Arthur Burns, hoping to be reappointed by Carter to another term as Fed Chairman, willingly implemented. Rather than continuing on the downward trend inherited from the previous administration, inflation resumed its upward trend in 1977.

Burns’s hopes to be reappointed by Carter were disappointed, but his replacement, G. William Miller, made no effort to tighten monetary policy to reverse the upward trend in inflation. A second oil shock in 1979, associated with the Iranian Revolution and the taking of US hostages in Iran, caused crude oil prices to more than double over the course of 1979. Again, the appropriate monetary-policy response was not to tighten monetary policy but to accommodate the price increase without causing a recession.

However, by the time of the second oil shock in 1979, inflation was already in the high single digits. The second oil shock, combined with the disastrous effects of the controls on petroleum prices carried over from the Nixon administration, created a crisis atmosphere that allowed the Reagan administration, with the cooperation of Paul Volcker, to implement a radical Monetarist anti-inflation policy. The policy was based on the misguided presumption that keeping the rate of growth of some measure of the money stock below a 5% annual rate would cure inflation with little effect on the overall economy if it were credibly implemented.

Volcker’s reputation was such that it was thought by supporters of the policy that his commitment would be relied upon by the public, so that a smooth transition to a lower rate of inflation would follow, and any downturn would be mild and short-lived. But the result was an unexpectedly deep and long-lasting recession.

The recession was needlessly prolonged by the grave misunderstanding of the causal relationship between the monetary aggregates and macroeconomic performance that had been perpetrated by Milton Friedman’s anti-Keynesian Monetarist counterrevolution. After triggering the sharpest downturn of the postwar era, the Monetarist anti-inflation strategy adopted by Volcker was, in the summer of 1982, on the verge of causing a financial crisis before Volcker announced that the Fed would no longer try to target any of the monetary aggregates, an announcement that triggered an immediate stock-market boom and, within a few months, the start of an economic recovery.

Thus, Blanchard is wrong to compare our current situation to the entire 1975-1983 period. The current situation, rather, is similar to the situation in 1973, when an economy, in the late stages of a recovery with rising inflation, was subjected to a severe supply shock. The appropriate response to that supply shock was not to tighten monetary policy, but merely to draw down the monetary stimulus of the previous two years. However, the Fed, perhaps shamed by the excessive, and politically motivated, monetary expansion of the previous two years, overcompensated by tightening monetary policy to counter the combined inflationary impact of its own previous policy and the recent oil-price increase, immediately triggering what was then the sharpest downturn of the postwar era. That is the lesson to draw from the 1970s, and it is a mistake that the Fed ought not repeat now.

The Catch-Up Problem: Are Rapidly Rising Wages a Ticking Time-Bomb?

Blanchard is worried that, because price increases exceeded wage increases in 2021, causing real wages to fall, workers will rationally assume, and demand, that their nominal wages rise in 2022 to compensate for the decline in real wages, thereby fueling a further increase in inflation. This is a familiar argument based on the famous short-run Phillips-Curve trade-off between inflation and unemployment: the reduced unemployment resulting from the real-wage reduction associated with inflation leads workers to demand compensatory wage increases, which in turn cause inflation to increase further.

This argument is problematic on at least two levels. First, it presumes that the Phillips Curve represents a structural relationship, when it is merely a reduced form, just as an observed relationship between the price of a commodity and sales of that commodity is a reduced form, not a demand curve. Just as no inference about the effect of a price change can be drawn from such a reduced form, no inference about the effect of inflation can be drawn from the Phillips Curve.
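The point about reduced forms can be illustrated with a purely hypothetical simulation (all parameters invented for illustration; nothing here is drawn from Blanchard’s argument). If price-quantity data are generated by a market in which both demand and supply shift randomly, a regression of sales on price recovers neither curve; the fitted slope is a reduced form that depends on the relative variances of the two shocks:

```python
# Purely illustrative simulation (all parameters hypothetical): generate
# equilibrium price-quantity data from a market in which both demand and
# supply shift randomly, then regress quantity on price.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Structural model: demand q = a - b*p + u, supply q = c + d*p + v
a, b, c, d = 10.0, 1.5, 2.0, 1.0
u = rng.normal(0.0, 1.0, n)  # demand shocks
v = rng.normal(0.0, 1.0, n)  # supply shocks

# Market clearing: a - b*p + u = c + d*p + v
p = (a - c + u - v) / (b + d)
q = c + d * p + v

slope = np.polyfit(p, q, 1)[0]
print(f"structural demand slope: {-b:.2f}")
print(f"reduced-form regression slope: {slope:.2f}")
# The fitted slope (about -0.25 here) is neither the demand slope (-1.5)
# nor the supply slope (1.0); it depends on the relative variances of the
# two shocks, so it supports no inference about either curve.
```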

But one needn’t resort to a somewhat sophisticated argument to see why Blanchard’s fears that wage catch-up will lead to a further round of inflation are not well-grounded. Blanchard argues that business firms, having pocketed windfall profits from rising prices that have outpaced wage increases, will grant workers compensatory wage increases to restore workers’ real wages, while also increasing prices to compensate themselves for the increased wages that they have agreed to pay their workers.

I’m sorry, but with all due respect to Professor Blanchard, that argument makes no sense. Evidently, firms have generally enjoyed a windfall when market conditions allowed them to raise prices without raising wages. Why, if wages finally catch up to prices, will they raise prices again? Either firms can choose, at will, how much profit to make when they set prices or their prices are constrained by market forces. If Professor Blanchard believes that firms can simply choose how much profit they make when they set prices, then he seems to be subscribing to Senator Warren’s theory of inflation: that inflation is caused by corporate greed. If he believes that, in setting prices, firms are constrained by market forces, then the mere fact that market conditions allowed them to increase prices faster than wages rose in 2021 does not mean that, if market conditions cause wages to rise faster in 2022 than they did in 2021, firms, after absorbing those wage increases, will automatically be able to maintain their elevated profit margins by raising prices correspondingly.

The market conditions facing firms in 2022 will be determined by, among other things, the monetary policy of the Fed. Whether firms are able to raise prices in 2022 as fast as wages rise in 2022 will depend on the monetary policy adopted by the Fed. If the Fed’s monetary policy aims at gradually slowing down the rate of increase in nominal GDP in 2022 from the 2021 rate of increase, firms overall will not easily be able to raise prices as fast as wages rise in 2022. But why should anyone expect that firms that enjoyed windfall profits from inflation in 2021 will be able to continue enjoying those elevated profits in perpetuity?
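A stylized decomposition, with purely illustrative numbers, makes the constraint concrete. Treating nominal income as approximately the sum of the wage bill and profits, the growth rate of nominal GDP is roughly a share-weighted average of wage growth and profit growth:

\[ g_{NGDP} \approx s_L\, g_W + (1 - s_L)\, g_\Pi \]

If the Fed holds nominal-GDP growth to, say, 5% while nominal wages grow at 7% and the labor share \(s_L\) is 0.6, then profits can grow at only about \((5\% - 0.6 \times 7\%)/0.4 = 2\%\). The wage catch-up would then be absorbed in compressed margins rather than passed through into correspondingly higher prices.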

Professor Blanchard posits simple sectoral equations for the determination of the rate of wage increases and for the rate of price increases given the rate of wage increases. This sort of one-way causality is much too simplified and ignores the fundamental fact that all prices and wages and expectations of future prices and wages are mutually determined in a simultaneous system. One can’t reason from a change in a single variable and extrapolate from that change how the rest of the system will adjust.
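To see the difficulty in the simplest terms, consider a generic textbook wage-price system (a schematic illustration, not Blanchard’s actual equations):

\[ \Delta w_t = \Delta p_t^e + \alpha (u^* - u_t), \qquad \Delta p_t = \Delta w_t - \gamma \]

where \(\Delta w_t\) and \(\Delta p_t\) are wage and price inflation, \(\Delta p_t^e\) is expected price inflation, \(u_t\) is unemployment, and \(\gamma\) is productivity growth. Written this way, causality seems to run one way, from expected prices to wages to prices. But \(\Delta p_t^e\) itself depends on how agents expect the whole system, including monetary policy, to behave, so wages, prices, and expectations are jointly determined, and no single equation can be perturbed while the others stand still.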

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture, whose principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment is a reliable basis for conducting macroeconomic policy, “The Pretence of Knowledge.” Reiterating an argument he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek informally anticipated the argument that Robert Lucas famously repackaged two years later in his critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of disorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie model of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that every decision by every agent in the model be individually rational.

In practice, implementing Lucasian methodological principles required that in any macroeconomic model all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, all individual decisions being conditional on the decisions of other agents. The optimization problem can be solved only simultaneously for all agents, not for each agent in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.
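A minimal sketch of the canonical representative-agent problem (the standard textbook formulation, not any particular Lucasian model) shows how drastic the simplification is. The consumption and production decisions of millions of heterogeneous agents collapse into a single intertemporal optimization:

\[ \max_{\{c_t,\, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = z_t f(k_t) + (1-\delta) k_t \]

where \(z_t\) is an aggregate productivity shock, \(k_t\) the capital stock, and \(\beta\) the discount factor. Whatever this single agent chooses is, by construction, an equilibrium; the coordination problem that an actual decentralized economy must somehow solve has simply been assumed away.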

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was assumed, based on faulty reasoning by Walras, Fisher and Pareto, who simply counted equations and unknowns. A rigorous proof of existence was provided only by Abraham Wald in 1936 and subsequently, in more general form, by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all markets, and all prices except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation than the Walrasian approach of a process by which a set of equilibrium prices for an entire economy is, or could be, reached.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model, on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series generated by these models are derived by attributing all observed variations in national income to unexplained shocks in productivity, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of them.

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1960 paper “Towards a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility is somehow a guarantee that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was methodologically necessary and ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.
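The contrast can be stated compactly (a schematic rendering, not Lucas’s own notation). Rational expectations in its original sense requires only that an agent’s subjective expectation not be systematically wrong given that agent’s own information:

\[ p_t^e = E[\,p_t \mid I_{t-1}\,] \]

where \(I_{t-1}\) is whatever information the agent happens to possess. The Lucasian strengthening requires that all agents share the same information set and the same correct model of the economy, so that every agent’s expectation coincides with the mathematical expectation generated by the equilibrium of the model itself.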

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical endeavor. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications to both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying the theories. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives with which Lucas and his followers have justified their pretensions to scientific prestige and authority, and with which they have compelled compliance, only belie those pretensions.

The Walras-Marshall Divide in Neoclassical Theory, Part II

In my previous post, which itself followed up an earlier post “General Equilibrium, Partial Equilibrium and Costs,” I laid out the serious difficulties with neoclassical theory in either its Walrasian or Marshallian versions: its exclusive focus on equilibrium states with no plausible explanation of any economic process that leads from disequilibrium to equilibrium.

The Walrasian approach treats general equilibrium as the primary equilibrium concept, because no equilibrium solution in a single market can be isolated from the equilibrium solutions for all other markets. Marshall understood that no single market could be in isolated equilibrium independent of all other markets, but the practical difficulty of framing an analysis of the simultaneous equilibration of all markets made focusing on general equilibrium unappealing to Marshall, who wanted economic analysis to be relevant to the concerns of the public, i.e., policy makers and men of affairs whom he regarded as his primary audience.

Nevertheless, in doing partial-equilibrium analysis, Marshall conceded that it had to be embedded within a general-equilibrium context, so he was careful to specify the ceteris-paribus conditions under which partial-equilibrium analysis could be undertaken. In particular, any market under analysis had to be sufficiently small, or the disturbance to which that market was subject had to be sufficiently small, for the repercussions of the disturbance in that market to have only minimal effect on other markets, or, if substantial, those effects had to be concentrated on a specific market (e.g., the market for a substitute, or complementary, good).

By focusing on equilibrium in a single market, Marshall believed he was making the analysis of equilibrium more tractable than the Walrasian alternative of focusing on the analysis of simultaneous equilibrium in all markets. Walras chose to make his approach to general equilibrium, if not tractable, at least intuitive by appealing to the fiction of tatonnement conducted by an imaginary auctioneer adjusting prices in all markets in response to any inconsistencies in the plans of transactors preventing them from executing their plans at the announced prices.

But it eventually became clear, to Walras and to others, that tatonnement could not be considered a realistic representation of actual market behavior, because the tatonnement fiction disallows trading at disequilibrium prices by pausing all transactions while a complete set of equilibrium prices for all desired transactions is sought by a process of trial and error. Not only is all economic activity and the passage of time suspended during the tatonnement process, there is not even a price-adjustment algorithm that can be relied on to find a complete set of equilibrium prices in a finite number of iterations.

Despite its seeming realism, the Marshallian approach, piecemeal market-by-market equilibration of each distinct market, is no more tenable theoretically than tatonnement, the partial-equilibrium method being premised on a ceteris-paribus assumption in which all prices and all other endogenous variables determined in markets other than the one under analysis are held constant. That assumption can be maintained only on the condition that all markets are in equilibrium. So the implicit assumption of partial-equilibrium analysis is no less theoretically extreme than Walras’s tatonnement fiction.

In my previous post, I quoted Michel De Vroey’s dismissal of Keynes’s rationale for the existence of involuntary unemployment, a violation, in De Vroey’s estimation, of Marshallian partial-equilibrium premises. Let me quote De Vroey again.

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

A History of Macroeconomics from Keynes to Lucas and Beyond, pp. 22-23

My interpretation of De Vroey’s argument is that the strict Marshallian viewpoint requires that firms correctly anticipate the wages that they will have to pay in making their hiring and production decisions, while presumably also correctly anticipating the future demand for their products. I am unable to make sense of this argument unless it means that firms (and why should firm owners or managers be the only agents endowed with perfect or correct foresight?) correctly foresee the prices of the products that they sell and of the inputs that they purchase or hire. In other words, the strict Marshallian viewpoint invoked by De Vroey assumes that each transactor foresees, without the intervention of a timeless tatonnement process guided by a fictional auctioneer, the equilibrium price vector. When the strict Marshallian viewpoint is adopted, everything is indeed simple: every transactor is a Walrasian auctioneer.

My interpretation of Keynes – and perhaps I’m just reading my own criticism of partial-equilibrium analysis into Keynes – is that he understood that the aggregate labor market can’t be analyzed in a partial-equilibrium setting, because Marshall’s ceteris-paribus proviso can’t be maintained for a market that accounts for roughly half the earnings of the economy. When conditions change in the labor market, everything else also changes. So the equilibrium conditions of the labor market must be governed by aggregate equilibrium conditions that can’t be captured in, or accounted for by, a Marshallian partial-equilibrium framework. Because something other than supply and demand in the labor market determines the equilibrium, what happens in the labor market can’t, by itself, restore an equilibrium.

That, I think, was Keynes’s intuition. But while identifying a serious defect in the Marshallian viewpoint, that intuition did not provide an adequate theory of adjustment. The inadequacy of Keynes’s critique, however, doesn’t rehabilitate the Marshallian viewpoint, certainly not in the form in which De Vroey represents it.

But there’s a deeper problem with the Marshallian viewpoint than just the interdependence of all markets. Although Marshall accepted marginal-utility theory in principle and used it to explain consumer demand, he tried to limit its application to demand while retaining the classical theory of the cost of production as a coordinate factor explaining the relative prices of goods and services. Marginal utility determines demand while cost determines supply, so that the interaction of supply and demand (cost and utility) jointly determines price, just as the two blades of a pair of scissors jointly cut a piece of cloth or paper.

This view of the role of cost could be maintained only in the context of the typical Marshallian partial-equilibrium exercise, in which all prices, including input prices, except the price of a single output are held fixed at their general-equilibrium values. But the equilibrium prices of inputs are not determined independently of the value of the outputs they produce; their equilibrium market values are derived from the value of whatever outputs they are used to produce.

This was a point that Marshall, desiring to minimize the extent to which the Marginal Revolution overturned the classical theory of value, either failed to grasp or obscured: that prices and costs are simultaneously determined. By focusing on partial-equilibrium analysis, in which input prices are treated as exogenous variables rather than, as in general-equilibrium analysis, endogenously determined variables, Marshall was able to argue as if the classical theory, according to which the cost incurred to produce something determines its value or market price, had not been overturned.
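The simultaneity can be seen in the standard marginal-productivity condition for input pricing (used here only as an illustration): in competitive equilibrium, the price \(w_j\) of input \(j\) satisfies

\[ w_j = p \cdot \frac{\partial q}{\partial x_j} \]

where \(p\) is the output price and \(\partial q/\partial x_j\) the input’s marginal product. The “cost” entering a product’s supply price thus already embodies the value of output, whether of the same product or of the alternative products the input might have produced, so cost cannot serve as a determinant of price independent of the value of output.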

The absolute dependence of input prices on the value of the outputs that they are being used to produce was grasped more clearly by Carl Menger than by Walras and certainly more clearly than by Marshall. What’s more, unlike either Walras or Marshall, Menger explicitly recognized the time lapse between the purchasing and hiring of inputs by a firm and the sale of the final output, inputs having been purchased or hired in expectation of the future sale of the output. But expected future sales are at prices anticipated, but not known, in advance, making the valuation of inputs equally conjectural and forcing producers to make commitments without knowing either their costs or their revenues before undertaking those commitments.

It is precisely this contingent relationship between the expectation of future sales at unknown, but anticipated, prices and the valuations that firms attach to the inputs they purchase or hire that provides an alternative to the problematic Marshallian and Walrasian accounts of how equilibrium market prices are actually reached.

The critical role of expected future prices in determining equilibrium prices was missing from both the Marshallian and the Walrasian theories of price determination. In the Walrasian theory, price determination was attributed to a fictional tatonnement process that Walras originally thought might serve as a kind of oversimplified and idealized version of actual market behavior. But Walras seems eventually to have recognized and acknowledged how far removed from reality his tatonnement invention actually was.

The seemingly more realistic Marshallian account of price determination avoided the unrealism of the Walrasian auctioneer, but only by attributing to the transactors powers of foreknowledge equal to, if not greater than, those that Walras had attributed to his auctioneer. Only Menger, who realistically avoided attributing extraordinary knowledge either to transactors or to an imaginary auctioneer, instead attributing to transactors only an imperfect and fallible ability to anticipate future prices, provided a realistic account, or at least a conceptual approach toward a realistic account, of how prices are actually formed.

In a future post, I will try to spell out in greater detail my version of a Mengerian account of price formation and what this account might tell us about the process by which a set of equilibrium prices might be realized.

The Hawley-Smoot Tariff and the Great Depression

The role of the Hawley-Smoot Tariff (aka Smoot-Hawley Tariff) in causing the Great Depression has been an ongoing subject of controversy for close to a century. Ron Batchelder and I wrote a paper (“Debt, Deflation and the Great Depression”), published in this volume (Money and Banking: The American Experience), that offered an explanation of the mechanism by which the tariff contributed to the Great Depression. That paper was written before, and inspired, another paper (“Pre-Keynesian Theories of the Great Depression: What Ever Happened to Hawtrey and Cassel?”). I am now revising the first paper for republication, and here is the new version of the relevant section discussing the Hawley-Smoot Tariff.

Monetary disorder was not the only legacy of World War I. The war also left a huge burden of financial obligations in its wake. The European allies had borrowed vast sums from the United States to finance their war efforts, and the Treaty of Versailles imposed on Germany the obligation to pay heavy reparations to the allies, particularly to France.

We need not discuss the controversial question whether the burden imposed on Germany was too great to have been discharged. The relevant question for our purposes is by what means the reparations and war debts could be paid, or, at least, carried forward without causing a default on the obligations. To simplify the discussion, we concentrate on the relationship between the U.S. and Germany, because many of the other obligations of the allies to the U.S. were offset by those of Germany to the allies.[1]

The debt to the U.S. could be extinguished either by a net payment in goods reflected in a German balance-of-trade surplus and a U.S. balance-of-trade deficit, or by a transfer of gold from Germany to the U.S. Stretching out the debt would have required the U.S., in effect, to lend Germany the funds required to service its obligations.

For most of the 1920s, the U.S. did in fact lend heavily to Germany, thereby lending Germany the funds to meet its financial obligations to the U.S. (and its European creditors). U.S. lending was not explicitly for that purpose, but on the consolidated national balance sheets, U.S. lending offset German financial obligations, obviating any real transfer.
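A stylized numerical illustration (the figures are invented for clarity, not historical magnitudes) may help. Suppose Germany owed $500 million annually in debt service to the U.S. So long as U.S. investors lent Germany $500 million a year, the consolidated accounts balanced with no movement of goods or gold:

\[ \underbrace{\text{debt service}}_{\$500\text{m}} = \underbrace{\text{new U.S. lending}}_{\$500\text{m}} + \underbrace{\text{German trade surplus}}_{0} + \underbrace{\text{gold exports}}_{0} \]

Once new lending fell toward zero, the identity required the $500 million to be found either through a trade surplus of that size, entailing a painful reallocation of German resources toward tradable goods, or through an outflow of gold.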

Thus, to avoid a transfer, in goods or specie, from Germany to the U.S., continued U.S. lending to Germany was necessary. But the sharp tightening of monetary policy by the Federal Reserve in 1928 raised domestic interest rates to near record levels and curtailed lending abroad, as foreign borrowers were discouraged from seeking funds in U.S. capital markets. Avoiding an immediate transfer from Germany to the U.S. was no longer possible except by default. To effect the necessary transfer in goods, Germany would have been required to shift resources from its non-tradable-goods sector to its tradable-goods sector, which would require reducing spending on, and the relative prices of, non-tradable goods. Thus, Germany began to slide into a recession in 1928.

In 1929 the United States began making the transfer even more difficult when the newly installed Hoover administration reaffirmed the Republican campaign commitment to raising U.S. tariffs, thereby imposing a tax on the goods transfer through which Germany could discharge its obligations. Although the bill to increase tariffs that became the infamous Hawley-Smoot Act was not passed until 1930, the commitment to raise tariffs made it increasingly unlikely that the U.S. would allow the debts owed it to be discharged by a transfer of goods. The only other means by which Germany could discharge its obligations was a transfer of gold. Anticipating that its obligations to the U.S. could be discharged only by transferring gold, Germany took steps to increase its gold holdings to be able to meet its debt obligations. The increased German demand for gold was reflected in a defensive tightening of monetary policy to raise domestic interest rates to reduce spending and to induce an inflow of gold to Germany.

The connection between Germany’s debt obligations and its demand for gold sheds light on the deflationary macroeconomic consequences of the Hawley-Smoot tariff. Given the huge debts owed to the United States, the tariff imposed a deflationary monetary policy on all U.S. debtors as they attempted to accumulate sufficient gold to be able to service their debt obligations to the U.S. But, under the gold standard, the United States could not shield itself from the deflationary effects that its trade policy was imposing on its debtors.[2]

The U.S. could have counteracted these macroeconomic pressures by a sufficiently expansive monetary policy, thereby satisfying the demand of other countries for gold. Monetary expansion would have continued, by different means, the former policy of lending to debtors, enabling them to extend their obligations. But, preoccupied with, or distracted by, the stock-market boom, U.S. monetary authorities were oblivious to the impossible alternatives that were being forced on U.S. debtors by a combination of tight U.S. monetary policy and a protectionist trade policy.

As the prospects that protectionist legislation would pass steadily improved even as tight U.S. monetary policy was being maintained, deflationary signs became increasingly clear and alarming. The panic of October 1929, in our view, was not, as much Great Depression historiography describes it, the breaking of a speculative bubble, but a correct realization that a toxic confluence of monetary and trade policies was leading the world over a deflationary precipice.

Once the deflation took hold, the nature of the gold standard, with its fixed price of gold, was such that gold would likely appreciate against weak currencies that were likely to be formally devalued, or allowed to float, relative to gold. A vicious cycle of increasing speculative demand for gold in anticipation of currency devaluation further intensified the deflationary pressures (Hamilton, 1988). Moreover, successive devaluations by one country at a time increased the deflationary pressure on the remaining gold-standard countries. A uniform all-around devaluation might have had some chance of quickly arresting the deflationary process, but piecemeal devaluation could only prolong the deflationary pressure on nations that remained on the gold standard.

FOOTNOTES

[1] The United States, as a matter of law, always resisted such a comparison, contending that the war debts were commercial obligations in no way comparable to the politically imposed reparations. However, as a practical matter, there was obviously a strict correspondence between the two sets of obligations. The total size of German obligations was never precisely determined. However, those obligations were certainly several times the size of the war debts owed the United States. Focusing on the U.S.-German relationship is therefore simply a heuristic device.

[2] Viewed from a different perspective, the tariff aimed at transferring wealth from the foreign debtors to the U.S. government by taxing debt payments on debt already fixed in nominal terms. Moreover, deflation from whatever source increased the real value of the fixed nominal debts owed the U.S.

The Real-Bills Doctrine, the Lender of Last Resort, and the Scope of Banking

Here is another section from my work in progress on the Smithian and Humean traditions in monetary economics. The discussion starts with a comparison of the negative view David Hume took toward banks and the positive view taken by Adam Smith, a comparison also discussed in the previous post on the price-specie-flow mechanism. This section discusses how Smith, despite viewing banks positively, understood that banks can be a source of disturbances as well as of efficiencies, how he addressed that problem, and how his followers who shared his positive view of banks addressed it. Comments and feedback are welcome and greatly appreciated.

Hume and Smith had very different views about fractional-reserve banking and its capacity to provide the public with the desired quantity of money (banknotes and deposits) and promote international adjustment. The cash created by banks consists of liabilities on themselves that they exchange for liabilities on the public. Liabilities on the public accepted by banks become their assets, generating revenue streams with which banks cover their outlays including obligations to creditors and stockholders.

The previous post focused on the liability side of bank balance sheets, and whether there are economic forces that limit the size of those balance sheets, implying a point of equilibrium bank expansion. Believing that banks have an unlimited incentive to issue liabilities whose face value exceeds their cost of production, Hume considered banks dangerous and inflationary. Smith disagreed, arguing that, although bank money is a less costly alternative to the full-bodied money preferred by Hume, banks don’t create liabilities limitlessly, because, unless those liabilities generate corresponding revenue streams, banks will be unable to redeem them on demand, as their creditors may require. To enhance the attractiveness of those liabilities and to increase the demand to hold them, competitive banks promise to convert them, at a stipulated rate, into an asset whose value they do not control. Under those conditions, banks have neither the incentive nor the capacity to cause inflation.

I turn now to a different topic: whether Smith’s rejection of the idea that banks are systematically biased toward overissuing liabilities implies that banks require no external control or intervention. I begin by briefly referring to Smith’s support of the real-bills doctrine and then extend that discussion to two other issues: the lender of last resort and the scope of banking.

A         Real-Bills Doctrine

I have argued elsewhere that, besides sketching the outlines of Fullarton’s argument for the Law of Reflux, Adam Smith recommended that banks observe a form of the real-bills doctrine, namely that banks issue sight liabilities only in exchange for real commercial bills of short (usually 90-day) duration. Increases in the demand for money cause bank balance sheets to expand; decreases cause them to contract. Unlike Mints (1945), who identified the Law of Reflux with the real-bills doctrine, I suggested that Smith viewed the real-bills doctrine as a pragmatic policy to facilitate contractions in the size of bank balance sheets as required by the reflux of their liabilities. With the discrepancy between the duration of liabilities and assets limited by issuing sight liabilities only in exchange for short-term bills, bank balance sheets would contract automatically, thereby obviating, at least in part, the liquidation of longer-term assets at depressed prices.
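A toy simulation may clarify the runoff mechanism being attributed to Smith here (all numbers are hypothetical). A bank holding an evenly staggered book of 90-day bills receives roughly 1/90 of its assets in cash each day as bills mature, so a sustained reflux of its sight liabilities can be met out of maturing bills, and the balance sheet contracts without forced sales of longer-term assets:

```python
# Toy runoff model (hypothetical numbers): a bank funds a ladder of 90-day
# bills with sight liabilities. When holders redeem liabilities (reflux),
# the bank meets redemptions from maturing bills instead of selling assets.
BILL_TENOR = 90          # days to maturity of each bill
assets = 9000.0          # evenly staggered bill portfolio
liabilities = 9000.0     # sight liabilities outstanding
daily_reflux = 60.0      # redemptions per day during the contraction

for _ in range(30):
    maturing = assets / BILL_TENOR        # cash thrown off by maturing bills
    redemptions = min(daily_reflux, liabilities)
    # Reinvest only what is not needed for redemptions; the rest runs off.
    reinvested = max(maturing - redemptions, 0.0)
    assets -= (maturing - reinvested)          # balance sheet shrinks...
    liabilities -= min(redemptions, maturing)  # ...matching the reflux

print(f"after 30 days: assets={assets:.0f}, liabilities={liabilities:.0f}")
# As long as daily redemptions stay below the maturing cash flow (here
# 100/day at the start), the contraction proceeds without fire sales.
```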

On this reading, Smith recognized that banking policy ought to take account of the composition of bank balance sheets, in particular, the sort of assets that banks accept as backing for the sight liabilities that they issue. I would also emphasize that on this interpretation, Smith did not believe, as did many later advocates of the doctrine, that lending on the security of real bills is sufficient to prevent the price level from changing. Even if banks have no systematic incentive to overissue their liabilities, unless those liabilities are made convertible into an asset whose value is determined independently of the banks, the value of their liabilities is undetermined. Convertibility is how banks anchor the value of their liabilities, thereby increasing the attractiveness of those liabilities to the public and the willingness of the public to accept and hold them.

But Smith’s support for the real-bills doctrine indicates that, while understanding the equilibrating tendencies of competition on bank operations, he also recognized the inherent instability of banking caused by fluctuations in the value and liquidity of their assets. Smith’s support for the real-bills doctrine addressed one type of instability: the maturity mismatch between banks’ assets and liabilities. But there are other sources of instability, which may require further institutional or policy measures beyond the general laws of property and contract whose application and enforcement, in Smith’s view, generally sufficed for the self-interested conduct of private firms to lead to socially benign outcomes.

In the remainder of this section, I consider two other methods of addressing the vulnerability of bank assets to sudden losses of value: (1) the creation or empowerment of a lender of last resort capable of lending to illiquid, but solvent, banks possessing good security (valuable assets) as collateral against which to borrow, and (2) limits beyond the real-bills doctrine over the permissible activities undertaken by commercial banks.

B         Lender of Last Resort

Although the real-bills doctrine limits the exposure of bank balance sheets to adverse shocks to the value of long-term assets, even banks whose liabilities were issued in exchange for short-term real bills of exchange may be unable to meet all demands for redemption in periods of extreme financial distress, when debtors cannot sell their products at the prices they expected and cannot meet their own obligations to their creditors. If called upon to redeem their liabilities, banks may be faced with a choice between depleting their own cash reserves, when they are most needed, or liquidating other assets at substantial, if not catastrophic, losses.

Smith’s version of the real-bills doctrine addressed one aspect of balance-sheet risk, but the underlying problem is deeper and more complicated than the liquidity issue that concerned Smith. The assets accepted by banks in exchange for their liabilities are typically not easily marketable, so if those assets must be shed quickly to satisfy demands for payment, banks’ solvency may be jeopardized by consequent capital losses. Limiting portfolios to short-term assets limits exposure to such losses, but only when the disturbances requiring asset liquidation affect only a relatively small number of banks. As the number of affected banks increases, their ability to counter the disturbance is impaired, as the interbank market for credit starts to freeze up or break down entirely, leaving them unable to offer short-term relief to, or receive it from, other momentarily illiquid banks. It is then that emergency lending by a lender of last resort to illiquid, but possibly still solvent, banks is necessary.

What causes a cluster of expectational errors by banks, which exchange their liabilities for assets supplied by their customers, only to find those assets worth less than they were when accepted? Are financial crises that result in, or are caused by, asset write-downs by banks caused by random clusters of errors, or are there systematic causes of such errors? Does the danger lie in the magnitude of the errors or in the transmission mechanism?

Here, too, the Humean and Smithian traditions seem to be at odds, offering different answers to problems, or, if not answers, at least different approaches to problems. Focusing on the liability side of bank balance sheets, the Humean tradition emphasizes the expansion of bank lending and the consequent creation of banknotes or deposits as the main impulse to macroeconomic fluctuations, a boom-bust or credit cycle triggered by banks’ lending to finance either business investment or consumer spending. Despite their theoretical differences, both Austrian business-cycle theory and Friedmanite Monetarism share a common intellectual ancestry, traceable by way of the Currency School to Hume, identifying the source of business-cycle fluctuations in excessive growth in the quantity of money.

The eclectic Smithian tradition accommodates both monetary and non-monetary business-cycle theories, but balance-sheet effects on banks are more naturally accommodated within the Smithian tradition than the Humean tradition with its focus on the liabilities not the assets of banks. At any rate, more research is necessary before we can decide whether serious financial disturbances result from big expectational errors or from contagion effects.

The Great Depression resulted from a big error. After the steep deflation and depression of 1920-22, followed by a gradual restoration of the gold standard, fears of further deflation were dispelled, and steady economic expansion, especially in the United States, resulted. Suddenly in 1929, as France and other countries rejoined the gold standard, the fears voiced by Hawtrey and Cassel that restoring the gold standard could have serious deflationary consequences appeared increasingly likely to be realized. Real signs of deflation began to appear in the summer of 1929, and in the fall the stock market collapsed. Rather than use monetary policy to counter incipient deflation, policy makers and many economists argued that deflation was part of the solution, not the problem. And the Depression came.

It is generally agreed that the 2008 financial crisis that triggered the Little Depression (aka Great Recession) was largely the result of a housing bubble fueled by unsound mortgage lending by banks and questionable underwriting practices in the packaging and marketing of mortgage-backed securities. However, although the housing bubble seems to have burst in the spring of 2007, the crisis did not start until September 2008.

It is at least possible, as I have argued (Glasner 2018), that, despite the financial fragility caused by the housing bubble and the unsound lending practices that fueled it, the crisis could have been avoided but for a reflexive policy tightening by the Federal Reserve starting in 2007 that caused a recession beginning in December 2007 and gradually worsening through the summer of 2008. Rather than ease monetary policy as the recession deepened, the Fed, distracted by rising headline inflation owing to rising oil prices that summer, would not reduce its interest-rate target further after March 2008. If my interpretation is correct, the financial crisis of 2008 and the subsequent Little Depression (aka Great Recession) were as much caused by bad monetary policy as by unsound lending practices and the mistaken expectations of lenders.

It is when all agents are cash-constrained that a lender of last resort becomes necessary to avoid a systemic breakdown, because only it can provide the liquidity that the usual suppliers of liquidity cannot provide and are instead themselves demanding. In 2008, the Fed was unwilling to satisfy demands for liquidity until the crisis had deteriorated to the point of a worldwide collapse. In the nineteenth century, Thornton and Fullarton understood that the Bank of England was uniquely able to provide liquidity in such circumstances, recommending that it lend freely in periods of financial stress.

That policy was not viewed favorably either by Humean supporters of the Currency Principle, opposed to all forms of fractional-reserve banking, or by Smithian supporters of free banking who deplored the privileged central-banking position granted to the Bank of England. Although the Fed in 2008 acknowledged that it was both a national and international lender of last resort, it was tragically slow to take the necessary actions to end the crisis after allowing it to spiral nearly out of control.

While cogent arguments have been made that a free-banking alternative to the lender-of-last-resort services of the Bank of England might have been possible in the nineteenth century,[2] even a free-banking system would require a mechanism for handling periods of financial stress. Free-banking supporters argue that bank clearinghouses emerged spontaneously in the absence of central banks and could provide the lender-of-last-resort services provided by central banks. But, insofar as bank clearinghouses would take on the lender-of-last-resort function, which involves some intervention in, and supervision of, bank activities by either the clearinghouse or the central bank, the same anticompetitive or cartelistic objections to the provision of lender-of-last-resort services by central banks would also apply to their provision by clearinghouses. So the tension between libertarian, free-market principles and lender-of-last-resort services would not necessarily be eliminated if bank clearinghouses instead of central banks provided those services.

This is an appropriate place to consider Walter Bagehot’s contribution to the lender-of-last-resort doctrine. Building on the work of Thornton and Fullarton, Bagehot formulated the classic principle that, during times of financial distress, the Bank of England should lend freely at a penalty rate to banks on good security. Bagehot, himself, admitted to a certain unease in offering this advice, opining that it was regrettable that the Bank of England had achieved its pre-eminent position in the British banking system, thereby preventing a decentralized banking system, along the lines of the Scottish free-banking system, from evolving. But given the historical development of British banking, including the 1844 Bank Charter Act, Bagehot, an eminently practical man, had no desire to recommend radical reform, only to help the existing system operate as smoothly as it could be made to operate.

But the soundness of Bagehot’s advice to lend freely at a penalty rate is dubious. In a financial crisis, the market rate of interest primarily reflects a liquidity premium, not an expected real return on capital, the latter typically being depressed in a crisis. Charging a penalty rate to distressed borrowers in a crisis only raises the liquidity premium. Monetary policy ought to aim to reduce, not to increase, that premium. So Bagehot’s advice, derived from a misplaced sense of what is practical and prudent rather than from sound analysis, was itself far from sound.
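Put schematically (my gloss, not Bagehot’s or Hawtrey’s formulation), the market rate in a crisis can be decomposed as

\[ r_{\text{market}} = r^e + \lambda \]

where \(r^e\) is the expected real return on capital, typically depressed in a crisis, and \(\lambda\) is the liquidity premium, which spikes. Lending only at a rate above \(r_{\text{market}}\) adds to \(\lambda\), the very component that the lender of last resort should be compressing.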

Under the gold standard, or under any fixed-exchange-rate regime, a single country has an incentive to raise interest rates above the rates of other countries to prevent a gold outflow or attract an inflow. Under these circumstances, a failure of international cooperation can lead to competitive rate increases as monetary authorities scramble to maintain or increase their gold reserves. In testimony to the Macmillan Committee in 1930, Ralph Hawtrey masterfully described the obligation of a central bank in a crisis. Here is his exchange with the committee’s chairman, Hugh Macmillan:

MACMILLAN: Suppose . . . without restricting credit . . . that gold had gone out to a very considerable extent, would that not have had very serious consequences on the international position of London?

HAWTREY: I do not think the credit of London depends on any particular figure of gold holding. . . . The harm began to be done in March and April of 1925 [when] the fall in American prices started. There was no reason why the Bank of England should have taken any action at that time so far as the question of loss of gold is concerned. . . . I believed at the time and I still think that the right treatment would have been to restore the gold standard de facto before it was restored de jure. That is what all the other countries have done. . . . I would have suggested that we should have adopted the practice of always selling gold to a sufficient extent to prevent the exchange depreciating. There would have been no legal obligation to continue convertibility into gold . . . If that course had been adopted, the Bank of England would never have been anxious about the gold holding, they would have been able to see it ebb away to quite a considerable extent with perfect equanimity, . . . and might have continued with a 4 percent Bank Rate.

MACMILLAN: . . . the course you suggest would not have been consistent with what one may call orthodox Central Banking, would it?

HAWTREY: I do not know what orthodox Central Banking is.

MACMILLAN: . . . when gold ebbs away you must restrict credit as a general principle?

HAWTREY: . . . that kind of orthodoxy is like conventions at bridge; you have to break them when the circumstances call for it. I think that a gold reserve exists to be used. . . . Perhaps once in a century the time comes when you can use your gold reserve for the governing purpose, provided you have the courage to use practically all of it.

Hawtrey here was echoing Fullarton’s insight that there is no rigid relationship between the gold reserves held by the Bank of England and the total quantity of sight liabilities created by the British banking system. Rather, he argued, the Bank should hold an ample reserve sufficient to satisfy the demand for gold in a crisis when a sudden and temporary demand for gold had to be accommodated. That was Hawtrey’s advice, but not Bagehot’s, whose concern was about banks’ moral hazard and imprudent lending in the expectation of being rescued in a crisis by the Bank of England. Indeed, moral hazard is a problem, but in a crisis it is a secondary problem, when, as Hawtrey explained, alleviating the crisis, not discouraging moral hazard, must be the primary concern of the lender of last resort.

C         Scope of Banking

Inclined to find remedies for financial distress in structural reforms limiting the types of assets banks accept in exchange for their sight liabilities, Smith did not recommend a lender of last resort.[3] Another method of reducing risk, perhaps more in tune with the Smithian real-bills doctrine than a lender of last resort, is to restrict the activities of banks that issue banknotes and deposits.

In Anglophone countries, commercial banking generally evolved as separate and distinct from investment banking. It was only during the Great Depression and the resulting wave of bank failures that the combination of commercial and investment banking was legally prohibited by the Glass-Steagall Act, eventually repealed in 1999. On the Continent, where commercial banking penetrated less deeply into the fabric of economic and commercial life than in Anglophone countries, commercial banking developed more or less along with investment banking in what are called universal banks.

Whether the earlier, and more widespread, adoption of commercial banking in Anglophone countries than on the Continent advanced the idea that no banking institution should provide both commercial- and investment-banking services is not a question about which I offer a conjecture, but it seems a topic worthy of study. The Glass-Steagall Act, which enforced that separation after it had been breached early in the twentieth century, a breach thought by some to have contributed to US bank failures in the Great Depression, was based on a presumption against combining commercial and investment banking in a single institution. But even apart from the concerns that led to the enactment of Glass-Steagall, limiting the exposure of commercial banks, which supply most of the cash held by the public, to the balance-sheet risk associated with investment-banking activities seems reasonable. Moreover, given government deposit insurance, adopted after the Great Depression, and banks’ access to the central bank’s discount window, combining commercial and investment banking may augment the moral hazard that deposit insurance and a lender of last resort induce, offsetting any potential economies of scope from the combination.

Although legal barriers to the combination of commercial and investment banking have long been eliminated, proposals for “narrow banking” that would restrict the activities undertaken by commercial banks continue to be made. Two different interpretations of narrow banking – one Smithian and one Humean – are possible.

The Humean concern about banking was that banks are inherently disposed to overissue their liabilities. The Humean response to that concern has been to propose 100-percent reserve banking, a comprehensive extension of the 100-percent marginal reserve requirement on the issue of banknotes imposed by the Bank Charter Act. Such measures could succeed, as some supporters (Simons 1936) came to realize, only if accompanied by a radical change in the financial practices and arrangements on which all debt contracts are based. It is difficult to imagine that the necessary restructuring of economic activity would ever be implemented or tolerated.

The Humean concern was dismissed by the Smithian tradition, which recognized that banks, even if unconstrained by reserve requirements, have no incentive to issue liabilities without limit. The Smithian concern was whether banks could cope with balance-sheet risks after unexpected losses in the value of their assets. Although narrow-banking proposals are a legitimate and possibly worthwhile response to that concern, the acceptance by central banks of responsibility to act as lenders of last resort and the adoption of widespread government deposit insurance to dampen contagion effects have taken the question of narrowing or restricting the functions of money-creating banks off the table. Whether relying solely on deposit insurance and a lender of last resort is the best strategy for addressing the systemic risks associated with banks’ creation of money is a question that still deserves thoughtful attention.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist, famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.
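
To make the imputation logic concrete, here is a minimal numerical sketch, mine rather than Menger’s, with purely hypothetical numbers: the value of a factor’s services is the value of the marginal output imputed to them, and the cost of using the factor in one line of production is the value of the output foregone in the next best use.

```python
# A toy illustration of Mengerian imputation: the value of a factor's
# services is derived from the value of the incremental output attributable
# to them, and the cost of using the factor in one line of production is
# the value of the output foregone in its best alternative use.
# All numbers below are hypothetical.

marginal_output = {"flour": 10.0, "oatmeal": 8.0}  # extra units per machine-hour
price = {"flour": 2.0, "oatmeal": 2.2}             # consumers' marginal valuations

# Imputed value of the machine-hour in each use = marginal value product
imputed_value = {use: marginal_output[use] * price[use] for use in marginal_output}
print(imputed_value)                               # {'flour': 20.0, 'oatmeal': 17.6}

best_use = max(imputed_value, key=imputed_value.get)
foregone = max(v for use, v in imputed_value.items() if use != best_use)

# The opportunity cost of the machine-hour in its best use is the value of
# the next most highly valued alternative, derived from, not independent of,
# the value of output.
print(f"best use: {best_use}; opportunity cost: {foregone}")  # flour; 17.6
```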

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [Menger’s] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, [were brothers-in-law. Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple, von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927 and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899), who was in turn a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up in either the US or Britain after World War II. Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position on the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts in the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU; Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek’s interests, however, shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests neither discredits nor invalidates the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because the Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions that were most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, and with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works issued by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point at which extreme libertarians and white nationalists congregate, get acquainted, and strategize.

Isolation and marginalization tend to cause a subspecies to degenerate toward extinction, to blend in with the members of the larger species and thereby lose its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct subspecies. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves from the mainstream by analyzing what they call market processes, rather than equilibrium states, and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream’s misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable (and trivial) conclusion entailed by them. In their view, the focus of mainstream models on equilibrium states derived from unrealistic assumptions results from a preoccupation with mathematical formalism in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism (i.e., agents make choices based on their own preference orderings) are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate, much less to communicate to a central planner. Such knowledge is mere intuition and guesswork, yet it is more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to aim at keeping the flow of total spending and income constant in nominal terms, whereas his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of his research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32; it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not was unfortunate not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It is precisely the Austrian attentiveness to the complexity of economic activity (the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants) that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, to replace an empirically untenable tâtonnement process for determining prices, invokes an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants provide a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

Amid this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment, while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory rationalization (at least in terms of a preferred Walrasian general-equilibrium model) for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving “meaningful theorems” from the 1st- and 2nd-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s term “meaningful theorems” was a misnomer reflecting mid-20th-century operationalism. Such theorems can now be understood as empirically refutable propositions implied by theory augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”
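
A toy numerical sketch of the method (my illustration, not part of the original thread, with purely hypothetical coefficients): the calculation compares the two equilibria, but nothing in it describes the transition from one to the other.

```python
# Comparative statics in a toy PE model: linear demand q = a - b*p and
# linear supply q = c + d*p, with purely hypothetical coefficients.
# The method compares equilibria before and after a small parameter
# change; it says nothing about the adjustment path between them.

def equilibrium(a, b, c, d):
    """Price and quantity at which demand equals supply: a - b*p = c + d*p."""
    p = (a - c) / (b + d)
    return p, a - b * p

a, b, c, d = 100.0, 2.0, 10.0, 1.0        # hypothetical coefficients
p0, q0 = equilibrium(a, b, c, d)          # equilibrium before the change
p1, q1 = equilibrium(a + 3.0, b, c, d)    # after a small demand shift, CP

print(f"dp = {p1 - p0:+.3f}, dq = {q1 - q0:+.3f}")   # dp = +1.000, dq = +1.000
# The signs of dp and dq are the kind of "meaningful theorem" the method
# delivers; the transition from (p0, q0) to (p1, q1) is left undetermined.
```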

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and then, following an adjustment algorithm, announces new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.
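
Here is a minimal sketch (mine, not Walras’s, and not part of the original thread) of such a tatonnement loop for a toy two-good exchange economy with Cobb-Douglas traders; all parameters are hypothetical. In this well-behaved case the excess-demand-driven rule does converge, but, as the next tweets explain, convergence is not guaranteed in general.

```python
# A toy Walrasian auctioneer: two-good exchange economy with two
# Cobb-Douglas traders. Prices are adjusted in proportion to excess
# demand, and no trading takes place until the loop stops.

import numpy as np

endowments = np.array([[1.0, 0.0],    # trader 0 owns one unit of good 0
                       [0.0, 1.0]])   # trader 1 owns one unit of good 1
alphas = np.array([[0.6, 0.4],        # Cobb-Douglas expenditure shares
                   [0.3, 0.7]])

def excess_demand(p):
    """Aggregate excess demand at the announced price vector p."""
    wealth = endowments @ p                  # each trader's income at prices p
    demand = alphas * wealth[:, None] / p    # Cobb-Douglas demands
    return demand.sum(axis=0) - endowments.sum(axis=0)

p = np.array([0.5, 0.5])                     # auctioneer's initial announcement
for _ in range(10_000):
    z = excess_demand(p)
    if np.max(np.abs(z)) < 1e-10:            # (approximate) market clearing
        break
    p = p + 0.1 * z                          # raise price where demand exceeds supply
    p = np.maximum(p, 1e-9)                  # keep prices positive
    p = p / p.sum()                          # only relative prices matter

print("equilibrium prices:", p)              # ~ [3/7, 4/7] for these parameters
print("excess demands:", excess_demand(p))
```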

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.
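
To illustrate the kind of failure the SMD results leave open, here is a sketch (again my addition, not part of the original thread) of Scarf’s well-known three-good example, in which three consumers with fixed-proportions preferences each own one unit of a different good; the same excess-demand-driven adjustment rule used above circles the unique equilibrium instead of converging to it.

```python
# Scarf's classic example: consumer i is endowed with one unit of good i and
# wants goods i and i+1 (mod 3) in fixed 1:1 proportions, so the demand for
# good j comes from consumers j and j-1:
#   z_j = p_j/(p_j + p_{j+1}) + p_{j-1}/(p_{j-1} + p_j) - 1

import numpy as np

def excess_demand(p):
    """Aggregate excess demand in Scarf's three-good exchange economy."""
    z = np.empty(3)
    for j in range(3):
        z[j] = (p[j] / (p[j] + p[(j + 1) % 3])
                + p[(j - 1) % 3] / (p[(j - 1) % 3] + p[j]) - 1.0)
    return z

eq = np.ones(3) / 3            # the unique equilibrium price vector
p = np.array([0.5, 0.3, 0.2])  # start away from equilibrium
for t in range(50_001):
    if t % 10_000 == 0:
        print(t, "distance from equilibrium:", round(np.linalg.norm(p - eq), 4))
    p = p + 0.01 * excess_demand(p)   # same excess-demand-driven rule
    p = np.maximum(p, 1e-12)          # numerical guard at the boundary
    p = p / p.sum()                   # stay on the price simplex
# The printed distances do not shrink toward zero: the price path circles
# the equilibrium, illustrating that adjustment driven only by excess
# demands need not converge even when a unique equilibrium exists.
```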

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model of an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model, observing in 1959 that the standard theory of perfect competition, in which every agent takes prices as given, provides no account of who changes prices when markets are not in equilibrium.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it also excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.
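
One classic illustration of the point (my addition, not part of the original thread) is the cobweb model, in which suppliers commit output on the basis of an expected price; nothing in the price mechanism itself guarantees that such expectations converge on their equilibrium values:

```python
# Cobweb model: suppliers commit output on the basis of the price they
# expect (here, naively, last period's price); the market then clears
# given that committed output. Coefficients are hypothetical, with the
# supply slope d exceeding the demand slope b, so the process diverges.

a, b, c, d = 100.0, 1.0, 10.0, 1.2   # demand q = a - b*p; supply q = c + d*p_expected
p_eq = (a - c) / (b + d)             # market-clearing price, here ~40.9
p = 38.0                             # initial price (and initial expectation)
for t in range(8):
    q = c + d * p                    # output planned at the expected price
    p = (a - q) / b                  # price at which that output actually sells
    print(f"t={t}: price {p:5.1f}  (equilibrium {p_eq:.1f})")
# The oscillations widen instead of damping: price expectations formed
# this way drive prices away from, not toward, their equilibrium values.
```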

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning, with each agent endowed with the knowledge of an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transaction prices to converge notwithstanding the existence of expectational differences.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
