Archive for the 'Axel Leijonhufvud' Category

Franklin Fisher on the Disequilibrium Foundations of Economics and the Stability of General Equilibrium Redux

Last November I posted a revised section of a paper I’m now working on, an earlier version of which is posted on SSRN. I have now further revised the paper, and that section in particular, so I’m posting the current version of that section in hopes of receiving further comments, criticisms, and suggestions before I submit the paper to a journal. So I will be very grateful to all those who respond, and will try not to be too cranky in my replies.

I         Fisher’s model and the No Favorable Surprise Assumption

Unsuccessful attempts to prove, under standard neoclassical assumptions, the stability of general equilibrium led Franklin Fisher (1983, Disequilibrium Foundations of Equilibrium Economics) to suggest an alternative approach to proving stability, based on three assumptions: (1) trading occurs at disequilibrium prices (in contrast to the standard assumption that no trading takes place until a new equilibrium is found, prices being adjusted under a tatonnement process); (2) in any market in disequilibrium, unsatisfied transactors are all on one side of the market, either all unsatisfied demanders or all unsatisfied suppliers; (3) the “no favorable surprises” (NFS) assumption previously advanced by Hahn (1978, “On Non-Walrasian Equilibria”).

At the starting point of a disequilibrium process, some commodities would be in excess demand, some in excess supply, and, perhaps, some in equilibrium. Let Zi denote the excess demand for commodity i, with i ranging from 1 to n; let commodities in excess demand be numbered from 1 to k, commodities initially in equilibrium from k+1 to m, and commodities in excess supply from m+1 to n. Thus, by assumption, no agent has an excess supply of commodities numbered from 1 to k, no agent has an excess demand for commodities numbered from m+1 to n, and no agent has either an excess demand or an excess supply of commodities numbered from k+1 to m.[1]
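Stated compactly (my notational restatement of the paragraph above, not Fisher’s own formalism), the sign conventions are

```latex
\begin{aligned}
Z_i(p) &> 0, \quad i = 1, \dots, k      && \text{(excess demand)} \\
Z_i(p) &= 0, \quad i = k+1, \dots, m    && \text{(initially in equilibrium)} \\
Z_i(p) &< 0, \quad i = m+1, \dots, n    && \text{(excess supply)}
\end{aligned}
```

with assumption (2) requiring that every individual agent’s excess demand for commodity $i$ either share the sign of the aggregate $Z_i$ or be zero.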

Fisher argued that in disequilibrium, with prices, not necessarily uniform across all trades, rising in markets with excess demand, falling in markets with excess supply, and not changing in markets with zero excess demand, the sequence of adjustments would converge on an equilibrium price vector. Every agent would form plans to transact conditional on expectations of the prices at which it planned purchases or sales, either spot or forward, could be executed.[2] Because unsuccessful demanders and suppliers would respond to failed attempts to execute planned trades by raising the prices offered, or reducing the prices accepted, prices for goods and services in excess demand would rise, and prices for goods and services in excess supply would fall.

Fisher reduced this informal analysis to a formal model in which stability could be proved, at least under standard neoclassical assumptions augmented by plausible assumptions about the adjustment process. Stability of equilibrium is proved by defining some function (V) of the endogenous variables of the model (x1, . . ., xn, t) and showing that the function satisfies the Lyapounov stability conditions: V ≥ 0, dV/dt ≤ 0, and dV/dt = 0 in equilibrium. Fisher defined V as the sum of the expected utilities of households plus the expected profits of firms, all firm profits being distributed to households in equilibrium. Fisher argued that, under the NFS assumption, the expected utility of agents would decline as prices are adjusted when agents fail to execute their planned transactions, disappointed buyers raising the prices offered and disappointed sellers lowering the prices accepted. These adjustments would reduce the expected utility or profit from those transactions; in equilibrium, no further adjustment would be needed, and the Lyapounov conditions would be satisfied. The combination of increased prices for goods purchased and decreased prices for goods sold implies that, with no favorable surprises, dV/dt would be negative until an equilibrium, in which all planned transactions are executed, is reached, so that the sum of expected utility and expected profit is stabilized, confirming the stability of the disequilibrium arbitrage process.
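To see the logic of a Lyapounov argument in the simplest possible setting, consider the following toy numerical sketch. It is emphatically not Fisher’s model: the linear excess-demand functions, the adjustment speed, and the quadratic V below are invented for illustration, and this V is defined on excess demands rather than on expected utilities and profits.

```python
import numpy as np

# Toy two-good economy with linear excess-demand functions chosen so that
# own-price effects dominate (a stable, gross-substitutes-like case).
# These functions and the quadratic V are illustrative assumptions only.
A = np.array([[-2.0,  1.0],
              [ 1.0, -2.0]])      # Jacobian of excess demand w.r.t. prices
p_star = np.array([1.0, 1.0])     # equilibrium prices: Z(p_star) = 0

def Z(p):
    """Excess demand for each good; positive means excess demand."""
    return A @ (p - p_star)

def V(p):
    """Candidate Lyapounov function: V >= 0, and V = 0 only in equilibrium."""
    return float(Z(p) @ Z(p))

p = np.array([1.5, 0.4])          # initial disequilibrium prices
dt = 0.01
for step in range(2001):
    p = p + dt * Z(p)             # prices rise with excess demand, fall with excess supply
    if step % 500 == 0:
        print(f"step {step:4d}  p = {np.round(p, 4)}  V = {V(p):.6f}")
```

Because V falls monotonically toward zero along the adjustment path, the toy process is stable; the NFS assumption plays the analogous role, in Fisher’s far richer model, of ruling out events that would push V back up in mid-adjustment.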

II         Two Problems with the No Favorable Surprise Assumption

Acknowledging that the NFS assumption is ad hoc, not a deep property of rationality implied by standard neoclassical assumptions, Fisher (1983, p. 87) justified the assumption on pragmatic grounds. “It may well be true,” he wrote,

that an economy of rational agents who understand that there is disequilibrium and act on arbitrage opportunities is driven toward equilibrium, but not if these agents continually perceive new previously unanticipated opportunities for further arbitrage. The appearance of such new and unexpected opportunities will generally disturb the system until they are absorbed.

Such opportunities can be of different kinds. The most obvious sort is the appearance of unforeseen technological developments – the unanticipated development of new products or processes. There are other sorts of new opportunities as well. An unanticipated change in tastes or the development of new uses for old products is one; the discovery of new sources of raw materials another. Further, efficiency improvements in firms are not restricted to technological developments. The discovery of a more efficient mode of internal organization or of a better way of marketing can also present a new opportunity.

Because favorable surprises following the displacement of a prior equilibrium would potentially violate the Lyapounov condition that V be non-increasing, the NFS assumption allows it to be proved that arbitrage of price differences leads to convergence on a new equilibrium. It is not, of course, only favorable surprises that can cause instability, inasmuch as the Lyapounov function must be non-negative as well as non-increasing, and a sufficiently large unfavorable surprise would violate the non-negativity condition.[3]

However, acknowledging the unrealism of the NFS assumption and its conflation of exogenous surprises with those that are endogenous, Fisher (pp. 90-91) argued that proving stability under the NFS assumption is still significant, because, if stability could not be proved under the assumption of no surprises of any kind, it likely could not be proved “under the more interesting weaker assumption” of No Exogenous Favorable Surprises.

The NFS assumption suffers from two problems deeper than Fisher acknowledged. First, it reckons only with equilibrating adjustments in current prices, although trading is possible in both spot and forward markets for all goods and services, so that spot and forward prices for each commodity and service are being continuously arbitraged in his setup. Second, he does not take explicit account of interactions between markets of the sort that motivate Lipsey and Lancaster’s (1956, “The General Theory of Second Best”) general theory of the second best.

          A. Semi-complete markets

Fisher does not introduce trading in state-contingent markets, so his model might be described as semi-complete. All traders have the choice, when transacting, between a spot and a forward transaction, depending on their liquidity position, so that, when spot and forward trades are occurring in the same product or service, the ratio of those prices, reflecting the own commodity interest rate, is constrained by arbitrage to match the money interest rate. In an equilibrium, both spot and forward prices must adjust so that the arbitrage relationship between spot and forward prices is satisfied for every commodity and service in which both spot and forward trades are occurring, and so that all agents are able to execute the trades that they wish to make at the prices they expected when planning those transactions. In other words, an equilibrium requires that all agents actually trading commodities or services in which both spot and forward trades are occurring concurrently must share the same expectations of future prices. Otherwise, agents with differing expectations would have an incentive to switch from trading spot to forward or vice versa.
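The arbitrage constraint described in this paragraph can be written compactly (standard notation, my gloss rather than Fisher’s):

```latex
\frac{F_c}{S_c} \;=\; \frac{1+i}{1+r_c},
```

where $S_c$ and $F_c$ are the spot and one-period forward prices of commodity $c$, $i$ is the money rate of interest, and $r_c$ is the own rate of interest on $c$. If the ratio deviated from the right-hand side, switching between spot and forward transactions would yield a riskless profit; and for agents to be indifferent, at the margin, between transacting forward now and transacting spot later, $F_c$ must also equal the commonly expected future spot price.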

The point that I want to emphasize here is that, insofar as equilibration can be shown to occur in Fisher’s arbitrage model, it depends on the ability of agents to choose between purchasing spot or forward, thereby creating a market mechanism whereby agents’ expectations of future prices are reconciled along with the adjustment of current prices (either spot or forward) that allows agents to execute their plans to transact. Equilibrium depends not only on the adjustment of current prices to equilibrium levels for spot transactions but also on the adjustment of expectations of future spot prices to equilibrium levels. Unlike the market feedback on current prices conveyed by unsatisfied demanders and suppliers in current markets, inconsistencies in agents’ notional plans for future transactions convey no discernible feedback without a broad array of forward or futures markets in which those expectations are revealed and reconciled. Without such feedback on expectations, a plausible account of how expectations of future prices are equilibrated cannot — except under implausibly extreme assumptions — easily be articulated.[4] Nor can the existence of a temporary equilibrium of current prices in current markets, beset by agents’ inconsistent and conflicting expectations, be taken for granted under standard assumptions. And even if a temporary equilibrium exists, it cannot, under standard assumptions, be shown to be optimal (Arrow and Hahn, 1971, 136-51).

          B. Market interactions and the theory of the second best

Second, in Fisher’s account, price changes occur when transactors cannot execute their desired transactions at current prices, those price changes then creating arbitrage opportunities that induce further price changes. Fisher’s stability argument hinges on defining a Lyapounov function in which the prices of goods in excess demand rise as frustrated demanders offer increased prices and prices of goods in excess supply fall as disappointed suppliers accept reduced prices.

But the argument works only if a price adjustment in one market caused by a previous excess demand or excess supply does not simultaneously create excess demands or supplies in markets not previously in disequilibrium or further upset the imbalance between supply and demand in markets already in disequilibrium.

To understand why Fisher’s ad hoc assumptions do not guarantee that the Lyapounov function he defined will be continuously non-increasing, consider the famous Lipsey and Lancaster (1956) second-best theorem, according to which, if one optimality condition in an economic model is unsatisfied because a relevant variable is constrained, the second-best solution, rather than satisfying the other, unconstrained optimum conditions, involves revising at least some of them.
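Lipsey and Lancaster’s formal setup can be sketched as follows (a paraphrase of their Lagrangian argument, not a quotation): maximize $F(x_1, \dots, x_n)$ subject to the constraint $\Phi(x_1, \dots, x_n) = 0$. The first-best (Paretian) conditions are $F_i/F_n = \Phi_i/\Phi_n$ for all $i$. Now suppose one of those conditions is unattainable, so that the binding side constraint $F_1/F_n = k\,\Phi_1/\Phi_n$, with $k \neq 1$, is added. The first-order conditions for the constrained (second-best) optimum become

```latex
F_i \;-\; \lambda \Phi_i \;-\; \mu \,\frac{\partial}{\partial x_i}\!\left(\frac{F_1}{F_n} - k\,\frac{\Phi_1}{\Phi_n}\right) \;=\; 0,
\qquad i = 1, \dots, n,
```

so that, unless the derivatives of the side constraint happen to vanish, the ratios $F_i/F_n$ no longer equal $\Phi_i/\Phi_n$ even at the margins left unconstrained: satisfying the remaining Paretian conditions is generally not second-best optimal.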

Contrast Fisher’s statement of the No Favorable Surprise assumption with how Lipsey and Lancaster (1956, 11) described the import of their theorem.

From this theorem there follows the important negative corollary that there is no a priori way to judge as between various situations in which some of the Paretian optimum conditions are fulfilled while others are not. Specifically, it is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled. It follows, therefore, that in a situation in which there exist many constraints which prevent the fulfilment of the Paretian optimum conditions the removal of any one constraint may affect welfare or efficiency either by raising it, by lowering it, or by leaving it unchanged.

The general theorem of the second best states that if one of the Paretian optimum conditions cannot be fulfilled a second-best optimum situation is achieved only by departing from all other optimum conditions. It is important to note that in general, nothing can be said about the direction or the magnitude of the secondary departures from optimum conditions made necessary by the original non-fulfillment of one condition.

Although Lipsey and Lancaster were not referring to the adjustment process following the displacement of a prior equilibrium, their discussion implies that the stability of an adjustment process depends on the specific sequence of adjustments in that process, inasmuch as each successive price adjustment, aside from its immediate effect on the particular market in which the price adjusts, transmits feedback effects to related markets. A price adjustment in one market may increase, decrease, or leave unchanged the efficiency of other markets, and the equilibrating tendency of a price adjustment in one market may be offset by indirect disequilibrating tendencies in other markets. When a price adjustment in one market indirectly reduces efficiency in other markets, the resulting price adjustments may well trigger further indirect efficiency reductions.

Thus, in adjustment processes involving interrelated markets, a price change in one market can indeed cause a favorable surprise in one or more other markets, indirectly causing net increases in utility through feedback effects on those markets.

III        Conclusion

Consider a macroeconomic equilibrium satisfying all optimality conditions between marginal rates of substitution in production and consumption and relative prices. If that equilibrium is subjected to a macroeconomic disturbance affecting all, or most, individual markets, thereby changing all the optimality conditions corresponding to the prior equilibrium, the new equilibrium will likely entail a different set of optimality conditions. While systemic optimality requires price adjustments to satisfy all the optimality conditions, actual price adjustments occur sequentially, in piecemeal fashion, with prices changing market by market or firm by firm, as agents perceive changes in demand or cost. Those changes need not always induce equilibrating adjustments, nor is the arbitraging of price differences necessarily equilibrating when, under suboptimal conditions, prices have generally deviated from their equilibrium values.

Smithian invisible-hand theorems are of little relevance in explaining the transition to a new equilibrium following a macroeconomic disturbance, because, in this context, the invisible-hand theorem begs the relevant question by assuming that the equilibrium price vector has been found. When all markets are in disequilibrium, moving toward equilibrium in one market has repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium in that market alone restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So, unless all optimality conditions are satisfied simultaneously, there is no assurance that piecemeal adjustments will bring the system closer to an optimal, or even a second-best, state.

If my interpretation of the NFS assumption is correct, Fisher’s stability results may provide support for Leijonhufvud’s (1973, “Effective Demand Failures”) suggestion that there is a corridor of stability around an equilibrium time path within which, under normal circumstances, an economy will not be displaced too far from that path, so that an economy, unless displaced outside that corridor, will revert, more or less on its own, to its equilibrium path.[5]

Leijonhufvud attributed such resilience to the holding of buffer stocks: inventories of goods, holdings of cash, and the availability of credit lines enabling agents to operate normally despite disappointed expectations. If negative surprises persist, however, agents will be unable to add to, or draw from, inventories indefinitely, or to finance normal expenditures indefinitely by borrowing or drawing down liquid assets. Once buffer stocks are exhausted, the stabilizing properties of the economy are overwhelmed by destabilizing tendencies: income-constrained agents cut expenditures, as implied by Keynesian multiplier analysis, triggering a cumulative contraction and rendering a spontaneous recovery, without compensatory fiscal or monetary measures, impossible.

But my critique of Fisher’s NFS assumption suggests other, perhaps deeper, reasons why displacements of equilibrium may not be self-correcting: such displacements may invalidate previously held expectations, and in the absence of a dense array of forward and futures markets, there is likely no market mechanism that would automatically equilibrate unsettled and inconsistent expectations. In such an environment, price adjustments in current spot markets may trigger further price adjustments that, under the logic of the Lipsey-Lancaster second-best theorem, may be welfare-diminishing rather than welfare-enhancing, and may therefore not equilibrate, but only further disequilibrate, the macroeconomy.


[1] Fisher’s stability analysis was conducted in the context of complete markets in which all agents could make transactions for future delivery at prices agreed on in the present. Thus, for Fisher, arbitrage means that agents choose between contracting for future delivery or waiting to transact until later based on their expectations of whether the forward price now is more or less than the expected future price. In equilibrium, expectations of future prices are correct, so that agents are indifferent between making forward transactions or waiting to make spot transactions unless liquidity considerations dictate a preference for selling forward now or postponing buying till later.

[2] Fisher assumed that, for every commodity or service, transactions can be made either spot or forward. When Fisher spoke of arbitrage, he was referring to the decisions of agents whether to transact spot or forward given the agent’s expectations of the spot price at the time of planned exchange, the forward prices adjusting so that, with no transactions costs, agents are indifferent, at the margin, between transacting spot or forward, given their expectations of the future spot price.

[3] It was therefore incorrect for Fisher (1983, 88) to assert: “we can hope to show that the continued presence of new opportunities is a necessary condition for instability — for continued change,” inasmuch as continued negative surprises could also cause continued — or at least prolonged — change.

[4] Fisher does recognize (pp. 88-89) that changes in expectations can be destabilizing. However, he considers only the possibility of exogenous events that cause expectations to change, but does not consider the possibility that expectations may change endogenously in a destabilizing fashion in the course of an adjustment process following a displacement from a prior equilibrium. See, however, his discussion (p. 91) of the distinction between exogenous and endogenous shocks.

How is . . . an [“exogenous”] shock to be distinguished from the “endogenous” shock brought about by adjustment to the original shock? No Favorable Surprise may not be precisely what is wanted as an assumption in this area, but it is quite difficult to see exactly how to refine it.

A proof of stability under No Favorable Surprise, then, seems quite desirable for a number of related reasons. First, it is the strongest version of an assumption of No Favorable Exogenous Surprise (whatever that may mean precisely); hence, if stability does not hold under No Favorable Surprise it cannot be expected to hold under the more interesting weaker assumption.  

[5] Presumably because income and output are maximized along the equilibrium path, an economy is unlikely to overshoot that path unless entrepreneurial or policy errors cause such overshooting, presumably an unlikely occurrence, although Austrian business-cycle theory, and perhaps certain other monetary business-cycle theories, suggest that such overshooting is not, or has not always been, an uncommon event.

My Paper “Robert Lucas and the Pretense of Science” Is Now Available on SSRN

Peter Howitt, whom I got to know slightly when he spent a year at UCLA while we were both graduate students, received an honorary doctorate from Côte d’Azur University in September. Here is a link to the press release of the University marking the award.

Peter wrote his dissertation under Robert Clower, and when Clower moved from Northwestern to UCLA in the early 1970s, Peter followed Clower as he was finishing up his dissertation. Much of Peter’s early work was devoted to trying to develop the macroeconomic ideas of Clower and Leijonhufvud. His book The Keynesian Recovery collects those important early papers which, unfortunately, did not thwart the ascendance, as Peter was writing those papers, of the ideas of Robert Lucas and his many followers, or the eventual dominance of those ideas over modern macroeconomics.

In addition to the award, a workshop on Coordination Issues in Historical Perspective was organized in Peter’s honor, and my paper, “Robert Lucas and the Pretense of Science,” which shares many of Peter’s misgivings about the current state of macroeconomics, was one of the papers presented at the workshop. In writing the paper, I drew on several posts that I have written for this blog over the years. I have continued to revise the paper since then, and the current version is now available on SSRN.

Here’s the abstract:

Hayek and Lucas were both known for their critiques of Keynesian theory on both theoretical and methodological grounds. Hayek (1934) criticized the idea that continuous monetary expansion could permanently increase total investment, foreshadowing Friedman’s (1968) argument that monetary expansion could not permanently increase employment. Friedman’s analysis set the stage for Lucas’s (1976) critique of macroeconomic policy analysis, a critique that Hayek (1975) had also anticipated. Hayek’s (1942-43) advocacy of methodological individualism might also be considered an anticipation of Lucas’s methodological insistence on the necessity of rejecting Keynesian and other macroeconomic theories not based on explicit microeconomic foundations. This paper compares Hayek’s methodological individualism with Lucasian microfoundations. While Lucasian microfoundations requires all agents to make optimal choices, Hayek recognized that optimization by interdependent agents is a contingent, not a necessary, state of reconciliation and that the standard equilibrium theory on which Lucas relies does not prove that, or explain how, such a reconciliation is, or can be, achieved. The paper further argues that Lucasian microfoundations is a form of what Popper called philosophical reductionism that is incompatible with Hayekian methodological individualism.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4260708

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence as a sophomore: micro (utility and cost theory, Econ 101a), general-equilibrium theory (101b), and macro theory (102). It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the influence of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, though his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as was then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement and highlighted its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that Axel would teach the following quarter.

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later, when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful. The macroeconomic debate was thus often framed as a dispute about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
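In generic textbook notation (my rendering, not Axel’s), the model we were taught amounts to

```latex
\text{IS:} \quad Y = C(Y) + I(r) + G, \qquad I'(r) < 0 \\
\text{LM:} \quad \bar{M}/P = L(Y, r), \qquad L_Y > 0, \; L_r < 0
```

The Keynesian-cross limit corresponds to a perfectly interest-elastic demand for money ($L_r \to -\infty$, a horizontal LM curve), rendering monetary policy ineffective; the Monetarist (classical) limit corresponds to a perfectly interest-inelastic demand for money ($L_r = 0$, a vertical LM curve), rendering fiscal policy ineffective.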

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics, like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought, like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own “Keynes and the Keynesians: A Suggested Interpretation,” a preview of his forthcoming book, and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

In the fall 1968 quarter, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servomechanisms, and stable and unstable dynamic systems, and related those concepts to economic concepts like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity into macroeconomic analysis were never fulfilled. A glance at the index to Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different path, blazed by Robert Lucas and his followers, insisting on a nearly continuous state of rational-expectations equilibrium and implicitly denying that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians that followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing wage, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing market wage by raising the price level to reduce the real wage down to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others, including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions. Keynes therefore believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.

Attention to the implications of the lack of an actual coordinating mechanism, a mechanism simply assumed by neoclassical economic theory (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption), was, in Axel’s view, the great contribution of Keynes. Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along to dispense with the auctioneer and eliminate tâtonnement, while achieving the same result by way of a methodological stratagem in three parts: (a) insisting that all agents be treated as equilibrium optimizers; (b) positing that all agents therefore form identical rational expectations of all future prices using the same common knowledge; so that (c) they all correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers, in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection, an ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real-productivity shocks and dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis, but with little claim to any actual analytical insight.
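That informational ambiguity is usually formalized as a signal-extraction problem (the textbook rendering of the Lucas islands model, supplied here for illustration, not a quotation from the sources discussed): a producer observes only the local price surprise $p$, the sum of an aggregate component $u$ and a relative component $z$, and responds to the optimal estimate

```latex
E[z \mid p] \;=\; \theta p, \qquad \theta = \frac{\sigma_z^2}{\sigma_z^2 + \sigma_u^2},
```

so that real output responds to a monetary shock only insofar as the shock is misread as a relative-price change. Once the aggregate price level is observed, after one period, the confusion disappears, which is why the mechanism cannot by itself generate the persistent fluctuations found in aggregate time series.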

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from Axel’s, when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson whose courses I had not taken as an undergraduate or a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation, under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group — probably not more than 25 or so attended – that met one evening a week for three hours. Each session – and sometimes more than one session — was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. Earl explained the method as resting on the assumption that markets for current delivery clear, but that those market-clearing prices differ from the prices that agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to be the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystalize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that revolutionized macroeconomics by the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Before Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who had an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution of the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution of the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
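For readers who want to see what “optimal probabilistic strategies” means concretely, here is a minimal sketch of the mixed-strategy solution of a two-person zero-sum version of the pursuit. The payoff numbers are illustrative stand-ins, not the values used by von Neumann and Morgenstern:

```python
import numpy as np

# Payoffs to Moriarty (zero-sum: Holmes gets the negative). Rows: Moriarty
# waits at {Dover, Canterbury}; columns: Holmes gets off at {Dover, Canterbury}.
# Moriarty does best when he guesses Holmes's stop; the numbers are invented.
M = np.array([[100.0, -50.0],
              [  0.0, 100.0]])

# With no saddle point, each player mixes so as to leave the opponent
# indifferent between his two pure strategies; those two indifference
# conditions are what render the interdependent choices mutually consistent.
d = M[0, 0] - M[0, 1] - M[1, 0] + M[1, 1]
q = (M[1, 1] - M[1, 0]) / d   # probability Moriarty waits at Dover
r = (M[1, 1] - M[0, 1]) / d   # probability Holmes gets off at Dover

value = (q * r * M[0, 0] + q * (1 - r) * M[0, 1]
         + (1 - q) * r * M[1, 0] + (1 - q) * (1 - r) * M[1, 1])

print(f"Moriarty plays Dover with probability {q:.2f}")
print(f"Holmes plays Dover with probability {r:.2f}")
print(f"Value of the game to Moriarty: {value:.1f}")
```

Neither player can gain by deviating unilaterally from these mixtures, which is precisely the reconciliation of interdependent choices that Morgenstern’s original perfect-foresight argument treated as impossible.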

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general-equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general-equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general-equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, modeling a static one-period economy not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market price adjustments as functions of market excess demands, would converge on an equilibrium price vector, convergence being established by exhibiting a suitable Lyapounov function. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
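In generic modern notation (my restatement, not Samuelson’s own), the tatonnement system is

```latex
\frac{dp_i}{dt} \;=\; k_i\, Z_i(p_1, \dots, p_n), \qquad k_i > 0,
```

and, taking $k_i = 1$ for simplicity, stability is established by exhibiting a Lyapounov function such as $V(p) = \sum_i (p_i - p_i^*)^2$, whose time derivative $dV/dt = 2\sum_i (p_i - p_i^*)\, Z_i(p)$ can be shown to be negative out of equilibrium only under restrictive conditions, such as gross substitutability or the weak axiom of revealed preference holding for aggregate excess demands.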

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped the determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had grappled with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay, by Phil Mirowski, investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. The introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay, by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”), is a philosophically sophisticated analysis of the Lucasian microfoundations methodology. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline, to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Mainstream macroeconomists from a variety of backgrounds have now reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow and, as I have just shown, strictly false, even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital, and the lack of any treatment of endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary-equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice, or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox, his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

 

Say’s (and Walras’s) Law Revisited

Update (6/18/2019): The current draft of my paper is now available on SSRN. Here is a link.

The annual meeting of the History of Economics Society is coming up in two weeks. It will be held at Columbia University in New York, and I will be presenting an unpublished paper of mine, “Say’s Law and the Classical Theory of Depressions.” I began writing this paper about 20 years ago, but never finished it. My thinking about Say’s Law goes back to my first paper on classical monetary theory, and I have previously written blog posts about Say’s Law (here and here). More recently, I realized that in a temporary-equilibrium framework, both Say’s Law and Walras’s Law, however understood, may be violated.

Here’s the abstract from my paper:

Say’s Law occupies a prominent, but equivocal, position in the history of economics, having been the object of repeated controversies about its meaning and significance since it was first propounded early in the nineteenth century. It has been variously defined, and arguments about its meaning and validity have not reached consensus about what was being attacked or defended. This paper proposes a unifying interpretation of Say’s Law based on the idea that the monetary sector of an economy with a competitively supplied money involves at least two distinct markets, not just one. Thus, contrary to the Lange-Patinkin interpretation of Say’s Law, an excess supply or demand for money does not necessarily imply an excess supply or demand for goods in a Walrasian GE model. Beyond modifying the standard interpretation of the inconsistency between Say’s Law and a monetary economy, the paper challenges another standard interpretation of Say’s Law as being empirically refuted by the existence of lapses from full employment and economic depressions. Under the alternative interpretation, originally suggested by Clower and Leijonhufvud and by Hutt, Say’s Law provides a theory whereby disequilibrium in one market, causing the amount actually supplied to fall short of what had been planned to be supplied, reduces demand in other markets, initiating a cumulative process of shrinking demand and supply. This cumulative process of contracting supply is analogous to the Keynesian multiplier whereby a reduction in demand initiates a cumulative process of declining demand. Finally, it is shown that in a temporary-equilibrium context, Walras’s Law (and a fortiori Say’s Law) may be violated.

Here is the Introduction of my paper.

I. Introduction

Say’s Law occupies a prominent, but uncertain, position in the history of economics, having been the object of repeated controversies since the early nineteenth century. Despite a formidable secondary literature, the recurring controversies still demand a clear resolution. Say’s Law has been variously defined, and arguments about its meaning and validity have failed to achieve any clear consensus about just what is being defended or attacked. So, I propose in this paper to reconsider Say’s Law in a way that is faithful in spirit to how it was understood by its principal architects – J. B. Say, James Mill, and David Ricardo – as well as by their contemporary critics, and to provide a conceptual framework within which to assess the views of subsequent commentators.

In doing so, I hope to dispel perhaps the oldest and certainly the most enduring misunderstanding about Say’s Law: that it somehow was meant to assert that depressions cannot occur, or that they are necessarily self-correcting if market forces are allowed to operate freely. As I have tried to suggest with the title of this paper, Say’s Law was actually an element of Classical insights into the causes of depressions. Indeed, a version of the same idea expressed by Say’s Law implicitly underlies those modern explanations of depressions that emphasize coordination failures, though Say’s Law actually conveys an additional insight missing from most modern explanations.

The conception of Say’s Law articulated in this paper bears a strong resemblance to what Clower (1965, 1967) and Leijonhufvud (1968, 1981) called Say’s Principle. However, their artificial distinction between Say’s Law and Say’s Principle suggests a narrower conception and application of Say’s Principle than, I believe, is warranted. Moreover, their apparent endorsement of the idea that the validity of Say’s Law somehow depends in a critical way on the absence of money reflected a straightforward misinterpretation of Say’s Law earlier propounded by, among others, Hayek, Lange and Patinkin, in which only what became known as Walras’s Law, and not Say’s Law, is a logically necessary property of a general-equilibrium system. Finally, it is appropriate to note at the outset that, in most respects, the conception of Say’s Law for which I shall be arguing was anticipated in a quirky, but unjustly neglected, work by Hutt (1975) and by the important, and similarly neglected, work of Earl Thompson (1974).

In the next section, I offer a restatement of the Classical conception of Say’s Law. That conception was indeed based on the insight that, in the now familiar formulation, supply creates its own demand. But to grasp how this insight was originally understood, one must first understand the problem for which Say’s Law was proposed as a solution. The problem concerns the relationship between a depression and a general glut of all goods, but it has two aspects. First, is a depression in some sense caused by a general glut of all goods? Second, is a general glut of all goods logically conceivable in a market economy? In section three, I shall consider the Classical objections to Say’s Law and the responses offered by the Classical originators of the doctrine in reply to those objections. In section four, I discuss the modern objections offered to Say’s Law, their relation to the earlier objections, and the validity of the modern objections to the doctrine. In section five, I re-examine the Classical doctrine, relating it explicitly to a theory of depressions characterized by “inadequate aggregate demand.” I also elaborate on the subtle, but important, differences between my understanding of Say’s Law and what Clower and Leijonhufvud have called Say’s Principle. In section six, I show that when considered in the context of a temporary-equilibrium model in which there is an incomplete set of forward and state-contingent markets, not even Walras’s Law, let alone Say’s Law, is a logically necessary property of the model. An understanding of the conditions in which neither Walras’s Law nor Say’s Law is satisfied provides an important insight into financial crises and the systemic coordination failures that are characteristic of the deep depressions to which they lead.

And here are the last two sections of the paper.

VI. Say’s Law Violated

            I have just argued that Clower, Leijonhufvud and Hutt explained in detail how Say’s Law provides insight into the mechanism whereby disturbances causing disequilibrium in one market or sector can be propagated and amplified into broader and deeper economy-wide disturbances and disequilibria. I now want to argue that, by relaxing the strict Walrasian framework within which, since Lange (1942), Walras’s Law and Say’s Law have been articulated, it is possible to show conditions under which neither Walras’s Law nor Say’s Law is satisfied.

            I relax the Walrasian framework by assuming that there is not a complete set of forward and state-contingent markets in which future transactions can be undertaken in the present. Because there is no complete set of markets in which future prices are determined and visible to everyone, economic agents must formulate their intertemporal plans for production and consumption relying not only on observed current prices, but also on their expectations of currently unobservable future prices. As already noted, the standard proofs of Walras’s Law and a fortiori of Say’s Law (or Identity) are premised on the assumption that all agents make their decisions about purchases and sales on the basis of their common knowledge of all prices.

            Thus, in the temporary-equilibrium framework, economic agents make their production and consumption decisions not on the basis of common knowledge of future market prices, but on their own conjectural expectations of those prices, expectations that may, or may not, be correct, and may, or may not, be aligned with the expectations of other agents. Unless the agents’ expectations of future prices are aligned, the expectations of some, or all, agents must be disappointed, and the plans to buy and sell formulated based on those expectations will have to be revised, or abandoned, once agents realize that their expectations were incorrect.

            Consider a simple two-person, two-good, two-period model in which agents make plans based on current prices observed in period 1 and their expectations of what prices will be in period 2. Given price expectations for period 2, period-1 prices are determined in a tatonnement process, so that no trading occurs until a temporary-equilibrium price vector for period 1 is found. Assume, further, that price expectations for period 2 do not change in the course of the tatonnement. Once a period-1 equilibrium price vector is found, the two budget constraints subject to which the agents make their optimal decisions need not have the same values for expected prices in period 2, because it is not assumed that the period-2 price expectations of the two agents are aligned. Because the proof of Walras’s Law depends on agents basing their decisions to buy and sell each commodity on prices for each commodity in each period that are common to both agents, Walras’s Law cannot be proved unless the period-2 price expectations of both agents are aligned.
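
            To see where the standard proof breaks down, consider a schematic version of the two budget constraints (the notation is mine, introduced only for illustration):

```latex
% Agent h's budget constraint: period-1 excess demands z_h^1 are valued at the
% common period-1 prices p^1, but planned period-2 excess demands z_h^2 are
% valued at agent h's own expected prices \hat{p}_h^2.
p^1 \cdot z_h^1 \;+\; \hat{p}_h^2 \cdot z_h^2 \;=\; 0, \qquad h = 1, 2

% Summing over the two agents:
p^1 \cdot \bigl(z_1^1 + z_2^1\bigr) \;+\; \hat{p}_1^2 \cdot z_1^2 \;+\; \hat{p}_2^2 \cdot z_2^2 \;=\; 0
```

            If the two agents’ period-2 price expectations coincide, the last two terms collapse into the value of aggregate period-2 excess demand at the common expected prices, and the total value of aggregate excess demand vanishes identically, which is Walras’s Law. If the expectations diverge, there is no single price vector at which all the terms are valued, and the value of aggregate excess demand at any one price vector need not be zero.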

            The implication of the potential violation of Walras’s Law is that when actual prices turn out to be different from what they were expected to be, economic agents who previously assumed obligations that are about to come due may be unable to discharge those obligations. In standard general-equilibrium models, the tatonnement process assures that no trading takes place unless equilibrium prices have been identified. But in a temporary-equilibrium model, when decisions to purchase and sell are based not on equilibrium prices, but on actual prices that may not have been expected, the discharge of commitments is not certain.

            Of course, if Walras’s Law cannot be proved, neither can Say’s Law. Supply cannot create demand when the insolvency of agents with negative net worth obstructs mutually advantageous transactions between agents. That negative net worth can, in turn, be transmitted to other agents holding obligations undertaken by the insolvent agents.

            Moreover, because the private supply of a medium of exchange by banks depends on the value of the money-backing assets held by banks, the monetary system may cease to function in an economy in which the net worth of agents whose obligations are held by banks becomes negative. Thus, the argument made in section IV.A for the validity of Say’s Law in the Identity sense breaks down once a sufficient number of agents no longer have positive net worth.

VII.      Conclusion

            My aim in this paper has been to explain and clarify a number of the different ways in which Say’s Law has been understood and misunderstood. A fair reading of the primary and secondary literature shows that many of the criticisms of Say’s Law have not properly understood what Say’s Law was either intended to say or could reasonably be interpreted as saying. Indeed, Say’s Law, properly understood, can actually help one understand the cumulative process of economic contraction whose existence supposedly proved its invalidity. However, I have also been able to show that there are plausible conditions, associated with financial crises in which substantial losses of net worth lead to widespread and contagious insolvency, under which even Walras’s Law, and a fortiori Say’s Law, no longer hold. Understanding how Say’s Law may be violated may thus help in understanding the dynamics of financial crises and the cumulative systemic coordination failures of deep depressions.

I will soon be posting the paper on SSRN. When it’s posted I will post a link to an update to this post.

 

Hayek and Temporary Equilibrium

In my three previous posts (here, here, and here) about intertemporal equilibrium, I have been emphasizing that the defining characteristic of an intertemporal equilibrium is that agents all share the same expectations of future prices – or at least the same expectations of those future prices on which they are basing their optimizing plans – over their planning horizons. At a given moment at which agents share the same expectations of future prices, the optimizing plans of the agents are consistent, because none of the agents would have any reason to change his optimal plan as long as price expectations do not change, or are not disappointed as a result of prices turning out to be different from what they had been expected to be.

The failure of expected prices to be fulfilled would therefore signify that the information available to agents in forming their expectations and choosing optimal plans conditional on their expectations had been superseded by newly obtained information. The arrival of new information can thus be viewed as a cause of disequilibrium, as can any difference in information among agents. The relationship between information and equilibrium can be expressed as follows: differences in information, or differences in how agents interpret information, lead to disequilibrium, because those differences lead agents to form differing expectations of future prices.

Now the natural way to generalize the intertemporal-equilibrium model is to allow agents to have different expectations of future prices, reflecting differences in how they acquire, or in how they process, information. But if agents have different information, so that their expectations of future prices are not the same, the subjectively optimal plans that agents construct on the basis of those expectations will be mutually inconsistent and incapable of implementation without at least some revisions. This generalization, however, seems incompatible with the equilibrium of optimal plans, prices and price expectations described by Roy Radner, which I have identified as an updated version of Hayek’s concept of intertemporal equilibrium.

The question that I want to explore in this post is how to reconcile the absence of an equilibrium of optimal plans, prices, and price expectations with the intuitive notion of market clearing that we use to analyze asset markets and markets for current delivery. If markets for current delivery and for existing assets are in equilibrium, in the sense that prices are adjusting to equate demand and supply in those markets, how can we understand the idea that the optimizing plans that agents are seeking to implement are mutually inconsistent?

The classic attempt to explain this intermediate situation, which partially is and partially is not an equilibrium, was made by J. R. Hicks in 1939 in Value and Capital, when he coined the term “temporary equilibrium” to describe a situation in which current prices are adjusting to equilibrate supply and demand in current markets even though agents are basing their choices of optimal plans to implement over time on different expectations of what prices will be in the future. The divergence of the price expectations on the basis of which agents choose their optimal plans makes it inevitable that some or all of those expectations won’t be realized, and that some, or all, of those agents won’t be able to implement the optimal plans that they have chosen without at least some revisions.

In Hayek’s early works on business-cycle theory, he argued that business cycles must be analyzed as deviations by the economy from its equilibrium path. The problem that he acknowledged with this approach was that the tools of equilibrium analysis could be used to analyze the nature of the equilibrium path of an economy, but could not easily be deployed to analyze how an economy performs once it deviates from its equilibrium path. Moreover, cyclical deviations from an equilibrium path tend not to be immediately self-correcting, but rather seem to be cumulative. Hayek attributed the tendency toward cumulative deviations from equilibrium to the lagged effects of monetary expansion, which cause cumulative distortions in the capital structure of the economy that lead at first to an investment-driven expansion of output, income and employment and then later to cumulative contractions in output, income, and employment. But Hayek’s monetary analysis was never really integrated with the equilibrium analysis that he regarded as the essential foundation for a theory of business cycles, so the monetary analysis of the cycle remained largely distinct from, if not inconsistent with, the equilibrium analysis.

I would suggest that for Hayek the Hicksian temporary-equilibrium construct would have been the appropriate theoretical framework within which to formulate a monetary analysis consistent with equilibrium analysis. Although there are hints in the last part of The Pure Theory of Capital that Hayek was thinking along these lines, I don’t believe that he got very far, and he certainly gave no indication that he saw in the Hicksian method the analytical tool with which to weave the two threads of his analysis.

I will now try to explain how the temporary-equilibrium method makes it possible to understand the conditions for a cumulative monetary disequilibrium. I make no attempt to outline a specifically Austrian or Hayekian theory of monetary disequilibrium, but perhaps others will find it worthwhile to do so.

As I mentioned in my previous post, agents understand that their price expectations may not be realized, and that their plans may have to be revised. Agents also recognize that, given the uncertainty underlying all expectations and plans, not all debt instruments (IOUs) are equally reliable. The general understanding that debt – promises to make future payments – must be evaluated and assessed makes it profitable for some agents to specialize in debt assessment. Such specialists are known as financial intermediaries. And, as I also mentioned previously, the existence of financial intermediaries cannot be rationalized in the ADM model, because, all contracts being made in period zero, there can be no doubt that the equilibrium exchanges planned in period zero will be executed whenever and exactly as scheduled, so that everyone’s promise to pay is equally good and reliable.

For our purposes, a particular kind of financial intermediary – banks – is of primary interest. The role of a bank is to assess the quality of the IOUs offered by non-banks, and to select from the IOUs offered to it those that are sufficiently reliable to be accepted. Once a prospective borrower’s IOU is accepted, the bank exchanges its own IOU for the non-bank’s IOU. No non-bank would accept another non-bank’s IOU, at least not on terms as favorable as those on which the bank accepts it. In return for the non-bank IOU, the bank credits the borrower with a corresponding amount of its own IOUs, which, because the bank promises to redeem its IOUs for the numeraire commodity on demand, are generally accepted at face value.

Thus, bank debt functions as a medium of exchange, enabling non-bank agents to make current expenditures they could not otherwise have made, provided they can demonstrate to the bank that they are sufficiently likely to repay the loan in the future on agreed terms. Such borrowing and repayments are presumably similar to the borrowing and repayments that would occur in the ADM model unmediated by any financial intermediary. In assessing whether a prospective borrower will repay a loan, the bank makes two kinds of assessments. First, does the borrower have sufficient income-earning capacity to generate enough future income to make the promised repayments that the borrower would be committing himself to make? Second, should the borrower’s future income, for whatever reason, turn out to be insufficient to finance the promised repayments, does the borrower have collateral that would allow the bank to secure repayment from the collateral offered as security? In making both kinds of assessments, the bank has to form an expectation about the future – the future income of the borrower and the future value of the collateral.
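
Purely as an illustrative caricature – the function, the haircut parameter, and the numbers are my own invention, not drawn from any actual credit model – the bank’s two assessments might be sketched as follows:

```python
def assess_loan(expected_income, promised_repayments, expected_collateral_value,
                haircut=0.8):
    """Caricature of a bank's two-part loan assessment (illustrative only).

    Every input is an expectation of a future magnitude, which is the point:
    the bank's decision is only as good as its forecasts of the borrower's
    income and of the future value of the collateral.
    """
    total_repayment = sum(promised_repayments)
    # First assessment: is expected future income sufficient to make the
    # promised repayments?
    income_sufficient = expected_income >= total_repayment
    # Second assessment: if income falls short, does the collateral,
    # discounted by a haircut for forced-sale losses, secure the loan?
    collateral_sufficient = haircut * expected_collateral_value >= total_repayment
    return income_sufficient or collateral_sufficient

# A loan approved on expectations that may later be disappointed:
approved = assess_loan(expected_income=100,
                       promised_repayments=[40, 40, 40],
                       expected_collateral_value=160)
```

What the sketch makes explicit is that both tests depend entirely on expected magnitudes; if realized prices and incomes diverge sufficiently from those expectations, loans that passed both tests go bad, and the soundness of the bank’s own IOUs is impaired.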

In a temporary-equilibrium context, the expectations of future prices held by agents are not the same, so the expectations of future prices of at least some agents will not be accurate, and some agents won’t be able to execute their plans as intended. Agents that can’t execute their plans as intended are vulnerable if they have incurred future obligations, based on their expectations of future prices, that exceed their repayment capacity given the future prices that are actually realized. If they have sufficient wealth – i.e., if they have asset holdings of sufficient value – they may still be able to repay their obligations. However, in the process they may have to sell assets or reduce their own purchases, thereby reducing the income earned by other agents. Selling assets under the pressure of obligations coming due is almost always associated with selling those assets at a significant loss, which is precisely why it is usually preferable to finance current expenditure by borrowing funds and making repayments on a fixed schedule than to finance the expenditure by the sale of assets.

Now, in adjusting their plans when they observe that their price expectations are disappointed, agents may respond in two different ways. One type of adjustment is to increase sales or decrease purchases of particular goods and services that they had previously been planning to purchase or sell; such marginal adjustments do not fundamentally alter what agents are doing and are unlikely to seriously affect other agents. But it is also possible that disappointed expectations will cause some agents to conclude that their previous plans are no longer sustainable under the conditions in which they unexpectedly find themselves, so that they must scrap their old plans, replacing them with completely new ones. In the latter case, the abandonment of plans that are no longer viable given disappointed expectations may cause other agents to conclude that the plans that they had expected to implement are no longer profitable and must be scrapped.

When agents whose price expectations have been disappointed respond with marginal adjustments in their existing plans rather than scrapping them and replacing them with new ones, a temporary equilibrium with disappointed expectations may still exist and that equilibrium may be reached through appropriate price adjustments in the markets for current delivery despite the divergent expectations of future prices held by agents. Operation of the price mechanism may still be able to achieve a reconciliation of revised but sub-optimal plans. The sub-optimal temporary equilibrium will be inferior to the allocation that would have resulted had agents all held correct expectations of future prices. Nevertheless, given a history of incorrect price expectations and misallocations of capital assets, labor, and other factors of production, a sub-optimal temporary equilibrium may be the best feasible outcome.

But here’s the problem. There is no guarantee that, when prices turn out to be very different from what they were expected to be, the excess demands of agents will adjust smoothly to changes in current prices. A plan that was optimal based on the expectation that the price of widgets would be $500 a unit may well be untenable at a price of $120 a unit. When realized prices are very different from what they had been expected to be, those price changes can lead to discontinuous adjustments, violating a basic assumption — the continuity of excess demand functions — necessary to prove the existence of an equilibrium. Once output prices reach some minimum threshold, the best response for some firms may be to shut down, the excess demand for the product produced by the firm becoming discontinuous at that threshold price. The firms shutting down operations may be unable to repay loans they had obligated themselves to repay based on their disappointed price expectations. If ownership shares in firms forced to cease production are held by households that have predicated their consumption plans on prior borrowing and current repayment obligations, the ability of those households to fulfill their obligations may be compromised once those firms stop paying out the expected profit streams. Banks holding debts that the borrowing firms or households cannot service may find that their own net worth is reduced sufficiently to make the banks’ own debt unreliable, potentially causing a breakdown in the payment system. Such effects are entirely consistent with a temporary-equilibrium model if actual prices turn out to be very different from what agents had expected and upon which they had constructed their future consumption and production plans.
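
A minimal formal illustration of the discontinuity (the notation is mine): a firm that shuts down below a threshold output price supplies nothing below that price, so market excess demand inherits a jump:

```latex
% Supply of a firm that shuts down below the threshold price \bar{p}:
S(p) =
\begin{cases}
  0    & \text{if } p < \bar{p} \\
  q(p) & \text{if } p \geq \bar{p}, \quad q(\bar{p}) > 0
\end{cases}

% Market excess demand Z(p) = D(p) - S(p) then jumps upward by q(\bar{p}) as p
% falls below \bar{p}, violating the continuity of excess demand on which the
% fixed-point existence proofs depend.
```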

Sufficiently large differences between expected and actual prices in a given period may result in discontinuities in excess demand functions once prices reach critical thresholds, thereby violating the standard continuity assumptions on which the existence of general equilibrium depends under the fixed-point theorems that are the lynchpin of modern existence proofs. C. J. Bliss made such an argument in a 1983 paper (“Consistent Temporary Equilibrium” in the volume Modern Macroeconomic Theory edited by J. P. Fitoussi) in which he also suggested, as I did above, that the divergence of individual expectations implies that agents will not typically regard the debt issued by other agents as homogeneous. Bliss therefore posited the existence of a “Financier” who would subject the borrowing plans of prospective borrowers to an evaluation process to determine if the plan underlying the prospective loan sought by a borrower was likely to generate sufficient cash flow to enable the borrower to repay the loan. The role of the Financier is to ensure that the plans that firms choose are based on roughly similar expectations of future prices so that firms will not wind up acting on price expectations that must inevitably be disappointed.

I am unsure how to understand the function that Bliss’s Financier is supposed to perform. Presumably the Financier is meant as a kind of idealized companion to the Walrasian auctioneer rather than as a representation of an actual institution, but the resemblance between what the Financier is supposed to do and what bankers actually do is close enough to make it unclear to me why Bliss chose an obviously fictitious character to weed out business plans based on implausible price expectations rather than have the role filled by more realistic characters that do what their real-world counterparts are supposed to do. Perhaps Bliss’s implicit assumption is that real-world bankers do not constrain the expectations of prospective borrowers sufficiently for their evaluation of borrowers to increase the likelihood that a temporary equilibrium actually exists, so that only an idealized central authority could impose enough consistency on price expectations to make the existence of a temporary equilibrium likely.

But from the perspective of positive macroeconomic and business-cycle theory, explicitly introducing banks that simultaneously provide an economy with a medium of exchange – either based on convertibility into a real commodity or into a fiat base money issued by the monetary authority – while intermediating between ultimate borrowers and ultimate lenders seems to be a promising way of modeling a dynamic economy that sometimes may — and sometimes may not — function at or near a temporary equilibrium.

We observe economies operating in the real world that sometimes appear to be functioning, from a macroeconomic perspective, reasonably well with reasonably high employment, increasing per capita output and income, and reasonable price stability. At other times, these economies do not function well at all, with high unemployment and negative growth, sometimes with high rates of inflation or with deflation. Sometimes, these economies are beset with financial crises in which there is a general crisis of solvency, and even apparently solvent firms are unable to borrow. A macroeconomic model should be able to account in some way for the diversity of observed macroeconomic experience. The temporary equilibrium paradigm seems to offer a theoretical framework capable of accounting for this diversity of experience and for explaining at least in a very general way what accounts for the difference in outcomes: the degree of congruence between the price expectations of agents. When expectations are reasonably consistent, the economy is able to function at or near a temporary equilibrium which is likely to exist. When expectations are highly divergent, a temporary equilibrium may not exist, and even if it does, the economy may not be able to find its way toward the equilibrium. Price adjustments in current markets may be incapable of restoring equilibrium inasmuch as expectations of future prices must also adjust to equilibrate the economy, there being no market mechanism by which equilibrium price expectations can be adjusted or restored.

This, I think, is the insight underlying Axel Leijonhufvud’s idea of a corridor within which an economy tends to stay close to an equilibrium path. However, if the economy drifts or is shocked away from its equilibrium time path, the stabilizing forces that tend to keep an economy within the corridor cease to operate at all or operate only weakly, so that the tendency for the economy to revert back to its equilibrium time path is either absent or disappointingly weak.

The temporary-equilibrium method, it seems to me, might have been a path that Hayek could have successfully taken in pursuing the goal he had set for himself early in his career: to reconcile equilibrium-analysis with a theory of business cycles. Why he ultimately chose not to take this path is a question that, for now at least, I will leave to others to try to answer.

The Free Market Economy Is Awesome and Fragile

Scott Sumner’s three most recent posts (here, here, and here) have been really great, and I’d like to comment on all of them. I will start with a comment on his post discussing whether the free market economy is stable; perhaps I will get around to the other two next week. Scott uses a 2009 paper by Robert Hetzel as the starting point for his discussion. Hetzel distinguishes between those who view the stabilizing properties of price adjustment as being overwhelmed by real instabilities reflecting fluctuations in consumer and entrepreneurial sentiment – waves of optimism and pessimism – and those who regard the economy as either perpetually in equilibrium (RBC theorists) or just usually in equilibrium (Monetarists) unless destabilized by monetary shocks. Scott classifies himself, along with Hetzel and Milton Friedman, in the latter category.

Scott then brings Paul Krugman into the mix:

Friedman, Hetzel, and I all share the view that the private economy is basically stable, unless disturbed by monetary shocks. Paul Krugman has criticized this view, and indeed accused Friedman of intellectual dishonesty, for claiming that the Fed caused the Great Depression. In Krugman’s view, the account in Friedman and Schwartz’s Monetary History suggests that the Depression was caused by an unstable private economy, which the Fed failed to rescue because of insufficiently interventionist monetary policies. He thinks Friedman was subtly distorting the message to make his broader libertarian ideology seem more appealing.

This is a tricky topic for me to handle, because my own view of what happened in the Great Depression is in one sense similar to Friedman’s – monetary policy, not some spontaneous collapse of the private economy, was what precipitated and prolonged the Great Depression – but Friedman had a partial, simplistic and distorted view of how and why monetary policy failed. And although I believe Friedman was correct to argue that the Great Depression did not prove that the free market economy is inherently unstable and requires comprehensive government intervention to keep it from collapsing, I think that his account of the Great Depression was to some extent informed by his belief that his own simple k-percent rule for monetary growth was a golden bullet that would ensure economic stability and high employment.

I’d like to first ask a basic question: Is this a distinction without a meaningful difference? There are actually two issues here. First, does the Fed always have the ability to stabilize the economy, or does the zero bound sometimes render their policies impotent?  In that case the two views clearly do differ. But the more interesting philosophical question occurs when not at the zero bound, which has been the case for all but one postwar recession. In that case, does it make more sense to say the Fed caused a recession, or failed to prevent it?

Here’s an analogy. Someone might claim that LeBron James is a very weak and frail life form, whose legs will cramp up during basketball games without frequent consumption of fluids. Another might suggest that James is a healthy and powerful athlete, who needs to drink plenty of fluids to perform at his best during basketball games. In a sense, both are describing the same underlying reality, albeit with very different framing techniques. Nonetheless, I think the second description is better. It is a more informative description of LeBron James’s physical condition, relative to average people.

By analogy, I believe the private economy in the US is far more likely to be stable with decent monetary policy than is the economy of Venezuela (which can fall into depression even with sufficiently expansionary monetary policy, or indeed overly expansionary policies.)

I like Scott’s LeBron James analogy, but I have two problems with it. First, although LeBron James is a great player, he’s not perfect. Sometimes, even he messes up. When he messes up, it may not be his fault, in the sense that, with better information or better foresight – say, a little more rest in the second quarter – he might have sunk the game-winning three-pointer at the buzzer. Second, it’s one thing to say that a monetary shock caused the Great Depression, but maybe we just don’t know how to avoid monetary shocks. LeBron can miss shots; so can the Fed. Milton Friedman certainly didn’t know how to avoid monetary shocks, because his pet k-percent rule, as F. A. Hayek shrewdly observed, was simply a monetary shock waiting to happen. And John Taylor certainly doesn’t know how to avoid monetary shocks, because his pet rule would have caused the Fed to raise interest rates in 2011 with possibly devastating consequences. I agree that a nominal GDP level target would have resulted in a monetary policy superior to the policy the Fed has been conducting since 2008, but do I really know that? I am not sure that I do. The false promise held out by Friedman was that it is easy to get monetary policy right all the time. It certainly wasn’t the case for Friedman’s pet rule, and I don’t think that there is any monetary rule out there that we can be sure will keep us safe and secure and fully employed.

But going beyond the LeBron analogy, I would make a further point. We just have no theoretical basis for saying that the free-market economy is stable. We can prove that, under some assumptions – and it is, to say the least, debatable whether the assumptions could properly be described as reasonable – a model economy corresponding to the basic neoclassical paradigm can be solved for an equilibrium solution. The existence of an equilibrium solution means basically that the neoclassical model is logically coherent, not that it tells us much about how any actual economy works. The pieces of the puzzle could all be put together in a way so that everything fits, but that doesn’t mean that in practice there is any mechanism whereby that equilibrium is ever reached or even approximated.

The argument for the stability of the free market that we learn in our first course in economics, which shows us how price adjusts to balance supply and demand, is an argument that, when every market but one – well, actually two, but we don’t have to quibble about it – is already in equilibrium, price adjustment in the remaining market – if it is small relative to the rest of the economy – will bring that market into equilibrium as well. That’s what I mean when I refer to the macrofoundations of microeconomics. But when many markets are out of equilibrium, even the markets that seem to be in equilibrium (with amounts supplied and demanded equal) are not necessarily in equilibrium, because the price adjustments in other markets will disturb the seeming equilibrium of the markets in which supply and demand are momentarily equal. So there is not necessarily any algorithm, either in theory or in practice, by which price adjustments in individual markets would ever lead the economy into a state of general equilibrium. If we believe that the free-market economy is stable, our belief is therefore not derived from any theoretical proof of the stability of the free-market economy; it rests simply on an intuition, and some sort of historical assessment, that free markets tend to work well most of the time. I would just add that, in his seminal 1937 paper, “Economics and Knowledge,” F. A. Hayek actually made just that observation, though it is not an observation that he, or most of his followers – with the notable and telling exceptions of G. L. S. Shackle and Ludwig Lachmann – made a big fuss about.
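The point can be made concrete with a toy simulation – a purely hypothetical two-market linear excess-demand system of my own invention, not derived from any underlying preferences. Each market, taken by itself with the other held at its equilibrium price, converges under the usual price adjustment; let both prices adjust simultaneously, and the strong cross-market effects make almost every starting point diverge.

```python
import numpy as np

# Hypothetical linear excess demands around the equilibrium prices (1, 1):
#   Z1 = -0.1*(p1 - 1) + 2.0*(p2 - 1)
#   Z2 =  2.0*(p1 - 1) - 0.1*(p2 - 1)
# Own-price effects are negative, but the cross effects dominate,
# so the linearized system is a saddle.
A = np.array([[-0.1, 2.0],
              [2.0, -0.1]])
p_star = np.array([1.0, 1.0])

def tatonnement(p0, fix_market_2=False, steps=5000, dt=0.01):
    """Euler-integrate dp/dt = Z(p), optionally holding p2 at equilibrium."""
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        z = A @ (p - p_star)
        if fix_market_2:
            z[1] = 0.0  # market 2 kept in equilibrium by assumption
        p += dt * z
    return p

print(tatonnement([1.2, 1.0], fix_market_2=True))  # -> close to [1.0, 1.0]
print(tatonnement([1.2, 1.0]))                     # -> prices run away
```

Nothing in the sketch depends on the particular numbers; its only purpose is to show that single-market stability tells us nothing about the stability of the system as a whole.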

Axel Leijonhufvud, who is certainly an admirer of Hayek, addresses the question of the stability of the free-market economy in terms of what he calls a corridor. If you think of an economy moving along a time path, and if you think of the time path that would be followed by the economy if it were operating at a full-employment equilibrium, Leijonhufvud’s corridor hypothesis is that the actual time path of the economy tends to revert to the equilibrium time path as long as deviations from the equilibrium are kept within certain limits, those limits defining the corridor. However, if the economy, for whatever reason (exogenous shocks or other mishaps), leaves the corridor, the spontaneous equilibrating tendencies causing the actual time path to revert back to the equilibrium time path may break down, and there may be no further tendency for the economy to revert back to its equilibrium time path. And as I pointed out recently in my post on Earl Thompson’s “Reformulation of Macroeconomic Theory,” he was able to construct a purely neoclassical model with two potential equilibria, one of which was unstable, so that a shock from the lower equilibrium would lead either to a reversion to the higher-level equilibrium or to a downward spiral with no endogenous stopping point.
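To see what such a corridor might look like in miniature, here is a deliberately crude one-dimensional sketch – my own toy construction, not Thompson’s or Leijonhufvud’s actual model – with a stable equilibrium at x = 1 and an unstable one at x = 0.4 marking the lower edge of the corridor.

```python
# Toy corridor dynamics: dx/dt = (x - 0.4)*(1 - x).
# x = 1.0 is a stable equilibrium; x = 0.4 is unstable and marks
# the lower edge of the "corridor".
def trajectory(x0, steps=400, dt=0.05):
    x = x0
    for _ in range(steps):
        x += dt * (x - 0.4) * (1.0 - x)
        if x < -10:  # cumulative collapse with no endogenous floor
            break
    return x

print(trajectory(0.90))  # inside the corridor: reverts toward 1.0
print(trajectory(0.45))  # barely inside: still pulled back up toward 1.0
print(trajectory(0.35))  # outside: the decline feeds on itself
```

Within the corridor the usual equilibrating story holds; outside it, the very same adjustment dynamics amplify the shock instead of damping it.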

Having said all that, I still agree with Scott’s bottom line: if the economy is operating below full employment, and inflation and interest rates are low, there is very likely a problem with monetary policy.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here; he probably has elsewhere – exactly why he doesn’t think equality of demand and supply in a Walrasian auction is the right equilibrium concept. I would be interested in hearing from him why he thinks it is not. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model, the property being that if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model doesn’t posit that agents correctly foresee the equilibrium price vector – that would be an extreme and unrealistic assumption about how the world actually works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and, in view of Roger’s recent discussion of unit roots, only dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.
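For concreteness, the standard textbook example of rational-expectations indeterminacy – which may or may not be the kind of multiplicity Roger has in mind – is the linear expectational difference equation x(t) = a·E[x(t+1) | t] with a > 1. Rational expectations requires only that E[x(t+1) | t] = x(t)/a, so any process of the form x(t+1) = x(t)/a + η(t+1), with η an arbitrary shock that is unforecastable at t, is a rational-expectations equilibrium. Because 1/a is less than one, every such path is stable around the unique steady state x = 0. The steady state is unique, yet there is a continuum of stable equilibrium paths around it, indexed by the arbitrary “sunspot” innovations η.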

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e., becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

Aggregate Demand and Coordination Failures

Regular readers of this blog may have noticed that I have been writing less about monetary policy and more about theory and methodology than when I started blogging a little over three years ago. One reason for that is that I’ve already said what I want to say about policy, and, since I get bored easily, I look for new things to write about. Another reason is that, at least in the US, the economy seems to have reached a sustainable growth path that is likely to continue for the near-to-intermediate term. I think that monetary policy could be doing more to promote recovery, and I wish that it would, but unfortunately the policy is what it is, and it will continue more or less in the way that Janet Yellen has been saying it will. Falling oil prices, reflecting increasing US oil output, suggest that growth may speed up slightly even as inflation stays low, possibly falling to one percent or less. At least in the short term, the fall in inflation does not seem like a cause for concern. A third reason for writing less about monetary policy is that I have been giving a lot of thought to what it is that I dislike about the current state of macroeconomics, and as I have been thinking about it, I have been writing about it.

In thinking about what I think is wrong with modern macroeconomics, I have been coming back again and again, though usually without explicit attribution, to an idea that was impressed upon me as an undergrad and grad student by Axel Leijonhufvud: that the main concern of macroeconomics ought to be with failures of coordination. A Swede, trained in the tradition of the Wicksellian Stockholm School, Leijonhufvud immersed himself in the study of the economics of Keynes and Keynesian economics, while also mastering the Austrian literature, and becoming an admirer of Hayek, especially Hayek’s seminal 1937 paper, “Economics and Knowledge.”

In discussing Keynes, Leijonhufvud focused on two kinds of coordination failures.

First, there is a problem in the labor market. If there is unemployment because the real wage is too high, an individual worker can’t solve the problem by offering to accept a reduced nominal wage. Suppose the price of output is $1 a unit and the wage is $10 a day, but the real wage consistent with full employment is $9 a day, meaning that, at the current real wage, producers choose to produce less output, and to hire fewer workers, than they would at the lower full-employment real wage. If an individual worker offers to accept a wage of $9 a day, but other workers continue to hold out for $10 a day, it’s not clear that an employer would want to hire the worker who offers to work for $9 a day. If employers are not hiring additional workers because the incremental revenue generated by added output would not cover its cost, the willingness of one worker to work for $9 a day is not likely to make a difference to the employer’s output and hiring decisions. It is not obvious what sequence of transactions would result in an increase in output and employment when the real wage is above the equilibrium level. There are complex feedback effects from any change, so that the net effect of making such changes in piecemeal fashion is unpredictable, even though there is a possible full-employment equilibrium with a real wage of $9 a day. If the problem is that real wages in general are too high for full employment, the willingness of an individual worker to accept a reduced wage from a single employer does not fix the problem.

In the standard competitive model, there is a perfect market for every commodity in which every transactor is assumed to be able to buy and sell as much as he wants. But the standard competitive model has very little to say about the process by which equilibrium prices are arrived at. And a typical worker is never faced with the kind of choice posited in the competitive model: an impersonal uniform wage at which he can decide how many hours a day, a week, or a year he wants to work. Under those circumstances, Keynes argued that the willingness of some workers to accept wage cuts in order to gain employment would not significantly increase employment, and might actually have destabilizing side-effects. Keynes tried to make this argument in the framework of an equilibrium model, though the nature of the argument, as Don Patinkin among others observed, was really better suited to a disequilibrium framework. Unfortunately, Keynes’s argument was subsequently dumbed down to a simple assertion that wages and prices are sticky (especially downward).

Second, there is an intertemporal problem, because the interest rate may be stuck at a rate too high to allow enough current investment to generate the full-employment level of spending given the current level of the money wage. In this scenario, unemployment isn’t caused by a real wage that is too high, so trying to fix it by wage adjustment would be a mistake. Since the source of the problem is the rate of interest, the way to fix the problem would be to reduce the rate of interest. But depending on the circumstances, there may be a coordination failure: bear speculators, expecting the rate of interest to rise when it falls to abnormally low levels, prevent the rate of interest from falling enough to induce enough investment to support full employment. Keynes put too much weight on bear speculators as the source of the intertemporal problem; Hawtrey’s notion of a credit deadlock would actually have been a better way to go, and nowadays, when people speak about a Keynesian liquidity trap, what they really have in mind is something closer to Hawtreyan credit deadlock than to the Keynesian liquidity trap.

Keynes surely deserves credit for identifying and explaining two possible sources of coordination failures, failures affecting the macroeconomy, because interest rates and wages, though they actually come in many different shapes and sizes, affect all markets and are true macroeconomic variables. But Keynes’s analysis of those coordination failures was far from being fully satisfactory, which is not surprising; a theoretical pioneer rarely provides a fully satisfactory analysis, leaving lots of work for successors.

But I think that Keynes’s theoretical paradigm actually did lead macroeconomics in the wrong direction, in the direction of a highly aggregated model with a single output, a bond, a medium of exchange, and a labor market, with no explicit characterization of the production technology. (I.e., is there one factor or two, and, if two, how is the price of the second factor determined? See here, here, here, and here for my discussion of Earl Thompson’s “A Reformulation of Macroeconomic Theory,” which I hope at some point to revisit and continue.)

Why was it the wrong direction? Because the Keynesian model (both Keynes’s own version and the Hicksian IS-LM version) ruled out the sort of coordination problems that might arise in a multi-product, multi-factor, intertemporal model in which total output depends in a meaningful way on the meshing of interdependent plans, independently formulated by decentralized decision-makers, contingent on possibly inconsistent expectations of the future. In the over-simplified and over-aggregated Keynesian model, the essence of the coordination problem has been assumed away, leaving only a residue of the actual problem to be addressed by the model. The focus of the model is on aggregate expenditure, income, and output flows, with no attention paid to the truly daunting task of achieving sufficient coordination among the independent decision-makers to allow total output and income to closely approximate the maximum sustainable output and income that the system could generate in a perfectly coordinated state, aka full intertemporal equilibrium.

This way of thinking about macroeconomics led to the merging of macroeconomics with neoclassical growth theory and to the routine and unthinking incorporation of aggregate production functions in macroeconomic models, a practice that is strictly justified only in a single-output, two-factor model in which the value of capital is independent of the rate of interest, so that the havoc-producing effects of reswitching and capital-reversal can be avoided. Eventually, these models were taken over by modern real-business-cycle theorists, who dogmatically rule out any consideration of coordination problems, while attributing all observed output and employment fluctuations to random productivity shocks. If one thinks of macroeconomics as an attempt to understand coordination failures, the RBC explanation of output and employment fluctuations is totally backwards; productivity fluctuations, like fluctuations in output and employment, are not the results of unexplained random disturbances; they are the symptoms of coordination failures. That’s it, eureka! Solve the problem by assuming that it does not exist.

If you are thinking that this seems like an Austrian critique of the Keynesian model or the Keynesian approach, you are right; it is an Austrian critique. But it has nothing to do with stereotypical Austrian policy negativism; it is a critique of the oversimplified structure of the Keynesian model, which foreshadowed the reductio ad absurdum of modern real-business-cycle theory, which has nearly banished the idea of coordination failures from modern macroeconomics. The critique is not about the lack of a roundabout capital structure; it is about the narrow scope for inconsistencies in production and consumption plans.

I think that Leijonhufvud, almost 40 years ago, was getting at this point when he wrote the following paragraph near the end of his book on Keynes.

The unclear mix of statics and dynamics [in the General Theory] would seem to be the main reason for later muddles. One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox, his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of simple, “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some step of past developments in order to get on the right track – and that is probably advisable – my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (pp. 401-02)

I don’t think that we actually need to go back to Hayek, though “Economics and Knowledge” should certainly be read by every macroeconomist, but we do need to get a clearer understanding of the potential for breakdowns in economic activity to be caused by inconsistent expectations, especially when expectations are themselves mutually dependent and reinforcing. Because expectations are mutually interdependent, they are highly susceptible to network effects. Network effects produce tipping points, and tipping points can lead to catastrophic outcomes.
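How tipping might work can be sketched with a bare-bones threshold model of interdependent expectations – a hypothetical Granovetter-style construction of my own, not anything Leijonhufvud or Hayek wrote down. Each agent turns pessimistic once the share of pessimists exceeds a personal threshold; a small initial shock dies out, while a somewhat larger one tips the whole population.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical thresholds: agent i turns pessimistic once the share of
# pessimists exceeds theta[i].  Beta(4, 6) is chosen only to make the
# cascade map S-shaped; nothing hangs on the particular distribution.
theta = rng.beta(4, 6, size=100_000)

def settled_share(initial_share, rounds=200):
    """Iterate best responses: tomorrow's pessimists are the agents
    whose thresholds lie below today's pessimist share."""
    share = initial_share
    for _ in range(rounds):
        share = np.mean(theta < share)
    return share

print(settled_share(0.10))  # small shock: the cascade dies out (~0)
print(settled_share(0.40))  # larger shock: the whole economy tips (~1)
```

Just wanted to share that with you. Have a nice day.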

Never Reason from a Disequilibrium

One of Scott Sumner’s many contributions as a blogger has been to show over and over and over again how easy it is to lapse into fallacious economic reasoning by positing a price change and then trying to draw inferences about the results of the price change. The problem is that a price change doesn’t just happen; it is the result of some other change. There being two basic categories of changes (demand and supply) that can affect price, there are always at least two possible causes for a given price change. So, until you have specified the antecedent change responsible for the price change under consideration, you can’t work out the consequences of the price change.
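A trivial numerical illustration (hypothetical linear schedules, chosen only for concreteness): the same observed price increase is consistent with opposite movements in quantity, depending on whether the antecedent change was a shift in demand or a shift in supply.

```python
def equilibrium(a, b, c, d):
    """Linear demand Qd = a - b*p and supply Qs = c + d*p; returns (p*, q*)."""
    p = (a - c) / (b + d)
    return p, a - b * p

print(equilibrium(a=100, b=2, c=10, d=1))   # baseline: p = 30, q = 40
print(equilibrium(a=112, b=2, c=10, d=1))   # demand shift: p = 34, q = 44
print(equilibrium(a=100, b=2, c=-2, d=1))   # supply shift: p = 34, q = 32
```

Both shocks produce the identical price increase from 30 to 34, but quantity rises in one case and falls in the other, so an inference drawn from the price change alone is worthless until the antecedent change is specified.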

In this post, I want to extend Scott’s insight in a slightly different direction, and explain how every economic analysis has to begin with a statement about the initial conditions from which the analysis starts. In particular, you need to be clear about the equilibrium position corresponding to the initial conditions from which you are starting. If you posit some change in the system, but your starting point isn’t an equilibrium, you have no way of separating out the adjustment to the change that you are imposing on the system from the change the system would be undergoing simply to reach the equilibrium toward which it is already moving, or, even worse, from the change the system would be undergoing if its movement is not toward equilibrium.

Every theoretical analysis in economics properly imposes a ceteris paribus condition. Unfortunately, the ubiquitous ceteris paribus condition comes dangerously close to rendering economic theory irrefutable, except perhaps in a statistical sense, because empirical refutations of the theory can always be attributed to changes that the theory abstracts from but that are present in the real world of our experience. An empirical model with a sufficient number of data points may be able to control for the changes in conditions that the theory holds constant, but the underlying theory is a comparison of equilibrium states (comparative statics), and it is quite a stretch to assume that the effects of perpetual disequilibrium can be treated as nothing but white noise. Austrians are right to be skeptical of econometric analysis; so was Keynes, for that matter. But skepticism need not imply nihilism.

Let me try to illustrate this principle by applying it to the Keynesian analysis of involuntary unemployment. In the General Theory Keynes argued that if aggregate demand is deficient, the likely result is an equilibrium with involuntary unemployment. The “classical” argument that Keynes disputed was that, in principle at least, involuntary unemployment could not persist, because unemployed workers, if only they would accept reduced money wages, would eventually find employment. Keynes denied that claim, arguing that even if workers did accept reduced money wages, the wage reductions would not be translated into reduced real wages. Instead, falling nominal wages would induce employers to cut prices by roughly the same percentage as the reduction in nominal wages, leaving real wages more or less unchanged, thereby nullifying the effectiveness of nominal-wage cuts and, instead, fueling a vicious downward spiral of prices and wages.

In making this argument, Keynes didn’t dispute the neoclassical proposition that, with a given capital stock, the marginal product of labor declines as employment increases, implying that real wages have to fall for employment to be increased. His argument was about the nature of the labor-supply curve, labor supply, in Keynes’s view, being a function of both the real and the nominal wage, not, as in the neoclassical theory, only the real wage. Under Keynes’s “neoclassical” analysis, the problem with nominal-wage cuts is that they don’t do the job, because they lead to corresponding price cuts. The only way to reduce unemployment, Keynes insisted, is to raise the price level. With nominal wages constant, an increased price level would achieve the real-wage cut necessary for employment to be increased. And this is precisely how Keynes defined involuntary unemployment: the willingness of workers to increase the amount of labor actually supplied in response to a price level increase that reduces their real wage.
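To fix the arithmetic (my numbers, echoing the earlier example): with a nominal wage W = $10 a day and a price level P = 1, the real wage is W/P = 10. If workers accept W = $9 but prices fall in the same proportion, to P = 0.9, the real wage is 9/0.9 = 10, unchanged, so on Keynes’s argument the wage cut accomplishes nothing. If instead W stays at $10 and the price level rises to 10/9 (about 1.11), the real wage falls to 9 – on Keynes’s assumptions, the level consistent with full employment.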

Interestingly, in trying to explain why nominal-wage cuts would fail to increase employment, Keynes suggested that the redistribution of income from workers to entrepreneurs associated with reduced nominal wages would tend to reduce consumption, thereby reducing, not increasing, employment. But if that is so, how is it that a reduced real wage, achieved via inflation, would increase employment? Why would the distributional effect of a reduced nominal, but unchanged real, wage be more adverse to employment than a reduced real wage, achieved, with a fixed nominal wage, by way of a price-level increase?

Keynes’s explanation for all this is confused. In chapter 19, where he makes the argument that money-wage cuts can’t eliminate involuntary unemployment, he presents a variety of reasons why nominal-wage cuts are ineffective, and it is usually not clear at what level of theoretical abstraction he is operating, and whether he is arguing that nominal-wage cuts would not work even in principle or that, although nominal-wage cuts might succeed in theory, they would inevitably fail in practice. Even more puzzling, it is not clear whether he thinks that real wages have to fall to achieve full employment or that full employment could be restored by an increase in aggregate demand with no reduction in real wages. In particular, because Keynes doesn’t start his analysis from a full-employment equilibrium, and doesn’t specify the shock that moves the economy off its equilibrium position, we can only guess whether Keynes is talking about a shock that has reduced labor productivity or (more likely) a shock to entrepreneurial expectations (animal spirits) that has no direct effect on labor productivity.

There was a rhetorical payoff for Keynes in maintaining that ambiguity, because he wanted to present a “general theory” in which full employment is a special case. Keynes therefore emphasized that the labor market is not self-equilibrating by way of nominal-wage adjustments. That was a perfectly fine and useful insight: when the entire system is out of kilter, there is no guarantee that just letting the free market set prices will bring everything back into place. The theory of price adjustment is fundamentally a partial-equilibrium theory that isolates the disequilibrium of a single market, with all other markets in (approximate) equilibrium. There is no necessary connection between the adjustment process in a partial-equilibrium setting and the adjustment process in a full-equilibrium setting. The stability of a single market in disequilibrium does not imply the stability of the entire system of markets in disequilibrium. Keynes might have presented his “general theory” as a theory of disequilibrium, but he preferred (perhaps because he had no other tools to work with) to spell out his theory in terms of familiar equilibrium concepts: savings equaling investment and income equaling expenditure, leaving it ambiguous whether the failure to reach a full-employment equilibrium is caused by a real wage that is too high or an interest rate that is too high. Axel Leijonhufvud highlights the distinction between a disequilibrium in the real wage and a disequilibrium in the interest rate in an important essay, “The Wicksell Connection,” included in his book Information and Coordination.

Because Keynes did not commit himself on whether a reduction in the real wage is necessary for equilibrium to be restored, it is hard to assess his argument about whether, by accepting reduced money wages, workers could in fact reduce their real wages sufficiently to bring about full employment. Keynes’s argument that money-wage cuts accepted by workers would be undone by corresponding price cuts reflecting reduced production costs is hardly compelling. If the current level of money wages is too high for firms to produce profitably, it is not obvious why the reduced money wages paid by entrepreneurs would be entirely dissipated by price reductions, with none of the cost decline being reflected in increased profit margins. If wage cuts do increase profit margins, that would encourage entrepreneurs to increase output, potentially triggering an expansionary multiplier process. In other words, if the source of disequilibrium is that the real wage is too high, the real wage depending on both the nominal wage and price level, what is the basis for concluding that a reduction in the nominal wage would cause a change in the price level sufficient to keep the real wage at a disequilibrium level? Is it not more likely that the price level would fall no more than required to bring the real wage back to the equilibrium level consistent with full employment? The question is not meant as an expression of policy preference; it is a question about the logic of Keynes’s analysis.

Interestingly, present-day opponents of monetary stimulus (for whom “Keynesian” is a term of extreme derision) like to make a sort of Keynesian argument: monetary stimulus, by raising the price level, reduces the real wage. That means that monetary stimulus is bad, as it is harmful to workers, whose interests, we all know, are the highest priority of many opponents of monetary stimulus – except perhaps for the interests of rentiers living off the interest generated by their bond portfolios. Once again, the logic is less than compelling. Keynes believed that an increase in the price level could reduce the real wage, a reduction that, at least potentially, might be necessary for the restoration of full employment.

But here is my question: why would an increase in the price level reduce the real wage rather than raise money wages along with the price level? To answer that question, you need to have some idea of whether the current level of real wages is above or below the equilibrium level. If unemployment is high, there is at least some reason to think that the equilibrium real wage is less than the current level, which is why an increase in the price level would be expected to cause the real wage to fall, i.e., to move the actual real wage in the direction of equilibrium. But if the current real wage is about equal to, or even below, the equilibrium level, then why would one think that an increase in the price level would not also cause money wages to rise correspondingly? It seems more plausible, in the absence of a good reason to think otherwise, that inflation would cause real wages to fall only if real wages are above their equilibrium level.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.
