Archive for the 'Franklin Fisher' Category

A New Version of my Paper “Between Walras and Marshall: Menger’s Third Way” Is Now Available on SSRN

Last week I reposted a revised version of a blogpost from last November, which was a revised section from my paper “Between Walras and Marshall: Menger’s Third Way.” That paper was presented at a conference in September 2021 marking the 100th anniversary of Menger’s death. I have now completed my revision of the entire paper, and the new version is now posted on SSRN.

Here is the link to the new version, and here is the abstract of the paper:

Neoclassical economics is bifurcated between Marshall’s partial-equilibrium approach and Walras’s general-equilibrium approach. Neoclassical theory having failed to explain the Great Depression, Keynes proposed a theory of involuntary unemployment, later subsumed under the neoclassical synthesis of Keynesian and Walrasian theories. Lacking suitable microfoundations, that synthesis collapsed. But Walrasian theory provides no account of how equilibrium is achieved. Marshallian partial-equilibrium analysis offered a more plausible account of how general equilibrium is reached. But by presuming that all markets but the one being analyzed are already in equilibrium, Marshallian partial equilibrium, like Walrasian general equilibrium, begs the question of how equilibrium is attained. A Mengerian approach to circumvent this conceptual impasse, relying in part on a critique of Franklin Fisher’s analysis of the stability of general equilibrium, is proposed.

Comments, criticisms, and suggestions are welcomed and encouraged.

Franklin Fisher on the Disequilibrium Foundations of Economics and the Stability of General Equilibrium Redux

Last November I posted a revised section of a paper I’m now working on, an earlier version of which is posted on SSRN. I have now further revised the paper, and that section in particular, so I’m posting the current version of that section in hopes of receiving further comments, criticisms, and suggestions before I submit the paper to a journal. So I will be very grateful to all those who respond, and will try not to be too cranky in my replies.

I         Fisher’s Model and the No Favorable Surprise Assumption

Unsuccessful attempts to prove, under standard neoclassical assumptions, the stability of general equilibrium led Franklin Fisher (1983, Disequilibrium Foundations of Equilibrium Economics) to suggest an alternative approach to proving stability, based on three assumptions: (1) trading occurs at disequilibrium prices (in contrast to the standard assumption that no trading takes place until a new equilibrium is found, prices being adjusted under a tatonnement process); (2) in any disequilibrated market, the unsatisfied transactors are all on one side of the market, either all unsatisfied demanders or all unsatisfied suppliers; (3) the “no favorable surprises” (NFS) assumption previously advanced by Hahn (1978, “On Non-Walrasian Equilibria”).

At the starting point of a disequilibrium process, some commodities would be in excess demand, some in excess supply, and, perhaps, some in equilibrium. Let Zi denote the excess demand for commodity i, with i ranging from 1 to n; let commodities in excess demand be numbered from 1 to k, commodities initially in equilibrium from k+1 to m, and commodities in excess supply from m+1 to n. Thus, by assumption, no agent has an excess supply of commodities numbered from 1 to k, no agent has an excess demand for commodities numbered from m+1 to n, and no agent has either an excess demand or an excess supply of commodities numbered from k+1 to m.[1]

Fisher argued that in disequilibrium, with prices, not necessarily uniform across all trades, rising in markets with excess demand, falling in markets with excess supply, and not changing in markets with zero excess demand, the sequence of adjustments would converge on an equilibrium price vector. Every agent would form plans to transact conditional on expectations of the prices at which its planned purchases or sales, either spot or forward, could be executed.[2] Because unsuccessful demanders and suppliers would respond to failed attempts to execute planned trades by raising the prices offered, or reducing the prices accepted, prices for goods or services in excess demand would rise, and prices for goods and services in excess supply would fall. Insofar as agents continue to fail to execute their plans, prices for commodities in excess demand would keep rising and prices for commodities in excess supply would keep falling.

Fisher reduced this informal analysis to a formal model in which stability could be proved, at least under standard neoclassical assumptions augmented by plausible assumptions about the adjustment process. Stability of equilibrium is proved by defining some function (V) of the endogenous variables of the model (x1, . . ., xn, t) and showing that the function satisfies the Lyapounov stability conditions: V ≥ 0, dV/dt ≤ 0, and dV/dt = 0 in equilibrium. Fisher defined V as the sum of the expected utilities of households plus the expected profits of firms, all firm profits being distributed to households in equilibrium. Fisher argued that, under the NFS assumption, the expected utility of agents would decline as prices are adjusted when agents fail to execute their planned transactions, disappointed buyers raising the prices offered and disappointed sellers lowering the prices accepted. These adjustments would reduce the expected utility or profit from those transactions; in equilibrium, no further adjustment would be needed, and the Lyapounov conditions would be satisfied. The combination of increased prices for goods purchased and decreased prices for goods sold implies that, with no favorable surprises, dV/dt would be negative until an equilibrium, in which all planned transactions are executed, is reached, so that the sum of expected utility and expected profit is stabilized, confirming the stability of the disequilibrium arbitrage process.
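To make the Lyapounov technique concrete, here is a minimal numerical sketch. It is not Fisher’s model: his V is the sum of expected utilities and profits, whereas the V below is simply the sum of squared excess demands for a hypothetical two-good tatonnement. But it illustrates the device on which his proof relies: a non-negative function of the state that declines along the adjustment path and reaches zero only in equilibrium.

```python
# A minimal sketch of the Lyapounov method, not of Fisher's model: V here is
# the sum of squared excess demands for a hypothetical two-good tatonnement,
# not Fisher's sum of expected utilities and profits.
import numpy as np

def excess_demand(p):
    # Hypothetical linear excess demands, chosen so that a unique
    # equilibrium exists at p = (1, 1).
    return np.array([2.0 - 3.0 * p[0] + p[1],
                     2.0 + p[0] - 3.0 * p[1]])

def V(p):
    # Candidate Lyapounov function: V >= 0 everywhere, V = 0 in equilibrium.
    return float(np.sum(excess_demand(p) ** 2))

# Euler-integrate the price adjustment dp/dt = Z(p) and record V on the path.
p, dt = np.array([2.0, 0.25]), 0.01
path = []
for _ in range(2000):
    path.append(V(p))
    p = p + dt * excess_demand(p)

print("V non-increasing along the path:",
      all(a >= b - 1e-12 for a, b in zip(path, path[1:])))
print("price vector after adjustment:", p.round(4))   # approx. (1, 1)
```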

II         Two Problems with the No Favorable Surprise Assumption

Acknowledging that the NFS assumption is ad hoc, not a deep property of rationality implied by standard neoclassical assumptions, Fisher (1983, p. 87) justified the assumption on pragmatic grounds. “It may well be true,” he wrote,

that an economy of rational agents who understand that there is disequilibrium and act on arbitrage opportunities is driven toward equilibrium, but not if these agents continually perceive new previously unanticipated opportunities for further arbitrage. The appearance of such new and unexpected opportunities will generally disturb the system until they are absorbed.

Such opportunities can be of different kinds. The most obvious sort is the appearance of unforeseen technological developments – the unanticipated development of new products or processes. There are other sorts of new opportunities as well. An unanticipated change in tastes or the development of new uses for old products is one; the discovery of new sources of raw materials another. Further, efficiency improvements in firms are not restricted to technological developments. The discovery of a more efficient mode of internal organization or of a better way of marketing can also present a new opportunity.

Because favorable surprises following the displacement of a prior equilibrium would potentially violate the Lyapounov condition that V be non-increasing, the NFS assumption allows it to be proved that arbitrage of price differences leads to convergence on a new equilibrium. It is not, of course, only favorable surprises that can cause instability, inasmuch as the Lyapounov function must be non-negative as well as non-increasing, and a sufficiently large unfavorable surprise would violate the non-negativity condition.[3]

However, acknowledging the unrealism of the NFS assumption and its conflation of exogenous surprises with those that are endogenous, Fisher (pp. 90-91) argued that proving stability under the NFS assumption is still significant, because, if stability could not be proved under the assumption of no surprises of any kind, it likely could not be proved “under the more interesting weaker assumption” of No Exogenous Favorable Surprises.

The NFS assumption suffers from two problems deeper than Fisher acknowledged. First, it reckons only with equilibrating adjustments in current prices, even though trading is possible in both spot and forward markets for all goods and services, so that spot and forward prices for each commodity and service are being continuously arbitraged in his setup. Second, he does not take explicit account of interactions between markets of the sort that motivate Lipsey and Lancaster’s (1956, “The General Theory of Second Best”) general theory of the second best.

          A         Semi-Complete Markets

Fisher does not introduce trading in state-contingent markets, so his model might be described as semi-complete. Because all traders can choose, when transacting, between a spot and a forward transaction, depending on their liquidity position, the ratio of the spot to the forward price of any product or service traded in both ways, reflecting its own commodity interest rate, is constrained by arbitrage to match the money interest rate. In an equilibrium, both spot and forward prices must adjust so that the arbitrage relationships between spot and forward prices for all commodities and services in which both spot and forward trades are occurring are satisfied and all agents are able to execute the trades that they wish to make at the prices they expected when planning those purchases. In other words, an equilibrium requires that all agents actually trading commodities or services in which both spot and forward trades are occurring concurrently must share the same expectations of future prices. Otherwise, agents with differing expectations would have an incentive to switch from trading spot to forward or vice versa.
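As a minimal numeric sketch of that arbitrage constraint (under assumptions not in Fisher’s model: a one-period horizon, a storable commodity, zero carrying costs, and hypothetical prices), consider:

```python
# A minimal sketch of the spot-forward arbitrage constraint, assuming a
# one-period horizon, a storable commodity, zero carrying costs, and
# hypothetical prices; none of this is Fisher's own numerical apparatus.
r = 0.05                      # one-period money interest rate (assumed)
spot, forward = 100.0, 104.0  # current spot and one-period forward price

# Own commodity rate of interest implied by the spot/forward ratio:
# (1 + own_rate) = (1 + r) * spot / forward.
own_rate = (1 + r) * spot / forward - 1
print(f"implied own commodity rate: {own_rate:.4%}")

# No-arbitrage benchmark: forward = spot * (1 + r). A quoted forward away
# from this level offers a riskless profit from trading spot against
# forward while lending or borrowing at the money rate.
benchmark = spot * (1 + r)
if forward != benchmark:
    side = "sell spot, buy forward" if forward < benchmark else "buy spot, sell forward"
    print(f"arbitrage: {side}; riskless gain {abs(forward - benchmark):.2f} per unit")
```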

The point that I want to emphasize here is that, insofar as equilibration can be shown to occur in Fisher’s arbitrage model, it depends on the ability of agents to choose between purchasing spot or forward, thereby creating a market mechanism whereby agents’ expectations of future prices can be reconciled along with the adjustment of current prices (either spot or forward) that allows agents to execute their plans to transact. Equilibrium depends not only on the adjustment of current prices to equilibrium levels for spot transactions but also on the adjustment of expectations of future spot prices to equilibrium levels. Unlike the market feedback on current prices in current markets conveyed by unsatisfied demanders and suppliers, inconsistencies in agents’ notional plans for future transactions convey no discernible feedback without a broad array of forward or futures markets in which those expectations are revealed and reconciled. Without such feedback on expectations, a plausible account of how expectations of future prices are equilibrated cannot — except under implausibly extreme assumptions — easily be articulated.[4] Nor can the existence of a temporary equilibrium of current prices in current markets, beset by agents’ inconsistent and conflicting expectations, be taken for granted under standard assumptions. And even if a temporary equilibrium exists, it cannot, under standard assumptions, be shown to be optimal (Arrow and Hahn, 1971, 136-51).

            B         Market Interactions and the Theory of the Second Best

Second, in Fisher’s account, price changes occur when transactors cannot execute their desired transactions at current prices, those price changes then creating arbitrage opportunities that induce further price changes. Fisher’s stability argument hinges on defining a Lyapounov function in which the prices of goods in excess demand rise as frustrated demanders offer increased prices and prices of goods in excess supply fall as disappointed suppliers accept reduced prices.

But the argument works only if a price adjustment in one market caused by a previous excess demand or excess supply does not simultaneously create excess demands or supplies in markets not previously in disequilibrium or further upset the imbalance between supply and demand in markets already in disequilibrium.

To understand why Fisher’s ad hoc assumptions do not guarantee that the Lyapounov function he defined will be continuously non-increasing, consider the famous Lipsey and Lancaster (1956) second-best theorem, according to which, if one optimality condition in an economic model cannot be satisfied because a relevant variable is constrained, the second-best solution, rather than satisfying the other unconstrained optimum conditions, involves revising at least some of them.
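A stylized computation (my construction, not Lipsey and Lancaster’s) shows the theorem at work: with two substitute goods, quadratic quasilinear utility, and one price constrained above marginal cost, the welfare-maximizing price of the other good also departs from marginal cost.

```python
# A stylized second-best computation, my construction rather than anything
# in Lipsey and Lancaster (1956): two substitute goods, quadratic
# quasilinear utility, constant marginal cost c. Parameters are hypothetical.
import numpy as np

alpha, beta, gamma, c = 10.0, 1.0, 0.5, 2.0   # gamma != 0 links the markets

def quantities(p1, p2):
    # Invert the inverse demands p_i = alpha - beta*q_i - gamma*q_j.
    A = np.array([[beta, gamma], [gamma, beta]])
    return np.linalg.solve(A, np.array([alpha - p1, alpha - p2]))

def welfare(p1, p2):
    # Total surplus: utility of the quantities consumed minus resource cost.
    q = quantities(p1, p2)
    utility = alpha * q.sum() - 0.5 * beta * (q ** 2).sum() - gamma * q[0] * q[1]
    return utility - c * q.sum()

# First best: both prices at marginal cost.
print("first-best welfare:", round(welfare(c, c), 4))

# Constrain p1 above marginal cost; search for the second-best p2.
p1_fixed = c + 2.0
grid = np.linspace(c - 1.0, c + 3.0, 2001)
p2_star = grid[np.argmax([welfare(p1_fixed, p2) for p2 in grid])]
print("second-best p2:", round(p2_star, 3), "vs marginal cost", c)
```

In this parameterization the second-best price of the unconstrained good settles at p2 = c + (gamma/beta)(p1 − c), above marginal cost: satisfying the remaining first-best condition (p2 = c) would not be optimal, which is exactly Lipsey and Lancaster’s point.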

Contrast Fisher’s statement of the No Favorable Surprise assumption with how Lipsey and Lancaster (1956, 11) described the import of their theorem.

From this theorem there follows the important negative corollary that there is no a priori way to judge as between various situations in which some of the Paretian optimum conditions are fulfilled while others are not. Specifically, it is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled. It follows, therefore, that in a situation in which there exist many constraints which prevent the fulfilment of the Paretian optimum conditions the removal of any one constraint may affect welfare or efficiency either by raising it, by lowering it, or by leaving it unchanged.

The general theorem of the second best states that if one of the Paretian optimum conditions cannot be fulfilled a second-best optimum situation is achieved only by departing from all other optimum conditions. It is important to note that in general, nothing can be said about the direction or the magnitude of the secondary departures from optimum conditions made necessary by the original non-fulfillment of one condition.

Although Lipsey and Lancaster were not referring to the adjustment process following the displacement of a prior equilibrium, their discussion implies that the stability of an adjustment process depends on the specific sequence of adjustments in that process, inasmuch as each successive price adjustment, aside from its immediate effect on the particular market in which the price adjusts, transmits feedback effects to related markets. A price adjustment in one market may increase, decrease, or leave unchanged the efficiency of other markets, and the equilibrating tendency of a price adjustment in one market may be offset by indirect disequilibrating tendencies in other markets. When a price adjustment in one market indirectly reduces efficiency in other markets, the resulting price adjustments may well trigger further indirect efficiency reductions.

Thus, in adjustment processes involving interrelated markets, a price change in one market can indeed cause favorable surprises in one or more other markets by indirectly causing net increases in utility through feedback effects on those markets.

III        Conclusion

Consider a macroeconomic equilibrium satisfying all optimality conditions between marginal rates of substitution in production and consumption and relative prices. If that equilibrium is subjected to a macroeconomic disturbance affecting all, or most, individual markets, thereby changing all the optimality conditions corresponding to the prior equilibrium, the new equilibrium will likely entail a different set of optimality conditions. While systemic optimality requires price adjustments to satisfy all the optimality conditions, actual price adjustments occur sequentially, in piecemeal fashion, with prices changing market by market or firm by firm, price changes occurring as agents perceive demand or cost changes. Those changes need not always induce equilibrating adjustments, nor is the arbitraging of price differences necessarily equilibrating when, under suboptimal conditions, prices have generally deviated from their equilibrium values.

Smithian invisible-hand theorems are of little relevance in explaining the transition to a new equilibrium following a macroeconomic disturbance, because, in this context, the invisible-hand theorem begs the relevant question by assuming that the equilibrium price vector has been found. When all markets are in disequilibrium, moving toward equilibrium in one market has repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium in that market alone restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So, unless all optimality conditions are satisfied simultaneously, there is no assurance that piecemeal adjustments will bring the system closer to an optimal, or even a second-best, state.

If my interpretation of the NFS assumption is correct, Fisher’s stability results may provide support for Leijonhufvud’s (1973, “Effective Demand Failures”) suggestion that there is a corridor of stability around an equilibrium time path, within which, under normal circumstances, an economy will not be displaced too far from that path, so that an economy, unless displaced outside that corridor, will revert, more or less on its own, to its equilibrium path.[5]

Leijonhufvud attributed such resilience to the holding of buffer stocks of inventories of goods, holdings of cash, and the availability of credit lines enabling agents to operate normally despite disappointed expectations. If negative surprises persist, however, agents cannot add to, or draw from, inventories indefinitely, or finance normal expenditures indefinitely by borrowing or drawing down liquid assets. Once buffer stocks are exhausted, the stabilizing properties of the economy are overwhelmed by destabilizing tendencies: income-constrained agents cut expenditures, as implied by Keynesian multiplier analysis, triggering a cumulative contraction and rendering a spontaneous recovery, without compensatory fiscal or monetary measures, impossible.

But my critique of Fisher’s NFS assumption suggests other, perhaps deeper, reasons why displacements of equilibrium may not be self-correcting: such displacements may invalidate previously held expectations, and in the absence of a dense array of forward and futures markets, there is likely no market mechanism that would automatically equilibrate unsettled and inconsistent expectations. In such an environment, price adjustments in current spot markets may trigger further price adjustments that, under the logic of the Lipsey-Lancaster second-best theorem, may be welfare-diminishing rather than welfare-enhancing, and may therefore not equilibrate, but only further disequilibrate, the macroeconomy.


[1] Fisher’s stability analysis was conducted in the context of complete markets in which all agents could make transactions for future delivery at prices agreed on in the present. Thus, for Fisher, arbitrage means that agents choose between contracting now for future delivery and waiting to transact later, based on their expectations of whether the current forward price is more or less than the expected future spot price. In equilibrium, expectations of future prices are correct, so that agents are indifferent between making forward transactions and waiting to make spot transactions, unless liquidity considerations dictate a preference for selling forward now or postponing buying till later.

[2] Fisher assumed that, for every commodity or service, transactions can be made either spot or forward. When Fisher spoke of arbitrage, he was referring to the decisions of agents whether to transact spot or forward given the agent’s expectations of the spot price at the time of planned exchange, the forward prices adjusting so that, with no transactions costs, agents are indifferent, at the margin, between transacting spot or forward, given their expectations of the future spot price.

[3] It was therefore incorrect for Fisher (1983, 88) to assert: “we can hope to show that the continued presence of new opportunities is a necessary condition for instability — for continued change,” inasmuch as continued negative surprises could also cause continued — or at least prolonged — change.

[4] Fisher does recognize (pp. 88-89) that changes in expectations can be destabilizing. However, he considers only the possibility of exogenous events that cause expectations to change, but does not consider the possibility that expectations may change endogenously in a destabilizing fashion in the course of an adjustment process following a displacement from a prior equilibrium. See, however, his discussion (p. 91) of the distinction between exogenous and endogenous shocks.

How is . . . an [“exogenous”] shock to be distinguished from the “endogenous” shock brought about by adjustment to the original shock? No Favorable Surprise may not be precisely what is wanted as an assumption in this area, but it is quite difficult to see exactly how to refine it.

A proof of stability under No Favorable Surprise, then, seems quite desirable for a number of related reasons. First, it is the strongest version of an assumption of No Favorable Exogenous Surprise (whatever that may mean precisely); hence, if stability does not hold under No Favorable Surprise it cannot be expected to hold under the more interesting weaker assumption.  

[5] Presumably because income and output are maximized along the equilibrium path, an economy is unlikely to overshoot that path unless entrepreneurial or policy errors cause such overshooting, although Austrian business-cycle theory, and perhaps certain other monetary business-cycle theories, suggest that such overshooting has not always been an uncommon event.

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for the abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not too realistic stories about how markets might clear, of which the two best-known stories are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good correctly anticipated by market participants corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so it is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent try to normalize their insistence that equilibrium is an obligatory postulate that must be accepted by economists by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that it is empirically empty to claim that any collection of economic time series is either in equilibrium or out of equilibrium. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium somehow supports the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that, even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand functions to behave in such a way that a unique or a stable equilibrium exists. But if we have no good argument to explain why a unique or at least a stable neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?
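The point can be made concrete with a deliberately artificial sketch. The aggregate excess demand below is simply posited, as the SMD results permit, rather than derived from individual preferences; it has three equilibria, the middle one unstable, so the outcome of a tatonnement depends entirely on where it starts.

```python
# A sketch of the SMD point: the aggregate excess demand is posited directly
# (as the SMD results permit), not derived from preferences, and is chosen
# to have equilibria at p = 0.5, 1.0, 1.5, with the middle one unstable.
def Z(p):
    return (p - 0.5) * (p - 1.0) * (1.5 - p)

for p0 in (0.95, 1.05):
    p, dt = p0, 0.01
    for _ in range(5000):          # Euler-integrate dp/dt = Z(p)
        p += dt * Z(p)
    print(f"tatonnement starting at {p0} converges to {p:.3f}")
# Starting just below 1.0 leads to 0.5; starting just above leads to 1.5.
```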

As Franklin Fisher (1989) showed, the inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), the comparative-statics exercise, is conditional on the assumption that there is at least one stable general-equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and Galileo, following Copernicus, conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the center of the solar system, with Kepler proposing that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or of coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, Keynes explicitly denied, and argued at length, that his results depended on the assumption of sticky wages, so he certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Franklin Fisher on Adjustment Processes and Stability

As an addendum to yesterday’s post, I will merely quote the first three paragraphs of Franklin Fisher’s entry in The New Palgrave on “Adjustment Processes and Stability.” The whole article merits careful attention and study, as does his great book Disequilibrium Foundations of Equilibrium Economics. See also a previous post of mine on Fisher’s work.

Economic Theory is pre-eminently a matter of equilibrium analysis. In particular, the centerpiece of the subject — general equilibrium theory — deals with the existence and efficiency properties of competitive equilibrium. Nor is this only an abstract matter. The principal policy insight of economics — that a competitive price system produces desirable results and that government interference will generally lead to an inefficient allocation of resources — rests on the intimate connections between competitive equilibrium and Pareto efficiency.

Yet the very power and elegance of equilibrium analysis often obscures the fact that it rests on a very uncertain foundation. We have no similarly elegant theory of what happens out of equilibrium, of how agents behave when their plans are frustrated. As a result, we have no rigorous basis for believing that equilibria can be achieved or maintained if disturbed. Unless one robs words of their meaning and defines every state of the world as an “equilibrium” in the sense that agents do what they do instead of something else, there is no disguising the fact that this is a major lacuna in economic analysis.

Nor is that lacuna only important in microeconomics. For example, the Keynesian question of whether an economy can become trapped in a situation of underemployment is not merely a question of whether underemployment equilibria exist. It is also a question of whether such equilibria are stable. As such, its answer depends on the properties of the general (dis)equilibrium system which macroeconomic analysis attempts to summarize. Not surprisingly, modern attempts to deal with such systems have been increasingly forced to treat such familiar macroeconomic issues as the role of money.

We do, of course, have some idea as to how disequilibrium adjustment takes place. From Adam Smith’s discussion of the “Invisible Hand” to the standard elementary textbook’s treatment of the “Law of Supply and Demand”, economists have stressed how the perception of profit opportunities leads agents to act. What remains unclear is whether (as most economists believe) the pursuit of such profit opportunities in fact leads to equilibrium — more particularly, to a competitive equilibrium where such opportunities no longer exist. If one thinks of a competitive economy as a dynamic system driven by the self-seeking actions of individual agents, does that system have competitive equilibria as stable rest points? If so, are such equilibria attained so quickly that the system can be studied without attention to its disequilibrium behaviour? The answers to these crucial questions remain unclear.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one – the most egregious, but not the only one – is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which neither Holmes nor Moriarty can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and who tutored Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another mathematician in Menger’s circle, John von Neumann, who was interested in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as an equilibrium solution of their interaction could be found once the payoffs to their joint choices were specified, enabling Holmes and Moriarty to choose optimal probabilistic strategies.
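For concreteness, here is a sketch of the mixed-strategy resolution, using a two-by-two zero-sum version of the train game with a hypothetical payoff matrix (not the payoffs discussed by von Neumann and Morgenstern):

```python
# A sketch of the mixed-strategy solution of a 2x2 zero-sum version of the
# Holmes-Moriarty train game. The payoffs to Holmes are hypothetical:
# rows are Holmes's stop (Dover, Canterbury), columns are Moriarty's.
a, b = -2.0, 2.0   # Holmes at Dover:      caught (-2) or escapes abroad (+2)
c, d = 1.0, -1.0   # Holmes at Canterbury: evades (+1) or caught (-1)

# Closed-form equalizing strategies for a 2x2 zero-sum game without a
# saddle point (each strategy makes the opponent indifferent).
denom = a - b - c + d
x = (d - c) / denom                # probability Holmes picks Dover
y = (d - b) / denom                # probability Moriarty picks Dover
value = (a * d - b * c) / denom    # expected payoff to Holmes

print(f"Holmes plays Dover with probability {x:.3f}")
print(f"Moriarty plays Dover with probability {y:.3f}")
print(f"value of the game to Holmes: {value:.3f}")
```

The equalizing property is the relevant point for the microfoundations discussion: each player’s strategy is optimal only given the other’s, so the equilibrium embodies precisely the mutual consistency of plans that, as argued below, the Lucasian approach assumes rather than explains.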

I don’t think that the game-theoretic solution of the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point for the microfoundations discussion is that the mathematical solution of the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general-equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general-equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general-equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis, (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, modeling a static one-period economy, not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapounov differential equations could describe market price adjustments as functions of market excess demands that would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.
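A one-market sketch, with hypothetical linear demand and supply, illustrates what the exercise takes for granted:

```python
# A minimal comparative-statics sketch with hypothetical linear demand and
# supply: compare equilibria before and after a demand shift, then check
# that the Walrasian adjustment actually reaches the new equilibrium.
def demand(p, a): return a - 1.5 * p
def supply(p):    return 0.5 + 1.0 * p

def equilibrium(a):
    # demand = supply  =>  a - 1.5 p = 0.5 + p  =>  p* = (a - 0.5) / 2.5
    return (a - 0.5) / 2.5

p_old, p_new = equilibrium(10.0), equilibrium(12.0)
print(f"old equilibrium price {p_old:.2f}, new equilibrium price {p_new:.2f}")

# The comparison is informative only if dp/dt = D(p) - S(p) carries the
# market from the old equilibrium to the new one. Here the excess-demand
# slope (-1.5 - 1.0) is negative, so the adjustment is stable; were it
# positive, the demand shift would drive price away from the new equilibrium.
p, dt = p_old, 0.01
for _ in range(2000):
    p += dt * (demand(p, 12.0) - supply(p))
print(f"price after adjustment from the old equilibrium: {p:.2f}")
```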

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped the determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be, and are, taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had grappled with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that, before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general-equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. The introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline, to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the Lucas and Sargent declaration of war:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the declaration of war became explicit later, once right thinking had been enforced; woe unto him who dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.
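
The point can be put in a few lines of code. Here is a toy numerical sketch of my own construction (the linear plan rules are hypothetical, chosen only for illustration, and are not anything in DeVroey's essay): at a non-equilibrium price, each trader's plan is individually optimal, yet one of them cannot be carried out, so optimal plans do not add up to optimal behavior.

```python
# A toy illustration of the gap between optimizing PLANS and optimizing
# BEHAVIOR.  Each trader solves her own choice problem taking an announced
# price as given; whether the resulting plans can be executed depends on
# whether they happen to be mutually consistent.

def planned_sale(p):
    return 2.0 * p            # hypothetical seller's optimal plan at price p

def planned_purchase(p):
    return 10.0 - p           # hypothetical buyer's optimal plan at price p

price = 4.0                         # an announced, non-equilibrium price
supply = planned_sale(price)        # 8.0 units offered
demand = planned_purchase(price)    # 6.0 units sought
executed = min(supply, demand)      # only the short side of the market trades in full

print(supply, demand, executed)     # 8.0 6.0 6.0: both plans are optimal,
                                    # but the seller's plan cannot be executed
```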

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that is required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge: the pretense of upholding the principle conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.
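
The Holmes-Moriarty example can itself be made concrete. The sketch below uses the game's usual stylization (the station names and payoff structure are my own labeling): each player's best-response reasoning overturns the other's, so naive best-response dynamics cycle forever, which is exactly the sense in which defining an equilibrium (here, Nash's theorem delivers only a mixed 50/50 equilibrium) says nothing about how, or whether, it is reached.

```python
# The Holmes-Moriarty (matching-pennies) game: Moriarty wins by alighting
# at the same station as Holmes; Holmes wins by evading him.  There is no
# pure-strategy equilibrium, and alternating best responses never settle.

STATIONS = ("Canterbury", "Dover")

def moriarty_best_response(holmes_choice):
    return holmes_choice                     # Moriarty matches Holmes

def holmes_best_response(moriarty_choice):
    return STATIONS[1] if moriarty_choice == STATIONS[0] else STATIONS[0]

holmes, history = STATIONS[0], []
for _ in range(6):
    moriarty = moriarty_best_response(holmes)   # Moriarty reasons about Holmes
    holmes = holmes_best_response(moriarty)     # Holmes then reasons about Moriarty
    history.append((holmes, moriarty))

print(history)   # an endless two-cycle: each round of reasoning undoes the last
```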

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable, because the inherent theoretical weaknesses of both paradigms left the synthesis unable to withstand criticisms of its theoretical apparatus and of its policy recommendations, particularly its inability to provide a straightforward analysis of inflation when inflation became a serious policy problem in the late 1960s and 1970s. And neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model and the absence of both a market for real capital and a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary-equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice, or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox, his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general-equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher, in his important work Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance on how to improve the current sorry state of macroeconomics.

 

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now, to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands how offended Lucas was by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott. So offended, apparently, that he just stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance — are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may fail, even under a tatonnement process in which no trading is allowed at disequilibrium prices, to lead to the discovery of the equilibrium price vector. Of course, in the real world, trading occurs routinely at disequilibrium prices, so the “mechanical” forces pushing an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
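
Here is a minimal sketch of that single-market price-adjustment rule, with hypothetical linear schedules of my own choosing. In one market the rule converges mechanically; the point of the paragraph above is that no analogous guarantee carries over to many interdependent markets (Scarf famously constructed examples in which the corresponding multi-market rule cycles forever).

```python
# The textbook price-adjustment rule in a SINGLE market, with hypothetical
# schedules D(p) = 10 - p and S(p) = 2p, so the equilibrium price is p* = 10/3.

def excess_demand(p):
    return (10.0 - p) - 2.0 * p       # Z(p) = D(p) - S(p)

p, speed = 1.0, 0.2                   # arbitrary starting price and adjustment speed
for _ in range(50):
    p += speed * excess_demand(p)     # raise p under excess demand, cut it under excess supply

print(round(p, 4))                    # 3.3333: convergence is automatic here, but
                                      # nothing guarantees it when many markets
                                      # adjust at once
```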

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations of future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.
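
A cobweb model, the standard textbook illustration of this point (not Hayek's own formalism), makes the role of expectations explicit. In the sketch below, with a hypothetical demand schedule D(p) = 10 - p, producers naively expect next period's price to equal the current one; whether their expectational errors die out or explode depends entirely on the supply slope, so price flexibility by itself guarantees nothing.

```python
# A cobweb-model sketch: output is planned on last period's price, and the
# market then clears ex post at whatever price absorbs that output.  With
# demand D(p) = 10 - p, the realized price is p' = 10 - s*p, where s is the
# (hypothetical) supply slope and p the naively expected price.

def simulate(supply_slope, p0=1.0, periods=12):
    p, path = p0, []
    for _ in range(periods):
        p = 10.0 - supply_slope * p   # realized market-clearing price
        path.append(round(p, 3))      # ... which becomes next period's expectation
    return path

print(simulate(0.5))  # expectational errors shrink: path settles near p* = 20/3
print(simulate(2.0))  # expectational errors explode: no tendency toward equilibrium
                      # (the negative "prices" just flag the breakdown)
```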

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Franklin Fisher on the Stability(?) of General Equilibrium

The eminent Franklin Fisher, winner of the J. B. Clark Medal in 1973, a famed econometrician and antitrust economist, who was the expert economics witness for IBM in its long battle with the U. S. Department of Justice, was later the expert witness for the Justice Department in the antitrust case against Microsoft, and is currently emeritus professor of microeconomics at MIT, visited the FTC today to give a talk about proposals for the efficient sharing of water between Israel, Palestine, and Jordan. The talk was interesting and informative, but I must admit that I was more interested in Fisher’s views on the stability of general equilibrium, the subject of a monograph he wrote for the Econometric Society, Disequilibrium Foundations of Equilibrium Economics, a book which I have not yet read, but hope to read before very long.

However, I did find a short paper by Fisher, “The Stability of General Equilibrium – What Do We Know and Why Is It Important?” (available here), which was included in the volume General Equilibrium Analysis: A Century after Walras, edited by Pascal Bridel.

Fisher’s contribution was to show that the early stability analyses of general equilibrium, despite the efforts of some of the best economists of the mid-twentieth century, e.g., Hicks, Samuelson, Arrow and Hurwicz (all Nobel Prize winners), failed to provide a useful analysis of the question whether the general equilibrium described by Walras, whose existence was first demonstrated under very restrictive assumptions by Abraham Wald, and later under more general conditions by Arrow and Debreu, is stable or not.

Although we routinely apply comparative-statics exercises to derive what Samuelson mislabeled “meaningful theorems,” meaning refutable propositions about the directional effects of a parameter change on some observable economic variable(s), such as the effect of an excise tax on the price and quantity sold of the taxed commodity, those comparative-statics exercises are predicated on the assumption that the exercise starts from an initial position of equilibrium and that the parameter change leads, in a short period of time, to a new equilibrium. But there is no theory describing the laws of motion leading from one equilibrium to another, so the whole exercise is built on the mere assumption that a general equilibrium is sufficiently stable so that the old and the new equilibria can be usefully compared. In other words, microeconomics is predicated on macroeconomic foundations, i.e., the stability of a general equilibrium. The methodological demand for microfoundations for macroeconomics is thus a massive and transparent exercise in question begging.
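
To see how much such an exercise takes for granted, consider the excise-tax example worked out numerically (the linear schedules below are hypothetical, chosen only for illustration). The comparative-statics result, price up and quantity down, drops out of two lines of algebra, but only on the unproved premise that the market actually travels from the old equilibrium to the new one.

```python
# A comparative-statics exercise of exactly the kind described above, with
# hypothetical schedules: demand D(p) = 10 - p and supply S(p) = 2*(p - t),
# where t is a per-unit excise tax collected from sellers.

def equilibrium(t):
    p = (10.0 + 2.0 * t) / 3.0        # solves D(p) = S(p): 10 - p = 2*(p - t)
    return round(p, 3), round(10.0 - p, 3)

print(equilibrium(0.0))   # (3.333, 6.667): the pre-tax equilibrium
print(equilibrium(1.0))   # (4.0, 6.0): price rises, quantity falls, a
                          # "meaningful theorem" only if the new equilibrium
                          # is actually reached
```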

In his paper on the stability of general equilibrium, Fisher observes that there are four important issues to be explored by general-equilibrium theory: existence, uniqueness, optimality, and stability. Of these he considers optimality to be the most important, as it provides a justification for a capitalistic market economy. Fisher continues:

So elegant and powerful are these results, that most economists base their conclusions upon them and work in an equilibrium framework – as they do in partial equilibrium analysis. But the justification for so doing depends on the answer to the fourth question listed above, that of stability, and a favorable answer to that is by no means assured.

It is important to understand this point which is generally ignored by economists. No matter how desirable points of competitive general equilibrium may be, that is of no consequence if they cannot be reached fairly quickly or maintained thereafter, or, as might happen when a country decides to adopt free markets, there are bad consequences on the way to equilibrium.

Milton Friedman remarked to me long ago that the study of the stability of general equilibrium is unimportant, first, because it is obvious that the economy is stable, and, second, because if it isn’t stable we are wasting our time. He should have known better. In the first place, it is not at all obvious that the actual economy is stable. Apart from the lessons of the past few years, there is the fact that prices do change all the time. Beyond this, however, is a subtler and possibly more important point. Whether or not the actual economy is stable, we largely lack a convincing theory of why that should be so. Lacking such a theory, we do not have an adequate theory of value, and there is an important lacuna in the center of microeconomic theory.

Yet economists generally behave as though this problem did not exist. Perhaps the most extreme example of this is the view of the theory of Rational Expectations that any disequilibrium disappears so fast that it can be ignored. (If the 50-dollar bill were really on the sidewalk, it would be gone already.) But this simply assumes the problem away. The pursuit of profits is a major dynamic force in the competitive economy. To only look at situations where the Invisible Hand has finished its work cannot lead to a real understanding of how that work is accomplished. (p. 35)

I would also note that Fisher confirms a proposition that I have advanced a couple of times previously, namely that Walras’s Law is not generally valid except in a full general equilibrium with either a complete set of markets or correct price expectations. Outside of general equilibrium, Walras’s Law is valid only if trading is not permitted at disequilibrium prices, i.e., Walrasian tatonnement. Here’s how Fisher puts it.

In this context, it is appropriate to remark that Walras’s Law no longer holds in its original form. Instead of the sum of the money value of all excess demands over all agents being zero, it now turned out that, at any moment of time, the same sum (including the demands for shares of firms and for money) equals the difference between the total amount of dividends that households expect to receive at that time and the amount that firms expect to pay. This difference disappears in equilibrium where expectations are correct, and the classic version of Walras’s Law then holds.
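
The classic version of the Law is easy to verify numerically. The sketch below builds a two-good exchange economy with Cobb-Douglas traders (a textbook setup of my own choosing, not Fisher's model) and confirms that the money value of all excess demands sums to zero at any price vector whatever, because each agent's planned purchases are exactly financed by planned sales. Fisher's point is that once trading occurs at disequilibrium prices, or dividend expectations are incorrect, that budget-balance condition, and with it the classic version of the Law, breaks down.

```python
# Checking the classic Walras's Law in a two-good exchange economy with
# Cobb-Douglas traders.  Each agent spends a fixed share of wealth on each
# good, so p1*Z1 + p2*Z2 = 0 identically, in or out of equilibrium.

AGENTS = [  # (endowment of good 1, endowment of good 2, budget share on good 1)
    (4.0, 1.0, 0.3),
    (1.0, 5.0, 0.7),
]

def excess_demands(p1, p2):
    z1 = z2 = 0.0
    for e1, e2, share in AGENTS:
        wealth = p1 * e1 + p2 * e2
        z1 += share * wealth / p1 - e1           # demand minus endowment, good 1
        z2 += (1.0 - share) * wealth / p2 - e2   # demand minus endowment, good 2
    return z1, z2

z1, z2 = excess_demands(2.0, 5.0)      # an arbitrary, non-equilibrium price vector
print(round(2.0 * z1 + 5.0 * z2, 12))  # 0.0: the value of excess demands vanishes
```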


