
Franklin Fisher on the Disequilibrium Foundations of Economics and the Stability of General Equilibrium Redux

Last November I posted a revised section of a paper I’m now working on, an earlier version of which is posted on SSRN. I have now further revised the paper and that section in particular, so I’m posting the current version of that section in hopes of receiving further comments, criticisms, and suggestions before I submit the paper to a journal. I will be very grateful to all those who respond, and will try not to be too cranky in my replies.

I         Fisher’s model and the No Favorable Surprise Assumption

Unsuccessful attempts to prove, under standard neoclassical assumptions, the stability of general equilibrium led Franklin Fisher (1983, Disequilibrium Foundations of Equilibrium Economics) to suggest an alternative approach to proving stability, based on three assumptions: (1) trading occurs at disequilibrium prices (in contrast to the standard assumption that no trading takes place until a new equilibrium is found, prices being adjusted under a tatonnement process); (2) all unsatisfied transactors in any disequilibrated market are on the same side of that market, either all unsatisfied demanders or all unsatisfied suppliers; (3) the “no favorable surprises” (NFS) assumption previously advanced by Hahn (1978, “On Non-Walrasian Equilibria”).

At the starting point of a disequilibrium process, some commodities would be in excess demand, some in excess supply, and, perhaps, some in equilibrium. Let Zi denote the excess demand for commodity i, i ranging from 1 to n; let commodities in excess demand be numbered from 1 to k, commodities initially in equilibrium from k+1 to m, and commodities in excess supply from m+1 to n. Thus, by assumption, no agent has an excess supply of commodities numbered from 1 to k, no agent has an excess demand for commodities numbered from m+1 to n, and no agent has either an excess demand or an excess supply of commodities numbered between k+1 and m.[1]
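In symbols (a schematic restatement of the paragraph above, not Fisher’s own notation, with Zi the excess demand for commodity i):

```latex
\[
Z_i > 0 \ (i = 1,\dots,k), \qquad
Z_i = 0 \ (i = k+1,\dots,m), \qquad
Z_i < 0 \ (i = m+1,\dots,n).
\]
```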

Fisher argued that in disequilibrium, with prices, not necessarily uniform across all trades, rising in markets with excess demand, falling in markets with excess supply, and unchanged in markets with zero excess demand, the sequence of adjustments would converge on an equilibrium price vector. Every agent would form plans to transact conditional on expectations of the prices at which its planned purchases or sales, whether spot or forward, could be executed.[2] Because unsuccessful demanders and suppliers would respond to failed attempts to execute planned trades by raising the prices they offered or reducing the prices they accepted, prices for goods and services in excess demand would rise, and prices for goods and services in excess supply would fall.

Fisher reduced this informal analysis to a formal model in which stability could be proved, at least under standard neoclassical assumptions augmented by plausible assumptions about the adjustment process. The stability of equilibrium is proved by defining a function (V) of the endogenous variables of the model (x1, . . ., xn, t) and showing that the function satisfies the Lyapounov stability conditions: V ≥ 0, dV/dt ≤ 0, and dV/dt = 0 in equilibrium. Fisher defined V as the sum of the expected utilities of households plus the expected profits of firms, all firm profits being distributed to households in equilibrium. Fisher argued that, under the NFS assumption, the expected utility of agents would decline as prices are adjusted when agents fail to execute their planned transactions, disappointed buyers raising the prices they offer and disappointed sellers lowering the prices they accept. These adjustments would reduce the expected utility or profit from those transactions; in equilibrium no further adjustment would be needed, and the Lyapounov conditions would be satisfied. The combination of increased prices for goods purchased and decreased prices for goods sold implies that, with no favorable surprises, dV/dt would be negative until an equilibrium, in which all planned transactions are executed, is reached and the sum of expected utility and expected profit is stabilized, confirming the stability of the disequilibrium arbitrage process.
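Schematically (my notation, not a transcription of Fisher’s formal model), the Lyapounov argument runs:

```latex
% V: sum of households' expected utilities and firms' expected profits
\[
V(t) \;=\; \sum_{h} \mathbb{E}\,U_h(t) \;+\; \sum_{f} \mathbb{E}\,\pi_f(t),
\]
% with stability requiring the Lyapounov conditions
\[
V \;\ge\; 0, \qquad \frac{dV}{dt} \;\le\; 0, \qquad
\frac{dV}{dt} = 0 \ \text{only in equilibrium.}
\]
```

Under NFS, every adjustment by a disappointed transactor (a buyer raising its bid, a seller lowering its ask) weakly reduces V, so V declines monotonically toward its equilibrium value.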

II         Two Problems with the No Favorable Surprise Assumption

Acknowledging that the NFS assumption is ad hoc, not a deep property of rationality implied by standard neoclassical assumptions, Fisher (1983, p. 87) justified the assumption on pragmatic grounds. “It may well be true,” he wrote,

that an economy of rational agents who understand that there is disequilibrium and act on arbitrage opportunities is driven toward equilibrium, but not if these agents continually perceive new previously unanticipated opportunities for further arbitrage. The appearance of such new and unexpected opportunities will generally disturb the system until they are absorbed.

Such opportunities can be of different kinds. The most obvious sort is the appearance of unforeseen technological developments – the unanticipated development of new products or processes. There are other sorts of new opportunities as well. An unanticipated change in tastes or the development of new uses for old products is one; the discovery of new sources of raw materials another. Further, efficiency improvements in firms are not restricted to technological developments. The discovery of a more efficient mode of internal organization or of a better way of marketing can also present a new opportunity.

Because favorable surprises following the displacement of a prior equilibrium would potentially violate the Lyapounov condition that V be non-increasing, the NFS assumption allows it to be proved that arbitrage of price differences leads to convergence on a new equilibrium. It is not, of course, only favorable surprises that can cause instability, inasmuch as the Lyapounov function must be non-negative as well as non-increasing, and a sufficiently large unfavorable surprise would violate the non-negativity condition.[3]

However, acknowledging the unrealism of the NFS assumption and its conflation of exogenous surprises with those that are endogenous, Fisher (pp. 90-91) argued that proving stability under the NFS assumption is still significant, because, if stability could not be proved under the assumption of no surprises of any kind, it likely could not be proved “under the more interesting weaker assumption” of No Exogenous Favorable Surprises.

The NFS assumption suffers from two problems deeper than Fisher acknowledged. First, it reckons only with equilibrating adjustments in current prices, even though trading is possible in both spot and forward markets for all goods and services, so that spot and forward prices for each commodity and service are continuously arbitraged in his setup. Second, he does not take explicit account of interactions between markets of the sort that motivate Lipsey and Lancaster’s (1956, “The General Theory of Second Best”) general theory of the second best.

          A. Semi-complete markets

Fisher does not introduce trading in state-contingent markets, so his model might be described as semi-complete. All traders have the choice, when transacting, of engaging in either a spot or a forward transaction, depending on their liquidity position. When spot and forward trades are occurring for the same product or service, the ratio of those prices, reflecting the own commodity interest rate, is constrained by arbitrage to match the money interest rate. In an equilibrium, both spot and forward prices must adjust so that the arbitrage relationships between spot and forward prices are satisfied for all commodities and services in which both spot and forward trades are occurring, and so that all agents are able to execute the trades that they wish to make at the prices they expected when planning those purchases. In other words, an equilibrium requires that all agents actually trading commodities or services in which both spot and forward trades are occurring concurrently must share the same expectations of future prices. Otherwise, agents with differing expectations would have an incentive to switch from trading spot to forward or vice versa.

The point that I want to emphasize here is that, insofar as equilibration can be shown to occur in Fisher’s arbitrage model, it depends on the ability of agents to choose between purchasing spot or forward, thereby creating a market mechanism whereby agents’ expectations of future prices can be reconciled along with the adjustment of current prices (either spot or forward) that allows agents to execute their plans to transact. Equilibrium depends not only on the adjustment of current prices to equilibrium levels for spot transactions but on the adjustment of expectations of future spot prices to equilibrium levels. Unlike the market feedback on current prices in current markets conveyed by unsatisfied demanders and suppliers, inconsistencies in agents’ notional plans for future transactions convey no discernible feedback without a broad array of forward or futures markets in which those expectations are revealed and reconciled. Without such feedback on expectations, a plausible account of how expectations of future prices are equilibrated cannot, except under implausibly extreme assumptions, easily be articulated.[4] Nor can the existence of a temporary equilibrium of current prices in current markets, beset by agents’ inconsistent and conflicting expectations, be taken for granted under standard assumptions. And even if a temporary equilibrium exists, it cannot, under standard assumptions, be shown to be optimal (Arrow and Hahn, 1971, 136-51).

            B.         Market interactions and the theory of second-best

Second, in Fisher’s account, price changes occur when transactors cannot execute their desired transactions at current prices, those price changes then creating arbitrage opportunities that induce further price changes. Fisher’s stability argument hinges on defining a Lyapounov function in which the prices of goods in excess demand rise as frustrated demanders offer increased prices and prices of goods in excess supply fall as disappointed suppliers accept reduced prices.

But the argument works only if a price adjustment in one market caused by a previous excess demand or excess supply does not simultaneously create excess demands or supplies in markets not previously in disequilibrium or further upset the imbalance between supply and demand in markets already in disequilibrium.

To understand why Fisher’s ad hoc assumptions do not guarantee that the Lyapounov function he defined will be continuously non-increasing, consider the famous Lipsey and Lancaster (1956) second-best theorem, according to which, if one optimality condition in an economic model cannot be satisfied because a relevant variable is constrained, the second-best solution, rather than satisfying the other, unconstrained, optimum conditions, generally involves departures from at least some of those conditions as well.

Contrast Fisher’s statement of the No Favorable Surprise assumption with how Lipsey and Lancaster (1956, 11) described the import of their theorem.

From this theorem there follows the important negative corollary that there is no a priori way to judge as between various situations in which some of the Paretian optimum conditions are fulfilled while others are not. Specifically, it is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled. It follows, therefore, that in a situation in which there exist many constraints which prevent the fulfilment of the Paretian optimum conditions the removal of any one constraint may affect welfare or efficiency either by raising it, by lowering it, or by leaving it unchanged.

The general theorem of the second best states that if one of the Paretian optimum conditions cannot be fulfilled a second-best optimum situation is achieved only by departing from all other optimum conditions. It is important to note that in general, nothing can be said about the direction or the magnitude of the secondary departures from optimum conditions made necessary by the original non-fulfillment of one condition.
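The structure of the theorem can be sketched as follows (a compressed paraphrase, with F an objective function, Φ a constraint, and subscripts denoting partial derivatives; the particular form of the added constraint is the illustrative one Lipsey and Lancaster use):

```latex
% Maximize an objective F subject to a resource constraint \Phi:
\[
\max_{x_1,\dots,x_n} F(x_1,\dots,x_n)
\quad \text{s.t.} \quad \Phi(x_1,\dots,x_n) = 0
\]
% First-best (Paretian) conditions:
\[
\frac{F_i}{F_n} = \frac{\Phi_i}{\Phi_n}, \qquad i = 1,\dots,n-1.
\]
% Suppose one of those conditions is blocked by an additional constraint:
\[
\frac{F_1}{F_n} = k\,\frac{\Phi_1}{\Phi_n}, \qquad k \ne 1.
\]
% Then at the second-best optimum, in general,
\[
\frac{F_i}{F_n} \ne \frac{\Phi_i}{\Phi_n}, \qquad i = 2,\dots,n-1.
\]
```

The point of the last line is that satisfying the remaining first-best conditions one by one is not, in general, the route to the constrained optimum.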

Although Lipsey and Lancaster were not referring to the adjustment process following the displacement of a prior equilibrium, their discussion implies that the stability of an adjustment process depends on the specific sequence of adjustments in that process, inasmuch as each successive price adjustment, aside from its immediate effect on the particular market in which the price adjusts, transmits feedback effects to related markets. A price adjustment in one market may increase, decrease, or leave unchanged the efficiency of other markets, and the equilibrating tendency of a price adjustment in one market may be offset by indirect disequilibrating tendencies in other markets. When a price adjustment in one market indirectly reduces efficiency in other markets, the resulting price adjustments may well trigger further indirect efficiency reductions.

Thus, in adjustment processes involving interrelated markets, a price change in one market can indeed cause a favorable surprise in one or more other markets by indirectly causing net increases in utility through feedback effects on those markets.

III        Conclusion

Consider a macroeconomic equilibrium satisfying all optimality conditions between marginal rates of substitution in production and consumption and relative prices. If that equilibrium is subjected to a macroeconomic disturbance affecting all, or most, individual markets, thereby changing all optimality conditions corresponding to the prior equilibrium, the new equilibrium will likely entail a different set of optimality conditions. While systemic optimality requires price adjustments to satisfy all the optimality conditions, actual price adjustments occur sequentially, in piecemeal fashion, with prices changing market by market or firm by firm, price changes occurring as agents perceive demand or cost changes. Those changes need not always induce equilibrating adjustments, nor is the arbitraging of price differences necessarily equilibrating when, under suboptimal conditions, prices have generally deviated from their equilibrium values.

Smithian invisible-hand theorems are of little relevance in explaining the transition to a new equilibrium following a macroeconomic disturbance, because, in this context, the invisible-hand theorem begs the relevant question by assuming that the equilibrium price vector has been found. When all markets are in disequilibrium, moving toward equilibrium in one market has repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium in that market alone restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So, unless all optimality conditions are satisfied simultaneously, there is no assurance that piecemeal adjustments will bring the system closer to an optimal, or even a second-best, state.

If my interpretation of the NFS assumption is correct, Fisher’s stability results may provide support for Leijonhufvud’s (1973, “Effective Demand Failures”) suggestion that there is a corridor of stability around an equilibrium time path within which, under normal circumstances, an economy will not be displaced too far from that path, so that an economy, unless displaced outside that corridor, will revert, more or less on its own, to its equilibrium path.[5]

Leijonhufvud attributed such resilience to the holding of buffer stocks of inventories of goods, holdings of cash, and the availability of credit lines enabling agents to operate normally despite disappointed expectations. If negative surprises persist, agents will be unable to add to, or draw from, inventories indefinitely, or to finance normal expenditures by borrowing or drawing down liquid assets. Once buffer stocks are exhausted, the stabilizing properties of the economy are overwhelmed by the destabilizing tendencies: income-constrained agents cut expenditures, as implied by Keynesian multiplier analysis, triggering a cumulative contraction and rendering a spontaneous recovery, without compensatory fiscal or monetary measures, impossible.

But my critique of Fisher’s NFS assumption suggests other, perhaps deeper, reasons why displacements of equilibrium may not be self-correcting: such displacements may invalidate previously held expectations, and, in the absence of a dense array of forward and futures markets, there is likely no market mechanism that would automatically equilibrate unsettled and inconsistent expectations. In such an environment, price adjustments in current spot markets may induce further price adjustments that, under the logic of the Lipsey-Lancaster second-best theorem, may in fact be welfare-diminishing rather than welfare-enhancing, and may therefore not equilibrate, but only further disequilibrate, the macroeconomy.


[1] Fisher’s stability analysis was conducted in the context of complete markets in which all agents could make transactions for future delivery at prices agreed on in the present. Thus, for Fisher, arbitrage means that agents choose between contracting for future delivery or waiting to transact until later based on their expectations of whether the forward price now is more or less than the expected future price. In equilibrium, expectations of future prices are correct, so that agents are indifferent between making forward transactions or waiting to make spot transactions, unless liquidity considerations dictate a preference for selling forward now or postponing buying till later.

[2] Fisher assumed that, for every commodity or service, transactions can be made either spot or forward. When Fisher spoke of arbitrage, he was referring to the decisions of agents whether to transact spot or forward given the agent’s expectations of the spot price at the time of planned exchange, the forward prices adjusting so that, with no transactions costs, agents are indifferent, at the margin, between transacting spot or forward, given their expectations of the future spot price.

[3] It was therefore incorrect for Fisher (1983, 88) to assert: “we can hope to show that the continued presence of new opportunities is a necessary condition for instability — for continued change,” inasmuch as continued negative surprises could also cause continued, or at least prolonged, change.

[4] Fisher does recognize (pp. 88-89) that changes in expectations can be destabilizing. However, he considers only the possibility of exogenous events that cause expectations to change, but does not consider the possibility that expectations may change endogenously in a destabilizing fashion in the course of an adjustment process following a displacement from a prior equilibrium. See, however, his discussion (p. 91) of the distinction between exogenous and endogenous shocks.

How is . . . an [“exogenous”] shock to be distinguished from the “endogenous” shock brought about by adjustment to the original shock? No Favorable Surprise may not be precisely what is wanted as an assumption in this area, but it is quite difficult to see exactly how to refine it.

A proof of stability under No Favorable Surprise, then, seems quite desirable for a number of related reasons. First, it is the strongest version of an assumption of No Favorable Exogenous Surprise (whatever that may mean precisely); hence, if stability does not hold under No Favorable Surprise it cannot be expected to hold under the more interesting weaker assumption.  

[5] Presumably because income and output are maximized along the equilibrium path, it is unlikely that an economy will overshoot the path unless entrepreneurial or policy errors cause such overshooting, presumably an unlikely occurrence, although Austrian business-cycle theory, and perhaps certain other monetary business-cycle theories, suggest that such overshooting has not always been an uncommon event.

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely any less fervor by the left than the right. It’s only some eccentric economists – from diverse positions on the political spectrum – that have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to work, as shown by the Sonnenschein-Mantel-Debreu Theorem.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, market prices determined in individual markets need not ensure that all markets clear simultaneously or satisfy the optimality conditions.

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.
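A toy two-market simulation makes the point concrete. This is entirely my own construction, not Fisher’s model: excess demands are linearized around a hypothetical equilibrium price vector p_star, and each price follows the simple rule of responding only to its own market’s imbalance. When own-price effects dominate, the rule converges; when cross-market feedback dominates, the very same rule drives prices away from equilibrium.

```python
import numpy as np

# Hypothetical linearized excess-demand system (illustration only):
#   Z(p) = A @ (p - p_star),  with p_star the equilibrium price vector.
# Each price adjusts only to its own market's excess demand:
#   p_{t+1} = p_t + step * Z(p_t)
def tatonnement(A, p_star, p0, step=0.1, iters=200):
    p = np.asarray(p0, dtype=float).copy()
    for _ in range(iters):
        p = p + step * (A @ (p - p_star))
    return p

p_star = np.array([1.0, 1.0])   # assumed equilibrium prices
p0 = np.array([1.5, 0.6])       # initial disequilibrium prices

# Own-price effects dominate cross effects: all eigenvalues of A are
# negative, and the own-market rule converges to p_star.
A_own = np.array([[-1.0, 0.2],
                  [0.2, -1.0]])

# Cross-market feedback dominates: one eigenvalue of A is positive, so
# the same rule diverges even though each market "corrects" its own
# imbalance at every step.
A_cross = np.array([[-0.2, 1.0],
                    [1.0, -0.2]])

print(np.linalg.norm(tatonnement(A_own, p_star, p0) - p_star))    # tiny
print(np.linalg.norm(tatonnement(A_cross, p_star, p0) - p_star))  # huge
```

The design choice here is deliberate: the adjustment rule never looks at the other market, mirroring the partial-equilibrium story in the paragraph above, so the divergence in the second case comes entirely from the interdependence the rule ignores.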

In the standard microeconomics we are taught in econ 1 and econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of second best is the statement by the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people that are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. The idea that a partial-equilibrium response, a fall in money wages, can by itself remedy a general-disequilibrium disorder is untenable. Keynes and the Keynesians were therefore completely wrong to accuse Say of committing a fallacy in diagnosing the cause of depressions. The only fallacy lay in the assumption that market adjustments would automatically ensure the restoration of something resembling full-employment equilibrium.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
