Archive for the 'partial equilibrium' Category

General Equilibrium, Partial Equilibrium and Costs

Neoclassical economics is now bifurcated between Marshallian partial-equilibrium and Walrasian general-equilibrium analyses. With the apparent inability of neoclassical theory to explain the coordination failure of the Great Depression, J. M. Keynes proposed an alternative paradigm to explain the involuntary unemployment of the 1930s. But within two decades, Keynes’s contribution was subsumed under what became known as the neoclassical synthesis of the Keynesian and Walrasian theories (about which I have written frequently, e.g., here and here). Because the Keynesian side of the synthesis lacked microfoundations that could be reconciled with the assumptions of Walrasian general-equilibrium theory, the neoclassical synthesis eventually collapsed under the charge that Keynesian theory rested on inadequate microfoundations.

But Walrasian general-equilibrium theory provides no plausible, much less axiomatic, account of how general equilibrium is, or could be, achieved. Even the imaginary tatonnement process lacks an algorithm that guarantees that a general-equilibrium solution, if it exists, would be found. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian partial-equilibrium analysis, with market prices adjusting to equilibrate supply and demand.
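To see what the tatonnement idea amounts to, and why it falls short of an algorithm, here is a minimal sketch in Python of the kind of rule Walras had in mind: raise the price of any good in excess demand, lower the price of any good in excess supply, and repeat. The two-good economy, the excess-demand functions and the adjustment step are all invented for illustration; nothing in the rule itself guarantees convergence for arbitrary excess-demand functions.

```python
# Illustrative tatonnement sketch (invented two-good economy, not a general algorithm):
# prices are revised in proportion to excess demand until all markets (nearly) clear.

def excess_demand(p):
    # Toy excess-demand functions chosen only to make the example concrete.
    z1 = 10.0 / p[0] - 2.0 * p[0] + 0.5 * p[1]
    z2 = 8.0 / p[1] - 1.5 * p[1] + 0.3 * p[0]
    return [z1, z2]

def tatonnement(p, step=0.05, tol=1e-6, max_iters=10_000):
    for _ in range(max_iters):
        z = excess_demand(p)
        if all(abs(zi) < tol for zi in z):
            return p  # an (approximate) market-clearing price vector was found
        # Raise prices where demand exceeds supply, lower them where supply exceeds demand.
        p = [max(pi + step * zi, 1e-9) for pi, zi in zip(p, z)]
    return None  # the process may simply fail to converge

print(tatonnement([1.0, 1.0]))
```

In this particular toy economy the iteration happens to settle down, but that is a property of the invented functions, not of the price-adjustment rule itself.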

Yet modern macroeconomics, despite its explicit Walrasian assumptions, implicitly relies on the Marshallian intuition that the fundamentals of general equilibrium, namely prices and costs, are known to agents who, except for random disturbances, continuously form rational expectations of market-clearing equilibrium prices in all markets.

I’ve written many earlier posts (e.g., here and here) contesting, in one way or another, the notion that all macroeconomic theories must be founded on first principles (i.e., microeconomic axioms about optimizing individuals). Any macroeconomic theory not appropriately founded on the axioms of individual optimization by consumers and producers is now dismissed as scientifically defective and unworthy of attention by serious scientific practitioners of macroeconomics.

When contesting the presumed necessity for macroeconomics to be microeconomically founded, I’ve often used Marshall’s partial-equilibrium method as a point of reference. Though derived from underlying preference functions that are independent of prices, the demand curves of partial-equilibrium analysis presume that all product prices, except the price of the product under analysis, are held constant. Similarly, the supply curves are derived from individual firm marginal-cost curves whose geometric position or algebraic description depends critically on the prices of raw materials and factors of production used in the production process. But neither the prices of alternative products to be purchased by consumers nor the prices of raw materials and factors of production are given independently of the general-equilibrium solution of the whole system.

Thus, partial-equilibrium analysis, to be analytically defensible, requires a ceteris-paribus proviso. But to be analytically tenable, that proviso must posit an initial position of general equilibrium. Unless the analysis starts from a state of general equilibrium, the assumption that all prices but one remain constant can’t be maintained, the constancy of disequilibrium prices being a nonsensical assumption.

The ceteris-paribus proviso also entails an assumption about the market under analysis; either the market itself, or the disturbance to which it’s subject, must be so small that any change in the equilibrium price of the product in question has de minimis repercussions on the prices of every other product and of every input and factor of production used in producing that product. Thus, the validity of partial-equilibrium analysis depends on the presumption that the unique and locally stable general equilibrium is approximately undisturbed by whatever changes result from the posited change in the single market being analyzed. But that presumption is not so self-evidently plausible that our reliance on it to make empirical predictions is always, or even usually, justified.

Perhaps the best argument for taking partial-equilibrium analysis seriously is that the analysis identifies certain deep structural tendencies that, at least under “normal” conditions of moderate macroeconomic stability (i.e., moderate unemployment and reasonable price stability), will usually be observable despite the disturbing influences that are subsumed under the ceteris-paribus proviso. That assumption — an assumption of relative ignorance about the nature of the disturbances that are assumed to be constant — posits that those disturbances are more or less random, and as likely to cause errors in one direction as another. Consequently, the predictions of partial-equilibrium analysis can be assumed to be statistically, though not invariably, correct.

Of course, the more interconnected a given market is with other markets in the economy, and the greater its size relative to the total economy, the less confidence we can have that the implications of partial-equilibrium analysis will be corroborated by empirical investigation.

Despite its frequent unsuitability, economists and commentators are often willing to deploy partial-equilibrium analysis in offering policy advice even when the necessary ceteris-paribus proviso of partial-equilibrium analysis cannot be plausibly upheld. For example, two of the leading theories of the determination of the rate of interest are the loanable-funds doctrine and the Keynesian liquidity-preference theory. Both these theories of the rate of interest suppose that the rate of interest is determined in a single market — either for loanable funds or for cash balances — and that the rate of interest adjusts to equilibrate one or the other of those two markets. But the rate of interest is an economy-wide price whose determination is an intertemporal-general-equilibrium phenomenon that cannot be reduced, as the loanable-funds and liquidity preference theories try to do, to the analysis of a single market.

Similarly, partial-equilibrium analysis of the supply of, and the demand for, labor has been used of late to predict changes in wages from immigration and to advocate for changes in immigration policy, while, in an earlier era, it was used to recommend wage reductions as a remedy for persistently high aggregate unemployment. In the General Theory, Keynes criticized those using a naïve version of the partial-equilibrium method to recommend curing high unemployment by cutting wage rates, correctly observing that full employment requires the satisfaction of certain macroeconomic conditions for equilibrium that would not necessarily be satisfied by cutting wages.

However, in the very same volume, Keynes argued that the rate of interest is determined exclusively by the relationship between the quantity of money and the demand to hold money, ignoring that the rate of interest is an intertemporal relationship between current and expected future prices, an insight earlier explained by Irving Fisher that Keynes himself had expertly deployed in his Tract on Monetary Reform and elsewhere, including chapter 17 of the General Theory itself.

Evidently, the allure of supply-demand analysis can sometimes be too powerful for well-trained economists to resist, even when they themselves know that it ought to be resisted.

A further point also requires attention: the conditions necessary for partial-equilibrium analysis to be valid are never really satisfied; firms don’t know the costs that determine the optimal rate of production when they actually must settle on a plan of how much to produce, how much raw materials to buy, and how much labor and other factors of production to employ. Marshall, the originator of partial-equilibrium analysis, analogized supply and demand to the two blades of a pair of scissors acting jointly to achieve an intended result.

But Marshall erred in thinking that supply (i.e., cost) is an independent determinant of price, because the equality of costs and prices is a characteristic of general equilibrium. It can be applied to partial-equilibrium analysis only under the ceteris-paribus proviso that situates partial-equilibrium analysis in a pre-existing general equilibrium of the entire economy. It is only in a general-equilibrium state that the cost incurred by a firm in producing its output represents the value of the foregone output that could have been produced had the firm’s output been reduced. Only if the analyzed market is so small that changes in how much firms in that market produce do not affect the prices of the inputs used to produce that output can definite marginal-cost curves be drawn or algebraically specified.

Unless general equilibrium obtains, prices need not equal costs, as measured by the quantities and prices of inputs used by firms to produce any product. Partial equilibrium analysis is possible only if carried out in the context of general equilibrium. Cost cannot be an independent determinant of prices, because cost is itself determined simultaneously along with all other prices.

But even aside from the reasons why partial-equilibrium analysis presumes that all prices, but the price in the single market being analyzed, are general-equilibrium prices, there’s another, even more problematic, assumption underlying partial-equilibrium analysis: that producers actually know the prices that they will pay for the inputs and resources to be used in producing their outputs. The cost curves of the standard economic analysis of the firm, from which the supply curves of partial-equilibrium analysis are derived, presume that the prices of all inputs and factors of production correspond to those that are consistent with general equilibrium. But general-equilibrium prices are never known by anyone except the hypothetical agents in a general-equilibrium model with complete markets, or by agents endowed with perfect foresight (aka rational expectations in the strict sense of that misunderstood term).

At bottom, Marshallian partial-equilibrium analysis is comparative statics: a comparison of two alternative (hypothetical) equilibria distinguished by some difference in the parameters characterizing the two equilibria. By comparing the equilibria corresponding to the different parameter values, the analyst can infer the effect (at least directionally) of a parameter change.
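A compact way to see what such a comparative-statics inference involves (my notation, not Marshall’s) is to write the market-clearing condition as an excess-demand equation in the price p and a parameter α, and differentiate:

```latex
z\bigl(p^{*}(\alpha),\,\alpha\bigr) = 0
\qquad\Longrightarrow\qquad
\frac{dp^{*}}{d\alpha} \;=\; -\,\frac{\partial z/\partial \alpha}{\partial z/\partial p}.
```

Because stability of the partial equilibrium is taken to require that excess demand fall as the price rises (∂z/∂p < 0), the sign of dp*/dα is inherited from the sign of ∂z/∂α: a parameter change that raises excess demand raises the equilibrium price. Note that the formula compares the two equilibria p*(α) and p*(α + dα) and says nothing about the path, if any, leading from one to the other.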

But comparative-statics analysis is subject to a serious limitation: comparing two alternative hypothetical equilibria is very different from making empirical predictions about the effects of an actual parameter change in real time.

Comparing two alternative equilibria corresponding to different values of a parameter may be suggestive of what could happen after a policy decision to change that parameter, but there are many reasons why the change implied by the comparative-statics exercise might not match or even approximate the actual change.

First, the initial state was almost certainly not an equilibrium state, so systemic changes will be difficult, if not impossible, to disentangle from the effect of the parameter change implied by the comparative-statics exercise.

Second, even if the initial state was an equilibrium, the transition to a new equilibrium is never instantaneous. The transitional period therefore leads to changes that in turn induce further systemic changes that cause the new equilibrium toward which the system gravitates to differ from the final equilibrium of the comparative-statics exercise.

Third, each successive change in the final equilibrium toward which the system is gravitating leads to further changes that in turn keep changing the final equilibrium. There is no reason why the successive changes lead to convergence on any final equilibrium end state. Nor is there any theoretical proof that the adjustment path leading from one equilibrium to another ever reaches an equilibrium end state. The gap between the comparative-statics exercise and the theory of adjustment in real time remains unbridged and may, even in principle, be unbridgeable.

Finally, without a complete system of forward and state-contingent markets, equilibrium requires not just that current prices converge to equilibrium prices; it requires that expectations of all agents about future prices converge to equilibrium expectations of future prices. Unless agents’ expectations of future prices converge to their equilibrium values, an equilibrium may not even exist, let alone be approached or attained.

So the Marshallian assumption that producers know their costs of production and make production and pricing decisions based on that knowledge is both factually wrong and logically untenable. Nor do producers know what the demand curves for their products really look like, except in the extreme case in which suppliers take market prices to be parametrically determined. But even then, they make decisions not on known prices, but on expected prices. Their expectations are constantly being tested against market information about actual prices, information that causes decision makers to affirm or revise their expectations in light of the constant flow of new information about prices and market conditions.

I don’t reject partial-equilibrium analysis, but I do call attention to its limitations, and to its unsuitability as a supposedly essential foundation for macroeconomic analysis, especially inasmuch as microeconomic analysis, AKA partial-equilibrium analysis, is utterly dependent on the uneasy macrofoundation of general-equilibrium theory. The intuition of Marshallian partial equilibrium cannot fill the gap, long ago noted by Kenneth Arrow, in the neoclassical theory of equilibrium price adjustment.

What’s so Great about Supply-Demand Analysis?

Just about the first thing taught to economics students is that there are demand curves for goods and services and supply curves of goods and services. Demand curves show how much customers wish to buy of a particular good or service within a period of time at various prices that might be charged for that good or service. The supply curve shows how much suppliers of a good or service would offer to sell at those prices.

Economists assume, and given certain more basic assumptions can (almost) prove, that customers will seek to buy less at higher prices for a good or service than at lower prices. Similarly, they assume that suppliers of the good or service offer to sell more at higher prices than at lower prices. Reflecting those assumptions, demand curves are downward-sloping and supply curves are upward-sloping. An upward-sloping supply curve is likely to intersect a downward-sloping demand curve at a single point, which corresponds to an equilibrium that allows customers to buy as much as they want to and suppliers to sell as much as they want to in the relevant time period.

This analysis is the bread and butter of economics. It leads to the conclusion that, when customers can’t buy as much as they would like, the price goes up, and, when suppliers can’t sell as much as they would like, the price goes down. So the natural tendency in any market is for the price to rise if it’s less than the equilibrium price, and to fall if it’s greater than the equilibrium price. This is the logic behind letting the market determine prices.

It can also be shown, if some further assumptions are made, that the intersection of the supply and demand curves represents an optimal allocation of resources in the sense that the total value of output is maximized. The necessary assumptions are, first, that the demand curve measures the marginal value placed on additional units of output, and, second, that the supply curve measures the marginal cost of producing additional units of output. The intersection of the supply and the demand curves corresponds to the maximization of the total value of output, because the marginal cost represents the value of output that could have been produced if the resources devoted to producing the good in question had been shifted to more valuable uses. When the supply curve rises above the demand curve, the resources required to produce additional units of the good would yield more value if devoted to producing something else.
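A minimal numerical sketch may help fix ideas. The linear demand and supply schedules below are invented for illustration; the code locates the price at which quantity demanded equals quantity supplied and then checks that total surplus (buyers’ valuation minus sellers’ cost) is higher at the equilibrium quantity than at nearby quantities.

```python
# Illustrative linear schedules (all numbers invented for this example).
def quantity_demanded(p):
    return max(100 - 2 * p, 0)   # demand: Qd = 100 - 2p

def quantity_supplied(p):
    return max(3 * p - 20, 0)    # supply: Qs = 3p - 20

# Equilibrium: 100 - 2p = 3p - 20  =>  p* = 24, q* = 52.
p_star = 24
q_star = quantity_demanded(p_star)

def total_surplus(q):
    # Approximate the areas under the curves by summing over whole units.
    # Marginal value read off the inverse demand curve: p = 50 - q/2.
    # Marginal cost read off the inverse supply curve: p = (q + 20)/3.
    value = sum(50 - 0.5 * x for x in range(int(q)))
    cost = sum((x + 20) / 3 for x in range(int(q)))
    return value - cost

# Surplus is (approximately) maximized at the equilibrium quantity.
print(q_star, quantity_supplied(p_star), round(total_surplus(q_star), 1))
print(round(total_surplus(q_star - 10), 1), round(total_surplus(q_star + 10), 1))
```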

There is much to be said for the analysis, and it would be wrong to dismiss it. But it’s also important to understand its limitations, and, especially, the implicit assumptions on which it relies. In a sense, supply-demand analysis is foundational, the workhorse model that is the first resort of economists. But its role as a workhorse model does not automatically render analyses untethered to supply and demand illegitimate.

Supply-demand analysis has three key functions. First, it focuses attention on the idea of an equilibrium price at which all buyers can buy as much as they would like, and all sellers can sell as much as they would like. In a typical case, with an upward sloping supply curve and a downward-sloping demand curve, there is one, and only one, price with that property.

Second, as explained above, there is a sense in which that equilibrium price, aside from enabling the mutual compatibility of buyers’ and sellers’ plans to buy or to sell, has optimal properties.

Third, it’s a tool for predicting how changes in market conditions, like imposing a sales or excise tax, affect customers and suppliers. It compares two equilibrium positions on the assumption that only one parameter changes and predicts the effect of the parameter change by comparing the new and old equilibria. It’s the prototype for the comparative-statics method.
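As a sketch of that prototype, the short example below (using the same invented linear schedules as in the earlier sketch) compares the no-tax equilibrium with the equilibrium after a per-unit excise tax is levied on suppliers. Only the tax parameter differs between the two equilibria; nothing in the comparison describes how the market would actually move from one to the other.

```python
# Comparative statics of a per-unit excise tax t levied on suppliers,
# using the same invented linear schedules as in the earlier sketch:
# demand Qd = 100 - 2p, supply (at the price received by sellers) Qs = 3(p - t) - 20.

def equilibrium(t):
    # Solve 100 - 2p = 3(p - t) - 20 for the buyers' price p.
    p_buyers = (120 + 3 * t) / 5
    q = 100 - 2 * p_buyers
    p_sellers = p_buyers - t
    return p_buyers, p_sellers, q

before = equilibrium(0.0)   # (24.0, 24.0, 52.0)
after = equilibrium(5.0)    # buyers pay more, sellers receive less, quantity falls
print(before, after)
```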

The chief problem with supply-demand analysis is that it requires a strict ceteris-paribus assumption, so that everything but the price and the quantity of the good under analysis remains constant. For many reasons, that assumption can’t literally be true. If the price of the good rises (falls), the real income of consumers decreases (increases). And if the price rises (falls), suppliers likely pay more (less) for their inputs. Changes in the price of one good also affect the prices of other goods, which, in turn, may affect the demand for the good under analysis. Each of those consequences would cause the supply and demand curves to shift from their initial positions. How much the ceteris-paribus assumption matters depends on how much of their incomes consumers spend on the good under analysis. The more they spend, the less plausible the ceteris paribus assumption.

But another implicit assumption underlies supply-demand analysis: that the economic system starts from a state of general equilibrium. Why must this assumption be made? The answer is that it’s implied by the ceteris-paribus assumption that all other prices remain constant. Unless other markets are in equilibrium, it can’t be assumed that all other prices and incomes remain constant; if they aren’t, then prices for other goods, and for inputs used to produce the product under analysis, will change, violating the ceteris-paribus assumption. Unless the prices (and wages) of the inputs used to produce the good under analysis remain constant, the supply curve of the product can’t be assumed to remain unchanged.

On top of that, Walras’s Law implies that if one market is in disequilibrium, then at least one other market must also be in disequilibrium. So an internal contradiction lies at the heart of supply-demand analysis. The contradiction can be avoided, but not resolved, only by assuming that the market being analyzed is so minute relative to the rest of the economy, or so isolated from all other markets, that a disturbance that changes its equilibrium position either would not disrupt the existing equilibria in all other markets, or would disturb those equilibria so slightly that the effects can be safely ignored.
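For readers who want the formal statement behind that claim, Walras’s Law (in standard notation, not specific to this post) says that the value of excess demands, summed over all n markets at any price vector, is identically zero:

```latex
\sum_{i=1}^{n} p_i \, z_i(p) \;\equiv\; 0 .
```

So, with all prices positive, if excess demand z_j(p) is positive in some market j, then excess demand must be negative in at least one other market k.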

But we’re not done yet. The underlying general equilibrium on which the partial equilibrium (supply-demand) analysis is based, exists only conceptually, not in reality. Although it’s possible to prove the existence of such an equilibrium under more or less mathematically plausible assumptions about convexity and the continuity of the relevant functions, it is less straightforward to prove that the equilibrium is unique, or at least locally stable. If it is not unique or locally stable, there is no guarantee that comparative statics is possible, because a displacement from an unstable equilibrium may cause an unpredictable adjustment that violates the ceteris-paribus assumption.

Finally, and perhaps most problematic, comparative statics is merely a comparison of two alternative equilibria, neither of which can be regarded as the outcome of a theoretically explicable, much less practical, process leading from initial conditions to the notional equilibrium state. Accordingly, neither is there any process whereby a disturbance to — a parameter change in — an initial equilibrium would lead from the initial equilibrium to a new equilibrium. That is what comparative statics means: the comparison of two alternative and disconnected equilibria. There is no transition from one to the other, merely a comparison of the difference between them attributable to the change in a particular parameter in the initial conditions underlying the equilibria.

Given all the assumptions that must be satisfied for the basic implications of conventional supply-demand analysis to be unambiguously valid, that analysis obviously cannot provide demonstrably true predictions. As just explained, the comparative-statics method in general and supply-demand analysis in particular provide no actual predictions; they are merely conjectural comparisons of alternative notional equilibria.

The ceteris paribus assumption is often dismissed as making any theory tautological and untestable. But if an ad hoc assumption, introduced when observations don’t match the predictions derived from a given theory, is independently testable, it adds to the empirical content of the theory, as demonstrated by the ad hoc assumption of an eighth planet (Neptune) in our solar system when predictions about the orbits of the seven known planets did not accord with their observed orbits.

Friedman’s famous methodological argument that only predictions, not assumptions, matter is clearly wrong. Economists have to be willing to modify assumptions and infer the implications that follow from modified or supplementary assumptions rather than take for granted that assumptions cannot meaningfully and productively affect the implications of a general analytical approach. It would be a travesty if physicists insisted on maintaining the no-friction assumption in every context merely because it is a simplifying assumption that makes the analysis tractable. That approach is a prescription for scientific stagnation.

The art of economics is to identify the key assumptions that ought to be modified to make a general analytical approach relevant and fruitful. When they are empirically testable, ad hoc assumptions that modify the ceteris paribus restriction constitute scientific advance.

But it’s important to understand how tenuous the connection is between the formalism of supply-demand analysis and of the comparative-statics method and the predictive power of that analysis and that method. The formalism stops far short of being able to generate clear and unambiguous predictions. The relationship between the formalism and the real world is tenuous, and the apparent logical rigor of the formalism must be supplemented by notable and sometimes embarrassing doses of hand-waving or question-begging.

And it is also worth remembering the degree to which the supposed rigor of neoclassical microeconomic supply-demand formalism depends on the macroeconomic foundation of the existence (and at least approximate reality) of a unique or locally stable general equilibrium.

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving a “meaningful theorem” from the 1st- and 2nd-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s label “meaningful theorems” was a misnomer reflecting mid-20th-century operationalism. The theorems can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions b/w markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model in his 1959 essay “Toward a Theory of Price Adjustment,” observing that, because every agent in a perfectly competitive economy takes prices as given, the theory provides no account of who actually adjusts prices toward their equilibrium values.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.

37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.

38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.

39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.

40. Equilibrium prices aren’t determined by an economic process or by the idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.

41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.

42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.

43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning, with each agent possessing knowledge available only to an omniscient central planner.

44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.

45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.

46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.

47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices toward that expectation notwithstanding the existence of expectational differences.

48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.

49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.

50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.

51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.

52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.

53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written a number of previous posts discussing the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers, by Jack Schwartz, appears in a volume edited by Ernst Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. Because it was unavailable on the internet except behind a paywall, I bought a used copy for $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems; multimedia authoring tools; experimental studies of visual perception; multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrödinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrödinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can’t be specified, or even meaningfully restricted, except on very special simplifying assumptions; it is too special, because the simplifying assumptions about the agents and the technologies and the constraints and the price-setting mechanism are at best only approximations and, at worst, are entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach, an impatience characteristic of the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the necessary qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian, he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall “market” for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. But for some reason that didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve for holding money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

Phillips Curve Musings: Second Addendum on Keynes and the Rate of Interest

In my two previous posts (here and here), I have argued that the partial-equilibrium analysis of a single market, like the labor market, is inappropriate and not particularly relevant, in situations in which the market under analysis is large relative to other markets, and likely to have repercussions on those markets, which, in turn, will have further repercussions on the market under analysis, violating the standard ceteris paribus condition applicable to partial-equilibrium analysis. When the standard ceteris paribus condition of partial equilibrium is violated, as it surely is in analyzing the overall labor market, the analysis is, at least, suspect, or, more likely, useless and misleading.

I suggested that Keynes in chapter 19 of the General Theory was aiming at something like this sort of argument, and I think he was largely right in his argument. But, in all modesty, I think that Keynes would have done better to have couched his argument in terms of the distinction between partial-equilibrium and general-equilibrium analysis. But his Marshallian training, which he simultaneously embraced and rejected, may have made it difficult for him to adopt the Walrasian general-equilibrium approach that Marshall and the Marshallians regarded as overly abstract and unrealistic.

In my next post, I suggested that the standard argument about the tendency of public-sector budget deficits to raise interest rates by competing with private-sector borrowers for loanable funds is fundamentally misguided, because it, too, inappropriately applies the partial-equilibrium analysis of a narrow market for government securities, or even a more broadly defined market for loanable funds in general.

That is a gross mistake, because the rate of interest is determined in a general-equilibrium system along with markets for all long-lived assets, embodying expected flows of income that must be discounted to the present to determine an estimated present value. Some assets are riskier than others, and that risk is reflected in those valuations. But the rate of interest is distilled from the combination of all of those valuations, not prior to, or apart from, those valuations. Interest rates of different duration and different risk are embedded in the entire structure of current and expected prices for all long-lived assets. To focus solely on a very narrow subset of markets for newly issued securities, whose combined value is only a small fraction of the total value of all existing long-lived assets, is to miss the forest for the trees.
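In the familiar valuation formula (standard notation, not anything specific to this post), the value of any long-lived asset is the discounted sum of its expected income stream, so the relevant discount rates are embedded in, and have to be distilled from, the whole structure of asset prices rather than determined in any single market:

```latex
V_0 \;=\; \sum_{t=1}^{T} \frac{E[\,CF_t\,]}{(1 + r_t)^{t}} ,
```

where the r_t are the (risk-adjusted) discount rates applicable to income expected t periods ahead.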

What I want to point out in this post is that Keynes, whom I credit for having recognized that partial-equilibrium analysis is inappropriate and misleading when applied to an overall market for labor, committed exactly the same mistake that he condemned in the context of the labor market, by asserting that the rate of interest is determined in a single market: the market for money. According to Keynes, the market rate of interest is that rate which equates the stock of money in existence with the amount of money demanded by the public. The higher the rate of interest, Keynes argued, the less money the public wants to hold.

Keynes, applying the analysis of Marshall and his other Cambridge predecessors, provided a wonderful analysis of the factors influencing the amount of money that people want to hold (usually expressed in terms of a fraction of their income). However, as superb as his analysis of the demand for money was, it was a partial-equilibrium analysis, and there was no recognition on his part that other markets in the economy are influenced by, and exert influence upon, the rate of interest.

What makes Keynes’s partial-equilibrium analysis of the interest rate so difficult to understand is that, in chapter 17 of the General Theory, a magnificent tour de force of verbal general-equilibrium theorizing, he explained the relationships that must exist between the expected returns for alternative long-lived assets that are held in equilibrium. Yet, disregarding his own analysis of the equilibrium relationship between returns on alternative assets, Keynes insisted on explaining the rate of interest in a one-period model (a model roughly corresponding to IS-LM) with only two alternative assets: money and bonds, but no real capital asset.

A general-equilibrium analysis of the rate of interest ought to have at least two periods, and it ought to have a real capital good that may be held in the present for use or consumption in the future, a possibility entirely missing from the Keynesian model. I have discussed this major gap in the Keynesian model in a series of posts (here, here, here, here, and here) about Earl Thompson’s 1976 paper “A Reformulation of Macroeconomic Theory.”

Although Thompson’s model seems to me too simple to account for many macroeconomic phenomena, it would have been a far better starting point for the development of macroeconomics than any of the models from which modern macroeconomic theory has evolved.

Phillips Curve Musings: Addendum on Budget Deficits and Interest Rates

In my previous post, I discussed a whole bunch of stuff, but I spent a lot of time discussing the inappropriate use of partial-equilibrium supply-demand analysis to explain price and quantity movements when price and quantity movements in those markets are dominated by precisely those forces that are supposed to be held constant — the old ceteris paribus qualification — in doing partial equilibrium analysis. Thus, the idea that in a depression or deep recession, high unemployment can be cured by cutting nominal wages is a classic misapplication of partial equilibrium analysis in a situation in which the forces primarily affecting wages and employment are not confined to a supposed “labor market,” but reflect broader macro-economic conditions. As Keynes understood, but did not explain well to his economist readers, analyzing unemployment in terms of the wage rate is futile, because wage changes induce further macroeconomic effects that may counteract whatever effects resulted from the wage changes.

Well, driving home this afternoon, I was listening to Marketplace on NPR with Kai Ryssdal interviewing Neil Irwin. Ryssdal asked Irwin why there is so much nervousness about the economy when unemployment and inflation are both about as low as they have ever been — certainly at the same time — in the last 50 years. Irwin’s response was that it is unsettling to many people that, with budget deficits high and rising, we observe stable inflation and falling interest rates on long-term Treasuries. This, after we have been told for so long that budget deficits drive up the cost of borrowing money and are a major cause of inflation. The cognitive dissonance of stable inflation, falling interest rates and rapidly rising budget deficits, Irwin suggested, accounts for a vague feeling of disorientation, and gives rise to fears that the current apparent stability can’t last very long and will lead to some sort of distress or crisis in the future.

I’m not going to try to reassure Ryssdal and Irwin that there will never be another crisis. I certainly wouldn’t venture to say that all is now well with the Republic, much less with the rest of the world. I will just stick to the narrow observation that the bad habit of predicting the future course of interest rates by the size of the current budget deficit has no basis in economic theory, and reflects a colossal misunderstanding of how interest rates are determined. And that misunderstanding is precisely the one I discussed in my previous post about the misuse of partial-equilibrium analysis when general-equilibrium analysis is required.

To infer how interest rates are determined from the market for government debt alone is a category error. Government debt is a long-lived financial asset providing an income stream, and its price reflects the present value of the promised income stream. Based on the price of a particular instrument with a given duration, it is possible to calculate a corresponding interest rate; that calculation is just a fairly simple mathematical exercise.
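To make that calculation concrete, here is a minimal sketch, my own illustration rather than anything in the original argument, of backing an implied yield out of an observed bond price by bisection; the coupon, face value, maturity, and price in the example are entirely hypothetical.

# Back out the annual yield implied by the price of a coupon bond.
# All of the numbers below are hypothetical, chosen only for illustration.

def bond_price(y, coupon, face, years):
    """Present value of the annual coupons plus the face value at yield y."""
    return sum(coupon / (1 + y) ** t for t in range(1, years + 1)) + face / (1 + y) ** years

def implied_yield(price, coupon, face, years, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection: find the yield at which the bond's present value equals its price."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if bond_price(mid, coupon, face, years) > price:
            lo = mid   # present value too high, so the implied yield must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# A hypothetical 10-year bond: $25 annual coupon on $1,000 face value, trading at $980.
print(round(implied_yield(980.0, 25.0, 1000.0, 10), 4))

The bisection works because the present value of the promised income stream is a decreasing function of the yield.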

But it is a mistake to think that the interest rate for that duration is determined in the market for government debt of that duration. Why? Because there are many other physical assets or financial instruments that could be held instead of government debt of any particular duration. And asset holders in a financially sophisticated economy can easily shift from one type of asset to another at will, at fairly minimal transactions costs. So it is very unlikely that any long-lived asset is so special that the expected yield from holding it varies independently of the expected yields from holding the alternative assets available.

That’s not to say that there are no differences in the expected yields from different assets, just that at the margin, taking into account the different characteristics of different assets, their expected returns must be fairly closely connected, so that any large change in the conditions in the market for any single asset is unlikely to have a large effect on the price of that asset alone. Rather, any change in one market will cause shifts in asset-holdings across different markets that will tend to offset the immediate effect that would have been reflected in a single market viewed in isolation.

This holds true as long as each specific market is relatively small compared to the entire economy. That is certainly true for the US economy and the world economy into which the US economy is very closely integrated. The value of all assets — real and financial — dwarfs the total outstanding value of US Treasuries. Interest rates are a measure of the relationship between expected flows of income and the value of the underlying assets.

To assume that increased borrowing by the US government to fund a substantial increase in the US budget deficit will substantially affect the overall economy-wide relationship between current and expected future income flows on the one hand and asset values on the other is wildly implausible. So no one should be surprised to find that the recent sharp increase in the US budget deficit has had no perceptible effect on the yields at which US government debt is now trading.

A more likely cause of a change in interest rates would be an increase in expected inflation, but inflation expectations are not necessarily correlated with the budget deficit, and changes in inflation expectations aren’t necessarily reflected in corresponding changes in nominal interest rates, contrary to what Monetarist economists have often maintained.

So it’s about time that we disabused ourselves of the simplistic notion that changes in the budget deficit have any substantial effect on interest rates.

Phillips Curve Musings

There’s a lot of talk about the Phillips Curve these days; people wonder why, with the unemployment rate reaching historically low levels, nominal and real wages have increased only minimally, with inflation remaining securely between 1.5 and 2%. The Phillips Curve, for those untutored in basic macroeconomics, depicts a relationship between inflation and unemployment. The original empirical Phillips Curve relationship showed that high rates of unemployment were associated with low or negative rates of wage inflation, while low rates of unemployment were associated with high rates of wage inflation. This empirical relationship suggested a causal theory that the rate of wage increase tends to rise when unemployment is low and tends to fall when unemployment is high, a causal theory that seems to follow from a simple supply-demand model in which wages rise when there is an excess demand for labor (unemployment is low) and wages fall when there is an excess supply of labor (unemployment is high).
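In symbols, a standard textbook rendering of that causal theory (the notation here is the usual textbook notation, not anything taken from Phillips’s own estimated equation) is

\dot{w}_t = f(u_t), \qquad f'(u_t) < 0,

or, in its later expectations-augmented form,

\pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}) + \varepsilon_t, \qquad \beta > 0,

where \pi_t is inflation, \pi_t^{e} is expected inflation, u_t is the unemployment rate, and u^{*} is the so-called natural rate of unemployment.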

Viewed in this light, low unemployment, signifying a tight labor market, signals that inflation is likely to rise, providing a rationale for monetary policy to be tightened to prevent inflation from rising as it normally does when unemployment is low. Seeming to accept that rationale, the Fed has gradually raised interest rates for the past two years or so. But the increase in interest rates has now slowed the expansion of employment and the decline of unemployment to historic lows. Nor has the improving employment situation resulted in any increase in price inflation, and it has produced at most a minimal increase in the rate of increase in wages.

In a couple of previous posts about sticky wages (here and here), I’ve questioned whether the simple supply-demand model of the labor market motivating the standard interpretation of the Phillips Curve is a useful way to think about wage adjustment and inflation-employment dynamics. I’ve offered a few reasons why the supply-demand model, though applicable in some situations, is not useful for understanding how wages adjust.

The particular reason that I want to focus on here is Keynes’s argument in chapter 19 of the General Theory (though I express it in terms different from his) that supply-demand analysis can’t explain how wages and employment are determined. The upshot of his argument, I believe, is that supply-demand analysis only works in a partial-equilibrium setting in which feedback effects from the price changes in the market under consideration don’t affect equilibrium prices in other markets, so that the position of the supply and demand curves in the market of interest can be assumed stable even as price and quantity in that market adjust from one equilibrium to another (the comparative-statics method).

Because the labor market, affecting almost every other market, is not a small part of the economy, partial-equilibrium analysis is unsuitable for understanding that market, the normal stability assumption being untenable if we attempt to trace the adjustment from one labor-market equilibrium to another after an exogenous disturbance. In the supply-demand paradigm, unemployment is a measure of the disequilibrium in the labor market, a disequilibrium that could (at least in principle) be eliminated by a wage reduction sufficient to equate the quantity of labor services supplied with the amount demanded. Viewed from this supply-demand perspective, the failure of the wage to fall to a supposed equilibrium level is attributable to some sort of endogenous stickiness or some external impediment (minimum-wage legislation or union intransigence) that prevents the normal equilibrating free-market adjustment mechanism from operating. But the habitual resort to supply-demand analysis by economists, reinforced and rewarded by years of training and professionalization, is actually misleading when applied in an inappropriate context.

So Keynes was right to challenge this view of a potentially equilibrating market mechanism that is somehow stymied from behaving in the manner described in the textbook version of supply-demand analysis. Instead, Keynes argued that the level of employment is determined by the level of spending and income at an exogenously given wage level, an approach that seems to be deeply at odds with the idea that price adjustments are an essential part of the process whereby a complex economic system arrives at, or at least tends to move toward, an equilibrium.

One of the main motivations for a search for microfoundations in the decades after the General Theory was published was to be able to articulate a convincing microeconomic rationale for persistent unemployment that was not eliminated by the usual tendency of market prices to adjust to eliminate excess supplies of any commodity or service. But Keynes was right to question whether there is any automatic market mechanism that adjusts nominal or real wages in a manner even remotely analogous to the adjustment of prices in organized commodity or stock exchanges – the sort of markets that serve as exemplars of automatic price adjustments in response to excess demands or supplies.

Keynes was also correct to argue that, even if there was a mechanism causing automatic wage adjustments in response to unemployment, the labor market, accounting for roughly 60 percent of total income, is so large that any change in wages necessarily affects all other markets, causing system-wide repercussions that might well offset any employment-increasing tendency of the prior wage adjustment.

But what I want to suggest in this post is that Keynes’s criticism of the supply-demand paradigm is relevant to any general-equilibrium system in the following sense: if a general-equilibrium system is considered from an initial non-equilibrium position, does the system have any tendency to move toward equilibrium? And to make the analysis relatively tractable, assume that the system is such that a unique equilibrium exists. Before proceeding, I also want to note that I am not arguing that traditional supply-demand analysis is necessarily flawed; I am just emphasizing that traditional supply-demand analysis is predicated on a macroeconomic foundation: that all markets but the one under consideration are in, or are in the neighborhood of, equilibrium. It is only because the system as a whole is in the neighborhood of equilibrium that the microeconomic forces on which traditional supply-demand analysis relies appear to be so powerful and so stabilizing.

However, if our focus is a general-equilibrium system, microeconomic supply-demand analysis of a single market in isolation provides no basis on which to argue that the system as a whole has a self-correcting tendency toward equilibrium. To make such an argument is to commit a fallacy of composition. The tendency of any single market toward equilibrium is premised on an assumption that all markets but the one under analysis are already at, or in the neighborhood of, equilibrium. But when the system as a whole is in a disequilibrium state, the method of partial equilibrium analysis is misplaced; partial-equilibrium analysis provides no ground – no micro-foundation — for an argument that the adjustment of market prices in response to excess demands and excess supplies will ever – much less rapidly — guide the entire system back to an equilibrium state.

The absence of automatic market forces that return to equilibrium a system not already in the neighborhood of equilibrium (for purposes of this discussion, “neighborhood” is left undefined) is implied by the Sonnenschein-Mantel-Debreu Theorem, which shows that, even if a unique general equilibrium exists, there may be no rule or algorithm for increasing (decreasing) prices in markets with excess demands (supplies) by which the general-equilibrium price vector would be discovered in a finite number of steps.

The theorem holds even under a Walrasian tatonnement mechanism in which no trading at disequilibrium prices is allowed. The reason is that the interactions between individual markets may be so complicated that a price-adjustment rule will not eliminate all excess demands, because even if a price adjustment reduces excess demand in one market, that price adjustment may cause offsetting disturbances in one or more other markets. So, unless the equilibrium price vector is somehow hit upon by accident, no rule or algorithm for price adjustment based on the excess demand in each market will necessarily lead to discovery of the equilibrium price vector.
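To see what such a price-adjustment rule looks like, and how it can fail, here is a minimal sketch, my own illustration rather than anything in the theorem or its proof, of discrete tatonnement applied to a version of Scarf’s well-known three-good exchange economy, in which the unique equilibrium has all three prices equal; the step size and starting prices are arbitrary.

# Discrete tatonnement in a version of Scarf's three-good exchange economy.
# Consumer i is endowed with one unit of good i and wants goods i and i+1 (mod 3)
# in fixed proportions, so the unique equilibrium has all three prices equal.
# The step size and starting prices are arbitrary choices for illustration.

def excess_demand(p):
    """Aggregate excess demand for each of the three goods at price vector p."""
    z = []
    for j in range(3):
        own = p[j] / (p[j] + p[(j + 1) % 3])              # consumer j's demand for good j
        other = p[(j - 1) % 3] / (p[(j - 1) % 3] + p[j])  # consumer j-1's demand for good j
        z.append(own + other - 1.0)                       # one unit of each good is supplied
    return z

def tatonnement(p, step=0.1, rounds=2000):
    """Raise the price of any good in excess demand, lower it in excess supply."""
    for _ in range(rounds):
        z = excess_demand(p)
        p = [max(pi + step * zi, 1e-9) for pi, zi in zip(p, z)]
    return p

print([round(x, 3) for x in tatonnement([1.0, 0.5, 0.2])])   # prices never settle at equal values

Running the rule for more rounds, or with a smaller step, does not change the qualitative result: the adjustment keeps circling the equal-price equilibrium without ever homing in on it.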

The Sonnenschein-Mantel-Debreu Theorem reinforces the insight of Kenneth Arrow in an important 1959 paper, “Toward a Theory of Price Adjustment,” which posed the question: how does the theory of perfect competition account for the determination of the equilibrium price at which all agents can buy or sell as much as they want to at the equilibrium (“market-clearing”) price? As Arrow observed, “there exists a logical gap in the usual formulations of the theory of perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities.”

Prices in perfect competition are taken as parameters by all agents in the model, and optimization by agents consists in choosing optimal quantities. The equilibrium solution allows the mutually consistent optimization by all agents at the equilibrium price vector. This is true for the general-equilibrium system as a whole, and for partial equilibrium in every market. Not only is there no positive theory of price adjustment within the competitive general-equilibrium model, as pointed out by Arrow, but the Sonnenschein-Mantel-Debreu Theorem shows that there’s no guarantee that even the notional tatonnement method of price adjustment can ensure that a unique equilibrium price vector will be discovered.

While acknowledging his inability to fill the gap, Arrow suggested that, because perfect competition and price taking are properties of general equilibrium, there are inevitably pockets of market power in non-equilibrium states, so that some transactors in those states are price searchers rather than price takers, choosing both an optimal quantity and an optimal price. I have no problem with Arrow’s insight as far as it goes, but it still doesn’t really solve his problem, because he couldn’t explain, even intuitively, how a disequilibrium system with some agents possessing market power (either as sellers or buyers) transitions into an equilibrium system in which all agents are price takers who can execute their planned optimal purchases and sales at the parametric prices.

One of the few helpful, but, as far as I can tell, totally overlooked, contributions of the rational-expectations revolution was to solve (in a very narrow sense) the problem that Arrow identified and puzzled over, although Hayek, Lindahl and Myrdal, in their original independent formulations of the concept of intertemporal equilibrium, had already provided the key to the solution. Hayek, Lindahl, and Myrdal showed that an intertemporal equilibrium is possible only insofar as agents form expectations of future prices that are so similar to each other that, if future prices turn out as expected, the agents would be able to execute their planned sales and purchases as expected.

But if agents have different expectations about the future price(s) of some commodity(ies), and if their plans for future purchases and sales are conditioned on those expectations, then when the expectations of at least some agents are inevitably disappointed, those agents will necessarily have to abandon (or revise) their previously formulated plans.

What led to Arrow’s confusion about how equilibrium prices are arrived at was the habit of thinking that market prices are determined by way of a Walrasian tatonnement process (supposedly mimicking the haggling over price by traders). But the notion that a mythical market auctioneer first calls out prices at random (prix criés au hasard) and then, based on the tallied market excess demands and supplies, adjusts those prices until all markets “clear” is untenable, because continual trading at disequilibrium prices keeps changing the solution of the general-equilibrium system. An actual system with trading at non-equilibrium prices may therefore be moving away from, rather than converging on, an equilibrium state.

Here is where the rational-expectations hypothesis comes in. The rational-expectations assumption posits that revisions of previously formulated plans are never necessary, because all agents actually do correctly anticipate the equilibrium price vector in advance. That is indeed a remarkable assumption to make; it is an assumption that all agents in the model have the capacity to anticipate, insofar as their future plans to buy and sell require them to anticipate, the equilibrium prices that will prevail for the products and services that they plan to purchase or sell. Of course, in a general-equilibrium system, all prices being determined simultaneously, the equilibrium values of some future prices cannot generally be forecast in isolation from the equilibrium prices of all other products. So, in effect, the rational-expectations hypothesis supposes that each agent in the model is an omniscient central planner able to solve an entire general-equilibrium system for all future prices!
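Even the tiniest such system makes the point about what “solving” a general-equilibrium system involves. The sketch below, a toy example of my own rather than anything from the rational-expectations literature, finds the market-clearing relative price in a two-good, two-consumer Cobb-Douglas exchange economy by searching for the price at which excess demand vanishes; the preference parameters and endowments are made up, and the rational-expectations hypothesis in effect assumes that every agent can carry out the analogous calculation for the whole economy and for all future periods.

# A toy two-good, two-consumer exchange economy with Cobb-Douglas preferences.
# Consumer A owns one unit of good x and spends the share a of income on x;
# consumer B owns one unit of good y and spends the share b of income on x.
# The parameters and endowments are made-up illustrative values.

def excess_demand_x(rel_price, a=0.6, b=0.3):
    """Excess demand for good x when p_x / p_y = rel_price."""
    demand_a = a                       # A's income is p_x, so A buys a units of x
    demand_b = b / rel_price           # B's income is p_y, so B buys b * p_y / p_x units of x
    return demand_a + demand_b - 1.0   # one unit of x is available in total

def market_clearing_relative_price(lo=1e-6, hi=100.0, tol=1e-10):
    """Bisection on the relative price of x until the market for x clears."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if excess_demand_x(mid) > 0:
            lo = mid                   # x is too cheap relative to y: raise its relative price
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

print(round(market_clearing_relative_price(), 4))   # 0.75, i.e. p_x / p_y = b / (1 - a)

By Walras’s Law, clearing the market for x in this two-good economy clears the market for y as well; with many goods and many future periods there is no such shortcut, which is what makes the omniscience implicit in the rational-expectations assumption so striking.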

But let us not be overly nitpicky about details. So forget about false trading, and forget about the Sonnenschein-Mantel-Debreu theorem. Instead, just assume that, at time t, agents form rational expectations of the future equilibrium price vector in period (t+1). If agents at time t form rational expectations of the equilibrium price vector in period (t+1), then they may well assume that the equilibrium price vector in period t is equal to the expected price vector in period (t+1).

Now, the expected price vector in period (t+1) may or may not be an equilibrium price vector in period t. If it is an equilibrium price vector in period t as well as in period (t+1), then all is right with the world, and everyone will succeed in buying and selling as much of each commodity as he or she desires. If not, prices may or may not adjust in response to that disequilibrium, and expectations may or may not change accordingly.

Thus, instead of positing a mythical auctioneer in a contrived tatonnement process as the mechanism whereby prices are determined for currently executed transactions, the rational-expectations hypothesis posits expected future prices as the basis for the prices at which current transactions are executed, providing a straightforward solution to Arrow’s problem. The prices at which agents are willing to purchase or sell correspond to their expectations of prices in the future. If they find trading partners with similar expectations of future prices, they will reach agreement and execute transactions at those prices. If they don’t find traders with similar expectations, they will either be unable to transact, or will revise their price expectations, or they will assume that current market conditions are abnormal and then decide whether to transact at prices different from those they had expected.

When current prices are more favorable than expected, agents will want to buy or sell more than they would have if current prices were equal to their expectations for the future. If current prices are less favorable than they expect future prices to be, they will not transact at all or will seek to buy or sell less than they would have bought or sold if current prices had equaled expected future prices. The dichotomy between observed current prices, dictated by current demands and supplies, and expected future prices is unrealistic; all current transactions are made with an eye to expected future prices and to their opportunities to postpone current transactions until the future, or to advance future transactions into the present.

If current prices for similar commodities are not uniform across current transactions, a circumstance that Arrow attributed to varying degrees of market power among imperfectly competitive suppliers, the dispersion may actually be caused, not by market power, but by dispersion in the expectations of future prices held by agents. Sellers expecting future prices to rise will be less willing to sell at relatively low prices now than are suppliers with pessimistic expectations about future prices. Equilibrium occurs when all transactors share the same expectations of future prices and expected future prices correspond to equilibrium prices in the current period.

Of course, that isn’t the only possible equilibrium situation. There may be situations in which a future event that will change a subset of prices can be anticipated. The anticipation of that future event affects not only expected future prices; it must also, and necessarily, affect current prices, insofar as current supplies can be carried from the present into the future, current purchases can be postponed until the future, or future consumption can be shifted into the present.
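For a storable commodity, one standard way of expressing that intertemporal link, offered here as my own gloss rather than anything in the original argument, and ignoring risk premia and convenience yields, is the carrying-cost arbitrage condition

p_t \;\geq\; \frac{E_t[p_{t+1}]}{1 + r_t + c_t},

holding with (approximate) equality whenever positive stocks are carried from period t to period t+1, where r_t is the one-period interest rate and c_t is the proportional cost of storage. An anticipated rise in the future price therefore pulls up the current price of anything that can be stored or whose purchase can be deferred.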

The practical upshot of these somewhat disjointed reflections is, I think, primarily to reinforce skepticism about the traditional Phillips Curve supposition that low and falling unemployment necessarily presages an increase in inflation. Wages are not primarily governed by the current state of the labor market, whatever “the labor market” might even mean in a macroeconomic context.

Expectations rule! And the rational-expectations revolution to the contrary notwithstanding, we have no good theory of how expectations are actually formed and there is certainly no reason to assume that, as a general matter, all agents share the same set of expectations.

The current fairly benign state of the economy reflects the absence of any serious disappointment of price expectations. If an economy is operating not very far from an equilibrium, although expectations are not the same, they likely are not very different. They will only be very different after the unexpected strikes. When that happens, borrowers and traders who had taken positions based on overly optimistic expectations find themselves unable to meet their obligations. It is only then that we will see whether the economy is really as strong and resilient as it now seems.

Expecting the unexpected is hard to do, but you can be sure that, sooner or later, the unexpected is going to happen.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
