Archive for the 'partial equilibrium' Category

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written a number of previous posts discussing the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers was written by Jack Schwartz and appears in a volume edited by Ernest Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. Because it was unavailable on the internet except behind a paywall, I bought a used copy for $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems, multimedia authoring tools, experimental studies of visual perception, and multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist looking at the original Schrodinger equation learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can’t be specified, or even meaningfully restricted, except on the basis of very special simplifying assumptions; it is too special, because the simplifying assumptions about the agents and the technologies and the constraints and the price-setting mechanism are at best only approximations and, at worst, are entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach, which was alien to the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian, he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., the analysis of an overall “market” for labor, in which the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. For some reason, though, that didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

Phillips Curve Musings: Second Addendum on Keynes and the Rate of Interest

In my two previous posts (here and here), I have argued that the partial-equilibrium analysis of a single market, like the labor market, is inappropriate and not particularly relevant in situations in which the market under analysis is large relative to other markets and is likely to have repercussions on those markets, which, in turn, will have further repercussions on the market under analysis, violating the standard ceteris paribus condition applicable to partial-equilibrium analysis. When the standard ceteris paribus condition of partial equilibrium is violated, as it surely is in analyzing the overall labor market, the analysis is, at least, suspect, or, more likely, useless and misleading.

I suggested that Keynes in chapter 19 of the General Theory was aiming at something like this sort of argument, and I think he was largely right in his argument. But, in all modesty, I think that Keynes would have done better to have couched his argument in terms of the distinction between partial-equilibrium and general-equilibrium analysis. But his Marshallian training, which he simultaneously embraced and rejected, may have made it difficult for him to adopt the Walrasian general-equilibrium approach that Marshall and the Marshallians regarded as overly abstract and unrealistic.

In my next post, I suggested that the standard argument about the tendency of public-sector budget deficits to raise interest rates by competing with private-sector borrowers for loanable funds is fundamentally misguided, because it, too, inappropriately applies partial-equilibrium analysis, whether to a narrow market for government securities or to a more broadly defined market for loanable funds in general.

That is a gross mistake, because the rate of interest is determined within a general-equilibrium system jointly with the prices of all long-lived assets, which embody expected flows of income that must be discounted to the present to arrive at an estimated present value. Some assets are riskier than others, and that risk is reflected in those valuations. But the rate of interest is distilled from the combination of all of those valuations, not prior to, or apart from, those valuations. Interest rates of different duration and different risk are embedded in the entire structure of current and expected prices for all long-lived assets. To focus solely on a very narrow subset of markets for newly issued securities, whose combined value is only a small fraction of the total value of all existing long-lived assets, is to miss the forest for the trees.

What I want to point out in this post is that Keynes, whom I credit for having recognized that partial-equilibrium analysis is inappropriate and misleading when applied to an overall market for labor, committed exactly the same mistake that he condemned in the context of the labor market, by asserting that the rate of interest is determined in a single market: the market for money. According to Keynes, the market rate of interest is that rate which equates the stock of money in existence with the amount of money demanded by the public. The higher the rate of interest, Keynes argued, the less money the public wants to hold.

Keynes, applying the analysis of Marshall and his other Cambridge predecessors, provided a wonderful analysis of the factors influencing the amount of money that people want to hold (usually expressed in terms of a fraction of their income). However, as superb as his analysis of the demand for money was, it was a partial-equilibrium analysis, and there was no recognition on his part that other markets in the economy are influenced by, and exert influence upon, the rate of interest.

What makes Keynes’s partial-equilibrium analysis of the interest rate so difficult to understand is that, in chapter 17 of the General Theory, a magnificent tour de force of verbal general-equilibrium theorizing, Keynes explained the relationships that must exist between the expected returns for alternative long-lived assets that are held in equilibrium. Yet, disregarding his own analysis of the equilibrium relationship between returns on alternative assets, Keynes insisted on explaining the rate of interest in a one-period model (a model roughly corresponding to IS-LM) with only two alternative assets: money and bonds, but no real capital asset.

A general-equilibrium analysis of the rate of interest ought to have at least two periods, and it ought to have a real capital good that may be held in the present for use or consumption in the future, a possibility entirely missing from the Keynesian model. I have discussed this major gap in the Keynesian model in a series of posts (here, here, here, here, and here) about Earl Thompson’s 1976 paper “A Reformulation of Macroeconomic Theory.”

Although Thompson’s model seems to me too simple to account for many macroeconomic phenomena, it would have been a far better starting point for the development of macroeconomics than any of the models from which modern macroeconomic theory has evolved.

Phillips Curve Musings: Addendum on Budget Deficits and Interest Rates

In my previous post, I covered a lot of ground, but I spent much of the post discussing the inappropriate use of partial-equilibrium supply-demand analysis to explain price and quantity movements when price and quantity movements in those markets are dominated by precisely those forces that are supposed to be held constant — the old ceteris paribus qualification — in doing partial-equilibrium analysis. Thus, the idea that, in a depression or deep recession, high unemployment can be cured by cutting nominal wages is a classic misapplication of partial-equilibrium analysis in a situation in which the forces primarily affecting wages and employment are not confined to a supposed “labor market,” but reflect broader macroeconomic conditions. As Keynes understood, but did not explain well to his economist readers, analyzing unemployment in terms of the wage rate is futile, because wage changes induce further macroeconomic effects that may counteract whatever effects resulted from the wage changes.

Well, driving home this afternoon, I was listening to Marketplace on NPR with Kai Ryssdal interviewing Neil Irwin. Ryssdal asked Irwin why there is so much nervousness about the economy when unemployment and inflation are both about as low as they have ever been — certainly at the same time — in the last 50 years. Irwin’s response was that it is unsettling to many people that, with budget deficits high and rising, we observe stable inflation and falling interest rates on long-term Treasuries. This, after we have been told for so long that budget deficits drive up the cost of borrowing money and are also a major cause of inflation. The cognitive dissonance of stable inflation, falling interest rates, and rapidly rising budget deficits, Irwin suggested, accounts for a vague feeling of disorientation, and gives rise to fears that the current apparent stability can’t last very long and will lead to some sort of distress or crisis in the future.

I’m not going to try to reassure Ryssdal and Irwin that there will never be another crisis. I certainly wouldn’t venture to say that all is now well with the Republic, much less with the rest of the world. I will just stick to the narrow observation that the bad habit of predicting the future course of interest rates by the size of the current budget deficit has no basis in economic theory, and reflects a colossal misunderstanding of how interest rates are determined. And that misunderstanding is precisely the one I discussed in my previous post about the misuse of partial-equilibrium analysis when general-equilibrium analysis is required.

To infer anything about the determination of interest rates from the market for government debt is a category error. Government debt is a long-lived financial asset providing an income stream, and its price reflects the current value of the promised income stream. Based on the price of a particular instrument with a given duration, it is possible to calculate a corresponding interest rate. That calculation is just a fairly simple mathematical exercise.
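To make that “fairly simple mathematical exercise” concrete, here is a minimal sketch in Python of backing out the yield implied by an observed bond price. The function names and the 10-year, 2% coupon bond in the example are illustrative assumptions of mine, not anything taken from the post.

```python
# Recover the yield implied by the observed price of a plain coupon bond.
# Assumes annual coupon payments; all numbers are made up for illustration.

def bond_price(face, coupon_rate, years, y):
    """Present value of the coupons plus the face value at yield y."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + y) ** years

def implied_yield(face, coupon_rate, years, observed_price, lo=0.0, hi=1.0, tol=1e-8):
    """Bisection search for the yield that equates present value to the observed price."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid) > observed_price:
            lo = mid  # price still too high at this yield, so the implied yield must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

if __name__ == "__main__":
    # A 10-year, 2% coupon bond trading at 95 implies a yield a bit above 2%.
    print(round(implied_yield(100, 0.02, 10, 95.0) * 100, 3), "percent")
```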

But it is a mistake to think that the interest rate for that duration is determined in the market for government debt of that duration. Why? Because there are many other physical assets or financial instruments that could be held instead of government debt of any particular duration. And asset holders in a financially sophisticated economy can easily shift from one type of asset to another at will, at fairly minimal transactions costs. So it is very unlikely that any long-lived asset is so special that the expected yield from holding that asset varies independently of the expected yields from holding the alternative assets that could be held instead.

That’s not to say that there are no differences in the expected yields from different assets, just that at the margin, taking into account the different characteristics of different assets, their expected returns must be fairly closely connected, so that any large change in the conditions in the market for any single asset is unlikely to have a large effect on the price of that asset alone. Rather, any change in one market will cause shifts in asset-holdings across different markets that will tend to offset the immediate effect that would have been reflected in a single market viewed in isolation.

This holds true as long as each specific market is relatively small compared to the entire economy. That is certainly true for the US economy and the world economy into which the US economy is very closely integrated. The value of all assets — real and financial — dwarfs the total outstanding value of US Treasuries. Interest rates are a measure of the relationship between expected flows of income and the value of the underlying assets.

To assume that increased borrowing by the US government to fund a substantial increase in the US budget deficit will substantially affect the overall economy-wide relationship between current and expected future income flows on the one hand and asset values on the other is wildly implausible. So no one should be surprised to find that the recent sharp increase in the US budget deficit has had no perceptible effect on the yields at which US government debt is now trading.

A more likely cause of a change in interest rates would be an increase in expected inflation, but inflation expectations are not necessarily correlated with the budget deficit, and changing inflation expectations aren’t necessarily reflected in corresponding changes in nominal interest rates, as Monetarist economists have often maintained that they are.
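For readers who want the textbook statement of the link between expected inflation and nominal rates alluded to above, the standard Fisher relation (my notation, not the post's) is

\[
(1 + i_t) = (1 + r_t)(1 + \pi^{e}_t) \quad\Longrightarrow\quad i_t \approx r_t + \pi^{e}_t,
\]

where \(i_t\) is the nominal interest rate, \(r_t\) the real rate, and \(\pi^{e}_t\) expected inflation. The point of the paragraph is that the one-for-one pass-through from \(\pi^{e}_t\) to \(i_t\) implied by holding \(r_t\) fixed need not hold in practice.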

So it’s about time that we disabused ourselves of the simplistic notion that changes in the budget deficit have any substantial effect on interest rates.

Phillips Curve Musings

There’s a lot of talk about the Phillips Curve these days; people wonder why, with the unemployment rate reaching historically low levels, nominal and real wages have increased only minimally, with inflation remaining securely between 1.5 and 2%. The Phillips Curve, for those untutored in basic macroeconomics, depicts a relationship between inflation and unemployment. The original empirical Phillips Curve relationship showed that high rates of unemployment were associated with low or negative rates of wage inflation while low rates of unemployment were associated with high rates of wage inflation. This empirical relationship suggested a causal theory that the rate of wage increase tends to rise when unemployment is low and tends to fall when unemployment is high, a causal theory that seems to follow from a simple supply-demand model in which wages rise when there is an excess demand for labor (unemployment is low) and wages fall when there is an excess supply of labor (unemployment is high).
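In symbols, the stylized relationship just described can be written compactly (this notation is mine, not Phillips's own specification):

\[
\dot{w}_t = f(u_t), \qquad f'(u_t) < 0,
\]

where \(\dot{w}_t\) is the rate of wage inflation and \(u_t\) the unemployment rate: low unemployment goes with rapid wage increases, high unemployment with slow or negative wage increases.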

Viewed in this light, low unemployment, signifying a tight labor market, signals that inflation is likely to rise, providing a rationale for monetary policy to be tightened to prevent inflation from rising as it normally does when unemployment is low. Seeming to accept that rationale, the Fed has gradually raised interest rates for the past two years or so. But the increase in interest rates has now slowed the expansion of employment and the decline in unemployment to historic lows. Nor has the improving employment situation resulted in any increase in price inflation, and it has produced at most a minimal increase in the rate of wage increase.

In a couple of previous posts about sticky wages (here and here), I’ve questioned whether the simple supply-demand model of the labor market motivating the standard interpretation of the Phillips Curve is a useful way to think about wage adjustment and inflation-employment dynamics. I’ve offered a few reasons why the supply-demand model, though applicable in some situations, is not useful for understanding how wages adjust.

The particular reason that I want to focus on here is Keynes’s argument in chapter 19 of the General Theory (though I express it in terms different from his) that supply-demand analysis can’t explain how wages and employment are determined. The upshot of his argument, I believe, is that supply-demand analysis only works in a partial-equilibrium setting in which feedback effects from the price changes in the market under consideration don’t affect equilibrium prices in other markets, so that the position of the supply and demand curves in the market of interest can be assumed stable even as price and quantity in that market adjust from one equilibrium to another (the comparative-statics method).

Because the labor market, affecting almost every other market, is not a small part of the economy, partial-equilibrium analysis is unsuitable for understanding that market, the normal stability assumption being untenable if we attempt to trace the adjustment from one labor-market equilibrium to another after an exogenous disturbance. In the supply-demand paradigm, unemployment is a measure of the disequilibrium in the labor market, a disequilibrium that could – at least in principle — be eliminated by a wage reduction sufficient to equate the quantity of labor services supplied with the amount demanded. Viewed from this supply-demand perspective, the failure of the wage to fall to a supposed equilibrium level is attributable to some sort of endogenous stickiness in wage adjustment or to some external impediment (minimum-wage legislation or union intransigence) that prevents the normal equilibrating free-market adjustment mechanism from operating. But the habitual resort to supply-demand analysis by economists, reinforced and rewarded by years of training and professionalization, is actually misleading when applied in an inappropriate context.

So Keynes was right to challenge this view of a potentially equilibrating market mechanism that is somehow stymied from behaving in the manner described in the textbook version of supply-demand analysis. Instead, Keynes argued that the level of employment is determined by the level of spending and income at an exogenously given wage level, an approach that seems to be deeply at odds with the idea that price adjustments are an essential part of the process whereby a complex economic system arrives at, or at least tends to move toward, an equilibrium.

One of the main motivations for a search for microfoundations in the decades after the General Theory was published was to be able to articulate a convincing microeconomic rationale for persistent unemployment that was not eliminated by the usual tendency of market prices to adjust to eliminate excess supplies of any commodity or service. But Keynes was right to question whether there is any automatic market mechanism that adjusts nominal or real wages in a manner even remotely analogous to the adjustment of prices in organized commodity or stock exchanges – the sort of markets that serve as exemplars of automatic price adjustments in response to excess demands or supplies.

Keynes was also correct to argue that, even if there was a mechanism causing automatic wage adjustments in response to unemployment, the labor market, accounting for roughly 60 percent of total income, is so large that any change in wages necessarily affects all other markets, causing system-wide repercussions that might well offset any employment-increasing tendency of the prior wage adjustment.

But what I want to suggest in this post is that Keynes’s criticism of the supply-demand paradigm is relevant to any general-equilibrium system in the following sense: if a general-equilibrium system is considered from an initial non-equilibrium position, does the system have any tendency to move toward equilibrium? And to make the analysis relatively tractable, assume that the system is such that a unique equilibrium exists. Before proceeding, I also want to note that I am not arguing that traditional supply-demand analysis is necessarily flawed; I am just emphasizing that traditional supply-demand analysis is predicated on a macroeconomic foundation: that all markets but the one under consideration are in, or are in the neighborhood of, equilibrium. It is only because the system as a whole is in the neighborhood of equilibrium that the microeconomic forces on which traditional supply-demand analysis relies appear to be so powerful and so stabilizing.

However, if our focus is a general-equilibrium system, microeconomic supply-demand analysis of a single market in isolation provides no basis on which to argue that the system as a whole has a self-correcting tendency toward equilibrium. To make such an argument is to commit a fallacy of composition. The tendency of any single market toward equilibrium is premised on an assumption that all markets but the one under analysis are already at, or in the neighborhood of, equilibrium. But when the system as a whole is in a disequilibrium state, the method of partial equilibrium analysis is misplaced; partial-equilibrium analysis provides no ground – no micro-foundation — for an argument that the adjustment of market prices in response to excess demands and excess supplies will ever – much less rapidly — guide the entire system back to an equilibrium state.

The lack of automatic market forces that return a system not in the neighborhood — for purposes of this discussion “neighborhood” is left undefined – of equilibrium back to equilibrium is implied by the Sonnenschein-Mantel-Debreu Theorem, which shows that, even if a unique general equilibrium exists, there may be no rule or algorithm for increasing (decreasing) prices in markets with excess demands (supplies) by which the general-equilibrium price vector would be discovered in a finite number of steps.

The theorem holds even under a Walrasian tatonnement mechanism in which no trading at disequilibrium prices is allowed. The reason is that the interactions between individual markets may be so complicated that a price-adjustment rule will not eliminate all excess demands, because even if a price adjustment reduces excess demand in one market, that price adjustment may cause offsetting disturbances in one or more other markets. So, unless the equilibrium price vector is somehow hit upon by accident, no rule or algorithm for price adjustment based on the excess demand in each market will necessarily lead to discovery of the equilibrium price vector.
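As a purely illustrative sketch of the kind of price-adjustment rule being discussed, here is a toy tatonnement loop in Python. The two excess-demand functions are invented for the example; nothing here reproduces the Sonnenschein-Mantel-Debreu result itself. It only shows the rule of raising the price where there is excess demand and lowering it where there is excess supply, the rule whose convergence the theorem calls into question.

```python
# Toy tatonnement: adjust each price in proportion to its own excess demand.
# The excess-demand functions are made up for illustration only.

import numpy as np

def excess_demand(p):
    """Illustrative excess demands for two goods (a third good serves as numeraire)."""
    z1 = 1.0 / p[0] - 1.0 + 0.5 * (p[1] - p[0])
    z2 = 1.0 / p[1] - 1.0 - 0.5 * (p[1] - p[0])
    return np.array([z1, z2])

def tatonnement(p0, step=0.2, iterations=100):
    """Raise prices where excess demand is positive, lower them where it is negative."""
    p = np.array(p0, dtype=float)
    for _ in range(iterations):
        p = np.maximum(p + step * excess_demand(p), 1e-6)  # keep prices strictly positive
    return p

if __name__ == "__main__":
    # Whether this settles near the equilibrium (here p = (1, 1)) depends on the
    # excess-demand functions and the step size; in general no such rule is
    # guaranteed to discover the equilibrium price vector.
    print(tatonnement([2.0, 0.5]))
```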

The Sonnenschein-Mantel-Debreu Theorem reinforces the insight of Kenneth Arrow in an important 1959 paper, “Toward a Theory of Price Adjustment,” which posed the question: how does the theory of perfect competition account for the determination of the equilibrium (“market-clearing”) price at which all agents can buy or sell as much as they want? As Arrow observed, “there exists a logical gap in the usual formulations of the theory of perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities.”

Prices in perfect competition are taken as parameters by all agents in the model, and optimization by agents consists in choosing optimal quantities. The equilibrium solution allows the mutually consistent optimization by all agents at the equilibrium price vector. This is true for the general-equilibrium system as a whole, and for partial equilibrium in every market. Not only is there no positive theory of price adjustment within the competitive general-equilibrium model, as pointed out by Arrow, but the Sonnenschein-Mantel-Debreu Theorem shows that there’s no guarantee that even the notional tatonnement method of price adjustment can ensure that a unique equilibrium price vector will be discovered.

While acknowledging his inability to fill the gap, Arrow suggested that, because perfect competition and price taking are properties of general equilibrium, there are inevitably pockets of market power in non-equilibrium states, so that some transactors in those states are price searchers rather than price takers, who therefore choose both an optimal quantity and an optimal price. I have no problem with Arrow’s insight as far as it goes, but it still doesn’t really solve his problem, because he couldn’t explain, even intuitively, how a disequilibrium system with some agents possessing market power (either as sellers or buyers) transitions into an equilibrium system in which all agents are price-takers who can execute their planned optimal purchases and sales at the parametric prices.

One of the few helpful, but, as far as I can tell, totally overlooked, contributions of the rational-expectations revolution was to solve (in a very narrow sense) the problem that Arrow identified and puzzled over, although Hayek, Lindahl and Myrdal, in their original independent formulations of the concept of intertemporal equilibrium, had already provided the key to the solution. Hayek, Lindahl, and Myrdal showed that an intertemporal equilibrium is possible only insofar as agents form expectations of future prices that are so similar to each other that, if future prices turn out as expected, the agents would be able to execute their planned sales and purchases as expected.

But if agents have different expectations about the future price(s) of some commodity(ies), and if their plans for future purchases and sales are conditioned on those expectations, then, when the expectations of at least some agents are inevitably disappointed, those agents will necessarily have to abandon or revise their previously formulated plans.

What led to Arrow’s confusion about how equilibrium prices are arrived at was the habit of thinking that market prices are determined by way of a Walrasian tatonnement process (supposedly mimicking the haggling over price by traders). But the notion of a mythical market auctioneer, who first calls out prices at random (prix criés au hasard) and then, based on the tallied market excess demands and supplies, adjusts those prices until all markets “clear,” is untenable, because continual trading at disequilibrium prices keeps changing the solution of the general-equilibrium system. An actual system with trading at non-equilibrium prices may therefore be moving away from, rather than converging on, an equilibrium state.

Here is where the rational-expectations hypothesis comes in. The rational-expectations assumption posits that revisions of previously formulated plans are never necessary, because all agents actually do correctly anticipate the equilibrium price vector in advance. That is indeed a remarkable assumption to make; it is an assumption that all agents in the model have the capacity to anticipate, insofar as their future plans to buy and sell require them to anticipate, the equilibrium prices that will prevail for the products and services that they plan to purchase or sell. Of course, in a general-equilibrium system, all prices being determined simultaneously, the equilibrium prices of some products cannot generally be forecast in isolation from the equilibrium prices of all other products. So, in effect, the rational-expectations hypothesis supposes that each agent in the model is an omniscient central planner able to solve an entire general-equilibrium system for all future prices!

But let us not be overly nitpicky about details. So forget about false trading, and forget about the Sonnenschein-Mantel-Debreu theorem. Instead, just assume that, at time t, agents form rational expectations of the future equilibrium price vector in period (t+1). If agents at time t form rational expectations of the equilibrium price vector in period (t+1), then they may well assume that the equilibrium price vector in period t is equal to the expected price vector in period (t+1).

Now, the expected price vector in period (t+1) may or may not be an equilibrium price vector in period t. If it is an equilibrium price vector in period t as well as in period (t+1), then all is right with the world, and everyone will succeed in buying and selling as much of each commodity as he or she desires. If not, prices may or may not adjust in response to that disequilibrium, and expectations may or may not change accordingly.

Thus, instead of positing a mythical auctioneer in a contrived tatonnement process as the mechanism whereby prices are determined for currently executed transactions, the rational-expectations hypothesis posits expected future prices as the basis for the prices at which current transactions are executed, providing a straightforward solution to Arrow’s problem. The prices at which agents are willing to purchase or sell correspond to their expectations of prices in the future. If they find trading partners with similar expectations of future prices, they will reach agreement and execute transactions at those prices. If they don’t find traders with similar expectations, they will either be unable to transact, or will revise their price expectations, or they will assume that current market conditions are abnormal and then decide whether to transact at prices different from those they had expected.

When current prices are more favorable than expected, agents will want to buy or sell more than they would have if current prices were equal to their expectations for the future. If current prices are less favorable than they expect future prices to be, they will not transact at all or will seek to buy or sell less than they would have bought or sold if current prices had equaled expected future prices. The dichotomy between observed current prices, dictated by current demands and supplies, and expected future prices is unrealistic; all current transactions are made with an eye to expected future prices and to their opportunities to postpone current transactions until the future, or to advance future transactions into the present.

If current prices for similar commodities are not uniform in all current transactions, a circumstance that Arrow attributes to the existence of varying degrees of market power across imperfectly competitive suppliers, price dispersion may actually be caused, not by market power, but by dispersion in the expectations of future prices held by agents. Sellers expecting future prices to rise will be less willing to sell at relatively low prices now than are suppliers with pessimistic expectations about future prices. Equilibrium occurs when all transactors share the same expectations of future prices and expected future prices correspond to equilibrium prices in the current period.

Of course, that isn’t the only possible equilibrium situation. There may be situations in which a future event that will change a subset of prices can be anticipated. The anticipation of the future event affects not only expected future prices; it must also and necessarily affect current prices insofar as current supplies can be carried into the future from the present, current purchases can be postponed until the future, or future consumption can be shifted into the present.

The practical upshot of these somewhat disjointed reflections is, I think, primarily to reinforce skepticism about the traditional Phillips Curve supposition that low and falling unemployment necessarily presages an increase in inflation. Wages are not primarily governed by the current state of the labor market, whatever “the labor market” might even mean in a macroeconomic context.

Expectations rule! And the rational-expectations revolution to the contrary notwithstanding, we have no good theory of how expectations are actually formed and there is certainly no reason to assume that, as a general matter, all agents share the same set of expectations.

The current fairly benign state of the economy reflects the absence of any serious disappointment of price expectations. If an economy is operating not very far from an equilibrium, although expectations are not the same, they likely are not very different. They will only be very different after the unexpected strikes. When that happens, borrowers and traders who had taken positions based on overly optimistic expectations find themselves unable to meet their obligations. It is only then that we will see whether the economy is really as strong and resilient as it now seems.

Expecting the unexpected is hard to do, but you can be sure that, sooner or later, the unexpected is going to happen.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
