Archive for the 'general equilibrium' Category



Thinking about Interest and Irving Fisher

In two recent posts I have discussed Keynes’s theory of interest and the natural rate of interest. My goal in both posts was not to give my own view of the correct way to think about what determines interest rates, but to identify and highlight problems with Keynes’s liquidity-preference theory of interest, and with the concept of a natural rate of interest. The main point that I wanted to make about Keynes’s liquidity-preference theory was that although Keynes thought that he was explaining – or, perhaps, explicating – the rate of interest, his theory was nothing more than an explanation of why, typically, the nominal pecuniary yield on holding cash is less than the nominal yield on holding real assets, the difference in yield being attributable to the liquidity services derived from holding a maximally liquid asset rather than holding an imperfectly liquid asset. Unfortunately, Keynes imagined that by identifying and explaining the liquidity premium on cash, he had thereby explained the real yield on holding physical capital assets; he did nothing of the kind, as the marvelous exposition of the theory of own rates of interest in chapter 17 of the General Theory unwittingly demonstrates.

For expository purposes, I followed Keynes in contrasting his liquidity-preference theory with what he called the classical theory of interest, which he identified with Alfred Marshall, in which the rate of interest is supposed to be the rate that equilibrates saving and investment. I criticized Keynes for attributing this theory to Marshall rather than to Irving Fisher, which was, I am now inclined to think, a mistake on my part, because I doubt, based on a quick examination of Fisher’s two great books The Rate of Interest and The Theory of Interest, that he ever asserted that the rate of interest is determined by equilibrating savings and investment. (I actually don’t know if Marshall did or did not make such an assertion.) But I think it’s clear that Fisher did not formulate his theory in terms of equating investment and savings via adjustments in the rate of interest. Fisher, I think, did agree (but I can’t quote a passage to this effect) that savings and investment are equal in equilibrium, but his analysis of the determination of the rate of interest was not undertaken in terms of equalizing two flows, i.e., savings and investment. Instead the analysis was carried out in terms of individual or household decisions about how much to consume out of current and expected future income, and in terms of decisions by business firms about how much of their available resources to devote to producing output for current consumption versus producing for future consumption. Fisher showed (in Walrasian fashion) that there are exactly enough equations in his system to solve for all the independent variables, so that his system had a solution. (That Walrasian argument of counting equations and unknowns is mathematically flawed, but later work by my cousin Abraham Wald and subsequently by Arrow, Debreu and McKenzie showed that Fisher’s claim could, under some more or less plausible assumptions, be proved in a mathematically rigorous way.)

Maybe it was Knut Wicksell who in his discussions of the determination of the rate of interest argued that the rate of interest is responsible for equalizing savings and investment, but that was not how Fisher understood what the rate of interest is all about. The Wicksellian notion that the equilibrium rate of interest equalizes savings and investment was thus a misunderstanding of the Fisherian theory, and it would be a worthwhile endeavor to trace the genesis and subsequent development of this misunderstanding to the point that Keynes and his contemporaries could have thought that they were giving an accurate representation of what orthodox theory asserted when they claimed that according to orthodox theory the rate of interest is what ensures equality between savings and investment.

This mistaken doctrine was formalized as the loanable-funds theory of interest – I believe that Dennis Robertson is usually credited with originating this term — in which savings is represented as the supply of loanable funds and investment is represented as the demand for loanable funds, with the rate of interest serving as a sort of price that is determined in Marshallian fashion by the intersection of the two schedules. Somehow it became accepted that the loanable-funds doctrine is the orthodox theory of interest determination, but it is clear from Fisher and from standard expositions of the neoclassical theory of interest (which are, of course, simply extensions of Fisher’s work) that the loanable-funds theory is mistaken and misguided at a very basic level. (At this point, I should credit George Blackford, whose comments on my post about Keynes’s theory of the rate of interest helped me realize that it is not possible to make any sense out of the loanable-funds theory, even though I am not sure that we agree on exactly why the loanable-funds theory doesn’t make sense. Not that I had espoused the loanable-funds theory, but I did not fully appreciate its incoherence.)

Why do I say that the loanable-funds theory is mistaken and incoherent? Simply because it is fundamentally inconsistent with the essential properties of general-equilibrium analysis. In general-equilibrium analysis, interest rates are not a separate subset of prices determined in a corresponding subset of markets; they emerge from the intertemporal relationships between and across all asset markets and asset prices. To view the rate of interest as being determined in a separate market for loanable funds, as if it were not being simultaneously determined in all asset markets, is a complete misunderstanding of the theory of intertemporal general equilibrium.

Here’s how Fisher put it over a century ago in The Rate of Interest:

We thus need to distinguish between interest in terms of money and interest in terms of goods. The first thought suggested by this fact is that the rate of interest in money is “nominal” and that in goods “real.” But this distinction is not sufficient, for no two forms of goods maintain or are expected to maintain, a constant price ratio toward each other. There are therefore just as many rates of interest in goods as there are forms of goods diverging in value. (p. 84, Fisher’s emphasis).
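
Fisher’s point can be put in a compact formula (a schematic restatement in notation of my own choosing, not Fisher’s). If $i_m$ is the rate of interest in terms of money over a given period, $p_0$ the current money price of some commodity, and $p_1$ the money price at which that commodity is expected to trade at the end of the period, then lending the money value of one unit of the commodity at the money rate and converting the proceeds back into the commodity implies an own rate of interest $i_c$ for that commodity satisfying

$$1 + i_c = (1 + i_m)\,\frac{p_0}{p_1},$$

so that every commodity whose price is expected to change relative to money, and relative to other commodities, has its own distinct rate of interest.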

So a quarter of a century before Sraffa supposedly introduced the idea of own rates of interest in his 1932 review of Hayek’s Prices and Production, Fisher had done so in his first classic treatise on interest, which reproduced the own-rate analysis of his 1896 monograph Appreciation and Interest. While crediting Sraffa with introducing the concept of own rates of interest, Keynes, in chapter 17, simply — and brilliantly — extends the basics of Fisher’s own-rate analysis, incorporating the idea of liquidity preference and silently correcting Sraffa insofar as his analysis departed from Fisher’s.

Christopher Bliss, in his own classic treatise on the theory of interest, expands upon Fisher’s point.

According to equilibrium theory – according indeed to any theory of economic action which relates firms’ decisions to prospective profit and households’ decisions to budget-constrained searches for the most preferred combination of goods – it is prices which play the fundamental role. This is because prices provide the weights to be attached to the possible amendments to their net supply plans which the actors have implicitly rejected in deciding upon their choices. In an intertemporal economy it is then, naturally, present-value prices which play the fundamental role. Although this argument is mounted here on the basis of a consideration of an economy with forward markets in intertemporal equilibrium, it in no way depends on this particular foundation. As has been remarked, if forward markets are not in operation the economic actors have no choice but to substitute their “guesses” for the firm quotations of the forward markets. This will make a big difference, since full intertemporal equilibrium is not likely to be achieved unless there is a mechanism to check and correct for inconsistency in plans and expectations. But the forces that pull economic decisions one way or another are present-value prices . . . be they guesses or firm quotations. (pp. 55-56)

Changes in time preference therefore cause immediate changes in the present-value prices of assets, thereby causing corresponding changes in own rates of interest. Changes in own rates of interest constrain the rates of interest charged on money loans; changes in asset valuations and interest rates induce changes in production plans, consumption plans, and the rate at which new assets are produced and capital is accumulated. The notion that there is ever a separate market for loanable funds in which the rate of interest is somehow determined, and savings and investment are somehow equilibrated, is simply inconsistent with the basic Fisherian theory of the rate of interest.

Just as Nick Rowe argues that there is no single market in which the exchange value of money (medium of account) is determined, because money is exchanged for goods in all markets, there can be no single market in which the rate of interest is determined because the value of every asset depends on the rate of interest at which the expected income or service-flow derived from the asset is discounted. The determination of the rate of interest can’t be confined to a single market.
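
A single line of algebra (my own schematic illustration, not a quotation from Fisher or Bliss) makes the point. The present value of any asset yielding an expected stream of income or services $y_1, y_2, y_3, \dots$ is

$$V_0 = \sum_{t=1}^{\infty} \frac{E[y_t]}{(1 + r_t)^t},$$

where $r_t$ is the discount rate applicable to receipts $t$ periods ahead. A change in the relevant discount rates therefore changes the valuation of every asset at once; there is no single market in which that change could be registered or to which it could be confined.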


Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet the fact that a general-equilibrium model can be formulated so that it can be solved as the optimization problem of a single agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence which may not really be in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.
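
To make the contrast concrete, here is a minimal sketch (my own toy example, with made-up numbers, not a serious traffic model): on a ring road, each driver’s speed is limited by the gap to the vehicle ahead, so average speed and traffic flow depend on how many vehicles are interacting. A representative vehicle on an otherwise empty road would always travel at the free-flow speed, abstracting from the congestion that the model is supposed to explain.

```python
# Toy congestion example (illustrative assumptions only): speed on a ring road
# is limited by the spacing between vehicles, so congestion is a pure
# interaction effect that a single representative vehicle cannot exhibit.

ROAD_LENGTH = 1000.0   # metres of (circular) road
CAR_LENGTH = 5.0       # metres per vehicle
V_MAX = 30.0           # free-flow speed in metres per second
HEADWAY = 1.5          # seconds of spacing each driver maintains

def average_speed(n_cars: int) -> float:
    gap = ROAD_LENGTH / n_cars - CAR_LENGTH       # space between vehicles
    return max(0.0, min(V_MAX, gap / HEADWAY))    # speed limited by the gap

for n in (10, 50, 100, 150, 190):
    flow = n * average_speed(n) / ROAD_LENGTH     # vehicles passing a point per second
    print(f"{n:4d} cars: speed {average_speed(n):5.1f} m/s, flow {flow:5.2f} veh/s")
```

Flow first rises and then collapses as the road fills up; the phenomenon exists only because the vehicles constrain one another.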

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less negligible; nor have I seen an argument that interactions between agents are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialists who believe that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to the wages that other similar workers are being paid are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.

[image: homunculus]

But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without having to go on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.

The Neoclassical Synthesis and the Mind-Body Problem

The neoclassical synthesis that emerged in the early postwar period aimed at reconciling the macroeconomic (IS-LM) analysis derived from Keynes via Hicks and others with the neoclassical microeconomic analysis of general equilibrium derived from Walras. The macroeconomic analysis was focused on an equilibrium of income and expenditure flows while the Walrasian analysis was focused on the equilibrium between supply and demand in individual markets. The two types of analysis seemed to be incommensurate inasmuch as the conditions for equilibrium in the two analyses did not seem to match up against each other. How does an analysis focused on the equality of aggregate flows of income and expenditure get translated into an analysis focused on the equality of supply and demand in individual markets? The two languages seem to be different, so it is not obvious how a statement formulated in one language gets translated into the other. And even if a translation is possible, does the translation hold under all, or only under some, conditions? And if it holds only under some conditions, what are those conditions?

The original neoclassical synthesis did not aim to provide a definitive answer to those questions, but it was understood to assert that if the equality of income and expenditure was assured at a level consistent with full employment, one could safely assume that market forces would take care of the allocation of resources, so that markets would be cleared and the conditions of microeconomic general equilibrium satisfied, at least as a first approximation. This version of the neoclassical synthesis was obviously ad hoc and an unsatisfactory resolution of the incommensurability of the two levels of analysis. Don Patinkin sought to provide a rigorous reconciliation of the two levels of analysis in his treatise Money, Interest and Prices. But for all its virtues – and they are numerous – Patinkin’s treatise failed to bridge the gap between the two levels of analysis.

As I mentioned recently in a post on Romer and Lucas, Kenneth Arrow in a 1967 review of Samuelson’s Collected Works commented disparagingly on the neoclassical synthesis of which Samuelson was a leading proponent. The widely shared dissatisfaction expressed by Arrow motivated much of the work that soon followed on the microfoundations of macroeconomics exemplified in the famous 1970 Phelps volume. But the motivation for the search for microfoundations was then (before the rational expectations revolution) to specify the crucial deviations from the assumptions underlying the standard Walrasian general-equilibrium model that would generate actual or seeming price rigidities, which a straightforward – some might say superficial — understanding of neoclassical microeconomic theory suggested were necessary to explain why, after a macro-disturbance, equilibrium was not rapidly restored by price adjustments. Two sorts of explanations emerged from the early microfoundations literature: a) search and matching theories assuming that workers and employers must expend time and resources to find appropriate matches; b) institutional theories of efficiency wages or implicit contracts that explain why employers and workers prefer layoffs to wage cuts in response to negative demand shocks.

Forty years on, the search and matching theories do not seem capable of accounting for the magnitude of observed fluctuations in employment or the cyclical variation in layoffs, and the institutional theories are still difficult to reconcile with the standard neoclassical assumptions, remaining an ad hoc appendage to New Keynesian models that otherwise adhere to the neoclassical paradigm. Thus, although the original neoclassical synthesis in which the Keynesian income-expenditure model was seen as a pre-condition for the validity of the neoclassical model was rejected within a decade of Arrow’s dismissive comment about the neoclassical synthesis, Tom Sargent has observed in a recent review of Robert Lucas’s Collected Papers on Monetary Theory that Lucas has implicitly adopted a new version of the neoclassical synthesis dominated by an intertemporal neoclassical general-equilibrium model, but with the proviso that substantial shocks to aggregate demand and the price level are prevented by monetary policy, thereby making the neoclassical model a reasonable approximation to reality.

Ok, so you are probably asking what all this has to do with the mind-body problem. A lot, I think, in that both the neoclassical synthesis and the mind-body problem involve a disconnect between two kinds – two levels – of explanation. The neoclassical synthesis asserts some sort of connection – but a problematic one — between the explanatory apparatus – macroeconomics — used to understand the cyclical fluctuations of what we are used to thinking of as the aggregate economy and the explanatory apparatus – microeconomics — used to understand the constituent elements of the aggregate economy — households and firms — and how those elements are related to, and interact with, each other.

The mind-body problem concerns the relationship between the mental – our direct experience of a conscious inner life of thoughts, emotions, memories, decisions, hopes and regrets — and the physical – matter, atoms, neurons. A basic postulate of science is that all phenomena have material causes. So the existence of conscious states that seem to us, by way of our direct experience, to be independent of material causes is also highly problematic. There are a few strategies for handling the problem. One is to assert that the mind truly is independent of the body, which is to say that consciousness is not the result of physical causes. A second is to say that mind is not independent of the body; we just don’t understand the nature of the relationship. There are two possible versions of this strategy: a) that although the nature of the relationship is unknown to us now, advances in neuroscience could reveal to us the way in which consciousness is caused by the operation of the brain; b) although our minds are somehow related to the operation of our brains, the nature of this relationship is beyond the capacity of our minds or brains to comprehend owing to considerations analogous to Godel’s incompleteness theorem (a view espoused by the philosopher Colin McGinn among others); in other words, the mind-body problem is inherently beyond human understanding. And the third strategy is to deny the existence of consciousness, because a conscious state is identical with the physical state of a brain, so that consciousness is just an epiphenomenon of a brain state; we in our naivete may think that our conscious states have a separate existence, but those states are strictly identical with corresponding brain states, so that whatever conscious state that we think we are experiencing has been entirely produced by the physical forces that determine the behavior of our brains and the configuration of its physical constituents.

The first, and probably the last, thing that one needs to understand about the third strategy is that, as explained by Colin McGinn (see e.g., here), its validity has not been demonstrated by neuroscience or by any other branch of science; it is, no less than any of the other strategies, strictly a metaphysical position. The mind-body problem is a problem precisely because science has not even come close to demonstrating how mental states are caused by, let alone that they are identical to, brain states, despite some spurious misinterpretations of research that purport to show such an identity.

Analogous to the scientific principle that all phenomena have material or physical causes, there is in economics and social science a principle called methodological individualism, which roughly states that explanations of social outcomes should be derived from theories about the conduct of individuals, not from theories about abstract social entities that exist independently of their constituent elements. The underlying motivation for methodological individualism (as opposed to political individualism, with which it is related but from which it is distinct) was to counter certain ideas popular in the nineteenth and twentieth centuries asserting the existence of metaphysical social entities like “history” that are somehow distinct from yet impinge upon individual human beings, and that there are laws of history or social development from which future states of the world can be predicted, as Hegel, Marx and others tried to do. This notion gave rise to two famous books by Popper: The Open Society and its Enemies and The Poverty of Historicism. Methodological individualism as articulated by Popper was thus primarily an attack on the attribution of special powers to determine the course of future events to abstract metaphysical or mystical entities like history or society that are supposedly things or beings in themselves distinct from the individual human beings of which they are constituted. Methodological individualism does not deny the existence of collective entities like society; it simply denies that such collective entities exist as objective facts that can be observed as such. Our apprehension of these entities must be built up from more basic elements — individuals and their plans, beliefs and expectations — that we can apprehend directly.

However, methodological individualism is not the same as reductionism; methodological individualism teaches us to look for explanations of higher-level phenomena, e.g., a pattern of social relationships like the business cycle, in terms of the basic constituents forming the pattern: households, business firms, banks, central banks and governments. It does not assert identity between the pattern of relationships and the constituent elements; it says that the pattern can be understood in terms of interactions between the elements. Thus, a methodologically individualistic explanation of the business cycle in terms of the interactions between agents – households, businesses, etc. — would be analogous to an explanation of consciousness in terms of the brain if an explanation of consciousness existed. A methodologically individualistic explanation of the business cycle would not be analogous to an assertion that consciousness exists only as an epiphenomenon of brain states. The assertion that consciousness is nothing but the epiphenomenon of a corresponding brain state is reductionist; it asserts an identity between consciousness and brain states without explaining how consciousness is caused by brain states.

In business-cycle theory, the analogue of such a reductionist assertion of identity between higher-level and lower-level phenomena is the assertion that the business cycle is not the product of the interaction of individual agents, but is simply the optimal plan of a representative agent. On this account, the business cycle becomes an epiphenomenon, apparent fluctuations being nothing more than the optimal choices of the representative agent. Of course, everyone knows that the representative agent is merely a convenient modeling device in terms of which a business-cycle theorist tries to account for the observed fluctuations. But that is precisely the point. The whole exercise is a sham; the representative agent is an as-if device that does not ground business-cycle fluctuations in the conduct of individual agents and their interactions, but simply asserts an identity between those interactions and the supposed decisions of the fictitious representative agent. The optimality conditions in terms of which the model is solved completely disregard the interactions between individuals that might cause an unintended pattern of relationships between those individuals. The distinctive feature of methodological individualism is precisely the idea that the interactions between individuals can lead to unintended consequences; it is by way of those unintended consequences that a higher-level pattern might emerge from interactions among individuals. And those individual interactions are exactly what is suppressed by representative-agent models.

So the notion that any analysis premised on a representative agent provides microfoundations for macroeconomic theory seems to be a travesty built on a total misunderstanding of the principle of methodological individualism that it purports to affirm.

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely any less fervor by the left than the right. It’s only some eccentric economists – from diverse positions on the political spectrum – that have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.
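
The logic can be stated schematically (my own compressed paraphrase of the Lipsey-Lancaster argument, in notation of my choosing). Suppose welfare $F(x_1, \dots, x_n)$ is maximized subject to a transformation constraint $\Phi(x_1, \dots, x_n) = 0$. The first-best optimum requires

$$\frac{F_i}{F_n} = \frac{\Phi_i}{\Phi_n}, \qquad i = 1, \dots, n-1,$$

i.e., every marginal rate of substitution equals the corresponding marginal rate of transformation. Now suppose one of these conditions cannot be satisfied, so that the additional constraint

$$\frac{F_1}{F_n} = k\,\frac{\Phi_1}{\Phi_n}, \qquad k \neq 1,$$

must be imposed on the maximization. The first-order conditions of the new, constrained problem will not, in general, include $F_i/F_n = \Phi_i/\Phi_n$ for the remaining goods; satisfying any one of the first-best conditions is no longer, by itself, desirable once a single one of them is violated.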

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to work, as shown by the Sonnenschein-Mantel-Debreu theorem.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, market prices determined in individual markets need not ensure that all markets clear simultaneously or satisfy the optimality conditions.

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.
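
Here is a minimal numerical sketch of that breakdown (a stylized illustration of my own, not a full general-equilibrium economy): two interrelated markets with linear excess-demand functions, each of which, taken in isolation, is well behaved in the sense that its excess demand falls as its own price rises. Because the cross-market effects are strong, a rule that adjusts each price only in response to its own market’s imbalance moves the system away from the equilibrium price vector rather than toward it.

```python
# Stylized two-market example (illustrative numbers only): own-price effects on
# excess demand are negative, so each market looks "stable" in isolation, but
# the cross-market effects make single-market price adjustment move away from
# the equilibrium price vector.

import numpy as np

p_star = np.array([1.0, 1.0])          # the (known) equilibrium price vector
A = np.array([[-1.0, 3.0],             # Jacobian of excess demands: negative
              [ 2.0, -1.0]])           # own effects, large cross effects

def excess_demand(p):
    return A @ (p - p_star)            # linear excess-demand functions

p = np.array([1.1, 0.9])               # start slightly away from equilibrium
for t in range(25):
    p = p + 0.1 * excess_demand(p)     # each price reacts only to its own market

print("final prices:", np.round(p, 3),
      "distance from equilibrium:", round(float(np.linalg.norm(p - p_star)), 3))
# The distance grows from about 0.14 to about 0.4: partial, market-by-market
# price adjustment carries no guarantee of restoring equilibrium in all markets.
```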

In the standard microeconomics we are taught in econ 1 and econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of second best is the statement by the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people that are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. The idea that a partial-equilibrium response, a fall in money wages, can by itself remedy a general-disequilibrium disorder is untenable. Keynes and the Keynesians were therefore completely wrong to accuse Say of committing a fallacy in diagnosing the cause of depressions. The only fallacy lay in the assumption that market adjustments would automatically ensure the restoration of something resembling full-employment equilibrium.

In Praise of Israel Kirzner

Over the holiday weekend, I stumbled across, to my pleasant surprise, the lecture given just a week ago by Israel Kirzner on being awarded the 2015 Hayek medal by the Hayek Gesellschaft in Berlin. The medal, it goes without saying, was richly deserved, because Kirzner’s long career spanning over half a century has produced hundreds of articles and many books elucidating many important concepts in various areas of economics, but especially on the role of competition and entrepreneurship (the title of his best known book) in theory and in practice. A student of Ludwig von Mises, when Mises was at NYU in the 1950s, Kirzner was able to recast and rework Mises’s ideas in a way that made them more accessible and more relevant to younger generations of students than were the didactic and dogmatic pronouncements so characteristic of Mises’s own writings. Not that there wasn’t and still isn’t a substantial market niche in which such didacticism and dogmatism is highly prized, but there are also many for whom those features of the Misesian style don’t go down quite so easily.

But it would be very unfair, and totally wrong, to think of Kirzner as a mere popularizer of Misesian doctrines. Although, in his modesty and self-effacement, Kirzner made few, if any, claims of originality for himself, his contributions, applying ideas that he learned from Mises, or, having developed them himself, read into Mises, were in their own way not at all lacking in originality and creativity. In a certain sense, his contribution was, in its own way, entrepreneurial, i.e., taking a concept or an idea or an analytical tool applied in one context and deploying that concept, idea, or tool in another context. It’s worth mentioning that a reverential attitude towards one’s teachers and intellectual forebears is not only very much characteristic of the Talmudic tradition of which Kirzner is also an accomplished master, but it’s also characteristic, or at least used to be, of other scholarly traditions, notably Cambridge, England, where such illustrious students of Alfred Marshall as Frederick Lavington and A. C. Pigou viewed themselves as merely elaborating on doctrines that had already been expounded by Marshall himself, Pigou having famously said of his own voluminous work, “it’s all in Marshall.”

But rather than just extol Kirzner’s admirable personal qualities, I want to discuss what Kirzner said in his Hayek lecture. His main point was to explain how, in his view, the Austrian tradition, just as it seemed to be petering out in the late 1930s and 1940s, evolved from just another variant school of thought within the broader neoclassical tradition that emerged late in the 19th century from the marginal revolution started almost simultaneously around 1870 by William Stanley Jevons in England, Carl Menger in Austria, and Leon Walras in France/Switzerland, into a completely distinct school of thought very much at odds with the neoclassical mainstream. In Kirzner’s view, the divergence between Mises and Hayek on the one hand and the neoclassical mainstream on the other was that Mises and Hayek went further in developing the subjectivist paradigm underlying the marginal-utility theory of value introduced by Jevons, Menger, and Walras in opposition to the physicalist, real-cost, theory of value inherited from Smith, Ricardo, Mill, and other economists of the classical school.

The marginal revolution shifted the focus of economics from the objective physical and technological forces that supposedly determine cost, which, in turn, supposedly determines value, to subjective, not objective, mental states or opinions that characterize preferences, which, in turn, determine value. And, as soon became evident, the new subjective marginalist theory of value implied that cost, at bottom, is nothing other than a foregone value (opportunity cost), so that the classical doctrine that cost determines value has it exactly backwards: it is really value that determines cost (though it is usually a mistake to suppose that in complex systems causation runs in only one direction).

However, as the neoclassical research program evolved, the subjective character of the underlying theory was increasingly de-emphasized, a de-emphasis that was probably driven by two factors: a) the profoundly paradoxical nature of the idea that value determines cost, not the reverse, and b) the mathematicization of economics and the increasing adoption, in the Walrasian style, of functional representations of utility and production, leading to the construction of models of economic equilibrium that, under appropriate continuity and convexity assumptions, could be shown to have a theoretically determinate and unique solution. The false impression was created that economics was an objective science like physics, and that economics should aim to create objective and deterministic scientific representations (models) of complex economic systems that could then yield quantitatively precise predictions, in the same way that physics produced models of planetary motion yielding quantitatively precise predictions.

What Hayek and Mises objected to was the idea, derived from the functional approach to economic theory, that economics is just a technique of optimization subject to constraints, that all economic problems can be reduced to optimization problems. And it is a historical curiosum that one of the chief contributors to this view of economics was none other than Lionel Robbins in his seminal methodological work An Essay on the Nature and Significance of Economic Science, written precisely during that stage of his career when he came under the profound influence of Mises and Hayek, but before Mises and Hayek adopted the more radically subjective approach that characterizes their views in the late 1930s and 1940s. The critical works are Hayek’s essays reproduced as The Counterrevolution of Science and his essays “Economics and Knowledge,” “The Facts of the Social Sciences,” “The Use of Knowledge in Society,” and “The Meaning of Competition,” all contained in the remarkable volume Individualism and Economic Order.

What the neoclassical economists who developed this deterministic version of economic theory, a version wonderfully expounded in Samuelson’s Foundations of Economic Analysis and ultimately embodied in the Arrow-Debreu general-equilibrium model, failed to see is that the model could not incorporate in an intellectually satisfying or empirically fruitful way the process of economic growth and development. The fundamental characteristic of the Arrow-Debreu model is its perfection. The solution of the model is Pareto-optimal, and cannot be improved upon; the entire unfolding of the model from beginning to end proceeds entirely according to a plan (actually a set of perfectly consistent and harmonious individual plans) with no surprises and no disappointments relative to what was already foreseen and planned — in detail — at the outset. Nothing is learned in the unfolding and execution of those detailed, perfectly harmonious plans that was not already known at the beginning, whatever happens having already been foreseen. If something truly new had been learned in the course of the unfolding and execution of those plans, the new knowledge would necessarily have been surprising, and a surprise would necessarily have generated some disappointment and caused some revision of a previously formulated plan of action. But that is precisely what the Arrow-Debreu model, in its perfection, disallows. And that is what, from the perspective of Mises, Hayek, and Kirzner, is exactly wrong with the entire development of neoclassical theory for the past 80 years or more.

The specific point of the neoclassical paradigm on which Kirzner has focused his criticism is the inability of the neoclassical paradigm to find a place for the entrepreneur and entrepreneurial activity in its theoretical apparatus. Profit is what is earned by the entrepreneur, but in full general equilibrium, all income is earned by factors of production, so profits have been exhausted and the entrepreneur euthanized.

Joseph Schumpeter, who was torn between his admiration for the Walrasian general equilibrium system and his Austrian education in economics, tried to reintroduce the entrepreneur as the disruptive force behind the process of creative destruction, the role of the entrepreneur being to disrupt the harmonious equilibrium of the Walrasian system by innovating – introducing either new techniques of production or new consumer products. Kirzner, however, though not denying that disruptive Schumpeterian entrepreneurs may come on the scene from time to time, is focused on a less disruptive, but more pervasive and more characteristic type of entrepreneurship, the kind that is on the lookout for – that is alert to – the profit opportunities that are always latent in the existing allocation of resources and the current structure of input and output prices. Prices in some places or at some times may be high relative to other places and other times, so the entrepreneurial maxim is: buy cheap and sell dear.

Not so long ago, someone noticed that used book prices on Amazon are typically lower at the end of the semester or the school year, when students are trying to unload the books that they don’t want to keep, than they are at the beginning of the semester, when students are buying the books that they will have to read in the new semester. By buying the books students are selling at the end of the school year and selling them at the beginning of the school year, the insightful entrepreneur reduces the cost to students of obtaining the books they use during the school year. That bit of insight and alertness is classic Kirznerian entrepreneurship in action; it was rewarded by a profit, but the activity was equilibrating, not disruptive, reducing the spread between prices for the same, or very similar, commodities paid by buyers or received by sellers at different times of the year.

Sometimes entrepreneurship involves recognizing that a resource or a factor of production is undervalued in its current use. Shifting the resource from a relatively low-valued use to a higher-value use generates a profit for the alert entrepreneur. Here, again, the activity is equilibrating not disruptive. And as others start to catch on, the profit earned on the spread between the value of the resource in its old and new uses will be eroded by the competition of copy-cat entrepreneurs and of other entrepreneurs with an even better idea derived from an even more penetrating insight.

Here is another critical point. Rarely does a new idea come into existence and cause just one change. Every change creates a new and different situation, potentially creating further opportunities to be taken advantage of by other alert and insightful individuals. In an open competitive system, there is no reason why the process of discovery and adaptation should ever come to an end state in which new insights can no longer be made and change is no longer possible.

However, it is also the case that knowledge or information is often imperfect and faulty, and that incentives are imperfectly aligned with actual benefits, so that changes can be profitable even though they lead to inferior outcomes. Margarine can be substituted for butter, and trans fats for saturated fats. Big mistake. But who knew? And processed carbohydrates can replace fats in low-fat diets. Big mistake. But who knew?

I myself had the pleasure of experiencing first-hand, on a very small scale to be sure, but still in a very inspiring way, this sort of unplanned, serendipitous connection between my stumbling across Kirzner’s Hayek lecture and, then, after starting to write this post a couple of days ago, doing a Google search on Kirzner plus something else (can’t remember what) and seeing a link to Deirdre McCloskey’s paper “A Kirznerian Economic History of the Modern World” in which McCloskey, in somewhat over-the-top style, waxed eloquent about the long and circuitous evolution of her views from the strict neoclassicism in which she was indoctrinated at Harvard and later at Chicago to Kirznerian Austrianism. McCloskey observes in her wonderful paean to Kirzner that growth theory (which is now the core of modern macroeconomics) is utterly incapable of accounting for the historically unique period of economic growth over the past 200 years in what we now refer to as the developed world.

I had faced repeatedly 1964 to 2010 the failure of oomph in the routine, Samuelsonian arguments, such as accumulation inspired by the Protestant ethic, or trade as an engine of growth, or Marxian exploitation, or imperialism as the last stage of capitalism, or factor-biased induced technical change, or Unified Growth Theory. My colleagues at the University of Chicago in the 1970s, Al Harberger and Bob Fogel, pioneered the point that Harberger Triangles of efficiency gain are small (Harberger 1964; Fogel 1965). None of the allocative, capital-accumulation explanations of economic growth since Adam Smith have worked scientifically, which I show in depressing detail in Bourgeois Dignity. None of them have the quantitative force and the distinctiveness to the modern world and the West to explain the Great Fact. No oomph.

What works? Creativity. Innovation. Discovery. The Austrian core. And where did discovery come from? It came from the releasing of the West from ancient constraints on the dignity and liberty of the bourgeoisie, producing an intellectual and engineering explosion of ideas. As the banker and science writer Matt Ridley has recently described it (2010; compare Storr 2008), ideas started breeding, and having baby ideas, who bred further. The liberation of the Jews in the West is a good emblem for the wider story. A people of the book began to be allowed into commercial centers in Holland and then England, and allowed outside the shtetl and the ghetto, and into the universities of Berlin and Manchester. They commenced innovating on a massive, breeding-reactor scale, in good ways (Rothschild, Einstein) and in bad (Marx, Freud).

Ridley explains how the evolutionary biologist Leigh Van Valen proposed in 1973 a Red Queen Hypothesis that would explain why commercial and mechanical ideas, when first allowed to evolve, had to run faster and faster to stay in the same place. Economists would call it the dissipation of initial rents, in the second and third acts of the economic drama. Once breeding ideas were set free in the seventeenth century they created more and more opportunities for Kirznerian alertness. The opportunities were alertly taken up, and persuasively argued for, and at length routinized. The idea of the steam engine had babies with the idea of rails and the idea of wrought iron, and the result was the railroads. The new generation of ideas – in view of the continuing breeding of ideas going on in the background – created by their very routinization still more Kirznerian opportunities. Railroads once they were routine led to Sears, Roebuck and Montgomery Ward. And the routine then created prosperous people, such as my grandfather the freight conductor on the Milwaukee Road or my great-grandfather the postal clerk on the Chicago & Western Indiana or my other great-grandfather who invented the ring on telephones (which extended the telegraph, which itself had made tight scheduling of trains possible). Some became prosperous enough to take up the new ideas, and all became prosperous enough under the Great Fact to buy them. If there was no dissipation of the rents to alertness, and no ultimate gain of income to hoi polloi, no third act, no Red Queen effect, then innovation would not have a justification on egalitarian grounds – as in the historical event it surely did have. The Bosses would engorge all the income, as Ricardo in the early days of the Great Fact had feared. But in the event the discovery of which Kirzner and the Austrian tradition speaks enriched in the third act mainly the poor – your ancestors, and Israel's, and mine.

It is the growth and diffusion of knowledge (both practical and theoretical, but especially the former), not the accumulation of capital, that accounts for the spectacular economic growth of the past two centuries. So, all praise to the Austrian economist par excellence, Israel Kirzner. But just to avoid any misunderstanding, I will state for the record that my admiration for Kirzner does not mean that I have gone wobbly on the subject of Austrian Business Cycle Theory, a subject on which Kirzner has been, so far as I know, largely silent – yet further evidence, as if any were needed, of Kirzner's impeccable judgment.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson's outrage at Noah's Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don't know if Lucas ever got over it, but evidently Williamson hasn't.

Now, to be fair, Lucas's outrage, though misplaced, was understandable. Ball and Mankiw, in an ironic tone, cast themselves as defenders of traditional macroeconomics – both Keynesians and Monetarists – against the onslaught of "heretics" like Lucas, Sargent, Kydland and Prescott. Lucas was evidently so offended by that framing that he stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it's pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there's an excess demand and reduces price when there's an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn't there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the "mechanical" forces tending to move an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
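To make the difficulty concrete, here is a minimal sketch (my own illustration, not drawn from any of the authors discussed) of the standard price-adjustment rule applied to two interdependent markets. The excess-demand functions, adjustment speed, and starting prices are made-up assumptions chosen purely for illustration; the point is only that the rule adjusts each price in isolation while every adjustment shifts the excess demand in the other market.

```python
# Minimal sketch of the standard price-adjustment rule in two interdependent
# markets.  The excess-demand functions, adjustment speed, and starting prices
# are made-up illustrative assumptions, not taken from any author discussed.

def excess_demand(p1, p2):
    # Linear excess demands with cross-price effects: changing the price in
    # one market shifts the excess demand in the other.
    z1 = 10 - 2.0 * p1 + 1.5 * p2
    z2 = 8 + 1.5 * p1 - 2.0 * p2
    return z1, z2

def tatonnement(p1, p2, speed, steps=60):
    # Raise the price where there is excess demand, lower it where there is
    # excess supply; no trading takes place at the provisional prices.
    path = []
    for _ in range(steps):
        z1, z2 = excess_demand(p1, p2)
        p1 = max(p1 + speed * z1, 0.0)
        p2 = max(p2 + speed * z2, 0.0)
        path.append((p1, p2))
    return path

if __name__ == "__main__":
    for speed in (0.4, 0.7):
        p1, p2 = tatonnement(1.0, 1.0, speed)[-1]
        print(f"speed={speed}: final prices p1={p1:12.2f} p2={p2:12.2f}")
```

With the adjustment speed set to 0.4 this particular process happens to converge; raise it to 0.7 and the same rule, applied to the same excess demands, overshoots and fails to converge. Convergence, in other words, is a property of the particular adjustment process, not something guaranteed by price flexibility itself.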

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn't address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Traffic Jams and Multipliers

Since my previous post, which I closed by quoting the abstract of Brian Arthur's paper "Complexity Economics: A Different Framework for Economic Thought," I have been reading his paper and some of the papers he cites, especially Magda Fontana's paper "The Santa Fe Perspective on Economics: Emerging Patterns in the Science of Complexity," and Mark Blaug's paper "The Formalist Revolution of the 1950s." The papers bring together a number of themes that I have been emphasizing in previous posts on what I consider the misguided focus of modern macroeconomics on rational-expectations equilibrium as the organizing principle of macroeconomic theory. Among these themes are the importance of coordination failures in explaining macroeconomic fluctuations, the inappropriateness of the full general-equilibrium paradigm in macroeconomics, and the mistaken transformation of microfoundations from a theoretical problem to be solved into an absolute methodological requirement to be insisted upon (almost exactly analogous to the absurd transformation of the mind-body problem into a dogmatic insistence that the mind is merely a figment of our own imagination) – or, stated another way, a recognition that macrofoundations are just as necessary for economics as microfoundations.

Let me quote again from Arthur's essay; this time a beautiful passage which captures the interdependence between the micro and macro perspectives:

To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents’ behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity in other words asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question; we are asking how a process is created from the purposed actions of multiple agents. And so economics early in its history took a simpler approach, one more amenable to mathematical analysis. It asked not how agents’ behaviors would react to the aggregate patterns these created, but what behaviors (actions, strategies, expectations) would be upheld by — would be consistent with — the aggregate patterns these caused. It asked in other words what patterns would call for no changes in microbehavior, and would therefore be in stasis, or equilibrium. (General equilibrium theory thus asked what prices and quantities of goods produced and consumed would be consistent with — would pose no incentives for change to — the overall pattern of prices and quantities in the economy’s markets. Classical game theory asked what strategies, moves, or allocations would be consistent with — would be the best course of action for an agent (under some criterion) — given the strategies, moves, allocations his rivals might choose. And rational expectations economics asked what expectations would be consistent with — would on average be validated by — the outcomes these expectations together created.)

This equilibrium shortcut was a natural way to examine patterns in the economy and render them open to mathematical analysis. It was an understandable — even proper — way to push economics forward. And it achieved a great deal. Its central construct, general equilibrium theory, is not just mathematically elegant; in modeling the economy it re-composes it in our minds, gives us a way to picture it, a way to comprehend the economy in its wholeness. This is extremely valuable, and the same can be said for other equilibrium modelings: of the theory of the firm, of international trade, of financial markets.

But there has been a price for this equilibrium finesse. Economists have objected to it — to the neoclassical construction it has brought about — on the grounds that it posits an idealized, rationalized world that distorts reality, one whose underlying assumptions are often chosen for analytical convenience. I share these objections. Like many economists, I admire the beauty of the neoclassical economy; but for me the construct is too pure, too brittle — too bled of reality. It lives in a Platonic world of order, stasis, knowableness, and perfection. Absent from it is the ambiguous, the messy, the real. (pp. 2-3)

Later in the essay, Arthur provides a simple example of a non-equilibrium complex process: traffic flow.

A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see. But in practice at high density, a nonequilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down. This immediately compresses the flow, which causes further slowing of the cars behind. The compression propagates backwards, traffic backs up, and a jam emerges. In due course the jam clears. But notice three things. The phenomenon’s onset is spontaneous; each instance of it is unique in time of appearance, length of propagation, and time of clearing. It is therefore not easily captured by closed-form solutions, but best studied by probabilistic or statistical methods. Second, the phenomenon is temporal, it emerges or happens within time, and cannot appear if we insist on equilibrium. And third, the phenomenon occurs neither at the micro-level (individual car level) nor at the macro-level (overall flow on the road) but at a level in between — the meso-level. (p. 9)
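Arthur's example is easy to simulate. Below is a minimal sketch (my own toy model with made-up parameters, not Arthur's) of a ring road on which each car adjusts its speed toward a target that depends on the gap to the car ahead; a single momentary slowdown by one driver is enough to send a wave of compression propagating backward through the traffic.

```python
# Toy car-following model on a circular road, illustrating Arthur's traffic
# example.  Road length, number of cars, reaction coefficients, and the single
# perturbation are all made-up illustrative assumptions.

ROAD = 1000.0   # length of the circular road
N = 50          # number of cars (average gap of 20)
V_MAX = 30.0    # free-flow speed
DT = 0.5        # length of a time step

def step(pos, vel, perturb=False):
    new_pos, new_vel = [], []
    for i in range(N):
        gap = (pos[(i + 1) % N] - pos[i]) % ROAD   # distance to the car ahead
        target = min(V_MAX, 1.5 * gap)             # slow when close, fast when far
        v = vel[i] + 0.5 * (target - vel[i])       # partial adjustment toward target
        if perturb and i == 0:
            v = 2.0                                # one driver momentarily slows down
        new_vel.append(max(v, 0.0))
        new_pos.append((pos[i] + new_vel[i] * DT) % ROAD)
    return new_pos, new_vel

if __name__ == "__main__":
    pos = [i * ROAD / N for i in range(N)]
    vel = [V_MAX] * N
    for t in range(201):
        if t % 25 == 0:
            print(f"t={t:3d}  min speed={min(vel):5.1f}  mean speed={sum(vel)/N:5.1f}")
        pos, vel = step(pos, vel, perturb=(t == 50))
```

The jam that shows up in the minimum-speed column after the perturbation is Arthur's meso-level phenomenon: it is not a property of any individual car's rule, nor of the average flow, and it never appears if one solves only for the equilibrium speed.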

This simple example provides an excellent insight into why macroeconomic reasoning can be led badly astray by focusing on the purely equilibrium relationships characterizing what we now think of as microfounded models. In arguing against the Keynesian multiplier analysis supposedly justifying increased government spending as a countercyclical tool, Robert Barro wrote the following in an unfortunate Wall Street Journal op-ed piece, which I have previously commented on here and here.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

In the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce, just as somehow withdrawing a few cars from a congested road could increase the average speed of all the remaining cars on the road, by a much greater amount than would withdrawing a few cars from an uncongested road. In other words, microresponses may be sensitive to macroconditions.

Hicks on IS-LM and Temporary Equilibrium

Jan, commenting on my recent post about Krugman, Minsky and IS-LM, quoted the penultimate paragraph of J. R. Hicks’s 1980 paper on IS-LM in the Journal of Post-Keynesian Economics, a brand of economics not particularly sympathetic to Hicks’s invention. Hicks explained that in the mid-1930s he had been thinking along lines similar to Keynes’s even before the General Theory was published, and had the basic idea of IS-LM in his mind even before he had read the General Theory, while also acknowledging that his enthusiasm for the IS-LM construct had waned considerably over the years.

Hicks discussed both the similarities and the differences between his model and IS-LM. But as the discussion proceeds, it becomes clear that what he is thinking of as his model is what became his model of temporary equilibrium in Value and Capital. So it really is important to understand what Hicks felt were the similarities as well as the key differences between the temporary-equilibrium model and the IS-LM model. Here is how Hicks put it:

I recognized immediately, as soon as I read The General Theory, that my model and Keynes' had some things in common. Both of us fixed our attention on the behavior of an economy during a period—a period that had a past, which nothing that was done during the period could alter, and a future, which during the period was unknown. Expectations of the future would nevertheless affect what happened during the period. Neither of us made any assumption about "rational expectations"; expectations, in our models, were strictly exogenous. (Keynes made much more fuss over that than I did, but there is the same implication in my model also.) Subject to these data – the given equipment carried over from the past, the production possibilities within the period, the preference schedules, and the given expectations – the actual performance of the economy within the period was supposed to be determined, or determinable. It would be determined as an equilibrium performance, with respect to these data.

There was all this in common between my model and Keynes’; it was enough to make me recognize, as soon as I saw The General Theory, that his model was a relation of mine and, as such, one which I could warmly welcome. There were, however, two differences, on which (as we shall see) much depends. The more obvious difference was that mine was a flexprice model, a perfect competition model, in which all prices were flexible, while in Keynes’ the level of money wages (at least) was exogenously determined. So Keynes’ was a model that was consistent with unemployment, while mine, in his terms, was a full employment model. I shall have much to say about this difference, but I may as well note, at the start, that I do not think it matters much. I did not think, even in 1936, that it mattered much. IS-LM was in fact a translation of Keynes’ nonflexprice model into my terms. It seemed to me already that that could be done; but how it is done requires explanation.

The other difference is more fundamental; it concerns the length of the period. Keynes' (he said) was a "short-period," a term with connotations derived from Marshall; we shall not go far wrong if we think of it as a year. Mine was an "ultra-short-period"; I called it a week. Much more can happen in a year than in a week; Keynes has to allow for quite a lot of things to happen. I wanted to avoid so much happening, so that my (flexprice) markets could reflect propensities (and expectations) as they are at a moment. So it was that I made my markets open only on a Monday; what actually happened during the ensuing week was not to affect them. This was a very artificial device, not (I would think now) much to be recommended. But the point of it was to exclude the things which might happen, and must disturb the markets, during a period of finite length; and this, as we shall see, is a very real trouble in Keynes. (pp. 139-40)

Hicks then explained how the specific idea of the IS-LM model came to him as a result of working on a three-good Walrasian system in which the solution could be described in terms of equilibrium in two markets, the third market necessarily being in equilibrium if the other two were in equilibrium. That’s an interesting historical tidbit, but the point that I want to discuss is what I think is Hicks’s failure to fully understand the significance of his own model, whose importance, regrettably, he consistently underestimated in later work (e.g., in Capital and Growth and in this paper).

The point that I want to focus on is in the second paragraph quoted above where Hicks says "mine [i.e. temporary equilibrium] was a flexprice model, a perfect competition model, in which all prices were flexible, while in Keynes' the level of money wages (at least) was exogenously determined. So Keynes' was a model that was consistent with unemployment, while mine, in his terms, was a full employment model." This, it seems to me, is all wrong, because Hicks is taking a very naïve and misguided view of what perfect competition and flexible prices mean. Those terms are often mistakenly assumed to mean that if prices are simply allowed to adjust freely, all markets will clear and all resources will be utilized.

I think that is a total misconception, and the significance of the temporary-equilibrium construct is in helping us understand why an economy can operate sub-optimally with idle resources even when there is perfect competition and markets "clear." What prevents optimality and allows resources to remain idle despite freely adjusting prices and perfect competition is that the expectations held by agents are not consistent. If expectations are not consistent, the plans based on those expectations are not consistent. If plans are not consistent, then how can one expect resources to be used optimally or even at all? Thus, for Hicks to assert, casually and without explicit qualification, that his temporary-equilibrium model was a full-employment model indicates to me that Hicks was unaware of the deeper significance of his own model.

If we take a full equilibrium as our benchmark, and look at how one of the markets in that full equilibrium clears, we can imagine the equilibrium as the intersection of a supply curve and a demand curve, whose positions in the standard price/quantity space depend on the price expectations of suppliers and of demanders. Different, i.e., inconsistent, price expectations would imply shifts in both the demand and supply curves from those corresponding to full intertemporal equilibrium. Overall, the price expectations consistent with a full intertemporal equilibrium will in some sense maximize total output and employment, so when price expectations are inconsistent with full intertemporal equilibrium, the shifts of the demand and supply curves will be such that they will intersect at points corresponding to less output and less employment than would have been the case in full intertemporal equilibrium. In fact, it is possible to imagine that expectations on the supply side and the demand side are so inconsistent that the point of intersection between the demand and supply curves corresponds to an output (and hence employment) that is far less than it would have been in full intertemporal equilibrium. The problem is not that the price in the market doesn't allow the market to clear. Rather, given the positions of the demand and supply curves, their point of intersection implies a low output, because inconsistent price expectations are such that potentially advantageous trading opportunities are not being recognized.
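A bare-bones numerical illustration (my own, not Hicks's, with made-up linear schedules) may help. Suppose current demand is Q_d = 20 + 0.5·p_b^e − p and current supply is Q_s = 10 − 0.5·p_s^e + p, where p_b^e and p_s^e are the prices buyers and sellers, respectively, expect to prevail next period. If both sides expect p^e = 10, the curves are Q_d = 25 − p and Q_s = 5 + p, and the market clears at p = 10 and Q = 15, so expectations are consistent and fulfilled. If instead buyers expect prices to fall (p_b^e = 4) while sellers expect them to rise (p_s^e = 16), the curves become Q_d = 22 − p and Q_s = 2 + p; the market still clears, and at the same price p = 10, but at Q = 12. Nothing about the price is sticky and nothing prevents the market from clearing, yet output (and the employment needed to produce it) is lower, and both sides' expectations are about to be disappointed.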

So for Hicks to assert that his flexprice temporary-equilibrium model was (in Keynes’s terms) a full-employment model without noting the possibility of a significant contraction of output (and employment) in a perfectly competitive flexprice temporary-equilibrium model when there are significant inconsistencies in expectations suggests strongly that Hicks somehow did not fully comprehend what his own creation was all about. His failure to comprehend his own model also explains why he felt the need to abandon the flexprice temporary-equilibrium model in his later work for a fixprice model.

There is, of course, a lot more to be said about all this, and Hicks’s comments concerning the choice of a length of the period are also of interest, but the clear (or so it seems to me) misunderstanding by Hicks of what is entailed by a flexprice temporary equilibrium is an important point to recognize in evaluating both Hicks’s work and his commentary on that work and its relation to Keynes.

Franklin Fisher on the Stability(?) of General Equilibrium

The eminent Franklin Fisher, winner of the J. B. Clark Medal in 1973, a famed econometrician and antitrust economist, who was the expert economics witness for IBM in its long battle with the U. S. Department of Justice, was later the expert witness for the Justice Department in the antitrust case against Microsoft, and is currently emeritus professor of microeconomics at MIT, visited the FTC today to give a talk about proposals for the efficient sharing of water among Israel, Palestine, and Jordan. The talk was interesting and informative, but I must admit that I was more interested in Fisher's views on the stability of general equilibrium, the subject of a monograph he wrote for the Econometric Society, Disequilibrium Foundations of Equilibrium Economics, a book which I have not yet read but hope to read before very long.

However, I did find a short paper by Fisher, "The Stability of General Equilibrium – What Do We Know and Why Is It Important?" (available here), which was included in the volume General Equilibrium Analysis: A Century after Walras, edited by Pascal Bridel.

Fisher's contribution was to show that the early stability analyses of general equilibrium, despite the efforts of some of the best economists of the mid-twentieth century, e.g., Hicks, Samuelson, Arrow and Hurwicz (all Nobel Prize winners), failed to provide a useful analysis of the question whether the general equilibrium described by Walras, whose existence was first demonstrated under very restrictive assumptions by Abraham Wald, and later under more general conditions by Arrow and Debreu, is stable or not.

Although we routinely apply comparative-statics exercises to derive what Samuelson mislabeled “meaningful theorems,” meaning refutable propositions about the directional effects of a parameter change on some observable economic variable(s), such as the effect of an excise tax on the price and quantity sold of the taxed commodity, those comparative-statics exercises are predicated on the assumption that the exercise starts from an initial position of equilibrium and that the parameter change leads, in a short period of time, to a new equilibrium. But there is no theory describing the laws of motion leading from one equilibrium to another, so the whole exercise is built on the mere assumption that a general equilibrium is sufficiently stable so that the old and the new equilibria can be usefully compared. In other words, microeconomics is predicated on macroeconomic foundations, i.e., the stability of a general equilibrium. The methodological demand for microfoundations for macroeconomics is thus a massive and transparent exercise in question begging.
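For concreteness, the excise-tax exercise mentioned above runs roughly as follows (a textbook sketch, with linear schedules assumed purely for illustration). Let demand be Q = a − b·p and supply Q = c + d·(p − t), where p is the price paid by buyers and t a per-unit excise tax collected from sellers. Equating the two gives

p* = (a − c + d·t)/(b + d),  so that  dp*/dt = d/(b + d)  and  dQ*/dt = −b·d/(b + d).

The "meaningful theorem" is that the buyers' price rises by less than the tax and the quantity sold falls. But the inference from the formula to a prediction about an actual market presupposes that, once the tax is imposed, the market actually travels from the old equilibrium to the new one reasonably quickly, which is precisely the stability question that the theory leaves unanswered.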

In his paper on the stability of general equilibrium, Fisher observes that there are four important issues to be explored by general-equilibrium theory: existence, uniqueness, optimality, and stability. Of these he considers optimality to be the most important, as it provides a justification for a capitalistic market economy. Fisher continues:

So elegant and powerful are these results, that most economists base their conclusions upon them and work in an equilibrium framework – as they do in partial equilibrium analysis. But the justification for so doing depends on the answer to the fourth question listed above, that of stability, and a favorable answer to that is by no means assured.

It is important to understand this point which is generally ignored by economists. No matter how desirable points of competitive general equilibrium may be, that is of no consequence if they cannot be reached fairly quickly or maintained thereafter, or, as might happen when a country decides to adopt free markets, there are bad consequences on the way to equilibrium.

Milton Friedman remarked to me long ago that the study of the stability of general equilibrium is unimportant, first, because it is obvious that the economy is stable, and, second, because if it isn’t stable we are wasting our time. He should have known better. In the first place, it is not at all obvious that the actual economy is stable. Apart from the lessons of the past few years, there is the fact that prices do change all the time. Beyond this, however, is a subtler and possibly more important point. Whether or not the actual economy is stable, we largely lack a convincing theory of why that should be so. Lacking such a theory, we do not have an adequate theory of value, and there is an important lacuna in the center of microeconomic theory.

Yet economists generally behave as though this problem did not exist. Perhaps the most extreme example of this is the view of the theory of Rational Expectations that any disequilibrium disappears so fast that it can be ignored. (If the 50-dollar bill were really on the sidewalk, it would be gone already.) But this simply assumes the problem away. The pursuit of profits is a major dynamic force in the competitive economy. To only look at situations where the Invisible Hand has finished its work cannot lead to a real understanding of how that work is accomplished. (p. 35)

I would also note that Fisher confirms a proposition that I have advanced a couple of times previously, namely that Walras’s Law is not generally valid except in a full general equilibrium with either a complete set of markets or correct price expectations. Outside of general equilibrium, Walras’s Law is valid only if trading is not permitted at disequilibrium prices, i.e., Walrasian tatonnement. Here’s how Fisher puts it.

In this context, it is appropriate to remark that Walras’s Law no longer holds in its original form. Instead of the sum of the money value of all excess demands over all agents being zero, it now turned out that, at any moment of time, the same sum (including the demands for shares of firms and for money) equals the difference between the total amount of dividends that households expect to receive at that time and the amount that firms expect to pay. This difference disappears in equilibrium where expectations are correct, and the classic version of Walras’s Law then holds.
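In notation (mine, not Fisher's), the modified law he describes reads roughly:

∑_i p·z_i(p) = D^h − D^f,

where z_i is agent i's vector of excess demands (including demands for firms' shares and for money), p·z_i its money value, D^h the total dividends households expect to receive at that moment, and D^f the total dividends firms expect to pay. When expectations are correct, D^h = D^f, the right-hand side vanishes, and the classic Walras's Law, ∑_i p·z_i(p) = 0, is recovered.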


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
