Posts Tagged 'Robert Lucas'

My Paper on Hayek, Hicks and Radner and 3 Equilibrium Concepts Now Available on SSRN

A little over a year ago, I posted a series of posts (here, here, here, here, and here) that came together as a paper (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational-Expectations”) that I presented at the History of Economics Society in Toronto in June 2017. After further revisions I posted the introductory section and the concluding section in April before presenting the paper at the Colloquium on Market Institutions and Economic Processes at NYU.

I have since been making further revisions and tweaks to the paper as well as adding the names of Hicks and Radner to the title, and I have just posted the current version on SSRN where it is available for download.

Here is the abstract:

Along with Erik Lindahl and Gunnar Myrdal, F. A. Hayek was among the first to realize that the necessary conditions for intertemporal, as opposed to stationary, equilibrium could be expressed in terms of correct expectations of future prices, often referred to as perfect foresight. Subsequently, J. R. Hicks further elaborated the concept of intertemporal equilibrium in Value and Capital in which he also developed the related concept of a temporary equilibrium in which future prices are not correctly foreseen. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of his original 1928 paper on intertemporal equilibrium. As a preliminary, the paper explains the significance of Hayek’s 1937 distinction between correct expectations and perfect foresight. In non-chronological order, the three developments of interest are: (1) Roy Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; (2) Hicks’s temporary equilibrium model, and an important extension of that model by C. J. Bliss; (3) the Muth rational-expectations model and its illegitimate extension by Lucas from its original microeconomic application into macroeconomics. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model would seem to have been the natural development of Hayek’s approach. The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and ignores the fundamental Hayekian insights about the meaning of intertemporal equilibrium.

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium” which I presented last June at the History of Economics Society meeting in Toronto, and which I presented piecemeal in a series of posts last May and June. This post corresponds to the first part of this post from last May 21.

Equilibrium is an essential concept in economics. While equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its meaning cannot be straightforwardly carried over from physics into economics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept and its essential, though limited, role in economic theory could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in the sense of all movements being repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance – e.g., in population, technology, or tastes – displaces the system, then there would seem to be no reason for the prices paid and quantities produced to change in that system. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – savings and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state that economic processes would move toward if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end-state towards which natural economic processes were driving an economic system.

This not very satisfactory concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.
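Mutual consistency of optimal plans can be illustrated with a toy example of my own (not part of the original discussion): a Cournot duopoly in which each firm’s output is optimal given the other’s, and iterating the two best responses converges to a pair of mutually consistent plans.

    # Two Cournot duopolists facing inverse demand p = a - (q1 + q2), unit cost c.
    # Equilibrium = a pair of plans, each optimal given the other (illustrative numbers).
    a, c = 100.0, 10.0
    q1 = q2 = 0.0
    for _ in range(50):
        q1 = (a - c - q2) / 2.0   # firm 1's optimal reply to firm 2's plan
        q2 = (a - c - q1) / 2.0   # firm 2's optimal reply to firm 1's plan
    print(q1, q2)                 # -> 30.0, 30.0: neither plan is suboptimal given the other

At the fixed point neither firm can do better given what the other is doing; away from it, one firm’s suboptimal choice makes the other’s contingent plan suboptimal too, just as in the jig-saw analogy.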

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could only be coherently articulated on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed and its significance overlooked for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek who first articulated the concept, and exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of expectational and plan consistency with which Hayek, Hicks, Radner and others who developed the idea of intertemporal equilibrium were so profoundly concerned.

The Phillips Curve and the Lucas Critique

With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973!), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be.

The prevailing view seems to be that the Federal Open Market Committee (FOMC), in raising interest rates — even before there is any real evidence of an increase in an inflation rate that is still below the Fed’s 2% target — is reasoning that a preemptive strike is required to prevent inflation from accelerating and rising above what has become an inflation ceiling — not an inflation target — of 2%.

Why does the Fed believe that inflation is going to rise? That’s what the econoblogosphere has, of late, been trying to figure out. And the consensus seems to be that the FOMC has concluded that the risk that inflation will break the 2% ceiling it has implicitly adopted has become unacceptably high. That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. And so the next question is: why is the FOMC fretting about the Phillips Curve?

In a blog post earlier this week, David Andolfatto of the St. Louis Federal Reserve Bank, tried to spell out in some detail the kind of reasoning that lay behind the FOMC decision to actively tighten the stance of monetary policy to avoid any increase in inflation. At the same time, Andolfatto expressed his own view, that the rate of inflation is not determined by the rate of unemployment, but by the stance of monetary policy.

Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism.

Here are three questions about inflation, unemployment, and Fed policy. Some people may imagine that they’re the same question, but they definitely aren’t:

  1. Does the Fed know how low the unemployment rate can go?
  2. Should the Fed be tightening now, even though inflation is still low?
  3. Is there any relationship between unemployment and inflation?

It seems obvious to me that the answer to (1) is no. We’re currently well below historical estimates of full employment, and inflation remains subdued. Could unemployment fall to 3.5% without accelerating inflation? Honestly, we don’t know.

Agreed.

I would also argue that the Fed is making a mistake by tightening now, for several reasons. One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Another is that the costs of getting it wrong are asymmetric: waiting too long to tighten might be awkward, but tightening too soon increases the risks of falling back into a liquidity trap. Finally, there are very good reasons to believe that the Fed’s 2 percent inflation target is too low; certainly the belief that it was high enough to make the zero lower bound irrelevant has been massively falsified by experience.

Agreed, but the better approach would be to target the price level, or, even better, nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway for inflation to overshoot the inflation target without undermining the credibility of the commitment to price stability.

But should we drop the whole notion that unemployment has anything to do with inflation? Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand.

But one can fully accept that inflation is driven by an excess supply of money without denying that there is a link between inflation and unemployment. In the normal course of events an excess supply of money may lead to increased spending as people attempt to exchange their excess cash balances for real goods and services. The increased spending can induce additional output and additional employment along with rising prices. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, thereby reducing output. So the inflation-unemployment relationship results from the effects induced by a particular causal circumstance. Nor does that mean that an imbalance in the supply of money is the only cause of inflation or price-level changes.
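To see how such a coincidental correlation can emerge, here is a minimal numerical sketch of my own (with purely illustrative parameters), in which exogenous monetary shocks drive both inflation and unemployment, neither variable causing the other:

    import numpy as np
    rng = np.random.default_rng(0)

    # Exogenous monetary (spending) shocks drive BOTH series; unemployment
    # exerts no causal influence on inflation in this data-generating process.
    n = 200
    money_growth = rng.normal(0.05, 0.02, n)
    inflation = 0.7 * (money_growth - 0.03) + rng.normal(0, 0.002, n)
    unemployment = 0.06 - 0.8 * (money_growth - 0.05) + rng.normal(0, 0.002, n)

    print(np.corrcoef(inflation, unemployment)[0, 1])  # strongly negative

A scatter plot of these data would trace out a nice downward-sloping Phillips curve, even though, by construction, unemployment has no causal effect on inflation whatsoever.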

Inflation can also result from nothing more than the anticipation of inflation. Expected inflation can also affect output and employment, so inflation and unemployment are related not only by both being affected by an excess supply of (or demand for) money, but by both being affected by expected inflation.

Even if you think that inflation is fundamentally a monetary phenomenon (which you shouldn’t, as I’ll explain in a minute), wage- and price-setters don’t care about money demand; they care about their own ability or lack thereof to charge more, which has to – has to – involve the amount of slack in the economy. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense.

There’s no reason for anyone to care about overall money demand in this scenario. Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. (I note parenthetically that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato, and can be withdrawn from circulation in response to market incentives.) Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing.

And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Take a longer and broader view, and the evidence is obvious.

Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:

That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. Then came the sudden stop after the Greek crisis, which sent unemployment soaring.

Meanwhile, the pre-crisis era was marked by relatively high inflation, well above the euro-area average; the post-crisis era by near-zero inflation, below the rest of the euro area, allowing Spain to achieve (at immense cost) an “internal devaluation” that has driven an export-led recovery.

So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? Really, really?

No one – at least no one who believes in a monetary theory of inflation – claims, or should claim, that swings in inflation and unemployment are unrelated, but to acknowledge the relationship between inflation and unemployment does not entail acceptance of the proposition that unemployment is a causal determinant of inflation.

But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. But denying that it makes sense to talk about unemployment driving inflation is foolish.

No, it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental relationship. The level of employment depends on many things, and some of the things that employment depends on also affect inflation. That doesn’t mean that employment causally affects inflation.

When I read Krugman’s post and the Andolfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. The negative relationship between unemployment and inflation that is found by empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped.
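The commodity analogy is easy to make concrete. In the following sketch (mine, with illustrative numbers), demand slopes down and supply slopes up, but because demand shocks dominate, the observed reduced-form relationship between price and quantity is positive:

    import numpy as np
    rng = np.random.default_rng(1)

    # Structural model: demand q = 10 - p + d, supply q = 2p + s.
    n = 10_000
    d = rng.normal(0, 3.0, n)   # large demand shocks
    s = rng.normal(0, 0.5, n)   # small supply shocks

    p = (10 + d - s) / 3.0      # reduced form: market-clearing price
    q = 2.0 * p + s             # reduced form: market-clearing quantity

    print(np.corrcoef(p, q)[0, 1])  # strongly positive, despite a demand slope of -1

The reduced-form correlation between price and quantity tells us nothing about the slope of the demand curve, and the empirical Phillips Curve, by the same logic, tells us nothing by itself about whether unemployment causally determines inflation.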

It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. I am not a big admirer of the Lucas Critique, for reasons that I have discussed in other posts (e.g., here and here). But the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship.

Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet just because a general-equilibrium model can be formulated so that it can be solved as the solution of an optimizing agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence that is not in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.
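For what it’s worth, the point is easy to verify with a toy interaction-based model. Here is a minimal sketch along the lines of the Nagel-Schreckenberg cellular automaton (my choice of model, purely for illustration), in which congestion emerges only when there are enough cars to interact:

    import random

    def average_speed(n_cars, road=100, vmax=5, p_slow=0.3, steps=500, seed=1):
        # Cars on a ring road: each accelerates toward vmax, cannot pass the
        # car ahead, and randomly slows down with probability p_slow.
        rng = random.Random(seed)
        pos = sorted(rng.sample(range(road), n_cars))
        vel = [0] * n_cars
        moved = 0
        for _ in range(steps):
            gaps = [(pos[(i + 1) % n_cars] - pos[i] - 1) % road
                    for i in range(n_cars)]
            for i in range(n_cars):
                v = min(vel[i] + 1, vmax, gaps[i])    # don't hit the car ahead
                if v > 0 and rng.random() < p_slow:   # random slowdown
                    v -= 1
                vel[i] = v
            pos = [(x + v) % road for x, v in zip(pos, vel)]
            moved += sum(vel)
        return moved / (n_cars * steps)

    print("5 cars on 100 cells :", round(average_speed(5), 2))   # near free flow
    print("35 cars on 100 cells:", round(average_speed(35), 2))  # jams, speed collapses

The representative-vehicle case corresponds to the low-density run, in which, by construction, congestion never appears; the jams in the high-density run are produced entirely by the interactions between vehicles.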

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less, negligible; nor have I seen an argument that interactions between agents are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general-equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialist believers that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to wages that other similar workers are being paid are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.

[image: homunculus]

But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without having to go on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.

In Defense of Stigler

I recently discussed Paul Romer’s criticism of Robert Lucas for shifting from the Feynman integrity that, in Romer’s view, characterized Lucas’s early work, to the Stigler conviction that Romer believes has characterized Lucas’s later work. I wanted to make a criticism of Lucas different from Romer’s, so I only suggested in passing that the Stigler conviction criticized by Romer didn’t seem that terrible to me, and I compared Stigler conviction to Galileo’s defense of Copernican heliocentrism. Now, having reread the essay, “The Nature and Role of Originality in Scientific Progress,” from which Romer quoted, I find, as I suspected, that Romer has inaccurately conveyed the message that Stigler meant to convey in his essay.

In accusing Lucas of forsaking the path of Feynman integrity and choosing instead the path of Stigler conviction, making it seem as if Stigler had provided justification for pursuing an ideological agenda, as Romer believes Lucas and other freshwater economists have done, Romer provides no information about the context of Stigler’s essay. Much of Stigler’s early writing in economics was about the history of economics, and Stigler’s paper on originality is one of those; in fact, it was subsequently republished as the lead essay in Stigler’s 1965 volume Essays in the History of Economics. What concerns Stigler in the essay are a few closely related questions: (1) What characteristic of originality makes it highly valued in science in general and in economics in particular? (2) Given that originality is so highly valued, how do economists earn a reputation for originality? (3) Is the quest for originality actually conducive to scientific progress?

Here is Stigler’s answer to the first question provided at the end of the introductory section under the heading “The Meaning of Originality.”

Scientific originality in its important role should be measured against the knowledge of a man’s contemporaries. If he opens their eyes to new ideas or to new perspectives on old ideas, he is an original economist in the scientifically important sense. . . . Smith, Ricardo, Jevons, Walras, Marshall, Keynes – they all changed the beliefs of economists and thus changed economics.

It is conceivable for an economist to be ignored by contemporaries and yet exert considerable influence on later generations, but this is a most improbable event. He must have been extraordinarily out of tune with (in advance of?) his times, and rarely do first-class minds throw themselves away on the visionary. Perhaps Cournot is an example of a man whose work skipped a half a century, but normally such men become famous only by reflecting the later fame of the rediscovered doctrines.

Originality then, in its scientifically important role, is a matter of subtle unaccustomedness – neither excessive radicalism nor statement of the previously unformulated consensus.

The extended passage quoted by Romer appears a few paragraphs later in the second section of the paper under the heading “The Techniques of Persuasion.” Having already established that scientific originality must be somehow surprising yet still capable of being understood by other economists, Stigler wants to know how an original economist can get the attention of his peers for his new idea. Doing so is not easy, because

New ideas are even harder to sell than new products. Inertia and the many unharmonious voices of those who would change our ways combine against the balanced and temperate statement of the merits of one’s “original” views. One must put on the best face possible, and much is possible. Wares must be shouted — the human mind is not a divining rod that quivers over truth.

It is this analogy between the selling of new ideas and selling of new products that leads Stigler in his drollery to suggest that with two highly unusual exceptions – Smith and Marshall – all economists have had to resort to “the techniques of the huckster.”

What are those techniques? And who used them? Although Stigler asserted that all but two famous economists used such techniques, he mentioned only two by name, and helpfully provided the specific evidence of their resort to huckster-like self-promotional techniques. Whom did Stigler single out for attention? William Stanley Jevons and Eugen von Bohm-Bawerk.

So what was the hucksterism committed by Jevons? Get ready to be shocked:

Writing a Theory of Political Economy, he devoted the first 197 pages of a book of 267 pages to his ideas on utility!

OMG! Shocking; just shocking. How could he have stooped so low as that? But Bohm-Bawerk was even worse.

Not content with writing two volumes, and dozens of articles, in presenting and defending his capital theory, he added a third volume (to the third edition of his Positive Theorie des Kapitals) devoted exclusively to refuting, at least to his own satisfaction, every criticism that had arisen during the preceding decades.

What a sordid character that loathsome Austrian aristocrat must have been! Publishing a third volume devoted entirely to responding to criticisms of the first two. The idea!

Well, actually, they weren’t as bad as you might have thought. Let’s read Stigler’s next paragraph.

Although the new economic theories are introduced by the technique of the huckster, I should add that they are not the work of mere hucksters. The sincerity of Jevons, for example, is printed on every page. Indeed I do not believe that any important economist has ever deliberately contrived ideas in which he did not believe in order to achieve prominence: men of the requisite intellectual power and morality can get bigger prizes elsewhere. Instead, the successful inventor is a one-sided man. He is utterly persuaded of the significance and correctness of his ideas and he subordinates all other truths because they seem to him less important than the general acceptance of his truth. He is more a warrior against ignorance than a scholar among ideas.

I believe that Romer misunderstood what Stigler meant to say here. Romer seems to interpret this passage to mean that if a theorist is utterly convinced that he is right, he somehow can be justified in “subordinat[ing] all other truths” – in cutting corners, avoiding contrary arguments or suppressing contradictory evidence that might undercut his theory – the sorts of practices ruled out by Feynman integrity, which is precisely what Romer was accusing Lucas of having done in a paper on growth theory. But to me it is clear from the context that what Stigler meant by “subordinating all other truths” was not any lack of Feynman integrity, but the single-minded focus on a specific contribution to the exclusion of all others. That was why Stigler drew attention to the exorbitant share of Jevons’s Theory of Political Economy devoted to the theory of marginal utility, and to the publication by Bohm-Bawerk of an entire volume devoted to responding to criticisms of his two earlier volumes on the theory of capital and interest. He neither implied nor meant to suggest that either Jevons or Bohm-Bawerk committed any breach of scientific propriety, much less Feynman integrity.

If there were any doubt about the correctness of this interpretation of what Stigler meant, it would be dispelled by the third section of Stigler’s paper under the heading: “The Case of Mill.”

John Stuart Mill is a striking example with which to illustrate the foregoing remarks. He is now considered a mediocre economist of unusual literary power; a fluent, flabby echo of Ricardo. This judgement is well-nigh universal: I do not believe that Mill has had a fervent admirer in the twentieth century. I attribute this low reputation to the fact that Mill had the perspective and balance, but not the full powers, of Smith and Marshall. He avoided all the tactics of easy success. He wrote with extraordinary balance, and his own ideas – considering their importance – received unbelievably little emphasis. The bland prose moved sedately over a corpus of knowledge organized with due regard to structure and significance, and hardly at all with regard to parentage. . . .

Yet however one judges Mill, it cannot be denied that he was original. In terms of identifiable theories, he was one of the most original economists in the history of the science.

Stigler went on to list and document the following original contributions of Mill in the area of value theory, ignoring Mill’s contributions to trade theory, “because I cannot be confident of the priorities.”

  1. Non-competing Groups
  2. Joint Products
  3. Alternative Costs
  4. The Economics of the Firm
  5. Supply and Demand
  6. Say’s Law

Stigler concludes his discussion with this assessment of Mill:

This is a very respectable list of contributions. But it is also a peculiar list: any one of the contributions could be made independently of all the others. Mill was not trying to build a new system but only to add improvements here and there to the Ricardian system. The fairest of economists, as Schumpeter has properly characterized Mill, unselfishly dedicated his abilities to the advancement of the science. And, yet, Mill’s magisterial quality and conciliatory tone may have served less well than sharp and opinionated controversy in inciting his contemporaries to make advances.

Finally, just to confirm the lack of ideological motivation in Stigler’s discussion, let me quote Stigler’s characteristically ironic and playful conclusion.

These reflections on the nature and role of originality, however, have no utilitarian purpose, or even a propagandistic purpose. If I have a prejudice, it is that we commonly exaggerate the merits of originality in economics–that we are unjust in conferring immortality upon the authors of absurd theories while we forget the fine, if not particularly original, work of others. But I do not propose that we do something about it.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now, to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott, that he just stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets – well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
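To make the contrast concrete, here is a numerical sketch of my own (not part of the original argument). In a single market the naive price-adjustment rule converges; in Scarf’s well-known three-good exchange economy, in which consumer i owns one unit of good i and demands goods i and i+1 in equal amounts, exactly the same rule circles the equilibrium forever:

    import numpy as np

    def excess_demand(p):
        # Scarf's (1960) example: consumer i spends all wealth p_i on equal
        # amounts of goods i and i+1, so demand for good i comes from
        # consumers i and i-1. Equilibrium requires p1 = p2 = p3.
        p1, p2, p3 = p
        return np.array([p1/(p1+p2) + p3/(p3+p1) - 1.0,
                         p2/(p2+p3) + p1/(p1+p2) - 1.0,
                         p3/(p3+p1) + p2/(p2+p3) - 1.0])

    # One market: excess demand 10 - 2p; the rule converges to p = 5.
    price = 2.0
    for _ in range(200):
        price += 0.1 * (10.0 - 2.0 * price)
    print("single market:", round(price, 3))

    # Three interdependent markets: the same rule never settles down.
    p = np.array([1.2, 1.0, 0.8])
    for step in range(1, 40001):
        p = p + 0.01 * excess_demand(p)
        if step % 10000 == 0:
            print(step, p.round(3), "spread:", round(float(p.max() - p.min()), 3))

The point is not that real economies adjust by tatonnement, but that even in this idealized setting the rule of raising price where there is excess demand provides no guarantee of convergence once markets are interdependent.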

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:




Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.
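The mechanism Krugman describes is easy to reproduce numerically. Here is a minimal sketch of an adaptive-expectations Phillips curve, with parameters of my own choosing rather than Dornbusch and Fischer’s: a temporary rise of unemployment above the natural rate ratchets inflation down, and the lower rate persists after unemployment returns to normal.

    # Accelerationist Phillips curve with adaptive expectations:
    #   inflation[t] = inflation[t-1] - slope * (u[t] - u_natural)
    u_natural, slope = 6.0, 0.4   # illustrative values
    inflation = 10.0
    u_path = [6.0] * 4 + [9.0] * 6 + [6.0] * 10   # temporary tight-money surge

    for t, u in enumerate(u_path):
        inflation -= slope * (u - u_natural)
        print(f"t={t:2d}  unemployment={u:4.1f}  inflation={inflation:5.2f}")

Inflation holds at 10 percent until the surge, falls while unemployment exceeds the natural rate, and then stays at the new lower level once unemployment returns to 6 percent: a temporary surge, a permanent disinflation.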

And here’s what the Volcker disinflation actually looked like:


A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics, creating a precautionary demand for money whenever monetary growth overshot the target range, the anticipation of an imminent tightening causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, with growth rates over the target range being associated with falling stock and commodities prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock-market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

Krugman continues:

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

On Multipliers, Ricardian Equivalence and Functioning Well

In my post yesterday, I explained why, if one believes, as Robert Lucas and Robert Barro do, that monetary policy can stimulate an economy in a downturn, it is easy to construct an argument that fiscal policy would do so as well. I hope that my post won’t cause anyone to conclude that real-business-cycle theory must be right that monetary policy is no more effective than fiscal policy. I suppose that there is that risk, but I can’t worry about every weird idea floating around in the blogosphere. Instead, I want to think out loud a bit about fiscal multipliers and Ricardian equivalence.

I am inspired to do so by something that John Cochrane wrote on his blog defending Robert Lucas from Paul Krugman’s charge that Lucas didn’t understand Ricardian equivalence. Here’s what Cochrane, explaining what Ricardian equivalence means, had to say:

So, according to Paul [Krugman], “Ricardian Equivalence,” which is the theorem that stimulus does not work in a well-functioning economy, fails, because it predicts that a family who takes out a mortgage to buy a $100,000 house would reduce consumption by $100,000 in that very year.

Cochrane was a little careless in defining Ricardian equivalence as a theorem about stimulus, when it’s really a theorem about the equivalence of the effects of present and future taxes on spending. But that’s just a minor slip. What I found striking about Cochrane’s statement was something else: that little qualifying phrase “in a well-functioning economy,” which Cochrane seems to have inserted as a kind of throat-clearing remark, the sort of aside that people are just supposed to hear but not really pay much attention to, but that can sometimes be quite revealing, usually unintentionally, in its own way.

What is so striking about those five little words, “in a well-functioning economy”? Well, just this. Why, in a well-functioning economy, would anyone care whether a stimulus works or not? A well-functioning economy doesn’t need any stimulus, so why would you even care whether it works, much less prove a theorem to show that it doesn’t? (I apologize for the implicit Philistinism of that rhetorical question; I’m just engaging in a little rhetorical excess to make my point a bit more colorfully.)

So if a well-functioning economy doesn’t require any stimulus, and if a stimulus wouldn’t work in a well-functioning economy, what does that tell us about whether a stimulus works (or would work) in an economy that is not functioning well? Not a whole lot. Thus, the bread-and-butter models that economists use, models of how an economy functions when there are no frictions, when expectations are rational, and when markets clear, are guaranteed to imply that there are no multipliers and that Ricardian equivalence holds. This is the world of a single, unique, and stable equilibrium. If you exogenously change any variable in the system, the system will snap back to a new equilibrium in which all variables have optimally adjusted to whatever exogenous change you have subjected the system to. All conventional economic analysis, whether comparative statics or dynamic adjustment, is built on the assumption of a unique and stable equilibrium to which all economic variables inevitably return when subjected to any exogenous shock. This is the indispensable core of economic theory, but it is not the whole of economic theory.

Keynes had a vision of what could go wrong with an economy: entrepreneurial pessimism — a dampening of animal spirits — would cause investment to flag; the rate of interest would not (or could not) fall enough to revive investment; people would try to shift out of assets into cash, causing a cumulative contraction of income, expenditure and output. In such circumstances, spending by government could replace the investment spending no longer being undertaken by discouraged entrepreneurs, at least until entrepreneurial expectations recovered. This is a vision not of a well-functioning economy but of a dysfunctional one, yet Keynes was able to describe it in terms of a simplified model, essentially what has come down to us as the Keynesian cross. In this little model, you can easily calculate a multiplier as the reciprocal of the marginal propensity to save out of disposable income.
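To make that calculation concrete, here is the standard textbook algebra (the notation is mine, not Keynes’s): with consumption a linear function of income, and autonomous expenditure A standing in for investment plus government spending, equilibrium income is

\[
Y = C + A, \qquad C = a + cY \quad\Longrightarrow\quad Y = \frac{a + A}{1 - c},
\]

so that \( dY/dA = 1/(1-c) = 1/s \), where \( s = 1 - c \) is the marginal propensity to save out of disposable income. If, say, s = 0.2, each additional dollar of autonomous spending raises equilibrium income by five dollars.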

But packaging Keynes’s larger vision into the four corners of the Keynesian cross diagram, or even the slightly more realistic IS-LM diagram, misses the essence of Keynes’s vision — the volatility of entrepreneurial expectations and their susceptibility to unpredictable mood swings that overwhelm any conceivable equilibrating movements in interest rates. A numerical calculation of the multiplier in the simplified Keynesian models is not particularly relevant, because the real goal is not to reach an equilibrium within a system of depressed entrepreneurial expectations, but to create conditions in which entrepreneurial expectations bounce back from their depressed state. As I like to say, expectations are fundamental.

Unlike a well-functioning economy with a unique equilibrium, a not-so-well-functioning economy may have multiple equilibria corresponding to different sets of expectations. The point of increased government spending is then not to increase the size of government, but to restore entrepreneurial confidence by providing assurance to entrepreneurs that if they increase production, they will have customers willing and able to buy the output at prices sufficient to cover their costs.
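To see how expectations can select among multiple equilibria, here is a toy numerical sketch (entirely my own illustration, with made-up numbers, not a model drawn from Keynes or from anyone’s empirical work): planned output responds to expected aggregate demand through an S-shaped function that crosses the 45-degree line more than once, so adaptive expectations settle into a low or a high self-fulfilling equilibrium depending only on whether they start out pessimistic or optimistic.

    # Toy illustration with hypothetical numbers: firms' planned output
    # responds to expected aggregate demand through an S-shaped function
    # that crosses the 45-degree line at more than one point.
    import math

    def planned_output(expected_demand):
        # Cautious when expected demand is low, close to capacity (100)
        # when expected demand is high.
        return 100 / (1 + math.exp(-0.1 * (expected_demand - 50)))

    def equilibrium(initial_expectation, rounds=200):
        # Expectations adapt toward realized output until they are
        # self-fulfilling, i.e., a fixed point of planned_output.
        y = initial_expectation
        for _ in range(rounds):
            y = planned_output(y)
        return y

    print(equilibrium(10.0))  # pessimistic start: low-output equilibrium (~0.7)
    print(equilibrium(90.0))  # optimistic start: high-output equilibrium (~99.3)

Same structure, same parameters, two very different resting points; which one the economy lands on depends on nothing but the state of expectations, which is why a stimulus can matter even when no fundamental of the system has changed.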

Ricardian equivalence assumes that expectations of future income are independent of tax and spending decisions in the present, because, in a well-functioning economy, there is but one equilibrium path for future output and income. But when the economy is not functioning well, expectations of future income, and therefore actual future income, may depend on current decisions about spending and taxation. In that case, no matter what Ricardian equivalence says, a stimulus may work by shifting the economy to a higher path of future output and income than the one it now happens to be on, so that present taxes may not be equivalent to future taxes after all.
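To see exactly what the theorem does and does not assert, here is its textbook arithmetic (standard notation, not Cochrane’s or Barro’s own exposition): a lump-sum tax cut of \( \Delta T \) today, financed by borrowing repaid with interest next period at rate r, changes the present value of the household’s tax bill by

\[
-\Delta T + \frac{(1+r)\,\Delta T}{1+r} = 0,
\]

so a forward-looking household whose expected lifetime income is unaffected simply saves the tax cut and leaves consumption unchanged. The whole result hangs on the premise that expected future income is independent of the policy; if current spending and taxation shift the economy onto a different path of future income, the present-value calculation no longer nets out to zero, and the equivalence fails.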


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
