Posts Tagged 'Robert Lucas'

Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well-known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet the fact that a general-equilibrium model can be formulated so that it can be solved as the solution of an optimizing agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence, not in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.
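To make the point concrete, here is a toy congestion model in the spirit of the Nagel-Schreckenberg cellular automaton. It is only a sketch, and every number in it (road length, maximum speed, the randomization probability) is an illustrative assumption, not a calibrated value. Because each car’s speed is limited by the gap to the car ahead, flow collapses once the road gets crowded. A representative-vehicle model, by construction, could not produce this result: with no other cars on the road, flow would simply rise in proportion to density.

```python
import random

def average_flow(density, length=200, vmax=5, p_slow=0.3, steps=200, seed=0):
    """Average flow (cars passing a site per step) on a circular road.

    Each car accelerates toward vmax, brakes to avoid hitting the car
    ahead, and randomly slows with probability p_slow. The interaction
    between vehicles is what produces congestion.
    """
    rng = random.Random(seed)
    n = int(density * length)
    positions = sorted(rng.sample(range(length), n))
    speeds = [0] * n
    total_moved = 0
    for step in range(steps):
        # Gap to the car ahead, measured around the ring.
        gaps = [(positions[(i + 1) % n] - positions[i]) % length
                for i in range(n)]
        for i in range(n):
            speeds[i] = min(speeds[i] + 1, vmax, gaps[i] - 1)  # accelerate, then brake
            if speeds[i] > 0 and rng.random() < p_slow:        # random slowdown
                speeds[i] -= 1
        positions = [(positions[i] + speeds[i]) % length for i in range(n)]
        if step >= steps // 2:                                 # discard warm-up
            total_moved += sum(speeds)
    return total_moved / (steps - steps // 2) / length

light = average_flow(0.10)   # lightly traveled road
heavy = average_flow(0.60)   # crowded road
print(light, heavy)          # flow is lower on the crowded road
```

The whole phenomenon of interest lives in the `gaps` computation, i.e., in the interaction between vehicles; delete it and the model has nothing left to say about congestion.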

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less negligible; nor have I seen an argument that interactions between agents are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general-equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialist believers that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to the wages that other similar workers are being paid are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.


But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without having to go on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.


In Defense of Stigler

I recently discussed Paul Romer’s criticism of Robert Lucas for shifting from the Feynman integrity that, in Romer’s view, characterized Lucas’s early work, to the Stigler conviction that Romer believes has characterized Lucas’s later work. I wanted to make a criticism of Lucas different from Romer’s, so I only suggested in passing that the Stigler conviction criticized by Romer didn’t seem that terrible to me, and I compared the Stigler conviction to Galileo’s defense of Copernican heliocentrism. Now, having reread the essay, “The Nature and Role of Originality in Scientific Progress,” from which Romer quoted, I find, as I suspected, that Romer has inaccurately conveyed the message that Stigler meant to convey in his essay.

In accusing Lucas of forsaking the path of Feynman integrity and choosing instead the path of Stigler conviction, making it seem as if Stigler had provided justification for pursuing an ideological agenda, as Romer believes Lucas and other freshwater economists have done, Romer provides no information about the context of Stigler’s essay. Much of Stigler’s early writing in economics was about the history of economics, and Stigler’s paper on originality is one of those; in fact, it was subsequently republished as the lead essay in Stigler’s 1965 volume Essays in the History of Economics. What concerns Stigler in the essay are a few closely related questions: 1) What characteristic of originality makes it highly valued in science in general and in economics in particular? 2) Given that originality is so highly valued, how do economists earn a reputation for originality? 3) Is the quest for originality actually conducive to scientific progress?

Here is Stigler’s answer to the first question provided at the end of the introductory section under the heading “The Meaning of Originality.”

Scientific originality in its important role should be measured against the knowledge of a man’s contemporaries. If he opens their eyes to new ideas or to new perspectives on old ideas, he is an original economist in the scientifically important sense. . . . Smith, Ricardo, Jevons, Walras, Marshall, Keynes – they all changed the beliefs of economists and thus changed economics.

It is conceivable for an economist to be ignored by contemporaries and yet exert considerable influence on later generations, but this is a most improbable event. He must have been extraordinarily out of tune with (in advance of?) his times, and rarely do first-class minds throw themselves away on the visionary. Perhaps Cournot is an example of a man whose work skipped a half a century, but normally such men become famous only by reflecting the later fame of the rediscovered doctrines.

Originality then in its scientifically important role, is a matter of subtle unaccustomedness – neither excessive radicalism nor statement of the previous unformulated consensus.

The extended passage quoted by Romer appears a few paragraphs later in the second section of the paper under the heading “The Techniques of Persuasion.” Having already established that scientific originality must be both somehow surprising yet also capable of being understood by other economists, Stigler wants to know how an original economist can get the attention of his peers for his new idea. Doing so is not easy, because

New ideas are even harder to sell than new products. Inertia and the many unharmonious voices of those who would change our ways combine against the balanced and temperate statement of the merits of one’s “original” views. One must put on the best face possible, and much is possible. Wares must be shouted — the human mind is not a divining rod that quivers over truth.

It is this analogy between the selling of new ideas and selling of new products that leads Stigler in his drollery to suggest that with two highly unusual exceptions – Smith and Marshall – all economists have had to resort to “the techniques of the huckster.”

What are those techniques? And who used them? Although Stigler asserted that all but two famous economists used such techniques, he mentioned only two by name, and helpfully provided the specific evidence of their resort to huckster-like self-promotional techniques. Whom did Stigler single out for attention? William Stanley Jevons and Eugen von Bohm-Bawerk.

So what was the hucksterism committed by Jevons? Get ready to be shocked:

Writing a Theory of Political Economy, he devoted the first 197 pages of a book of 267 pages to his ideas on utility!

OMG! Shocking; just shocking. How could he have stooped so low as that? But Bohm-Bawerk was even worse.

Not content with writing two volumes, and dozens of articles, in presenting and defending his capital theory, he added a third volume (to the third edition of his Positive Theorie des Kapitals) devoted exclusively to refuting, at least to his own satisfaction, every criticism that had arisen during the preceding decades.

What a sordid character that loathsome Austrian aristocrat must have been! Publishing a third volume devoted entirely to responding to criticisms of the first two. The idea!

Well, actually, they weren’t as bad as you might have thought. Let’s read Stigler’s next paragraph.

Although the new economic theories are introduced by the technique of the huckster, I should add that they are not the work of mere hucksters. The sincerity of Jevons, for example, is printed on every page. Indeed I do not believe that any important economist has ever deliberately contrived ideas in which he did not believe in order to achieve prominence: men of the requisite intellectual power and morality can get bigger prizes elsewhere. Instead, the successful inventor is a one-sided man. He is utterly persuaded of the significance and correctness of his ideas and he subordinates all other truths because they seem to him less important than the general acceptance of his truth. He is more a warrior against ignorance than a scholar among ideas.

I believe that Romer misunderstood what Stigler meant to say here. Romer seems to interpret this passage to mean that if a theorist is utterly convinced that he is right, he somehow can be justified in “subordinat[ing] all other truths” in cutting corners, avoiding contrary arguments or suppressing contradictory evidence that might undercut his theory – the sorts of practices ruled out by Feynman integrity, which is precisely what Romer was accusing Lucas of having done in a paper on growth theory. But to me it is clear from the context that what Stigler meant by “subordinating all other truths” was not any lack of Feynman integrity, but the single-minded focus on a specific contribution to the exclusion of all others. That was why Stigler drew attention to the exorbitant share of Jevons’s Theory of Political Economy devoted to the theory of marginal utility or the publication by Bohm-Bawerk of an entire volume devoted to responding to criticisms of his two earlier volumes on the theory of capital and interest. He neither implied nor meant to suggest that either Jevons or Bohm-Bawerk committed any breach of scientific propriety, much less Feynman integrity.

If there were any doubt about the correctness of this interpretation of what Stigler meant, it would be dispelled by the third section of Stigler’s paper under the heading: “The Case of Mill.”

John Stuart Mill is a striking example with which to illustrate the foregoing remarks. He is now considered a mediocre economist of unusual literary power; a fluent, flabby echo of Ricardo. This judgement is well-nigh universal: I do not believe that Mill has had a fervent admirer in the twentieth century. I attribute this low reputation to the fact that Mill had the perspective and balance, but not the full powers, of Smith and Marshall. He avoided all the tactics of easy success. He wrote with extraordinary balance, and his own ideas – considering their importance – received unbelievably little emphasis. The bland prose moved sedately over a corpus of knowledge organized with due regard to structure and significance, and hardly at all with regard to parentage. . . .

Yet however one judges Mill, it cannot be denied that he was original. In terms of identifiable theories, he was one of the most original economists in the history of the science.

Stigler went on to list and document the following original contributions of Mill in the area of value theory, ignoring Mill’s contributions to trade theory, “because I cannot be confident of the priorities.”

1 Non-competing Groups

2 Joint Products

3 Alternative Costs

4 The Economics of the Firm

5 Supply and Demand

6 Say’s Law

Stigler concludes his discussion with this assessment of Mill.

This is a very respectable list of contributions. But it is also a peculiar list: any one of the contributions could be made independently of all the others. Mill was not trying to build a new system but only to add improvements here and there to the Ricardian system. The fairest of economists, as Schumpeter has properly characterized Mill, unselfishly dedicated his abilities to the advancement of the science. And, yet, Mill’s magisterial quality and conciliatory tone may have served less well than sharp and opinionated controversy in inciting his contemporaries to make advances.

Finally, just to confirm the lack of ideological motivation in Stigler’s discussion, let me quote Stigler’s characteristically ironic and playful conclusion.

These reflections on the nature and role of originality, however, have no utilitarian purpose, or even a propagandistic purpose. If I have a prejudice, it is that we commonly exaggerate the merits of originality in economics–that we are unjust in conferring immortality upon the authors of absurd theories while we forget the fine, if not particularly original, work of others. But I do not propose that we do something about it.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now, to be fair, Lucas’s outrage, though misplaced, was understandable. Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he just stopped reading after the first few pages. Then, in a fit of righteous indignation, he wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending to move an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
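The contrast between the one-market story and the many-market story can be seen in a few lines of simulation. The excess-demand functions below are purely illustrative linear assumptions, not derived from any actual general-equilibrium economy; the point is only that the identical adjustment rule that homes in on the equilibrium price in an isolated market can spiral away from the equilibrium price vector when two markets have strong cross-effects.

```python
def adjust(prices, excess_demand, k=1.0, steps=20):
    """The Econ 1 rule: raise price where demand exceeds supply,
    cut it where supply exceeds demand."""
    for _ in range(steps):
        z = excess_demand(prices)
        prices = [p + k * zi for p, zi in zip(prices, z)]
    return prices

# One isolated market: excess demand falls in its own price; equilibrium p = 1.
one_market = lambda p: [-0.5 * (p[0] - 1.0)]

# Two interacting markets (illustrative cross-effects, equilibrium at (1, 1)):
# each market's excess demand depends strongly on the OTHER market's price.
def two_markets(p):
    return [-0.5 * (p[0] - 1.0) + 1.5 * (p[1] - 1.0),
            -1.5 * (p[0] - 1.0) - 0.5 * (p[1] - 1.0)]

print(adjust([2.0], one_market))        # converges toward [1.0]
print(adjust([2.0, 1.0], two_markets))  # spirals away from (1.0, 1.0)
```

Note that in the two-market case each market’s own-price effect is still stabilizing taken by itself; it is the interaction between the markets that defeats the adjustment rule.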

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:

Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.
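The mechanism behind that chart can be reproduced in a few lines. The numbers below (a 6 percent natural rate, a slope of 0.5, a five-period squeeze) are my own illustrative assumptions, not the figures from Dornbusch and Fischer’s chart; what matters is the qualitative pattern. With expected inflation equal to last period’s inflation, holding unemployment above the natural rate ratchets inflation down each period, and inflation then stays at its new, lower level once unemployment returns to the natural rate.

```python
def disinflation_path(pi0=10.0, u_natural=6.0, slope=0.5,
                      u_high=9.0, squeeze_periods=5, total_periods=12):
    """Adaptive-expectations Phillips curve: pi_t = pi_{t-1} - slope * (u_t - u_natural).

    Expected inflation is just last period's inflation, so a temporary
    unemployment surge delivers a permanent reduction in inflation.
    """
    pi, path = pi0, []
    for t in range(total_periods):
        u = u_high if t < squeeze_periods else u_natural
        pi = pi - slope * (u - u_natural)
        path.append((u, round(pi, 2)))
    return path

for u, pi in disinflation_path():
    print(f"u = {u:.1f}%  inflation = {pi:.2f}%")
# Inflation falls while u = 9.0, then holds at its lower level once u is back at 6.0.
```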

And here’s what the Volcker disinflation actually looked like:

A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics. Whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening arrived. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed's weekly report of growth in the money supply, growth rates above the target range being associated with falling stock and commodity prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock-market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).
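For readers unfamiliar with it, the interest-rate rule mentioned above is usually written in the form Taylor proposed in 1993. A minimal sketch, using Taylor's original illustrative coefficients (an equilibrium real rate and inflation target of 2 percent, and weights of 0.5 on the inflation gap and the output gap):

```python
# The canonical Taylor (1993) rule: the policy rate responds to inflation
# and the output gap, rather than to growth in a monetary aggregate.
# Coefficients are Taylor's original illustrative values.

def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal policy rate, all arguments in percent."""
    return inflation + r_star + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# At 2% inflation and a closed output gap, the rule prescribes a 4% rate;
# inflation above target calls for a more-than-one-for-one rate response.
neutral = taylor_rate(2.0, 0.0)
tightening = taylor_rate(4.0, 0.0)
```

The point of the rule, in the New Keynesian framework, is precisely that the instrument is an interest rate, with the monetary aggregates left in the background.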

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

On Multipliers, Ricardian Equivalence and Functioning Well

In my post yesterday, I explained why if one believes, as do Robert Lucas and Robert Barro, that monetary policy can stimulate an economy in an economic downturn, it is easy to construct an argument that fiscal policy would do so as well. I hope that my post won’t cause anyone to conclude that real-business-cycle theory must be right that monetary policy is no more effective than fiscal policy. I suppose that there is that risk, but I can’t worry about every weird idea floating around in the blogosphere. Instead, I want to think out loud a bit about fiscal multipliers and Ricardian equivalence.

I am inspired to do so by something that John Cochrane wrote on his blog defending Robert Lucas from Paul Krugman’s charge that Lucas didn’t understand Ricardian equivalence. Here’s what Cochrane, explaining what Ricardian equivalence means, had to say:

So, according to Paul [Krugman], “Ricardian Equivalence,” which is the theorem that stimulus does not work in a well-functioning economy, fails, because it predicts that a family who takes out a mortgage to buy a $100,000 house would reduce consumption by $100,000 in that very year.

Cochrane was a little careless in defining Ricardian equivalence as a theorem about stimulus, when it’s really a theorem about the equivalence of the effects of present and future taxes on spending. But that’s just a minor slip. What I found striking about Cochrane’s statement was something else: that little qualifying phrase “in a well-functioning economy,” which Cochrane seems to have inserted as a kind of throat-clearing remark, the sort of aside that people are just supposed to hear but not really pay much attention to. Yet such asides can be quite revealing, usually unintentionally, in their own way.

What is so striking about those five little words “in a well-functioning economy”? Well, just this. Why, in a well-functioning economy, would anyone care whether a stimulus works or not? A well-functioning economy doesn’t need any stimulus, so why would you even care whether it works or not, much less prove a theorem to show that it doesn’t? (I apologize for the implicit Philistinism of that rhetorical question; I’m just engaging in a little rhetorical excess to make my point a bit more colorfully.)

So if a well-functioning economy doesn’t require any stimulus, and if a stimulus wouldn’t work in a well-functioning economy, what does that tell us about whether a stimulus works (or would work) in an economy that is not functioning well? Not a whole lot. Thus, the bread-and-butter models that economists use, models of how an economy functions when there are no frictions, expectations are rational, and markets clear, are guaranteed to imply that there are no multipliers and that Ricardian equivalence holds. This is the world of a single, unique, and stable equilibrium. If you exogenously change any variable in the system, the system will snap back to a new equilibrium in which all variables have optimally adjusted to whatever exogenous change you have subjected the system to. All conventional economic analysis, whether comparative statics or dynamic adjustment, is built on the assumption of a unique and stable equilibrium to which all economic variables inevitably return when subjected to any exogenous shock. This is the indispensable core of economic theory, but it is not the whole of economic theory.

Keynes had a vision of what could go wrong with an economy: entrepreneurial pessimism — a dampening of animal spirits — would cause investment to flag; the rate of interest would not (or could not) fall enough to revive investment; people would try to shift out of assets into cash, causing a cumulative contraction of income, expenditure and output. In such circumstances, spending by government could replace the investment spending no longer being undertaken by discouraged entrepreneurs, at least until entrepreneurial expectations recovered. This is a vision not of a well-functioning economy, but of a dysfunctional one, but Keynes was able to describe it in terms of a simplified model, essentially what has come down to us as the Keynesian cross. In this little model, you can easily calculate a multiplier as the reciprocal of the marginal propensity to save out of disposable income.

But packaging Keynes’s larger vision into the four corners of the Keynesian cross diagram, or even the slightly more realistic IS-LM diagram, misses the essence of Keynes’s vision — the volatility of entrepreneurial expectations and their susceptibility to unpredictable mood swings that overwhelm any conceivable equilibrating movements in interest rates. A numerical calculation of the multiplier in the simplified Keynesian models is not particularly relevant, because the real goal is not to reach an equilibrium within a system of depressed entrepreneurial expectations, but to create conditions in which entrepreneurial expectations bounce back from their depressed state. As I like to say, expectations are fundamental.

Unlike a well-functioning economy with a unique equilibrium, a not-so-well functioning economy may have multiple equilibria corresponding to different sets of expectations. The point of increased government spending is then not to increase the size of government, but to restore entrepreneurial confidence by assuring entrepreneurs that if they increase production, they will have customers willing and able to buy the output at prices sufficient to cover their costs.

Ricardian equivalence assumes that expectations of future income are independent of tax and spending decisions in the present, because, in a well-functioning economy, there is but one equilibrium path for future output and income. But if the economy is not functioning well, expectations of future income, and therefore actual future income, may depend on current decisions about spending and taxation. No matter what Ricardian equivalence says, a stimulus may work by shifting the economy to a different, higher path of future output and income than the one it now happens to be on, in which case present taxes may not be equivalent to future taxes, after all.
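The equivalence itself is just present-value arithmetic, which is worth seeing to appreciate how much hangs on the fixed-income assumption. A minimal two-period sketch, with an invented interest rate:

```python
# Ricardian equivalence in its simplest two-period form: a tax cut of T
# today, financed by government borrowing at rate r and repaid next period,
# leaves the present value of the household's tax bill unchanged.
# The interest rate and tax figures are illustrative only.

r = 0.05                                   # one-period interest rate
tax_cut_today = 100.0                      # deficit-financed tax cut
future_tax = tax_cut_today * (1 + r)       # repayment with interest
pv_of_future_tax = future_tax / (1 + r)    # discounted back to today

# pv_of_future_tax equals tax_cut_today exactly, so a forward-looking
# household with a GIVEN expected income path sees no change in its
# lifetime budget, and therefore no reason to change its consumption.
```

The arithmetic goes through only because future income is held fixed; if the stimulus itself can move the economy to a higher income path, the discounting exercise no longer settles the question.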

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
