Archive for the 'rational expectations' Category

What’s Wrong with EMH?

Scott Sumner wrote a post commenting on my previous post about Paul Krugman’s column in the New York Times last Friday. I was struck by Krugman’s ability to pack so much real economic content into an 800-word column written to help non-economists understand recent fluctuations in the stock market. Part of what I was doing in my post was to offer my own criticism of the efficient market hypothesis (EMH), of which Krugman is probably not an enthusiastic adherent either. Nevertheless, both Krugman and I recognize that the EMH serves as a useful way to discipline how we think about fluctuating stock prices.

Here is a passage of Krugman’s that I commented on:

But why are long-term interest rates so low? As I argued in my last column, the answer is basically weakness in investment spending, despite low short-term interest rates, which suggests that those rates will have to stay low for a long time.

My comment was:

Again, this seems inexactly worded. Weakness in investment spending is a symptom not a cause, so we are back to where we started from. At the margin, there are no attractive investment opportunities.

Scott had this to say about my comment:

David is certainly right that Krugman’s statement is “inexactly worded”, but I’m also a bit confused by his criticism. Certainly “weakness in investment spending” is not a “symptom” of low interest rates, which is how his comment reads in context. Rather I think David meant that the shift in the investment schedule is a symptom of a low level of AD, which is a very reasonable argument, and one he develops later in the post. But that’s just a quibble about wording. More substantively, I’m persuaded by Krugman’s argument that weak investment is about more than just AD; the modern information economy (with, I would add, a slow-growing working age population) just doesn’t generate as much investment spending as before, even at full employment.

Just to be clear, what I was trying to say was that investment spending is determined by “fundamentals,” i.e., expectations about future conditions (including what demand for firms’ output will be, what competing firms are planning to do, what cost conditions will be, and a whole range of other considerations). It is the combination of all those real and psychological factors that determines the projected returns from undertaking an investment, and those expected returns must be compared with the cost of capital to reach a final decision about which projects will be undertaken, thereby giving rise to actual investment spending. So I certainly did not mean to say that weakness in investment spending is a symptom of low interest rates. I meant that it is a symptom of the entire economic environment that, depending on the level of interest rates, makes specific investment projects seem attractive or unattractive. Actually, I don’t think that there is any real disagreement between Scott and me on this particular point; I just mention the point to avoid possible misunderstandings.

But the differences between Scott and me about the EMH seem to be substantive. Scott quotes this passage from my previous post:

The efficient market hypothesis (EMH) is at best misleading in positing that market prices are determined by solid fundamentals. What does it mean for fundamentals to be solid? It means that the fundamentals remain what they are independent of what people think they are. But if fundamentals themselves depend on opinions, the idea that values are determined by fundamentals is a snare and a delusion.

Scott responded as follows:

I don’t think it’s correct to say the EMH is based on “solid fundamentals”.  Rather, AFAIK, the EMH says that asset prices are based on rational expectations of future fundamentals, what David calls “opinions”.  Thus when David tries to replace the EMH view of fundamentals with something more reasonable, he ends up with the actual EMH, as envisioned by people like Eugene Fama.  Or am I missing something?

In fairness, David also rejects rational expectations, so he would not accept even my version of the EMH, but I think he’s too quick to dismiss the EMH as being obviously wrong. Lots of people who are much smarter than me believe in the EMH, and if there was an obvious flaw I think it would have been discovered by now.

I accept Scott’s correction that the EMH is based on the rational expectation of future fundamentals, but I don’t think that the distinction is as meaningful as Scott does. The problem is that in a typical rational-expectations model, the fundamentals are given and unchanging; they are, in effect, static. The seemingly non-static character of a rational-expectations model is achieved by introducing stochastic parameters with known means and variances, so that the ultimate realizations of the stochastic variables within the model are not known in advance. However, the rational expectations of all stochastic variables are unbiased, and they are, in some sense, the best expectations possible given the underlying stochastic nature of the variables. But given that stochastic structure, current asset prices reflect the actual, and unchanging, fundamentals, the stochastic elements in the model being fully reflected in asset prices today. Prices may change ex post, but, conditional on the realizations of the stochastic variables (whose probability distributions are assumed to have been known in advance), those changes are fully anticipated. Thus, in a rational-expectations equilibrium, causation still runs from fundamentals to expectations.
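To make the point concrete, here is a minimal numerical sketch of the structure just described. Everything in it is hypothetical (the dividend distribution, the discount rate, and the simple present-value pricing rule are chosen only for illustration): the distribution of the fundamental is known and fixed, today’s price fully reflects it, and ex post forecast errors are unbiased.

```python
import random

random.seed(0)

# A toy rational-expectations asset model (illustrative numbers only).
# The "fundamental" is a dividend drawn each period from a KNOWN
# distribution: mean 5.0, spread +/- 1.0. The distribution itself never
# changes -- that is the sense in which the fundamentals are static.
MEAN_DIVIDEND = 5.0
DISCOUNT_RATE = 0.05

# Under rational expectations, today's price fully reflects the known
# stochastic structure: the discounted expected dividend stream.
price = MEAN_DIVIDEND / DISCOUNT_RATE  # = 100.0

# Ex post, realized dividends differ from the mean, but forecast errors
# are unbiased: over many draws they average out to roughly zero.
errors = []
for _ in range(100_000):
    realized = random.uniform(MEAN_DIVIDEND - 1.0, MEAN_DIVIDEND + 1.0)
    errors.append(realized - MEAN_DIVIDEND)

mean_error = sum(errors) / len(errors)
print(f"price today: {price:.1f}")
print(f"mean forecast error: {mean_error:.4f}")
```

Nothing in the simulation ever changes the distribution itself; that is exactly what makes the fundamentals static, and it is what the growth of genuinely new knowledge, discussed below, rules out.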

The problem with rational expectations is not a flaw in logic. In fact, the importance of rational expectations is that it is a very important logical test for the coherence of a model. If a model cannot be solved for a rational-expectations equilibrium, it suffers from a basic lack of coherence. Something is basically wrong with a model in which the expectation of the equilibrium values predicted by the model does not lead to their realization. But a logical property of the model is not the same as a positive theory of how expectations are formed and how they evolve. In the real world, knowledge is constantly growing, and new knowledge implies that the fundamentals underlying the economy must be changing as knowledge grows. The future fundamentals that will determine the future prices of a future economy cannot be rationally expected in the present, because we have no way of specifying probability distributions corresponding to dynamic evolving systems.

Future fundamentals are thus logically unknowable in the present, even in a probabilistic sense, because we cannot predict what our future knowledge will be (if we could, that future knowledge would already be present knowledge). It follows that expectations of the future cannot possibly be rational, because we never have the knowledge that would be necessary to form rational expectations. And so I can’t accept Scott’s assertion that asset prices are based on rational expectations of future fundamentals. It seems to me that the causation goes in the other direction as well: future fundamentals will be based, at least in part, on current expectations.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typical of New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
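To see how little machinery the single-period model involves, here is a minimal sketch of a linear IS-LM system (the functional forms and all parameter values are hypothetical, chosen only to show how the two curves jointly determine an equilibrium):

```python
# A stripped-down linear IS-LM system, solved for income Y and the
# interest rate r. All numbers are hypothetical.
#
#   IS:  Y = a - b*r       (spending falls as the interest rate rises)
#   LM:  M = k*Y - h*r     (money demand rises with income, falls with r)

a, b = 1000.0, 50.0   # IS intercept and interest sensitivity
k, h = 0.5, 100.0     # LM income and interest sensitivities
M = 400.0             # real money supply

# Substitute IS into LM and solve: M = k*(a - b*r) - h*r
r = (k * a - M) / (k * b + h)
Y = a - b * r

print(f"equilibrium r = {r:.3f}, Y = {Y:.1f}")
```

The 1970s augmentation described above amounts to adding a second period and an expected future price level, so that the single r solved for here can be decomposed into real and nominal components and inflation can enter the story.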

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, a.k.a. circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
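The point can be put in symbols (a textbook restatement of the comparative-statics method, not anything specific to Samuelson’s own notation). Suppose equilibrium is defined by a condition F(x, α) = 0, where x is the endogenous variable and α the parameter:

```latex
% Equilibrium condition F(x,\alpha)=0, with x endogenous and \alpha a
% parameter. Totally differentiating around the equilibrium:
\[
F_x \, dx + F_\alpha \, d\alpha = 0
\qquad\Longrightarrow\qquad
\frac{dx^*}{d\alpha} = -\,\frac{F_\alpha}{F_x}.
\]
% The sign of dx^*/d\alpha -- the "meaningful theorem" -- is determinate
% only if F_x \neq 0, and it is the local-stability assumption
% (Samuelson's correspondence principle) that pins down the sign of F_x.
```

The derivative is purely local: it is silent about what happens if the parameter change pushes the system outside the neighborhood of the original equilibrium, which is precisely the limitation at issue here.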

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he or she can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. Consumers therefore have to make current transactions based on their expectations about future prices and a variety of other circumstances about which they can only guess. Their budget constraints are thus not real but conjectural, based on their expectations of future prices. The optimizing Euler equations are therefore conjectural as well, and subject to continual revision in response to changing expectations. It is a huge stretch to suppose that the microeconomic theory of consumer choice applies straightforwardly to the intertemporal choice problem, in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are (a) likely to be very different from each other and thus (b) likely to be different from their ultimate realizations. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.
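The arithmetic of the example can be sketched as follows (the spending rule and the level of committed payments are hypothetical, introduced only to make the numbers concrete):

```python
# Consumption planning against an expected (not actual) budget, using
# the numbers from the text: expected disposable income of $500k next
# year versus a realized income of $100k. The planning rule -- spend a
# fixed fraction of expected income plus fixed committed payments --
# is purely illustrative.
def plan(expected_income, spend_rate=0.6, committed=150_000.0):
    """Planned spending plus fixed obligations, based on expectations."""
    return expected_income * spend_rate + committed

expected, realized = 500_000.0, 100_000.0

planned_outlays = plan(expected)        # fine if income turns out to be 500k
shortfall = planned_outlays - realized  # but income turns out to be 100k

print(f"planned outlays: {planned_outlays:,.0f}")
print(f"shortfall when expectations go wrong: {shortfall:,.0f}")
```

The plan is entirely rational conditional on the expectation; the trouble arises only when the realization diverges from it, and the shortfall then has to be covered by borrowing, asset sales, or default.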

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.
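A toy simulation can illustrate the amplification mechanism (the feedback rule here is entirely made up; it is meant only to show how spending cuts by disappointed agents can lower everyone else’s income in subsequent rounds, not to model any actual economy):

```python
# Toy propagation of an expectational shock. Each agent's income next
# period depends on aggregate spending this period, so an initial
# shortfall feeds back on itself. Parameters are arbitrary.
def propagate(incomes, shock, rounds=5, cut=0.5):
    """Apply an initial income shock and let spending cuts feed back."""
    path = [sum(incomes)]                     # pre-shock aggregate income
    incomes = [y * (1 - shock) for y in incomes]
    for _ in range(rounds):
        total = sum(incomes)
        path.append(total)
        # each agent's income shrinks as aggregate spending falls
        # relative to its pre-shock level
        incomes = [y * (0.5 + cut * total / path[0]) for y in incomes]
    return path

path = propagate([100.0] * 10, shock=0.2)
print(path)  # aggregate income falls further in every round
```

In this toy setup the decline is self-amplifying for as long as the feedback operates, which is the point of the tsunami analogy: the propagation is a property of the aggregate, not of any individual agent’s optimization problem.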

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” an attack on two papers, one by McGrattan and Prescott and the other by Lucas and Moll, on aspects of growth theory. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing the results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional, and perhaps ideological, association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomics time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: with all agents in a general equilibrium assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here; he probably has elsewhere – exactly why he doesn’t think equality of demand and supply in a Walrasian auction is the right equilibrium concept. But I would be interested in hearing from him why he thinks it is not the right equilibrium concept. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model, the property being that if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model doesn’t posit that agents actually foresee the equilibrium price vector correctly – that would be an extreme and unrealistic assumption about how the world works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.
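A toy example may make the distinction concrete. In the linear model sketched below (all parameter values are invented for illustration), “rational expectations” is just the requirement that the expected price be a fixed point of the model’s own solution – a consistency property of the model, not a claim about real-world foresight:

```python
# Toy model: demand q_d = a - b*p, supply q_s = c + d*E[p].
# Market clearing given an expected price E[p] yields p = (a - c - d*E[p]) / b.
# "Rational expectations" is the model-consistency requirement E[p] = p:
# the expected price is a fixed point of the map below. This is a property
# of the model, not a theory of how actual agents form expectations.

a, b, c, d = 10.0, 1.0, 2.0, 0.5  # illustrative parameter values

def realized_price(expected_price):
    """Market-clearing price when suppliers act on expected_price."""
    return (a - c - d * expected_price) / b

# The rational-expectations price solves E[p] = (a - c - d*E[p]) / b:
p_re = (a - c) / (b + d)

# At p_re, expectations are self-fulfilling:
assert abs(realized_price(p_re) - p_re) < 1e-12

# Any other expectation is systematically falsified by the model's own outcome:
assert abs(realized_price(p_re + 1.0) - (p_re + 1.0)) > 0.1
```

The fixed-point requirement closes the model; it says nothing, by itself, about whether or how actual agents would ever converge on p_re.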

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof, and in view of Roger’s recent discussion of unit roots, dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e. becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable. Ball and Mankiw, writing in an ironic tone, cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott. Lucas was evidently so offended that he stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science – as if that were the real message of the paper. It was not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price-adjustment would suggest.
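The single-market case really is easy. Here is a minimal sketch, assuming hypothetical linear supply and demand curves, of the standard excess-demand price-adjustment rule converging in isolation – exactly the one-dimensional picture that leaves out the multi-market interactions discussed above:

```python
# One isolated market with linear curves: demand = a - b*p, supply = c + d*p.
# Price-adjustment rule: raise price under excess demand, lower it under
# excess supply. In a single market this converges; the point in the text
# is that no comparable guarantee exists when many interdependent markets
# are out of equilibrium at once. Parameters are made up for illustration.

a, b, c, d = 10.0, 1.0, 2.0, 0.5   # illustrative demand/supply parameters
speed = 0.3                         # adjustment speed

def excess_demand(p):
    return (a - b * p) - (c + d * p)

p = 1.0                             # start far from equilibrium
for _ in range(200):
    p += speed * excess_demand(p)   # tatonnement step: no trading en route

p_star = (a - c) / (b + d)          # analytic equilibrium price
assert abs(p - p_star) < 1e-6       # the rule finds the equilibrium
assert abs(excess_demand(p)) < 1e-6
```

With the curves sloped the usual way, each step is a contraction toward p_star; it is precisely this contraction property that fails to generalize once price changes in one market shift the excess demands in others.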

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:
[Chart from Dornbusch and Fischer: simulated inflation (top) and unemployment (bottom) under a gradual reduction in money growth]
Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.
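The mechanics of the Dornbusch-Fischer scenario can be illustrated in a few lines. The sketch below (my own, with invented parameter values, not theirs) uses the simplest adaptive-expectations (accelerationist) Phillips curve, in which current inflation equals lagged inflation minus a term proportional to the gap between unemployment and its natural rate:

```python
# Adaptive-expectations Phillips curve: pi_t = pi_{t-1} - a*(u_t - u_star).
# Parameter values and the unemployment path are illustrative only.

a = 0.5          # slope: how much excess unemployment reduces inflation
u_star = 6.0     # natural rate of unemployment (percent)
pi = 10.0        # initial inflation (percent)

# Policy: hold unemployment 4 points above the natural rate for 4 periods,
# then let it return to the natural rate.
unemployment_path = [10.0] * 4 + [6.0] * 4

history = []
for u in unemployment_path:
    pi = pi - a * (u - u_star)   # lagged inflation anchors expectations
    history.append((u, round(pi, 2)))

for u, p in history:
    print(f"u = {u:5.1f}%  inflation = {p:5.2f}%")
```

The output traces exactly the story in the text: inflation ratchets down only while unemployment exceeds the natural rate, and once unemployment returns to the natural rate, inflation stays at its new, permanently lower level.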

And here’s what the Volcker disinflation actually looked like:
[Chart: actual US inflation and unemployment during the Volcker disinflation]
A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics: whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening arrived. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, growth rates over the target range being associated with falling stock and commodity prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started by a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans, derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t, because the New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about the macroeconomy, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts, notably in the 1970 Phelps volume, to provide microfoundations changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

Temporary Equilibrium One More Time

It’s always nice to be noticed, especially by Paul Krugman. So I am not upset, but in his response to my previous post, I don’t think that Krugman quite understood what I was trying to convey. I will try to be clearer this time. It will be easiest if I just quote from his post and insert my comments or explanations.

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of “temporary equilibrium”.

Actually, that’s not what I was trying to say. I wasn’t making any explicit connection between Hicks’s temporary-equilibrium concept from Value and Capital and the IS-LM model that he introduced two years earlier in his paper on Keynes and the Classics. Of course that doesn’t mean that the temporary equilibrium method isn’t connected to the IS-LM model; one would need to do a more in-depth study than I have done of Hicks’s intellectual development to determine how much IS-LM was influenced by Hicks’s interest in intertemporal equilibrium and in the method of temporary equilibrium as a way of analyzing intertemporal issues.

This involves using quasi-static methods to analyze a dynamic economy, not because you don’t realize that it’s dynamic, but simply as a tool. In particular, V&C discussed at some length a temporary equilibrium in a three-sector economy, with goods, bonds, and money; that’s essentially full-employment IS-LM, which becomes the 1937 version with some price stickiness. I wrote about that a long time ago.

Now I do think that it’s fair to say that the IS-LM model was very much in the spirit of Value and Capital, in which Hicks deployed an explicit general-equilibrium model to analyze an economy at a Keynesian level of aggregation: goods, bonds, and money. But the temporary-equilibrium aspect of Value and Capital went beyond the Keynesian analysis, because the temporary equilibrium analysis was explicitly intertemporal, all agents formulating plans based on explicit future price expectations, and the inconsistency between expected prices and actual prices was explicitly noted, while in the General Theory, and in IS-LM, price expectations were kept in the background, making an appearance only in the discussion of the marginal efficiency of capital.

So is IS-LM really Keynesian? I think yes — there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story — no more periods, forced saving, boom and bust, instead a focus on how economies can stay depressed. Anyway, does it matter? The real question is whether the method of temporary equilibrium is useful.

That is precisely where I think Krugman’s grasp on the concept of temporary equilibrium is slipping. Temporary equilibrium is indeed about periods, and it is explicitly dynamic. In my previous post I referred to Hicks’s discussion in Capital and Growth, about 25 years after writing Value and Capital, in which he wrote

The Temporary Equilibrium model of Value and Capital, also, is “quasi-static” [like the Keynes theory] – in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.

As I read this passage now — and it really bothered me when I read it as I was writing my previous post — I realize that what Hicks was saying was that his desire to conform to the Keynesian paradigm led him to compromise the integrity of the temporary equilibrium model, by forcing it to be “quasi-static” when it really was essentially dynamic. The challenge has been to convert a “quasi-static” IS-LM model into something closer to the temporary-equilibrium method that Hicks introduced, but did not fully execute in Value and Capital.

What are the alternatives? One — which took over much of macro — is to do intertemporal equilibrium all the way, with consumers making lifetime consumption plans, prices set with the future rationally expected, and so on. That’s DSGE — and I think Glasner and I agree that this hasn’t worked out too well. In fact, economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket.

Yes, I agree! Rational expectations, full-equilibrium models have turned out to be a regression, not an advance. But the way I would make the point is that the temporary-equilibrium method provides a sort of a middle way to do intertemporal dynamics without presuming that consumption plans and investment plans are always optimal.

What about disequilibrium dynamics all the way? Basically, I have never seen anyone pull this off. Like the forever-equilibrium types, constant-disequilibrium theorists have a remarkable tendency to make elementary conceptual mistakes.

Again, I agree. We can’t work without some sort of equilibrium conditions, but temporary equilibrium provides a way to keep the discipline of equilibrium without assuming (nearly) full optimality.
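To make the middle way concrete, here is a toy temporary-equilibrium loop. It is just the textbook cobweb model, with parameters invented for illustration, not a model from Hicks: each period, production plans are based on the price expected last period, the market clears at whatever price the realized supply fetches, and the disappointed expectation is then revised adaptively:

```python
# Toy temporary equilibrium: markets clear every period given expectations,
# but expectations can be wrong and are revised after each disappointment.
# Parameters are invented; this is the textbook cobweb model as a stand-in.

def demand_price(q):
    """Inverse demand: the price at which quantity q is bought (p = 20 - q)."""
    return 20.0 - q

def supply(p_expected):
    """Production is planned on the basis of the expected price (q = 2*p^e)."""
    return 2.0 * p_expected

p_expected = 9.0                   # initial (mistaken) price expectation
for t in range(30):
    q = supply(p_expected)         # plans formed on expectations
    p = demand_price(q)            # the market clears at the realized price
    p_expected += 0.5 * (p - p_expected)   # adaptive revision of expectations
```

Each period is an equilibrium in the sense that the realized price clears the market given the quantities produced, yet plans are persistently disappointed until expectations converge to the full-equilibrium price (20/3 in this parameterization), illustrating how the method keeps the discipline of equilibrium without assuming optimal plans.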

Still, Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised.

Perhaps I was unclear, but I thought I was saying just the opposite. It’s the “quasi-static” IS-LM model, not temporary equilibrium, that fails to take account of the dynamics produced by revised expectations.

I guess I’d say two things. First, I’m not sure that this is always true. Hicks did indeed assume static expectations — the future will be like the present; but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right.

Again, I agree. There may be self-fulfilling expectations of a low-income, low-employment equilibrium. But I don’t think that that is the only explanation for such a situation, and certainly not for the downturn that can lead to such an equilibrium.

Second, those of us who use temporary equilibrium often do think in terms of dynamics as expectations adjust. In fact, you could say that the textbook story of how the short-run aggregate supply curve adjusts over time, eventually restoring full employment, is just that kind of thing. It’s not a great story, but it is the kind of dynamics Glasner wants — and it’s Econ 101 stuff.

Again, I agree. It’s not a great story, but, like it or not, the story is not a Keynesian story.

So where does this leave us? I’m not sure, but my impression is that Krugman, in his admiration for the IS-LM model, is trying too hard to identify IS-LM with the temporary-equilibrium approach, which I think represented a major conceptual advance over both the Keynesian model and the IS-LM representation of the Keynesian model. Temporary equilibrium and IS-LM are not necessarily inconsistent, but I mainly wanted to point out that the two aren’t the same, and shouldn’t be conflated.

Paul Krugman and Roger Farmer on Sticky Wages

I was pleasantly surprised last Friday to see that Paul Krugman took favorable notice of my post about sticky wages, but also registering some disagreement.

[Glasner] is partially right in suggesting that there has been a bit of a role reversal regarding the role of sticky wages in recessions: Keynes asserted that wage flexibility would not help, but Keynes’s self-proclaimed heirs ended up putting downward nominal wage rigidity at the core of their analysis. By the way, this didn’t start with the New Keynesians; way back in the 1940s Franco Modigliani had already taught us to think that everything depended on M/w, the ratio of the money supply to the wage rate.

That said, wage stickiness plays a bigger role in The General Theory — and in modern discussions that are consistent with what Keynes said — than Glasner indicates.

To document his assertion about Keynes, Krugman quotes a passage from the General Theory in which Keynes seems to suggest that in the nineteenth century inflexible wages were partially compensated for by price level movements. One might quibble with Krugman’s interpretation, but the payoff doesn’t seem worth the effort.

But I will quibble with the next paragraph in Krugman’s post.

But there’s another point: even if you don’t think wage flexibility would help in our current situation (and like Keynes, I think it wouldn’t), Keynesians still need a sticky-wage story to make the facts consistent with involuntary unemployment. For if wages were flexible, an excess supply of labor should be reflected in ever-falling wages. If you want to say that we have lots of willing workers unable to find jobs — as opposed to moochers not really seeking work because they’re cradled in Paul Ryan’s hammock — you have to have a story about why wages aren’t falling.

Not that I really disagree with Krugman that the behavior of wages since the 2008 downturn is consistent with some stickiness in wages. Nevertheless, it is still not necessarily the case that, if wages were flexible, an excess supply of labor would lead to ever-falling wages. In a search model of unemployment, if workers expect wages to rise at a 3% annual rate, but wages instead rise at only a 1% rate, the model predicts that unemployment will rise, and will continue to rise (or at least not return to the natural rate) as long as observed wages fail to increase as fast as workers expect them to. Presumably, over time, wage expectations would adjust to the new, lower rate of increase, but there is no guarantee that the transition would be speedy.
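The search-model point can be illustrated with a small simulation, entirely my own construction rather than a calibrated model: workers’ reservation wages grow at the rate they expect wages to grow, so when offered wages grow more slowly, a growing share of offers falls below the reservation wage, and the rejection rate (a stand-in for unemployment) rises even though wages are perfectly flexible:

```python
# Illustrative only: reservation wages track *expected* wage growth (3%),
# offered wages grow more slowly (1%), so more and more offers are rejected.

import random

random.seed(0)
expected_growth = 0.03     # workers believe wages rise 3% a year
actual_growth = 0.01       # wages actually rise 1% a year
reservation = 1.00         # initial reservation wage
mean_offer = 1.00          # initial mean wage offer

rejection_rates = []
for year in range(10):
    reservation *= (1 + expected_growth)   # expectations drive the reservation wage
    mean_offer *= (1 + actual_growth)      # offers lag behind expectations
    offers = [mean_offer * random.uniform(0.9, 1.1) for _ in range(10000)]
    rejected = sum(1 for w in offers if w < reservation)
    rejection_rates.append(rejected / len(offers))
```

Because the reservation/offer ratio compounds at roughly 2% a year, the rejection rate climbs steadily; unemployment falls back only once expectations adapt to the slower rate of wage growth, which nothing in the sketch guarantees will happen quickly.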

Krugman concludes:

So sticky wages are an important part of both old and new Keynesian analysis, not because wage cuts would help us, but simply to make sense of what we see.

My own view is actually a bit more guarded. I think that “sticky wages” is simply a name that we apply to a problematic phenomenon for which we still haven’t found a really satisfactory explanation. Search models, for all their theoretical elegance, simply can’t explain the observed process by which unemployment rises during recessions, i.e., by layoffs and a lack of job openings rather than by an increase in quits and refused offers, as search models imply. The suggestion in my earlier post was intended to offer a possible basis for understanding what the phrase “sticky wages” is actually describing.

Roger Farmer, a long-time and renowned UCLA economist, also commented on my post on his new blog. Welcome to the blogosphere, Roger.

Roger has a different take on the sticky-wage phenomenon. Roger argues, as did some of the commenters to my post, that wages are not sticky. To document this assertion, Roger presents a diagram showing that the decline of nominal wages closely tracked that of prices for the first six years of the Great Depression. From this evidence Roger concludes that nominal wage rigidity is not the cause of rising unemployment during the Great Depression, and presumably, not the cause of rising unemployment in the Little Depression.

Instead, Roger argues, the rise in unemployment was caused by an outbreak of self-fulfilling pessimism. Roger believes that there are many alternative equilibria, and which equilibrium (actually, which equilibrium time path) we reach depends on what our expectations are. Roger also believes that our expectations are rational, so that we get what we expect; as he succinctly phrases it, “beliefs are fundamental.” I have a lot of sympathy for this way of looking at the economy. In fact, one of the early posts on this blog was entitled “Expectations are Fundamental.” But as I have explained in other posts, I am not so sure that expectations are rational in any useful sense, because I think that individual expectations diverge. I don’t think that there is a single way of looking at reality. If there are many potential equilibria, why should everyone expect the same equilibrium? I can be an optimist, and you can be a pessimist. If we agreed, we would be right, but if we disagree, we will both be wrong. What economic mechanism is there to reconcile our expectations? In a world in which expectations diverge — a world of temporary equilibrium — there can be cumulative output reductions that get propagated across the economy as each sector fails to produce its maximum potential output, thereby reducing the demand for the output of other sectors to which it is linked. That’s what happens when there is trading at prices that don’t correspond to the full optimum equilibrium solution.
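The coordination problem can be put in the form of a toy two-sector example (my own illustration, not Farmer’s model): each sector produces what it expects the other to demand, and trade in each market follows the short-side rule, so matched expectations are self-fulfilling while divergent expectations leave the optimist disappointed and total output below potential:

```python
# Toy multiple-equilibria example: output depends on expectations, and
# mismatched expectations produce a coordination failure. All numbers invented.

def realized_output(expect_a, expect_b, capacity=100):
    # Each sector plans production equal to the demand it expects from the
    # other sector (capped at capacity); realized sales follow the short-side
    # rule: the volume of trade is the lesser of supply and demand.
    plan_a = min(expect_a, capacity)
    plan_b = min(expect_b, capacity)
    sales_a = min(plan_a, plan_b)
    sales_b = min(plan_b, plan_a)
    return sales_a + sales_b

high = realized_output(100, 100)   # shared optimism: full-capacity equilibrium
low = realized_output(40, 40)      # shared pessimism: also self-fulfilling
mixed = realized_output(100, 40)   # divergent expectations: the optimist is wrong
```

Shared optimism and shared pessimism are both self-confirming (200 versus 80 units of total output here), and when expectations diverge, the pessimist’s low expectation drags the optimist down to it: exactly the sense in which, if we disagree, we will both be wrong and output will fall short of potential.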

So I agree with Roger in part, but I think that the coordination problem is (at least potentially) more serious than he imagines.

Big Ideas in Macroeconomics: A Review

Steve Williamson recently plugged a new book by Kartik Athreya (Big Ideas in Macroeconomics), an economist at the Federal Reserve Bank of Richmond, which tries to explain in relatively non-technical terms what modern macroeconomics is all about. I will acknowledge that my graduate training in macroeconomics predated the rise of modern macro, and I am not fluent in the language of modern macro, though I am trying to fill in the gaps. And this book is a good place to start. I found Athreya’s book a good overview of the field, explaining the fundamental ideas and how they fit together.

Big Ideas in Macroeconomics is a moderately big book, 415 pages, covering a very wide range of topics. It is noteworthy, I think, that despite its size, there is so little overlap between the topics covered in this book, and those covered in more traditional, perhaps old-fashioned, books on macroeconomics. The index contains not a single entry on the price level, inflation, deflation, money, interest, total output, employment or unemployment. Which is not to say that none of those concepts are ever mentioned or discussed, just that they are not treated, as they are in traditional macroeconomics books, as the principal objects of macroeconomic inquiry. The conduct of monetary or fiscal policy to achieve some explicit macroeconomic objective is never discussed. In contrast, there are repeated references to Walrasian equilibrium, the Arrow-Debreu-McKenzie model, the Radner model, Nash-equilibria, Pareto optimality, the first and second Welfare theorems. It’s a new world.

The first two chapters present a fairly detailed description of the idea of Walrasian general equilibrium and its modern incarnation in the canonical Arrow-Debreu-McKenzie (ADM) model. The ADM model describes an economy of utility-maximizing households and profit-maximizing firms engaged in the production and consumption of commodities through time and space. There are markets for commodities dated by time period, specified by location, and classified by foreseeable contingent states of the world, so that the same physical commodity corresponds to many separate commodities, each corresponding to a different time period, location, and contingent state of the world. Prices for such physically identical commodities are not necessarily uniform across times, locations, or contingent states. The demand for road salt to de-ice roads depends on weather conditions, which depend on time and location and on states of the world. For each different possible weather contingency, there would be a distinct market for road salt for each location and time period.
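The commodity-indexing idea is easy to make concrete as a data structure. In this sketch (with invented goods, dates, locations, and prices), one physical good fans out into a distinct commodity, each with its own market and its own price, for every combination of date, location, and contingent state:

```python
# ADM commodity space as a data structure: a "commodity" is a fully indexed
# (good, date, location, state) tuple. All names and prices are invented.

from itertools import product

goods = ["road_salt"]
dates = [2024, 2025]
locations = ["Boston", "Miami"]
states = ["snowstorm", "mild_winter"]

# One market (and one price) per fully indexed commodity.
commodities = list(product(goods, dates, locations, states))

prices = {c: None for c in commodities}
# Prices need not be uniform across the indices: salt during a Boston
# snowstorm is a different commodity from physically identical salt in Miami.
prices[("road_salt", 2024, "Boston", "snowstorm")] = 12.0
prices[("road_salt", 2024, "Miami", "mild_winter")] = 3.0

print(len(commodities))  # 1 physical good -> 8 distinct ADM commodities
```

The combinatorics make vivid why completeness of markets is such a demanding assumption: even one good, two dates, two locations, and two weather states already require eight separate markets, each of which must exist for the welfare theorems discussed below to apply.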

The ADM model is solved once for all time periods and all states of the world. Under appropriate conditions, there is at least one (and possibly more than one) intertemporal equilibrium, all trades being executed in advance, with all deliveries subsequently being carried out, as time and contingencies unfold, in accordance with the terms of the original contracts.

Given the existence of an equilibrium, i.e., a set of prices subject to which all agents are individually optimizing, and all markets are clearing, there are two classical welfare theorems stating that any such equilibrium involves a Pareto-optimal allocation and any Pareto-optimal allocation could be supported by an equilibrium set of prices corresponding to a suitably chosen set of initial endowments. For these optimality results to obtain, it is necessary that markets be complete in the sense that there is a market for each commodity in each time period and contingent state of the world. Without a complete set of markets in this sense, the Pareto-optimality of the Walrasian equilibrium cannot be proved.

Readers may wonder about the process by which an equilibrium price vector would actually be found through some trading process. Athreya invokes the fiction of a Walrasian clearinghouse in which all agents (truthfully) register their notional demands and supplies at alternative price vectors. Based on these responses the clearinghouse is able to determine, by a process of trial and error, the equilibrium price vector. Since the Walrasian clearinghouse presumes that no trading occurs except at an equilibrium price vector, there can be no assurance that an equilibrium price vector would ever be arrived at under an actual trading process in which trading occurs at disequilibrium prices. Moreover, as Clower and Leijonhufvud showed over 40 years ago (“Say’s Principle: What it Means and What it Doesn’t Mean”), trading at disequilibrium prices may cause cumulative contractions of aggregate demand because the total volume of trade at a disequilibrium price will always be less than the volume of trade at an equilibrium price, the volume of trade being constrained by the lesser of quantity supplied and quantity demanded.
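To make the clearinghouse fiction concrete, here is a minimal sketch, in Python, of tâtonnement price adjustment in a single market. The linear demand and supply functions and the adjustment step size are purely illustrative assumptions of mine, not anything taken from Athreya's book; the one essential feature the sketch does capture is that no trade is executed until the process has (approximately) found a market-clearing price.

```python
# Toy "Walrasian clearinghouse": adjust price by trial and error
# (tatonnement) until notional demand and supply balance.
# Demand/supply functions and step size are hypothetical, for illustration only.

def excess_demand(p):
    """Notional excess demand at price p: demand falls, supply rises, in p."""
    demand = 10.0 - p
    supply = 2.0 * p
    return demand - supply

def tatonnement(p=1.0, step=0.2, tol=1e-9, max_iter=10000):
    """Raise the price when demand exceeds supply, lower it otherwise.
    Crucially, no trade occurs at any of the disequilibrium trial prices."""
    for _ in range(max_iter):
        z = excess_demand(p)
        if abs(z) < tol:
            return p
        p += step * z
    return p

p_star = tatonnement()
# Market clears where 10 - p = 2p, i.e., p = 10/3
```

The Clower-Leijonhufvud point in the paragraph above is precisely what this fiction assumes away: if trades were actually executed at one of the disequilibrium trial prices, the volume of trade would be the lesser of quantity demanded and quantity supplied, with income effects feeding back on demand in other markets.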

In the view of modern macroeconomics, then, Walrasian general equilibrium, as characterized by the ADM model, is the basic and overarching paradigm of macroeconomic analysis. To be sure, modern macroeconomics tries to go beyond the highly restrictive assumptions of the ADM model, but it is not clear whether the concessions made by modern macroeconomics to the real world go very far in enhancing the realism of the basic model.

Chapter 3 contains some interesting reflections on the importance of efficiency (Pareto-optimality) as a policy objective and on the trade-offs between efficiency and equity and between ex-ante and ex-post efficiency. But these topics are on the periphery of macroeconomics, so I will offer no comment here.

In chapter 4, Athreya turns to some common criticisms of modern macroeconomics: that it is too highly aggregated, too wedded to the rationality assumption, too focused on equilibrium steady states, and too highly mathematical. Athreya correctly points out that older macroeconomic models were also highly aggregated, so that if aggregation is a problem it is not unique to modern macroeconomics. That’s a fair point, but it skirts some thorny issues. As Athreya acknowledges in chapter 5, an important issue separating modern macroeconomics from certain older macroeconomic traditions (Keynesian and Austrian, among others) is the idea that macroeconomic dysfunction is a manifestation of coordination failure. It is a property – a remarkable property – of Walrasian general equilibrium that it achieves perfect (i.e., Pareto-optimal) coordination of disparate, self-interested, competitive individual agents, fully reconciling their plans in a way that might have been achieved by an omniscient and benevolent central planner. Walrasian general equilibrium fully solves the coordination problem. Insofar as important results of modern macroeconomics depend on the assumption that a real-life economy can be realistically characterized as a Walrasian equilibrium, modern macroeconomics is assuming that coordination failures are irrelevant to macroeconomics. It is only after coordination failures have been excluded from the purview of macroeconomics that it becomes legitimate (for the sake of mathematical tractability) to deploy representative-agent models in macroeconomics, a coordination failure being tantamount, in the context of a representative-agent model, to a form of irrationality on the part of the representative agent.
Athreya characterizes choices about the level of aggregation as a trade-off between realism and tractability, but it seems to me that, rather than making a trade-off between realism and tractability, modern macroeconomics has simply made an a priori decision that coordination problems are not a relevant macroeconomic concern.

A similar argument applies to Athreya’s defense of rational expectations and the use of equilibrium in modern macroeconomic models. I would not deny that there are good reasons to adopt rational expectations and full equilibrium in some modeling situations, depending on the problem the theorist is trying to address. The question is whether it can be appropriate to deviate from the assumption of a full rational-expectations equilibrium for the purposes of modeling fluctuations over the course of a business cycle, especially a deep cyclical downturn. In particular, the idea of a Hicksian temporary equilibrium in which agents hold divergent expectations about future prices, but markets clear period by period given those divergent expectations, seems to offer (as in, e.g., Thompson’s “Reformulation of Macroeconomic Theory“) more realism and richer empirical content than modern macromodels of rational expectations.

Athreya offers the following explanation and defense of rational expectations:

[Rational expectations] purports to explain the expectations people actually have about the relevant items in their own futures. It does so by asking that their expectations lead to economy-wide outcomes that do not contradict their views. By imposing the requirement that expectations not be systematically contradicted by outcomes, economists keep an unobservable object from becoming a source of “free parameters” through which we can cheaply claim to have “explained” some phenomenon. In other words, in rational-expectations models, expectations are part of what is solved for, and so they are not left to the discretion of the modeler to impose willy-nilly. In so doing, the assumption of rational expectations protects the public from economists.

This defense of rational expectations plainly betrays the methodological arrogance of modern macroeconomics. I am all in favor of solving a model for equilibrium expectations, but solving for equilibrium expectations is certainly not the same as insisting that the only interesting or relevant result of a model is the one generated by the assumption of full equilibrium under rational expectations. (Again see Thompson’s “Reformulation of Macroeconomic Theory” as well as the classic paper by Foley and Sidrauski, and this post by Rajiv Sethi on his blog.) It may be relevant and useful to look at a model and examine its properties in a state in which agents hold inconsistent expectations about future prices, so that the temporary equilibrium existing at a point in time does not correspond to a steady state. Why is such an equilibrium uninteresting and uninformative about what happens in a business cycle? But evidently modern macroeconomists such as Athreya consider it their duty to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts.

Chapter 5 is the most important chapter of the book. It is in this chapter that Athreya examines in more detail the kinds of adjustments that modern macroeconomists make in the Walrasian/ADM paradigm to accommodate the incompleteness of markets and the imperfections of expectation formation that limit the empirical relevance of the full ADM model as a macroeconomic paradigm. To do so, Athreya starts by explaining the Radner model, in which less than the full complement of Arrow-Debreu contingent-claims markets is available. In the Radner model, unlike the ADM model, trading takes place through time in those markets that actually exist, so that the full Walrasian equilibrium exists only if agents are able to form correct expectations about future prices. And even if the full Walrasian equilibrium exists, in the absence of a complete set of Arrow-Debreu markets, the classical welfare theorems may not obtain.

To Athreya, these limitations on the Radner version of the Walrasian model seem manageable. After all, if no one really knows how to improve on the equilibrium of the Radner model, the potential existence of Pareto improvements to the Radner equilibrium is not necessarily that big a deal. Athreya expands on the discussion of the Radner model by introducing the neoclassical growth model in both its deterministic and stochastic versions, all the elements of the dynamic stochastic general equilibrium (DSGE) model that characterizes modern macroeconomics now being in place. Athreya closes out the chapter with additional discussions of the role of further modifications to the basic Walrasian paradigm, particularly search models and overlapping-generations models.

I found the discussion in chapter 5 highly informative and useful, but it doesn’t seem to me that Athreya faces up to the limitations of the Radner model or to the implied disconnect between the Walrasian paradigm and macroeconomic analysis. A full Walrasian equilibrium exists in the Radner model only if all agents correctly anticipate future prices. If they don’t correctly anticipate future prices, then we are in the world of Hicksian temporary equilibrium. But in that world, the kind of coordination failures that Athreya so casually dismisses seem all too likely to occur. In a world of temporary equilibrium, there is no guarantee that intertemporal budget constraints will be effective, because those budget constraints reflect expected, not actual, future prices, and, in temporary equilibrium, expected prices are not the same for all transactors. Budget constraints are not binding in a world in which trading takes place through time based on possibly incorrect expectations of future prices. Not only does this mean that all the standard equilibrium and optimality conditions of Walrasian theory are violated, but that defaults on IOUs and, thus, financial-market breakdowns, are entirely possible.

In a key passage in chapter 5, Athreya dismisses coordination-failure explanations, invidiously characterized as Keynesian, for inefficient declines in output and employment. While acknowledging that such fluctuations could, in theory, be caused by “self-fulfilling pessimism or fear,” Athreya invokes the benchmark Radner trading arrangement of the ADM model. “In the Radner economy,” Athreya writes, “households and firms have correct expectations for the spot market prices one period hence.” The justification for that expectational assumption, which seems indistinguishable from the assumption of a full, rational-expectations equilibrium, is left unstated. Athreya continues:

Granting that they indeed have such expectations, we can now ask about the extent to which, in a modern economy, we can have outcomes that are extremely sensitive to them. In particular, is it the case that under fairly plausible conditions, “optimism” and “pessimism” can be self-fulfilling in ways that make everyone (or nearly everyone) better off in the former than the latter?

Athreya argues that this is possible only if the aggregate production function of the economy is characterized by increasing returns to scale, so that productivity increases as output rises.

[W]hat I have in mind is that the structure of the economy must be such that when, for example, all households suddenly defer consumption spending (and save instead), interest rates do not adjust rapidly to forestall such a fall in spending by encouraging firms to invest.

Notice that Athreya makes no distinction between a reduction in consumption in which people shift into long-term real or financial assets and one in which people shift into holding cash. The two cases are hardly identical, but Athreya has nothing to say about the demand for money and its role in macroeconomics.

If they did, under what I will later describe as a “standard” production side for the economy, wages would, barring any countervailing forces, promptly rise (as the capital stock rises and makes workers more productive). In turn, output would not fall in response to pessimism.

What Athreya is saying is that if we assume that there is a reduction in the time preference of households, causing them to defer present consumption in order to increase their future consumption, the shift in time preference should be reflected in a rise in asset prices, causing an increase in the production of durable assets, and leading to an increase in wages insofar as the increase in the stock of fixed capital implies an increase in the marginal product of labor. Thus, if all the consequences of increased thrift are foreseen at the moment that current demand for output falls, there would be a smooth transition from the previous steady state corresponding to a high rate of time preference to the new steady state corresponding to a low rate of time preference.

Fine. If you assume that the economy always remains in full equilibrium, even in the transition from one steady state to another, because everyone has rational expectations, you will avoid a lot of unpleasantness. But what if entrepreneurial expectations do not change instantaneously, and the reduction in current demand for output corresponding to reduced spending on consumption causes entrepreneurs to reduce, not increase, their demand for capital equipment? If, after the shift in time preference, total spending actually falls, there may be a chain of disappointments in expectations, and a series of defaults on IOUs, culminating in a financial crisis. Pessimism may indeed be self-fulfilling. But Athreya has a just-so story to tell, and he seems satisfied that there is no other story to be told. Others may not be so easily satisfied, especially when his just-so story depends on a) the rational expectations assumption that many smart people have a hard time accepting as even remotely plausible, and b) the assumption that no trading takes place at disequilibrium prices. Athreya continues:

Thus, at least within the context of models in which households and firms are not routinely incorrect about the future, multiple self-fulfilling outcomes require particular features of the production side of the economy to prevail.

Actually what Athreya should have said is: “within the context of models in which households and firms always predict future prices correctly.”

In chapter 6, Athreya discusses how modern macroeconomics can and has contributed to the understanding of the financial crisis of 2007-08 and the subsequent downturn and anemic recovery. There is a lot of very useful information and discussion of various issues, especially in connection with banking and financial markets. But further comment at this point would be largely repetitive.

Anyway, despite my obvious and strong disagreements with much of what I read, I learned a lot from Athreya’s well-written and stimulating book, and I actually enjoyed reading it.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
