Archive for the 'New Keynesians' Category

Roger Farmer’s Prosperity for All

I have just read a review copy of Roger Farmer’s new book Prosperity for All, which distills many of Roger’s very interesting ideas into a form which, though readable, is still challenging — at least, it was for me. There is a lot that I like and agree with in Roger’s book, and the fact that he is a UCLA economist, though he came to UCLA after my departure, is certainly a point in his favor. So I will begin by mentioning some of the things that I really liked about Roger’s book.

What I like most is that he recognizes that beliefs are fundamental, which is almost exactly what I meant when I wrote this post (“Expectations Are Fundamental”) five years ago. The point I wanted to make is that the idea that there is some fundamental existential reality that economic agents try to perceive — and, if they are rational, will perceive — is a gross and misleading oversimplification, because expectations themselves are part of reality. In a world in which expectations are fundamental, the Keynesian beauty-contest theory of expectations and stock prices (described in chapter 12 of The General Theory) is not absurd, as it is widely considered to be by believers in the efficient market hypothesis. The almost universal unprofitability of simple trading rules or algorithms is not inconsistent with a market process in which the causality between prices and expectations runs in both directions, in which case anticipating expectations is no less rational than anticipating future cash flows.

One of the treats of reading this book is Farmer’s recollections of his time as a graduate student at Penn in the early 1980s when David Cass, Karl Shell, and Costas Azariadis were developing their theory of sunspot equilibrium in which expectations are self-fulfilling, an idea skillfully deployed by Roger to revise the basic New Keynesian model and reorient it along a very different path from the standard New Keynesian one. I am sympathetic to that reorientation, and the main reason for it is that Roger rejects the idea that there is a unique equilibrium to which the economy automatically reverts on its own, albeit somewhat more slowly than if speeded along by the appropriate monetary policy. The notion that there is a unique equilibrium to which the economy automatically reverts is an assumption with no basis in theory or experience. The most that the natural-rate hypothesis can tell us is that if an economy is operating at its natural rate of unemployment, monetary expansion cannot permanently reduce the rate of unemployment below that natural rate. Eventually — once economic agents come to expect that the monetary expansion and the correspondingly higher rate of inflation will be maintained indefinitely — the unemployment rate must revert to the natural rate. But the natural-rate hypothesis does not tell us that monetary expansion cannot reduce unemployment when the actual unemployment rate exceeds the natural rate, although it is often misinterpreted as making that assertion.

In his book, Roger takes the anti-natural-rate argument a step further, asserting that the natural rate of unemployment is not unique. There is actually a range of unemployment rates at which the economy can permanently remain; which of those alternative natural rates the economy winds up at depends on the expectations held by the public about future nominal income. The higher expected future income, the greater consumption spending and, consequently, the greater employment. Things are a bit more complicated than I have just described them, because Roger also believes that consumption depends not on current income but on wealth. However, in the very simplified model with which Roger operates, wealth depends on expectations about future income. The more optimistic people are about their income-earning opportunities, the higher asset values; the higher asset values, the wealthier the public, and the greater consumption spending. The relationship between current income and expected future income is what Roger calls the belief function.

Thus, Roger juxtaposes a simple New Keynesian model against his own monetary model. The New Keynesian model consists of 1) an investment-equals-saving equilibrium condition (IS curve) describing the optimal consumption/savings decision of the representative individual as a locus of combinations of expected real interest rates and real income, based on the assumed rate of time preference of the representative individual, expected future income, and expected future inflation; 2) a Taylor rule describing how the monetary authority sets its nominal interest rate as a function of inflation, the output gap, and its target (natural) nominal interest rate; 3) a short-run Phillips Curve that expresses actual inflation as a function of expected future inflation and the output gap. The three basic equations allow the three endogenous variables — inflation, real income, and the nominal rate of interest — to be determined. The IS curve represents equilibrium combinations of real income and real interest rates; the Taylor rule determines a nominal interest rate; given the nominal rate determined by the Taylor rule, the IS curve can be redrawn to represent equilibrium combinations of real income and inflation. The intersection of the redrawn IS curve with the Phillips curve determines the inflation rate and real income.
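In symbols, and in a generic textbook rendering (my notation, not necessarily Roger’s), the three equations might be written as:

\[
\begin{aligned}
\text{IS:}\qquad & x_t = E_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - E_t \pi_{t+1} - r^n\right) \\
\text{Taylor rule:}\qquad & i_t = r^n + \phi_\pi \pi_t + \phi_x x_t \\
\text{Phillips curve:}\qquad & \pi_t = \beta E_t \pi_{t+1} + \kappa x_t
\end{aligned}
\]

where \(x_t\) is the output gap, \(i_t\) the nominal interest rate, \(\pi_t\) inflation, \(r^n\) the natural (target) rate implied by the rate of time preference, and \(E_t\) the expectation formed in the current period.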

Roger doesn’t like the New Keynesian model because he rejects the notion of a unique equilibrium with a unique natural rate of unemployment, a notion that I have argued is theoretically unfounded. Roger dismisses the natural-rate hypothesis on empirical grounds, the frequent observations of persistently high rates of unemployment being inconsistent with the idea that there are economic forces causing unemployment to revert back to the natural rate. Two responses to this empirical anomaly are possible: 1) the natural rate of unemployment is unstable, so that the observed persistence of high unemployment reflects increases in the underlying but unobservable natural rate of unemployment; 2) the adverse economic shocks that produce high unemployment are persistent, with unemployment returning to a natural level only after the adverse shocks have ceased. In the absence of independent empirical tests of the hypothesis that the natural rate of unemployment has changed, or of the hypothesis that adverse shocks causing unemployment to rise above the natural rate are persistent, neither of these responses is plausible, much less persuasive.

So Roger recasts the basic New Keynesian model in a very different form. While maintaining the Taylor Rule, he rewrites the IS curve so that it describes a relationship between the nominal interest rate and the expected growth of nominal income, given the assumed rate of time preference, and in place of the Phillips Curve he substitutes his belief function, which says that the expected growth of nominal income in the next period equals the current rate of growth. The IS curve and the Taylor Rule provide two steady-state equations in three variables — nominal-income growth, the nominal interest rate, and inflation — so that the rate of inflation is left undetermined. Once the belief function specifies the expected rate of growth of nominal income, the nominal interest rate consistent with that expected growth is determined. And since the belief function says only that expected nominal-income growth equals the current rate of nominal-income growth, any change in nominal-income growth persists into the next period.
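Schematically, and again in my notation rather than Roger’s, the recast system looks something like this:

\[
\begin{aligned}
\text{IS:}\qquad & i_t = \rho + E_t g_{t+1} \\
\text{Taylor rule:}\qquad & i_t = \bar{\imath} + \phi_\pi \pi_t \\
\text{Belief function:}\qquad & E_t g_{t+1} = g_t
\end{aligned}
\]

where \(g_t\) is the growth rate of nominal income and \(\rho\) the rate of time preference. In a steady state the first two equations relate the three variables \(g\), \(i\), and \(\pi\), leaving inflation undetermined until the belief function pins down expected nominal-income growth; and because expected growth simply equals current growth, a shock to nominal-income growth is never unwound.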

At any rate, Roger’s policy proposal is not to change the interest-rate rule followed by the monetary authority, but to adopt a rule whereby the monetary authority influences the public’s expectations of nominal-income growth. The greater expected nominal-income growth, the greater wealth, and the greater consumption expenditures. The greater consumption expenditures, the greater income and employment. Expectations are self-fulfilling. Roger therefore advocates a policy by which the government buys and sells a stock-market index fund in order to keep overall wealth at a level that will generate enough consumption expenditures to support maximum sustainable employment.

This is a quick summary of some of the main substantive arguments that Roger makes in his book, and I hope that I have not misrepresented them too badly. As I have already said, I very much sympathize with his criticism of the New Keynesian model, and I agree with nearly all of his criticisms. I also agree wholeheartedly with his emphasis on the importance of expectations and on the self-fulfilling character of expectations. Nevertheless, I have to admit that I have trouble taking seriously Roger’s own monetary model and his policy proposal for stabilizing a broad index of equity prices over time. And the reason I am so skeptical about Roger’s model and his policy recommendation is that his model, which does after all bear at least a family resemblance to the simple New Keynesian model, strikes me as being far too simplified to be credible as a representation of a real-world economy. His model, like the New Keynesian model, is an intertemporal model with neither money nor real capital, and the idea that there is an interest rate in such a model is, though theoretically defensible, not very plausible. There may be a sequence of periods in such a model in which some form of intertemporal exchange takes place, but without explicitly introducing at least one good that is carried over from period to period, the extent of intertemporal trading is limited and devoid of the arbitrage constraints inherent in a system in which real assets are held from one period to the next.

So I am very skeptical of any macroeconomic model lacking a market for real assets, in which the interest rate interacts with asset values and expected future prices in such a way that the existing stock of durable assets is willingly held over time. The simple New Keynesian model, in which there is no money and no durable assets, but simply bonds whose existence is difficult to rationalize in the absence of money or durable assets, does not strike me as a sound foundation for making macroeconomic policy; an interest rate may exist in such a model, but the model remains woefully inadequate for policy analysis. And although Roger has certainly offered some interesting improvements on the simple New Keynesian model, I would not be willing to rely on his monetary model for the sweeping policy and institutional recommendations that he proposes, especially his proposal for stabilizing the long-run growth path of a broad index of stock prices.

This is an important point, so I will try to restate it within a wider context. Modern macroeconomics, of which Roger’s model is one of the more interesting examples, flatters itself by claiming to be grounded in the secure microfoundations of the Arrow-Debreu-McKenzie general equilibrium model. But the great achievement of the ADM model was to show the logical possibility of an equilibrium of the independently formulated, optimizing plans of an unlimited number of economic agents producing and trading an unlimited number of commodities over an unlimited number of time periods.

To prove the mutual consistency of such a decentralized decision-making process coordinated by a system of equilibrium prices was a remarkable intellectual achievement. Modern macroeconomics deceptively trades on the prestige of this achievement in claiming to be founded on the ADM general-equilibrium model; the claim is at best misleading, because modern macroeconomics collapses the multiplicity of goods, services, and assets into a single non-durable commodity, so that the only relevant plan the agents in the modern macromodel are called upon to make is a decision about how much to spend in the current period given a shared utility function and a shared production technology for the single output. In the process, all the hard work performed by the ADM general-equilibrium model in explaining how a system of competitive prices could achieve an equilibrium of the complex independent — but interdependent — intertemporal plans of a multitude of decision-makers is effectively discarded and disregarded.

This approach to macroeconomics is not microfounded, but its opposite. The approach relies on the assumption that all but a very small set of microeconomic issues are irrelevant to macroeconomics. Now it is legitimate for macroeconomics to disregard many microeconomic issues, but the assumption that there is continuous microeconomic coordination, apart from the handful of potential imperfections on which modern macroeconomics chooses to focus, is not legitimate. In particular, to collapse the entire economy into a single output implies that all the separate markets encompassed by an actual economy are in equilibrium and that the equilibrium is maintained over time. For that equilibrium to be maintained over time, agents must formulate correct expectations of all the individual relative prices that prevail in those markets over time. The ADM model sidestepped that expectational problem by assuming that a full set of current and forward markets exists in the initial period and that all the agents participating in the economy are present and endowed with wealth enabling them to trade in the initial period. Under those rather demanding assumptions, if an equilibrium price vector covering all current and future markets is arrived at, the optimizing agents will formulate a set of mutually consistent optimal plans conditional on that vector of equilibrium prices, so that all the optimal plans can and will be carried out as time happily unfolds for as long as the agents continue in their blissful existence.

However, without a complete set of current and forward markets, achieving the full equilibrium of the ADM model requires that agents formulate consistent expectations of the future prices that will be realized only over the course of time, not in the initial period. Roy Radner, who extended the ADM model to accommodate the case of incomplete markets, called such a sequential equilibrium an equilibrium of plans, prices, and expectations. The sequential equilibrium described by Radner has the property that expectations are rational, but the assumption of rational expectations for all future prices over a sequence of future time periods is so unbelievably outlandish as an approximation to reality — sort of like the assumption that it could be 76 degrees Fahrenheit in Washington DC in February — that to build that assumption into a macroeconomic model is an absurdity of mind-boggling proportions. But that is precisely what modern macroeconomics, in both its Real Business Cycle and New Keynesian incarnations, has done.

If instead of the sequential equilibrium of plans, prices, and expectations, one tries to model an economy in which the price expectations of agents can be inconsistent, while prices adjust within any period to clear markets – the method of temporary equilibrium first described by Hicks in Value and Capital – one can begin to develop a richer conception of how a macroeconomic system can be subject to the financial disturbances and financial crises to which modern macroeconomies are occasionally, if not routinely, vulnerable. But that would require a reorientation, if not a repudiation, of the path on which macroeconomics has been resolutely marching for nigh on forty years. In his 1984 paper “Consistent Temporary Equilibrium,” published in a volume edited by J. P. Fitoussi, C. J. Bliss made a start on developing such a macroeconomic theory.

There are few economists better equipped than Roger Farmer to lead macroeconomics onto a new and more productive path. He has not done so in this book, but I am hoping that, in his next one, he will.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just-published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to endure.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typical of the charming arrogance of the New Classicals. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
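The crucial piece of that extension is nothing more elaborate than the Fisher relation connecting the two interest-rate concepts:

\[
r_t = i_t - E_t \pi_{t+1}
\]

where \(i_t\) is the nominal rate, \(E_t \pi_{t+1}\) the rate of inflation expected between the current and the future period, and \(r_t\) the implied real rate.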

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and the New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question-begging, a.k.a. circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
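To see the role that the stability assumption plays, consider the most elementary comparative-statics exercise (my illustration, not Samuelson’s own example): a single market with demand \(D(p, \alpha)\), shifted by a parameter \(\alpha\), and supply \(S(p)\). Differentiating the equilibrium condition gives

\[
D(p, \alpha) = S(p) \quad\Longrightarrow\quad \frac{dp}{d\alpha} = \frac{D_\alpha}{S'(p) - D_p}.
\]

The sign of \(dp/d\alpha\) is determinate only if \(S'(p) - D_p > 0\), which is precisely the condition for the equilibrium to be locally stable under Walrasian price adjustment. That is Samuelson’s correspondence principle: the “meaningful theorem” is parasitic on the stability assumption.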

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which they can only guess. Their budget constraints are thus not real but conjectural, based on their expectations of future prices, and the optimizing Euler equations are conjectural as well, subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations, is a huge stretch. The intertemporal budget constraint plays a completely different role in macroeconomics from the role it plays in microeconomics.
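For reference, the textbook construction at issue, in its simplest two-period form (a sketch in my notation), is:

\[
c_1 + \frac{c_2}{1+r} \le y_1 + \frac{y_2^e}{1+r}, \qquad u'(c_1) = \beta (1+r)\, u'(c_2),
\]

where \(y_2^e\) is expected second-period income and \(\beta\) the discount factor. Every second-period term is a conjecture, so the “constraint” binds only against the consumer’s current beliefs, and both it and the Euler condition must be rewritten whenever expectations change.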

If I expect that the demand for my services will be such that my disposable income next year will be $500k, my consumption choices will be very different from what they would be if I were expecting a disposable income of only $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational-expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational-expectations assumption does not hold, because it is when individual expectations are very different and very unstable – like now, for instance – that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Making Sense of the Phillips Curve

In a comment on my previous post about the supposedly vertical long-run Phillips Curve, Richard Lipsey mentioned a paper he presented a couple of years ago at the History of Economics Society Meeting: “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” In a subsequent comment, Richard also posted the abstract to his paper. The paper provides a succinct yet fascinating overview of the evolution of macroeconomists’ interpretations of the Phillips curve since Phillips published his paper almost 60 years ago.

The two key points that I take away from Richard’s discussion are the following. 1) A key microeconomic assumption underlying the Keynesian model is that over a broad range of outputs, most firms are operating under conditions of constant short-run marginal cost, because in the short run firms keep the capital-labor ratio fixed, varying their usage of capital along with the amount of labor utilized. With a fixed capital-labor ratio, marginal cost is flat. In the usual textbook version, short-run marginal cost is rising because of a declining capital-labor ratio, requiring an increasing number of workers to wring out successive equal increments of output from a fixed amount of capital. Given flat marginal cost, firms respond to changes in demand by varying output but not price until they hit a capacity bottleneck.
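One way to see the cost side of the argument (a sketch of the implied arithmetic, not Lipsey’s own formalism): marginal cost is the wage divided by the marginal product of labor,

\[
MC = \frac{w}{MP_L},
\]

so if capital usage is varied in step with labor, holding the capital-labor ratio fixed, \(MP_L\) stays roughly constant and marginal cost is flat; in the textbook case, with capital fixed, \(MP_L\) falls as employment rises and marginal cost slopes upward.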

The second point, a straightforward implication of the first, is that there are multiple equilibria for such an economy, each equilibrium corresponding to a different level of total demand, with a price level more or less determined by costs, at any rate until total output approaches the limits of its capacity.

Thus, early on, the Phillips Curve was thought to be relatively flat, with little effect on inflation unless unemployment was forced down below some very low level. The key question was how far unemployment could be pushed down before significant inflationary pressure would begin to emerge. Doctrinaire Keynesians advocated driving unemployment down as low as possible, while skeptics argued that significant inflationary pressure would begin to emerge even at higher rates of unemployment, so that a prudent policy would be to operate at a level of unemployment sufficiently high to keep inflationary pressures in check.

Lipsey allows that, in the 1960s, the view that the Phillips Curve presented a menu of alternative combinations of unemployment and inflation from which policymakers could choose did take hold, acknowledging that he himself expressed such a view in a 1965 paper (“Structural and Deficient Demand Unemployment Reconsidered” in Employment Policy and the Labor Market edited by Arthur Ross), “inflationary points on the Phillips Curve represent[ing] disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion.” It was this version of the Phillips Curve that was effectively attacked by Friedman and Phelps, who replaced it with a version in which the equilibrium rate of unemployment is uniquely determined by real factors, the natural rate of unemployment, any deviation from the natural rate resulting in a series of adjustments in inflation and expected inflation that would restore the natural rate of unemployment.

Sometime in the 1960s the Phillips curve came to be thought of as providing a stable trade-off between inflation and unemployment. When Lipsey did adopt this trade-off version, as for example Lipsey (1965), inflationary points on the Phillips curve represented disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion. In the new Classical interpretation that began with Edmund Phelps (1967), Milton Friedman (1968) and Lucas and Rapping (1969), each point was an equilibrium point because demands and supplies of agents were shifted from their full-information locations when they misinterpreted the price signals. There was, however, only one full-information equilibrium of income, Y*, and unemployment, U*.

The Friedman-Phelps argument was made as inflation rose significantly in the late 1960s, and the mild 1969-70 recession reduced inflation by only a smidgen, setting the stage for Nixon’s imposition of his disastrous wage and price controls in 1971, combined with a loosening of monetary policy by a compliant Arthur Burns as part of Nixon’s 1972 reelection strategy. When the hangover from the 1972 monetary binge was combined with a quadrupling of oil prices by OPEC in late 1973, the result was a simultaneous increase in inflation and unemployment – stagflation – a combination widely perceived as a decisive refutation of Keynesian theory. To cope with that theoretical conundrum, the Keynesian model was expanded to incorporate the determination of the price level by deriving an aggregate supply and aggregate demand curve in price-level/output space.

Lipsey acknowledges a crucial misstep in constructing the Aggregate Demand/Aggregate Supply framework: assuming a unique macroeconomic equilibrium, an assumption that implied the existence of a unique natural rate of unemployment. Keynesians won the battle, providing a perfectly respectable theoretical explanation for stagflation, but, in doing so, they lost the war to Friedman, paving the way for the malign ascendancy of New Classical economics, with which New Keynesian economics became an effective collaborator. Whether the collaboration was willing or unwilling is unclear and unimportant; by assuming a unique equilibrium, New Keynesians gave up the game.

I was so intent on showing that this AD-AS construction provided a simple Keynesian explanation of stagflation, contrary to the accusation of the New Classical economists that stagflation provided a conclusive refutation of Keynesian economics, that I paid too little attention to the enormous importance of the new assumption introduced into Keynesian models. The addition of an expectations-augmented Phillips curve, negatively sloped in the short run but vertical in the long run, produced a unique macro equilibrium that would be reached whatever macroeconomic policy was adopted.

Lipsey does not want to go back to the old Keynesian paradigm; he prefers a third approach that can be traced back to, among others, Joseph Schumpeter in which the economy is viewed “as constantly evolving under the impact of endogenously generated technological change.” Such technological change can be vaguely foreseen, but also gives rise to genuine surprises. The course of economic development is not predetermined, but path-dependent. History matters.

I suggest that the explanation of the current behaviour of inflation, output and unemployment in modern industrial economies is provided not by any EWD [equilibrium with deviations] theory but by evolutionary theories. These build on the obvious observation that technological change is continual in modern economies (decade by decade at least since 1760), but uneven (tending to come in spurts), and path dependent (because, among other reasons, knowledge is cumulative with one advance enabling another). These changes are generated endogenously by private-sector, profit-seeking agents competing in terms of new products, new processes and new forms of organisation, and by public sector activities in such places as universities and government research laboratories. They continually alter the structure of the economy, causing waves of serially correlated investment expenditure that are a major cause of cycles, as well as driving the long-term growth that continually transforms our economic, social and political structures. In their important book As Time Goes By, Freeman and Louça (2001) trace these processes as they have operated since the beginnings of the First Industrial Revolution.

A critical distinction in all such theories is between risk, which is easily handled in neoclassical economics, and uncertainty, which is largely ignored in it except to pay it lip service. In risky situations, agents with the same objective function and identical knowledge will chose the same alternative: the one that maximizes the expected value of their profits or utility. This gives rise to unique predictable behaviour of agents acting under specified conditions. In contrast in uncertain situations, two identically situated and motivated agents can, and observably do, choose different alternatives — as for example when different firms all looking for the same technological breakthrough chose different lines of R&D — and there is no way to tell in advance of knowing the results which is the better choice. Importantly, agents typically make R&D decisions under conditions of genuine uncertainty. No one knows if a direction of technological investigation will go up a blind alley or open onto a rich field of applications until funds are spend investigating the route. Sometimes trivial expenses produce results of great value while major expenses produce nothing of value. Since there is no way to decide in advance which of two alternative actions with respect to invention or innovation is the best one until the results are known, there is no unique line of behaviour that maximises agents’ expected profits. Thus agents are better understood as groping into an uncertain future in a purposeful, profit- or utility-seeking manner, rather than as maximizing their profits or utility.

This is certainly the right way to think about how economies evolve over time, but I would just add that even if one stays within the more restricted framework of Walrasian general equilibrium, there is simply no persuasive theoretical reason to assume that there is a unique equilibrium or that an economy will necessarily arrive at that equilibrium no matter how long we wait. I have discussed this point several times before, most recently here. The assumption that there is a natural rate of unemployment “ground out,” as Milton Friedman put it so awkwardly, “by the Walrasian system of general equilibrium equations” simply lacks any theoretical foundation. Even in a static model in which knowledge and technology were not evolving, the natural rate of unemployment is a will-o’-the-wisp.

Because there is no unique static equilibrium in the evolutionary world in which history matters, no adjustment mechanism is required to maintain it. Instead, the constantly changing economy can exist over a wide range of income, employment and unemployment values, without behaving as it would if its inflation rate were determined by an expectations-augmented Phillips curve or any similar construct centred on unique general equilibrium values of Y and U. Thus there is no stable long-run vertical Phillips curve or aggregate supply curve.

Instead of the Phillips curve there is a band as shown in Figure 4 [See below]. Its midpoint is at the expected rate of inflation. If the central bank has a credible inflation target that it sticks to, the expected rate will be that target rate, shown as πe in the figure. The actual rate will vary around the expected rate depending on a number of influences such as changes in productivity, the price of oil and food, but not significantly on variations in U or Y. At either end of this band, there may be something closer to a conventional Phillips curve with prices and wages falling in the face of a major depression and rising in the face of a major boom financed by monetary expansion. Also, the whole band will be shifted by anything that changes the expected rate of inflation.

[Figure 4: Lipsey’s Phillips band, centered on the expected rate of inflation πe]

Lipsey concludes as follows:

So we seem to have gone full circle from early Keynesian view in which there was no unique level of income to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of national income, and finally back to the early non-unique Keynesian view in which policy makers had an option as to the average pressure of aggregate demand at which the economy could be operated.

“Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.” (Lipsey, “The Phillips Curve,” In Famous Figures and Diagrams in Economics, edited by Mark Blaug and Peter Lloyd, p. 389)

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:

[Chart: Dornbusch-Fischer simulation of inflation (top) and unemployment (bottom) following a reduction in money growth]

Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:

[Chart: actual US inflation and unemployment during the Volcker disinflation]

A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.
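Purely by way of illustration, here is a minimal simulation sketch (mine, not Dornbusch and Fischer’s, with arbitrary, assumed parameter values) of the adaptive-expectations mechanism Krugman is describing: a cut in money growth raises unemployment temporarily while inflation and expected inflation ratchet down, after which unemployment drifts back to its starting level at a permanently lower inflation rate.

U_STAR = 0.06   # assumed natural rate of unemployment
A = 0.5         # Phillips-curve response of inflation to the unemployment gap
PHI = 1.0       # demand response of unemployment to real money growth
LAM = 0.5       # speed at which expectations adapt to realized inflation

def simulate(periods=25, mu_high=0.10, mu_low=0.04, t_break=5):
    """Money growth is cut from mu_high to mu_low at t_break."""
    u, pi_e = U_STAR, mu_high   # start in a high-inflation steady state
    path = []
    for t in range(periods):
        mu = mu_high if t < t_break else mu_low
        # Solve the two within-period equations jointly:
        #   u  = u_prev - PHI * (mu - pi)        (demand side)
        #   pi = pi_e - A * (u - U_STAR)         (Phillips curve)
        u = (u - PHI * (mu - pi_e) + PHI * A * U_STAR) / (1 + PHI * A)
        pi = pi_e - A * (u - U_STAR)
        pi_e += LAM * (pi - pi_e)   # adaptive expectations
        path.append((t, mu, pi, u))
    return path

for t, mu, pi, u in simulate():
    print(f"t={t:2d}  money growth={mu:.2%}  inflation={pi:.2%}  unemployment={u:.2%}")

The simulated path has the qualitative shape of the textbook charts: a hump in unemployment, a permanent fall in inflation, and some damped oscillation as expectations catch up, dynamics not much cleaner than the ones Krugman describes.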

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics, creating a precautionary demand for money whenever monetary growth overshot the target range: the anticipation of an imminent tightening caused people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, with growth rates over the target range being associated with falling stock and commodity prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock-market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

John Cochrane, Meet Richard Lipsey and Kenneth Carlaw

Paul Krugman wrote an uncharacteristically positive post today about John Cochrane’s latest post, in which Cochrane dialed it down a bit after writing two rather heated posts (here and here) attacking Alan Blinder for a recent piece in the New York Review of Books in which Blinder dismissively quoted Cochrane’s own dismissive remark about Keynesian economics being fairy tales that haven’t been taught to graduate students since the 1960s. I don’t want to get into that fracas, but I was amused to read the following paragraphs at the end of Cochrane’s second post in the current series.

Thus, if you read Krugman’s columns, you will see him occasionally crowing about how Keynesian economics won, and how the disciples of Stan Fisher at MIT have spread out to run the world. He’s right. Then you see him complaining about how nobody in academia understands Keynesian economics. He’s right again.

Perhaps academic research ran off the rails for 40 years producing nothing of value. Social sciences can do that. Perhaps our policy makers are stuck with simple stories they learned as undergraduates; and, as has happened countless times before, new ideas will percolate up when the generation trained in the 1980s makes their way to the top of policy circles.

I think we can agree on something. If one wants to write about “what’s wrong with economics,” such a huge divide between academic research ideas and the ideas running our policy establishment is not a good situation.

The right way to address this is with models — written down, objective models, not pundit prognostications — and data. What accounts, quantitatively, for our experience? I see old-fashioned Keynesianism losing because, having dramatically failed that test once, its advocates are unwilling to do so again, preferring a campaign of personal attack in the popular press. Models confront data in the pages of the AER, the JPE, the QJE, and Econometrica. If old-time Keynesianism really does account for the data, write it down and let’s see.

So Cochrane wants to take this bickering out of the realm of punditry and put the conflicting models to an objective test of how well they perform against the data. Sounds good to me, but I can’t help wondering whether Cochrane means to attribute the academic ascendancy of RBC/New Classical models to their having empirically outperformed competing models. If so, I am not aware that anyone else has made that claim, including Kartik Athreya, who wrote the book on the subject. (Here’s my take on the book.) Again, just wondering – I am not a macroeconometrician – but is there any study showing that RBC or DSGE models outperform old-fashioned Keynesian models in explaining macro time-series data?

But I am aware of, and have previously written about, a paper by Kenneth Carlaw and Richard Lipsey (“Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy”) in which they show that time-series data for six OECD countries provide no evidence of the stylized facts about inflation and unemployment implied by RBC and New Keynesian theory. Here is the abstract from the Carlaw-Lipsey paper.

The evolutionary vision in which history matters is of an evolving economy driven by bursts of technological change initiated by agents facing uncertainty and producing long term, path-dependent growth and shorter-term, non-random investment cycles. The alternative vision in which history does not matter is of a stationary, ergodic process driven by rational agents facing risk and producing stable trend growth and shorter term cycles caused by random disturbances. We use Carlaw and Lipsey’s simulation model of non-stationary, sustained growth driven by endogenous, path-dependent technological change under uncertainty to generate artificial macro data. We match these data to the New Classical stylized growth facts. The raw simulation data pass standard tests for trend and difference stationarity, exhibiting unit roots and cointegrating processes of order one. Thus, contrary to current belief, these tests do not establish that the real data are generated by a stationary process. Real data are then used to estimate time-varying NAIRU’s for six OECD countries. The estimates are shown to be highly sensitive to the time period over which they are made. They also fail to show any relation between the unemployment gap (actual unemployment minus estimated NAIRU) and the acceleration of inflation. Thus there is no tendency for inflation to behave as required by the New Keynesian and earlier New Classical theory. We conclude by rejecting the existence of a well-defined short-run, negatively sloped Phillips curve, a NAIRU, a unique general equilibrium, short and long-run, a vertical long-run Phillips curve, and the long-run neutrality of money.
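
To see why passing such tests proves so little, consider a minimal sketch (my illustration, not Carlaw and Lipsey’s code; it assumes the numpy and statsmodels libraries). A random walk – the simplest path-dependent, non-stationary process, standing in here for their much richer simulation model – “exhibits a unit root” by the standard augmented Dickey-Fuller criterion, while its first differences test as stationary:

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)

# A random walk: the cumulative sum of i.i.d. shocks. Path-dependent
# and non-stationary by construction. Carlaw and Lipsey's simulation
# model is far richer; this is only a stand-in for the testing logic.
random_walk = np.cumsum(rng.normal(size=500))

# ADF test: the null hypothesis is a unit root. A large p-value means
# the unit-root null is not rejected -- the series "exhibits a unit
# root," as the abstract says of the artificial data.
stat, pvalue, *_ = adfuller(random_walk)
print(f"levels:      ADF = {stat:.2f}, p = {pvalue:.3f}")

# First differences of a random walk are white noise, so here the test
# rejects the unit-root null: the textbook signature of integration of
# order one (difference stationarity).
stat_d, pvalue_d, *_ = adfuller(np.diff(random_walk))
print(f"differences: ADF = {stat_d:.2f}, p = {pvalue_d:.3f}")
```

The point of the exercise is exactly the abstract’s: real macro data passing the same battery of tests does not establish that those data were generated by a stationary equilibrium process, because non-stationary, path-dependent processes pass them too.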

Cochrane, like other academic macroeconomists with an RBC/New Classical orientation, seems inordinately self-satisfied with the current state of modern macroeconomics, but curiously sensitive to, and defensive about, criticism from the unwashed masses. Rather than weigh in again with my own criticisms, let me close by quoting another abstract – this one from a paper (“Complexity Economics: A Different Framework for Economic Thought”) by Brian Arthur, certainly one of the smartest, and most technically capable, economists around.

This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements.

Complexity economics sees the economy as in motion, perpetually “computing” itself — perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.

HT: Mike Norman

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started with a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans, derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t, because the New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive, the progressiveness of the research program requiring no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about the macroeconomy, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. I say that because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking, have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts, notably in the 1970 Phelps volume, to provide microfoundations changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporates. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
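
The arithmetic here is just the Fisher relation: the ex ante real rate equals the nominal rate minus expected inflation. Stated plainly:

```latex
r = i - \pi^{e}, \qquad i = 0,\; \pi^{e} \approx 2\% \;\Longrightarrow\; r \approx -2\%
```

At the zero lower bound the nominal rate i cannot fall, so the only way to push the real rate r further below zero is to raise expected inflation π^e, which is why everything in Cochrane’s diagnosis turns on treating 2% as an immovable ceiling.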

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed they do, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka “secular stagnation”] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that it will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders: what is the scientific basis for not funding those projects?
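
The present-value claim is elementary arithmetic. For a project with upfront cost C_0 and a stream of future benefits B_t (the symbols are mine, purely illustrative):

```latex
NPV = -C_0 + \sum_{t=1}^{T} \frac{B_t}{(1+r)^{t}}
```

With a negative real borrowing rate r, each discount factor 1/(1+r)^t exceeds one, so future benefits count for more than their face value, and even a modest benefit stream can make the net present value positive.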

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was let us say in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?

What Does “Keynesian” Mean?

Last week Simon Wren-Lewis wrote a really interesting post on his blog trying to find the right labels with which to identify macroeconomists. Simon, rather disarmingly, starts by admitting the ultimate futility of assigning people labels; reality is just too complicated to conform to the labels that we invent to help ourselves make sense of reality. A good label can provide us with a handle with which to gain a better grasp on a messy set of observations, but it is not the reality. And if you come up with one label, I may counter with a different one. Who’s to say which label is better?

At any rate, as I read through Simon’s post I found myself alternately nodding my head in agreement and shaking my head in disagreement. So staying in the spirit of fun in which Simon wrote his post, I will provide a commentary on his labels and other pronouncements. If the comments are weighted on the side of disagreement, well, that’s what makes blogging fun, n’est-ce pas?

Simon divides academic researchers into two groups (mainstream and heterodox) and macroeconomic policy into two approaches (Keynesian and anti-Keynesian). He then offers the following comment on the meaning of the label Keynesian.

Just think about the label Keynesian. Any sensible definition would involve the words sticky prices and aggregate demand. Yet there are still some economists (generally not academics) who think Keynesian means believing fiscal rather than monetary policy should be used to stabilise demand. Fifty years ago maybe, but no longer. Even worse are non-economists who think being a Keynesian means believing in market imperfections, government intervention in general and a mixed economy. (If you do not believe this happens, look at the definition in Wikipedia.)

Well, as I pointed out in a recent post, there is nothing peculiarly Keynesian about the assumption of sticky prices, especially not as a necessary condition for an output gap and involuntary unemployment. So Simon is going to have to work harder to justify his distinction between Keynesian and anti-Keynesian. In a comment on Simon’s blog, Nick Rowe pointed out just this problem, asking in particular why Simon could not substitute a Monetarist/anti-Monetarist dichotomy for the Keynesian/anti-Keynesian one.

The story gets more complicated in Simon’s next paragraph in which he describes his dichotomy of academic research into mainstream and heterodox.

Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something. Equally an economist like John Cochrane can and does engage in meaningful discussions of New Keynesian theory (pdf).

In other words, the range of acceptable macroeconomic models has been drastically narrowed. Unless it is microfounded in a dynamic stochastic general equilibrium model, a model does not qualify as “mainstream.” This notion of microfoundation is certainly not what Edmund Phelps meant by “microeconomic foundations” when he edited his famous volume Microeconomic Foundations of Employment and Inflation Theory, which contained, among others, Alchian’s classic paper on search costs and unemployment and a paper by the then not so well-known Robert Lucas and his early collaborator Leonard Rapping. Nevertheless, in the current consensus, it is apparently the New Classicals that determine what kind of model is acceptable, while New Keynesians are allowed to make whatever adjustments, mainly sticky wages, they need to derive Keynesian policy recommendations. Anyone who doesn’t go along with this bargain is excluded from the mainstream. Simon may not be happy with this state of affairs, but he seems to have made peace with it without undue discomfort.

Now many mainstream macroeconomists, myself included, can be pretty critical of the limitations that this programme can place on economic thinking, particularly if it is taken too literally by microfoundations purists. But like it or not, that is how most macro research is done nowadays in the mainstream, and I see no sign of this changing anytime soon. (Paul Krugman discusses some reasons why here.) My own view is that I would like to see more tolerance and a greater variety of modelling approaches, but a pragmatic microfoundations macro will and should remain the major academic research paradigm.

Thus, within the mainstream, there is no basic difference in how to create a macroeconomic model. The difference is just in how to tweak the model in order to derive the desired policy implication.

When it comes to macroeconomic policy, and keeping to the different language idea, the only significant division I see is between the mainstream macro practiced by most economists, including those in most central banks, and anti-Keynesians. By anti-Keynesian I mean those who deny the potential for aggregate demand to influence output and unemployment in the short term.

So, even though New Keynesians have learned how to speak the language of the New Classicals, they can console themselves with retaining the upper hand in policy discussions. Which is why, in policy terms, Simon chooses a label that is at least suggestive of a certain Keynesian primacy, the other side being defined in terms of its opposition to Keynesian policy. Half apologetically, Simon then asks: “Why do I use the term anti-Keynesian rather than, say, New Classical?” After all, it’s the New Classical model that’s being tweaked. Simon responds:

Partly because New Keynesian economics essentially just augments New Classical macroeconomics with sticky prices. But also because as far as I can see what holds anti-Keynesians together isn’t some coherent and realistic view of the world, but instead a dislike of what taking aggregate demand seriously implies.

This explanation really annoyed Steve Williamson who commented on Simon’s blog as follows:

Part of what defines a Keynesian (new or old), is that a Keynesian thinks that his or her views are “mainstream,” and that the rest of macroeconomic thought is defined relative to what Keynesians think – Keynesians reside at the center of the universe, and everything else revolves around them.

Simon goes on to explain what he means by the incoherence of the anti-Keynesian view of the world, pointing out that the Pigou Effect, which supposedly invalidated Keynes’s argument that perfect wage and price flexibility would not eventually restore full employment to an economy operating at less than full employment, has itself been shown not to be valid. And then Simon invokes that old standby Say’s Law.

Second, the evidence that prices are not flexible is so overwhelming that you need something else to drive you to ignore this evidence. Or to put it another way, you need something pretty strong for politicians or economists to make the ‘schoolboy error’ that is Say’s Law, which is why I think the basis of the anti-Keynesian view is essentially ideological.

Here, I think, Simon is missing something important. It was a mistake on Keynes’s part to focus on Say’s Law as the epitome of everything wrong with “classical economics.” Actually Say’s Law is a description of what happens in an economy when trading takes place at disequilibrium prices. At disequilibrium prices, potential gains from trade are left on the table. Not only are they left on the table, but the effects can be cumulative, because the failure to supply implies a further failure to demand. The Keynesian spending multiplier is the other side of the coin of the supply-side contraction envisioned by Say. Even infinite wage and price flexibility may not help an economy in which a lot of trade is occurring at disequilibrium prices.
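
The cumulative mechanism can be put in the familiar textbook form of the multiplier (a standard identity, not anything peculiar to this argument). If c is the fraction of each round of income that would have been spent in the next round (the marginal propensity to consume), an initial shortfall ΔA in spending cumulates to:

```latex
\Delta Y = \Delta A\,(1 + c + c^{2} + \cdots) = \frac{\Delta A}{1 - c}
```

The same geometric series runs through Say’s side of the coin: each failure to supply withdraws income that would have financed a further round of demand.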

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities except a single price-quantity pair are already in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that market alone. That is why I have said that microeconomics rests on a macroeconomic foundation, and why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

If there is widespread unemployment, it may indeed be that wages are too high, and that a reduction in wages would restore equilibrium. But there is no general presumption that unemployment will be cured by a reduction in wages. Unemployment may be the result of a more general dysfunction in which all prices are away from their equilibrium levels, in which case no adjustment of the wage would solve the problem, so that there is no presumption that the current wage exceeds the full-equilibrium wage. This, by the way, seems to me to be nothing more than a straightforward implication of the Lipsey-Lancaster theory of second best.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
