Archive for the 'expectations' Category

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just-published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to endure.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics were to mimic how physicists do physics. This methodological scolding is typical of New Classical arrogance at its most charming. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in, and understanding of, economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.
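[To make Smith’s reference concrete: with per-period utility u(c), discount factor \beta, and real interest rate r, the standard consumption Euler equation is

u'(c_t) = \beta (1 + r_{t+1}) \, E_t[u'(c_{t+1})],

which equates the marginal utility of consuming one more unit today with the discounted expected marginal utility of saving that unit and consuming the proceeds tomorrow.]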

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline incoherent, being a single-period model with an interest rate – a concept that has no meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make the model serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
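(The extension does its work through the Fisher relation: once an expected future price level, and hence an expected inflation rate \pi^e, is in the model, the nominal rate i and the real rate r are linked by

1 + i = (1 + r)(1 + \pi^e), \qquad \text{or approximately} \qquad r \approx i - \pi^e,

so that real and nominal interest rates can be meaningfully distinguished.)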

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question-begging, aka circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
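To see the point in its barest form, write the equilibrium condition as F(x, \theta) = 0, with x the endogenous variable and \theta the parameter. The comparative-statics result is the implicit-function-theorem derivative

\frac{dx^*}{d\theta} = -\,\frac{\partial F/\partial \theta}{\partial F/\partial x},

which is defined only where \partial F/\partial x \neq 0, that is, in the neighborhood of a well-behaved equilibrium, and whose sign tells us anything about the world only if that equilibrium is locally stable, so that the system actually settles at the new x^* after the parameter change.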

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which consumers can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but entirely conjectural, resting on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations, is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
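Written out, the constraint invoked in these models is something like

\sum_{t=0}^{T} \frac{p^e_t c_t}{(1+i)^t} \;\le\; W_0 + \sum_{t=0}^{T} \frac{p^e_t y_t}{(1+i)^t},

where W_0 is initial wealth, i is the nominal interest rate, and every future price p^e_t – and, for most agents, future income y_t as well – is not a datum but a guess, so that the “constraint” shifts with every revision of expectations.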

If I expect that the demand for my services will be such that my disposable income next year will be $500k, my consumption choices will be very different from what they would be if I were expecting a disposable income of only $100k. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real, empirically relevant physics. He also expands on, and puts into a broader meta-theoretical context, my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

How not to Win Friends and Influence People

Last week David Beckworth and Ramesh Ponnuru wrote a very astute op-ed article in the New York Times explaining how the Fed was tightening its monetary policy in 2008 even as the economy was rapidly falling into recession. Although there are a couple of substantive points on which I might take issue with Beckworth and Ponnuru (more about that below), I think that on the whole they do a very good job of covering the important points about the 2008 financial crisis given that their article had less than 1000 words.

That said, Beckworth and Ponnuru made a really horrible – to me incomprehensible – blunder. For some reason, in the second paragraph of their piece, after having recounted in their first paragraph the conventional narrative of the 2008 financial crisis as an inevitable result of the housing bubble and the associated misconduct of the financial industry, Beckworth and Ponnuru cite Ted Cruz as the spokesman for the alternative view that they are about to present. They compound that blunder in a disclaimer identifying one of them – presumably Ponnuru – as a friend of Ted Cruz – for some recent pro-Cruz pronouncements from Ponnuru see here, here, and here – thereby transforming what might have been a piece of neutral policy analysis into a pro-Cruz campaign document. Aside from the unseemliness of turning Cruz into the poster boy for Market Monetarism and NGDP Level Targeting when, as recently as last October 28, Mr. Cruz was advocating resurrection of the gold standard while bashing the Fed for debasing the currency, a shout-out to Ted Cruz is obviously not a gesture calculated to engage readers (of the New York Times, for heaven’s sake) and predispose them to be receptive to the message they want to convey.

I suppose that this would be the appropriate spot for me to add a disclaimer of my own. I do not know, and am no friend of, Ted Cruz, but I was an FTC employee during Cruz’s brief tenure at the agency from July 2002 to December 2003. I can also affirm that I have absolutely no recollection of having ever seen or interacted with him while he was at the agency or since, and I have spoken to only one current FTC employee who does remember him.

Predictably, Beckworth and Ponnuru provoked a barrage of negative responses to their argument that the Fed was responsible for the 2008 financial crisis by not easing monetary policy for most of 2008 when, even before the financial crisis, the economy was sliding into a deep recession. Much of the criticism focuses on the ambiguous nature of the concepts of causation and responsibility when hardly any political or economic event is the direct result of just one cause. So to say that the Fed caused or was responsible for the 2008 financial crisis cannot possibly mean that the Fed single-handedly brought it about, and that, but for the Fed’s actions, no crisis would have occurred. That clearly was not the case; the Fed was operating in an environment in which not only its past actions but the actions of private parties and public and political institutions increased the vulnerability of the financial system. To say that the Fed’s actions of commission or omission “caused” the financial crisis in no way absolves all the other actors from responsibility for creating the conditions in which the Fed found itself and in which the Fed’s actions became crucial for the path that the economy actually followed.

Consider the Great Depression. I think it is totally reasonable to say that the Great Depression was the result of the combination of a succession of interest-rate increases by the Fed in 1928 and 1929 and of the insane policy adopted by the Bank of France in 1928, and continued for several years thereafter, of converting its holdings of foreign-exchange reserves into gold. But does saying that the Fed and the Bank of France caused the Great Depression mean that World War I, the abandonment of the gold standard, and the doubling of the price level in terms of gold during the war were irrelevant to the Great Depression? Of course not. Does it mean that the accumulation of World War I debt and the reparations obligations imposed on Germany by the Treaty of Versailles and the accumulation of debt issued by German state and local governments – debt and obligations that found their way onto the balance sheets of banks all over the world – were irrelevant to the Great Depression? Not at all.

Nevertheless, it does make sense to speak of the role of monetary policy as a specific cause of the Great Depression because the decisions made by the central bankers made a difference at critical moments when it would have been possible to avoid the calamity had they adopted policies that would have avoided a rapid accumulation of gold reserves by the Fed and the Bank of France, thereby moderating or counteracting, instead of intensifying, the deflationary pressures threatening the world economy. Interestingly, many of those objecting to the notion that Fed policy caused the 2008 financial crisis are not at all bothered by the idea that humans are causing global warming even though the world has evidently undergone previous cycles of rising and falling temperatures about which no one would suggest that humans played any causal role. Just as the existence of non-human factors that affect climate does not preclude one from arguing that humans are now playing a key role in the current upswing of temperatures, the existence of non-monetary factors contributing to the 2008 financial crisis need not preclude one from attributing a causal role in the crisis to the Fed.

So let’s have a look at some of the specific criticisms directed at Beckworth and Ponnuru. Here’s Paul Krugman’s take in which he refers back to an earlier exchange last December between Mr. Cruz and Janet Yellen when she testified before Congress:

Back when Ted Cruz first floated his claim that the Fed caused the Great Recession — and some neo-monetarists spoke up in support — I noted that this was a repeat of the old Milton Friedman two-step.

First, you declare that the Fed could have prevented a disaster — the Great Depression in Friedman’s case, the Great Recession this time around. This is an arguable position, although Friedman’s claims about the 30s look a lot less convincing now that we have tried again to deal with a liquidity trap. But then this morphs into the claim that the Fed caused the disaster. See, government is the problem, not the solution! And the motivation for this bait-and-switch is, indeed, political.

Now come Beckworth and Ponnuru to make the argument at greater length, and it’s quite direct: because the Fed “caused” the crisis, things like financial deregulation and runaway bankers had nothing to do with it.

As regular readers of this blog – if there are any – already know, I am not a big fan of Milton Friedman’s work on the Great Depression, and I agree with Krugman’s criticism that Friedman allowed his ideological preferences or commitments to exert an undue influence not only on his policy advocacy but on his substantive analysis. Thus, trying to make a case for his dumb k-percent rule as an alternative monetary regime to the classical gold standard generally favored by his libertarian, classical-liberal and conservative ideological brethren, he went to great and unreasonable lengths to deny the obvious fact that the demand for money is anything but stable, because such an admission would have made the k-percent rule untenable on its face, as it proved to be when Paul Volcker misguidedly tried to follow Friedman’s advice and conduct monetary policy by targeting monetary aggregates. Even worse, because he was so wedded to the naïve quantity-theory framework he thought he was reviving – when in fact he was using a modified version of the Cambridge/Keynesian theory of the demand for money, even making the patently absurd claim that the quantity theory of money was a theory of the demand for money – Friedman insisted on conducting monetary analysis under the assumption – also made by Keynes – that the quantity of money is directly under the control of the monetary authority, when in fact, under a gold standard – which means during the Great Depression – the quantity of money of any country is endogenously determined. As a result, there was a total mismatch between Friedman’s monetary model and the institutional setting in place at the time of the monetary phenomenon he was purporting to explain.

So although there were big problems with Friedman’s account of the Great Depression and with his characterization of the Fed’s mishandling of it, fixing those problems doesn’t reduce the Fed’s culpability. What is certainly true is that the Great Depression, the result of a complex set of circumstances going back at least 15 years to the start of World War I, might well have been avoided, largely or entirely, but for the egregious conduct of the Fed and the Bank of France. But it is also true that, at the onset of the Great Depression, there was no consensus about how to conduct monetary policy, even though Hawtrey and Cassel and a handful of others well understood how terribly monetary policy had gone off track. But theirs was a minority view, and Hawtrey and Cassel are still largely ignored or forgotten.

Ted Cruz may view the Fed’s mistakes in 2008 as a club with which to beat up on Janet Yellen, but for most of the rest of us who think that Fed mistakes were a critical element of the 2008 financial crisis, the point is not to make an ideological statement, it is to understand what went wrong and to try to keep it from happening again.

Krugman sends us to Mike Konczal for further commentary on Beckworth and Ponnuru.

Is Ted Cruz right about the Great Recession and the Federal Reserve? From a November debate, Cruz argued that “in the third quarter of 2008, the Fed tightened the money and crashed those asset prices, which caused a cascading collapse.”

Fleshing that argument out in the New York Times is David Beckworth and Ramesh Ponnuru, backing and expanding Cruz’s theory that “the Federal Reserve caused the crisis by tightening monetary policy in 2008.”

But wait, didn’t the Federal Reserve lower rates during that time?

Um, no. The Fed cut its interest rate target to 2.25% on March 18, 2008, and to 2% on April 30, which by my calculations would have been in the second quarter of 2008. There it remained until it was reduced to 1.5% on October 8, which by my calculations would have been in the fourth quarter of 2008. So on the face of it, Mr. Cruz was right that the Fed kept its interest rate target constant for over five months while the economy was contracting in real terms in the third quarter at a rate of 1.9% (and growing in nominal terms at a mere 0.8% rate).
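(Those two numbers, incidentally, imply that the GDP deflator was rising at roughly 0.8% - (-1.9%) ≈ 2.7% in the third quarter, since nominal growth is approximately real growth plus inflation.)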

Konczal goes on to accuse Cruz of inconsistency for blaming the Fed for tightening policy in 2008 before the crash while bashing the Fed for quantitative easing after the crash. That certainly is a just criticism, and I really hope someone asks Cruz to explain himself, though my expectations that that will happen are not very high. But that’s Cruz’s problem, not Beckworth’s or Ponnuru’s.

Konczal also focuses on the ambiguity in saying that the Fed caused the financial crisis by not cutting interest rates earlier:

I think a lot of people’s frustrations with the article – see Barry Ritholtz at Bloomberg here – is the authors slipping between many possible interpretations. Here’s the three that I could read them making, though these aren’t actual quotes from the piece:

(a) “The Federal Reserve could have stopped the panic in the financial markets with more easing.”

There’s nothing in the Valukas bankruptcy report on Lehman, or any of the numerous other reports that have since come out, that leads me to believe Lehman wouldn’t have failed if the short-term interest rate was lowered. One way to see the crisis was in the interbank lending spreads, often called the TED spread, which is a measure of banking panic. Looking at an image of the spread and its components, you can see a falling short-term t-bill rate didn’t ease that spread throughout 2008.

And, as Matt O’Brien noted, Bear Stearns failed before the passive tightening started.

The problem with this criticism is that it assumes that the only way the Fed can be effective is by altering the interest rate that it sets on overnight loans. It ignores the relationship between the interest rate that the Fed sets and total spending. That relationship is not entirely obvious, but almost all monetary economists have assumed that there is such a relationship, even if they can’t exactly agree on the mechanism by which the relationship is brought into existence. So it is not enough to look at the effect of the Fed’s interest rate on Lehman or Bear Stearns; you also have to look at the relationship between the interest rate and total spending and at how a higher rate of total spending would have affected Lehman and Bear Stearns. If the economy had been performing better in the second and third quarters, the assets that Lehman and Bear Stearns were holding would not have lost as much of their value. And even if Lehman and Bear Stearns had not survived, arranging for their takeover by other firms might have been less difficult.

But beyond that, Beckworth and Ponnuru themselves overlook the fact that tightening by the Fed did not begin in the third quarter – or even the second quarter – of 2008. The tightening may have begun as early as the middle of 2006. The chart below shows the rate of expansion of the adjusted monetary base from January 2004 through September 2008. From 2004 through the middle of 2006, the biweekly rate of expansion of the monetary base was consistently at an annual rate exceeding 4%, with the exception of a six-month interval at the end of 2005 when the rate fell to the 3-4% range. But from the middle of 2006 through September 2008, the biweekly rate of expansion was consistently below 3%, and was well below 2% for most of 2008. Now, I am generally wary of reading too much into changes in the monetary aggregates, because those changes can reflect either changes in supply conditions or changes in demand conditions. However, when the economy is contracting, with the rate of growth in total spending falling substantially below trend, and the rate of growth in the monetary aggregates is decreasing sharply, it isn’t unreasonable to infer that monetary policy was being tightened. So monetary policy may well have been tightened as early as 2006, and, insofar as the rate of growth of the monetary base is indicative of the stance of monetary policy, that tightening was hardly passive.

[Chart: biweekly annualized growth of the adjusted monetary base, January 2004 – September 2008]
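For readers who want to check this sort of calculation, here is a minimal sketch, assuming the FRED series code BASE (the discontinued St. Louis adjusted monetary base) is still served and treating the biweekly convention of 26 observations per year as given; both are assumptions, not confirmed details of how the chart above was produced:

    # Sketch: biweekly annualized growth of the adjusted monetary base, 2004-2008.
    # The FRED code "BASE" is an assumption; substitute another code if needed.
    from pandas_datareader import data as pdr

    base = pdr.DataReader("BASE", "fred", "2004-01-01", "2008-09-30")

    # 26 biweekly periods per year: compound each period-over-period change
    # 26 times to express it as an annual percentage rate.
    annualized = ((1 + base["BASE"].pct_change()) ** 26 - 1) * 100

    print(annualized.loc[:"2006-06"].describe())   # the post reads this stretch as mostly above 4%
    print(annualized.loc["2006-07":].describe())   # and this one as mostly below 3%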

(b) “The Federal Reserve could have helped the recovery by acting earlier in 2008. Unemployment would have peaked at, say, 9.5 percent, instead of 10 percent.”

That would have been good! I would have been a fan of that outcome, and I’m willing to believe it. That’s 700,000 people with a job that they wouldn’t have had otherwise. The stimulus should have been bigger too, with a second round once it was clear how deep the hole was and how Treasuries were crashing too.

Again, there are two points. First, tightening may well have begun at least a year or two before the third quarter of 2008. Second, the economy started collapsing in the third quarter of 2008, and the run-up in the value of the dollar starting in July 2008, foolishly interpreted by the Fed as a vote of confidence in its anti-inflation policy, was really a cry for help as the economy was being starved of liquidity just as the demand for liquidity was becoming really intense. That denial of liquidity led to a perverse situation in which the return to holding cash began to exceed the return on real assets, setting the stage for a collapse in asset prices and a financial panic. The Fed could have prevented the panic by providing more liquidity. Had it done so, the financial crisis would have been avoided, and the collapse in the real economy and the rise in unemployment would have been substantially mitigated.

(c) “The Federal Reserve could have stopped the Great Recession from ever happening. Unemployment in 2009 wouldn’t have gone above 5.5 percent.”

This I don’t believe. Do they? There’s a lot of “might have kept that decline from happening or at least moderated it” back-and-forth language in the piece.

Is the argument that we’d somehow avoid the zero-lower bound? Ben Bernanke recently showed that interest rates would have had to go to about -4 percent to offset the Great Recession at the time. Hitting the zero-lower bound earlier than later is good policy, but it’s still there.

I think there’s an argument about “expectations,” and “expectations” wouldn’t have been set for a Great Recession. A lot of the “expectations” stuff has a magic and tautological quality to it once it leaves the models and enters the policy discussion, but the idea that a random speech about inflation worries could have shifted the Taylor Rule 4 percent seems really off base. Why doesn’t it go haywire all the time, since people are always giving speeches?

Well, I have shown in this paper that, starting in 2008, there was a strong empirical relationship between stock prices and inflation expectations, so it’s not just tautological. And we’re not talking about random speeches; we are talking about the decisions of the FOMC and the reasons that were given for those decisions. The markets pay a lot of attention to those reasons.

And couldn’t it be just as likely that since the Fed was so confident about inflation in mid-2008 it boosted nominal income, by giving people a higher level of inflation expectations than they’d have otherwise? Given the failure of the Evans Rule and QE3 to stabilize inflation (or even prevent it from collapsing) in 2013, I imagine transporting them back to 2008 wouldn’t have fundamentally changed the game.

The inflation in 2008 was not induced by monetary policy but by adverse supply shocks; expectations of higher inflation, given the Fed’s inflation targeting, were thus tantamount to predictions of further monetary tightening.

If your mental model is that the Federal Reserve delaying something three months is capable of throwing 8.7 million people out of work, you should probably want to have much more shovel-ready construction and automatic stabilizers, the second of which kicked in right away without delay, as part of your agenda. It seems odd to put all the eggs in this basket if you also believe that even the most minor of mistakes are capable of devastating the economy so greatly.

Once again, it’s not a matter of just three months. But even if it were, in the summer of 2008 the economy was at a kind of inflection point, and the failure to ease monetary policy at that critical moment led directly to a financial crisis with cascading effects on the real economy. Had the Fed prevented total spending from dropping far below trend in the third quarter, the crisis might have been avoided, and the subsequent loss of output and employment could have been greatly mitigated.

And just to be clear, I have pointed out previously that the free market economy is fragile, because its smooth functioning depends on the coherence and consistency of expectations. That makes monetary policy very important, but I don’t dismiss shovel-ready construction and automatic stabilizers as means of anchoring expectations in a useful way, in contrast to the perverse way that inflation targeting stabilizes expectations.

Excess Volatility Strikes Again

Both David Henderson and Scott Sumner had some fun with this declaration of victory on behalf of Austrian Business Cycle Theory by Robert Murphy after the recent mini-stock-market crash.

As shocking as these developments [drops in stock prices and increased volatility] may be to some analysts, those versed in the writings of economist Ludwig von Mises have been warning for years that the Federal Reserve was setting us up for another crash.

While it’s always tempting to join in the fun of mocking ABCT, I am going to try to be virtuous and resist temptation, and instead comment on a different lesson that I would draw from the recent stock market fluctuations.

To do so, let me quote from Scott’s post:

Austrians aren’t the only ones who think they have something useful to say about future trends in asset prices. Keynesians and others also like to talk about “bubbles”, which I take as an implied prediction that the asset will do poorly over an extended period of time. If not, what exactly does “bubble” mean? I think this is all foolish; assume the Efficient Markets Hypothesis is roughly accurate, and look for what markets are telling us about policy.

I agree with Scott that it is nearly impossible to define “bubble” in an operational ex ante way. And I also agree that there is much truth in the Efficient Market Hypothesis and that it can be a useful tool in making inferences about the effects of policies as I tried to show a few years back in this paper. But I also think that there are some conceptual problems with EMH that Scott and others don’t take as seriously as they should. Scott believes that there is powerful empirical evidence that supports EMH. Responding to Murphy’s charge that EMH is no more falsifiable than ABCT, Scott replied:

The EMH is most certainly “falsifiable.”  It’s been tested in many ways.  Some people even claim that it has been falsified, although I’m not convinced.  In the tests that I think are the most relevant the EMH comes out ahead.  (Stocks respond immediately to news, stocks follow roughly a random walk, indexed funds outperformed managed funds, excess returns are not serially correlated, or not enough to profit from, etc., etc.)

A few comments come to mind.

First, Nobel laureate Robert Shiller was awarded the prize largely for work showing that stock prices exhibit excess volatility. The recent sharp fall in stock prices followed by a sharp rebound raises the possibility that stock prices have been fluctuating for reasons other than the flow of new publicly available information, which, according to EMH, is what determines stock prices. Shiller’s work is not necessarily definitive, so it’s possible to reconcile EMH with observed volatility, but I think that there are good reasons for skepticism.
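Shiller’s variance-bound argument is worth restating. If the market price p_t is the rational forecast of the ex post “perfect-foresight” price, the discounted value of subsequently realized dividends,

p^*_t = \sum_{k=1}^{\infty} \frac{d_{t+k}}{(1+r)^k}, \qquad p^*_t = p_t + \varepsilon_t,

with the forecast error \varepsilon_t uncorrelated with the forecast p_t, then \mathrm{Var}(p^*) = \mathrm{Var}(p) + \mathrm{Var}(\varepsilon) \ge \mathrm{Var}(p). Shiller found the sample variance of actual stock prices far exceeding that of the ex post series – the reverse of what the bound requires.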

Second, there are theories other than EMH that predict, or are at least consistent with, stock prices following a random walk. A good example is Keynes’s discussion of the stock exchange in chapter 12 of the General Theory, in which Keynes actually formulated a version of EMH, but rejected it based on his intuition that investors focused on “fundamentals” would not have the capital resources to finance their positions when, for whatever reason, market sentiment turns against them. According to Keynes, picking stocks is like guessing who will win a beauty contest. You can guess either by forming an opinion about the most beautiful contestant or by guessing who the judges will think is the most beautiful. Forming an opinion about who is the most beautiful is like picking stocks based on fundamentals (EMH); guessing who the judges will think is most beautiful is like picking stocks based on predicting market sentiment (the Keynesian theory). EMH and the Keynesian theory are totally contrary to each other, but it’s not clear to me that any of the tests mentioned by Scott (random fluctuations in stock prices, index funds outperforming managed funds, excess returns not serially correlated) is inconsistent with the Keynesian theory.
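To make concrete the mechanics of one of the tests Scott mentions, here is a minimal sketch of a lag-1 serial-correlation check, run on simulated returns purely for illustration; a real test would substitute an actual return series for the simulated ones:

    import numpy as np

    rng = np.random.default_rng(0)

    def lag1_autocorr(r):
        # Sample first-order autocorrelation of a return series.
        r = r - r.mean()
        return float(r[:-1] @ r[1:] / (r @ r))

    # Under the random-walk null, returns are serially uncorrelated ...
    iid = rng.normal(0.0, 0.01, 2500)       # roughly ten years of daily returns

    # ... whereas an MA(1) process has built-in momentum: r_t = e_t + 0.3 e_{t-1}.
    eps = rng.normal(0.0, 0.01, 2501)
    ma1 = eps[1:] + 0.3 * eps[:-1]

    print(lag1_autocorr(iid))   # near 0: consistent with the random-walk null
    print(lag1_autocorr(ma1))   # near 0.28: predictable, violating the null

Note that passing such a test shows only that returns are unpredictable; as argued above, it does not discriminate between EMH and the Keynesian beauty-contest story.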

Third, EMH presumes that there is a direct line of causation running from “fundamentals” to “expectations,” and that expectations are rationally inferred from “fundamentals.” That neat conceptual dichotomy between objective fundamentals and rational expectations based on fundamentals presumes that fundamentals are independent of expectations. But that is clearly false. The state of expectations is itself fundamental. Expectations can be and often are self-fulfilling. That is a commonplace observation about social interactions. The nature and character of many social interactions depends on the expectations with which people enter into those interactions.

I may hold a very optimistic view about the state of the economy today. But suppose that I wake up tomorrow and hear that the Shanghai stock market crashes, going down by 30% in one day. Will my expectations be completely independent of my observation of falling asset prices in China? Maybe, but what if I hear that S&P futures are down by 10%? If other people start revising their expectations, will it not become rational for me to change my own expectations at some point? How can it not be rational for me to change my expectations if I see that everyone else is changing theirs? If people are becoming more pessimistic they will reduce their spending, and my income and my wealth, directly or indirectly, depend on how much other people are planning to spend. So my plans have to take into account the expectations of others.

An equilibrium requires consistent expectations among individuals. If you posit an exogenous change in the expectations of some people, then, unless there is only one set of expectations consistent with equilibrium, the exogenous change in the expectations of some may very well imply a movement toward another equilibrium with a set of expectations different from the set characterizing the previous equilibrium. There may be cases in which the shock to expectations is ephemeral, expectations reverting to what they were previously. Perhaps that was what happened last week. But it is also possible that expectations are volatile, and will continue to fluctuate. If so, who knows where we will wind up? EMH provides no insight into that question.
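A toy example – my own construction, purely illustrative – shows how an exogenous shock to expectations can move an economy between equilibria without any change in fundamentals. Suppose aggregate spending is an S-shaped function of expected spending, so that the map has two locally stable fixed points:

    import numpy as np

    def spending(expected):
        # S-shaped response: cautious when pessimistic, free-spending when optimistic.
        return 0.2 + 0.6 / (1 + np.exp(-12 * (expected - 0.5)))

    def settle(e, rounds=50):
        # Expectations adapt each period to the spending they generate.
        for _ in range(rounds):
            e = spending(e)
        return e

    print(settle(0.60))  # optimistic start: settles near the high equilibrium (~0.78)
    print(settle(0.45))  # modest pessimistic shock: settles near the low one (~0.22)

Both resting points are equilibria with mutually consistent expectations; which one the economy ends up at depends entirely on the expectations agents happen to start with.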

I started out by saying that I was going to resist the temptation to mock ABCT, but I’m afraid that I must acknowledge that temptation has got the better of me. Here are two charts: the first shows the movement of gold prices from August 2005 to August 2015, the second shows the movement of the S&P 500 over the same period. I leave it to readers to decide which chart is displaying the more bubble-like price behavior.

[Chart: gold price, August 2005 – August 2015]

[Chart: S&P 500, August 2005 – August 2015]

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” an attack on two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional, and perhaps ideological, association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price-stickiness assumptions; I don’t know why Romer identifies market clearing as a Lucasian dogma – may be no more justified from a history-of-science perspective than would be criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis]

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general-equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized, the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”), which I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents of what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Neo-Fisherism and All That

A few weeks ago Michael Woodford and his Columbia colleague Mariana Garcia-Schmidt made an initial response to the Neo-Fisherian argument advanced by, among others, John Cochrane and Stephen Williamson that a central bank can achieve its inflation target by pegging its interest-rate instrument at a rate such that if the expected inflation rate is the inflation rate targeted by the central bank, the Fisher equation would be satisfied. In other words, if the central bank wants 2% inflation, it should set the interest rate instrument under its control at the Fisherian real rate of interest (aka the natural rate) plus 2% expected inflation. So if the Fisherian real rate is 2%, the central bank should set its interest-rate instrument (Fed Funds rate) at 4%, because, in equilibrium – and, under rational expectations, that is the only policy-relevant solution of the model – inflation expectations must satisfy the Fisher equation.
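
In symbols (the notation here is mine, though the relationship is standard), the whole argument rests on nothing more than the Fisher equation

$$i = r + \pi^e,$$

where $i$ is the setting of the central bank’s interest-rate instrument, $r$ is the Fisherian real (natural) rate, and $\pi^e$ is expected inflation. With $r = 2\%$ and a 2% inflation target, the prescribed setting is $i = 2\% + 2\% = 4\%$, exactly the arithmetic in the example above.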

The Neo-Fisherians believe that, by way of this insight, they have overturned at least two centuries of standard monetary theory, dating back at least to Henry Thornton, instructing the monetary authorities to raise interest rates to combat inflation and to reduce interest rates to counter deflation. According to the Neo-Fisherian Revolution, this was all wrong: the way to reduce inflation is for the monetary authority to reduce the setting on its interest-rate instrument and the way to counter deflation is to raise the setting on the instrument. That is supposedly why the Fed, by reducing its Fed Funds target practically to zero, has locked us into a low-inflation environment.

Unwilling to junk more than 200 years of received doctrine on the basis, not of a behavioral relationship, but of a reduced-form equilibrium condition containing no information about the direction of causality, few monetary economists and no policy makers have become devotees of the Neo-Fisherian Revolution. Nevertheless, the Neo-Fisherian argument has drawn enough attention to elicit a response from Michael Woodford, who is the go-to monetary theorist for monetary-policy makers. The Woodford-Garcia-Schmidt (hereinafter WGS) response (for now just a slide presentation) has already been discussed by Noah Smith, Nick Rowe, Scott Sumner, Brad DeLong, Roger Farmer and John Cochrane. Nick Rowe’s discussion, not surprisingly, is especially penetrating in distilling the WGS presentation into its intuitive essence.

Using Nick’s discussion as a starting point, I am going to offer some comments of my own on Neo-Fisherism and the WGS critique. Right off the bat, WGS concede that it is possible that by increasing the setting of its interest-rate instrument, a central bank could move the economy from one rational-expectations equilibrium to another, the only difference between the two being that inflation in the second would differ from inflation in the first by an amount exactly equal to the difference in the corresponding settings of the interest-rate instrument. John Cochrane apparently feels pretty good about having extracted this concession from WGS, remarking

My first reaction is relief — if Woodford says it is a prediction of the standard perfect foresight / rational expectations version, that means I didn’t screw up somewhere. And if one has to resort to learning and non-rational expectations to get rid of a result, the battle is half won.

And my first reaction to Cochrane’s first reaction is: why only half? What else is there to worry about besides a comparison of rational-expectations equilibria? Well, let Cochrane read Nick Rowe’s blogpost. If he does, he might realize that if you do no more than compare alternative steady-state equilibria, ignoring the path leading from one equilibrium to the other, you miss just about everything that makes macroeconomics worth studying (by the way, I do realize the question-begging nature of that remark). Of course that won’t necessarily bother Cochrane, because, like other practitioners of modern macroeconomics, he has convinced himself that it is precisely by excluding everything but rational-expectations equilibria from consideration that modern macroeconomics has made what its practitioners like to think of as progress, and what its critics regard as the opposite.

But Nick Rowe actually takes the trouble to show what might happen if you try to specify the path by which you could get from rational-expectations equilibrium A with the interest-rate instrument of the central bank set at i to rational-expectations equilibrium B with the interest-rate instrument of the central bank set at i + ε. If you try to specify a process of trial-and-error (tatonnement) that leads from A to B, you will almost certainly fail, your only chance being to get it right on your first try. And, as Nick further points out, the very notion of a tatonnement process leading from one equilibrium to another is a huge stretch, because, in the real world, there are no “backs” as there are in tatonnement. If you enter into an exchange, you can’t nullify it, as you can under tatonnement, just because the price you agreed on turns out not to have been an equilibrium price. For there to be a tatonnement path from the first equilibrium that converges on the second, the monetary authority must set its interest-rate instrument in the conventional, not the Neo-Fisherian, manner, using variations in the real interest rate as a lever by which to nudge the economy onto a path leading to a new equilibrium rather than away from it.
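
To make the instability concrete, here is a toy simulation in the spirit of Nick’s exercise, though it is my own sketch, not his model or WGS’s: a one-line IS curve, a one-line Phillips curve, and adaptive (hence non-rational) expectations, with every functional form and parameter value an illustrative assumption.

```python
# Toy comparison of a Neo-Fisherian interest-rate peg with a conventional rule
# under adaptive expectations. Illustrative sketch only; all parameters assumed.

R_STAR = 0.02   # natural real rate (assumed)
TARGET = 0.02   # central bank's inflation target
SIGMA, KAPPA, LAM, PHI = 1.0, 0.3, 0.5, 1.5   # illustrative parameters

def simulate(policy, periods=12, pi_exp=0.03, pi_last=0.03):
    """Inflation path when expectations start above the 2% target."""
    path = []
    for _ in range(periods):
        i = policy(pi_exp, pi_last)
        gap = -SIGMA * (i - pi_exp - R_STAR)   # IS curve: a high real rate depresses output
        pi = pi_exp + KAPPA * gap              # Phillips curve
        pi_exp += LAM * (pi - pi_exp)          # adaptive expectations
        pi_last = pi
        path.append(pi)
    return path

peg = lambda pi_exp, pi_last: R_STAR + TARGET                              # Neo-Fisherian peg
rule = lambda pi_exp, pi_last: R_STAR + pi_exp + PHI * (pi_last - TARGET)  # conventional rule

print("pegged:      ", [round(p, 4) for p in simulate(peg)])
print("conventional:", [round(p, 4) for p in simulate(rule)])
```

Starting with expected inflation at 3% and a 2% target, the pegged path drifts ever further from target, while the conventional rule, by raising the real rate whenever inflation runs above target, pulls the economy back; that is exactly the lever Nick describes.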

The very notion that you don’t have to worry about the path by which you get from one equilibrium to another is so bizarre that it would be merely laughable if it were not so dangerous. Kenneth Boulding used to tell a story about a physicist, a chemist and an economist stranded on a desert island with nothing to eat except a can of food, but nothing to open the can with. The physicist and the chemist tried to figure out a way to open the can, but the economist just said: “assume a can opener.” But I wonder if even Boulding could have imagined the disconnect from reality embodied in the Neo-Fisherian argument.

Having registered my disapproval of Neo-Fisherism, let me now reverse field and make some critical comments about the current state of non-Neo-Fisherian monetary theory, and what makes it vulnerable to off-the-wall ideas like Neo-Fisherism. The important fact to consider about the past two centuries of monetary theory that I referred to above is that for at least three-quarters of that time there was a basic default assumption that the value of money was ultimately governed by the value of some real commodity, usually either silver or gold (or even both). There could be temporary deviations between the value of money and the value of the monetary standard, but because there was a standard, the value of gold or silver provided a benchmark against which the value of money could always be reckoned. I am not saying that this was either a good way of thinking about the value of money or a bad way; I am just pointing out that this was the metatheoretical background governing how people thought about money.

Even after the final collapse of the gold standard in the mid-1930s, there was a residue of metalism that remained, people still calculating values in terms of gold equivalents and the value of currency in terms of its gold price. Once the gold standard collapsed, it was inevitable that these inherited habits of thinking about money would eventually give way to new ones, and it took another 40 years or so until the official way of thinking about the value of money finally shed any vestige of the gold mentality. In our age of enlightenment, no sane person any longer thinks about the value of money in terms of gold or silver equivalents.

But the problem for monetary theory is that, without a real-value equivalent to assign to money, the value of money in our macroeconomic models became theoretically indeterminate. If the value of money is theoretically indeterminate, so, too, is the rate of inflation. The value of money and the rate of inflation are simply, as Fischer Black understood, whatever people in the aggregate expect them to be. Nevertheless, our basic mental processes for understanding how central banks can use an interest-rate instrument to control the value of money are carryovers from an earlier epoch when the value of money was determined, most of the time and in most places, by convertibility, either actual or expected, into gold or silver. The interest-rate instrument of central banks was not primarily designed as a method for controlling the value of money; it was the mechanism by which the central bank could control the amount of reserves on its balance sheet and the amount of gold or silver in its vaults. There was only an indirect connection (at least until the 1920s) between a central bank setting its interest-rate instrument to control its balance sheet and the effect on prices and inflation. The rules of monetary policy developed under a gold standard are not necessarily applicable to an economic system in which the value of money is fundamentally indeterminate.

Viewed from this perspective, the Neo-Fisherian Revolution appears as a kind of reductio ad absurdum of the present confused state of monetary theory in which the price level and the rate of inflation are entirely subjective and determined totally by expectations.

A New Paper on the Short, But Sweet, 1933 Recovery Confirms that Hawtrey and Cassel Got it Right

In a recent post, the indispensable Marcus Nunes drew my attention to a working paper by Andrew Jalil of Occidental College and Gisela Rua of the Federal Reserve Board. The paper is called “Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record.” Subsequently I noticed that Mark Thoma had also posted the abstract on his blog.

Here’s the abstract:

This paper uses the historical narrative record to determine whether inflation expectations shifted during the second quarter of 1933, precisely as the recovery from the Great Depression took hold. First, by examining the historical news record and the forecasts of contemporary business analysts, we show that inflation expectations increased dramatically. Second, using an event-studies approach, we identify the impact on financial markets of the key events that shifted inflation expectations. Third, we gather new evidence—both quantitative and narrative—that indicates that the shift in inflation expectations played a causal role in stimulating the recovery.

There’s a lot of new and interesting stuff in this paper even though the basic narrative framework goes back almost 80 years to the discussion of the 1933 recovery in Hawtrey’s Trade Depression and The Way Out. The paper highlights the importance of rising inflation (or price-level) expectations in generating the recovery, which started within a few weeks of FDR’s inauguration in March 1933. In the absence of the direct measures of inflation expectations now available, such as breakeven TIPS spreads or surveys of consumer and business expectations, Jalil and Rua document the sudden and sharp shift in expectations in three different ways.

First, they document that there was a sharp spike in news coverage of inflation in April 1933. Second, they show an expectational shift toward inflation through a close analysis of the economic reporting and commentary in the Economist and in Business Week, providing a fascinating account of the evolution of FDR’s thinking and of how his economic policy was assessed in the period between the election in November 1932 and April 1933, when the gold standard was suspended. Just before the election, the Economist observed

No well-informed man in Wall Street expects the outcome of the election to make much real difference in business prospects, the argument being that while politicians may do something to bring on a trade slump, they can do nothing to change a depression into prosperity (October 29, 1932)

On April 22, 1933, just after FDR took the US off the gold standard, the Economist commented

As usual, Wall Street has interpreted the policy of the Washington Administration with uncanny accuracy. For a week or so before President Roosevelt announced his abandonment of the gold standard, Wall Street was “talking inflation.”

A third indication of increasing inflation expectations is drawn from five independent economic forecasters, all of whom began predicting inflation — some sooner than others — during the April-May time frame.

Jalil and Rua extend the important work of Daniel Nelson whose 1991 paper “Was the Deflation of 1929-30 Anticipated? The Monetary Regime as Viewed by the Business Press” showed that the 1929-30 downturn coincided with a sharp drop in price level expectations, providing powerful support for the Hawtrey-Cassel interpretation of the onset of the Great Depression.

Besides providing persuasive evidence from multiple sources that inflation expectations shifted in the spring of 1933, Jalil and Rua identify five key events or news shocks that focused attention on a changing policy environment that would lead to rising prices.

1. Abandonment of the Gold Standard and a Pledge by FDR to Raise Prices (April 19)

2. Passage of the Thomas Inflation Amendment to the Farm Relief Bill by the Senate (April 28)

3. Announcement of Open Market Operations (May 24)

4. Announcement that the Gold Clause Would Be Repealed and a Reduction in the New York Fed’s Rediscount Rate (May 26)

5. FDR’s Message to the World Economic Conference Calling for Restoration of the 1926 Price Level (June 19)

Jalil and Rua perform an event study and find that stock prices rose significantly and the dollar depreciated against gold and pound sterling after each of these news shocks. They also discuss the macroeconomic effects of the shift in inflation expectations, showing that a standard macro model cannot account for the rapid 1933 recovery. Further, they scrutinize the claim by Friedman and Schwartz in their Monetary History of the United States that, based on the lack of evidence of any substantial increase in the quantity of money, “the economic recovery in the half-year after the panic owed nothing to monetary expansion.” Friedman and Schwartz note that, given the increase in prices and the more rapid increase in output, the velocity of circulation must have increased, without mentioning the role of rising inflation expectations in reducing the amount of cash (relative to income) that people wanted to hold.
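
For readers unfamiliar with the event-study method, here is a minimal sketch of the kind of calculation involved. The five dates are the news shocks listed above, but the daily return figures are hypothetical placeholders, not Jalil and Rua’s data.

```python
# Minimal event-study sketch: compare asset returns on news-shock days with
# ordinary days. Dates are the five 1933 news shocks; return values are
# hypothetical placeholders, not data from Jalil and Rua.

import statistics

returns = {   # hypothetical daily stock returns
    "1933-04-18": 0.002, "1933-04-19": 0.045, "1933-04-20": 0.011,
    "1933-04-27": -0.001, "1933-04-28": 0.032, "1933-05-24": 0.021,
    "1933-05-26": 0.018, "1933-06-19": 0.025, "1933-06-20": 0.004,
}
events = ["1933-04-19", "1933-04-28", "1933-05-24", "1933-05-26", "1933-06-19"]

normal = [r for d, r in returns.items() if d not in events]
mu, sd = statistics.mean(normal), statistics.stdev(normal)

# How unusual is each event-day return relative to ordinary days?
for d in events:
    z = (returns[d] - mu) / sd
    print(f"{d}: return {returns[d]:+.3f} ({z:+.1f} sd from non-event days)")
```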

Jalil and Rua also offer a very insightful explanation for the remarkably rapid recovery in the April-July period, suggesting that the commitment to raise prices back to their 1926 levels encouraged businesses to hasten their responses to the prospect of rising prices, because prices would stop rising after they reached their target level.

The literature on price-level targeting has shown that, relative to inflation targeting, this policy choice has the advantage of removing more uncertainty in terms of the future level of prices. Under price-level targeting, inflation depends on the relationship between the current price level and its target. Inflation expectations will be higher the lower is the current price level. Thus, Roosevelt’s commitment to a price-level target caused market participants to expect inflation until prices were back at that higher set target.
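
In symbols (my notation, not Jalil and Rua’s), and on the simplest reading, with full reversion to the target expected within the horizon: if $p_t$ is the log price level and $p^*$ the fixed log target, expected inflation under a credible price-level target is

$$\pi^e_t = p^* - p_t,$$

which is higher the lower the current price level and falls toward zero as prices approach the target. That is the self-limiting property that encouraged businesses to act while prices were still below the 1926 benchmark.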

A few further comments before closing. Jalil and Rua have a brief discussion of whether other factors besides increasing inflation expectations could account for the rapid recovery. The only alternative factor that they mention is exit from the gold standard. This discussion is somewhat puzzling inasmuch as they have already noted that exit from the gold standard was one of the five news shocks (and by all odds the most important one) causing the increase in inflation expectations. They go on to point out that no other country that left the gold standard during the Great Depression experienced anywhere near as rapid a recovery as did the US. Because international trade accounted for a relatively small share of the US economy, they argue that the stimulus to production by US producers of tradable goods from a depreciating dollar would not have been all that great. But that just shows that the macroeconomic significance of abandoning the gold standard lay not in shifting the real exchange rate, but in raising the price level. The US recovery after leaving the gold standard was so much more powerful than the recoveries of other countries because, at least for a short time, the US sought to use monetary policy aggressively to raise prices, while other countries were content merely to stop the deflation that the gold standard had inflicted on them, making no attempt to reverse the deflation that had already occurred.

Jalil and Rua conclude with a discussion of possible explanations for why the April-July recovery seemed to peter out suddenly at the end of July. They offer two possible explanations: first, that passage of the National Industrial Recovery Act in July was a negative supply shock, and second, that the rapid recovery between April and July persuaded FDR that further inflation was no longer necessary, with actual inflation and expected inflation both subsiding as a result. These are obviously not competing explanations. Indeed, the NIRA may itself have been another reason why FDR no longer felt inflation was necessary, as indicated by this news story in the New York Times:

The government does not contemplate entering upon inflation of the currency at present and will issue cheaper money only as a last resort to stimulate trade, according to a close adviser of the President who discussed financial policies with him this week. This official asserted today that the President was well satisfied with the business improvement and the government’s ability to borrow money at cheap rates. These are interpreted as good signs, and if the conditions continue as the recovery program broadened, it was believed no real inflation of the currency would be necessary. (“Inflation Put Off, Officials Suggest,” New York Times, August 4, 1933)

If only . . .

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain (at least not here; he probably has elsewhere) exactly why he doesn’t think equality of demand and supply in a Walrasian auction is the right equilibrium concept. I would be interested in hearing from him why he thinks it is not. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model: if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model doesn’t posit that agents correctly foresee the equilibrium price vector, which would be an extreme and unrealistic assumption about how the world actually works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.
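
The consistency property can be made concrete with a deliberately trivial model of my own devising (not anyone’s macro model): let the realized price depend on the expected price, and define the rational-expectations solution as the fixed point of that mapping.

```python
# Rational expectations as a fixed-point (model-consistency) property.
# Hypothetical one-equation model: realized price p = A + B * E[p], |B| < 1.
# Nothing here claims that real-world agents form expectations this way.

A, B = 2.0, 0.5   # illustrative parameters

def realized_price(expected_price: float) -> float:
    """The model's mapping from the expected price to the realized price."""
    return A + B * expected_price

p_exp = 10.0         # arbitrary initial guess
for _ in range(30):  # iterate the mapping toward its fixed point
    p_exp = realized_price(p_exp)

print(f"fixed point:  {p_exp:.6f}")        # approaches 4.0
print(f"closed form:  {A / (1 - B):.6f}")  # p* = A / (1 - B) = 4.0
```

At the fixed point, the expected price reproduces itself as the model’s solution; that consistency requirement, not a claim about how beliefs are actually formed, is what rational expectations amounts to.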

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and for which, in view of Roger’s recent discussion of unit roots, there is dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a new steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e., becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
