Archive for the 'expectations' Category

Excess Volatility Strikes Again

Both David Henderson and Scott Sumner had some fun with this declaration of victory on behalf of Austrian Business Cycle Theory by Robert Murphy after the recent mini-stock-market crash.

As shocking as these developments [drops in stock prices and increased volatility] may be to some analysts, those versed in the writings of economist Ludwig von Mises have been warning for years that the Federal Reserve was setting us up for another crash.

While it’s always tempting to join in the fun of mocking ABCT, I am going to try to be virtuous and resist temptation, and instead comment on a different lesson that I would draw from the recent stock market fluctuations.

To do so, let me quote from Scott’s post:

Austrians aren’t the only ones who think they have something useful to say about future trends in asset prices. Keynesians and others also like to talk about “bubbles”, which I take as an implied prediction that the asset will do poorly over an extended period of time. If not, what exactly does “bubble” mean? I think this is all foolish; assume the Efficient Markets Hypothesis is roughly accurate, and look for what markets are telling us about policy.

I agree with Scott that it is nearly impossible to define “bubble” in an operational ex ante way. And I also agree that there is much truth in the Efficient Market Hypothesis and that it can be a useful tool in making inferences about the effects of policies as I tried to show a few years back in this paper. But I also think that there are some conceptual problems with EMH that Scott and others don’t take as seriously as they should. Scott believes that there is powerful empirical evidence that supports EMH. Responding to Murphy’s charge that EMH is no more falsifiable than ABCT, Scott replied:

The EMH is most certainly “falsifiable.”  It’s been tested in many ways.  Some people even claim that it has been falsified, although I’m not convinced.  In the tests that I think are the most relevant the EMH comes out ahead.  (Stocks respond immediately to news, stocks follow roughly a random walk, indexed funds outperformed managed funds, excess returns are not serially correlated, or not enough to profit from, etc., etc.)

A few comments come to mind.

First, Nobel laureate Robert Shiller was awarded the prize largely for work showing that stock prices exhibit excess volatility. The recent sharp fall in stock prices followed by a sharp rebound raises the possibility that stock prices have been fluctuating for reasons other than the flow of new publicly available information, which, according to EMH, is what determines stock prices. Shiller’s work is not necessarily definitive, so it’s possible to reconcile EMH with observed volatility, but I think that there are good reasons for skepticism.
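For readers who want the excess-volatility claim in compact form, here is a standard textbook rendering of the variance bound Shiller tested (my own summary, not a quotation from Shiller): if the actual price is the rational expectation of the ex post rational price (the discounted value of subsequently realized dividends), then the actual price should fluctuate less than the ex post rational price.

```latex
p_t = E_t\!\left[p_t^{*}\right]
\;\Longrightarrow\;
p_t^{*} = p_t + u_t,\quad E_t[u_t] = 0
\;\Longrightarrow\;
\operatorname{Var}\!\left(p_t^{*}\right) \;\geq\; \operatorname{Var}\!\left(p_t\right)
```

Shiller’s finding was that measured stock prices appear to violate this inequality, fluctuating more than the ex post rational price.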

Second, there are theories other than EMH that predict or are at least consistent with stock prices following a random walk. A good example is Keynes’s discussion of the stock exchange in chapter 12 of the General Theory, in which Keynes actually formulated a version of EMH, but rejected it based on his intuition that investors focused on “fundamentals” would not have the capital resources to finance their positions when, for whatever reason, market sentiment turns against them. According to Keynes, picking stocks is like guessing who will win a beauty contest. You can guess either by forming an opinion about the most beautiful contestant or by guessing who the judges will think is the most beautiful. Forming an opinion about who is the most beautiful is like picking stocks based on fundamentals (EMH); guessing who the judges will think is most beautiful is like picking stocks based on predicting market sentiment (the Keynesian theory). EMH and the Keynesian theory are totally contrary to each other, but it’s not clear to me that any of the tests mentioned by Scott (random fluctuations in stock prices, index funds outperforming managed funds, excess returns not serially correlated) is inconsistent with the Keynesian theory.
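To illustrate why such tests may not discriminate between the two theories, here is a minimal, purely illustrative sketch (my own construction, not anything from Scott’s or Keynes’s argument): a price driven entirely by i.i.d. “sentiment” shocks, with no fundamentals anywhere in sight, also follows a random walk and also produces returns with negligible serial correlation.

```python
import numpy as np

rng = np.random.default_rng(42)

# A purely sentiment-driven (non-fundamental) log price: each period market
# sentiment shifts by an i.i.d. shock, and the price simply tracks sentiment.
n = 10_000
sentiment_shocks = rng.normal(loc=0.0, scale=0.01, size=n)
log_price = np.cumsum(sentiment_shocks)   # a random walk
returns = np.diff(log_price)              # period-to-period returns

# Lag-1 autocorrelation of returns: close to zero, so the series "passes"
# a no-serial-correlation test even though fundamentals play no role at all.
lag1_autocorr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation of returns: {lag1_autocorr:.4f}")
```

Passing such a test, in other words, is consistent with prices being driven by sentiment just as much as by fundamentals.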

Third, EMH presumes that there is a direct line of causation running from “fundamentals” to “expectations,” and that expectations are rationally inferred from “fundamentals.” That neat conceptual dichotomy between objective fundamentals and rational expectations based on fundamentals presumes that fundamentals are independent of expectations. But that is clearly false. The state of expectations is itself fundamental. Expectations can be and often are self-fulfilling. That is a commonplace observation about social interactions. The nature and character of many social interactions depends on the expectations with which people enter into those interactions.

I may hold a very optimistic view about the state of the economy today. But suppose that I wake up tomorrow and hear that the Shanghai stock market crashes, going down by 30% in one day. Will my expectations be completely independent of my observation of falling asset prices in China? Maybe, but what if I hear that S&P futures are down by 10%? If other people start revising their expectations, will it not become rational for me to change my own expectations at some point? How can it not be rational for me to change my expectations if I see that everyone else is changing theirs? If people are becoming more pessimistic they will reduce their spending, and my income and my wealth, directly or indirectly, depend on how much other people are planning to spend. So my plans have to take into account the expectations of others.

An equilibrium requires consistent expectations among individuals. If you posit an exogenous change in the expectations of some people, unless there is only one set of expectations that is consistent with equilibrium, the exogenous change in the expectations of some may very well imply a movement toward another equilibrium with a set of expectations different from the set characterizing the previous equilibrium. There may be cases in which the shock to expectations is ephemeral, expectations reverting to what they were previously. Perhaps that was what happened last week. But it is also possible that expectations are volatile, and will continue to fluctuate. If so, who knows where we will wind up? EMH provides no insight into that question.

I started out by saying that I was going to resist the temptation to mock ABCT, but I’m afraid that I must acknowledge that temptation has got the better of me. Here are two charts: the first shows the movement of gold prices from August 2005 to August 2015, the second shows the movement of the S&P 500 from August 2005 to August 2015. I leave it to readers to decide which chart is displaying the more bubble-like price behavior.

[Chart: gold price, August 2005 to August 2015]


Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” an attack on two papers, one by McGrattan and Prescott and the other by Lucas and Moll, on aspects of growth theory. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what he calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations. But regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized, the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest, and Prices, which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education,” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In the 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Neo-Fisherism and All That

A few weeks ago Michael Woodford and his Columbia colleague Mariana Garcia-Schmidt made an initial response to the Neo-Fisherian argument advanced by, among others, John Cochrane and Stephen Williamson that a central bank can achieve its inflation target by pegging its interest-rate instrument at a rate such that if the expected inflation rate is the inflation rate targeted by the central bank, the Fisher equation would be satisfied. In other words, if the central bank wants 2% inflation, it should set the interest rate instrument under its control at the Fisherian real rate of interest (aka the natural rate) plus 2% expected inflation. So if the Fisherian real rate is 2%, the central bank should set its interest-rate instrument (Fed Funds rate) at 4%, because, in equilibrium – and, under rational expectations, that is the only policy-relevant solution of the model – inflation expectations must satisfy the Fisher equation.
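In symbols, the argument rests on nothing more than the Fisher equation, evaluated at the central bank’s target (using the numbers from the example above):

```latex
i \;=\; r \;+\; \pi^{e} \qquad\text{e.g.,}\qquad 4\% \;=\; 2\% \;+\; 2\%
```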

The Neo-Fisherians believe that, by way of this insight, they have overturned at least two centuries of standard monetary theory, dating back at least to Henry Thornton, instructing the monetary authorities to raise interest rates to combat inflation and to reduce interest rates to counter deflation. According to the Neo-Fisherian Revolution, this was all wrong: the way to reduce inflation is for the monetary authority to reduce the setting on its interest-rate instrument and the way to counter deflation is to raise the setting on the instrument. That is supposedly why the Fed, by reducing its Fed Funds target practically to zero, has locked us into a low-inflation environment.

Unwilling to junk more than 200 years of received doctrine on the basis, not of a behavioral relationship, but of a reduced-form equilibrium condition containing no information about the direction of causality, few monetary economists and no policy makers have become devotees of the Neo-Fisherian Revolution. Nevertheless, the Neo-Fisherian argument has drawn enough attention to elicit a response from Michael Woodford, who is the go-to monetary theorist for monetary-policy makers. The Woodford-Garcia-Schmidt (hereinafter WGS) response (for now just a slide presentation) has already been discussed by Noah Smith, Nick Rowe, Scott Sumner, Brad DeLong, Roger Farmer and John Cochrane. Nick Rowe’s discussion, not surprisingly, is especially penetrating in distilling the WGS presentation into its intuitive essence.

Using Nick’s discussion as a starting point, I am going to offer some comments of my own on Neo-Fisherism and the WGS critique. Right off the bat, WGS concede that it is possible that by increasing the setting of its interest-rate instrument, a central bank could move the economy from one rational-expectations equilibrium to another, the only difference between the two being that inflation in the second would differ from inflation in the first by an amount exactly equal to the difference in the corresponding settings of the interest-rate instrument. John Cochrane apparently feels pretty good about having extracted this concession from WGS, remarking

My first reaction is relief — if Woodford says it is a prediction of the standard perfect foresight / rational expectations version, that means I didn’t screw up somewhere. And if one has to resort to learning and non-rational expectations to get rid of a result, the battle is half won.

And my first reaction to Cochrane’s first reaction is: why only half? What else is there to worry about besides a comparison of rational-expectations equilibria? Well, let Cochrane read Nick Rowe’s blogpost. If he did, he might realize that if you do no more than compare alternative steady-state equilibria, ignoring the path leading from one equilibrium to the other, you miss just about everything that makes macroeconomics worth studying (by the way I do realize the question-begging nature of that remark). Of course that won’t necessarily bother Cochrane, because, like other practitioners of modern macroeconomics, he has convinced himself that it is precisely by excluding everything but rational-expectations equilibria from consideration that modern macroeconomics has made what its practitioners like to think of as progress, and what its critics regard as the opposite.

But Nick Rowe actually takes the trouble to show what might happen if you try to specify the path by which you could get from rational-expectations equilibrium A with the interest-rate instrument of the central bank set at i to rational-expectations equilibrium B with the interest-rate instrument of the central bank set at i + ε. If you try to specify a process of trial-and-error (tatonnement) that leads from A to B, you will almost certainly fail, your only chance being to get it right on your first try. And, as Nick further points out, the very notion of a tatonnement process leading from one equilibrium to another is a huge stretch, because, in the real world, there are “no backs” as there are in tatonnement. If you enter into an exchange, you can’t nullify it, as is the case under tatonnement, just because the price you agreed on turns out not to have been an equilibrium price. For there to be a tatonnement path from the first equilibrium that converges on the second requires that the monetary authority set its interest-rate instrument in the conventional, not the Neo-Fisherian, manner, using variations in the real interest rate as a lever by which to nudge the economy onto a path leading to a new equilibrium rather than away from it.
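Here is a minimal numerical sketch of the kind of experiment Nick Rowe describes. It is not the WGS model; it is just a textbook IS curve and Phillips curve with adaptive (not rational) expectations and made-up parameter values. If the central bank raises its interest-rate peg by ε and simply holds it there, the adjustment process moves away from, not toward, the new Fisherian equilibrium.

```python
# A toy experiment, not the WGS model: IS curve, Phillips curve, adaptive
# expectations, and a pegged nominal interest rate. Parameter values are
# arbitrary and purely illustrative.
sigma, kappa, gamma = 1.0, 0.5, 0.5    # IS slope, Phillips slope, expectations adjustment
r_star = 0.02                          # "natural" real rate
i_old, epsilon = 0.04, 0.01            # old peg (4%) and the increase in the peg

pi_e = i_old - r_star                  # start at the old Fisherian equilibrium (2% inflation)
i_new = i_old + epsilon                # new peg; the new Fisherian equilibrium would be 3% inflation

for t in range(10):
    output_gap = -sigma * (i_new - pi_e - r_star)   # IS: a real rate above r* depresses demand
    inflation = pi_e + kappa * output_gap           # Phillips curve
    pi_e += gamma * (inflation - pi_e)              # adaptive expectations
    print(f"t={t}: inflation={inflation:.4f}, expected inflation={pi_e:.4f}")

# Inflation drifts further and further below the new Fisherian value (3%):
# the higher peg raises the real rate, which lowers inflation, which raises
# the real rate further, so the process diverges rather than converging.
```

Under these assumptions the new Fisherian equilibrium exists, but the trial-and-error path never finds it, which is the point of the paragraph above.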

The very notion that you don’t have to worry about the path by which you get from one equilibrium to another is so bizarre that it would be merely laughable if it were not so dangerous. Kenneth Boulding used to tell a story about a physicist, a chemist and an economist stranded on a desert island with nothing to eat except a can of food, but nothing to open the can with. The physicist and the chemist tried to figure out a way to open the can, but the economist just said: “assume a can opener.” But I wonder if even Boulding could have imagined the disconnect from reality embodied in the Neo-Fisherian argument.

Having registered my disapproval of Neo-Fisherism, let me now reverse field and make some critical comments about the current state of non-Neo-Fisherian monetary theory, and what makes it vulnerable to off-the-wall ideas like Neo-Fisherism. The important fact to consider about the past two centuries of monetary theory that I referred to above is that for at least three-quarters of that time there was a basic default assumption that the value of money was ultimately governed by the value of some real commodity, usually either silver or gold (or even both). There could be temporary deviations between the value of money and the value of the monetary standard, but because there was a standard, the value of gold or silver provided a benchmark against which the value of money could always be reckoned. I am not saying that this was either a good way of thinking about the value of money or a bad way; I am just pointing out that this was the metatheoretical background governing how people thought about money.

Even after the final collapse of the gold standard in the mid-1930s, there was a residue of metalism that remained, people still calculating values in terms of gold equivalents and the value of currency in terms of its gold price. Once the gold standard collapsed, it was inevitable that these inherited habits of thinking about money would eventually give way to new ways of thinking, and it took another 40 years or so, until the official way of thinking about the value of money finally eliminated any vestige of the gold mentality. In our age of enlightenment, no sane person any longer thinks about the value of money in terms of gold or silver equivalents.

But the problem for monetary theory is that, without a real-value equivalent to assign to money, the value of money in our macroeconomic models became theoretically indeterminate. If the value of money is theoretically indeterminate, so, too, is the rate of inflation. The value of money and the rate of inflation are simply, as Fischer Black understood, whatever people in the aggregate expect them to be. Nevertheless, our basic mental processes for understanding how central banks can use an interest-rate instrument to control the value of money are carryovers from an earlier epoch when the value of money was determined, most of the time and in most places, by convertibility, either actual or expected, into gold or silver. The interest-rate instrument of central banks was not primarily designed as a method for controlling the value of money; it was the mechanism by which the central bank could control the amount of reserves on its balance sheet and the amount of gold or silver in its vaults. There was only an indirect connection – at least until the 1920s — between a central bank setting its interest-rate instrument to control its balance sheet and the effect on prices and inflation. The rules of monetary policy developed under a gold standard are not necessarily applicable to an economic system in which the value of money is fundamentally indeterminate.

Viewed from this perspective, the Neo-Fisherian Revolution appears as a kind of reductio ad absurdum of the present confused state of monetary theory in which the price level and the rate of inflation are entirely subjective and determined totally by expectations.

A New Paper on the Short, But Sweet, 1933 Recovery Confirms that Hawtrey and Cassel Got it Right

In a recent post, the indispensable Marcus Nunes drew my attention to a working paper by Andrew Jalil of Occidental College and Gisela Rua of the Federal Reserve Board. The paper is called “Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record.” Subsequently I noticed that Mark Thoma had also posted the abstract on his blog.

Here’s the abstract:

This paper uses the historical narrative record to determine whether inflation expectations shifted during the second quarter of 1933, precisely as the recovery from the Great Depression took hold. First, by examining the historical news record and the forecasts of contemporary business analysts, we show that inflation expectations increased dramatically. Second, using an event-studies approach, we identify the impact on financial markets of the key events that shifted inflation expectations. Third, we gather new evidence—both quantitative and narrative—that indicates that the shift in inflation expectations played a causal role in stimulating the recovery.

There’s a lot of new and interesting stuff in this paper even though the basic narrative framework goes back almost 80 years to the discussion of the 1933 recovery in Hawtrey’s Trade Depression and the Way Out. The paper highlights the importance of rising inflation (or price-level) expectations in generating the recovery, which started within a few weeks of FDR’s inauguration in March 1933. In the absence of the direct measures of inflation expectations that are now available, such as breakeven TIPS spreads, or of surveys of consumer and business expectations, Jalil and Rua document the sudden and sharp shift in expectations in three different ways.

First, they document that there was a sharp spike in news coverage of inflation in April 1933. Second, they show an expectational shift toward inflation by a close analysis of the economic reporting and commentary in the Economist and in Business Week, providing a fascinating account of the evolution of FDR’s thinking and how his economic policy was assessed in the period between the election in November 1932 and April 1933 when the gold standard was suspended. Just before the election, the Economist observed

No well-informed man in Wall Street expects the outcome of the election to make much real difference in business prospects, the argument being that while politicians may do something to bring on a trade slump, they can do nothing to change a depression into prosperity (October 29, 1932)

On April 22, 1933, just after FDR took the US off the gold standard, the Economist commented

As usual, Wall Street has interpreted the policy of the Washington Administration with uncanny accuracy. For a week or so before President Roosevelt announced his abandonment of the gold standard, Wall Street was “talking inflation.”

A third indication of increasing inflation expectations is drawn from five independent economic forecasters, all of whom began predicting inflation — some sooner than others — during the April-May time frame.

Jalil and Rua extend the important work of Daniel Nelson whose 1991 paper “Was the Deflation of 1929-30 Anticipated? The Monetary Regime as Viewed by the Business Press” showed that the 1929-30 downturn coincided with a sharp drop in price level expectations, providing powerful support for the Hawtrey-Cassel interpretation of the onset of the Great Depression.

Besides providing persuasive evidence from multiple sources that inflation expectations shifted in the spring of 1933, Jalil and Rua identify five key events or news shocks that focused attention on a changing policy environment that would lead to rising prices.

1. Abandonment of the Gold Standard and a Pledge by FDR to Raise Prices (April 19)

2. Passage of the Thomas Inflation Amendment to the Farm Relief Bill by the Senate (April 28)

3. Announcement of Open Market Operations (May 24)

4. Announcement that the Gold Clause Would Be Repealed and a Reduction in the New York Fed’s Rediscount Rate (May 26)

5. FDR’s Message to the World Economic Conference Calling for Restoration of the 1926 Price Level (June 19)

Jalil and Rua perform an event study and find that stock prices rose significantly and the dollar depreciated against gold and pound sterling after each of these news shocks. They also discuss the macroeconomic effects of the shift in inflation expectations, showing that a standard macro model cannot account for the rapid 1933 recovery. Further, they scrutinize the claim by Friedman and Schwartz in their Monetary History of the United States that, based on the lack of evidence of any substantial increase in the quantity of money, “the economic recovery in the half-year after the panic owed nothing to monetary expansion.” Friedman and Schwartz note that, given the increase in prices and the more rapid increase in output, the velocity of circulation must have increased, without mentioning the role of rising inflation expectations in reducing the amount of cash (relative to income) that people wanted to hold.
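The inference about velocity is just the equation of exchange rearranged: with the money stock roughly unchanged and nominal income rising, velocity must have risen.

```latex
MV = PY \quad\Longrightarrow\quad V = \frac{PY}{M}
```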

Jalil and Rua also offer a very insightful explanation for the remarkably rapid recovery in the April-July period, suggesting that the commitment to raise prices back to their 1926 levels encouraged businesses to hasten their responses to the prospect of rising prices, because prices would stop rising after they reached their target level.

The literature on price-level targeting has shown that, relative to inflation targeting, this policy choice has the advantage of removing more uncertainty in terms of the future level of prices. Under price-level targeting, inflation depends on the relationship between the current price level and its target. Inflation expectations will be higher the lower is the current price level. Thus, Roosevelt’s commitment to a price-level target caused market participants to expect inflation until prices were back at that higher set target.
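A simple way to express the point in the quoted passage (my own illustrative formulation, not the authors’): if the commitment is to return the (log) price level p to a target p* over some horizon h, then expected inflation is roughly the remaining gap spread over the horizon, so the further prices are below target, the higher is expected inflation, and expected inflation falls toward zero as the target is approached.

```latex
\pi^{e}_{t} \;\approx\; \frac{p^{*} - p_{t}}{h}
```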

A few further comments before closing. Jalil and Rua have a brief discussion of whether other factors besides increasing inflation expectations could account for the rapid recovery. The only factor that they mention as an alternative is exit from the gold standard. This discussion is somewhat puzzling inasmuch as they already noted that exit from the gold standard was one of five news shocks (and by all odds the most important one) in causing the increase in inflation expectations. They go on to point out that no other country that left the gold standard during the Great Depression experienced anywhere near as rapid a recovery as did the US. Because international trade accounted for a relatively small share of the US economy, they argue that the stimulus to production by US producers of tradable goods from a depreciating dollar would not have been all that great. But that just shows that the macroeconomic significance of abandoning the gold standard was not in shifting the real exchange rate, but in raising the price level. The reason the US recovery after leaving the gold standard was so much more powerful than it was in other countries is that, at least for a short time, the US sought to use monetary policy aggressively to raise prices, while other countries were content merely to stop the deflation that the gold standard had inflicted on them, but made no attempt to reverse the deflation that had already occurred.

Jalil and Rua conclude with a discussion of possible explanations for why the April-July recovery seemed to peter out suddenly at the end of July. They offer two possible explanations. First, passage of the National Industrial Recovery Act in July was a negative supply shock; second, the rapid recovery between April and July persuaded FDR that further inflation was no longer necessary, with actual inflation and expected inflation both subsiding as a result. These are obviously not competing explanations. Indeed, the NIRA may itself have been another reason why FDR no longer felt inflation was necessary, as indicated by this news story in the New York Times:

The government does not contemplate entering upon inflation of the currency at present and will issue cheaper money only as a last resort to stimulate trade, according to a close adviser of the President who discussed financial policies with him this week. This official asserted today that the President was well satisfied with the business improvement and the government’s ability to borrow money at cheap rates. These are interpreted as good signs, and if the conditions continue as the recovery program broadened, it was believed no real inflation of the currency would be necessary. (“Inflation Put Off, Officials Suggest,” New York Times, August 4, 1933)

If only . . .

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here, he probably has elsewhere — exactly why he doesn’t think equality of demand and supply in a Walrasian auction is the right equilibrium concept. But I would be interested in hearing from him why he thinks equality of supply and demand is not the right equilibrium concept. Perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model, the property being that if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; the model doesn’t posit that agents actually foresee the equilibrium price vector correctly, which would be an extreme and unrealistic assumption about how the world actually works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.
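A trivial sketch of what “property of a model” means here (my own toy example, nothing to do with any particular macro model): in a model where the realized price depends on the price agents expect, the rational-expectations solution is just the fixed point at which the expected price reproduces itself.

```python
# Toy model: the realized price depends on the expected price, p = a + b * p_e.
# Rational expectations is the model-consistency requirement p_e = p, i.e. the
# fixed point p = a / (1 - b). This is a property of the model, not a claim
# about how real-world agents actually form expectations.
a, b = 10.0, 0.5

def realized_price(expected_price: float) -> float:
    return a + b * expected_price

# Find the fixed point by simple iteration (converges because |b| < 1).
p_e = 0.0
for _ in range(50):
    p_e = realized_price(p_e)

print(f"rational-expectations price: {p_e:.6f}")      # -> 20.0
print(f"closed-form solution a/(1-b): {a / (1 - b):.6f}")
```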

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and, in view of Roger’s recent discussion of unit roots, dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e. becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he just stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written is an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input prices and than most output prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many of the explanations offered for it, including those of Ball and Mankiw. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets (well, not quite all other markets, but I will ignore that nuance) are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so the “mechanical” forces pushing an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
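
For the single-market half of that claim, a minimal sketch is easy to write down; the linear demand and supply schedules and the adjustment speed below are invented purely for illustration and come from nothing in the post.

def excess_demand(p):
    demand = 100.0 - 2.0 * p          # hypothetical downward-sloping demand
    supply = 20.0 + 3.0 * p           # hypothetical upward-sloping supply
    return demand - supply

p = 5.0                               # start away from the market-clearing price (p* = 16)
for _ in range(200):
    p += 0.1 * excess_demand(p)       # raise price under excess demand, cut it under excess supply

print(round(p, 4))                    # converges to 16.0, the equilibrium price

Nothing in that toy exercise carries over automatically once the excess demand in each market depends on the prices prevailing in all the others, which is exactly where the difficulty described above begins.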

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations of future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.
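
In symbols (the notation is mine, not Hayek’s or Hicks’s): let $p^{e}_{i,t}$ denote the price that agent $i$ expects to prevail at future date $t$, and $p_{t}$ the price that actually prevails. The condition for intertemporal equilibrium is then

\[
p^{e}_{i,t} = p^{e}_{j,t} \quad \text{for all agents } i, j \text{ and all future dates } t,
\]

and, if those common expectations are not to be disappointed as the future unfolds,

\[
p^{e}_{i,t} = p_{t} \quad \text{for all } i \text{ and all } t .
\]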

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

The Near Irrelevance of the Vertical Long-Run Phillips Curve

From a discussion about how much credit Milton Friedman deserves for changing the way that economists thought about inflation, I want to nudge the conversation in a slightly different direction, to restate a point that I made some time ago in one of my favorite posts (The Lucas Critique Revisited). But if Friedman taught us anything, it is that incessant repetition of the same already obvious point can do wonders for your reputation. That’s one lesson from Milton that I am willing to take to heart, though my tolerance for hearing myself say the same darn thing over and over again is probably not as great as Friedman’s was, which, to be sure, is not the only way in which I fall short of him. (I am almost a foot taller than he was, by the way.) Speaking of being a foot taller than Friedman, I don’t usually post pictures on this blog, but here is one that I have always found rather touching. And if you don’t know who the other guy is in the picture, you have no right to call yourself an economist.

[photo: friedman_&_Stigler]

At any rate, the expectations-augmented, long-run Phillips Curve, as we all know, was shown by Friedman to be vertical. But what exactly does it mean for the expectations-augmented, long-run Phillips Curve to be vertical? Discussions about whether the evidence supports the proposition that the expectations-augmented, long-run Phillips Curve is vertical (including some of the comments on my recent posts) suggest that people are not clear on what “long run” means in the context of the expectations-augmented Phillips Curve and have not really thought carefully about what empirical content is contained in the proposition that the expectations-augmented, long-run Phillips Curve is vertical.

Just to frame the discussion of the Phillips Curve, let’s talk about what the term “long run” means in economics. What it certainly does not mean is an amount of calendar time, though I won’t deny that there are frequent attempts to correlate the long run with varying durations of calendar time. But all such attempts either completely misunderstand what the long run actually represents, or they merely aim to provide the untutored with some illusion of concreteness in what is otherwise a completely abstract discussion. In fact, what “long run” connotes is simply a full transition from one equilibrium state to another in the context of a comparative-statics exercise.

If a change in some exogenous parameter is imposed on a pre-existing equilibrium, then the long run represents the full transition to a new equilibrium in which all endogenous variables have fully adjusted to the parameter change. The short run, then, refers to some intermediate adjustment to the parameter change in which some endogenous variables have been arbitrarily held fixed (presumably on the more or less reasonable assumption that some variables adjust more speedily to the posited parameter change than others do).

Now the Phillips Curve discovered by A. W. Phillips in his original paper was a strictly empirical relation between observed (wage) inflation and observed unemployment. But the expectations-augmented, long-run Phillips Curve is a theoretical construct. And what it represents is certainly not an observable relationship between inflation and unemployment; it is rather a locus of equilibrium points, each point representing full adjustment of the labor market to a particular rate of inflation, where full adjustment means that the rate of inflation is fully anticipated by all economic agents in the model. So what the expectations-augmented, long-run Phillips Curve is telling us is this: perform a series of comparative-statics exercises in which, starting from full equilibrium with a given rate of inflation fully expected, we change the exogenously imposed rate of inflation and deduce a new equilibrium in which the new rate of inflation is fully and universally expected; the equilibrium rate of unemployment corresponding to the new inflation parameter will then not differ from the equilibrium rate of unemployment corresponding to the original inflation parameter.
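
In the standard textbook notation, which is my shorthand and not anything used above, the same point can be stated compactly:

\[
u = u^{*} - \alpha\,(\pi - \pi^{e}), \qquad \alpha > 0,
\]

where $u$ is unemployment, $u^{*}$ the equilibrium (natural) rate, $\pi$ inflation, and $\pi^{e}$ expected inflation. In the comparative-statics sense of “long run,” expectations have fully adjusted, so $\pi = \pi^{e}$ and therefore $u = u^{*}$ whatever the fully anticipated rate of inflation happens to be: in $(\pi, u)$-space the locus of fully adjusted equilibria is a vertical line at $u^{*}$.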

Notice, as well, that the expectations-augmented, long-run Phillips Curve is not saying that imposing a new rate of inflation on an actual economic system would lead to a new equilibrium in which there was no change in unemployment; it is merely comparing alternative equilibria of the same system with different exogenously imposed rates of inflation. To make a statement about the effect of a change in the rate of inflation on unemployment, one has to be able to specify an adjustment path in moving from one equilibrium to another. The comparative-statics method says nothing about the adjustment path; it simply compares two alternative equilibrium states and specifies the change in the endogenous variables induced by the change in an exogenous parameter.

So the vertical shape of the expectations-augmented, long-run Phillips Curve tells us very little about how, in any given situation, a change in the rate of inflation would actually affect the rate of unemployment. The expectations-augmented, long-run Phillips Curve cannot tell us how a real system starting from equilibrium would be affected by a change in the rate of inflation, because the underlying comparative-statics exercise is unable to specify the adjustment path taken by a system once it departs from its original equilibrium state; and it is even less equipped to tell us about the adjustment to a change in the rate of inflation when a system is not even in equilibrium to begin with.

The entire discourse of the expectations-augmented, long-run Phillips Curve is completely divorced from the kinds of questions that policy makers in the real world usually have to struggle with – questions like whether increasing the rate of inflation in an economy with abnormally high unemployment would facilitate or obstruct the adjustment process that takes the economy back to a more normal unemployment rate. The expectations-augmented, long-run Phillips Curve may not be completely irrelevant to the making of economic policy – it is good to know, for example, that if we are trying to figure out which time path of NGDP to aim for, there is no particular reason to think that a time path with a 10% rate of growth of NGDP would generate a significantly lower rate of unemployment than a time path with a 5% rate of growth – but its relationship to reality is sufficiently tenuous that it is irrelevant to any discussion of policy alternatives unless the economies under discussion are already close to being in equilibrium.

About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
