Archive for the 'Paul Krugman' Category

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he is, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely less fervor by the left than by the right. It’s only some eccentric economists – from diverse positions on the political spectrum – who have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to work, as the Sonnenschein-Mantel-Debreu Theorem shows.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, market prices determined in individual markets need not ensure that all markets clear simultaneously or satisfy the optimality conditions.

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price-adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.
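To see the point concretely, consider a toy numerical example (my own illustration, in the spirit of Scarf’s famous counterexample, with invented coefficients, not anything drawn from Krugman’s post). Each price responds only to its own market’s excess demand, yet because each market’s excess demand depends on the other market’s price, the adjustment process orbits the equilibrium instead of converging to it:

    # Two interdependent markets; each price adjusts only to its own excess demand.
    # Excess demands are z = A @ (p - p_star), with equilibrium at p_star.
    # The off-diagonal (cross-market) terms are invented for illustration.
    import numpy as np

    A = np.array([[0.0, -1.0],   # market 1's excess demand falls when p2 rises
                  [1.0,  0.0]])  # market 2's excess demand rises when p1 rises
    p_star = np.array([1.0, 1.0])
    p = np.array([1.1, 1.0])     # start slightly away from equilibrium
    h = 0.01                     # speed of price adjustment

    for t in range(20000):
        z = A @ (p - p_star)     # excess demand in each market at current prices
        p = p + h * z            # each price chases its own market's imbalance

    print(np.linalg.norm(p - p_star))  # the gap from equilibrium has grown, not shrunk

Each market obeys the simple rule, but the cross-market feedback that the rule ignores is exactly what the Sonnenschein-Mantel-Debreu results leave unrestricted, so nothing guarantees that the rule ever finds the equilibrium price vector.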

In the standard microeconomics we are taught in econ 1 and econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of second best is the statement by the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people that are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. The idea that a partial-equilibrium response, a fall in money wages, can by itself remedy a general-disequilibrium disorder is untenable. Keynes and the Keynesians were therefore completely wrong to accuse Say of committing a fallacy in diagnosing the cause of depressions. The only fallacy lay in the assumption that market adjustments would automatically ensure the restoration of something resembling full-employment equilibrium.

Repeat after Me: Inflation’s the Cure not the Disease

Last week Martin Feldstein triggered a fascinating four-way exchange with a post explaining yet again why we still need to be worried about inflation. Tony Yates responded first with an explanation of why money printing doesn’t work at the zero lower bound (aka liquidity trap), leading Paul Krugman to comment wearily about the obtuseness of all those right-wingers who just can’t stop obsessing about the non-existent inflation threat when, all along, it was crystal clear that in a liquidity trap, printing money is useless.

I’m still not sure why relatively moderate conservatives like Feldstein didn’t find all this convincing back in 2009. I get, I think, why politics might predispose them to see inflation risks everywhere, but this was as crystal-clear a proposition as I’ve ever seen. Still, even if you managed to convince yourself that the liquidity-trap analysis was wrong six years ago, by now you should surely have realized that Bernanke, Woodford, Eggertsson, and, yes, me got it right.

But no — it’s a complete puzzle. Maybe it’s because those tricksy Fed officials started paying all of 25 basis points on reserves (Japan never paid such interest). Anyway, inflation is just around the corner, the same way it has been all these years.

Which surprisingly (not least to Krugman) led Brad DeLong to rise to Feldstein’s defense (well, sort of), pointing out that there is a respectable argument for why, even if money printing is not immediately effective at the zero lower bound, it could still be effective down the road. The mere fact that inflation has been consistently below 2% since the crash (except for a short blip when oil prices spiked in 2011-12) therefore doesn’t mean that inflation might not pick up quickly once inflation expectations pick up a bit, triggering an accelerating and self-sustaining inflation as all those hitherto idle balances start gushing into circulation.

That argument drew a slightly dyspeptic response from Krugman, who again pointed out, as had Tony Yates, that at the zero lower bound the demand for cash is virtually unlimited, so that there is no tendency for monetary expansion to raise prices, as if DeLong did not already know that. For some reason, Krugman seems unwilling to accept the implication of the argument in his own 1998 paper that he cites frequently: that for an increase in the money stock to raise the price level – note the implicit assumption that the real demand for money does not change – the increase must be expected to be permanent. (I also note that the argument had been made almost 20 years earlier by Jack Hirshleifer, in his Fisherian text on capital theory, Investment, Interest and Capital.) Thus, on Krugman’s own analysis, the effect of an increase in the money stock is expectations-dependent. A change in monetary policy will be inflationary if it is expected to be inflationary, and it will not be inflationary if it is not expected to be inflationary. And Krugman even quotes himself on the point, referring to

my call for the Bank of Japan to “credibly promise to be irresponsible” — to make the expansion of the base permanent, by committing to a relatively high inflation target. That was the main point of my 1998 paper!

So the question whether the monetary expansion since 2008 will ever turn out to be inflationary depends not on an abstract argument about the shape of the LM curve, but on the evolution of inflation expectations over time. I’m not sure that I’m persuaded by DeLong’s backward-induction argument – an argument that I like well enough to have used myself on occasion, while conceding that the logic may not hold in the real world – but there is no logical inconsistency between the backward-induction argument and Krugman’s credibility argument; they simply reflect different conjectures about the evolution of inflation expectations in a world in which there is uncertainty about what the future monetary policy of the central bank is going to be (in other words, a world like the one we inhabit).

Which brings me to the real point of this post: the problem with monetary policy since 2008 has been that the Fed has credibly adopted a 2% inflation target, a target that, it is generally understood, the Fed prefers to undershoot rather than overshoot. Thus, in operational terms, the actual goal is really less than 2%. As long as the inflation target credibly remains less than 2%, the argument about inflation risk is about the risk that the Fed will credibly revise its target upwards.

With both the Wicksellian natural real short-term rate of interest and the corresponding natural nominal rate probably below zero, it would have made sense to raise the inflation target to get the natural nominal short-term rate above zero. There were other reasons to raise the inflation target as well, e.g., providing debt relief to debtors, thereby benefitting not only debtors but also those creditors whose debtors would otherwise simply have defaulted.
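The logic of raising the target to escape the zero bound is just the Fisher equation. As a restatement (mine, with an illustrative number, not anything in the original post): the natural nominal short-term rate i* is the natural real rate r* plus the credible inflation target π^T,

\[ i^{*} = r^{*} + \pi^{T}, \]

so if, for illustration, r* = −1%, any target above 1% lifts i* above zero and restores room for conventional interest-rate policy.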

Krugman takes it for granted that monetary policy is impotent at the zero lower bound, but that impotence is not inherent; it is self-imposed by the credibility of the Fed’s own inflation target. To be sure, changing the inflation target is not a decision that we would want the Fed to take lightly, because it opens up some very tricky time-inconsistency problems. However, in a crisis, you may have to take a chance and hope that credibility can be restored by future responsible behavior once things get back to normal.

In this vein, I am reminded of the 1930 exchange between Hawtrey and Hugh Pattison Macmillan, chairman of the Committee on Finance and Industry, when Hawtrey, testifying before the Committee, suggested that the Bank of England reduce Bank Rate even at the risk of endangering the convertibility of sterling into gold (England eventually left the gold standard a little over a year later):

MACMILLAN. . . . the course you suggest would not have been consistent with what one may call orthodox Central Banking, would it?

HAWTREY. I do not know what orthodox Central Banking is.

MACMILLAN. . . . when gold ebbs away you must restrict credit as a general principle?

HAWTREY. . . . that kind of orthodoxy is like conventions at bridge; you have to break them when the circumstances call for it. I think that a gold reserve exists to be used. . . . Perhaps once in a century the time comes when you can use your gold reserve for the governing purpose, provided you have the courage to use practically all of it.

Of course the best evidence for the effectiveness of monetary policy at the zero lower bound was provided three years later, in April 1933, when FDR suspended the gold standard in the US, causing the dollar to depreciate against gold, triggering an immediate rise in US prices (wholesale prices rising 14% from April through July) and the fastest real recovery in US history (industrial output rising by over 50% over the same period). A recent paper by Andrew Jalil and Gisela Rua documents this amazing recovery from the depths of the Great Depression and the crucial role that changing inflation expectations played in stimulating the recovery. They also make a further important point: that by announcing a price level target, FDR both accelerated the recovery and prevented expectations of inflation from increasing without limit. The 1933 episode suggests that a sharp, but limited, increase in the price-level target would generate a faster and more powerful output response than an incremental increase in the inflation target. Unfortunately, after the 2008 downturn we got neither.

Maybe it’s too much to expect that an unelected central bank would take it upon itself to adopt as a policy goal a substantial increase in the price level. Had the Fed announced such a goal after the 2008 crisis, it would have invited a potentially fatal attack, and not just from the usual right-wing suspects, on its institutional independence. Price stability is, after all, part of the dual mandate that the Fed is legally bound to pursue. And it was FDR, not the Fed, that took the US off the gold standard.

But even so, we at least ought to be clear that if monetary policy is impotent at the zero lower bound, the impotence is not caused by any inherent weakness, but by the institutional and political constraints under which it operates in a constitutional system. And maybe there is no better argument for nominal GDP level targeting than that it offers a practical and politically defensible way of allowing monetary policy to be effective at the zero lower bound.

Paul Krugman on Tricky Urban Economics

Paul Krugman has a post about a New Yorker piece by Tim Wu discussing the surprising and disturbing increase in vacant storefronts in the very prosperous and desirable West Village in Lower Manhattan. I agree with most of what Krugman has to say, but I was struck by what seemed to me a misplaced emphasis in his post. My comment is not meant so much as a criticism as an observation on the complexity of the forces that affect life in the city, a complexity that makes it tricky to offer general ideological prescriptions for policy. Krugman warns against adopting a free-market ideological stance – which is fine – but fails to observe that statist interventionism has had far more devastating effects on urban life. We should be wary of both extremes.

Krugman starts off his discussion with the following statement, with which, in principle, I don’t take issue, but which is made so emphatically that it invites the opposite mistake from the one that Krugman warns against.

First, when it comes to things that make urban life better or worse, there is absolutely no reason to have faith in the invisible hand of the market. External economies are everywhere in an urban environment. After all, external economies — the perceived payoff to being near other people engaged in activities that generate positive spillovers — is the reason cities exist in the first place. And this in turn means that market values can very easily produce destructive incentives. When, say, a bank branch takes over the space formerly occupied by a beloved neighborhood shop, everyone may be maximizing returns, yet the disappearance of that shop may lead to a decline in foot traffic, contribute to the exodus of a few families and their replacement by young bankers who are never home, and so on in a way that reduces the whole neighborhood’s attractiveness.

The basic point is surely correct; urban environments are highly susceptible, owing to their high population density, to both congestion and pollution, on the one hand, and to positive spillovers, on the other, and cities require a host of public services and amenities provided, more or less indiscriminately, to large numbers of people. Market incentives, to the exclusion of various kinds of collective action, cannot be relied upon to cope with congestion and pollution or to provide public services and amenities. But it is equally true that cities cannot function well without ample scope for private initiative and market exchange. The challenge for any city is to find a reasonable balance between allowing individuals to organize their lives, and pursue their own interests, as they see fit, and providing an adequate supply of public services and amenities, while limiting the harmful effects that individuals living in close proximity inevitably have on each other. It is certainly fair to point out that unfettered market forces alone can’t produce good outcomes in dense urban environments, and understandable that Krugman, a leading opponent of free-market dogmatism, would say so, but he curiously misses an opportunity, two paragraphs down, to make an equally cogent point about the dangers of going too far in the other direction.

Curiously, the missed opportunity arises just when, in the spirit of even-handedness and objectivity, Krugman acknowledges that increasing income equality does not necessarily enhance the quality of urban life.

Politically, I’d like to say that inequality is bad for urbanism. That’s far from obvious, however. Jane Jacobs wrote The Death and Life of Great American Cities right in the middle of the great postwar expansion, an era of widely shared economic growth, relatively equal income distribution, empowered labor — and collapsing urban life, as white families fled the cities and a combination of highway building and urban renewal destroyed many neighborhoods.

This just seems strange to me. Krugman focuses on declining income equality, as if that were what was driving the collapse in urban life, while simultaneously seeming to recognize, though with remarkable understatement, that the collapse coincided with white families fleeing cities and neighborhoods being destroyed by highway building and urban renewal, as if the highway building and the urban renewal were exogenous accidents that just then happened to be wreaking havoc on American urban centers. But urban renewal was a deliberate policy adopted by the federal government with the explicit aim of improving urban life, and Jane Jacobs wrote The Death and Life of Great American Cities precisely to show that large-scale redevelopment plans adopted to “renew” urban centers were actually devastating them. And the highway building that Krugman mentions was an integral part of a larger national plan to build the interstate highway system, a system that, to this day, is regarded as one of the great accomplishments of the federal government in the twentieth century, and a system that, by subsidizing the flight of white people to the suburbs, facilitated the very white flight lamented by Krugman. The collapse of urban life did not just happen; it was the direct result of policies adopted by the federal government.

In arguing for his fiscal stimulus package, Barack Obama, who ought to have known better, invoked the memory of the bipartisan consensus supporting the Interstate Highway Act. May God protect us from another such bipartisan consensus. I found the following excerpt from Eric Avila’s book The Folklore of the Freeway: Race and Revolt in the Modernist City, which is worth sharing:

In this age of divided government, we look to the 1950s as a golden age of bipartisan unity. President Barack Obama, a Democrat, often invokes the landmark passage of the 1956 Federal Aid Highway Act to remind the nation that Republicans and Democrats can unite under a shared sense of common purpose. Introduced by President Dwight Eisenhower, a Republican, the Federal Aid Highway Act, originally titled the National Interstate and Defense Highway Act, won unanimous support from Democrats and Republicans alike, uniting the two parties in a shared commitment to building a national highway infrastructure. This was big government at its biggest, the single largest federal expenditure in American history before the advent of the Great Society.

Yet although Congress unified around the construction of a national highway system, the American people did not. Contemporary nostalgia for bipartisan support around the Interstate Highway Act ignores the deep fissures that it inflicted on the American city after World War II: literally, by cleaving the urban built environment into isolated parcels of race and class, and figuratively, by sparking civic wars over the freeway’s threat to specific neighborhoods and communities. This book explores the conflicted legacy of that megaproject: even as the interstate highway program unified a nation around a 42,800-mile highway network, it divided the American people, as it divided their cities, fueling new social tensions that flared during the tumultuous 1960s.

Talk of a “freeway revolt” permeates the annals of American urban history. During the late 1960s and early 1970s, a generation of scholars and journalists introduced this term to describe the groundswell of grassroots opposition to urban highway construction. Their account saluted the urban women and men who stood up to state bulldozers, forging new civic strategies to rally against the highway-building juggernaut and to defeat the powerful interests it represented. It recounted these episodic victories with flair and conviction, doused with righteous invocations of “power to the people.” In the afterglow of the sixties, a narrative of the freeway revolt emerged: a grassroots uprising of civic-minded people, often neighbors, banding together to defeat the technocrats, the oil companies, the car manufacturers, and ultimately the state itself, saving the city from the onslaught of automobiles, expressways, gas stations, parking lots, and other civic detriments. This story has entered the lore of the sixties, a mythic “shout in the street” that proclaimed the death of the modernist city and its master plans.

By and large, however, the dominant narrative of the freeway revolt is a racialized story, describing the victories of white middle-class or affluent communities that mustered the resources and connections to force concessions from the state. If we look closely at where the freeway revolt found its greatest success—Cambridge, Massachusetts; Lower Manhattan; the French Quarter in New Orleans; Georgetown in Washington D.C.; Beverly Hills, California; Princeton, New Jersey; Fells Point in Baltimore—we discover what this movement was really about and whose interests it served. As bourgeois counterparts to the inner-city uprising, the disparate victories of the freeway revolt illustrate how racial and class privilege structure the metropolitan built environment, demonstrating the skewed geography of power in the postwar American city.

One of my colleagues once told me a joke: if future anthropologists want to find the remains of people of color in a postapocalypse America, they will simply have to find the ruins of the nearest freeway. Yet such collegial jocularity contained a sobering reminder that the victories associated with the freeway revolt usually did not extend to urban communities of color, where highway construction often took a disastrous toll. To greater and lesser degrees, race—racial identity and racial ideology—shaped the geography of highway construction in urban America, fueling new patterns of racial inequality that exacerbated an unfolding “urban crisis” in postwar America. In many southern cities, local city planners took advantage of federal moneys to target black communities point-blank; in other parts of the nation, highway planners found the paths of least resistance, wiping out black commercial districts, Mexican barrios, and Chinatowns and desecrating land sacred to indigenous peoples. The bodies and spaces of people of color, historically coded as “blight” in planning discourse, provided an easy target for a federal highway program that usually coordinated its work with private redevelopment schemes and public policies like redlining, urban renewal, and slum clearance.

One of my favorite posts in the nearly four years that I’ve been blogging was one with a horrible title: “Intangible Infrastructural Capital.” My main point in that post was that the huge investment in building physical infrastructure during the years of urban renewal and highway building was associated with the mindless destruction of vastly more valuable intangible infrastructure: knowledge, expectations (in both the positive and normative senses of that term), webs of social relationships and hierarchies, authority structures and informal mechanisms of social control that held communities together. I am neither a sociologist nor a social psychologist, but I have no doubt that the tragic dispersal of all those communities took an enormous physical, economic, and psychological toll on the displaced, forced to find new places to live, new environments to adapt to, often in brand-new dysfunctional communities bereft of the intangible infrastructure needed to preserve social order and peace. But don’t think that it was only cities that suffered. The horrific interstate highway system was also a (slow, but painful) death sentence for hundreds, if not thousands, of small towns, whose economic viability was undermined by the superhighways.

And what about all those vacant storefronts in the West Village? Tim Wu suggests that the owners are keeping the properties off the market in hopes of finding a really lucrative tenant, like maybe a bank branch. Maybe the city should tax properties kept vacant for more than two months by the owner after having terminated a tenant’s lease.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:

[Chart from Dornbusch and Fischer: the top half shows the simulated path of inflation, the bottom half the path of unemployment, under a gradual reduction in the rate of money growth.]
Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:

[Chart: US inflation and unemployment rates during the Volcker disinflation of the early 1980s.]
A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.
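The textbook exercise Krugman describes is easy to reproduce. Here is a minimal sketch (my own illustration; the Phillips-curve slope, the size of the unemployment surge, and the inflation target are all invented numbers) of a disinflation with an adaptive-expectations Phillips curve:

    # Adaptive-expectations Phillips curve: pi[t] = pi[t-1] - a*(u[t] - u_star).
    # All parameter values are invented for illustration.
    a, u_star = 0.5, 5.0       # Phillips-curve slope; natural rate of unemployment
    pi, target = 10.0, 3.0     # initial inflation; disinflation target

    for t in range(12):
        u = u_star + 3.0 if pi > target else u_star  # temporary unemployment surge
        pi = pi - a * (u - u_star)                   # lagged inflation ratchets down
        print(t, u, round(pi, 1))

    # Unemployment jumps to 8.0 while inflation falls from 10% toward the target,
    # then returns to 5.0, with inflation permanently lower thereafter.

The temporary surge in unemployment buys a permanent reduction in inflation, which is the pattern in the Dornbusch-Fischer chart and, in broad outline, in the Volcker data.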

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics: whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, leading people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening arrived. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, growth rates over the target range being associated with falling stock and commodities prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

Is John Cochrane Really an (Irving) Fisherian?

I’m pretty late getting to this Wall Street Journal op-ed by John Cochrane (here’s an ungated version), and Noah Smith has already given it an admirable working over, but, even after Noah Smith, there’s an assertion or two by Cochrane that could use a bit of elucidation. Like this one:

Keynesians told us that once interest rates got stuck at or near zero, economies would fall into a deflationary spiral. Deflation would lower demand, causing more deflation, and so on.

Noah seems to think this is a good point, but I guess that I am less easily impressed than Noah. Feeling no need to provide citations for the views he attributes to Keynesians, Cochrane does not bother to tell us which Keynesian has asserted that the zero lower bound creates the danger of a deflationary spiral, though in a previous blog post Cochrane does provide a number of statements by Paul Krugman (who I guess qualifies as the default representative of all Keynesians) about the danger of a deflationary spiral. Interestingly, all but one of those quotations were from 2009 when, in the wake of the fall 2008 financial crisis, with a nasty little relapse in early 2009 having driven the stock market to a 12-year low and the Fed finally launching its first round of quantitative easing, the threat of a deflationary spiral did not seem at all remote.

Now an internet search shows that Krugman does have a model showing that a downward deflationary spiral is possible at the zero lower bound. I would just note, for the record, that Earl Thompson, in an unpublished 1976 paper, derived a similar result from an aggregate model based on a neo-classical aggregate production function with the Keynesian expenditure functions (through application of Walras’s Law) excluded. So what’s Keynes got to do with it?

But even more remarkable is that the most famous model of a deflationary downward spiral was constructed not by a Keynesian, but by the grandfather of modern Monetarism, Irving Fisher, in his famous 1933 paper on debt deflation, “The Debt-Deflation Theory of Great Depressions.” So the suggestion that there is something uniquely Keynesian about a downward deflationary spiral at the zero lower bound is simply not credible.

Cochrane also believes that because inflation has stabilized at very low levels, slow growth cannot be blamed on insufficient aggregate demand.

Zero interest rates and low inflation turn out to be quite a stable state, even in Japan. Yes, Japan is growing more slowly than one might wish, but with 3.5% unemployment and no deflationary spiral, it’s hard to blame slow growth on lack of “demand.”

Except that, since 2009 when the threat of a downward deflationary spiral seemed more visibly on the horizon than it does now, Krugman has consistently argued that, at the zero lower bound, chronic stagnation and underemployment are perfectly capable of coexisting with a positive rate of inflation. So it’s not clear why Cochrane thinks the coincidence of low inflation and sluggish economic growth for five years since the end of the 2008-09 downturn somehow refutes Krugman’s diagnosis of what has been ailing the economy in recent years.

And, again, what’s even more interesting is that the proposition that there can be insufficient aggregate demand, even with positive inflation, follows directly from the Fisher equation, of which Cochrane claims to be a fervent devotee. After all, if the real rate of interest is negative, then the Fisher equation tells us that the equilibrium expected rate of inflation cannot be less than the absolute value of the real rate of interest. So if, at the zero lower bound, the real rate of interest is minus 1%, then the equilibrium expected rate of inflation is 1%, and if the actual rate of inflation equals the equilibrium expected rate, then the economy, even if it is operating at less than full employment and less than its potential output, may be in a state of macroeconomic equilibrium. And it may not be possible to escape from that low-level equilibrium and increase output and employment without a burst of unexpected inflation, providing a self-sustaining stimulus to economic growth, thereby moving the economy to a higher-level equilibrium with a higher real rate of interest than the rate corresponding to lower-level equilibrium. If I am not mistaken, Roger Farmer has been making an argument along these lines.
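To restate that Fisher-equation arithmetic in symbols (my own summary, not notation from either post): with i the nominal rate, r the real rate, and π^e expected inflation, the Fisher equation and the zero lower bound give

\[ i = r + \pi^{e} \ge 0 \quad\Longrightarrow\quad \pi^{e} \ge -r, \]

so with r = −1%, equilibrium expected inflation at the zero lower bound is 1%, even while output and employment remain below their potential levels.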

Given the close correspondence between the Keynesian and Fisherian analyses of what happens in the neighborhood of the zero lower bound, I am really curious to know what part of the Fisherian analysis Cochrane finds difficult to comprehend.

Forget the Monetary Base and Just Pay Attention to the Price Level

Kudos to David Beckworth for eliciting a welcome concession or clarification from Paul Krugman that monetary policy is not necessarily ineffectual at the zero lower bound. The clarification is welcome because Krugman and Simon Wren-Lewis seemed to be making a big deal about insisting that monetary policy at the zero lower bound is useless if it affects only the current, but not the future, money supply, and touting the discovery as if it were a point that was not already well understood.

Now it’s true that Krugman is entitled to take credit for having come up with an elegant way of showing the difference between a permanent and a temporary increase in the monetary base, but it’s a point that, WADR, was understood even before Krugman. See, for example, the discussion in chapter 5 of Jack Hirshleifer’s textbook on capital theory (published in 1970), Investment, Interest and Capital, showing that the Fisher equation follows straightforwardly in an intertemporal equilibrium model, so that the nominal interest rate can be decomposed into a real component and an expected-inflation component. If holding money is costless, then the nominal rate of interest cannot be negative, and expected deflation cannot exceed the equilibrium real rate of interest. This implies that, at the zero lower bound, the current price level cannot be raised without raising the future price level proportionately. That is all Krugman was saying in asserting that monetary policy is ineffective at the zero lower bound, even though he couched the analysis in terms of the current and future money supplies rather than in terms of the current and future price levels. But the entire argument is implicit in the Fisher equation. And contrary to Krugman, the IS-LM model (with which I am certainly willing to coexist) offers no unique insight into this proposition; it would be remarkable if it did, because the IS-LM model in essence is a static model that has to be re-engineered to be used in an intertemporal setting.
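To restate the key step in symbols (my own gloss, using the log-approximation to the Fisher equation):

\[ i_t = r_t + \left( \ln P^{e}_{t+1} - \ln P_t \right) \ge 0 . \]

When the zero bound binds, ln P^e_{t+1} = ln P_t − r_t; holding the real rate fixed, any increase in ln P_t requires an equal increase in ln P^e_{t+1}, which is just the statement that the current price level cannot be raised without raising the expected future price level proportionately.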

Here is how Hirshleifer concludes his discussion:

The simple two-period model of choice between dated consumptive goods and dated real liquidities has been shown to be sufficiently comprehensive as to display both the quantity theorists’ and the Keynesian theorists’ predicted results consequent upon “changes in the money supply.” The seeming contradiction is resolved by noting that one result or the other follows, or possibly some mixture of the two, depending upon the precise meaning of the phrase “changes in the quantity of money.” More exactly, the result follows from the assumption made about changes in the time-distributed endowments of money and consumption goods.  pp. 150-51

Another passage from Hirshleifer is also worth quoting:

Imagine a financial “panic.” Current money is very scarce relative to future money – and so monetary interest rates are very high. The monetary authorities might then provide an increment [to the money stock] while announcing that an equal aggregate amount of money would be retired at some date thereafter. Such a change making current money relatively more plentiful (or less scarce) than before in comparison with future money, would clearly tend to reduce the monetary rate of interest. (p. 149)

In this passage Hirshleifer accurately describes the objective of Fed policy since the crisis: provide as much liquidity as needed to prevent a panic, but without even trying to generate a substantial increase in aggregate demand by increasing inflation or expected inflation. The refusal to increase aggregate demand was implicit in the Fed’s refusal to increase its inflation target.

However, I do want to make explicit a point of disagreement between me and Hirshleifer, Krugman and Beckworth. The point is more conceptual than analytical: although the analysis of monetary policy can formally be carried out either in terms of current and future money supplies, as Hirshleifer, Krugman and Beckworth do, or in terms of current and future price levels, I prefer to carry it out in terms of price levels. For one thing, reasoning in terms of price levels immediately puts you in the framework of the Fisher equation, while thinking in terms of current and future money supplies puts you in the framework of the quantity theory, which I always prefer to avoid.

The problem with the quantity-theory framework is that it assumes that the quantity of money is a policy variable over which a monetary authority can exercise effective control — a mistaken assumption, imprinted in our economic intuition by two or three centuries of quantity-theorizing and regrettably reinforced in the second half of the twentieth century by the preposterous theoretical detour of monomaniacal Friedmanian Monetarism, as if there were no such thing as an identification problem. Thus, to analyze monetary policy by doing thought experiments that change the quantity of money is likely to mislead or confuse.

I can’t think of an effective monetary policy that was ever implemented by targeting a monetary aggregate. The optimal time path of a monetary aggregate can never be specified in advance, so that trying to target any monetary aggregate will inevitably fail, thereby undermining the credibility of the monetary authority. Effective monetary policies have instead tried to target some nominal price while allowing monetary aggregates to adjust automatically given that price. Sometimes the price being targeted has been the conversion price of money into a real asset, as was the case under the gold standard, or an exchange rate between one currency and another, as the Swiss National Bank is now doing with the franc/euro exchange rate. Monetary policies aimed at stabilizing a single price are easy to implement and can therefore be highly credible, but they are vulnerable to sudden changes with highly deflationary or inflationary implications. Nineteenth century bimetallism was an attempt to avoid or at least mitigate such risks. We now prefer inflation targeting, but we have learned (or at least we should have) from the Fed’s focus on inflation in 2008 that inflation targeting can also lead to disastrous consequences.

I emphasize the distinction between targeting monetary aggregates and targeting the price level, because David Beckworth in his post is so focused on showing 1) that the expansion of the Fed’s balance sheet under QE has been temporary and 2) that, to have been effective in raising aggregate demand at the zero lower bound, the increase in the monetary base needed to be permanent. And I say: both of the facts cited by David are implied by the Fed’s not having raised its inflation target or, preferably, replaced its inflation target with a sufficiently high price-level target. With a higher inflation target or a suitable price-level target, the monetary base would have taken care of itself.

PS If your name is Scott Sumner, you have my permission to insert “NGDP” wherever “price level” appears in this post.

Temporary Equilibrium One More Time

It’s always nice to be noticed, especially by Paul Krugman. So I am not upset, but in his response to my previous post, I don’t think that Krugman quite understood what I was trying to convey. I will try to be clearer this time. It will be easiest if I just quote from his post and insert my comments or explanations.

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of “temporary equilibrium”.

Actually, that’s not what I was trying to say. I wasn’t making any explicit connection between Hicks’s temporary-equilibrium concept from Value and Capital and the IS-LM model that he introduced two years earlier in his paper on Keynes and the Classics. Of course that doesn’t mean that the temporary equilibrium method isn’t connected to the IS-LM model; one would need to do a more in-depth study than I have done of Hicks’s intellectual development to determine how much IS-LM was influenced by Hicks’s interest in intertemporal equilibrium and in the method of temporary equilibrium as a way of analyzing intertemporal issues.

This involves using quasi-static methods to analyze a dynamic economy, not because you don’t realize that it’s dynamic, but simply as a tool. In particular, V&C discussed at some length a temporary equilibrium in a three-sector economy, with goods, bonds, and money; that’s essentially full-employment IS-LM, which becomes the 1937 version with some price stickiness. I wrote about that a long time ago.

Now I do think that it’s fair to say that the IS-LM model was very much in the spirit of Value and Capital, in which Hicks deployed an explicit general-equilibrium model to analyze an economy at a Keynesian level of aggregation: goods, bonds, and money. But the temporary-equilibrium aspect of Value and Capital went beyond the Keynesian analysis, because the temporary equilibrium analysis was explicitly intertemporal, all agents formulating plans based on explicit future price expectations, and the inconsistency between expected prices and actual prices was explicitly noted, while in the General Theory, and in IS-LM, price expectations were kept in the background, making an appearance only in the discussion of the marginal efficiency of capital.

So is IS-LM really Keynesian? I think yes — there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story — no more periods, forced saving, boom and bust, instead a focus on how economies can stay depressed. Anyway, does it matter? The real question is whether the method of temporary equilibrium is useful.

That is precisely where I think Krugman’s grasp on the concept of temporary equilibrium is slipping. Temporary equilibrium is indeed about periods, and it is explicitly dynamic. In my previous post I referred to Hicks’s discussion in Capital and Growth, about 25 years after writing Value and Capital, in which he wrote

The Temporary Equilibrium model of Value and Capital, also, is “quasi-static” [like the Keynes theory] – in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.

As I read this passage now — and it really bothered me when I read it as I was writing my previous post — I realize that what Hicks was saying was that his desire to conform to the Keynesian paradigm led him to compromise the integrity of the temporary equilibrium model, by forcing it to be “quasi-static” when it really was essentially dynamic. The challenge has been to convert a “quasi-static” IS-LM model into something closer to the temporary-equilibrium method that Hicks introduced, but did not fully execute in Value and Capital.

What are the alternatives? One — which took over much of macro — is to do intertemporal equilibrium all the way, with consumers making lifetime consumption plans, prices set with the future rationally expected, and so on. That’s DSGE — and I think Glasner and I agree that this hasn’t worked out too well. In fact, economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket.

Yes, I agree! Rational expectations, full-equilibrium models have turned out to be a regression, not an advance. But the way I would make the point is that the temporary-equilibrium method provides a sort of a middle way to do intertemporal dynamics without presuming that consumption plans and investment plans are always optimal.

What about disequilibrium dynamics all the way? Basically, I have never seen anyone pull this off. Like the forever-equilibrium types, constant-disequilibrium theorists have a remarkable tendency to make elementary conceptual mistakes.

Again, I agree. We can’t work without some sort of equilibrium conditions, but temporary equilibrium provides a way to keep the discipline of equilibrium without assuming (nearly) full optimality.

Still, Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised.

Perhaps I was unclear, but I thought I was saying just the opposite. It’s the “quasi-static” IS-LM model, not temporary equilibrium, that fails to take account of the dynamics produced by revised expectations.

I guess I’d say two things. First, I’m not sure that this is always true. Hicks did indeed assume static expectations — the future will be like the present; but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right.

Again, I agree. There may be self-fulfilling expectations of a low-income, low-employment equilibrium. But I don’t think that that is the only explanation for such a situation, and certainly not for the downturn that can lead to such an equilibrium.

Second, those of us who use temporary equilibrium often do think in terms of dynamics as expectations adjust. In fact, you could say that the textbook story of how the short-run aggregate supply curve adjusts over time, eventually restoring full employment, is just that kind of thing. It’s not a great story, but it is the kind of dynamics Glasner wants — and it’s Econ 101 stuff.

Again, I agree. It’s not a great story, but, like it or not, the story is not a Keynesian story.

So where does this leave us? I’m not sure, but my impression is that Krugman, in his admiration for the IS-LM model, is trying too hard to identify IS-LM with the temporary-equilibrium approach, which I think represented a major conceptual advance over both the Keynesian model and the IS-LM representation of the Keynesian model. Temporary equilibrium and IS-LM are not necessarily inconsistent, but I mainly wanted to point out that the two aren’t the same, and shouldn’t be conflated.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
