
Repeat after Me: Inflation’s the Cure not the Disease

Last week Martin Feldstein triggered a fascinating four-way exchange with a post explaining yet again why we still need to be worried about inflation. Tony Yates responded first with an explanation of why money printing doesn’t work at the zero lower bound (aka liquidity trap), leading Paul Krugman to comment wearily about the obtuseness of all those right-wingers who just can’t stop obsessing about the non-existent inflation threat when, all along, it was crystal clear that in a liquidity trap, printing money is useless.

I’m still not sure why relatively moderate conservatives like Feldstein didn’t find all this convincing back in 2009. I get, I think, why politics might predispose them to see inflation risks everywhere, but this was as crystal-clear a proposition as I’ve ever seen. Still, even if you managed to convince yourself that the liquidity-trap analysis was wrong six years ago, by now you should surely have realized that Bernanke, Woodford, Eggertsson, and, yes, I got it right.

But no — it’s a complete puzzle. Maybe it’s because those tricksy Fed officials started paying all of 25 basis points on reserves (Japan never paid such interest). Anyway, inflation is just around the corner, the same way it has been all these years.

Which, surprisingly (not least to Krugman), led Brad DeLong to rise to Feldstein’s defense (well, sort of). DeLong pointed out that there is a respectable argument for why, even if money printing is not immediately effective at the zero lower bound, it could still be effective down the road. On that view, the mere fact that inflation has been consistently below 2% since the crash (except for a short blip when oil prices spiked in 2011-12) doesn’t mean that inflation might not pick up quickly once inflation expectations pick up a bit, triggering an accelerating and self-sustaining inflation as all those hitherto idle balances start gushing into circulation.

That argument drew a slightly dyspeptic response from Krugman who again pointed out, as had Tony Yates, that at the zero lower bound, the demand for cash is virtually unlimited so that there is no tendency for monetary expansion to raise prices, as if DeLong did not already know that. For some reason, Krugman seems unwilling to accept the implication of the argument in his own 1998 paper that he cites frequently: that for an increase in the money stock to raise the price level – note that there is an implicit assumption that the real demand for money does not change – the increase must be expected to be permanent. (I also note that the argument had been made almost 20 years earlier by Jack Hirshleifer, in his Fisherian text on capital theory, Capital, Interest and Investment.) Thus, on Krugman’s own analysis, the effect of an increase in the money stock is expectations-dependent. A change in monetary policy will be inflationary if it is expected to be inflationary, and it will not be inflationary if it is not expected to be inflationary. And Krugman even quotes himself on the point, referring to

my call for the Bank of Japan to “credibly promise to be irresponsible” — to make the expansion of the base permanent, by committing to a relatively high inflation target. That was the main point of my 1998 paper!
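The logic of that argument can be compressed into a line or two of algebra (a bare-bones sketch of my own, not Krugman’s full model). With the nominal rate at zero, the consumption Euler equation ties today’s price level to the expected future price level:

\[ \frac{1}{P_t} = \beta (1+i_t)\,\frac{1}{P^e_{t+1}}, \qquad i_t = 0 \;\Longrightarrow\; P_t = \frac{P^e_{t+1}}{\beta}, \]

where \(P^e_{t+1}\) is pinned down by the money stock expected to prevail after the trap ends. A monetary injection expected to be withdrawn leaves \(P^e_{t+1}\), and hence \(P_t\), unchanged; only an injection expected to be permanent raises prices today.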

So the question whether the monetary expansion since 2008 will ever turn out to be inflationary depends not on an abstract argument about the shape of the LM curve, but on the evolution of inflation expectations over time. I’m not sure that I’m persuaded by DeLong’s backward induction argument – an argument that I like enough to have used myself on occasion while conceding that the logic may not hold in the real world – but there is no logical inconsistency between the backward-induction argument and Krugman’s credibility argument; they simply reflect different conjectures about the evolution of inflation expectations in a world in which there is uncertainty about what the future monetary policy of the central bank is going to be (in other words, a world like the one we inhabit).

Which brings me to the real point of this post: the problem with monetary policy since 2008 has been that the Fed has credibly adopted a 2% inflation target, a target that, it is generally understood, the Fed prefers to undershoot rather than overshoot. Thus, in operational terms, the actual goal is really less than 2%. As long as the inflation target credibly remains less than 2%, the argument about inflation risk is about the risk that the Fed will credibly revise its target upwards.

With both the Wicksellian natural real and natural nominal short-term rates of interest probably below zero, it would have made sense to raise the inflation target to get the natural nominal short-term rate above zero. There were other reasons to raise the inflation target as well, e.g., providing debt relief to debtors, thereby benefitting not only debtors but also those creditors whose debtors would otherwise simply have defaulted.

Krugman takes it for granted that monetary policy is impotent at the zero lower bound, but that impotence is not inherent; it is self-imposed by the credibility of the Fed’s own inflation target. To be sure, changing the inflation target is not a decision that we would want the Fed to take lightly, because it opens up some very tricky time-inconsistency problems. However, in a crisis, you may have to take a chance and hope that credibility can be restored by future responsible behavior once things get back to normal.

In this vein, I am reminded of the 1930 exchange between Hawtrey and Hugh Pattison Macmillan, chairman of the Committee on Finance and Industry, when Hawtrey, testifying before the Committee, suggested that the Bank of England reduce Bank Rate even at the risk of endangering the convertibility of sterling into gold (England eventually left the gold standard a little over a year later):

MACMILLAN. . . . the course you suggest would not have been consistent with what one may call orthodox Central Banking, would it?

HAWTREY. I do not know what orthodox Central Banking is.

MACMILLAN. . . . when gold ebbs away you must restrict credit as a general principle?

HAWTREY. . . . that kind of orthodoxy is like conventions at bridge; you have to break them when the circumstances call for it. I think that a gold reserve exists to be used. . . . Perhaps once in a century the time comes when you can use your gold reserve for the governing purpose, provided you have the courage to use practically all of it.

Of course the best evidence for the effectiveness of monetary policy at the zero lower bound was provided three years later, in April 1933, when FDR suspended the gold standard in the US, causing the dollar to depreciate against gold, triggering an immediate rise in US prices (wholesale prices rising 14% from April through July) and the fastest real recovery in US history (industrial output rising by over 50% over the same period). A recent paper by Andrew Jalil and Gisela Rua documents this amazing recovery from the depths of the Great Depression and the crucial role that changing inflation expectations played in stimulating the recovery. They also make a further important point: that by announcing a price level target, FDR both accelerated the recovery and prevented expectations of inflation from increasing without limit. The 1933 episode suggests that a sharp, but limited, increase in the price-level target would generate a faster and more powerful output response than an incremental increase in the inflation target. Unfortunately, after the 2008 downturn we got neither.

Maybe it’s too much to expect that an unelected central bank would take it upon itself to adopt as a policy goal a substantial increase in the price level. Had the Fed announced such a goal after the 2008 crisis, it would have invited a potentially fatal attack, and not just from the usual right-wing suspects, on its institutional independence. Price stability is, after all, part of the dual mandate that the Fed is legally bound to pursue. And it was FDR, not the Fed, that took the US off the gold standard.

But even so, we at least ought to be clear that if monetary policy is impotent at the zero lower bound, the impotence is not caused by any inherent weakness, but by the institutional and political constraints under which it operates in a constitutional system. And maybe there is no better argument for nominal GDP level targeting than that it offers a practical and politically acceptable way of allowing monetary policy to be effective at the zero lower bound.

Paul Krugman on Tricky Urban Economics

Paul Krugman has a post about a New Yorker piece by Tim Wu discussing the surprising and disturbing increase in vacant storefronts in the very prosperous and desirable West Village in Lower Manhattan. I agree with most of what Krugman has to say, but I was struck by what seemed to me to be a misplaced emphasis in his post. My comment is not meant so much as a criticism, as an observation on the complexity of the forces that affect life in the city, which makes it tricky to offer any sort of general ideological prescriptions for policy. Krugman warns against adopting a free-market ideological stance – which is fine – but fails to observe that statist interventionism has had far more devastating effects on urban life. We should be wary of both extremes.

Krugman starts off his discussion with the following statement, with which, in principle, I don’t take issue, but which is made so emphatically that it suggests the opposite mistake of the one that Krugman warns against.

First, when it comes to things that make urban life better or worse, there is absolutely no reason to have faith in the invisible hand of the market. External economies are everywhere in an urban environment. After all, external economies — the perceived payoff to being near other people engaged in activities that generate positive spillovers — is the reason cities exist in the first place. And this in turn means that market values can very easily produce destructive incentives. When, say, a bank branch takes over the space formerly occupied by a beloved neighborhood shop, everyone may be maximizing returns, yet the disappearance of that shop may lead to a decline in foot traffic, contribute to the exodus of a few families and their replacement by young bankers who are never home, and so on in a way that reduces the whole neighborhood’s attractiveness.

The basic point is surely correct; urban environments are highly susceptible, owing to their high population density, to both congestion and pollution, on the one hand, and to positive spillovers, on the other, and cities require a host of public services and amenities provided, more or less indiscriminately, to large numbers of people. Market incentives, to the exclusion of various kinds of collective action, cannot be relied upon to cope with congestion and pollution or to provide public services and amenities. But it is equally true that cities cannot function well without ample scope for private initiative and market exchange. The challenge for any city is to find a reasonable balance between allowing individuals to organize their lives, and pursue their own interests, as they see fit, and providing an adequate supply of public services and amenities, while limiting the harmful effects that individuals living in close proximity inevitably have on each other. It is certainly fair to point out that unfettered market forces alone can’t produce good outcomes in dense urban environments, and understandable that Krugman, a leading opponent of free-market dogmatism, would say so, but he curiously misses an opportunity, two paragraphs down, to make an equally cogent point about the dangers of going too far in the other direction.

Curiously, the missed opportunity arises just when, in the spirit of even-handedness and objectivity, Krugman acknowledges that increasing income equality does not necessarily enhance the quality of urban life.

Politically, I’d like to say that inequality is bad for urbanism. That’s far from obvious, however. Jane Jacobs wrote The Death and Life of Great American Cities right in the middle of the great postwar expansion, an era of widely shared economic growth, relatively equal income distribution, empowered labor — and collapsing urban life, as white families fled the cities and a combination of highway building and urban renewal destroyed many neighborhoods.

This just seems strange to me. Krugman focuses on declining income equality, as if that were what was driving the collapse in urban life, while simultaneously seeming to recognize, though with remarkable understatement, that the collapse coincided with white families fleeing cities and neighborhoods being destroyed by highway building and urban renewal, as if the highway building and the urban renewal were exogenous accidents that just then happened to be wreaking havoc on American urban centers. But urban renewal was a deliberate policy adopted by the federal government with the explicit aim of improving urban life, and Jane Jacobs wrote The Death and Life of Great American Cities precisely to show that large-scale redevelopment plans adopted to “renew” urban centers were actually devastating them. And the highway building that Krugman mentions was an integral part of a larger national plan to build the interstate highway system, a system that, to this day, is regarded as one of the great accomplishments of the federal government in the twentieth century, and a system that, by subsidizing the flight of white people to the suburbs, facilitated the white flight lamented by Krugman. The collapse of urban life did not just happen; it was the direct result of policies adopted by the federal government.

In arguing for his fiscal stimulus package, Barack Obama, who ought to have known better, invoked the memory of the bipartisan consensus supporting the Interstate Highway Act. May God protect us from another such bipartisan consensus. I found the following excerpt from Eric Avila’s book The Folklore of the Freeway: Race and Revolt in the Modernist City, which is worth sharing:

In this age of divided government, we look to the 1950s as a golden age of bipartisan unity. President Barack Obama, a Democrat, often invokes the landmark passage of the 1956 Federal Aid Highway Act to remind the nation that Republicans and Democrats can unite under a shared sense of common purpose. Introduced by President Dwight Eisenhower, a Republican, the Federal Aid Highway Act, originally titled the National Interstate and Defense Highway Act, won unanimous support from Democrats and Republicans alike, uniting the two parties in a shared commitment to building a national highway infrastructure. This was big government at its biggest, the single largest federal expenditure in American history before the advent of the Great Society.

Yet although Congress unified around the construction of a national highway system, the American people did not. Contemporary nostalgia for bipartisan support around the Interstate Highway Act ignores the deep fissures that it inflicted on the American city after World War II: literally, by cleaving the urban built environment into isolated parcels of race and class, and figuratively, by sparking civic wars over the freeway’s threat to specific neighborhoods and communities. This book explores the conflicted legacy of that megaproject: even as the interstate highway program unified a nation around a 42,800-mile highway network, it divided the American people, as it divided their cities, fueling new social tensions that flared during the tumultuous 1960s.

Talk of a “freeway revolt” permeates the annals of American urban history. During the late 1960s and early 1970s, a generation of scholars and journalists introduced this term to describe the groundswell of grassroots opposition to urban highway construction. Their account saluted the urban women and men who stood up to state bulldozers, forging new civic strategies to rally against the highway-building juggernaut and to defeat the powerful interests it represented. It recounted these episodic victories with flair and conviction, doused with righteous invocations of “power to the people.” In the afterglow of the sixties, a narrative of the freeway revolt emerged: a grassroots uprising of civic-minded people, often neighbors, banding together to defeat the technocrats, the oil companies, the car manufacturers, and ultimately the state itself, saving the city from the onslaught of automobiles, expressways, gas stations, parking lots, and other civic detriments. This story has entered the lore of the sixties, a mythic “shout in the street” that proclaimed the death of the modernist city and its master plans.

By and large, however, the dominant narrative of the freeway revolt is a racialized story, describing the victories of white middle-class or affluent communities that mustered the resources and connections to force concessions from the state. If we look closely at where the freeway revolt found its greatest success—Cambridge, Massachusetts; Lower Manhattan; the French Quarter in New Orleans; Georgetown in Washington D.C.; Beverly Hills, California; Princeton, New Jersey; Fells Point in Baltimore—we discover what this movement was really about and whose interests it served. As bourgeois counterparts to the inner-city uprising, the disparate victories of the freeway revolt illustrate how racial and class privilege structure the metropolitan built environment, demonstrating the skewed geography of power in the postwar American city.

One of my colleagues once told me a joke: if future anthropologists want to find the remains of people of color in a postapocalypse America, they will simply have to find the ruins of the nearest freeway. Yet such collegial jocularity contained a sobering reminder that the victories associated with the freeway revolt usually did not extend to urban communities of color, where highway construction often took a disastrous toll. To greater and lesser degrees, race—racial identity and racial ideology—shaped the geography of highway construction in urban America, fueling new patterns of racial inequality that exacerbated an unfolding “urban crisis” in postwar America. In many southern cities, local city planners took advantage of federal moneys to target black communities point-blank; in other parts of the nation, highway planners found the paths of least resistance, wiping out black commercial districts, Mexican barrios, and Chinatowns and desecrating land sacred to indigenous peoples. The bodies and spaces of people of color, historically coded as “blight” in planning discourse, provided an easy target for a federal highway program that usually coordinated its work with private redevelopment schemes and public policies like redlining, urban renewal, and slum clearance.

One of my favorite posts in the nearly four years that I’ve been blogging was one with a horrible title: “Intangible Infrastructural Capital.” My main point in that post was that the huge investment in building physical infrastructure during the years of urban renewal and highway building was associated with the mindless destruction of vastly more valuable intangible infrastructure: knowledge, expectations (in both the positive and normative senses of that term), webs of social relationships and hierarchies, authority structures and informal mechanisms of social control that held communities together. I am neither a sociologist nor a social psychologist, but I have no doubt that the tragic dispersal of all those communities took an enormous physical, economic, and psychological toll on the displaced, forced to find new places to live, new environments to adapt to, often in brand-new dysfunctional communities bereft of the intangible infrastructure needed to preserve social order and peace. But don’t think that it was only cities that suffered. The horrific interstate highway system was also a (slow, but painful) death sentence for hundreds, if not thousands, of small towns, whose economic viability was undermined by the superhighways.

And what about all those vacant storefronts in the West Village? Tim Wu suggests that the owners are keeping the properties off the market in hopes of finding a really lucrative tenant, like maybe a bank branch. Maybe the city should tax properties that owners keep vacant for more than two months after terminating a tenant’s lease.

Is Finance Parasitic?

We all know what a parasite is: an organism that attaches itself to another organism, deriving nourishment from its host and in so doing weakening it, possibly making the host unviable and thereby undermining the parasite’s own existence. Ayn Rand and her all too numerous acolytes were and remain obsessed with parasitism, considering every form of voluntary charity, and especially government assistance to the poor and needy, a form of parasitism whereby the undeserving weak live off of and sap the strength and the industry of their betters: the able, the productive, and the creative.

In earlier posts, I have observed that a lot of what the financial industry does is not really productive of net benefits to society, the gains of some coming at the expense of others. This insight was developed by Jack Hirshleifer in his classic 1971 paper “The Private and Social Value of Information and the Reward to Inventive Activity.” Financial trading to a large extent involves nothing but the exchange of existing assets, real or financial, and the profit made by one trader is largely at the expense of the other party to the trade. Because the potential gain to one side of the transaction exceeds the net gain to society, there is a substantial incentive to devote resources to gaining any small and transient informational advantage that can help a trader buy or sell at the right time, making a profit at the expense of another. The social benefit from these valuable, but minimal and transitory, informational advantages is far less than the value of the resources devoted to obtaining them. Thus, much of what the financial sector is doing just drains resources from the rest of society, resources that could be put to far better and more productive use in other sectors of the economy.
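Hirshleifer’s point can be put schematically (my notation, not his): if advance knowledge of a price change allows a trader to capture a gain \(G\) from his counterparties, then

\[ \text{private return to the information} = G > 0, \qquad \text{social return} \approx 0, \]

so traders will collectively find it profitable to spend real resources up to \(G\) acquiring information whose social product is negligible, the excess of that spending over the social return being pure waste.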

So I was really interested to see Timothy Taylor’s recent blog post about Luigi Zingales’s Presidential Address to the American Finance Association in which Zingales, professor of Finance at the University of Chicago Business School, lectured his colleagues about taking a detached and objective position about the financial industry rather than acting as cheer-leaders for the industry, as, he believes, they have been all too inclined to do. Rather than discussing the incentive of the financial industry to over-invest in research in search of transient informational advantages that can be exploited, or to invest billions in high-frequency trading cables to make transient informational advantages more readily exploitable, Zingales mentions a number of other ways that the finance industry uses informational advantages to profit at the expense of the rest of society.

A couple of examples from Zingales.

Financial innovations. Every new product introduced by the financial industry is better understood by the supplier than the customer or client. How many clients or customers have been warned about the latent defects or risks in the products or instruments that they are buying? The doctrine of caveat emptor almost always applies, especially because the customers and clients are often considered to be informed and sophisticated. Informed and sophisticated? Perhaps, but that still doesn’t mean that there is no information asymmetry between such customers and the financial institution that creates financial innovations with the specific intent of exploiting the resulting informational advantage it gains over its clients.

As Zingales points out, we understand that doctors often exploit the informational asymmetry that they enjoy over their patients by overtreating, overmedicating, and overtesting their patients. They do so, notwithstanding the ethical obligations that they have sworn to observe when they become doctors. Are we to assume that the bankers and investment bankers and their cohorts in the financial industry, who have not sworn to uphold even minimal ethical standards, are any less inclined than doctors to exploit informational asymmetries that are no less extreme than those that exist between doctors and patients?

Another example. Payday loans are a routine part of life for many low-income people who live from paycheck to paycheck, and are in constant danger of being drawn into a downward spiral of overindebtedness, rising interest costs and financial ruin. Zingales points out that the ruinous effects of payday loans might be mitigated if borrowers chose installment loans instead of loans due in full at maturity. Unsophisticated borrowers seem to prefer single-repayment loans even though such loans in practice are more likely to lead to disaster than installment loans. Because total interest paid is greater under single-repayment loans, the payday-loan industry resists legislation requiring that payday loans be installment loans. Such legislation has been enacted in Colorado with favorable results. Zingales sums up the results of recent research about payday loans:

Given such a drastic reduction in fees paid to lenders, it is entirely relevant to consider what happened to the payday lending supply. In fact, supply of loans increased. The explanation relies upon the elimination of two inefficiencies. First, less bankruptcies. Second, the reduction of excessive entry in the sector. Half of Colorado’s stores closed in the three years following the reform, but each remaining store served 80 percent more customers, with no evidence of a reduced access to funds. This result is consistent with Avery and Samolyk (2010), who find that states with no rate limits tend to have more payday loan stores per capita. In other words, when payday lenders can charge very high rates, too many lenders enter the sector, reducing the profitability of each one of them. Similar to the real estate brokers, in the presence of free entry, the possibility of charging abnormal profit margins leads to too many firms in the industry, each operating below the optimal scale (Flannery and Samolyk, 2007), and thus making only normal profits. Interestingly, the efficient outcome cannot be achieved without mandatory regulation. Customers who are charged the very high rates do not fully appreciate that the cost is higher than if they were in a loan product which does not induce the spiral of unnecessary loan float and thus higher default. In the presence of this distortion, lenders find the opportunity to charge very high fees to be irresistible, a form of catering products to profit from cognitive limitations of the customers (Campbell, 2006). Hence, the payday loan industry has excessive entry and firms operating below the efficient scale. Competition alone will not fix the problem, in fact it might make it worse, because payday lenders will compete in finding more sophisticated ways to charge very high fees to naïve customers, exacerbating both the over-borrowing and the excessive entry. Competition works only if we restrict the dimension in which competition takes place: if unsecured lending to lower income people can take place only in the form of installment loans, competition will lower the cost of these loans.
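The installment-versus-balloon arithmetic is easy to illustrate. Here is a minimal sketch; every number in it is an assumption of mine for illustration, not a figure from Zingales or the Colorado study:

```python
# Hypothetical comparison of a single-repayment (balloon) payday loan that
# gets rolled over versus an installment loan of the same size and fee
# schedule. All figures are illustrative assumptions.

principal = 300.0    # amount borrowed
fee_rate = 0.15      # assumed fee per dollar per two-week term
rollovers = 4        # borrower rolls the balloon loan over four times

# Balloon loan: the fee is charged on the full principal at every rollover,
# because the principal is never amortized.
balloon_fees = (rollovers + 1) * principal * fee_rate

# Installment loan: same fee schedule, but the principal is repaid in equal
# parts over the same five pay periods, so each period's fee is charged
# only on the declining balance.
periods = rollovers + 1
balance = principal
installment_fees = 0.0
for _ in range(periods):
    installment_fees += balance * fee_rate
    balance -= principal / periods

print(f"Fees on rolled-over balloon loan: ${balloon_fees:.2f}")
print(f"Fees on installment loan:         ${installment_fees:.2f}")
```

With these assumed numbers the balloon borrower pays $225 in fees against $135 for the installment borrower, which is the sense in which the single-repayment structure induces “unnecessary loan float.”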

One more example of my own. A favorite tactic of the credit-card industry is to offer customers zero-interest rate loans on transferred balances. Now you might think that banks were competing hard to drive down the excessive cost of borrowing incurred by many credit card holders for whom borrowing via their credit card is their best way of obtaining unsecured credit. But you would be wrong. Credit-card issuers offer the zero-interest loans because, a) they typically charge a 3 or 4 percent service charge off the top, and b) then include a $35 penalty for a late payment, and then c), under the fine print of the loan agreement, terminate the promotional rate, increasing the interest rate on the transferred balance to some exorbitant level in the range of 20 to 30 percent. Most customers, especially if they haven’t tried a balance transfer before, will not even read the fine print to know that a single late payment will result in a penalty and loss of the promotional rate. But even if they are aware of the fine print, they will almost certainly underestimate the likelihood that they will sooner or later miss an installment-payment deadline. I don’t know whether any studies have looked into the profitability of promotional rates for credit card issuers, but I suspect, given how widespread such offers are, that they are very profitable for credit-card issuers. Information asymmetry strikes again.
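The arithmetic of such a promotion is also easy to sketch. The terms below are hypothetical, chosen only to match the general pattern described above, not taken from any actual card agreement:

```python
# Illustrative cost of a "zero-interest" balance transfer. All terms are
# assumed for the example: a 3% upfront service charge, a $35 late-payment
# penalty, and a 25% penalty APR once the promotional rate is terminated.

balance = 5000.0
transfer_fee_rate = 0.03   # service charge off the top
late_fee = 35.0            # penalty for a single late payment
penalty_apr = 0.25         # rate after the promotional rate is lost
months_carried = 6         # months the balance is carried afterward

upfront_fee = balance * transfer_fee_rate

# If the borrower never slips, the upfront fee is the whole cost.
cost_if_never_late = upfront_fee

# One late payment triggers the penalty fee and the penalty APR on the
# balance for as long as it continues to be carried.
cost_if_late_once = (upfront_fee + late_fee
                     + balance * (penalty_apr / 12) * months_carried)

print(f"Cost with no slip-ups:       ${cost_if_never_late:.2f}")
print(f"Cost after one late payment: ${cost_if_late_once:.2f}")
```

On these assumed terms, a single missed deadline turns a $150 “zero-interest” loan into an $810 one, which is presumably what makes the promotions so profitable.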

A New Paper on the Short, But Sweet, 1933 Recovery Confirms that Hawtrey and Cassel Got it Right

In a recent post, the indispensable Marcus Nunes drew my attention to a working paper by Andrew Jalil of Occidental College and Gisela Rua of the Federal Reserve Board. The paper is called “Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record.” Subsequently I noticed that Mark Thoma had also posted the abstract on his blog.

 Here’s the abstract:

This paper uses the historical narrative record to determine whether inflation expectations shifted during the second quarter of 1933, precisely as the recovery from the Great Depression took hold. First, by examining the historical news record and the forecasts of contemporary business analysts, we show that inflation expectations increased dramatically. Second, using an event-studies approach, we identify the impact on financial markets of the key events that shifted inflation expectations. Third, we gather new evidence—both quantitative and narrative—that indicates that the shift in inflation expectations played a causal role in stimulating the recovery.

There’s a lot of new and interesting stuff in this paper even though the basic narrative framework goes back almost 80 years to the discussion of the 1933 recovery in Hawtrey’s Trade Depression and the Way Out. The paper highlights the importance of rising inflation (or price-level) expectations in generating the recovery, which started within a few weeks of FDR’s inauguration in March 1933. In the absence of the direct measures of inflation expectations that are now available, such as breakeven TIPS spreads or surveys of consumer and business expectations, Jalil and Rua document the sudden and sharp shift in expectations in three different ways.

First, they document a sharp spike in news coverage of inflation in April 1933. Second, they show an expectational shift toward inflation through a close analysis of the economic reporting and commentary in the Economist and in Business Week, providing a fascinating account of the evolution of FDR’s thinking and of how his economic policy was assessed in the period between the election in November 1932 and April 1933, when the gold standard was suspended. Just before the election, the Economist observed

No well-informed man in Wall Street expects the outcome of the election to make much real difference in business prospects, the argument being that while politicians may do something to bring on a trade slump, they can do nothing to change a depression into prosperity (October 29, 1932)

On April 22, 1933, just after FDR took the US off the gold standard, the Economist commented

As usual, Wall Street has interpreted the policy of the Washington Administration with uncanny accuracy. For a week or so before President Roosevelt announced his abandonment of the gold standard, Wall Street was “talking inflation.”

A third indication of increasing inflation expectations is drawn from the forecasts of five independent economic forecasters, all of whom began predicting inflation — some sooner than others — during the April-May time frame.

Jalil and Rua extend the important work of Daniel Nelson whose 1991 paper “Was the Deflation of 1929-30 Anticipated? The Monetary Regime as Viewed by the Business Press” showed that the 1929-30 downturn coincided with a sharp drop in price level expectations, providing powerful support for the Hawtrey-Cassel interpretation of the onset of the Great Depression.

Besides persuasive evidence from multiple sources that inflation expectations shifted in the spring of 1933, Jalil and Rua identify 5 key events or news shocks that focused attention on a changing policy environment that would lead to rising prices.

1. Abandonment of the Gold Standard and a Pledge by FDR to Raise Prices (April 19)

2. Passage of the Thomas Inflation Amendment to the Farm Relief Bill by the Senate (April 28)

3. Announcement of Open Market Operations (May 24)

4. Announcement that the Gold Clause Would Be Repealed and a Reduction in the New York Fed’s Rediscount Rate (May 26)

5. FDR’s Message to the World Economic Conference Calling for Restoration of the 1926 Price Level (June 19)

Jalil and Rua perform an event study and find that stock prices rose significantly and the dollar depreciated against gold and pound sterling after each of these news shocks. They also discuss the macroeconomic effects of the shift in inflation expectations, showing that a standard macro model cannot account for the rapid 1933 recovery. Further, they scrutinize the claim by Friedman and Schwartz in their Monetary History of the United States that, based on the lack of evidence of any substantial increase in the quantity of money, “the economic recovery in the half-year after the panic owed nothing to monetary expansion.” Friedman and Schwartz note that, given the increase in prices and the more rapid increase in output, the velocity of circulation must have increased, without mentioning the role of rising inflation expectations in reducing the amount of cash (relative to income) that people wanted to hold.
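The logic of their event study is simple to illustrate. The toy sketch below is in the spirit of such a test, not a reproduction of their data or method; the returns are invented for the example:

```python
# Toy event study: compare stock returns on the five news-shock dates with
# the distribution of returns on ordinary days. The daily returns below are
# invented for illustration; Jalil and Rua use actual 1933 market data.

import statistics

ordinary_days = [0.2, -0.4, 0.1, 0.3, -0.2, 0.0, 0.5, -0.3, 0.1, -0.1]
event_days = {
    "1933-04-19": 4.5,  # gold standard abandoned
    "1933-04-28": 3.1,  # Thomas Inflation Amendment passes Senate
    "1933-05-24": 2.2,  # open market operations announced
    "1933-05-26": 2.8,  # gold clause repeal, NY Fed rate cut
    "1933-06-19": 1.9,  # message to World Economic Conference
}

mu = statistics.mean(ordinary_days)
sigma = statistics.stdev(ordinary_days)

for date, r in event_days.items():
    z = (r - mu) / sigma  # how unusual the event-day return is
    print(f"{date}: return {r:+.1f}%, z-score {z:.1f}")
```

A large z-score on each event date, relative to ordinary trading days, is the kind of evidence that supports attributing the market moves to the news shocks rather than to ordinary fluctuation.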

Jalil and Rua also offer a very insightful explanation for the remarkably rapid recovery in the April-July period, suggesting that the commitment to raise prices back to their 1926 levels encouraged businesses to hasten their responses to the prospect of rising prices, because prices would stop rising after they reached their target level.

The literature on price-level targeting has shown that, relative to inflation targeting, this policy choice has the advantage of removing more uncertainty in terms of the future level of prices. Under price-level targeting, inflation depends on the relationship between the current price level and its target. Inflation expectations will be higher the lower is the current price level. Thus, Roosevelt’s commitment to a price-level target caused market participants to expect inflation until prices were back at that higher set target.
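In symbols (my notation, not the authors’): if the price level is credibly expected to return to the target \(P^{*}\) (the 1926 level) within \(T\) periods, expected inflation is roughly

\[ \pi^{e}_t \approx \frac{\ln P^{*} - \ln P_t}{T}, \]

which is large when \(P_t\) is far below \(P^{*}\) and shrinks toward zero as the target is approached: exactly the combination of a strong initial impulse and a built-in ceiling on expected inflation that Jalil and Rua describe.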

A few further comments before closing. Jalil and Rua have a brief discussion of whether other factors besides increasing inflation expectations could account for the rapid recovery. The only factor that they mention as an alternative is exit from the gold standard. This discussion is somewhat puzzling inasmuch as they already noted that exit from the gold standard was one of five news shocks (and by all odds the most important one) in causing the increase in inflation expectations. They go on to point out that no other country that left the gold standard during the Great Depression experienced anywhere near as rapid a recovery as did the US. Because international trade accounted for a relatively small share of the US economy, they argue that the stimulus to production by US producers of tradable goods from a depreciating dollar would not have been all that great. But that just shows that the macroeconomic significance of abandoning the gold standard was not in shifting the real exchange rate, but in raising the price level. The fact that the US recovery after leaving the gold standard was so much more powerful than it was in other countries is because, at least for a short time, the US sought to use monetary policy aggressively to raise prices, while other countries were content merely to stop the deflation that the gold standard had inflicted on them, but made no attempt to reverse the deflation that had already occurred.

Jalil and Rua conclude with a discussion of possible explanations for why the April-July recovery seemed to peter out suddenly at the end of July. They offer two possible explanations. First, passage of the National Industrial Recovery Act in July was a negative supply shock, and, second, the rapid recovery between April and July persuaded FDR that further inflation was no longer necessary, with actual inflation and expected inflation both subsiding as a result. These are obviously not competing explanations. Indeed, the NIRA may itself have been another reason why FDR no longer felt inflation was necessary, as indicated by this news story in the New York Times:

The government does not contemplate entering upon inflation of the currency at present and will issue cheaper money only as a last resort to stimulate trade, according to a close adviser of the President who discussed financial policies with him this week. This official asserted today that the President was well satisfied with the business improvement and the government’s ability to borrow money at cheap rates. These are interpreted as good signs, and if the conditions continue as the recovery program broadened, it was believed no real inflation of the currency would be necessary. (“Inflation Put Off, Officials Suggest,” New York Times, August 4, 1933)

If only . . .

Cluelessness about Strategy, Tactics and Discretion

In his op-ed in the weekend Wall Street Journal, John Taylor restates his confused opposition to what Ben Bernanke calls the policy of constrained discretion followed by the Federal Reserve during his tenure at the Fed, as a member of the Board of Governors from 2002 to 2005 and as chairman from 2006 to 2014. Taylor has been arguing for the Fed to adopt what he calls the “rules-based monetary policy” supposedly practiced by the Fed while Paul Volcker was chairman (at least from 1981 onwards) and for most of Alan Greenspan’s tenure until 2003 when, according to Taylor, the Fed abandoned the “rules-based monetary policy” that it had followed since 1981. In a recent post, I explained why Taylor’s description of Fed policy under Volcker was historically inaccurate and why his critique of recent Fed policy is both historically inaccurate and conceptually incoherent.

Taylor denies that his steady refrain calling for a “rules-based policy” (i.e., the implementation of some version of his beloved Taylor Rule) is intended “to chain the Fed to an algebraic formula;” he just thinks that the Fed needs “an explicit strategy for setting the instruments” of monetary policy. Now I agree that one ought not to set a policy goal without a strategy for achieving the goal, but Taylor is saying that he wants to go far beyond a strategy for achieving a policy goal; he wants a strategy for setting instruments of monetary policy, which seems like an obvious confusion between strategy and tactics, ends and means.

Instruments are the means by which a policy is implemented. Setting a policy goal can be considered a strategic decision; setting a policy instrument a tactical decision. But Taylor is saying that the Fed should have a strategy for setting the instruments with which it implements its strategic policy.  (OED, “instrument – 1. A thing used in or for performing an action: a means. . . . 5. A tool, an implement, esp. one used for delicate or scientific work.”) This is very confused.

Let’s be very specific. The Fed, for better or for worse – I think for worse — has made a strategic decision to set a 2% inflation target. Taylor does not say whether he supports the 2% target; his criticism is that the Fed is not setting the instrument – the Fed Funds rate – that it uses to hit the 2% target in accordance with the Taylor rule. He regards the failure to set the Fed Funds rate in accordance with the Taylor rule as a departure from a rules-based policy. But the Fed has continually undershot its 2% inflation target for the past three years. So the question naturally arises: if the Fed had raised the Fed Funds rate to the level prescribed by the Taylor rule, would the Fed have succeeded in hitting its inflation target? If Taylor thinks that a higher Fed Funds rate than has prevailed since 2012 would have led to higher inflation than we experienced, then there is something very wrong with the Taylor rule, because, under the Taylor rule, the Fed Funds rate is positively related to the difference between the actual inflation rate and the target rate. If a Fed Funds rate higher than the rate set for the past three years would have led, as the Taylor rule implies, to lower inflation than we experienced, following the Taylor rule would have meant disregarding the Fed’s own inflation target. How is that consistent with a rules-based policy?
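For reference, the rule in Taylor’s original 1993 calibration is

\[ i_t = \pi_t + 0.5\,y_t + 0.5\,(\pi_t - \pi^{*}) + 2, \]

where \(i_t\) is the Fed Funds rate, \(\pi_t\) the inflation rate over the previous four quarters, \(y_t\) the percentage deviation of real GDP from trend, and \(\pi^{*} = 2\) the inflation target, with 2 percent the assumed equilibrium real rate. The positive coefficient on \((\pi_t - \pi^{*})\) is what makes the rule prescribe a higher Funds rate when inflation is above target, and a lower one when, as in recent years, inflation is below target.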

It is worth noting that the practice of defining a rule in terms of a policy instrument rather than in terms of a policy goal did not originate with John Taylor; it goes back to Milton Friedman who somehow convinced a generation of monetary economists that the optimal policy for the Fed would be to target the rate of growth of the money supply at a k-percent annual rate. I have devoted other posts to explaining the absurdity of Friedman’s rule, but the point that I want to emphasize now is that Friedman, for complicated reasons which I think (but am not sure) that I understand, convinced himself that (classical) liberal principles require that governments and government agencies exercise their powers only in accordance with explicit and general rules that preclude or minimize the exercise of discretion by the relevant authorities.

Friedman’s confusions about his k-percent rule were deep and comprehensive, as a quick perusal of Friedman’s chapter 3 in Capitalism and Freedom, “The Control of Money,” amply demonstrates. In practice, the historical gold standard was a mixture of gold coins and privately issued banknotes and deposits as well as government banknotes that did not function particularly well, requiring frequent and significant government intervention. Unlike a pure gold currency, in which, given the high cost of extracting gold from the ground, the quantity of gold money would change only gradually, a mixed system of gold coin and banknotes and deposits was subject to large and destabilizing fluctuations in quantity. So, in Friedman’s estimation, the liberal solution was to design a monetary system such that the quantity of money would expand at a slow and steady rate, providing the best of all possible worlds: the stability of a pure gold standard and the minimal resource cost of a paper currency. In making this argument, as I have shown in an earlier post, Friedman displayed a basic misunderstanding of what constituted the gold standard as it was historically practiced, especially during its heyday from about 1880 to the outbreak of World War I, believing that the crucial characteristic of the gold standard was the limitation that it imposed on the quantity of money, when in fact the key characteristic of the gold standard is that it forces the value of money – regardless of its material content – to be equal to the value of a specified quantity of gold. (This misunderstanding – the focus on control of the quantity of money as the key task of monetary policy – led to Friedman’s policy instrumentalism – i.e., setting a policy rule in terms of the quantity of money.)

Because Friedman wanted to convince his friends in the Mont Pelerin Society (his egregious paper “Real and Pseudo Gold Standards” was originally presented at a meeting of the Mont Pelerin Society), who largely favored the gold standard, that (classical) liberal principles did not necessarily entail restoration of the gold standard, he emphasized a distinction between what he called the objectives of monetary policy and the instruments of monetary policy. In fact, in the classical discussion of the issue by Friedman’s teacher at Chicago, Henry Simons, in an essay called “Rules versus Authorities in Monetary Policy,” Simons also tried to formulate a rule that would be entirely automatic, operating insofar as possible in a mechanical fashion, even considering the option of stabilizing the quantity of money. But Simons correctly understood that any operational definition of money is necessarily arbitrary, meaning that there will always be a bright line between what is money under the definition and what is not money, even though the practical difference between what is on one side of the line and what is on the other will be slight. Thus, the existence of near-moneys would make control of any monetary aggregate a futile exercise. Simons therefore defined a monetary rule in terms of an objective of monetary policy: stabilizing the price level. Friedman did not want to settle for such a rule, because he understood that stabilizing the price level has its own ambiguities, there being many ways to measure the price level as well as theoretical problems in constructing index numbers (the composition and weights assigned to components of the index being subject to constant change) that make any price index inexact. Given Friedman’s objective — demonstrating that there is a preferable alternative to the gold standard evaluated in terms of (classical) liberal principles – a price-level rule lacked the automatism that Friedman felt was necessary to trump the gold standard as a monetary rule.

Friedman therefore made his case for a monetary rule in terms of the quantity of money, ignoring Simons’s powerful arguments against trying to control the quantity of money, stating the rule in general terms and treating the selection of an operational definition of money as a mere detail. Here is how Friedman put it:

If a rule is to be legislated, what rule should it be? The rule that has most frequently been suggested by people of a generally liberal persuasion is a price level rule; namely, a legislative directive to the monetary authorities that they maintain a stable price level. I think this is the wrong kind of a rule [my emphasis]. It is the wrong kind of a rule because it is in terms of objectives that the monetary authorities do not have the clear and direct power to achieve by their own actions. It consequently raises the problem of dispersing responsibilities and leaving the authorities too much leeway.

As an aside, I note that Friedman provided no explanation of why such a rule would disperse responsibilities. Who besides the monetary authority did Friedman think would have responsibility for controlling the price level under such a rule? Whether such a rule would give the monetary authorities “too much leeway” is of course an entirely different question.

There is unquestionably a close connection between monetary actions and the price level. But the connection is not so close, so invariable, or so direct that the objective of achieving a stable price level is an appropriate guide to the day-to-day activities of the authorities. (p. 53)

Friedman continues:

In the present state of our knowledge, it seems to me desirable to state the rule in terms of the behavior of the stock of money. My choice at the moment would be a legislated rule instructing the monetary authority to achieve a specified rate of growth in the stock of money. For this purpose, I would define the stock of money as including currency outside commercial banks plus all deposits of commercial banks. I would specify that the Reserve System shall see to it [Friedman’s being really specific there, isn’t he?] that the total stock of money so defined rises month by month, and indeed, so far as possible day by day, at an annual rate of X per cent, where X is some number between 3 and 5. (p. 54)
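In symbols, the rule is nothing more than a fixed exponential path for the money stock: \(M_t = M_0 (1+x)^t\), with \(x\) between 0.03 and 0.05 per year and \(t\) measured in years, to be tracked month by month, or day by day, irrespective of whatever else is happening in the economy.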

Friedman, of course, deliberately ignored, or, more likely, simply did not understand, that the quantity of deposits created by the banking system, under whatever definition, is no more under the control of the Fed than the price level. So the whole premise of Friedman’s money supply rule – that it was formulated in terms of an instrument under the immediate control of the monetary authority — was based on the fallacy that quantity of money is an instrument that the monetary authority is able to control at will.

I therefore note, as a further aside, that in his latest Wall Street Journal op-ed, Taylor responded to Bernanke’s observation that the Taylor rule becomes inoperative when the rule implies an interest-rate target below zero. Taylor disagrees:

The zero bound is not a new problem. Policy rule design research took that into account decades ago. The default was to move to a stable money growth regime not to massive asset purchases.

Taylor may regard the stable money growth regime as an acceptable default rule when the Taylor rule is sidelined at the zero lower bound. But if so, he is caught in a trap of his own making, because, whether he admits it or not, the quantity of money, unlike the Fed Funds rate, is not an instrument under the direct control of the Fed. If Taylor rejects an inflation target as a monetary rule, because it grants too much discretion to the monetary authority, then he must also reject a stable money growth rule, because it allows at least as much discretion as does an inflation target. Indeed, if the past 35 years have shown us anything it is that the Fed has much more control over the price level and the rate of inflation than it has over the quantity of money, however defined.

This post is already too long, but I think that it’s important to say something about discretion, which was such a bugaboo for Friedman, and remains one for Taylor. But the concept of discretion is not as simple as it is often made out to be, especially by Friedman and Taylor, and if you are careful to pay attention to what the word means in ordinary usage, you will see that discretion does not necessarily, or usually, refer to an unchecked authority to act as one pleases. Rather it suggests that a certain authority to make a decision is being granted to a person or an official, but the decision is to be made in light of certain criteria or principles that, while not fully explicit, still inform and constrain the decision.

The best analysis of what is meant by discretion that I know of is by Ronald Dworkin in his classic essay “Is Law a System of Rules?” Dworkin discusses the meaning of discretion in the context of a judge deciding a “hard case,” a case in which conflicting rules of law seem to be applicable, or a case in which none of the relevant rules seems to fit the facts of the case. Such a judge is said to exercise discretion, because his decision is not straightforwardly determined by the existing set of legal rules. Legal positivists, against whom Dworkin was arguing, would say that the judge is able, and called upon, to exercise his discretion in deciding the case, meaning that, by deciding the case, the judge is simply imposing his will. It is something like the positivist view that underlies Friedman’s intolerance for discretion.

Countering the positivist view, Dworkin considers the example of a sergeant ordered by his lieutenant to take his five most experienced soldiers on patrol, and reflects on how to interpret an observer’s statement about the orders: “the orders left the sergeant a great deal of discretion.” It is clear that, in carrying out his orders, the sergeant is called upon to exercise his judgment, because he is not given a metric for measuring the experience of his soldiers. But that does not mean that when he chooses five soldiers to go on patrol, he is engaging in an exercise of will. The decision can be carried out with good judgment or with bad judgment, but it is an exercise of judgment, not will, just as a judge, in deciding a hard case, is exercising his judgment, on a more sophisticated level to be sure than the sergeant choosing soldiers, not just indulging his preferences.

If the Fed is committed to an inflation target, then, by choosing a setting for its instrumental target, the Fed Funds rate, the Fed is exercising judgment in light of its policy goals. That exercise of judgment in pursuit of a policy goal is very different from the arbitrary behavior of the Fed in the 1970s when its decisions were taken with no clear price-level or inflation target and with no clear responsibility for hitting the target.

Ben Bernanke has described the monetary regime in which the Fed’s decisions are governed by an explicit inflation target and a subordinate commitment to full employment as one of “constrained discretion.” When using this term, Taylor always encloses it in quotation marks, apparently to suggest that the term is an oxymoron. But that is yet another mistake; “constrained discretion” is no oxymoron. Indeed, it is a pleonasm, the exercise of discretion usually being understood to mean not an unconstrained exercise of will, but an exercise of judgment in the light of relevant goals, policies, and principles.

PS I apologize for not having responded to comments recently. I will try to catch up later this week.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here; he probably has elsewhere – exactly why he doesn’t think equality of demand and supply in a Walrasian auction is the right equilibrium concept. I would be interested in hearing from him why; perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.
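To see the difference, here is a toy sketch (entirely my own construction, not anything in Roger’s model) of the kind of cumulative process I have in mind, in which outcomes feed back into expectations instead of expectations being purely exogenous:

```python
# A toy illustration (my construction, not Roger's model) of a cumulative
# contraction: realized output depends on expected demand, and expectations
# adapt to realized output, so an initial burst of pessimism feeds on itself.

def simulate(periods: int = 10, shock: float = -0.05) -> list:
    """Log-deviations of output from trend after a one-time pessimism shock."""
    amplification = 1.2  # assumed: output overreacts to expected demand
    expected = shock     # the initial shock to expectations
    path = []
    for _ in range(periods):
        output = amplification * expected  # outcomes driven by expectations
        expected = output                  # expectations driven by outcomes
        path.append(output)
    return path

print(simulate())  # the slump deepens each period: -6.0%, -7.2%, -8.64%, ...
```

With the feedback coefficient above one, pessimism validates itself and then some; with a coefficient below one, the same loop damps out instead of spiraling.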

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model: if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; to posit that agents actually foresee the equilibrium price vector would be an extreme and unrealistic assumption about how the world works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.
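To make the point concrete, here is a minimal sketch (a toy example of my own, not anything from Roger’s papers) of rational expectations as a fixed-point, or consistency, property of a model:

```python
# Purely illustrative: rational expectations as a fixed-point property.
# Toy model (assumed for illustration): the realized price depends on the
# expected price, p = a + b * p_expected, with |b| < 1, so the
# rational-expectations equilibrium is the fixed point p* = a / (1 - b).

def realized_price(p_expected: float, a: float = 10.0, b: float = 0.5) -> float:
    """Map an expected price into the price the model realizes."""
    return a + b * p_expected

# Rational expectations is the consistency requirement p = f(p).
p_star = 10.0 / (1.0 - 0.5)  # closed-form fixed point: p* = 20

# If agents expect p*, the model delivers p*: the expectation is confirmed.
assert abs(realized_price(p_star) - p_star) < 1e-12

# An arbitrary expectation, by contrast, is falsified by the outcome.
print(realized_price(15.0))  # 17.5, not the expected 15.0
```

The consistency property says nothing about how real-world agents would ever come to expect p* in the first place; that is the substantive question the solution concept leaves unanswered.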

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and, in view of Roger’s recent discussion of unit roots, only dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a new steady state. It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.
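If I understand the standard usage (and this is my conjecture about what Roger means, not his own example), the distinction might be illustrated with two simple linear models. The forward-looking equation

$$x_t = a\,E_t[x_{t+1}], \qquad |a| > 1,$$

has a unique steady state, $x^* = 0$, but any bounded process satisfying $x_{t+1} = x_t/a + \eta_{t+1}$ with $E_t[\eta_{t+1}] = 0$ solves it, so the unique steady state is indeterminate: there is a continuum of locally stable rational-expectations paths converging to it, indexed by the sunspot shocks $\eta$. By contrast, the unit-root equation

$$x_{t+1} = x_t + \eta_{t+1}$$

has a continuum of steady states: absent shocks, every value of $x$ is a resting point, and the economy simply stays wherever beliefs and shocks last left it. Neither example involves any nonlinearity.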

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e., becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

What Is the Historically Challenged, Rule-Worshipping John Taylor Talking About?

A couple of weeks ago, I wrote a post chiding John Taylor for his habitual verbal carelessness. As if that were not enough, Taylor, in a recent talk at the IMF, where he appeared on a panel on monetary policy with former Fed Chairman Ben Bernanke and Gill Marcus, the former head of the South African central bank, extends his trail of errors into new terrain: historical misstatement. Tony Yates and Paul Krugman have already subjected Taylor’s talk to well-deserved criticism for its conceptual confusion, but I want to focus on the outright historical errors that Taylor blithely makes in the talk, which is noteworthy, beyond its conceptual confusion and historical misstatements, for the incessant repetition of the meaningless epithet “rules-based,” as if Taylor were a latter-day Homeric rhapsodist incanting a sacred text.

Taylor starts by offering his own “mini history of monetary policy in the United States” since the late 1960s.

When I first started doing monetary economics . . ., monetary policy was highly discretionary and interventionist. It went from boom to bust and back again, repeatedly falling behind the curve, and then over-reacting. The Fed had lofty goals but no consistent strategy. If you measure macroeconomic performance as I do by both price stability and output stability, the results were terrible. Unemployment and inflation both rose.

What Taylor means by “interventionist,” other than establishing that he is against it, is not clear. Nor is the meaning of “bust” in this context. The recession of 1970 was perhaps the mildest of the entire post-World War II era, and the 1974-75 recession, though certainly severe, was largely the result of a supply shock and of politically imposed wage and price controls, exacerbated by monetary tightening. (See my post about 1970s stagflation.) Taylor talks about the Fed’s lofty goals, but doesn’t say what they were. In fact, in the 1970s, the Fed was disclaiming responsibility for inflation, and Arthur Burns, a supposedly conservative Republican economist appointed by Nixon to be Fed Chairman, actually promoted what was then called an “incomes policy,” thereby enabling and facilitating Nixon’s infamous wage-and-price controls. The Fed’s job was to keep aggregate demand high, and, in the widely held view at the time, it was up to the politicians to keep business and labor from getting too greedy and causing inflation.

Then in the early 1980s policy changed. It became more focused, more systematic, more rules-based, and it stayed that way through the 1990s and into the start of this century.

Yes, in the early 1980s, policy did change, and it did become more focused, and for a short time – about a year and a half – it did become more rules-based. (I have no idea what “systematic” means in this context.) And the result was the sharpest and longest post-World War II downturn until the Little Depression. Policy changed because, under Volcker, the Fed took ownership of inflation. It became more rules-based because, under Volcker, the Fed attempted to follow a modified sort of Monetarist rule, seeking to keep the growth of the monetary aggregates within a predetermined target range. I have explained in my book and in previous posts (e.g., here and here) why the attempt to follow a Monetarist rule was bound to fail and to generate perverse feedback effects, but others, notably Charles Goodhart (discoverer of Goodhart’s Law), had identified the problem even before the Fed adopted its misguided policy. The recovery did not begin until the summer of 1982, after the Fed announced that it would allow the monetary aggregates to grow faster than the Fed’s targets.
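The nub of the problem, as I understand it, can be seen in the equation of exchange (a familiar identity, offered here only to fix ideas, not as Goodhart’s own formulation):

$$MV = PY \quad\Longrightarrow\quad \dot{M} + \dot{V} = \dot{P} + \dot{Y},$$

where dots denote growth rates. A rule fixing $\dot{M}$ stabilizes the growth of nominal income $PY$ only if velocity $V$ is stable. But the adoption of the targeting regime itself (this is the Goodhart’s-Law point) altered the demand for the targeted aggregates, destabilizing $\dot{V}$, so that the fixed-$\dot{M}$ rule transmitted velocity shocks one-for-one into nominal-income shocks instead of damping them.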

So the success of Fed monetary policy under Volcker can properly be attributed (a) to the Fed’s taking ownership of inflation and (b) to its decision to abandon the rules-based policy urged on it by Milton Friedman and his Monetarist acolytes like Allan Meltzer, whom Taylor now cites approvingly for supporting rules-based policies. The only monetary-policy rule that the Fed ever adopted under Volcker having been scrapped before the recovery from the 1981-82 recession even began, the notion that the Great Moderation was ushered in by the Fed’s adoption of a “rules-based” policy is a total misrepresentation.

But Taylor is not done.

Few complained about spillovers or beggar-thy-neighbor policies during the Great Moderation.  The developed economies were effectively operating in what I call a nearly international cooperative equilibrium.

Really! Has Professor Taylor, who served as Under Secretary of the Treasury for International Affairs, ever heard of the Plaza and the Louvre Accords?

The Plaza Accord or Plaza Agreement was an agreement between the governments of France, West Germany, Japan, the United States, and the United Kingdom, to depreciate the U.S. dollar in relation to the Japanese yen and German Deutsche Mark by intervening in currency markets. The five governments signed the accord on September 22, 1985 at the Plaza Hotel in New York City. (“Plaza Accord” Wikipedia)

The Louvre Accord was an agreement, signed on February 22, 1987 in Paris, that aimed to stabilize the international currency markets and halt the continued decline of the US Dollar caused by the Plaza Accord. The agreement was signed by France, West Germany, Japan, Canada, the United States and the United Kingdom. (“Louvre Accord” Wikipedia)

The chart below shows the fluctuation in the trade-weighted value of the US dollar against the other major trading currencies since 1980. Does it look like there was a nearly international cooperative equilibrium in the 1980s?

[Chart: trade-weighted value of the US dollar against major trading currencies, 1980 onward]

But then there was a setback. The Fed decided to hold the interest rate very low during 2003-2005, thereby deviating from the rules-based policy that worked well during the Great Moderation.  You do not need policy rules to see the change: With the inflation rate around 2%, the federal funds rate was only 1% in 2003, compared with 5.5% in 1997 when the inflation rate was also about 2%.

Well, in 1997 the expansion was six years old and the unemployment rate was under 5% and falling. In 2003, the expansion was barely under way and unemployment was rising above 6%.
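Indeed, even on Taylor’s own terms, the comparison is misleading. The original Taylor (1993) rule sets the federal funds rate as

$$i_t = \pi_t + 0.5(\pi_t - 2) + 0.5\,y_t + 2,$$

where $\pi_t$ is inflation and $y_t$ is the output gap in percent. With inflation at 2% in both years, the prescription reduces to $i = 4 + 0.5\,y_t$. If the output gap was roughly zero in 1997 but on the order of $-2\%$ in 2003 (illustrative numbers, not estimates), the rule itself prescribes 4% for 1997 and 3% for 2003. A rule that responds to the state of the real economy does not prescribe the same rate whenever inflation happens to be the same.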

I could point to other dubious historical characterizations in Taylor’s talk, but I will mention just a few more relating to the Volcker episode.

Some argue that the historical evidence in favor of rules is simply correlation not causation.  But this ignores the crucial timing of events:  in each case, the changes in policy occurred before the changes in performance, clear evidence for causality.  The decisions taken by Paul Volcker came before the Great Moderation.

Yes, and as I pointed out above, inflation came down when Volcker and the Fed took ownership of inflation and were willing to tolerate, or inflict, sufficient pain on the real economy to convince the public that the Fed was serious about bringing the rate of inflation down to roughly 4%. But the recovery and the Great Moderation did not begin until the Fed renounced the only rule that it had ever adopted, namely targeting the rate of growth of the monetary aggregates. The Fed, under Volcker, never even adopted an explicit inflation target, much less a specific rule for setting the Federal Funds rate. The Taylor rule was just an ex post rationalization of what the Fed had done by instinct.

Another point relates to the zero bound. Wasn’t that the reason that the central banks had to deviate from rules in recent years? Well it was certainly not a reason in 2003-2005 and it is not a reason now, because the zero bound is not binding. It appears that there was a short period in 2009 when zero was clearly binding. But the zero bound is not a new thing in economics research. Policy rule design research took that into account long ago. The default was to move to a stable money growth regime not to massive asset purchases.

OMG! Is Taylor’s preferred rule at the zero lower bound the stable money growth rule that Volcker tried, but failed, to implement in 1981-82? Is that the lesson that Taylor wants us to learn from the Volcker era?

Some argue that rules based policy for the instruments is not needed if you have goals for the inflation rate or other variables. They say that all you really need for effective policy making is a goal, such as an inflation target and an employment target. The rest of policymaking is doing whatever the policymakers think needs to be done with the policy instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok as long as you can justify it at the moment in terms of the goal.

This approach has been called “constrained discretion” by Ben Bernanke, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as a “rules versus discretion” dichotomy might suggest.  Simply having a specific numerical goal or objective is not a rule for the instruments of policy; it is not a strategy; it ends up being all tactics.  I think the evidence shows that relying solely on constrained discretion has not worked for monetary policy.

Taylor wants a rule for the instruments of policy. Well, although Taylor will not admit it, a rule for the instruments of policy is precisely what Volcker tried to implement in 1981-82 when he was trying, and failing, to target the monetary aggregates, thereby driving the economy into a rapidly deepening recession, before escaping, by scrapping his monetary-growth targets, from the positive-feedback loop in which he and the economy were trapped. Since 2009, Taylor has been calling for the Fed to raise the currently targeted instrument, the Fed Funds rate, even though inflation has been below the Fed’s 2% target almost continuously for the past three years. Not only does Taylor want to target the instrument of policy, he wants the instrument target to preempt the policy target. If that is not all tactics and no strategy, I don’t know what is.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or of the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and on the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
