Archive for April, 2012



Friedman and Schwartz on James Tobin

Nick Rowe and I, with some valuable commentary from Bill Woolsey, Mike Sproul and Scott Sumner, and perhaps others whom I am not now remembering, have been having an intermittent and (I hope) friendly argument for the past six months or so about the “hot potato” theory of money to which Nick subscribes, and which I deny, at least when it comes to privately produced bank money as opposed to government-issued “fiat” money. Our differences were put on display once again in the discussion following my previous post on endogenous money. As I have mentioned many times, my view of how banks operate is derived from one of the best papers I have ever read, James Tobin’s classic 1963 paper “Commercial Banks as Creators of Money,” a paper that in my estimation would, on its own, have amply entitled Tobin to be awarded the Nobel Prize. If you haven’t read the paper, you should not deny yourself that pleasure and profit any longer.

A few months ago, I stumbled across the PDF version of one of the relatively obscure follow-up volumes to the Monetary History of the US that Friedman and Schwartz wrote: Monetary Statistics of the US: Estimates, Sources, Methods. Part one of the book is an extended discussion of the definition of money, presenting various historical definitions of, and approaches to defining, money. I think that I read parts of it when I was in graduate school, perhaps when I took Ben Klein’s graduate class in monetary theory. As one might expect, Friedman and Schwartz spent a lot of time discussing a priori versus pragmatic or empirical definitions of money, arguing that definitions based on concepts like “the essential properties of the medium of exchange” (the title of a paper written by Leland Yeager) inevitably lead to dead ends, preferring instead definitions, like M2, that turn out to be empirically useful, even if only for a certain period of time, under a certain set of monetary institutions and practices. In rereading a number of sections of part one, I was repeatedly struck by how good and insightful an economist Friedman was. Since I am far from being an unqualified admirer of Friedman’s, it was good to be reminded again that, despite his faults, he was a true master of the subject.

At any rate, on pp. 123-24, there is a discussion of definitions based on a concept of “market equilibrium.”

Gramley and Chase, in a highly formal analysis of monetary adjustments in the shortest of short periods (Marshall’s market equilibrium contrasted with his short-run and long-run equilibria), discuss the definition of money only incidentally. Yet their analysis qualifies for consideration along with the analyses of Pesek and Saving, Newlyn, and Yeager because, like the others, Gramley and Chase believe that far-reaching substantive conclusions about monetary analysis can be derived from rather simple abstract considerations and, like Newlyn and Yeager, they put great stress on whether the decisions of the public can or do affect monetary totals. That “the stock of money” is “an exogenous variable set by central bank policy,” they regard as one of the “time-honored doctrines of traditional monetary analysis.” They contrast this “more conventional view” with the “new view” that “open market operations alter the stock of money balances if, and only if, they alter the quantity of money demanded by the public.”

In a footnote to this passage, Friedman and Schwartz add the following comment (p. 124).

In this respect [Gramley and Chase] follow James Tobin, “Commercial Banks as Creators of ‘Money.'” . . . Tobin presents a lucid exposition of commercial banks as financial intermediaries with which we agree fully and which we find most illuminating. His analysis, like that of Pesek and Saving, Newlyn, and Yeager, and as we shall note, Gramley and Chase, demonstrates that emphasis on supply considerations leads to a distinction between high-powered money and other assets but not between any broader total and other assets. Unlike Gramley and Chase, Tobin explicitly eschews drawing any far-reaching conclusions for policy and analysis from his quantitative analysis.

Then on p. 135, Friedman and Schwartz, in a critical discussion of the New View, of which Tobin’s paper was a key contribution, observed:

This approach is an appropriate theoretical counterpart to an analysis of changes in income and expenditure along Keynesian lines. That analysis takes the price level as an institutional datum and therefore minimizes the distinction between nominal and real magnitudes. It takes interest rates as essentially the only market variable that reconciles the structure of assets supplied with the structure demanded.

In a footnote to this passage, Friedman and Schwartz add this comment.

It is instructive that economists who adopt this general view write as if the monetary authorities could determine the real and not merely the nominal quantity of high-powered money. For example, William C. Brainard and James Tobin in setting up a financial model to illustrate pitfalls in the building of such models use “the replacement value of . . . physical assets . . . as the numeraire of the system,” yet regard “the supply of reserves” as “one of the quantities the central bank directly controls” (“Pitfalls in Financial Model-Building,” AER, May 1968, pp. 101-02). If the nominal level of prices is regarded as an endogenous variable, this is clearly wrong. Hence the writers must be assuming this nominal level of prices to be fixed outside their system. Keynes’ “wage unit” serves the same role in his analysis and leads him and his followers also to treat the monetary authorities as directly controlling real and not nominal variables.

But there is no logical necessity that requires the New View so elegantly formulated by Tobin to be deployed within a Keynesian framework rather than in a non-Keynesian framework in which some monetary aggregate, like the stock of currency or the monetary base, rather than M1 or M2, is what determines the price level. The stock of currency (or the monetary base) can function as the hot potato that determines (in conjunction with all the other variables affecting the demand for currency or the monetary base) the price level. Denying that bank money is a hot potato doesn’t require you to treat the price level “as an institutional datum.” Friedman, almost, but not quite, figured that one out.

Endogenous Money

During my little vacation recently from writing about monetary policy, it seems that there has been quite a dust-up about endogenous money in the econo-blogosphere. It all started with a post by Steve Keen, an Australian economist of the post-Keynesian persuasion, in which he expounded at length on the greatness of Hyman Minsky, the irrelevance of equilibrium to macroeconomic problems, the endogeneity of the money supply, and the critical importance of debt in explaining macroeconomic fluctuations. In making his argument, Keen used as a foil a paper by Krugman and Eggertsson, “Debt, Deleveraging, and the Liquidity Trap: A Fisher-Minsky-Koo Approach,” which he ridiculed for its excessive attachment to wrong-headed neoclassicism, as exemplified in the DSGE model in which Krugman and Eggertsson conducted their analysis. I can’t help but note parenthetically that I was astounded by the following sentence in Keen’s post.

There are so many ways in which neoclassical economists misinterpret non-neoclassical thinkers like Fisher and Minsky that I could write a book on the topic.

No doubt it would be a fascinating book, but what would be even more fascinating would be an explanation of how Irving Fisher – yes, that Irving Fisher – could possibly be considered anything other than a neoclassical economist.

At any rate, this assault did not go unnoticed by Dr. Krugman, who responded with evident annoyance on his blog, focusing in particular on the question whether a useful macroeconomic model requires an explicit model of the banking system, as Keen asserted, or whether a simple assumption that the monetary authority can exercise sufficient control over the banking system makes an explicit model of the banking sector unnecessary, as Krugman, following the analysis of the General Theory, asserted. Sorry, but I can’t resist making another parenthetical observation. Post-Keynesians, following Joan Robinson, rarely miss an opportunity to dismiss the IS-LM model as an inauthentic and misleading transformation of the richer analysis of the General Theory. Yet, the IS-LM model’s assumption of a fixed nominal quantity of money determined by the monetary authority was taken straight from the General Theory, a point made by, among others, Jacques Rueff in his 1948 critique of the General Theory and the liquidity-preference theory of interest, and by G.L.S. Shackle in his writings on Keynes, e.g., The Years of High Theory. Thus, in arguing for an endogenous model of the money supply, it is the anti-IS-LM post-Keynesians who are departing from Keynes’s analysis in the GT.

Krugman’s dismissive response to Keen, focusing on the endogeneity issue, elicited a stinging rejoinder, followed by several further rounds of heated argument. In the meantime, Nick Rowe joined the fray, writing at least three posts on the subject (1, 2, 3) generally siding with Krugman, while Scott Fullwiler and Randall Wray, two leading lights of what has come to be known as Modern Monetary Theory (MMT), sided with Keen. Further discussion and commentary was provided by Steve Randy Waldman and Scott Sumner, and summaries by Edward Harrison, John Carney, Unlearning Economics, and Business Insider.

In reading through the voluminous posts, I found myself pulled in both directions. Some readers may recall that I got into a bit of a controversy with Nick Rowe some months back over the endogeneity issue, when Nick asserted that any increase in the quantity of bank money is a hot potato. Thus, if banks create more money than the public wants to hold, the disequilibrium cannot be eliminated by a withdrawal of the excess money; rather, the money must be passed from hand to hand, generating additional money income until the resulting increase in the demand to hold money eliminates the disequilibrium between the demand for money and the amount in existence. I argued that Nick had this all wrong, because banks can destroy, as well as create, money. Citing James Tobin’s classic article “Commercial Banks as Creators of Money,” I argued that, responding to the interest-rate spreads between various lending and deposit rates, profit-maximizing banks have economic incentives to create only as much money as the public is willing to hold, no more and no less. Any disequilibrium between the amount of money in existence and the amount the public wants to hold can be eliminated either by a change (positive or negative) in the quantity of money or by a change in the deposit rates necessary to induce the public to hold the amount of money in existence.

The idea stressed by Keen, Fullwiler, and Wray, that banks don’t lend out deposits and hold reserves against their deposits, but create deposits in the course of lending and hold reserves only insofar as reserves offer some pecuniary or non-pecuniary yield, is an idea to which I fully subscribe. They think that the money multiplier is a nonsensical concept, and so do I. I was actually encouraged to see that Nick Rowe now appears willing to accept that this is the right way to think about how banks operate, and that, because banks are committed to converting their liabilities into currency on demand, they cannot create more liabilities than the public is willing to hold unless they are prepared to suffer losses as a consequence.

But Keen, Fullwiler, and Wray go a step further, arguing that, since banks can create money out of thin air, there is no limit to their ability to create money. I don’t understand this point. Do they mean that banks are in a perpetual state of disequilibrium? I understand that they are uncomfortable with any notion of equilibrium, but all other profit-maximizing firms can be said to be subject to some limit, not necessarily a physical or quantitative limit, but an economic limit to their expansion. Tobin, in his classic article, was very clear that banks do not have an incentive to create unlimited quantities of deposits. At any moment, a bank must perceive that there is a point beyond which it would be unprofitable to expand (by making additional loans and creating additional deposits) its balance sheet further.
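That economic limit can be expressed as a simple marginal condition (a stylized sketch in my notation, not Tobin’s). Let $r_L$ be the marginal yield on the bank’s loans, $r_D$ the rate it must pay to retain the deposits its lending creates, and $c$ the marginal cost of servicing those deposits. The bank expands its balance sheet only up to the point at which

$$r_L = r_D + c,$$

since, with $r_L$ falling and $r_D + c$ rising as the balance sheet grows, any loan (and any deposit created with it) beyond that point loses money. The limit on deposit creation is economic, not a quantitative reserve constraint.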

Fullwiler argues at length that it makes no sense to speak about reserves or currency as setting any sort of constraint on the expansion of the banking system, ridiculing the notion that any bank is prevented from expanding by an inability to obtain additional reserves or additional currency should it want to do so. But banks are not constrained by any quantitative limit; they are constrained by the economic environment in which they operate and the incentives associated with the goal of maximizing profit. And that goal depends critically on the current and expected future price level, and on current lending and deposit rates. The current and expected future price level are controlled (or, at least, one may coherently hypothesize that they are controlled) by the central bank, which controls the quantity of currency and the monetary base. Fullwiler denies that the central bank can control the quantity of currency or the monetary base, because the central bank is obligated to accommodate any demand for currency and to provide sufficient reserves to ensure that the payment system does not break down. But in any highly organized, efficiently managed market, transactors are able to buy and sell as much as they want to at the prevailing market price. So the mere fact that there are no frustrated demands for currency or reserves cannot prove that the central bank does not have the power to affect the value of currency. That would be like saying that the government could not affect the value of a domestically produced, internationally traded commodity by applying a tariff on imports, but could do so only by imposing an import quota. Applying a tariff and imposing a quota are, in principle (with full knowledge of the relevant supply and demand curves), equivalent methods of raising the price of a commodity. However, in the absence of the requisite knowledge, if fluctuations in price would be more disruptive than fluctuations in quantity, the tariff is a better way to raise the price of the commodity than a numerical quota on imports.
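A minimal sketch, assuming fully known supply and demand curves, makes the tariff/quota equivalence concrete. If import demand at domestic price $p$ is $M^D(p)$ and the world price is $p_w$, a tariff $t$ raises the domestic price to $p_w + t$ and reduces imports to $M^D(p_w + t)$. A quota fixed at

$$\bar{M} = M^D(p_w + t)$$

forces the domestic price up to exactly the same level, $p_w + t$, the price at which demand just equals the permitted imports. One instrument fixes the price and lets the quantity adjust; the other fixes the quantity and lets the price adjust. With full knowledge the outcomes are identical; with imperfect knowledge, the choice between them turns on which kind of fluctuation, in price or in quantity, would be more disruptive.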

So while I think that bank money is endogenous, I don’t believe that the quantity of base money or currency is endogenous in the sense that the central bank is powerless to control the price level. The central bank may not be trying to target a particular quantity of currency or of the monetary base, but it can target a price level by varying its lending rate or by taking steps to vary the interbank overnight rate on bank reserves. This, it seems to me, is not very different from trying to control the domestic value of an imported commodity by setting a tariff on imports rather than controlling the quantity of imports directly.  Endogeneity of bank money does not necessarily mean that a central bank cannot control the price level.  If it can, I am not so sure that the post-Keynesian, MMT critique of more conventional macroeconomics is quite as powerful as they seem to think.

Does the Value of Intellectual Property Reflect the Social or the Private Value of Information?

A couple of weeks ago, and again last week, I suggested a reason why, despite the general proposition that non-lump-sum taxes are distortionary, reduced marginal tax rates since the 1980s could have slowed down US economic growth. In illustrating how this might have happened, I focused mainly on the growth of the financial sector, hypothesizing that investments by financial firms in “research” and in devising trading strategies are socially wasteful, because the return on those investments stems from trading profits reflecting not additions to output but transfers from other less knowledgeable and sophisticated traders.

An article (“AOL Strikes $1.1 billion Patent Deal with Microsoft”) in today’s New York Times gives another illustration of this phenomenon, a difference in the social and private value of information, except that the difference between the social and private value of information in the AOL/Microsoft patent deal results not so much from the information as such, as from the creation of property rights in ideas, i.e., intellectual property, especially patents.

AOL agreed on Monday to sell a portfolio of over 800 patents, and license about 300 more, to Microsoft for $1.056 billion, amid an arms race within the technology industry over intellectual property.

Under the terms of the transaction, AOL will retain a license for the patents it is selling, while Microsoft will receive a nonexclusive license for the technologies AOL is retaining.

It is the latest big deal for patents, at a time when tech companies are amassing intellectual property rights as ammunition against competitors. Last year, Google purchased Motorola Mobility for $12.5 billion, largely for its patent portfolio.

And scores of companies, including Apple, Samsung, Facebook and Yahoo, are clashing in courtrooms over claims to technology underpinning some of the basic functions of smartphones and social networks alike.

The deal will also provide AOL with some sorely needed cash as the struggling Internet company continues its attempt to refashion itself as a media content provider.

AOL began shopping around the patents in the fall, in what it said was a “robust, competitive auction.” The results of the sale may be surprising to some analysts. Two weeks ago, an advisory firm estimated that the sale could yield as little as $290 million.

“The combined sale and licensing arrangement unlocks current dollar value for our shareholders and enables AOL to continue to aggressively execute on our strategy to create long-term shareholder value,” Tim Armstrong, the company’s chairman and chief executive, said in a statement.

The company said it would distribute a “significant portion” of the proceeds to restive shareholders, who have been awaiting positive results from the turnaround campaign from Mr. Armstrong.

Among those investors is Starboard Value L.P., which currently owns a 5.2 percent stake. In a letter on Feb. 24, the investment firm wrote that it remained disappointed in AOL’s progress, and announced a slate of candidates for the company’s board.

Shares in AOL leaped 35.6 percent, to $24.98, in premarket trading on Monday. They had fallen more than 8 percent over the last 12 months.

AOL was advised by Evercore Partners, Goldman Sachs and the law firms Wachtell, Lipton, Rosen & Katz and Finnegan, Henderson, Farabow, Garrett & Dunner.

Microsoft was advised by the law firm Covington & Burling.

The benefits from inventive activity are systematically overstated. Take, for example, a classic argument by George Stigler in his book The Organization of Industry (Chapter 11: “A Note on Patents”). Stigler posed the following question:

The main theoretical question posed in the production of knowledge . . . is how to bring about the correct amount of resources in the search for new knowledge. Does our present [published in 1968] patent system with its 17-year grant of exclusive possession of the knowledge bring forth approximately the right amount of research effort?

To which Stigler offered a further question and a tentative answer:

We normally give perpetual possession of a piece of capital to its maker and his heirs. The reason is simple: the marginal social product is the sum of all future yields of the piece of capital, and if capital is to be produced privately to where its marginal social product equals its marginal cost, the owner must receive all future yields. Why not the same rule for the producer of new knowledge?

The traditional formal answer, I assume, is that the new knowledge is usually sold monopolistically rather than competitively. The inventor of the safety razor does not have to compete with 500 equally attractive other new ways to shave, so he may charge a monopoly price for his razor. . . . Thus with a perpetual patent system too many resources would go into research and innovation.

Stigler’s answer is correct, as far as it goes, but it doesn’t go very far. There is another, perhaps greater, problem with treating knowledge like a piece of physical capital than the one addressed by Stigler:  it presumes that new knowledge created at time t would not have been created subsequently at any later date, say, time t+x. If the knowledge created at time t would, in any case, have been created subsequently at time t+x, then giving the inventor a perpetual right to the invention overcompensates him inasmuch as he contributed only x years, not an infinite length of time, to society’s use of the invention.  The analysis is further clouded by the fact that every inventor faces uncertainty about whether he or some other inventor will obtain the patent for the invention on which he is working.  The existence of patents creates countervailing incentives to engage in inventive activity.
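A back-of-the-envelope calculation (my numbers, purely illustrative) shows how large the overcompensation can be. Suppose the invention yields a constant social flow of $v$ per year and the discount rate is $r$. A perpetual property right awards the inventor the full present value $v/r$, but if a rival would independently have made the same invention $x$ years later, the inventor’s marginal social contribution is only the flow during those $x$ years:

$$\frac{v}{r}\left(1 - e^{-rx}\right).$$

With $r = 0.05$ and $x = 5$, that comes to only about 22 percent of the perpetuity value, so a perpetual right would overcompensate the inventor more than fourfold.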

But the AOL/Microsoft patent deal illustrates another, wholly insidious, result of the explosion in intellectual-property enforcement, which is that patents have become weapons with which competitors in high-tech industries engage in a zero- (or even negative-) sum game of legal warfare with each other (“an arms race within the technology industry”). The resources devoted to the creation and protection of intellectual property are largely wasted, making it very questionable whether the benefits from the investment in inventive and innovative activity that the patent system is theoretically supposed to encourage are now, in fact, greater than the value of the resources wasted in the process of obtaining and using those rights.

In his seminal 1959 article on the Federal Communications Commission – how many people know that it was in this article, not the better known “The Problem of Social Cost” published the following year, that the Coase Theorem was first stated and proved? – Ronald Coase made the following incredibly important observation at the end of footnote 54 on p. 27.

A waste of resources may result when the criteria used by courts to delimit rights result in resources being employed solely to establish a claim.

So it seems that the current intellectual property regime is causing a substantial, perhaps huge, shift of resources from inventive activity, i.e., creating new and better products or creating new and less costly ways of producing existing products, to establishing claims to intellectual property that can then be used offensively or defensively against others (“amassing intellectual property rights as ammunition against competitors”).  Such gains as are being generated from this kind of activity are, I conjecture, going disproportionately to those earning high incomes from intellectual property rights established through this costly process.  The corresponding losses associated with creating intellectual property rights are probably distributed more evenly across individuals than the gains, so that the net effect is to increase income inequality even as the rate of growth in aggregate income and wealth is slowed down.

Guardians of Our Liberties

In an oral argument before the Supreme Court on Tuesday March 27 in the case Dept. of Health and Human Services Et Al. v. Florida Et Al., about the Constitutionality of the individual health-insurance mandate, Justice Anthony Kennedy made the following statement expressing deep skepticism that the power claimed by the Obama administration to compel individuals to purchase health insurance against their will is compatible with our traditional understanding, embodied in the common law and our jurisprudence, of the relationship between an individual citizen and his or her government.

JUSTICE KENNEDY: But the reason, the reason this is concerning, is because it requires the individual to do an affirmative act. In the law of torts our tradition, our law, has been that you don’t have the duty to rescue someone if that person is in danger. The blind man is walking in front of a car and you do not have a duty to stop him absent some relation between you. And there is some severe moral criticisms of that rule, but that’s generally the rule.

And here the government is saying that the Federal Government has a duty to tell the individual citizen that it must act, and that is different from what we have in previous cases and that changes the relationship of the Federal Government to the individual in a very fundamental way.

Following Justice Kennedy’s pronouncement, the Justices and the lawyers kept referring to the existence or the non-existence of a “limiting principle” that would prevent the government, if its power to impose an individual mandate were granted, from exercising an unlimited power over the economic decisions of individuals under the “commerce clause.” By all accounts, Chief Justice Roberts and Justices Scalia and Alito expressed concerns similar to those of Justice Kennedy. Justice Thomas, as is his wont, remained silent during the oral argument, but he has already written skeptically about the extent to which the “commerce clause” has been used in earlier cases to justify government regulation of private economic activity.

A few days later, in the case Florence v. Board of Chosen Freeholders of County of Burlington Et Al., Justice Kennedy, writing for a majority (Chief Justice Roberts, and Justices Scalia, Thomas, and Alito) of the Court, upheld the discretionary power of jail officials to strip-search detainees arrested for any offense, regardless of whether there was probable cause to suspect the detainee of having contraband on his person. According to press reports, a nun arrested at an anti-war protest was subjected to a strip search under the discretionary authority approved by Justice Kennedy and his four learned colleagues. Here is an excerpt chosen more or less randomly from Justice Kennedy’s opinion.

Petitioner’s proposal―that new detainees not arrested for serious crimes or for offenses involving weapons or drugs be exempt from invasive searches unless they give officers a particular reason to suspect them of hiding contraband―is unworkable. The seriousness of an offense is a poor predictor of who has contraband, and it would be difficult to determine whether individual detainees fall within the proposed exemption. Even persons arrested for a minor offense may be coerced by others into concealing contraband. Exempting people arrested for minor offenses from a standard search protocol thus may put them at greater risk and result in more contraband being brought into the detention facility.

It also may be difficult to classify inmates by their current and prior offenses before the intake search. Jail officials know little at the outset about an arrestee, who may be carrying a false ID or lie about his identity. The officers conducting an initial search often do not have access to criminal history records. And those records can be inaccurate or incomplete. Even with accurate information, officers would encounter serious implementation difficulties. They would be required to determine quickly whether any underlying offenses were serious enough to authorize the more invasive search protocol. Other possible classifications based on characteristics of individual detainees also might prove to be unworkable or even give rise to charges of discriminatory application. To avoid liability, officers might be inclined not to conduct a thorough search in any close case, thus creating unnecessary risk for the entire jail population. While the restrictions petitioner suggests would limit the intrusion on the privacy of some detainees, it would be at the risk of increased danger to everyone in the facility, including the less serious offenders. The Fourth and Fourteenth Amendments do not require adoption of the proposed framework.

One can’t help but wonder what limiting principle these five honorable justices would articulate in circumscribing the authority to conduct a “reasonable search and seizure” under the Fourth Amendment to the Constitution.  But I really don’t want to go there.

Why Low Marginal Tax Rates Might Have Harmful Side Effects

My post last week about marginal tax rates has received a fair amount of attention on the web, being mentioned last week by Noah Smith and today by Andrew Sullivan and Kevin Drum. (Drum, by the way, was mistaken in suggesting that I intended to link reduced marginal tax rates with the financial crisis; I was talking about a long-run, not a cyclical, effect.) As a couple of commenters on that post noted, I didn’t fully explain why reducing marginal rates would have led to such a big expansion of the financial sector. Kevin Drum raised the point explicitly in his post. After quoting a couple of passages in which I explained why reducing marginal rates on income might have led to the expansion of the financial sector, Drum registers his conflicted response.

Count me in! I’m totally ready to believe this.

Except that I don’t get it. It’s certainly true that marginal tax rates have declined dramatically since 1980. It’s also true that the financial sector has expanded dramatically since 1980. But what evidence is there that low tax rates caused that expansion? Does finance benefit from lower taxes more than other industries, thanks to the sheer number of transactions it engages in? Or what? There’s a huge missing step here. Can anyone fill it in?

So Drum wants to know why reducing marginal rates might have caused an expansion of the financial sector. Obviously multiple causes may have been working to expand the financial-services sector; I was focusing on just one, but did not mean to suggest that it was the only one. But why would reduced marginal tax rates have any tendency to increase the size of the financial sector relative to other sectors? The connection, it seems to me, is that doing the kind of research necessary to come up with information that traders can put to profitable use requires very high cognitive and analytical skills, skills associated with success in mathematics, engineering, and applied and pure scientific research. In addition, I am positing that, at equal levels of remuneration, most students would choose a career in one of the latter fields over a career in finance. Indeed, I would suggest that most students about to embark on a career would choose a career in the sciences, technology, or engineering over a career in finance even if it meant a sacrifice in income. If inducing someone with the mental abilities necessary for a successful career in science or technology to choose finance instead requires what economists call a compensating difference in remuneration, then the higher the marginal tax rate, the greater the compensating difference in pre-tax income necessary to induce prospective job candidates to choose a career in finance.
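The arithmetic of the compensating difference (a stylized illustration with made-up numbers) makes the mechanism explicit. If a prospective recruit requires an after-tax premium $\Delta$ to choose finance over science, the pre-tax premium a financial firm must offer at marginal tax rate $t$ is

$$\frac{\Delta}{1 - t},$$

which rises sharply with $t$. With $\Delta = \$20{,}000$, the required pre-tax premium is about $\$29{,}000$ at a 30 percent marginal rate but about $\$67{,}000$ at a 70 percent rate. Cutting top marginal rates thus cuts, possibly by more than half, the pre-tax premium finance must pay to bid talent away from science and engineering.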

So reductions in marginal tax rates in the 1980s enabled the financial sector to bid away talented young people from other occupations and sectors who would otherwise have pursued careers in science and technology. The infusion of brain power helped the financial sector improve the profitability of its trading operations, profits that came at the expense of less sophisticated financial firms and unprofessional traders, encouraging a further proliferation of products to trade and of strategies for trading them.

Now although this story makes sense, simple logic is not enough to establish my conjecture. The magnitude of the effects that I am talking about can’t be determined from the kind of simple arm-chair theorizing that I am engaging in. That’s why I am not willing to make a flat statement that reducing marginal income tax rates has, on balance, had a harmful effect on economic performance. And even if I were satisfied that reducing marginal tax rates has had a harmful effect on economic performance, I still would want to be sure that there aren’t other ways of addressing those harmful effects before I would advocate raising marginal tax rates as a remedy.  But the logic, it seems to me, is solid.

Nor is the logic limited to just the financial sector.  There is a whole range of other economic activities in which social and private gains are not equal.  In all such cases, high marginal tax rates operate to reduce the incentive to misdirect resources.  But a discussion of those other activities will have to wait for another occasion.

Reading John Taylor’s Mind

Last Saturday, John Taylor posted a very favorable comment on Robert Hetzel’s new book, The Great Recession: Market Failure or Policy Failure? Developing ideas from an important article he published in the Federal Reserve Bank of Richmond Economic Quarterly, Hetzel argues that it was mainly tight monetary policy in 2008, not the bursting of the housing bubble and its repercussions, that caused the financial crisis in the weeks after Lehman Brothers collapsed in September 2008. Hetzel thus makes an argument that has obvious attractions for Taylor, attributing the Great Recession to the mistaken policy choices of the Federal Open Market Committee, rather than to any deep systemic flaw in modern free-market capitalism. Nevertheless, Taylor’s apparent endorsement of Hetzel’s main argument comes as something of a surprise, inasmuch as Taylor has sharply criticized Fed policies aiming to provide monetary stimulus since the crisis. However, if the Great Recession (Little Depression) was itself caused by overly tight monetary policy in 2008, it is not so easy to argue that a compensatory easing of monetary policy would not be appropriate.

While acknowledging the powerful case that Hetzel makes against Fed policy in 2008 as the chief cause of the Great Recession, Taylor tries very hard to reconcile this view with his previous focus on Fed policy in 2003-05 as the main cause of all the bad stuff that happened subsequently.

One area of disagreement among those who agree that deviations from sensible policy rules were a cause of the deep crisis is how much emphasis to place on the “too low for too long” period around 2003-2005—which, as I wrote in Getting Off Track, helped create an excessive boom, higher inflation, a risk-taking search for yield, and the ultimate bust—compared with the “too tight” period when interest rates got too high in 2007 and 2008 and thereby worsened the decline in GDP growth and the recession.

In my view these two episodes are closely connected in the sense that if rates had not been held too low for too long in 2003-2005 then the boom and the rise in inflation would likely have been avoided, and the Fed would not have found itself in a position of raising rates so much in 2006 and then keeping them relatively high in 2008.

A bit later, Taylor continues:

[T]here is a clear connection between the too easy period and the too tight period, much like the connection between the “go” and the “stop” in “go-stop” monetary policy, which those who warn about too much discretion are concerned with. I have emphasized the “too low for too long” period in my writing because of its “enormous implications” (to use Hetzel’s description) for the crisis and the recession which followed. Now this does not mean that people are incorrect to say that the Fed should have cut interest rates sooner in 2008. It simply says that the Fed’s actions in 2003-2005 should be considered as a possible part of the problem along with the failure to move more quickly in 2008.

Moreover, last November, when targeting nominal GDP made a big splash, receiving endorsements from such notables as Christina Romer and Paul Krugman, Taylor criticized NGDP targeting on his blog and through his flack Amity Shlaes.

A more fundamental problem is that, as I said in 1985, “The actual instrument adjustments necessary to make a nominal GNP rule operational are not usually specified in the various proposals for nominal GNP targeting. This lack of specification makes the policies difficult to evaluate because the instrument adjustments affect the dynamics and thereby the influence of a nominal GNP rule on business-cycle fluctuations.” The same lack of specificity is found in recent proposals. It may be why those who propose the idea have been reluctant to show how it actually would work over a range of empirical models of the economy as I have been urging here. Christina Romer’s article, for instance, leaves the instrument decision completely unspecified, in a do-whatever-it-takes approach. More quantitative easing, promising low rates for longer periods, and depreciating the dollar are all on her list. NGDP targeting may seem like a policy rule, but it does not give much quantitative operational guidance about what the central bank should do with the instruments. It is highly discretionary. Like the wolf dressed up as a sheep, it is discretion in rules clothing.

For this reason, as Amity Shlaes argues in her recent Bloomberg piece, NGDP targeting is not the kind of policy that Milton Friedman would advocate. In Capitalism and Freedom, he argued that this type of targeting procedure is stated in terms of “objectives that the monetary authorities do not have the clear and direct power to achieve by their own actions.” That is why he preferred instrument rules like keeping constant the growth rate of the money supply. It is also why I have preferred instrument rules, either for the money supply, or for the short term interest rate.

Taylor does not indicate whether, after reading Hetzel’s book, he is now willing to reassess either his view that monetary policy should be tightened or his negative view of NGDP targeting. However, following Taylor’s post on Saturday, David Beckworth wrote an optimistic post suggesting that Taylor was coming round to Market Monetarism and NGDP targeting. Scott Sumner followed up Beckworth’s post with an optimistic one of his own, more or less welcoming Taylor to the ranks of Market Monetarists. But Marcus Nunes, in his comment on Taylor’s post about Hetzel, may have the more realistic view of what Taylor is thinking, observing that Taylor may have mischaracterized Hetzel’s view of the 2003-04 period, thereby allowing himself to continue to identify Fed easing in 2003 as the source of everything bad that happened subsequently. And Bill Woolsey also seems to think that Marcus’s take on Taylor is the right one.

But no doubt Professor Taylor will soon provide us with further enlightenment on his mental state. We hang on his next pronouncement.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
