Posts Tagged 'John Taylor'

The Free Market Economy Is Awesome and Fragile

Scott Sumner’s three most recent posts (here, here, and here) have been really great, and I’d like to comment on all of them. I will start with a comment on his post discussing whether the free market economy is stable; perhaps I will get around to the other two next week. Scott uses a 2009 paper by Robert Hetzel as the starting point for his discussion. Hetzel distinguishes between those who view the stabilizing properties of price adjustment as being overwhelmed by real instabilities reflecting fluctuations in consumer and entrepreneurial sentiment – waves of optimism and pessimism – and those who regard the economy as either perpetually in equilibrium (RBC theorists) or just usually in equilibrium (Monetarists) unless destabilized by monetary shocks. Scott classifies himself, along with Hetzel and Milton Friedman, in the latter category.

Scott then brings Paul Krugman into the mix:

Friedman, Hetzel, and I all share the view that the private economy is basically stable, unless disturbed by monetary shocks. Paul Krugman has criticized this view, and indeed accused Friedman of intellectual dishonesty, for claiming that the Fed caused the Great Depression. In Krugman’s view, the account in Friedman and Schwartz’s Monetary History suggests that the Depression was caused by an unstable private economy, which the Fed failed to rescue because of insufficiently interventionist monetary policies. He thinks Friedman was subtly distorting the message to make his broader libertarian ideology seem more appealing.

This is a tricky topic for me to handle, because my own view of what happened in the Great Depression is in one sense similar to Friedman’s – monetary policy, not some spontaneous collapse of the private economy, was what precipitated and prolonged the Great Depression – but Friedman had a partial, simplistic and distorted view of how and why monetary policy failed. And although I believe Friedman was correct to argue that the Great Depression did not prove that the free market economy is inherently unstable and requires comprehensive government intervention to keep it from collapsing, I think that his account of the Great Depression was to some extent informed by his belief that his own simple k-percent rule for monetary growth was a golden bullet that would ensure economic stability and high employment.

I’d like to first ask a basic question: Is this a distinction without a meaningful difference? There are actually two issues here. First, does the Fed always have the ability to stabilize the economy, or does the zero bound sometimes render their policies impotent?  In that case the two views clearly do differ. But the more interesting philosophical question occurs when not at the zero bound, which has been the case for all but one postwar recession. In that case, does it make more sense to say the Fed caused a recession, or failed to prevent it?

Here’s an analogy. Someone might claim that LeBron James is a very weak and frail life form, whose legs will cramp up during basketball games without frequent consumption of fluids. Another might suggest that James is a healthy and powerful athlete, who needs to drink plenty of fluids to perform at his best during basketball games. In a sense, both are describing the same underlying reality, albeit with very different framing techniques. Nonetheless, I think the second description is better. It is a more informative description of LeBron James’s physical condition, relative to average people.

By analogy, I believe the private economy in the US is far more likely to be stable with decent monetary policy than is the economy of Venezuela (which can fall into depression even with sufficiently expansionary monetary policy, or indeed overly expansionary policies.)

I like Scott’s LeBron James analogy, but I have two problems with it. First, although LeBron James is a great player, he’s not perfect. Sometimes, even he messes up. When he messes up, it may not be his fault, in the sense that, with better information or better foresight – say, a little more rest in the second quarter – he might have sunk the game-winning three-pointer at the buzzer. Second, it’s one thing to say that a monetary shock caused the Great Depression, but maybe we just don’t know how to avoid monetary shocks. LeBron can miss shots; so can the Fed. Milton Friedman certainly didn’t know how to avoid monetary shocks, because his pet k-percent rule, as F. A. Hayek shrewdly observed, was simply a monetary shock waiting to happen. And John Taylor certainly doesn’t know how to avoid monetary shocks, because his pet rule would have caused the Fed to raise interest rates in 2011 with possibly devastating consequences. I agree that a nominal GDP level target would have resulted in a monetary policy superior to the policy the Fed has been conducting since 2008, but do I really know that? I am not sure that I do. The false promise held out by Friedman was that it is easy to get monetary policy right all the time. It certainly wasn’t the case for Friedman’s pet rule, and I don’t think that there is any monetary rule out there that we can be sure will keep us safe and secure and fully employed.
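To see concretely how a mechanical rule can prescribe tightening in a weak economy, here is a minimal sketch of the original Taylor (1993) rule. The 2011-style inputs below are my own illustrative assumptions, not official data, so the exact prescribed rate is only indicative.

```python
# Sketch of the Taylor (1993) rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap).
# All inputs are hypothetical, chosen only to illustrate the mechanism.

def taylor_rule(inflation, output_gap, r_star=0.02, pi_target=0.02):
    """Policy rate prescribed by the original Taylor rule (all rates as decimals)."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# With inflation a bit above target and a large negative output gap, the rule
# can still call for a clearly positive policy rate, well above the near-zero
# rate the Fed actually maintained in 2011:
prescribed = taylor_rule(inflation=0.025, output_gap=-0.05)
print(prescribed)  # roughly 2% + 2.5% + 0.25% - 2.5%, i.e. about 2.25%
```

The point is not the particular number but that the rule responds only to its two inputs: it has no term through which other information about the state of the economy could temper the prescription.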

But going beyond the LeBron analogy, I would make a further point. We just have no theoretical basis for saying that the free-market economy is stable. We can prove that, under some assumptions – and it is, to say the least, debatable whether the assumptions could properly be described as reasonable – a model economy corresponding to the basic neoclassical paradigm can be solved for an equilibrium solution. The existence of an equilibrium solution means basically that the neoclassical model is logically coherent, not that it tells us much about how any actual economy works. The pieces of the puzzle could all be put together in a way so that everything fits, but that doesn’t mean that in practice there is any mechanism whereby that equilibrium is ever reached or even approximated.

The argument for the stability of the free market that we learn in our first course in economics, which shows us how price adjusts to balance supply and demand, is an argument that, when every market but one – well, actually two, but we don’t have to quibble about it – is already in equilibrium, price adjustment in the remaining market – if it is small relative to the rest of the economy – will bring that market into equilibrium as well. That’s what I mean when I refer to the macrofoundations of microeconomics. But when many markets are out of equilibrium, even the markets that seem to be in equilibrium (with amounts supplied and demanded equal) are not necessarily in equilibrium, because the price adjustments in other markets will disturb the seeming equilibrium of the markets in which supply and demand are momentarily equal. So there is not necessarily any algorithm, either in theory or in practice, by which price adjustments in individual markets would ever lead the economy into a state of general equilibrium. If we believe that the free market economy is stable, our belief is therefore not derived from any theoretical proof of the stability of the free market economy, but rests simply on an intuition, and some sort of historical assessment that free markets tend to work well most of the time. I would just add that, in his seminal 1937 paper, “Economics and Knowledge,” F. A. Hayek actually made just that observation, though it is not an observation that he, or most of his followers – with the notable and telling exceptions of G. L. S. Shackle and Ludwig Lachmann – made a big fuss about.

Axel Leijonhufvud, who is certainly an admirer of Hayek, addresses the question of the stability of the free-market economy in terms of what he calls a corridor. If you think of an economy moving along a time path, and if you think of the time path that would be followed by the economy if it were operating at a full-employment equilibrium, Leijonhufvud’s corridor hypothesis is that the actual time path of the economy tends to revert to the equilibrium time path as long as deviations from the equilibrium are kept within certain limits, those limits defining the corridor. However, if the economy, for whatever reasons (exogenous shocks or some other mishaps) leaves the corridor, the spontaneous equilibrating tendencies causing the actual time path to revert back to the equilibrium time path may break down, and there may be no further tendency for the economy to revert back to its equilibrium time path. And as I pointed out recently in my post on Earl Thompson’s “Reformulation of Macroeconomic Theory,” he was able to construct a purely neoclassical model with two potential equilibria, one of which was unstable so that a shock from the lower equilibrium would lead either to a reversion to the higher-level equilibrium or to a downward spiral with no endogenous stopping point.
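The corridor idea can be caricatured in a few lines of code. This toy dynamic is my own construction, not Leijonhufvud’s model: inside the corridor the gap from the equilibrium path decays, outside it the gap feeds on itself.

```python
# Toy illustration (my construction, not Leijonhufvud's) of the corridor
# hypothesis: small deviations from the equilibrium time path are damped,
# large ones are amplified. Parameters are arbitrary.

def step(deviation, corridor=10.0):
    """One period of adjustment of the gap from the equilibrium path."""
    if abs(deviation) <= corridor:
        return 0.8 * deviation   # equilibrating forces pull the gap toward zero
    return 1.2 * deviation       # outside the corridor, the gap grows

def simulate(initial_gap, periods=20):
    gap = initial_gap
    for _ in range(periods):
        gap = step(gap)
    return gap

print(simulate(5.0))    # small shock: gap decays toward zero
print(simulate(15.0))   # large shock: gap keeps growing, no endogenous stopping point
```

The same nonlinearity is what makes Thompson’s two-equilibria model possible: local stability near one equilibrium is fully compatible with unbounded divergence after a large enough shock.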

Having said all that, I still agree with Scott’s bottom line: if the economy is operating below full employment, and inflation and interest rates are low, there is very likely a problem with monetary policy.

Milton Friedman’s Dumb Rule

Josh Hendrickson discusses Milton Friedman’s famous k-percent rule on his blog, using Friedman’s rule as a vehicle for an enlightening discussion of the time-inconsistency problem so brilliantly described by Finn Kydland and Edward Prescott in a classic paper published 36 years ago. Josh recognizes that Friedman’s rule is imperfect. At any given time, the k-percent rule is likely to involve either an excess demand for cash or an excess supply of cash, so that the economy would constantly be adjusting to a policy-induced macroeconomic disturbance. Obviously a less restrictive rule would allow the monetary authorities to achieve a better outcome. But Josh has an answer to that objection.

The k-percent rule has often been derided as a sub-optimal policy. Suppose, for example, that there was an increase in money demand. Without a corresponding increase in the money supply, there would be excess money demand that even Friedman believed would cause a reduction in both nominal income and real economic activity. So why would Friedman advocate such a policy?

The reason Friedman advocated the k-percent rule was not because he believed that it was the optimal policy in the modern sense of phrase, but rather that it limited the damage done by activist monetary policy. In Friedman’s view, shaped by his empirical work on monetary history, central banks tended to be a greater source of business cycle fluctuations than they were a source of stability. Thus, the k-percent rule would eliminate recessions caused by bad monetary policy.

That’s a fair statement of why Friedman advocated the k-percent rule. One of Friedman’s favorite epigrams was that one shouldn’t allow the best to be the enemy of the good, meaning that the pursuit of perfection is usually not worth it. Perfection is costly, and usually merely good is good enough. That’s generally good advice. Friedman thought that allowing the money supply to expand at a moderate rate (say 3%) would avoid severe deflationary pressure and avoid significant inflation, allowing the economy to muddle through without serious problems.
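The objection Josh quotes can be made concrete with the equation of exchange, MV = Py. The sketch below uses hypothetical numbers of my own choosing: under a fixed k-percent rule, a rise in money demand (a fall in velocity) must show up as a fall in nominal income, because the rule forbids the offsetting injection of money.

```python
# Illustrative sketch (hypothetical numbers): the equation of exchange,
# M * V = P * y, under a fixed k-percent money-growth rule.

def nominal_income(money_supply, velocity):
    """Nominal income (P*y) implied by the equation of exchange."""
    return money_supply * velocity

k = 0.03        # Friedman-style fixed 3% annual money growth
M = 100.0       # initial money stock (arbitrary units)
V = 5.0         # initial velocity of circulation

baseline = nominal_income(M, V)        # 100 * 5 = 500
M_next = M * (1 + k)                   # rule-bound money stock next period
V_shock = 4.8                          # velocity falls: money demand has risen

# The rule cannot respond to the velocity shock, so nominal income falls
# even though money grew exactly on schedule (103 * 4.8 = 494.4 < 500).
after_shock = nominal_income(M_next, V_shock)
print(baseline, after_shock)
```

This is the sense in which the rule itself becomes a source of disturbance: any shift in the demand to hold money is transmitted, undamped, into nominal income.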

But behind that common-sense argument, there were deeper, more ideological, reasons for the k-percent rule. The k-percent rule was also part of Friedman’s attempt to provide a libertarian/conservative alternative to the gold standard, which Friedman believed was both politically impractical and economically undesirable. However, the gold standard for over a century had been viewed by supporters of free-market liberalism as a necessary check on government power and as a bulwark of liberty. Friedman, desiring to offer a modern version of the case for classical liberalism (which has somehow been renamed neo-liberalism), felt that the k-percent rule, importantly combined with a regime of flexible exchange rates, could serve as an ideological substitute for the gold standard.

To provide a rationale for why the k-percent rule was preferable to simply trying to stabilize the price level, Friedman had to draw on a distinction between the aims of monetary policy and the instruments of monetary policy. Friedman argued that a rule specifying that the monetary authority should stabilize the price level was too flexible, granting the monetary authority too much discretion in its decision making.

The price level is not a variable over which the monetary authority has any direct control. It is a target, not an instrument. Specifying a price-level target allows the monetary authority discretion in its choice of instruments to achieve the target. Friedman actually made a similar argument about the gold standard in a paper called “Real and Pseudo Gold Standards.” The price of gold is a target, not an instrument. The monetary authority can achieve its target price of gold with more than one policy. Unless you define the rule in terms of the instruments of the central bank, you have not taken away the discretionary power of the monetary authority. In his anti-discretionary zeal, Friedman believed that he had discovered an argument that trumped advocates of the gold standard.

Of course there was a huge problem with this argument, though Friedman was rarely called on it. The money supply, under any definition that Friedman ever entertained, is no more an instrument of the monetary authority than the price level. Most of the money instruments included in any of the various definitions of money Friedman entertained for purposes of his k-percent rule are privately issued. So Friedman’s claim that his rule would eliminate the discretion of the monetary authority in its use of instruments was clearly false. Now, one might claim that when Friedman originally advanced the rule in his Program for Monetary Stability, the rule was formulated in the context of a proposal for 100-percent reserves. However, the proposal for 100-percent reserves would inevitably have to identify those deposits subject to the 100-percent requirement and those exempt from the requirement. Once it is possible to convert the covered deposits into higher-yielding uncovered deposits, monetary policy would not be effective if it controlled only the growth of deposits subject to a 100-percent reserve requirement.

In his chapter on monetary policy in The Constitution of Liberty, F. A. Hayek effectively punctured Friedman’s argument that a monetary authority could operate effectively without some discretion in its use of instruments to execute a policy aimed at some agreed-upon policy goal. It is a category error to equate the discretion of the monetary authority in the choice of its policy instruments with the discretion of the government in applying coercive sanctions against the persons and property of private individuals. It is true that Hayek later modified his views about central banks, but that change in his views was at least in part attributable to a misunderstanding. Hayek erroneously believed that his discovery that competition in the supply of money is possible without driving the value of money down to zero meant that competitive banks would compete to create an alternative monetary standard that would be superior to the existing standard legally established by the monetary authority. His conclusion did not follow from his premise.

In a previous post, I discussed how Hayek also memorably demolished Friedman’s argument that, although the k-percent rule might not be the theoretically best rule, it would at least be a good rule that would avoid the worst consequences of misguided monetary policies producing either deflation or inflation. John Taylor, accepting the Hayek Prize from the Manhattan Institute, totally embarrassed himself by flagrantly misunderstanding what Hayek was talking about. Here are the two relevant passages from Hayek. The first is from his pamphlet, Full Employment at any Price?

I wish I could share the confidence of my friend Milton Friedman who thinks that one could deprive the monetary authorities, in order to prevent the abuse of their powers for political purposes, of all discretionary powers by prescribing the amount of money they may and should add to circulation in any one year. It seems to me that he regards this as practicable because he has become used for statistical purposes to draw a sharp distinction between what is to be regarded as money and what is not. This distinction does not exist in the real world. I believe that, to ensure the convertibility of all kinds of near-money into real money, which is necessary if we are to avoid severe liquidity crises or panics, the monetary authorities must be given some discretion. But I agree with Friedman that we will have to try and get back to a more or less automatic system for regulating the quantity of money in ordinary times. The necessity of “suspending” Sir Robert Peel’s Bank Act of 1844 three times within 25 years after it was passed ought to have taught us this once and for all.

In the Denationalization of Money, Hayek was more direct:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if it ever became known that the amount of cash in circulation was approaching the upper limit and that therefore a need for increased liquidity could not be met.

And in a footnote, Hayek added:

To such a situation the classic account of Walter Bagehot . . . would apply: “In a sensitive state of the English money market the near approach to the legal limit of reserve would be a sure incentive to panic; if one-third were fixed by law, the moment the banks were close to one-third, alarm would begin and would run like magic.”

So Friedman’s k-percent rule was dumb, really dumb. It was dumb, because it induced expectations that made it unsustainable. As Hayek observed, not only was the theory clear, but it was confirmed by the historical evidence from the nineteenth century. Unfortunately, it had to be reconfirmed one more time in 1982 before the Fed abandoned its own misguided attempt to implement a modified version of the Friedman rule.

Who Sets the Real Rate of Interest?

Understanding economics requires, among other things, understanding the distinction between real and nominal variables. Confusion between real and nominal variables is pervasive, constantly presenting barriers to clear thinking, and snares and delusions for the mentally lazy. In this post, I want to talk about the distinction between the real rate of interest and the nominal rate of interest. That distinction has been recognized for at least a couple of centuries, Henry Thornton having mentioned it early in the nineteenth century. But the importance of the distinction wasn’t really fully understood until Irving Fisher made the distinction between the real and nominal rates of interest a key element of his theory of interest and his theory of money, expressing the relationship in algebraic form — what we now call the Fisher equation. Notation varies, but the Fisher equation can be written more or less as follows:

i = r + dP/dt,

where i is the nominal rate, r is the real rate, and dP/dt is the rate of inflation. It is important to bear in mind that the Fisher equation can be understood in two very different ways. It can either represent an ex ante relationship, with dP/dt referring to expected inflation, or it can represent an ex post relationship, with dP/dt referring to actual inflation.

What I want to discuss in this post is the tacit assumption that usually underlies our understanding, and our application, of the ex ante version of the Fisher equation. There are three distinct variables in the Fisher equation: the real and the nominal rates of interest and the rate of inflation. If we think of the Fisher equation as an ex post relationship, it holds identically, because the unobservable ex post real rate is defined as the difference between the nominal rate and the inflation rate. The ex post, or the realized, real rate has no independent existence; it is merely a semantic convention. But if we consider the more interesting interpretation of the Fisher equation as an ex ante relationship, the real interest rate, though still unobservable, is not just a semantic convention. It becomes the theoretically fundamental interest rate of capital theory — the market rate of intertemporal exchange, reflecting, as Fisher masterfully explained in his canonical renderings of the theory of capital and interest, the “fundamental” forces of time preference and the productivity of capital. Because it is determined by economic “fundamentals,” economists of a certain mindset naturally assume that the real interest rate is independent of monetary forces, except insofar as monetary factors are incorporated in inflation expectations. But if money is neutral, at least in the long run, then the real rate has to be independent of monetary factors, at least in the long run. So in most expositions of the Fisher equation, it is tacitly assumed that the real rate can be treated as a parameter determined, outside the model, by the “fundamentals.” With r determined exogenously, fluctuations in i are correlated with, and reflect, changes in expected inflation.
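The two readings of the Fisher equation can be put side by side numerically. The figures below are hypothetical, chosen only to show that the ex ante relation fixes the nominal rate given expected inflation, while the ex post real rate is defined residually from whatever inflation actually turns out to be.

```python
# Minimal numerical illustration of the Fisher equation, i = r + dP/dt,
# in its ex ante and ex post senses. All rates are hypothetical decimals.

def ex_ante_nominal_rate(real_rate, expected_inflation):
    """Ex ante: the nominal rate implied by the real rate and expected inflation."""
    return real_rate + expected_inflation

def ex_post_real_rate(nominal_rate, actual_inflation):
    """Ex post: the realized real rate, defined residually from actual inflation."""
    return nominal_rate - actual_inflation

i = ex_ante_nominal_rate(0.02, 0.03)   # r = 2%, expected inflation = 3%, so i = 5%
# If inflation turns out to be 4% rather than the expected 3%, the realized
# real rate is only 1%, even though the ex ante real rate was 2%:
r_realized = ex_post_real_rate(i, 0.04)
print(i, r_realized)
```

The asymmetry matters: the ex post version holds identically by construction, whereas the ex ante version is a substantive claim about how the market prices intertemporal exchange.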

Now there’s an obvious problem with the Fisher equation, which is that in many, if not most, monetary models, going back to Thornton and Wicksell in the nineteenth century, and to Hawtrey and Keynes in the twentieth, and in today’s modern New Keynesian models, it is precisely by way of changes in its lending rate to the banking system that the central bank controls the rate of inflation. And in this framework, the nominal interest rate is negatively correlated with inflation, not positively correlated, as implied by the usual understanding of the Fisher equation. Raising the nominal interest rate reduces inflation, and reducing the nominal interest rate raises inflation. The conventional resolution of this anomaly is that the change in the nominal interest rate is just temporary, so that, after the economy adjusts to the policy of the central bank, the nominal interest rate also adjusts to a level consistent with the exogenous real rate and to the rate of inflation implied by the policy of the central bank. The Fisher equation is thus an equilibrium relationship, while central-bank policy operates by creating a short-term disequilibrium. But the short-term disequilibrium imposed by the central bank cannot be sustained, because the economy inevitably begins an adjustment process that restores the equilibrium real interest rate, a rate determined by fundamental forces that eventually override any nominal interest rate set by the central bank if that rate is inconsistent with the equilibrium real interest rate and the expected rate of inflation.

It was just this view of the powerlessness of the central bank to hold the nominal interest rate below the sum of the exogenously determined equilibrium real rate and the expected rate of inflation that led Milton Friedman, by analogy, to the idea of a “natural rate of unemployment” when he argued that monetary policy could not keep the unemployment rate below the “natural rate ground out by the Walrasian system of general equilibrium equations.” Having been used by Wicksell as a synonym for the Fisherian equilibrium real rate, the term “natural rate” was undoubtedly adopted by Friedman, because monetarily induced deviations between the actual rate of unemployment and the natural rate of unemployment set in motion an adjustment process that restores unemployment to its “natural” level, just as any deviation between the nominal interest rate and the sum of the equilibrium real rate and expected inflation triggers an adjustment process that restores equality between the nominal rate and the sum of the equilibrium real rate and expected inflation.

So, if the ability of the central bank to use its power over the nominal rate to control the real rate of interest is as limited as the conventional interpretation of the Fisher equation suggests, here’s my question: When critics of monetary stimulus accuse the Fed of rigging interest rates, using the Fed’s power to keep interest rates “artificially low,” taking bread out of the mouths of widows, orphans and millionaires, what exactly are they talking about? The Fed has no legal power to set interest rates; it can only announce what interest rate it will lend at, and it can buy and sell assets in the market. It has an advantage because it can create the money with which to buy assets. But if you believe that the Fed cannot reduce the rate of unemployment below the “natural rate of unemployment” by printing money, why would you believe that the Fed can reduce the real rate of interest below the “natural rate of interest” by printing money? Martin Feldstein and the Wall Street Journal believe that the Fed is unable to do one, but perfectly able to do the other. Sorry, but I just don’t get it.

Look at the accompanying chart. It tracks the three variables in the Fisher equation (the nominal interest rate, the real interest rate, and expected inflation) from October 1, 2007 to July 2, 2013. To measure the nominal interest rate, I use the yield on 10-year Treasury bonds; to measure the real interest rate, I use the yield on 10-year TIPS; to measure expected inflation, I use the 10-year breakeven TIPS spread. The yield on the 10-year TIPS is an imperfect measure of the real rate, and the 10-year TIPS spread is an imperfect measure of inflation expectations, especially during financial crises, when the rates on TIPS are distorted by illiquidity in the TIPS market. Those aren’t the only problems with identifying the TIPS yield with the real rate and the TIPS spread with inflation expectations, but those variables usually do provide a decent approximation of what is happening to real rates and to inflation expectations over time.

[Chart: nominal interest rate, real interest rate, and expected inflation, October 2007 to July 2013]
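The decomposition the chart relies on is simple arithmetic, sketched below with hypothetical yields (not the actual series plotted): the TIPS yield proxies the real rate, and the breakeven spread, the nominal Treasury yield minus the TIPS yield, proxies expected inflation.

```python
# Sketch of the TIPS breakeven decomposition used in the chart.
# The yields below are hypothetical placeholders, not market data.

def breakeven_inflation(nominal_yield, tips_yield):
    """TIPS breakeven: the inflation rate that equalizes expected returns
    on a nominal Treasury and a TIPS of the same maturity."""
    return nominal_yield - tips_yield

treasury_10y = 0.025   # hypothetical 10-year nominal Treasury yield (2.5%)
tips_10y = 0.005       # hypothetical 10-year TIPS yield (0.5%)

expected_inflation = breakeven_inflation(treasury_10y, tips_10y)  # about 2%
print(expected_inflation)
```

As the text notes, this is only an approximation: illiquidity in the TIPS market, especially during financial crises, can distort both the measured real rate and the measured inflation expectation.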

Before getting to the main point, I want to make a couple of preliminary observations about the behavior of the real rate over time. First, notice that the real rate declined steadily, with a few small blips, from October 2007 to March 2008, when the Fed was reducing the Fed Funds target rate from 4.75% to 3% as the economy was sliding into a recession that officially began in December 2007. The Fed reduced the Fed Funds target to 2% at the end of April, but real interest rates had already started climbing in early March, so the failure of the FOMC to reduce the Fed Funds target again till October 2008, three weeks after the onset of the financial crisis, clearly meant that there was at least a passive tightening of monetary policy throughout the second and third quarters, helping create the conditions that precipitated the crisis in September. The rapid reduction in the Fed Funds target from 2% in October to 0.25% in December 2008 brought real interest rates down, but, despite the low Fed Funds rate, a lack of liquidity caused a severe tightening of monetary conditions in early 2009, forcing real interest rates to rise sharply until the Fed announced its first QE program in March 2009.

I won’t go into more detail about ups and downs in the real rate since March 2009. Let’s just focus on the overall trend. From that time forward, what we see is a steady decline in real interest rates from over 2% at the start of the initial QE program till real rates bottomed out in early 2012 at just over -1%. So, over a period of three years, there was a steady 3% decline in real interest rates. This was no temporary phenomenon; it was a sustained trend. I have yet to hear anyone explain how the Fed could have single-handedly produced a steady downward trend in real interest rates by way of monetary expansion over a period of three years. To claim that the decline in real interest rates was caused by monetary expansion on the part of the Fed flatly contradicts everything that we think we know about the determination of real interest rates. Maybe what we think we know is all wrong. But if it is, people who blame the Fed for a three-year decline in real interest rates that few reputable economists – and certainly no economists that Fed critics pay any attention to – ever thought was achievable by monetary policy ought to provide an explanation for how the Fed suddenly got new and unimagined powers to determine real interest rates. Until they come forward with such an explanation, Fed critics have a major credibility problem.

So please – please, Wall Street Journal editorial page, Martin Feldstein, John Taylor, et al. – enlighten us. We’re waiting.

PS Of course, there is a perfectly obvious explanation for the three-year-long decline in real interest rates, but not one very attractive to critics of QE. Either the equilibrium real interest rate has been falling since 2009, or the equilibrium real interest rate fell before 2009, but nominal rates adjusted slowly to the reduced real rate. The real interest rate might have adjusted more rapidly to the reduced equilibrium rate, but that would have required expected inflation to have risen. What that means is that sometimes it is the real interest rate, not, as is usually assumed, the nominal rate, that adjusts to the expected rate of inflation. My next post will discuss that alternative understanding of the implicit dynamics of the Fisher equation.

John Taylor, Post-Modern Monetary Theorist

In the beginning, there was Keynesian economics; then came Post-Keynesian economics.  After Post-Keynesian economics, came Modern Monetary Theory.  And now it seems, John Taylor has discovered Post-Modern Monetary Theory.

What, you may be asking yourself, is Post-Modern Monetary Theory all about? Great question!  In a recent post, Scott Sumner tried to deconstruct Taylor’s position, and found himself unable to determine just what it is that Taylor wants in the way of monetary policy.  How post-modern can you get?

Taylor is annoyed that the Fed is keeping interest rates too low by a policy of forward guidance, i.e., promising to keep short-term interest rates close to zero for an extended period while buying Treasuries to support that policy.

And yet—unlike its actions taken during the panic—the Fed’s policies have been accompanied by disappointing outcomes. While the Fed points to external causes, it ignores the possibility that its own policy has been a factor.

At this point, the alert reader is surely anticipating an explanation of why forward guidance aimed at reducing the entire term structure of interest rates, thereby increasing aggregate demand, has failed to do so, notwithstanding the teachings of both Keynesian and non-Keynesian monetary theory.  Here is Taylor’s answer:

At the very least, the policy creates a great deal of uncertainty. People recognize that the Fed will eventually have to reverse course. When the economy begins to heat up, the Fed will have to sell the assets it has been purchasing to prevent inflation.

Taylor seems to be suggesting that, despite low interest rates, the public is not willing to spend because of increased uncertainty.  But why wasn’t the public spending more in the first place, before all that nasty forward guidance?  Could it possibly have had something to do with business pessimism about demand and household pessimism about employment?  If the problem stems from an underlying state of pessimistic expectations about the future, the question arises whether Taylor considers such pessimism to be an element of, or related to, uncertainty?

I don’t know the answer, but Taylor posits that the public is assuming that the Fed’s policy will have to be reversed at some point. Why? Because the economy will “heat up.” As an economic term, the verb “to heat up” is pretty vague, but it seems to connote, at the very least, increased spending and employment. Which raises a further question: given a state of pessimistic expectations about future demand and employment, does a policy that, by assumption, increases the likelihood of additional spending and employment create uncertainty or diminish it?

It turns out that Taylor has other arguments for the ineffectiveness of forward guidance.  We can safely ignore his two throw-away arguments about on-again off-again asset purchases, and the tendency of other central banks to follow Fed policy.  A more interesting reason is provided when Taylor compares Fed policy to a regulatory price ceiling.

[I]f investors are told by the Fed that the short-term rate is going to be close to zero in the future, then they will bid down the yield on the long-term bond. The forward guidance keeps the long-term rate low and tends to prevent it from rising. Effectively the Fed is imposing an interest-rate ceiling on the longer-term market by saying it will keep the short rate unusually low.

The perverse effect comes when this ceiling is below what would be the equilibrium between borrowers and lenders who normally participate in that market. While borrowers might like a near-zero rate, there is little incentive for lenders to extend credit at that rate.

This is much like the effect of a price ceiling in a rental market where landlords reduce the supply of rental housing. Here lenders supply less credit at the lower rate. The decline in credit availability reduces aggregate demand, which tends to increase unemployment, a classic unintended consequence of the policy.

When economists talk about a price ceiling, what they usually mean is that there is some legal prohibition on transactions between willing parties at a price above a specified legal maximum price. If the prohibition is enforced, as are, for example, rent ceilings in New York City, some people trying to rent apartments will be unable to do so, even though they are willing to pay as much, or more, than others are paying for comparable apartments. The only rates that the Fed is targeting, directly or indirectly, are those on US Treasuries at various maturities. All other interest rates in the economy are what they are because, given the overall state of expectations, transactors are voluntarily agreeing to the terms reflected in those rates. For any given class of financial instruments, everyone willing to purchase or sell those instruments at the going rate is able to do so. For Professor Taylor to analogize this state of affairs to a price ceiling is not only novel, it is thoroughly post-modern.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
