Archive for the 'Larry Summers' Category

Larry Summers v. John Taylor: No Contest

It seems that an announcement about who will be appointed as Fed Chairman after Janet Yellen’s term expires early next year is imminent. Although there are sources in the Administration, e.g., the President, indicating that Janet Yellen may be reappointed, the betting odds strongly favor Jerome Powell, a Republican currently serving as a member of the Board of Governors, over the better-known contender, John Taylor, who has earned a considerable reputation as an academic economist, largely as author of the so-called Taylor Rule, and who has also served on the Council of Economic Advisers and at the Treasury in previous Republican administrations.

Taylor’s support seems to be drawn from the more militant ideological factions within the Republican Party, owing to his past criticism of the Fed’s quantitative-easing policy after the financial crisis and Little Depression; he famously predicted that quantitative easing would revive dormant inflationary pressures, presaging a return to the stagflation of the 1970s. Powell, by contrast, having supported the Fed’s policies under Bernanke and Yellen, is widely suspect in the eyes of the Republican base as just another elitist establishmentarian inhabiting the swamp that the new administration was elected to drain. Nevertheless, Taylor’s academic background, his prior government service, and his long-standing ties to US and international banking and financial institutions make him a less than ideal torchbearer for the true-blue (or true-red) swamp drainers, whose ostensible goal is less to take control of the Fed than to abolish it. To accommodate both the base and the establishment, it is possible that, as reported by Breitbart, both Powell and Taylor will be appointed, one replacing Yellen as chairman, the other replacing Stanley Fischer as vice-chairman.

Seeing no evidence that Taylor has a sufficient following for his appointment to provide any political benefit, I have little doubt that it will be Powell who replaces Yellen, possibly along with Taylor as Vice-Chairman, if Taylor, at the age of 71, is willing to accept a big pay cut, just to take the vice-chairmanship with little prospect of eventually gaining the top spot he has long coveted.

Although I think it unlikely that Taylor will be the next Fed Chairman, the recent flurry of speculation about his possible appointment prompted me to look at a recent talk that he gave at the Federal Reserve Bank of Boston Conference on the subject: Are Rules Made to be Broken? Discretion and Monetary Policy. The title of his talk, “Rules versus Discretion: Assessing the Debate over Monetary Policy,” is typical of Taylor’s very low-key style, a style that, to his credit, is certainly not calculated to curry favor with the Fed-bashers who make up a large share of a Republican base that demands constant attention and large and frequently dispensed servings of red meat.

I found several things in Taylor’s talk notable. First, and again to his credit, Taylor does, on occasion, acknowledge that interpretations of events different from his own are possible. Thus, in arguing that the good macroeconomic performance (“the Great Moderation”) from about 1985 to 2003 was the result of the widespread adoption of “rules-based” monetary policy, and that the subsequent financial crisis and deep recession were the result of the FOMC’s having shifted, after the 2001 recession, from that rules-based policy to a discretionary policy, keeping interest rates too low for too long, Taylor did at least recognize the possibility that the path of interest rates after 2003 departed from the path that, he claims, had been followed during the Great Moderation because the economy was entering a period of inherently greater instability in the early 2000s than in the previous two decades, owing to external conditions unrelated to actions taken by the Fed.

The other view is that the onset of poor economic performance was not caused by a deviation from policy rules that were working, but rather to other factors. For example, Carney (2013) argues that the deterioration of performance in recent years occurred because “… the disruptive potential of financial instability—absent effective macroprudential policies—leads to a less favourable Taylor frontier.” Carney (2013) illustrated his argument with a shift in the tradeoff frontier as did King (2012). The view I offer here is that the deterioration was due more to a move off the efficient policy frontier due to a change in policy. That would suggest moving back toward the type of policy rule that described policy decisions during the Great Moderation period. (p. 9)

But despite acknowledging the possibility of another view, Taylor offers not a single argument against it. He merely reiterates his own unsupported opinion that policy after 2003 became less rule-based than it had been from 1985 to 2003. Later in his talk, however, in a different context, Taylor does return to the argument, advanced by Bernanke, that the Fed’s policy after 2003 was not fundamentally different from its policy before 2003. Here Taylor assumes that Bernanke concedes that there was a shift away from the rules-based monetary policy of 1985 to 2003, while maintaining that post-2003 monetary policy, though not rule-based in the way that it had been from 1985 to 2003, was rule-based in a different sense. I don’t believe that Bernanke would accept that there was a fundamental change in the nature of monetary policy after 2003, but that is not really my concern here.

At a recent Brookings conference, Ben Bernanke argued that the Fed had been following a policy rule—including in the “too low for too long” period. But the rule that Bernanke had in mind is not a rule in the sense that I have used it in this discussion, or that many others have used it.

Rather it is a concept that all you really need for effective policy making is a goal, such as an inflation target and an employment target. In medicine, it would be the goal of a healthy patient. The rest of policymaking is doing whatever you as an expert, or you as an expert with models, thinks needs to be done with the instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok, if you can justify it in terms of the goal.

Bernanke and others have argued that this approach is a form of “constrained discretion.” It is an appealing term, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as the language would have you believe. Simply having a specific numerical goal or objective function is not a rule for the instruments of policy; it is not a strategy; in my view, it ends up being all tactics. I think there is evidence that relying solely on constrained discretion has not worked for monetary policy. (pp. 16-17)

Taylor has made this argument against constrained discretion before, in an op-ed in the Wall Street Journal (May 2, 2015). Responding to that argument, I wrote a post (“Cluelessness about Strategy, Tactics and Discretion”) which, I think, exposed how thoroughly confused Taylor is about what a monetary rule can accomplish, and about the difference between a monetary rule that specifies targets for an instrument and a monetary rule that specifies targets for policy goals. At an even deeper level, I believe I also showed that Taylor doesn’t understand the difference between strategy and tactics or the meaning of discretion. Here is an excerpt from my post of almost two and a half years ago.

Taylor denies that his steady refrain calling for a “rules-based policy” (i.e., the implementation of some version of his beloved Taylor Rule) is intended “to chain the Fed to an algebraic formula;” he just thinks that the Fed needs “an explicit strategy for setting the instruments” of monetary policy. Now I agree that one ought not to set a policy goal without a strategy for achieving the goal, but Taylor is saying that he wants to go far beyond a strategy for achieving a policy goal; he wants a strategy for setting instruments of monetary policy, which seems like an obvious confusion between strategy and tactics, ends and means.

Instruments are the means by which a policy is implemented. Setting a policy goal can be considered a strategic decision; setting a policy instrument a tactical decision. But Taylor is saying that the Fed should have a strategy for setting the instruments with which it implements its strategic policy.  (OED, “instrument – 1. A thing used in or for performing an action: a means. . . . 5. A tool, an implement, esp. one used for delicate or scientific work.”) This is very confused.

Let’s be very specific. The Fed, for better or for worse – I think for worse — has made a strategic decision to set a 2% inflation target. Taylor does not say whether he supports the 2% target; his criticism is that the Fed is not setting the instrument – the Fed Funds rate – that it uses to hit the 2% target in accordance with the Taylor rule. He regards the failure to set the Fed Funds rate in accordance with the Taylor rule as a departure from a rules-based policy. But the Fed has continually undershot its 2% inflation target for the past three [now almost six] years. So the question naturally arises: if the Fed had raised the Fed Funds rate to the level prescribed by the Taylor rule, would the Fed have succeeded in hitting its inflation target? If Taylor thinks that a higher Fed Funds rate than has prevailed since 2012 would have led to higher inflation than we experienced, then there is something very wrong with the Taylor rule, because, under the Taylor rule, the Fed Funds rate is positively related to the difference between the actual inflation rate and the target rate. If a Fed Funds rate higher than the rate set for the past three years would have led, as the Taylor rule implies, to lower inflation than we experienced, following the Taylor rule would have meant disregarding the Fed’s own inflation target. How is that consistent with a rules-based policy?

This is such an obvious point – and I am hardly the only one to have made it – that Taylor’s continuing failure to respond to it is simply inexcusable. In his apologetics for the Taylor rule and for legislation introduced (no doubt with his blessing and active assistance) by various Republican critics of Fed policy in the House of Representatives, Taylor repeatedly insists that the point of the legislation is just to require the Fed to state a rule that it will follow in setting its instrument, with no requirement that the Fed actually abide by its stated rule. The purpose of the legislation is not to obligate the Fed to follow the rule, but merely to require the Fed, when deviating from its own stated rule, to provide Congress with a rationale for the deviation. I don’t endorse the legislation that Taylor supports, but I do agree that it would be desirable for the Fed to be more forthcoming than it has been in explaining the reasoning behind its monetary-policy decisions, explanations that now tend to be either platitudinous or obfuscatory rather than informative. But if Taylor wants the Fed to be more candid and transparent in defending its decisions about monetary policy, it would be only fitting and proper for Taylor, as an aspiring Fed Chairman, to be more forthcoming than he has yet been about the obvious, and rather scary, implications of following the Taylor Rule during the period since 2003.

If Taylor is nominated to be Chairman or Vice-Chairman of the Fed, I hope that, during his confirmation hearings, he will be asked to explain what the implications of following the Taylor Rule would have been in the post-2003 period.

As the attached figure shows, PCE inflation (excluding food and energy prices) was 1.9 percent in 2004. If inflation in 2004 was less than the 2% inflation target assumed by the Taylor Rule, why does Taylor think that raising interest rates in 2004 would have been appropriate? And if inflation in 2005 was merely 2.2%, just barely above the 2% target, what level should the Fed Funds rate have reached in 2005, and how would that rate have affected the fairly weak recovery from the 2001 recession? And what is the basis for Taylor’s assessment that raising the Fed Funds rate in 2005 to a higher level than it actually reached would have prevented the subsequent financial crisis?
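To make the rule’s prescriptions concrete, here is a minimal sketch of the original Taylor (1993) rule. The inflation figures are the ones just cited; the output gaps are purely hypothetical placeholders, so the prescribed rates are illustrative rather than estimates of what the rule actually implied.

```python
# A minimal sketch of the original Taylor (1993) rule:
#   i = pi + r* + 0.5*(pi - pi*) + 0.5*gap
# where the equilibrium real rate r* and the inflation target pi*
# are both 2 percent in Taylor's formulation.

def taylor_rule(pi, gap, r_star=2.0, pi_star=2.0):
    """Fed Funds rate (percent) prescribed by the Taylor (1993) rule."""
    return pi + r_star + 0.5 * (pi - pi_star) + 0.5 * gap

# Core PCE inflation cited above: 1.9% (2004), 2.2% (2005).
# The output gaps are hypothetical placeholders, not estimates.
for year, pi, gap in [(2004, 1.9, -1.0), (2005, 2.2, 0.0)]:
    print(year, f"{taylor_rule(pi, gap):.2f}%")
# -> 3.35% for 2004 and 4.30% for 2005, far above the 1% Fed Funds
#    rate actually prevailing in mid-2004.
```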

Taylor’s implicit argument is that, by not raising interest rates as rapidly as the Taylor rule required, the Fed created additional uncertainty that was damaging to the economy. But what was the nature of the uncertainty created? The Federal Funds rate is merely the instrument of policy, not the goal of policy. To argue that the Fed was creating additional uncertainty by not changing its interest rate in line with the Taylor rule would make sense only if economic agents cared, for its own sake, about how the instrument is set. But the importance of the Fed Funds rate, precisely because it is an instrument, derives entirely from its usefulness in achieving the Fed’s policy goal, and that goal was the 2% inflation rate, which the Fed came extremely close to hitting in the 2004-06 period, the very period during which Taylor alleges that the Fed’s monetary policy went off the rails and became random, unpredictable and chaotic.

If you calculate the absolute difference between the observed yearly PCE inflation rate (excluding food and energy prices) and the 2% target from 1985 to 2003 (Taylor’s golden age of monetary policy), the average yearly deviation was 0.932%. From 2004 to 2015, the period of chaotic monetary policy in Taylor’s view, the average yearly deviation between PCE inflation and the 2% target was just 0.375%. So when was monetary policy more predictable? Even if you look at just the last 12 years of the golden age (1992 to 2003), the average annual deviation was 0.425%.
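The statistic in the previous paragraph is just a mean absolute deviation from the 2% target. Here is a sketch of the calculation; the inflation series below are made-up stand-ins for the actual core-PCE data from which the figures above were computed.

```python
# Average absolute gap (percentage points) between yearly inflation
# and the 2% target. The series are hypothetical; the post's numbers
# (0.932%, 0.375%, 0.425%) come from actual core-PCE data.

def mean_abs_deviation(series, target=2.0):
    return sum(abs(pi - target) for pi in series) / len(series)

golden_age = [3.8, 3.2, 2.6, 2.1, 1.8, 1.5]   # hypothetical 1985-2003-style years
post_2003  = [1.9, 2.2, 2.1, 1.8, 1.6, 1.5]   # hypothetical 2004-2015-style years
print(f"golden age: {mean_abs_deviation(golden_age):.3f}")
print(f"post-2003:  {mean_abs_deviation(post_2003):.3f}")
```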

The name Larry Summers is in the title of this post, but I haven’t mentioned him yet, so let me explain where Larry Summers comes into the picture. In his talk, Taylor mentions a debate about rules versus discretion that he and Summers had at the 2013 American Economic Association meetings and proceeds to give the following account of the key interchange in that debate.

Summers started off by saying: “John Taylor and I have, it will not surprise you . . . a fundamental philosophical difference, and I would put it in this way. I think about my doctor. Which would I prefer: for my doctor’s advice, to be consistently predictable, or for my doctor’s advice to be responsive to the medical condition with which I present? Me, I’d rather have a doctor who most of the time didn’t tell me to take some stuff, and every once in a while said I needed to ingest some stuff into my body in response to the particular problem that I had. That would be a doctor who’s [sic] [advice], believe me, would be less predictable.” Thus, Summers argues in favor of relying on an all-knowing expert, a doctor who does not perceive the need for, and does not use, a set of guidelines, but who once in a while in an unpredictable way says to ingest some stuff. But as in economics, there has been progress in medicine over the years. And much progress has been due to doctors using checklists, as described by Atul Gawande.

Of course, doctors need to exercise judgement in implementing checklists, but if they start winging it or skipping steps the patients usually suffer. Experience and empirical studies show that checklist-free medicine is wrought with dangers just as rules-free, strategy-free monetary policy is. (pp. 15-16)

Taylor’s citation of Atul Gawande, author of The Checklist Manifesto, is pure obfuscation. To see how off-point it is, have a look at this review published in the Seattle Times.

“The Checklist Manifesto” is about how to prevent highly trained, specialized workers from making dumb mistakes. Gawande — who appears in Seattle several times early next week — is a surgeon, and much of his book is about surgery. But he also talks to a construction manager, a master chef, a venture capitalist and the man at The Boeing Co. who writes checklists for airline pilots.

Commercial pilots have been using checklists for decades. Gawande traces this back to a fly-off at Wright Field, Ohio, in 1935, when the Army Air Force was choosing its new bomber. Boeing’s entry, the B-17, would later be built by the thousands, but on that first flight it took off, stalled, crashed and burned. The new airplane was complicated, and the pilot, who was highly experienced, had forgotten a routine step.

For pilots, checklists are part of the culture. For surgical teams they have not been. That began to change when a colleague of Gawande’s tried using a checklist to reduce infections when using a central venous catheter, a tube to deliver drugs to the bloodstream.

The original checklist: wash hands; clean patient’s skin with antiseptic; use sterile drapes; wear sterile mask, hat, gown and gloves; use a sterile dressing after inserting the line. These are all things every surgical team knows. After putting them in a checklist, the number of central-line infections in that hospital fell dramatically.

Then came the big study, the use of a surgical checklist in eight hospitals around the world. One was in rural Tanzania, in Africa. One was in the Kingdom of Jordan. One was the University of Washington Medical Center in Seattle. They were hugely different hospitals with much different rates of infection.

Use of the checklist lowered infection rates significantly in all of them.

Gawande describes the key things about a checklist, much of it learned from Boeing. It has to be short, limited to critical steps only. Generally the checking is not done by the top person. In the cockpit, the checklist is read by the copilot; in an operating room, Gawande discovered, it is done best by a nurse.

Gawande wondered whether surgeons would accept control by a subordinate. Which was stronger, the culture of hierarchy or the culture of precision? He found reason for optimism in the following dialogue he heard in the hospital in Amman, Jordan, after a nurse saw a surgeon touch a nonsterile surface:

Nurse: “You have to change your glove.”

Surgeon: “It’s fine.”

Nurse: “No, it’s not. Don’t be stupid.”

In other words, the basic rule underlying the checklist is simply: don’t be stupid. It has nothing to do with whether doctors should exercise judgment, or “winging it,” or “skipping steps.” What was Taylor even thinking? For a monetary authority not to follow a Taylor rule is not analogous to a doctor practicing checklist-free medicine.

As it happens, I have a story of my own about whether following numerical rules without exercising independent judgment makes sense in practicing medicine. Fourteen years ago, on the Friday before Labor Day, I was exercising at home and began feeling chest pains. After ignoring the pain for a few minutes, I stopped and took a shower, and then told my wife that I thought I needed to go to the hospital because I was feeling chest pains (I was still in semi-denial about what I was feeling). My wife asked me if she should call 911, and I said that might be a good idea. So she called 911 and told the operator that I was feeling chest pains. Within a couple of minutes, two ambulances arrived, and I was given an aspirin to chew and a nitroglycerine tablet to put under my tongue. I was taken to the emergency room at the hospital nearest to my home. After calling 911, my wife also called our family doctor to let him know what was happening and which hospital I was being taken to. He then placed a call to a cardiologist who had privileges at that hospital and who happened to be making rounds there that morning.

When I got to the hospital, I was given an electrocardiogram, and my blood was taken. I was also asked to rate my pain level on a scale of zero to ten. The aspirin and nitroglycerine had reduced the pain level slightly, but I probably said it was at eight or nine. However, the ER doc looked at the electrocardiogram results and the enzyme levels in my blood, and told me that there was no indication that I was having a heart attack, but that they would keep me in the ER for observation. Luckily, the cardiologist who had been called by my internist came to the ER, and after talking to the ER doc and looking at the test results, he came over to me and started asking me questions about what had happened and how I was feeling. Although the test results did not indicate that I was having a heart attack, the cardiologist quickly concluded that what I was experiencing likely was a heart attack. He therefore hinted to me that I should request to be transferred to another nearby hospital, which not only had a cath lab, as the one I was then at did, but also had an operating room in which open-heart surgery could be performed if necessary. It took a couple of tries on his part before I caught on to what he was hinting at, but as soon as I requested the transfer, he got me onto an ambulance ASAP so that he could meet me at the other hospital and perform an angiogram in the cath lab, cancelling an already scheduled angiogram.

The angiogram showed that my left anterior descending artery was completely blocked, so open-heart surgery was not necessary; angioplasty would be sufficient to clear the artery, which the cardiologist performed, also implanting two stents to prevent future blockage. I remained in the cardiac ICU for two days, and was back home on Monday, when my rehab started. I was back at work two weeks later.

The willingness of my cardiologist to use his judgment, experience and intuition to ignore the test results indicating that I was not having a heart attack saved my life. If the ER doctor, following the test results, had kept me in the ER for observation, I would have been dead within a few hours. Following the test results and ignoring what the patient was feeling would have been stupid. Luckily, I was saved by a really good cardiologist. He was not stupid; he could tell that the numbers were not telling the real story about what was happening to me.

We now know that, in the summer of 2008, the FOMC, in thrall to headline inflation numbers, allowed a recession that had already started at the end of 2007 to deteriorate rapidly, providing little or no monetary stimulus to an economy in which nominal income was falling so fast that debts coming due could no longer be serviced. The financial crisis and subsequent Little Depression were caused by the failure of the FOMC to provide stimulus to a failing economy, not by interest rates having been kept too low for too long after 2003. If John Taylor still hasn’t figured that out – and he obviously hasn’t – he should not be allowed anywhere near the Federal Reserve Board.


Sumner on the Demand for Money, Interest Rates and Barsky and Summers

Scott Sumner had two outstanding posts a couple of weeks ago (here and here) discussing the relationship between interest rates and NGDP, making a number of important points, which I largely agree with, even though I have some (mostly semantic) quibbles about the details. I especially liked how in the second post he applied the analysis of Robert Barsky and Larry Summers in their article about Gibson’s Paradox under the gold standard to recent monetary experience. The two posts are so good and cover such a wide range of topics that the best way for me to address them is by cutting and pasting relevant passages and commenting on them.

Scott begins with the equation of exchange MV = PY. I personally prefer the Cambridge version (M = kPY), where k stands for the fraction of income that people hold as cash, because it makes clear that the relevant concept is how much money people want to hold, not that mysterious metaphysical concept called the velocity of circulation V (= 1/k). With attention focused on the decision about how much money to hold, it is natural to think of the rate of interest as the opportunity cost of holding non-interest-bearing cash balances. When the rate of interest rises, the desired holdings of non-interest-bearing cash tend to fall; in other words, k falls (and V rises). With unchanged M, the equation is satisfied only if PY increases. So the notion that a reduction in interest rates, in and of itself, is expansionary is based on a misunderstanding. An increase in the amount of money demanded is always contractionary. A reduction in interest rates increases the amount of money demanded (if money is non-interest-bearing). A reduction in interest rates is therefore contractionary (all else equal).
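A toy example may make the direction of the effect concrete. The money-demand function below is an invented illustration, not an estimate of anything.

```python
# Cambridge equation M = k*P*Y with M held fixed: if a lower interest
# rate i raises the desired cash/income ratio k, nominal income P*Y
# must fall. The functional form of k(i) is an arbitrary assumption.

M = 1000.0

def k_of_i(i):
    """Desired cash holdings as a fraction of income; falls as i rises."""
    return 0.25 / (1.0 + i)

for i in (0.05, 0.01):
    k = k_of_i(i)
    print(f"i = {i:.0%}:  k = {k:.4f},  implied PY = {M / k:,.0f}")
# Cutting i from 5% to 1% raises k from 0.2381 to 0.2475, so with M
# unchanged PY must fall from about 4,200 to about 4,040: the lower
# rate is, taken by itself, contractionary.
```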

Scott suggests some reasons why this basic relationship seems paradoxical.

Sometimes, not always, reductions in interest rates are caused by an increase in the monetary base. (This was not the case in late 2007 and early 2008, but it is the case on some occasions.) When there is an expansionary monetary policy, specifically an exogenous increase in M, then when interest rates fall, V tends to fall by less than M rises. So the policy as a whole causes NGDP to rise, even as the specific impact of lower interest rates is to cause NGDP to fall.

To this I would add that, as discussed in my recent posts about Keynes and Fisher, Keynes in the General Theory seemed to be advancing a purely monetary theory of the rate of interest. If Keynes meant that the rate of interest is determined exclusively by monetary factors, then a falling rate of interest is a sure sign of an excess supply of money. Of course in the Hicksian world of IS-LM, the rate of interest is simultaneously determined by both equilibrium in the money market and an equilibrium rate of total spending, but Keynes seems to have had trouble with the notion that the rate of interest could be simultaneously determined by not one, but two, equilibrium conditions.

Another problem is the Keynesian model, which hopelessly confuses the transmission mechanism. Any Keynesian model with currency that says low interest rates are expansionary is flat out wrong.

But if Keynes believed that the rate of interest is exclusively determined by money demand and money supply, then the only possible cause of a low or falling interest rate is the state of the money market, the supply side of which is always under the control of the monetary authority. Or stated differently, in the Keynesian model, the money-supply function is perfectly elastic at the target rate of interest, so that the monetary authority supplies whatever amount of money is demanded at that rate of interest. I disagree with the underlying view of what determines the rate of interest, but given that theory of the rate of interest, the model is not incoherent and doesn’t confuse the transmission mechanism.

That’s probably why economists were so confused by 2008. Many people confuse aggregate demand with consumption. Thus they think low rates encourage people to “spend” and that this somehow boosts AD and NGDP. But it doesn’t, at least not in the way they assume. If by “spend” you mean higher velocity, then yes, spending more boosts NGDP. But we’ve already seen that lower interest rates don’t boost velocity, rather they lower velocity.

But remember that Keynes believed that the interest rate can be reduced only by increasing the quantity of money, which nullifies the contractionary effect of a reduced interest rate.

Even worse, some assume that “spending” is the same as consumption, hence if low rates encourage people to save less and consume more, then AD will rise. This is reasoning from a price change on steroids! When you don’t spend you save, and saving goes into investment, which is also part of GDP.

But this is reasoning from an accounting identity. The question is what happens if people try to save more. The Keynesian argument is that the attempt to save more will be self-defeating; instead of increased saving, there is reduced income. Both scenarios are consistent with the accounting identity, so the question is which causal mechanism is operating: does an attempt to increase saving cause investment to increase, or does it cause income to go down? Seemingly aware of the alternative scenario, Scott continues:

Now here’s where amateur Keynesians get hopelessly confused. They recall reading something about the paradox of thrift, about planned vs. actual saving, about the fact that an attempt to save more might depress NGDP, and that in the end people may fail to save more, and instead NGDP will fall. This is possible, but even if true it has no bearing on my claim that low rates are contractionary.

Just so. But there is not necessarily any confusion; the issue may be just a difference in how monetary policy is implemented. You can think of the monetary authority as having a choice in setting its policy in terms of the quantity of the monetary base, or in terms of an interest-rate target. Scott characterizes monetary policy in terms of the base, allowing the interest rate to adjust; Keynesians characterize monetary policy in terms of an interest-rate target, allowing the monetary base to adjust. The underlying analysis should not depend on how policy is characterized. I think that this is borne out by Scott’s next paragraph, which is consistent with a policy choice on the part of the Keynesian monetary authority to raise interest rates as needed to curb aggregate demand when aggregate demand is excessive.

To see the problem with this analysis, consider the Keynesian explanations for increases in AD. One theory is that animal spirits propel businesses to invest more. Another is that consumer optimism propels consumers to spend more. Another is that fiscal policy becomes more expansionary, boosting the budget deficit. What do all three of these shocks have in common? In all three cases the shock leads to higher interest rates. (Use the S&I diagram to show this.) Yes, in all three cases the higher interest rates boost velocity, and hence ceteris paribus (i.e. fixed monetary base) the higher V leads to more NGDP. But that’s not an example of low rates boosting AD, it’s an example of some factor boosting AD, and also raising interest rates.

In the Keynesian terminology, the shocks do lead to higher rates, but only because excessive aggregate demand, caused by animal spirits, consumer optimism, or government budget deficits, has to be curbed by interest-rate increases. The ceteris paribus assumption is ambiguous; it can be interpreted to mean holding the monetary base constant or holding the interest-rate target constant. I don’t often cite Milton Friedman as an authority, but one of his early classic papers was “The Marshallian Demand Curve” in which he pointed out that there is an ambiguity in what is held constant along the demand curve: prices of other goods or real income. You can hold only one of the two constant, not both, and you get a different demand curve depending on which ceteris paribus assumption you make. So the upshot of my commentary here is that, although Scott is right to point out that the standard reasoning about how a change in interest rates affects NGDP implicitly assumes that the quantity of money is changing, that valid point doesn’t refute the standard reasoning. There is an inherent ambiguity in specifying what is actually held constant in any ceteris paribus exercise. It’s good to make these ambiguities explicit, and there might be good reasons to prefer one ceteris paribus assumption over another, but a ceteris paribus assumption isn’t a sufficient basis for rejecting a model.

Now just to be clear, I agree with Scott that, as a matter of positive economics, the interest rate is not fully under the control of the monetary authority. And one reason that it’s not is that the rate of interest is embedded in the entire price system, not just in a particular short-term rate that the central bank may be able to control. So I don’t accept the basic Keynesian premise that the monetary authority can always make the rate of interest whatever it wants it to be, though the monetary authority probably does have some control over short-term rates.

Scott also provides an analysis of the effects of interest on reserves, and he is absolutely correct to point out that paying interest on reserves is deflationary.

I will just note that near the end of his post, Scott makes a comment about living “in a Ratex world.” WADR, I don’t think that ratex is at all descriptive of reality, but I will save that discussion for another time.

Scott followed up the post about the contractionary effects of low interest rates with a post about the 1988 Barsky and Summers paper.

Barsky and Summers . . . claim that the “Gibson Paradox” is caused by the fact that low interest rates are deflationary under the gold standard, and that causation runs from falling interest rates to deflation. Note that there was no NGDP data for this period, so they use the price level rather than NGDP as their nominal indicator. But their basic argument is identical to mine.

The Gibson Paradox referred to the tendency of prices and interest rates to be highly correlated under the gold standard. Initially some people thought this was due to the Fisher effect, but it turns out that prices were roughly a random walk under the gold standard, and hence the expected rate of inflation was close to zero. So the actual correlation was between prices and both real and nominal interest rates. Nonetheless, the nominal interest rate is the key causal variable in their model, even though changes in that variable are mostly due to changes in the real interest rate.

Since gold is a durable good with a fixed price, the nominal interest rate is the opportunity cost of holding that good. A lower nominal rate tends to increase the demand for gold, for both monetary and non-monetary purposes.  And an increased demand for gold is deflationary (and also reduces NGDP.)

Very insightful on Scott’s part to see the connection between the Barsky and Summers analysis and the standard theory of the demand for money. I had previously thought about the Barsky and Summers discussion simply as a present-value problem. The present value of any durable asset, generating a given expected flow of future services, must vary inversely with the interest rate at which those future services are discounted. Since the future price level under the gold standard was expected to be roughly stable, any change in nominal interest rates implied a change in real interest rates. The value of gold, like that of other durable assets, varied inversely with the nominal interest rate. But with the nominal value of gold fixed by the gold standard, changes in the value of gold implied a change in the price level, an increased value of gold being deflationary and a decreased value of gold inflationary. Scott rightly observes that the same idea can be expressed in the language of monetary theory by thinking of the nominal interest rate as the cost of holding any asset: a reduction in the nominal interest rate reduces the cost of holding an asset, increasing the demand to own it and thereby raising its value in exchange, provided that current output of the asset is small relative to the total stock.
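Here is a sketch of that present-value mechanism, treating gold as a perpetuity; the service flow and the normalization of the price level are illustrative assumptions.

```python
# Gold as a durable asset yielding a constant real service flow s
# forever: its real value is s/r. With gold's nominal price fixed by
# the gold standard, a higher real value of gold means a lower price
# level. All numbers are illustrative.

s = 1.0     # assumed annual real service flow from holding gold

for r in (0.05, 0.04, 0.03):
    gold_value = s / r                  # perpetuity value rises as r falls
    price_level = 100.0 / gold_value    # arbitrary normalization of P
    print(f"r = {r:.0%}:  real value of gold = {gold_value:5.1f},  price level = {price_level:.1f}")
# A falling (nominal = real) interest rate raises the real value of
# gold and is therefore deflationary, the Barsky-Summers mechanism.
```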

However, the present-value approach does have an advantage over the opportunity-cost approach, because the present-value approach relates the value of gold or money to the entire term structure of interest rates, while the opportunity-cost approach can handle only a single interest rate – presumably the short-term rate – that is relevant to the decision to hold money at any given moment in time. In simple models of the IS-LM ilk, the only interest rate under consideration is the short-term rate, or the term structure is assumed to have a fixed shape so that all interest rates move equally with any change in the short-term rate. The latter assumption is, of course, clearly unrealistic, though Keynes made it without a second thought. However, in A Century of Bank Rate, Hawtrey showed that between 1844 and 1938, when the gold standard was in effect in Britain (except in 1914-25 and 1931-38), short-term rates and long-term rates often moved by significantly different magnitudes and even in opposite directions.

Scott makes a further interesting observation:

The puzzle of why the economy does poorly when interest rates fall (such as during 2007-09) is in principle just as interesting as the one Barsky and Summers looked at. Just as gold was the medium of account during the gold standard, base money is currently the medium of account. And just as causation went from falling interest rates to higher demand for gold to deflation under the gold standard, causation went from falling interest rates to higher demand for base money to recession in 2007-08.

There is something to this point, but I think Scott may be making too much of it. Falling interest rates in 2007 may have caused the demand for money to increase, but other factors were also important in causing contraction. The problem in 2008 was that the real rate of interest was falling, while the Fed, fixated on commodity (especially energy) prices, kept interest rates too high given the rapidly deteriorating economy. With expected yields from holding real assets falling, the Fed, by not cutting interest rates any further between April and October of 2008, precipitated a financial crisis once inflationary expectations started collapsing in August 2008. The expected yield from holding money then dominated the expected yield from holding real assets, bringing about a pathological Fisher effect in which asset values had to collapse for the yields from holding money and from holding assets to be equalized.

Under the gold standard, the value of gold was actually sensitive to two separate interest-rate effects: one reflected in the short-term rate and one reflected in the long-term rate. The latter effect is the one focused on by Barsky and Summers, though they also performed some tests on the short-term rate. However, it was through the short-term rate that the central bank, in particular the Bank of England, the dominant central bank in the pre-World War I era, manifested its demand for gold reserves, raising the short-term rate when it was trying to accumulate gold and reducing the short-term rate when it was willing to reduce its reserve holdings. Barsky and Summers found the long-term rate to be more highly correlated with the price level than the short-term rate. I conjecture that the reason for that result is that the long-term rate captures the theoretical inverse relationship between the interest rate and the value of a durable asset, while the short-term rate is negatively correlated with the value of gold when (as is usually the case) it moves together with the long-term rate, but may sometimes be positively correlated with the value of gold (when the central bank is trying to accumulate gold, thereby tightening the world market for gold). I don’t know if Barsky and Summers ran regressions using both long-term and short-term rates, but including both rates in the same regression might have allowed them to find evidence of both effects in the data.
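To make that conjecture concrete, here is a sketch of the kind of regression I have in mind, run on synthetic data constructed so that the price level loads positively on the long rate (the durable-asset effect) and negatively on the short rate (the central-bank gold-demand effect). Everything below is invented for illustration; it is not a replication of Barsky and Summers.

```python
# Regress the price level on both the long and the short rate, so the
# two opposite-signed effects can be separately identified.
import numpy as np

rng = np.random.default_rng(0)
n = 200
long_rate = 3.0 + np.cumsum(rng.normal(0, 0.05, n))   # slow-moving long rate
short_rate = long_rate + rng.normal(0, 0.5, n)        # noisier, tracks long rate

# Hypothesized structure: +2.5 on the long rate (durable-asset channel),
# -0.5 on the short rate (central-bank demand for gold), plus noise.
price = 2.5 * long_rate - 0.5 * short_rate + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), long_rate, short_rate])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
print(dict(zip(["const", "long", "short"], beta.round(2))))
# With both rates included, least squares recovers the opposite-signed
# coefficients; a regression on either rate alone would conflate them.
```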

PS I have been too busy and too distracted of late to keep up with comments on earlier posts. Sorry for not responding promptly. In case anyone is still interested, I hope to respond to comments over the next few days, and to post and respond more regularly than I have been doing for the past few weeks.

Further Thoughts on Capital and Inequality

In a recent post, I criticized, perhaps without adequate understanding, some of Thomas Piketty’s arguments about capital in his best-selling book. My main criticism is that Piketty’s argument that, under capitalism, there is an inherent tendency toward increasing inequality ignores the heterogeneity of capital and the tendency for new capital embodying new knowledge, new techniques, and new technologies to render older capital obsolete. Contrary to the simple model of accumulation on which Piketty relies, the accumulation of capital is not a smooth process; it is a very uneven process, generating very high returns to some owners of capital, but also imposing substantial losses on other owners of capital. The only way to avoid the risk of owning suddenly obsolescent capital is to own the market portfolio. But I conjecture that few, if any, great fortunes have been amassed by investing in the market portfolio, and (I further conjecture) great fortunes, once amassed, are usually not liquidated and reinvested in the market portfolio, but continue to be weighted heavily toward the fairly narrow portfolios of assets from which those great fortunes grew. Great fortunes, aside from being dissipated by deliberate capital consumption, also tend to be eroded by the loss of value through obsolescence, a process that can be avoided only by extreme diversification of holdings or by the exercise of entrepreneurial skill, a skill rarely bequeathed from generation to generation.

Applying this insight, Larry Summers pointed out in his review of Piketty’s book that the rate of turnover in the Forbes list of the 400 wealthiest individuals between 1982 and 2012 was much higher than the turnover predicted by Piketty’s simple accumulation model. Commenting on my post (in which I referred to Summers’s review), Kevin Donoghue objected that Piketty had criticized the Forbes 400 as a measure of wealth in his book, so Piketty would not necessarily accept Summers’s criticism based on the Forbes 400. Well, as an alternative, let’s have a look at the S&P 500. I just found this study of turnover in the 500 firms making up the S&P 500, showing that turnover in the composition of the index has increased greatly over the past 50 years. See the chart below, copied from that study, showing that the average length of time for firms on the S&P 500 was over 60 years in 1958, but by 2011 had fallen to less than 20 years. The pace of creative destruction seems to be accelerating.

[Chart: average company tenure on the S&P 500, 1958-2011]

From the same study here’s another chart showing the companies that were deleted from the index between 2001 and 2011 and those that were added.

[Chart: companies added to and deleted from the S&P 500, 2001-2011]

But I would also add a cautionary note: because the population of individuals and publicly held business firms is growing, comparing the composition of a fixed number (400) of wealthiest individuals or (500) most successful corporations over time may overstate the increase in the rate of turnover, since any group of fixed numerical size becomes a smaller percentage of the population over time. Even with that caveat, however, what this tells me is that there is a lot of variability in the value of capital assets. Wealth grows, but it grows unevenly. Capital is accumulated, but it is also lost.

Does the process of capital accumulation necessarily lead to increasing inequality of wealth and income? Perhaps, but I don’t think that the answer is necessarily determined by the relationship between the real rate of interest and the rate of growth in GDP.

Many people have suggested that an important cause of rising inequality has been the increasing importance of winner-take-all markets in which a few top performers seem to be compensated at very much higher rates than other, only slightly less gifted, performers. This sort of inequality is reflected in widening gaps between the highest and lowest paid participants in a given occupation. In some cases at least, the differences between the highest and lowest paid don’t seem to correspond to the differences in skill, though admittedly skill is often difficult to measure.

This concentration of rewards is especially characteristic of competitive sports, winners gaining much larger rewards than losers. However, because the winner’s return comes, at least in part, at the expense of the loser, the private gain from winning exceeds the social gain. That’s why all organized professional sports engage in some form of revenue sharing and impose limits on spending on players. Without such measures, competitive sports would not be viable, because the private return to improving quality exceeds the collective return from improved quality. There are, of course, times when a superstar like Babe Ruth or Michael Jordan can actually increase the return to losers, but that seems to be the exception.

To what extent other sorts of winner-take-all markets share this intrinsic inefficiency is not immediately clear to me, but it does not seem implausible to think that there is an incentive to overinvest in skills that increase the expected return to participants in winner-take-all markets. If so, the source of inequality may also be a source of inefficiency.

Thomas Piketty and Joseph Schumpeter (and Gerard Debreu)

Everybody else seems to have an opinion about Thomas Piketty, so why not me? As if the last two months of Piketty-mania (reminiscent, to those of a certain age, of an earlier invasion of American shores, exactly 50 years ago, by four European rock-stars) were not enough, there has been a renewed flurry of interest this week in Piketty’s blockbuster book, triggered by Chris Giles’s recent criticism in the Financial Times of Piketty’s use of income data, which mainly goes to show that, love him or hate him, people cannot get enough of Professor Piketty. Now I will admit upfront that I have not read Piketty’s book, and from my superficial perusal of the recent criticisms, they seem less problematic than the missteps of Reinhart and Rogoff in claiming that, beyond a critical 90% ratio of national debt to national income, the burden of national debt begins to significantly depress economic growth. But in any event, my comments in this post are directed at Piketty’s conceptual approach, not at his use of the data in his empirical work. In fact, I think that Larry Summers, in his superficially laudatory but substantively critical review, has already made most of the essential points about Piketty’s book. But I think that Summers left out a couple of important issues, issues touched upon usefully by George Cooper in a recent blog post about Piketty, which bear further emphasis.

Just to set the stage for my comments, here is my understanding of the main conceptual point of Piketty’s book. Piketty believes that the essence of capitalism is that capital generates a return to its owners that, on average over time, is equal to the rate of interest. Capital grows; it accumulates. And the rate of accumulation is equal to the rate of interest. However, the rate of interest is generally somewhat higher than the rate of growth of the economy. So if capital is accumulating at a rate equal to, say, 5%, and the economy is growing at a rate of only 3%, the share of income accruing to the owners of capital will grow over time. It is in this simple theoretical framework, the relationship between the rate of economic growth and the rate of interest, that Piketty believes he has found the explanation not only for the increase in inequality over the past few centuries of capitalist development, but for the especially rapid increase in inequality over the past 30 years.
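Piketty’s mechanism, so described, is simple compound arithmetic, and a back-of-the-envelope sketch shows how it works; the starting capital/income ratio of 3 is an arbitrary illustrative assumption, and r and g are the 5% and 3% of the previous paragraph.

```python
# Piketty's accumulation logic in miniature: capital compounds at r,
# income grows at g < r, so capital's share of income rises mechanically.
r, g = 0.05, 0.03
K0, Y0 = 300.0, 100.0    # assumed initial capital/income ratio of 3

for year in (0, 10, 20, 30, 40, 50):
    K = K0 * (1 + r) ** year
    Y = Y0 * (1 + g) ** year
    print(f"year {year:2d}:  K/Y = {K / Y:.2f},  capital's income share = {r * K / Y:.1%}")
# K/Y drifts from 3.0 to about 7.8 over 50 years, and capital's share
# of income from 15% to about 39%. My objection, developed below, is
# that real-world capital does not compound anywhere near this smoothly.
```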

While praising Piketty’s scholarship, empirical research and rhetorical prowess, Summers does not handle Piketty’s main thesis gently. Summers points out that, as accumulation proceeds, the incentive to engage in further accumulation tends to weaken, so the iron law of increasing inequality posited by Piketty is not nearly as inflexible as Piketty suggests. Now one could respond that, once accumulation reaches a certain threshold, the capacity to consume weakens as well, if only, as Gary Becker liked to remind us, because of the constraint that time imposes on consumption.

Perhaps so, but the return to capital is not the only, or even the most important, source of inequality. I would interpret Summers’s point to be the following: pure accumulation is unlikely to generate enough growth in wealth to outstrip the capacity to increase consumption. To generate an increase in wealth so large that consumption can’t keep up, there must be not just a return to the ownership of capital, but profit in the Knightian or Schumpeterian sense of a gain over and above the return on capital. Alternatively, there must be some extraordinary rent on a unique, irreproducible factor of production. Accumulation by itself, without the stimulus of entrepreneurial profit, reflecting the application of new knowledge in the broadest sense of the term, cannot go on for very long. It is entrepreneurial profits and rents to unique factors of production (or awards of government monopolies or other privileges), not plain-vanilla accumulation, that account for the accumulation of extraordinary amounts of wealth. Moreover, it seems that philanthropy (especially conspicuous philanthropy) provides an excellent outlet for the dissipation of accumulated wealth and can easily be combined with quasi-consumption activities, like art patronage or political activism, as more conventional consumption outlets become exhausted.

Summers backs up his conceptual criticism with a powerful factual argument. Comparing the Forbes list of the 400 richest individuals in 1982 with the Forbes list for 2012, Summers observes:

When Forbes compared its list of the wealthiest Americans in 1982 and 2012, it found that less than one tenth of the 1982 list was still on the list in 2012, despite the fact that a significant majority of members of the 1982 list would have qualified for the 2012 list if they had accumulated wealth at a real rate of even 4 percent a year. They did not, given pressures to spend, donate, or misinvest their wealth. In a similar vein, the data also indicate, contra Piketty, that the share of the Forbes 400 who inherited their wealth is in sharp decline.

But something else is also going on here, a misunderstanding, derived from a fundamental ambiguity, about what capital actually means. Capital can refer either to a durable physical asset or to a sum of money. When economists refer to capital as a factor of production, they are thinking of capital as a physical asset. But in most models, economists try to simplify the analysis by collapsing the diversity of the entire stock of heterogeneous capital assets into a single homogeneous substance called “capital” and then measuring it not in terms of its physical units (which, given heterogeneity, is strictly impossible) but in terms of its value. This creates all kinds of problems, and it has led to some mighty arguments among economists ever since the latter part of the nineteenth century, when Carl Menger (the first Austrian economist) turned on his prize pupil Eugen von Bohm-Bawerk, who wrote three dense volumes discussing the theory of capital and interest, and pronounced Bohm-Bawerk’s theory of capital “the greatest blunder in the history of economics.” F. A. Hayek, trying to restate Bohm-Bawerk’s theory in a coherent form, wrote a volume about 75 years ago called The Pure Theory of Capital, which probably has been read from cover to cover by fewer than 100 living souls, and probably understood by fewer than 20 of those. I remember wanting to ask Hayek what he made of Menger’s remark, but, to my eternal sorrow, I forgot to ask him that question the last time that I saw him.

At any rate, treating capital as a homogeneous substance that can be measured in terms of its value rather than in terms of physical units involves serious, perhaps intractable, problems. For certain purposes, it may be worthwhile to ignore those problems and work with a simplified model (a single output that can be consumed or used as a factor of production), but the magnitude of the simplification is rarely acknowledged. In his discussion, Piketty seems, as best I could determine using obvious search terms on Amazon, unaware of the conceptual problems involved in speaking about capital as a homogeneous substance measured in terms of its value.

In the real world, capital is anything but homogeneous. It consists of an array of very specialized, often unique, physical embodiments. Once installed, physical capital is usually sunk, and its value is highly uncertain. In contrast to the imaginary model of a homogeneous substance that just seems to grow at a fixed natural rate, the real physical capital deployed in the process of producing goods and services is complex and ever-changing in its physical and economic characteristics, and the economic valuations associated with its various individual components are in perpetual flux. While the total value of all capital may be growing at a fairly steady rate over time, the values of the individual assets that constitute the total stock of capital fluctuate wildly, and few owners of physical capital have any guarantee that the value of their assets will appreciate at a steady rate over time.

Now one would have thought that an eminent scholar like Professor Piketty would, in the course of a 700-page book about capital, have had occasion to comment on the enormous diversity and ever-changing composition of the stock of physical capital. These changes are driven by a competitive process in which entrepreneurs constantly introduce new products and new methods of producing products, a competitive process that enriches some owners of new capital and, it turns out, impoverishes others, the owners of old, suddenly obsolete, capital. It is the process that Joseph Schumpeter analyzed in his first great book, The Theory of Economic Development, and later memorably named “creative destruction” in Capitalism, Socialism and Democracy. But the term “creative destruction” does not appear at all in Piketty’s book, and Schumpeter’s name appears only once, in connection not with the notion of creative destruction, but with his, possibly ironic, prediction in Capitalism, Socialism and Democracy that socialism would eventually replace capitalism.

Thus, Piketty’s version of capitalist accumulation seems much too abstract and too far removed from the way in which great fortunes are amassed to provide real insight into the sources of increasing inequality. Insofar as such fortunes are associated with accumulation of capital, they are likely to be the result of the creation of new forms of capital associated with new products, or new production processes. The creation of new capital simultaneously destroys old forms of capital. New fortunes are amassed, and old ones dissipated. The model of steady accumulation that is at the heart of Piketty’s account of inexorably increasing inequality misses this essential feature of capitalism.

I don’t say that Schumpeter’s account of creative destruction means that increasing inequality is a trend that should be welcomed. There may well be arguments that capitalist development and creative destruction are socially inefficient. I have explained in previous posts (e.g., here, here, and here) why I think that a lot of financial-market activity is likely to be socially wasteful. Similar arguments might be made about other kinds of activities in non-financial markets where the private gain exceeds the social gain. Winner-take-all markets, which seem to be characterized by this divergence between private and social benefits and costs, and which apparently account for a growing share of economic activity, are an obvious source of inequality. But what I find most disturbing about the growth in inequality over the past 30 years is that great wealth has gained increased social status. That seems to me to be a very unfortunate change in public attitudes. I have no problem with people getting rich, even filthy rich. But don’t expect me to admire them because they are rich.

Finally, you may be wondering what all of this has to do with Gerard Debreu. Well, nothing really, but I couldn’t help noticing that Piketty refers in an endnote (p. 654) to “the work of Adam Smith, Friedrich Hayek, and Kenneth Arrow and Claude Debreu,” apparently forgetting that the name of his famous countryman, winner of the Nobel Memorial Prize for Economics in 1983, is not Claude, but Gerard, Debreu. Perhaps Piketty confused Debreu with another eminent Frenchman, Claude Debussy, but I hope that in the next printing of his book, Piketty will correct this unfortunate error.

UPDATE (5/29 at 9:46 EDST): Thanks to Kevin Donoghue for checking with Arthur Goldhammer, who translated Piketty’s book from the original French. Goldhammer took responsibility for getting Debreu’s first name wrong in the English edition. In the French edition, only Debreu’s last name was mentioned.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
