Archive for the 'Larry Summers' Category

Sic Transit Inflatio Mundi

Larry Summers continues to lead the charge for a quick, decisive tightening of monetary policy by the Federal Reserve to head off an inflationary surge that, he believes, is about to overtake us. Undoubtedly one of the most capable economists of his generation, Summers also had a long career as a policy maker at the highest levels, so his advice cannot be casually dismissed. Even aside from Summers’s warning, the current economic environment fully justifies heightened concern caused by the recent uptick in inflation.

I am, nevertheless, not inclined to share Summers’s confidence in his oft-repeated prediction that inflation will resurge unless monetary policy is substantially tightened soon to prevent current inflation from becoming entrenched in the expectations of households and businesses. Summers’s latest warning came in a Washington Post op-ed following the statement by the FOMC and by Chairman Jay Powell that Fed policy would shift to give priority to maintaining price stability.

After welcoming the FOMC statement, Summers immediately segued into a critique of the Fed position on every substantive point.

There have been few, if any, instances in which inflation has been successfully stabilized without recession. Every U.S. economic expansion between the Korean War and Paul A. Volcker’s slaying of inflation after 1979 ended as the Federal Reserve tried to put the brakes on inflation and the economy skidded into recession. Since Volcker’s victory, there have been no major outbreaks of inflation until this year, and so no need for monetary policy to engineer a soft landing of the kind that the Fed hopes for over the next several years.

The not-very-encouraging history of disinflation efforts suggests that the Fed will need to be both skillful and lucky as it seeks to apply sufficient restraint to cause inflation to come down to its 2 percent target without pushing the economy into recession. Unfortunately, several aspects of the Open Market Committee statement and Powell’s news conference suggest that the Fed may not yet fully grasp either the current economic situation or the implications of current monetary policy.

Summers cites the recessions between the Korean War and the 1979-82 Volcker Monetarist experiment to support his anti-inflationary diagnosis and remedy. But none of the three recessions in the 1950s during the Eisenhower Presidency was needed to cope with any significant inflationary threat. There was no substantial inflation in the US during the 1950s, never reaching 3% in any year between 1953 and 1960, and rarely exceeding 2%.

Inflation during the late 1960s and 1970s was caused by a combination of factors, including both excess demand fueled by Vietnam War spending and politically motivated monetary expansion, plus two oil shocks in 1973-74 and 1979-80, an economic environment with only modest similarity to the current economic situation.

But the important lesson from the disastrous Volcker-Friedman recession is that most of the reduction in inflation following Volcker’s decisive move to tighten monetary policy in early 1981 did not come until a year and a half later, when with the US unemployment rate above 10%, Volcker finally abandoned the futile and counterproductive Monetarist policy of making the monetary aggregates policy instruments. Had it not been for the Monetarist obsession with controlling the monetary aggregates, a recovery could have started six months to a year earlier than it did, with inflation continuing on the downward trajectory as output and employment expanded.

The key point is that falling output, in and of itself, tends to cause rising, not falling, prices, so that postponing the start of a recovery actually delays, rather than hastens, the reduction of inflation. As I explained in another post, rather than focusing on the monetary aggregates, monetary policy ought to have aimed to reduce the rate of growth of total nominal spending from well over 12% in 1980-81 to a rate of about 7%, which would have been consistent with the informal 4% inflation target that Volcker and Reagan had set for themselves.

The appropriate lesson to take away from the Volcker-Friedman recession of 1981-82 is therefore that a central bank can meet its inflation target by reducing the rate of increase in total nominal spending and income to the rate that, given anticipated real expansion of capacity and productivity, is consistent with its inflation target. The rate of growth in nominal spending and income cannot be controlled with great precision, but rates of increase in spending above or below the target rate provide the central bank with real-time indications of whether policy needs to be tightened or loosened to meet the inflation target. That approach would avoid the inordinate cost of reducing inflation associated with the Volcker-Friedman episode.
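The feedback approach just described can be sketched in a few lines. This is a toy illustration of the logic, not an actual Fed operating procedure; the function names, the half-percentage-point tolerance band, and the default numbers are all hypothetical.

```python
# Toy sketch of a nominal-spending feedback rule: the central bank
# compares observed growth of total nominal spending with the growth
# rate consistent with its inflation target, and tightens or loosens
# in proportion to the gap. All numbers are hypothetical.

def target_nominal_growth(inflation_target: float, expected_real_growth: float) -> float:
    """Nominal-spending growth rate consistent with the inflation target."""
    return inflation_target + expected_real_growth

def policy_signal(observed_nominal_growth: float,
                  inflation_target: float = 0.02,
                  expected_real_growth: float = 0.02) -> str:
    """Real-time indication of whether policy should tighten or loosen."""
    gap = observed_nominal_growth - target_nominal_growth(
        inflation_target, expected_real_growth)
    if gap > 0.005:        # spending growing well above the target path
        return "tighten"
    elif gap < -0.005:     # spending growing well below the target path
        return "loosen"
    return "hold"

# In 1980-81 nominal spending grew at well over 12%; against the implied
# 7% target (a 4% informal inflation target plus roughly 3% real growth),
# such a rule would have signaled tightening:
print(policy_signal(0.12, inflation_target=0.04, expected_real_growth=0.03))
```

With the Fed’s current 2% target and roughly 2% expected real growth, the same rule would read observed nominal growth of about 4% as "hold".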

A further aggravating factor in the 1981-82 recession was that interest rates had risen to double-digit levels even before Volcker embarked on his Monetarist anti-inflation strategy, showing how deeply embedded inflation expectations had become in the plans of households and businesses. By contrast, interest rates have actually been falling for months, suggesting that Summers’s warnings about inflation expectations becoming entrenched are overstated.

The Fed forecast calls for inflation to significantly subside even as the economy sustains 3.5 percent unemployment — a development without precedent in U.S. economic history. The Fed believes this even though it regards the sustainable level of unemployment as 4 percent. This only makes sense if the Fed is clinging to the idea that current inflation is transitory and expects it to subside of its own accord.

Summers’s factual assertion that the US unemployment rate has never fallen to 3.5% without inflationary stimulus rests on the assumption that the natural rate of unemployment (the non-accelerating-inflation rate of unemployment, or NAIRU) is firmly fixed at 4%, an assumption not well supported by the data. In 2019 and early 2020, the unemployment rate dropped to 3.5% without evident inflationary pressure. In the late 1990s unemployment also dropped below 4% without inflationary pressure. So, the expectation that a 3.5% unemployment rate could be restored without inflationary pressure may be optimistic, but it’s hardly unprecedented.

Summers suggests that the Fed is confused because it expects the unemployment rate to fall back to the 3.5% rate of 2019 even while supposedly regarding a 4%, not a 3.5%, rate of unemployment as sustainable. According to Summers, reaching a 3.5% rate of unemployment would be possible only if the current increase in the inflation rate is temporary. The bond market seems to share that view with the Fed, given the recent decreases in the yields on Treasury bonds of 5 to 30 years duration. Summers, however, takes a different view.

In fact, there is solid reason to think inflation may accelerate. The consumer price index’s shelter component, which represents one-third of the index, has gone up by less than 4 percent, even as private calculations without exception suggest increases of 10 to 20 percent in rent and home prices. Catch-up is likely. More fundamentally, job vacancies are at record levels and the labor market is still heating up, according to the Fed forecast. This portends acceleration rather than deceleration in labor costs — by far the largest cost for the business sector.

Projecting how increases in rent and home prices that have already occurred will affect reported inflation in the future is a tricky exercise. Those effects will almost certainly show up in future inflation reports, but, being already baked into those reports, they provide an uneasy basis on which to conduct monetary policy. Insofar as inflation is a problem, it is a problem not because of short-term fluctuations in the prices of specific goods, even home prices and rents, or whole sectors of the economy, but because of generalized and potentially continuing long-term trends affecting the whole structure of prices.
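To see why the shelter component looms so large in such projections, a back-of-the-envelope calculation using the weight and rates quoted above suffices. The 12% catch-up figure below is purely illustrative, picked from within the 10-20% range Summers cites:

```python
# If shelter is roughly one-third of the CPI and has so far risen less
# than 4%, while private measures suggest 10-20% increases in rents and
# home prices, the eventual "catch-up" contribution to measured CPI is
# approximately weight * (catch-up rate - reported rate).

shelter_weight = 1 / 3             # shelter's share of the CPI, per the quoted op-ed
reported_shelter_inflation = 0.04  # reported shelter inflation so far
catch_up_rate = 0.12               # hypothetical, within Summers's 10-20% range

extra_cpi_contribution = shelter_weight * (catch_up_rate - reported_shelter_inflation)
print(f"{extra_cpi_contribution:.3f}")  # roughly 0.027, i.e. ~2.7 percentage points of CPI
```

The arithmetic shows why even a partial catch-up would move headline CPI noticeably, and also why the timing of that catch-up, already implied by past price increases, tells policymakers little that is new.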

The current number of job vacancies reflects both the demand for, and the supply of, labor. The labor-force participation rate is still well below the pre-pandemic level, reflecting the withdrawal from the labor force of workers afraid of contracting the COVID virus, or unable to find day care for children, or deterred by other pandemic-related concerns from seeking or accepting employment. Under such circumstances, the re-allocations associated with high job-vacancy rates are likely to enhance the efficiency and productivity of the workers who are re-employed, and need not exacerbate inflationary pressures.

Presumably, the Fed has judged that current aggregate-demand increases have less to do with observed inflation than labor-supply constraints or other supply-side bottlenecks whose effects on prices are likely self-limiting. This judgment is neither obviously right nor obviously wrong. But, for now at least, it is not unreasonable for the Fed to remain cautious before making a drastic policy change, neither committing itself to an immediate tightening, as Summers is proposing, nor doubling down on a commitment to its current accommodative stance.

Meanwhile, the pandemic-related bottlenecks central to the transitory argument are exaggerated. Prices for more than 80 percent of goods in the CPI have increased more than 3 percent in the past year. With the economy’s capacity growing 2 percent a year and the Fed’s own forecast calling for 4 percent growth in 2022, price pressures seem more likely to grow than to abate.

This argument makes no sense. We have, to be sure, gone through a period of actual broad-based inflation, so pointing out that 80% of goods in the CPI have increased in price by more than 3% in the past year is unsurprising. The bottleneck point is that supply constraints have prevented the real economy from growing as fast as nominal spending has grown. As I’ve pointed out recently, there’s an overhang of cash and liquid assets, accumulated rather than spent during the pandemic, which has amplified aggregate-demand growth since the economy began to recover from the pandemic, opening up previously closed opportunities for spending. The mismatch between the growth of demand and the growth of supply has been manifested in rising inflation. If the bottleneck theory of inflation is true, then the short-term growth potential of the economy is greater than the 2% rate posited by Summers. As bottlenecks are removed and workers that withdrew from the labor force during the pandemic are re-employed, the economy could easily grow faster than Summers is willing to acknowledge. Summers simply assumes, but doesn’t demonstrate, his conclusion.

This all suggests that policy will need to restrain demand to restore price stability.

No, it does not suggest that at all. It only suggests the possibility that demand may have to be restrained to keep prices stable. Recent inflation may have been a delayed response to an expansive monetary policy designed to prevent a contraction of demand during the pandemic. A temporary increase in inflation does not necessarily call for an immediate contractionary response. It’s too early to tell with confidence whether preventing future inflation requires, as Summers asserts, monetary policy to be tightened immediately. That option shouldn’t be taken off the table, but the Fed clearly hasn’t done so.

How much tightening is required? No one knows, and the Fed is right to insist that it will monitor the economy and adjust. We do know, however, that monetary policy is far looser today — in a high-inflation, low-unemployment economy — than it was about a year ago when inflation was below the Fed’s target and unemployment was around 8 percent. With relatively constant nominal interest rates, higher inflation and the expectation of future inflation have led to dramatic reductions in real interest rates over the past year. This is why bubbles are increasingly pervasive in asset markets ranging from crypto to beachfront properties and meme stocks to tech start-ups.

Summers, again, is just assuming, not demonstrating, his own preferred conclusion. A year ago, high unemployment was caused by the unique confluence of essentially simultaneous negative demand and supply shocks. The unprecedented coincidence of two simultaneous shocks posed a unique policy challenge to which the Fed has so far responded with remarkable skill. But the unfamiliar and challenging economic environment remains murky, and premature responses to unclear conditions may not yield the anticipated results. Undaunted by any doubt about his own reading of an opaque situation, Summers displays his characteristic and impressive self-assurance, but his argument is less than compelling.

The implication is that restoring monetary policy to a normal posture, let alone to applying restraint to the economy, will require far more than the three quarter-point rate increases the Fed has predicted for next year. This point takes on particular force once it is recognized that, contrary to Powell’s assertion, almost all economists believe there is a lag of about a year between the application of a rate change and its effect. Failure to restore policy neutrality next year means allowing two more years of highly inflationary monetary policy.

All of this suggests that even with its actions this week, the Fed remains well behind the curve in its commitment to fighting inflation. If its statements reflect its convictions, this is a matter of serious concern.

The idea that there is a one-year lag between applying a policy and its effect is hardly credible. The problem is not the length of the lag, but the uncertain effects of policy in a given set of circumstances. The effects of a change in the money stock or a change in the policy rate may not be apparent if they are offset by other changes. The ceteris-paribus proviso that qualifies every analysis of the effects of monetary policy is rarely satisfied in the real world; almost every policy action by the central bank is an uncertain bet. Under current circumstances, the Fed’s response to the recent increase in inflation seems eminently sensible: signal that the Fed anticipates that monetary policy will likely have to be tightened if the current rate of increase in nominal spending remains substantially above the rate consistent with the Fed’s average inflation target of 2%, but wait for further evidence before deciding on the magnitude of any changes in its policy instruments.

High-Inflation Anxiety

UPDATE (9:25am 11/16/2021): Thanks to philipji for catching some problematic passages in my initially posted version. I have also revised the opening paragraph, which was less than clear. Apologies for my sloppy late-night editing before posting.

When I started blogging ten-plus years ago, most of my posts were about monetary policy, because I then felt that the Fed was not doing as much as it could, and should, have been doing to promote a recovery from the Little Depression (aka Great Recession) for which the Fed’s policy mistakes bore heavy responsibility. The 2008 financial crisis and the ensuing deep downturn were largely the product of an overly tight monetary policy starting in 2006; despite interest-rate cuts in 2007 and 2008, the Fed’s policy consistently erred on the side of tightness, because of concerns about rising oil and commodity prices, even for almost two months after the start of the crisis. The stance of monetary policy cannot be assessed just by looking at the level of the central bank’s policy rate; the stance depends on the relationship between the policy rate and the economic conditions at any particular moment. The 2-percent Fed Funds target in the summer of 2008, given economic conditions at the time, meant that monetary policy was tight, not easy.

Although, after the crisis, the Fed never did as much as it could — and should — have to promote recovery, it at least took small measures to avoid a lapse into a ruinous deflation, even as many of the sound-money types, egged on by deranged right-wing scare mongers, warned of runaway inflation.

Slowly but surely, a pathetically slow recovery had, by the end of Obama’s second term, brought us back to near full employment. By then, my interest in the conduct of monetary policy had given way to a variety of other concerns as we dove into the anni horribiles of the maladministration of Obama’s successor.

Riding a recovery that started seven and a half years before he took office, and buoyed by a right-wing propaganda juggernaut and a pathetically obscene personality cult that broadcast and amplified his brazenly megalomaniacal self-congratulations for the inherited recovery over which he presided, Obama’s successor watched incompetently as the Covid-19 virus spread through the United States, causing the sharpest drop in output and employment in US history.

Ten months after Obama’s successor departed from the White House, the US economy has recovered much, but not all, of the ground lost during the pandemic, with employment still below its peak at the start of 2020, and real output still lagging the roughly 2% real-growth path along which the economy had been moving for most of the preceding decade.

However, the very rapid increase in output in Q2 2021 and the less rapid, but still substantial, increase in output in Q3 2021, combined with inflation that has risen to the highest rates in 30 years, have provoked ominous warnings of resurgent inflation similar to the inflation that lasted from the late 1960s until the early 1980s, ending only with the deep 1981-82 recession caused by the resolute anti-inflation policy administered by Fed Chairman Paul Volcker with the backing of the newly elected Ronald Reagan.

It’s worth briefly revisiting that history (which I have discussed previously on this blog here, here, and especially in the following series (1), (2) and (3) from 2020) to understand the nature of the theoretical misunderstanding and the resulting policy errors in the 1970s and 1980s. While I agree that the recent increase in inflation is worrisome, it’s far from clear that inflation is likely, as many now predict, to get worse, although the inflation risk can’t be dismissed.

What I find equally if not more worrisome about the anti-inflation commentary that we are now hearing from very serious people like Larry Summers in today’s Washington Post is how much it sounds like the inflation talk of 2008, which frightened the Fed, then presided over by a truly eminent economist, Ben Bernanke, into thinking that the chief risk facing the economy was rising headline inflation that would cause inflation expectations to become “unanchored.”

So, rather than provide support to an economy rapidly sliding into recession, the FOMC, focused on rapid increases in oil and commodity prices, refused to loosen monetary policy in the summer of 2008, even though the pace of growth in nominal gross domestic product (NGDP), measured year on year, had steadily decelerated. The accompanying table shows the steady decline in the quarterly year-on-year growth of NGDP in each successive quarter between Q1 2004 and Q4 2008. Between 2004 and 2006, the decline was gradual, but it accelerated in 2007, leading to the start of a recession in December 2007.

Source: https://fred.stlouisfed.org/series/DFEDTAR

The decline in the rate of NGDP growth was associated with a gradual increase in the Fed Funds target rate from the very low 1% level maintained until Q2 2004; by Q2 2006, when the rate reached 5%, the slowdown in the growth of total spending quickened. As the rate of spending declined, the Fed eased interest rates in the second half of 2007, but that easing was insufficient to prevent an economy, already suffering financial distress after the housing bubble burst in 2006, from lapsing into recession.

Although the Fed cut its interest-rate target substantially in March 2008, the FOMC refused to reduce rates further during the summer of 2008, even as the recession visibly worsened, fearing that rising headline inflation, associated with very large increases in crude-oil prices (which climbed to a record $130 a barrel) and in commodity prices, would cause inflation expectations to become “unanchored.”

The Fed, to be sure, was confronted with a difficult policy dilemma, but it was a disastrous error to prioritize a speculative concern about the “unanchoring” of long-term inflation expectations over the reality of a fragile and clearly weakening financial system in a contracting economy already clearly in a recession. The Fed made the wrong choice, and the crisis came.

That was then, and now is now. The choices are different, but once again, on one side there is pressure to prioritize the speculative concern about the “unanchoring” of long-term inflation expectations over promoting recovery and increased employment after a recession and a punishing pandemic. And, once again, the concerns about inflation are driven by a likely transitory increase in the prices of crude oil and gasoline.

The case for prioritizing the fight against inflation was just made by none other than Larry Summers in an op-ed in the Washington Post. Let’s have a look at Summers’s case for fighting inflation now.

Fed Chair Jerome H. Powell’s Jackson Hole speech in late August provided a clear, comprehensive and authoritative statement, enumerated in five pillars, of the widespread team “transitory” view of inflation that prevailed at that time and shaped policy thinking at the central bank and in the administration. Today, all five pillars are wobbly at best.

First, there was a claim that price increases were confined to a few sectors. No longer. In October, prices for commodity goods outside of food and energy rose at more than a 12 percent annual rate. Various Federal Reserve system indexes that exclude sectors with extreme price movements are now at record highs.

https://www.washingtonpost.com/opinions/2021/11/15/inflation-its-past-time-team-transitory-stand-down/

Summers has a point. Price increases are spreading throughout the economy. However, that doesn’t mean that increasing oil prices are not causing the prices of many other products to increase as well, inasmuch as oil and substitute forms of energy are so widely used throughout the economy. If the increase in oil prices, and likely in food prices, has peaked, or will do so soon, it does not necessarily make sense to fight a war against an enemy that has retreated or is about to do so.

Second, Powell suggested that high inflation in key sectors, such as used cars and durable goods more broadly, was coming under control and would start falling again. In October, used-car prices accelerated to more than a 30 percent annual inflation rate, new cars to a 17 percent rate and household furnishings by an annualized rate of just above 10 percent.

Id.

Again, citing large increases in the prices of cars, when it’s clear that special circumstances are causing new-car prices to rise rapidly, bringing used-car prices along with them, is not very persuasive, especially when those special circumstances appear likely to be short-lived. To be sure, other durable-goods prices are also rising, but in the absence of a deeper source of inflation, the atmospherics cited by Summers are not that compelling.

Third, the speech pointed out that there was “little evidence of wage increases that might threaten excessive inflation.” This claim is untenable today with vacancy and quit rates at record highs, workers who switch jobs in sectors ranging from fast food to investment banking getting double-digit pay increases, and ominous Employment Cost Index increases.

Id.

Wage increases are usually an indicator of inflation, though, again, the withdrawal, whether permanent or temporary, of many workers from the labor force over the past two years is a likely cause of increased wages that is independent of an underlying and ongoing inflationary trend.

Fourth, the speech argued that inflation expectations remained anchored. When Powell spoke, market inflation expectations for the term of the next Federal Reserve chair were around 2.5 percent. Now they are about 3.1 percent, up half a percentage point in the past month alone. And consumer sentiment is at a 10-year low due to inflation fears.

Id.

Clearly inflation expectations have increased over the short term for a variety of reasons that we have just been considering. But the curve of inflation expectations still seems to be reverting toward a lower level in the medium term and the long-term.

Fifth, Powell emphasized global deflationary trends. In the same week the United States learned of the fastest annual inflation rate in 30 years, Japan, China and Germany all reported their highest inflation in more than a decade. And the price of oil, the most important global determinant of inflation, is very high and not expected by forward markets to decline rapidly.

Id.

Again, Summers is simply recycling the same argument. We know that there has been a short-term increase in inflation. The question we need to grapple with is whether this short-term inflationary blip is likely to be self-limiting, or will feed on itself, causing inflation expectations to become “unanchored”. Forward prices of oil may not be showing that the price of oil will decline rapidly, but they aren’t showing expectations of further increases. Without further increases in oil prices, it is fair to ask: what would be the source of the further, ongoing inflation that would cause “unanchoring”?

As it has in the past, the threat of “unanchoring” is doing an awful lot of work. And it is not clear how the work is being done except by way of begging the very question that needs to be answered, not begged.

After his windup, Summers offers fairly mild suggestions for his anti-inflation program, and only one of his comments seems mistaken.

Because of inflation, real interest rates are lower, as money is easier than a year ago. The Fed should signal that this is unacceptable and will be reversed.

Id.

The real interest rate about which the Fed should be concerned is the ex ante real interest rate, reflecting both the expected yield from real capital and the expected rate of inflation (which may, and often does, have feedback effects on the expected yield from real capital). Past inflation does not automatically get transformed into an increase in expected inflation, and it is not necessarily the case that past inflation has left the expected yield from real capital unaffected, so Summers’s inference that the recent blip in inflation necessarily implies that monetary policy has been eased could well be mistaken. Yet again, these are judgments (or even guesses) that policymakers have to make about the subjective judgments of market participants, judgments that can’t be made simply by reading numbers off a computer screen.

While I’m not overly concerned by Summers’s list of inflation danger signs, there’s no doubt that inflation risk has risen. Yet, at least for now, that risk seems manageable. The risk may require the Fed to take pre-emptive measures against inflation down the road, but I don’t think that we have reached that point yet.

The main reason why I think that inflation risk has been overblown is that inflation is a common occurrence in postwar economies, as it was in the US after both World Wars, the Korean War and the Vietnam War. It is widely recognized that war itself is inflationary owing, among other reasons, to the usual practice of governments of financing wartime expenditures by printing money, but inflationary pressures tend to persist even after wars end.

Why does inflation persist after wars come to an end? The main reason is that, during wartime, resources, labor and capital, are shifted from producing goods for civilian purposes to waging war and producing and transporting supplies to support the war effort. Because total spending, financed by printing money, increases during the war, money income goes up even though the production of goods and services for civilian purposes goes down.

The output of goods and services for civilian purposes having been reduced, the increased money income accruing to the civilian population implies rising prices of the civilian goods and services that are produced. The tendency for prices to rise during wartime is mitigated by the reduced availability of outlets for private spending, people normally postponing much of their non-essential spending while the war is ongoing. Consequently, the public accumulates cash and liquid assets during wartime with the intention of spending the accumulated cash and liquid assets when life returns to normal after the war.

The lack of outlets for private spending is reinforced when, as happened in World War I, World War II, the Korean War and the late stages of the Vietnam War, price controls prevent the prices of civilian goods still being produced from rising, so that consumers can’t buy goods – either at all or as much as they would like – that they would willingly have paid for. The result is suppressed inflation until wartime price controls are lifted, and deferred price increases are allowed to occur. As prices rise, the excess cash that had been accumulated while the goods people demanded were unavailable is absorbed by purchases made at the newly increased prices.

In his last book, Incomes and Money, Ralph Hawtrey described with characteristic clarity the process by which postwar inflation absorbed the redundant cash balances accumulated during World War II once price controls were lifted.

America, like Britain, had imposed price controls during the war, and had accumulated a great amount of redundant money. So long as the price controls continued, the American manufacturers were precluded from checking demand by raising their prices. But the price controls were abandoned in the latter half of 1946, and there resulted a rise of prices reaching 30 per cent on manufactured goods in the latter part of 1947. That meant that American industry was able to defend itself against the excess demand. By the end of 1947 the rise of prices had nearly eliminated the redundant money; that is to say, the quantity of money (currency and bank deposits) was little more than in a normal proportion to the national income. There was no longer over-employment in American industry, and there was no reluctance to take export orders.

Hawtrey, Incomes and Money, p. 7
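Hawtrey’s account can be restated in simple quantity-theoretic terms: once controls are lifted, prices rise until the money stock is again in “normal proportion” to nominal income. A minimal sketch with hypothetical magnitudes, chosen only so that the implied price rise matches the roughly 30 per cent Hawtrey reports:

```python
# Suppressed-inflation arithmetic: during the war the money stock grows
# while controls hold the price level down, leaving balances "redundant"
# relative to nominal income. Once controls are lifted, prices rise until
# the ratio of money to nominal income returns to its normal level.
# All magnitudes below are hypothetical.

normal_money_to_income = 0.5    # normal ratio of money stock to nominal income
money_stock = 78.0              # money stock when controls are lifted
real_income = 120.0             # real income, held constant in this sketch
controlled_price_level = 1.0    # price level under controls

# Redundancy is eliminated when M / (P * Y) equals the normal ratio,
# i.e. P = M / (normal ratio * Y).
equilibrium_price = money_stock / (normal_money_to_income * real_income)
implied_inflation = equilibrium_price / controlled_price_level - 1

print(f"{implied_inflation:.0%}")  # a 30% price rise absorbs the redundant money
```

The point of the sketch is that the inflation is self-limiting: once the redundant balances are absorbed, the impetus to further price increases disappears, which is exactly why postwar inflation need not become entrenched.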

Responding to Paul Krugman’s similar claim that there was high inflation following World War II, Summers posted the following Twitter thread.

@paulkrugman continues his efforts to minimize the inflation threat to the American economy and progressive politics by pointing to the fact that inflation surged and then there was a year of deflation after World War 2.

If this is the best argument for not being alarmed that someone as smart, rhetorically effective and committed as Paul can make, my anxiety about inflation is increased.

Pervasive price controls were removed after the war. Economists know that measured prices with controls are artificial, so subsequent inflation proves little.

Millions of soldiers were returning home and a massive demobilization was in effect. Nothing like the current pervasive labor shortage was present.

https://twitter.com/LHSummers/status/1459992638170583041

Summers is surely correct that the situation today is not perfectly analogous to the post-WWII situation, but post-WWII inflation, as Hawtrey explained, was only partially attributable to the lifting of price controls. He ignores the effect of excess cash balances, which ultimately had to be spent or somehow withdrawn from circulation through a deliberate policy of deflation, which neither Summers nor most economists would think advisable or even acceptable. While the inflationary effect of absorbing excess cash balances is therefore almost inevitable, the duration of the inflation is limited and need not cause inflation expectations to become “unanchored.”

With the advent of highly effective Covid vaccines, we are now gradually emerging from the worst horrors of the Covid pandemic, during which a substantial fraction of the labor force either was laid off or chose to withdraw from employment. As formerly idle workers return to work, we find ourselves in a prolonged quasi-postwar situation.

Just as the demand for civilian products declines during wartime, the demand for a broad range of private goods declined during the pandemic as people stopped going to restaurants, taking vacations, and attending public gatherings, and limited their driving and travel. Thus, the fraction of earnings that was saved increased as outlets for private spending became unavailable, inappropriate or undesirable.

As the pandemic has receded, restoring outlets for private spending, pent-up private demands have re-emerged, financed by households drawing down accumulated cash balances or drawing on credit lines augmented by the earlier paying down of indebtedness. For many goods, like cars, the release of pent-up private demand has outpaced the increase in supply, leading to substantial price increases that are unlikely to be sustained once short-term supply bottlenecks are eliminated. But such imbalances between rapid increases in demand and sluggish increases in supply do not seem like a reliable basis on which to make policy choices.

So what are we to do now? As always, Ralph Hawtrey offers the best advice. The control of inflation, he taught, ultimately depends on controlling the relationship between the rate of growth in total nominal spending (and income) and the rate of growth of total real output. If total nominal spending (and income) is increasing faster than the increase in total real output, the difference will be reflected in the prices at which goods and services are provided.

In the five years from 2015 to 2019, the average growth rate in nominal spending (and income) was about 3.9%. During that period the average rate of growth in real output was 2.2% annually and the average rate of inflation was 1.7%. It has plausibly been suggested that extrapolating the 3.9% annual growth in nominal spending of the previous five years provides a baseline against which to compare actual spending in 2020 and 2021.
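Hawtrey’s growth-rate relationship can be checked against these averages in a couple of lines (a sketch, using only the figures quoted above):

```python
# Growth-rate form of the equation of exchange: over short periods,
# nominal spending growth ~ real output growth + inflation.
nominal_growth = 0.039   # average annual nominal spending growth, 2015-2019
real_growth = 0.022      # average annual real output growth, 2015-2019
implied_inflation = nominal_growth - real_growth
print(round(implied_inflation, 3))  # 0.017, i.e. the 1.7% average inflation rate
```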

Actual nominal spending in Q3 2021 was slightly below what nominal GDP would have been had it continued along the extrapolated 3.9% growth path. For nominal GDP not to exceed that extrapolated path in Q4, spending could increase at an annual rate of no more than 4.3%. Inasmuch as spending in Q3 2021 was growing at 7.8%, the growth rate of nominal spending would have to slow substantially in Q4 from its Q3 rate.

But it is not clear that a 3.9% growth rate in nominal spending is the appropriate baseline. From 2015 to 2019, real output grew at an average rate of only 2.2% annually and inflation averaged only 1.7%. The Fed has long announced a 2% inflation target, and over the 2015-2019 period it consistently failed to meet that target. Presumably the Fed did not believe that real growth would have been lower with 2% inflation than with 1.7%, so it should have been aiming for more than 3.9% growth in total spending. If so, a baseline for extrapolating the growth path of nominal spending should certainly not be less than 4.2%. Even a 4.5% baseline seems reasonable, and a baseline as high as 5% does not seem unreasonable.

With a 5% baseline, total nominal spending in Q4 could increase by as much as 5.4% without raising total nominal spending above its target path. But I think the more important point is not whether total spending does or does not rise above its growth path. The important goal is for the growth in nominal spending to decline steadily toward a reasonable growth path of about 4.5 to 5%, and for this goal to be communicated to the public in a convincing manner. The 13.4% increase in total spending in Q2, when it appeared that the pandemic might soon be over, was likely a one-off outlier reflecting the release of pent-up demand. The 7.8% increase in Q3 was excessive, but substantially less than the Q2 rate of increase. If the Q4 increase does not continue the downward trend in the rate of increase in nominal spending, it will be time to re-evaluate policy to ensure that the growth of spending is brought down to a non-inflationary range.
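The Q4 arithmetic can be made explicit with a small sketch. The 0.1% Q3 shortfall below the extrapolated path is my illustrative assumption (chosen because it reproduces the 4.3% and 5.4% figures above), not a number taken from the national accounts:

```python
def max_q4_growth(baseline_annual, q3_gap):
    """Maximum annualized Q4 growth rate of nominal spending that keeps the
    Q4 level at (not above) a path growing at baseline_annual per year, when
    actual Q3 spending sits q3_gap (as a fraction) below that path."""
    # The path grows by (1 + b)^(1/4) per quarter; annualize the allowed
    # quarterly catch-up factor path_Q4 / actual_Q3.
    return (1 + baseline_annual) / (1 - q3_gap) ** 4 - 1

print(round(max_q4_growth(0.039, 0.001), 3))  # 0.043 with the 3.9% baseline
print(round(max_q4_growth(0.050, 0.001), 3))  # 0.054 with the 5% baseline
```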

Larry Summers v. John Taylor: No Contest

It seems that an announcement about who will be appointed as Fed Chairman after Janet Yellen’s term expires early next year is imminent. Although there are sources in the Administration, e.g., the President, indicating that Janet Yellen may be reappointed, the betting odds strongly favor Jerome Powell, a Republican currently serving as a member of the Board of Governors, over the better-known contender, John Taylor, who has earned a considerable reputation as an academic economist, largely as author of the so-called Taylor Rule, and has also served as a member of the Council of Economic Advisers and the Treasury in previous Republican administrations.

Taylor’s support seems to be drawn from the more militant ideological factions within the Republican Party, owing to his past criticism of the Fed’s quantitative-easing policy after the financial crisis and Little Depression; he famously predicted that quantitative easing would revive dormant inflationary pressures, presaging a return to the stagflation of the 1970s. Powell, by contrast, who has supported the Fed’s policies under Bernanke and Yellen, is widely suspected by the Republican base of being just another elitist establishmentarian inhabiting the swamp that the new administration was elected to drain. Nevertheless, Taylor’s academic background, his prior government service, and his long-standing ties to US and international banking and financial institutions make him a less than ideal torch bearer for the true-blue (or true-red) swamp drainers, whose ostensible goal is less to take control of the Fed than to abolish it. To accommodate both the base and the establishment, it is possible that, as reported by Breitbart, both Powell and Taylor will be appointed, one replacing Yellen as chairman, the other replacing Stanley Fischer as vice-chairman.

Seeing no evidence that Taylor has a sufficient following for his appointment to provide any political benefit, I have little doubt that it will be Powell who replaces Yellen, possibly along with Taylor as Vice-Chairman, if Taylor, at the age of 71, is willing to accept a big pay cut just to take the vice-chairmanship with little prospect of eventually gaining the top spot he has long coveted.

Although I think it unlikely that Taylor will be the next Fed Chairman, the recent flurry of speculation about his possible appointment prompted me to look at a recent talk he gave at the Federal Reserve Bank of Boston conference “Are Rules Made to be Broken? Discretion and Monetary Policy.” The title of his talk, “Rules versus Discretion: Assessing the Debate over Monetary Policy,” is typical of Taylor’s very low-key style, a style that, to his credit, is certainly not calculated to curry favor with the Fed-bashers who make up a large share of a Republican base that demands constant attention and large and frequently dispensed servings of red meat.

I found several things in Taylor’s talk notable. First, and again to his credit, Taylor does, on occasion, acknowledge the possibility of interpretations of events other than his own. He argues that the good macroeconomic performance (“the Great Moderation”) from about 1985 to 2003 was the result of the widespread adoption of “rules-based” monetary policy, and that the subsequent financial crisis and deep recession resulted from the FOMC’s shift, after the 2001 recession, from that rules-based policy to a discretionary policy of keeping interest rates too low for too long. But he did at least recognize an alternative possibility: that the path of interest rates after 2003 departed from the path that, he claims, had been followed during the Great Moderation because the economy was entering a period of inherently greater instability in the early 2000s than in the previous two decades, owing to external conditions unrelated to actions taken by the Fed.

The other view is that the onset of poor economic performance was not caused by a deviation from policy rules that were working, but rather to other factors. For example, Carney (2013) argues that the deterioration of performance in recent years occurred because “… the disruptive potential of financial instability—absent effective macroprudential policies—leads to a less favourable Taylor frontier.” Carney (2013) illustrated his argument with a shift in the tradeoff frontier as did King (2012). The view I offer here is that the deterioration was due more to a move off the efficient policy frontier due to a change in policy. That would suggest moving back toward the type of policy rule that described policy decisions during the Great Moderation period. (p. 9)

But despite acknowledging the possibility of another view, Taylor offers not a single argument against it. He merely reiterates his own unsupported opinion that policy after 2003 became less rule-based than it had been from 1985 to 2003. Later in his talk, however, in a different context, Taylor does return to the claim that the Fed’s policy after 2003 was not fundamentally different from its policy before 2003. Here Taylor assumes that Bernanke acknowledges a shift away from the rules-based monetary policy of 1985 to 2003, but holds that post-2003 monetary policy, though not rule-based in the way that it had been from 1985 to 2003, was rule-based in a different sense. I don’t believe that Bernanke would accept that there was a fundamental change in the nature of monetary policy after 2003, but that is not really my concern here.

At a recent Brookings conference, Ben Bernanke argued that the Fed had been following a policy rule—including in the “too low for too long” period. But the rule that Bernanke had in mind is not a rule in the sense that I have used it in this discussion, or that many others have used it.

Rather it is a concept that all you really need for effective policy making is a goal, such as an inflation target and an employment target. In medicine, it would be the goal of a healthy patient. The rest of policymaking is doing whatever you as an expert, or you as an expert with models, thinks needs to be done with the instruments. You do not need to articulate or describe a strategy, a decision rule, or a contingency plan for the instruments. If you want to hold the interest rate well below the rule-based strategy that worked well during the Great Moderation, as the Fed did in 2003-2005, then it’s ok, if you can justify it in terms of the goal.

Bernanke and others have argued that this approach is a form of “constrained discretion.” It is an appealing term, and it may be constraining discretion in some sense, but it is not inducing or encouraging a rule as the language would have you believe. Simply having a specific numerical goal or objective function is not a rule for the instruments of policy; it is not a strategy; in my view, it ends up being all tactics. I think there is evidence that relying solely on constrained discretion has not worked for monetary policy. (pp. 16-17)

Taylor has made this argument against constrained discretion before in an op-ed in the Wall Street Journal (May 2, 2015). Responding to that argument I wrote a post (“Cluelessness about Strategy, Tactics and Discretion”) which I think exposed how thoroughly confused Taylor is about what a monetary rule can accomplish and what the difference is between a monetary rule that specifies targets for an instrument and a monetary rule that specifies targets for policy goals. At an even deeper level, I believe I also showed that Taylor doesn’t understand the difference between strategy and tactics or the meaning of discretion. Here is an excerpt from my post of almost two and a half years ago.

Taylor denies that his steady refrain calling for a “rules-based policy” (i.e., the implementation of some version of his beloved Taylor Rule) is intended “to chain the Fed to an algebraic formula;” he just thinks that the Fed needs “an explicit strategy for setting the instruments” of monetary policy. Now I agree that one ought not to set a policy goal without a strategy for achieving the goal, but Taylor is saying that he wants to go far beyond a strategy for achieving a policy goal; he wants a strategy for setting instruments of monetary policy, which seems like an obvious confusion between strategy and tactics, ends and means.

Instruments are the means by which a policy is implemented. Setting a policy goal can be considered a strategic decision; setting a policy instrument a tactical decision. But Taylor is saying that the Fed should have a strategy for setting the instruments with which it implements its strategic policy.  (OED, “instrument – 1. A thing used in or for performing an action: a means. . . . 5. A tool, an implement, esp. one used for delicate or scientific work.”) This is very confused.

Let’s be very specific. The Fed, for better or for worse – I think for worse — has made a strategic decision to set a 2% inflation target. Taylor does not say whether he supports the 2% target; his criticism is that the Fed is not setting the instrument – the Fed Funds rate – that it uses to hit the 2% target in accordance with the Taylor rule. He regards the failure to set the Fed Funds rate in accordance with the Taylor rule as a departure from a rules-based policy. But the Fed has continually undershot its 2% inflation target for the past three [now almost six] years. So the question naturally arises: if the Fed had raised the Fed Funds rate to the level prescribed by the Taylor rule, would the Fed have succeeded in hitting its inflation target? If Taylor thinks that a higher Fed Funds rate than has prevailed since 2012 would have led to higher inflation than we experienced, then there is something very wrong with the Taylor rule, because, under the Taylor rule, the Fed Funds rate is positively related to the difference between the actual inflation rate and the target rate. If a Fed Funds rate higher than the rate set for the past three years would have led, as the Taylor rule implies, to lower inflation than we experienced, following the Taylor rule would have meant disregarding the Fed’s own inflation target. How is that consistent with a rules-based policy?
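For reference, here is a minimal sketch of Taylor’s original (1993) rule, which is enough to see why sub-target inflation implies a lower, not a higher, prescribed Fed Funds rate. The 2% natural-rate and target parameters are the standard textbook values, not figures taken from this post:

```python
def taylor_rule(inflation, output_gap, r_star=0.02, target=0.02):
    """Taylor's 1993 rule: i = r* + pi + 0.5*(pi - pi*) + 0.5*(output gap).
    All arguments are fractions (0.02 = 2%)."""
    return r_star + inflation + 0.5 * (inflation - target) + 0.5 * output_gap

# With inflation at target and a zero output gap, the rule prescribes 4%:
print(round(taylor_rule(0.020, 0.0), 4))  # 0.04
# With inflation below target, the prescribed rate falls, not rises:
print(round(taylor_rule(0.015, 0.0), 4))  # 0.0325
```

Because the prescribed rate is positively related to the inflation gap, a Fed that was undershooting its 2% target would, under the rule, be directed toward a lower Funds rate, which is the point made above.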

This is such an obvious point – and I am hardly the only one to have made it – that Taylor’s continuing failure to respond to it is simply inexcusable. In his apologetics for the Taylor rule and for legislation introduced (no doubt with his blessing and active assistance) by various Republican critics of Fed policy in the House of Representatives, Taylor repeatedly insists that the point of the legislation is just to require the Fed to state a rule that it will follow in setting its instrument, with no requirement that the Fed actually abide by its stated rule. The purpose of the legislation is not to obligate the Fed to follow the rule, but merely to require the Fed, when deviating from its own stated rule, to provide Congress with a rationale for the deviation. I don’t endorse the legislation that Taylor supports, but I do agree that it would be desirable for the Fed to be more forthcoming than it has been in explaining the reasoning behind its monetary-policy decisions, explanations which tend to be either platitudinous or obfuscatory rather than informative. But if Taylor wants the Fed to be more candid and transparent in defending its own decisions about monetary policy, it would be only fitting and proper for Taylor, as an aspiring Fed Chairman, to be more forthcoming than he has yet been about the obvious, and rather scary, implications of following the Taylor Rule during the period since 2003.

If Taylor is nominated to be Chairman or Vice-Chairman of the Fed, I hope that, during his confirmation hearings, he will be asked to explain what the implications of following the Taylor Rule would have been in the post-2003 period.

As the attached figure shows, PCE inflation (excluding food and energy prices) was 1.9 percent in 2004. If inflation in 2004 was below the 2% target assumed by the Taylor Rule, why does Taylor think that raising interest rates in 2004 would have been appropriate? And if inflation in 2005 was merely 2.2%, just barely above the 2% target, what level should the Fed Funds rate have reached in 2005, and how would that rate have affected the fairly weak recovery from the 2001 recession? And what is the basis for Taylor’s assessment that raising the Fed Funds rate in 2005 to a higher level than it actually reached would have prevented the subsequent financial crisis?

Taylor’s implicit argument is that, by not raising interest rates as rapidly as the Taylor rule required, the Fed created additional uncertainty that was damaging to the economy. But what was the nature of the uncertainty created? The Federal Funds rate is merely the instrument of policy, not the goal of policy. Arguing that the Fed created additional uncertainty by not changing its interest rate in line with the Taylor rule would make sense only if economic agents cared about how the instrument itself is set. But the importance of the Fed Funds rate, precisely because it is an instrument, derives entirely from its usefulness in achieving the Fed’s policy goal, and that goal was the 2% inflation rate, which the Fed came extremely close to hitting in the 2004-06 period, the very period during which Taylor alleges that the Fed’s monetary policy went off the rails and became random, unpredictable and chaotic.

If you calculate the absolute difference between the observed yearly PCE inflation rate (excluding food and energy prices) and the 2% target from 1985 to 2003 (Taylor’s golden age of monetary policy) the average yearly deviation was 0.932%. From 2004 to 2015, the period of chaotic monetary policy in Taylor’s view, the average yearly deviation between PCE inflation and the 2% target was just 0.375%. So when was monetary policy more predictable? Even if you just look at the last 12 years of the golden age (1992 to 2003), the average annual deviation was 0.425%.
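Anyone wanting to reproduce this comparison needs only a mean absolute deviation; the helper below shows the calculation, with a toy series standing in for the actual yearly core-PCE inflation data, which I have not reproduced here:

```python
def mean_abs_deviation(inflation_rates, target=2.0):
    """Average absolute deviation (in percentage points) of yearly
    inflation rates from the target rate."""
    return sum(abs(x - target) for x in inflation_rates) / len(inflation_rates)

# Toy example only; feed in the actual yearly core-PCE inflation rates for
# 1985-2003 and 2004-2015 to reproduce the 0.932% and 0.375% figures.
print(round(mean_abs_deviation([1.5, 2.5, 2.0]), 3))  # 0.333
```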

The name Larry Summers is in the title of this post, but I haven’t mentioned him yet, so let me explain where Larry Summers comes into the picture. In his talk, Taylor mentions a debate about rules versus discretion that he and Summers had at the 2013 American Economic Association meetings and proceeds to give the following account of the key interchange in that debate.

Summers started off by saying: “John Taylor and I have, it will not surprise you . . . a fundamental philosophical difference, and I would put it in this way. I think about my doctor. Which would I prefer: for my doctor’s advice, to be consistently predictable, or for my doctor’s advice to be responsive to the medical condition with which I present? Me, I’d rather have a doctor who most of the time didn’t tell me to take some stuff, and every once in a while said I needed to ingest some stuff into my body in response to the particular problem that I had. That would be a doctor who’s [sic] [advice], believe me, would be less predictable.” Thus, Summers argues in favor of relying on an all-knowing expert, a doctor who does not perceive the need for, and does not use, a set of guidelines, but who once in a while in an unpredictable way says to ingest some stuff. But as in economics, there has been progress in medicine over the years. And much progress has been due to doctors using checklists, as described by Atul Gawande.

Of course, doctors need to exercise judgement in implementing checklists, but if they start winging it or skipping steps the patients usually suffer. Experience and empirical studies show that checklist-free medicine is wrought with dangers just as rules-free, strategy-free monetary policy is. (pp. 15-16)

Taylor’s citation of Atul Gawande, author of The Checklist Manifesto, is pure obfuscation. To see how off-point it is, have a look at this review published in the Seattle Times.

“The Checklist Manifesto” is about how to prevent highly trained, specialized workers from making dumb mistakes. Gawande — who appears in Seattle several times early next week — is a surgeon, and much of his book is about surgery. But he also talks to a construction manager, a master chef, a venture capitalist and the man at The Boeing Co. who writes checklists for airline pilots.

Commercial pilots have been using checklists for decades. Gawande traces this back to a fly-off at Wright Field, Ohio, in 1935, when the Army Air Force was choosing its new bomber. Boeing’s entry, the B-17, would later be built by the thousands, but on that first flight it took off, stalled, crashed and burned. The new airplane was complicated, and the pilot, who was highly experienced, had forgotten a routine step.

For pilots, checklists are part of the culture. For surgical teams they have not been. That began to change when a colleague of Gawande’s tried using a checklist to reduce infections when using a central venous catheter, a tube to deliver drugs to the bloodstream.

The original checklist: wash hands; clean patient’s skin with antiseptic; use sterile drapes; wear sterile mask, hat, gown and gloves; use a sterile dressing after inserting the line. These are all things every surgical team knows. After putting them in a checklist, the number of central-line infections in that hospital fell dramatically.

Then came the big study, the use of a surgical checklist in eight hospitals around the world. One was in rural Tanzania, in Africa. One was in the Kingdom of Jordan. One was the University of Washington Medical Center in Seattle. They were hugely different hospitals with much different rates of infection.

Use of the checklist lowered infection rates significantly in all of them.

Gawande describes the key things about a checklist, much of it learned from Boeing. It has to be short, limited to critical steps only. Generally the checking is not done by the top person. In the cockpit, the checklist is read by the copilot; in an operating room, Gawande discovered, it is done best by a nurse.

Gawande wondered whether surgeons would accept control by a subordinate. Which was stronger, the culture of hierarchy or the culture of precision? He found reason for optimism in the following dialogue he heard in the hospital in Amman, Jordan, after a nurse saw a surgeon touch a nonsterile surface:

Nurse: “You have to change your glove.”

Surgeon: “It’s fine.”

Nurse: “No, it’s not. Don’t be stupid.”

In other words, the basic rule underlying the checklist is simply: don’t be stupid. It has nothing to do with whether doctors should exercise judgment, or “winging it,” or “skipping steps.” What was Taylor even thinking? For a monetary authority not to follow a Taylor rule is not analogous to a doctor practicing checklist-free medicine.

As it happens, I have a story of my own about whether following numerical rules without exercising independent judgment makes sense in practicing medicine. Fourteen years ago, on the Friday before Labor Day, I was exercising at home and began to feel chest pains. After ignoring the pain for a few minutes, I stopped, took a shower, and then told my wife that I thought I needed to go to the hospital because I was feeling chest pains (I was still in semi-denial about what I was feeling). My wife asked me if she should call 911, and I said that might be a good idea. So she called 911 and told the operator that I was feeling chest pains. Within a couple of minutes, two ambulances arrived, and I was given an aspirin to chew and a nitroglycerine tablet to put under my tongue. I was taken to the emergency room at the hospital nearest to my home. After calling 911, my wife also called our family doctor to let him know what was happening and which hospital I was being taken to. He then placed a call to a cardiologist with privileges at that hospital who happened to be making rounds there that morning.

When I got to the hospital, I was given an electrocardiogram, and my blood was taken. I was also asked to rate my pain level on a scale of zero to ten. The aspirin and nitroglycerine had reduced the pain level slightly, but I probably said it was at eight or nine. However, the ER doc looked at the electrocardiogram results and the enzyme levels in my blood, and told me that there was no indication that I was having a heart attack, but that they would keep me in the ER for observation. Luckily, the cardiologist who had been called by my internist came to the ER and, after talking to the ER doc and looking at the test results, came over to me and started asking me questions about what had happened and how I was feeling. Although the test results did not indicate that I was having a heart attack, the cardiologist quickly concluded that what I was experiencing likely was one. He therefore hinted to me that I should request a transfer to another nearby hospital, which not only had a cath lab, as the one I was then at did, but also had an operating room in which open-heart surgery could be performed, if that proved necessary. It took a couple of tries on his part before I caught on to what he was hinting at, but as soon as I requested the transfer, he got me onto an ambulance ASAP so that he could meet me at the other hospital and perform an angiogram in the cath lab, cancelling an already scheduled angiogram.

The angiogram showed that my left anterior descending artery was completely blocked, so open-heart surgery was not necessary; angioplasty would be sufficient to clear the artery, which the cardiologist performed, also implanting two stents to prevent future blockage.  I remained in the cardiac ICU for two days, and was back home on Monday, when my rehab started. I was back at work two weeks later.

The willingness of my cardiologist to use his judgment, experience and intuition to ignore the test results indicating that I was not having a heart attack saved my life. If the ER doctor, following the test results, had kept me in the ER for observation, I would have been dead within a few hours. Following the test results and ignoring what the patient was feeling would have been stupid. Luckily, I was saved by a really good cardiologist. He was not stupid; he could tell that the numbers were not telling the real story about what was happening to me.

We now know that, in the summer of 2008, the FOMC, in thrall to headline inflation numbers, allowed a recession that had already started at the end of 2007 to deteriorate rapidly, providing little or no monetary stimulus to an economy in which nominal income was falling so fast that debts coming due could no longer be serviced. The financial crisis and subsequent Little Depression were caused by the failure of the FOMC to provide stimulus to a failing economy, not by interest rates having been kept too low for too long after 2003. If John Taylor still hasn’t figured that out (and he obviously hasn’t), he should not be allowed anywhere near the Federal Reserve Board.

Sumner on the Demand for Money, Interest Rates and Barsky and Summers

Scott Sumner had two outstanding posts a couple of weeks ago (here and here) discussing the relationship between interest rates and NGDP, making a number of important points, which I largely agree with, even though I have some (mostly semantic) quibbles about the details. I especially liked how in the second post he applied the analysis of Robert Barsky and Larry Summers in their article about Gibson’s Paradox under the gold standard to recent monetary experience. The two posts are so good and cover such a wide range of topics that the best way for me to address them is by cutting and pasting relevant passages and commenting on them.

Scott begins with the equation of exchange MV = PY. I personally prefer the Cambridge version (M = kPY), where k stands for the fraction of income that people hold as cash, thereby making it clear that the relevant concept is how much money people want to hold, not that mysterious metaphysical concept called the velocity of circulation V (= 1/k). With attention focused on the decision about how much money to hold, it is natural to think of the rate of interest as the opportunity cost of holding non-interest-bearing cash balances. When the rate of interest rises, the desired holdings of non-interest-bearing cash tend to fall; in other words, k falls (and V rises). With unchanged M, the equation is satisfied only if PY increases. So the notion that a reduction in interest rates, in and of itself, is expansionary is based on a misunderstanding. An increase in the amount of money demanded is always contractionary. A reduction in interest rates increases the amount of money demanded (if money is non-interest-bearing). A reduction in interest rates is therefore contractionary (all else equal).
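The point can be put in one line with the Cambridge equation: holding M fixed, a rise in k (more money demanded, hence lower velocity) forces nominal income PY down. A minimal sketch, with illustrative numbers of my own choosing:

```python
def nominal_income(M, k):
    """Cambridge equation M = k * P * Y, so P*Y = M / k.
    k is the fraction of income held as cash; velocity V = 1/k."""
    return M / k

# With the money supply fixed at 100, a fall in interest rates that raises
# desired cash holdings from 1/5 to 1/4 of income is contractionary:
print(nominal_income(100, 0.20))  # 500.0
print(nominal_income(100, 0.25))  # 400.0
```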

Scott suggests some reasons why this basic relationship seems paradoxical.

Sometimes, not always, reductions in interest rates are caused by an increase in the monetary base. (This was not the case in late 2007 and early 2008, but it is the case on some occasions.) When there is an expansionary monetary policy, specifically an exogenous increase in M, then when interest rates fall, V tends to fall by less than M rises. So the policy as a whole causes NGDP to rise, even as the specific impact of lower interest rates is to cause NGDP to fall.

To this I would add that, as discussed in my recent posts about Keynes and Fisher, Keynes in the General Theory seemed to be advancing a purely monetary theory of the rate of interest. If Keynes meant that the rate of interest is determined exclusively by monetary factors, then a falling rate of interest is a sure sign of an excess supply of money. Of course in the Hicksian world of IS-LM, the rate of interest is simultaneously determined by both equilibrium in the money market and an equilibrium rate of total spending, but Keynes seems to have had trouble with the notion that the rate of interest could be simultaneously determined by not one, but two, equilibrium conditions.

Another problem is the Keynesian model, which hopelessly confuses the transmission mechanism. Any Keynesian model with currency that says low interest rates are expansionary is flat out wrong.

But if Keynes believed that the rate of interest is exclusively determined by money demand and money supply, then the only possible cause of a low or falling interest rate is the state of the money market, the supply side of which is always under the control of the monetary authority. Or stated differently, in the Keynesian model, the money-supply function is perfectly elastic at the target rate of interest, so that the monetary authority supplies whatever amount of money is demanded at that rate of interest. I disagree with the underlying view of what determines the rate of interest, but given that theory of the rate of interest, the model is not incoherent and doesn’t confuse the transmission mechanism.

That’s probably why economists were so confused by 2008. Many people confuse aggregate demand with consumption. Thus they think low rates encourage people to “spend” and that this somehow boosts AD and NGDP. But it doesn’t, at least not in the way they assume. If by “spend” you mean higher velocity, then yes, spending more boosts NGDP. But we’ve already seen that lower interest rates don’t boost velocity, rather they lower velocity.

But, remember that Keynes believed that the interest rate can be reduced only by increasing the quantity of money, which nullifies the contractionary effect of a reduced interest rate.

Even worse, some assume that “spending” is the same as consumption, hence if low rates encourage people to save less and consume more, then AD will rise. This is reasoning from a price change on steroids! When you don’t spend you save, and saving goes into investment, which is also part of GDP.

But this is reasoning from an accounting identity. The question is what happens if people try to save more. The Keynesian argument is that the attempt to save will be self-defeating; instead of increased saving, there is reduced income. Both scenarios are consistent with the accounting identity. The question is which causal mechanism is operating: does an attempt to increase saving cause investment to increase, or does it cause income to go down? Seemingly aware of the alternative scenario, Scott continues:

Now here’s where amateur Keynesians get hopelessly confused. They recall reading something about the paradox of thrift, about planned vs. actual saving, about the fact that an attempt to save more might depress NGDP, and that in the end people may fail to save more, and instead NGDP will fall. This is possible, but even if true it has no bearing on my claim that low rates are contractionary.

Just so. But there is not necessarily any confusion; the issue may be just a difference in how monetary policy is implemented. You can think of the monetary authority as having a choice in setting its policy in terms of the quantity of the monetary base, or in terms of an interest-rate target. Scott characterizes monetary policy in terms of the base, allowing the interest rate to adjust; Keynesians characterize monetary policy in terms of an interest-rate target, allowing the monetary base to adjust. The underlying analysis should not depend on how policy is characterized. I think that this is borne out by Scott’s next paragraph, which is consistent with a policy choice on the part of the Keynesian monetary authority to raise interest rates as needed to curb aggregate demand when aggregate demand is excessive.

To see the problem with this analysis, consider the Keynesian explanations for increases in AD. One theory is that animal spirits propel businesses to invest more. Another is that consumer optimism propels consumers to spend more. Another is that fiscal policy becomes more expansionary, boosting the budget deficit. What do all three of these shocks have in common? In all three cases the shock leads to higher interest rates. (Use the S&I diagram to show this.) Yes, in all three cases the higher interest rates boost velocity, and hence ceteris paribus (i.e. fixed monetary base) the higher V leads to more NGDP. But that’s not an example of low rates boosting AD, it’s an example of some factor boosting AD, and also raising interest rates.

In the Keynesian terminology, the shocks do lead to higher rates, but only because excessive aggregate demand, caused by animal spirits, consumer optimism, or government budget deficits, has to be curbed by interest-rate increases. The ceteris paribus assumption is ambiguous; it can be interpreted to mean holding the monetary base constant or holding the interest-rate target constant. I don’t often cite Milton Friedman as an authority, but one of his early classic papers was “The Marshallian Demand Curve” in which he pointed out that there is an ambiguity in what is held constant along the demand curve: prices of other goods or real income. You can hold only one of the two constant, not both, and you get a different demand curve depending on which ceteris paribus assumption you make. So the upshot of my commentary here is that, although Scott is right to point out that the standard reasoning about how a change in interest rates affects NGDP implicitly assumes that the quantity of money is changing, that valid point doesn’t refute the standard reasoning. There is an inherent ambiguity in specifying what is actually held constant in any ceteris paribus exercise. It’s good to make these ambiguities explicit, and there might be good reasons to prefer one ceteris paribus assumption over another, but a ceteris paribus assumption isn’t a sufficient basis for rejecting a model.

Now just to be clear, I agree with Scott that, as a matter of positive economics, the interest rate is not fully under the control of the monetary authority. And one reason that it’s not is that the rate of interest is embedded in the entire price system, not just a particular short-term rate that the central bank may be able to control. So I don’t accept the basic Keynesian premise that the monetary authority can always make the rate of interest whatever it wants it to be, though the monetary authority probably does have some control over short-term rates.

Scott also provides an analysis of the effects of interest on reserves, and he is absolutely correct to point out that paying interest on reserves is deflationary.

I will just note that near the end of his post, Scott makes a comment about living “in a Ratex world.” WADR, I don’t think that ratex is at all descriptive of reality, but I will save that discussion for another time.

Scott followed up the post about the contractionary effects of low interest rates with a post about the 1988 Barsky and Summers paper.

Barsky and Summers . . . claim that the “Gibson Paradox” is caused by the fact that low interest rates are deflationary under the gold standard, and that causation runs from falling interest rates to deflation. Note that there was no NGDP data for this period, so they use the price level rather than NGDP as their nominal indicator. But their basic argument is identical to mine.

The Gibson Paradox referred to the tendency of prices and interest rates to be highly correlated under the gold standard. Initially some people thought this was due to the Fisher effect, but it turns out that prices were roughly a random walk under the gold standard, and hence the expected rate of inflation was close to zero. So the actual correlation was between prices and both real and nominal interest rates. Nonetheless, the nominal interest rate is the key causal variable in their model, even though changes in that variable are mostly due to changes in the real interest rate.

Since gold is a durable good with a fixed price, the nominal interest rate is the opportunity cost of holding that good. A lower nominal rate tends to increase the demand for gold, for both monetary and non-monetary purposes.  And an increased demand for gold is deflationary (and also reduces NGDP.)

Very insightful on Scott’s part to see the connection between the Barsky and Summers analysis and the standard theory of the demand for money. I had previously thought about the Barsky and Summers discussion simply as a present-value problem. The present value of any durable asset, generating a given expected flow of future services, must vary inversely with the interest rate at which those future services are discounted. Since the future price level under the gold standard was expected to be roughly stable, any change in nominal interest rates implied a change in real interest rates. The value of gold, like that of other durable assets, varied inversely with the nominal interest rate. But with the nominal value of gold fixed by the gold standard, changes in the value of gold implied a change in the price level, an increased value of gold being deflationary and a decreased value of gold inflationary. Scott rightly observes that the same idea can be expressed in the language of monetary theory by thinking of the nominal interest rate as the cost of holding any asset: a reduction in the nominal interest rate has to increase the demand to own assets, because reducing the cost of holding an asset increases the demand to own it, thereby raising its value in exchange, provided that current output of the asset is small relative to the total stock.
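The present-value point can be put in one stylized formula. Under the simplifying assumption (mine, for illustration) that gold yields a constant, permanent service flow $s$ per period, discounted at the nominal rate $i$ (roughly equal to the real rate, since expected inflation under the gold standard was close to zero), gold is valued like a perpetuity:

```latex
V \;=\; \sum_{t=1}^{\infty} \frac{s}{(1+i)^{t}} \;=\; \frac{s}{i},
\qquad
\frac{dV}{di} \;=\; -\frac{s}{i^{2}} \;<\; 0 .
```

With the nominal price of gold pegged, the higher real value of gold implied by a lower $i$ can be realized only through a fall in the general price level, which is precisely the deflationary mechanism at issue.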

However, the present-value approach does have an advantage over the opportunity-cost approach, because the present-value approach relates the value of gold or money to the entire term structure of interest rates, while the opportunity-cost approach can only handle a single interest rate – presumably the short-term rate – that is relevant to the decision to hold money at any given moment in time. In simple models of the IS-LM ilk, the only interest rate under consideration is the short-term rate, or the term-structure is assumed to have a fixed shape so that all interest rates are equally affected by, or along with, any change in the short-term rate. The latter assumption of course is clearly unrealistic, though Keynes made it without a second thought. However, in his Century of Bank Rate, Hawtrey showed that between 1844 and 1938, when the gold standard was in effect in Britain (except 1914-25 and 1931-38) short-term rates and long-term rates often moved by significantly different magnitudes and even in opposite directions.

Scott makes a further interesting observation:

The puzzle of why the economy does poorly when interest rates fall (such as during 2007-09) is in principle just as interesting as the one Barsky and Summers looked at. Just as gold was the medium of account during the gold standard, base money is currently the medium of account. And just as causation went from falling interest rates to higher demand for gold to deflation under the gold standard, causation went from falling interest rates to higher demand for base money to recession in 2007-08.

There is something to this point, but I think Scott may be making too much of it. Falling interest rates in 2007 may have caused the demand for money to increase, but other factors were also important in causing contraction. The problem in 2008 was that the real rate of interest was falling, while the Fed, fixated on commodity (especially energy) prices, kept interest rates too high given the rapidly deteriorating economy. With expected yields from holding real assets falling, and the Fed declining to cut interest rates any further between April and October of 2008, the expected yield from holding money came to dominate the expected yield from holding real assets once inflationary expectations started collapsing in August 2008. The result was a pathological Fisher effect in which asset values had to collapse for the yields from holding money and from holding assets to be equalized, precipitating the financial crisis.

Under the gold standard, the value of gold was actually sensitive to two separate interest-rate effects – one reflected in the short-term rate and one reflected in the long-term rate. The latter effect is the one focused on by Barsky and Summers, though they also performed some tests on the short-term rate. However, it was through the short-term rate that the central bank, in particular the Bank of England, the dominant central bank in the pre-World War I era, manifested its demand for gold reserves, raising the short-term rate when it was trying to accumulate gold and reducing the short-term rate when it was willing to reduce its reserve holdings. Barsky and Summers found the long-term rate to be more highly correlated with the price level than the short-term rate. I conjecture that the reason for that result is that the long-term rate is what captures the theoretical inverse relationship between the interest rate and the value of a durable asset, while the short-term rate would be negatively correlated with the value of gold when (as is usually the case) it moves together with the long-term rate, but may sometimes be positively correlated with the value of gold (when the central bank is trying to accumulate gold), thereby tightening the world market for gold. I don’t know if Barsky and Summers ran regressions using both long-term and short-term rates, but using both in the same regression might have allowed them to find evidence of both effects in the data.

PS I have been too busy and too distracted of late to keep up with comments on earlier posts. Sorry for not responding promptly. In case anyone is still interested, I hope to respond to comments over the next few days, and to post and respond more regularly than I have been doing for the past few weeks.

Further Thoughts on Capital and Inequality

In a recent post, I criticized, perhaps without adequate understanding, some of Thomas Piketty’s arguments about capital in his best-selling book. My main criticism is that Piketty’s argument that, under capitalism, there is an inherent tendency toward increasing inequality, ignores the heterogeneity of capital and the tendency for new capital embodying new knowledge, new techniques, and new technologies to render older capital obsolete. Contrary to the simple model of accumulation on which Piketty relies, the accumulation of capital is not a smooth process; it is a very uneven process, generating very high returns to some owners of capital, but also imposing substantial losses on other owners of capital. The only way to avoid the risk of owning suddenly obsolescent capital is to own the market portfolio. But I conjecture that few, if any, great fortunes have been amassed by investing in the market portfolio, and (I further conjecture) great fortunes, once amassed, are usually not liquidated and reinvested in the market portfolio, but continue to be weighted heavily in fairly narrow portfolios of assets from which those great fortunes grew. Great fortunes, aside from being dissipated by deliberate capital consumption, also tend to be eroded by the loss of value through obsolescence, a process that can only be avoided by extreme diversification of holdings or by the exercise of entrepreneurial skill, a skill rarely bequeathed from generation to generation.

Applying this insight, Larry Summers pointed out in his review of Piketty’s book that the rate of turnover in the Forbes list of the 400 wealthiest individuals between 1982 and 2012 was much higher than the turnover predicted by Piketty’s simple accumulation model. Commenting on my post (in which I referred to Summers’s review), Kevin Donoghue objected that Piketty had criticized the Forbes 400 as a measure of wealth in his book, so that Piketty would not necessarily accept Summers’s criticism based on the Forbes 400. Well, as an alternative, let’s have a look at the S&P 500. I just found this study of the rate of turnover in the 500 firms making up the S&P 500, showing that the rate of turnover in the composition of the S&P 500 has increased greatly over the past 50 years. See the chart below, copied from that study, showing that the average length of time for firms on the S&P 500 was over 60 years in 1958, but by 2011 had fallen to less than 20 years. The pace of creative destruction seems to be accelerating.

S&P500_turnover

From the same study here’s another chart showing the companies that were deleted from the index between 2001 and 2011 and those that were added.

S&P500_churn

But I would also add a cautionary note that, because the population of individuals and publicly held business firms is growing, comparing the composition of a fixed number (400) of wealthiest individuals or (500) most successful corporations over time may overstate the increase over time in the rate of turnover, any group of fixed numerical size becoming a smaller percentage of the population over time. Even with that caveat, however, what this tells me is that there is a lot of variability in the value of capital assets. Wealth grows, but it grows unevenly. Capital is accumulated, but it is also lost.

Does the process of capital accumulation necessarily lead to increasing inequality of wealth and income? Perhaps, but I don’t think that the answer is necessarily determined by the relationship between the real rate of interest and the rate of growth in GDP.

Many people have suggested that an important cause of rising inequality has been the increasing importance of winner-take-all markets in which a few top performers seem to be compensated at very much higher rates than other, only slightly less gifted, performers. This sort of inequality is reflected in widening gaps between the highest and lowest paid participants in a given occupation. In some cases at least, the differences between the highest and lowest paid don’t seem to correspond to the differences in skill, though admittedly skill is often difficult to measure.

This concentration of rewards is especially characteristic of competitive sports, winners gaining much larger rewards than losers. However, because the winner’s return comes, at least in part, at the expense of the loser, the private gain to winning exceeds the social gain. That’s why all organized professional sports engage in some form of revenue sharing and impose limits on spending on players. Without such measures, competitive sports would not be viable, because the private return to improving quality exceeds the collective return from improved quality. There are, of course, times when a superstar like Babe Ruth or Michael Jordan can actually increase the return to losers, but that seems to be the exception.

To what extent other sorts of winner-take-all markets share this intrinsic inefficiency is not immediately clear to me, but it does not seem implausible to think that there is an incentive to overinvest in skills that increase the expected return to participants in winner-take-all markets. If so, the source of inequality may also be a source of inefficiency.

Thomas Piketty and Joseph Schumpeter (and Gerard Debreu)

Everybody else seems to have an opinion about Thomas Piketty, so why not me? As if the last two months of Piketty-mania (reminiscent, to those of a certain age, of an earlier invasion of American shores, exactly 50 years ago, by four European rock-stars) were not enough, there has been a renewed flurry of interest this week about Piketty’s blockbuster book triggered by Chris Giles’s recent criticism in the Financial Times of Piketty’s use of income data, which mainly goes to show that, love him or hate him, people cannot get enough of Professor Piketty. Now I will admit upfront that I have not read Piketty’s book, and from my superficial perusal of the recent criticisms, they seem less problematic than the missteps of Reinhart and Rogoff in claiming that, beyond a critical 90% ratio of national debt to national income, the burden of national debt begins to significantly depress economic growth. But in any event, my comments in this post are directed at Piketty’s conceptual approach, not at his use of the data in his empirical work. In fact, I think that Larry Summers in his superficially laudatory, but substantively critical, review has already made most of the essential points about Piketty’s book. But I think that Summers left out a couple of important issues — issues touched upon usefully by George Cooper in a recent blog post about Piketty — which bear further emphasis.

Just to set the stage for my comments, here is my understanding of the main conceptual point of Piketty’s book. Piketty believes that the essence of capitalism is that capital generates a return to the owners of capital that, on average over time, is equal to the rate of interest. Capital grows; it accumulates. And the rate of accumulation is equal to the rate of interest. However, the rate of interest is generally somewhat higher than the rate of growth of the economy. So if capital is accumulating at a rate of, say, 5%, and the economy is growing at a rate of only 3%, the share of income accruing to the owners of capital will grow over time. It is in this simple theoretical framework — the relationship between the rate of economic growth and the rate of interest — that Piketty believes he has found the explanation not only for the increase in inequality over the past few centuries of capitalist development, but for the especially rapid increase in inequality over the past 30 years.
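Piketty’s arithmetic is simple enough to put in a few lines of code. The sketch below uses my own illustrative numbers, not Piketty’s — capital initially worth four times national income, r = 5%, g = 3%, all returns reinvested — to show the mechanical implication that capital income’s share of national income rises whenever r exceeds g:

```python
# Stylized r > g arithmetic: capital compounds at the rate of return r,
# national income grows at g, so with full reinvestment the share of
# income accruing to owners of capital rises over time.

def capital_income_share(r, g, k0, y0, years):
    """Capital income (r * K) as a share of national income Y after `years`."""
    k = k0 * (1 + r) ** years   # capital stock, all returns reinvested
    y = y0 * (1 + g) ** years   # national income
    return r * k / y

# Capital initially worth 4x national income
print(round(capital_income_share(0.05, 0.03, 4.0, 1.0, 0), 3))    # 0.2
print(round(capital_income_share(0.05, 0.03, 4.0, 1.0, 50), 3))   # 0.523
```

Summers’s objection, in these terms, is that neither the full-reinvestment assumption nor a constant r survives contact with the data.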

While praising Piketty’s scholarship, empirical research and rhetorical prowess, Summers does not handle Piketty’s main thesis gently. Summers points out that, as accumulation proceeds, the incentive to engage in further accumulation tends to weaken, so the iron law of increasing inequality posited by Piketty is not nearly as inflexible as Piketty suggests. Now one could respond that, once accumulation reaches a certain threshold, the capacity to consume weakens as well, if only, as Gary Becker liked to remind us, because of the constraint that time imposes on consumption.

Perhaps so, but the return to capital is not the only, or even the most important, source of inequality. I would interpret Summers’s point to be the following: pure accumulation is unlikely to generate enough growth in wealth to outstrip the capacity to increase consumption. To generate an increase in wealth so large that consumption can’t keep up, there must be not just a return to the ownership of capital but profit in the Knightian or Schumpeterian sense, over and above the return on capital. Alternatively, there must be some extraordinary rent on a unique, irreproducible factor of production. Accumulation by itself, without the stimulus of entrepreneurial profit, reflecting the application of new knowledge in the broadest sense of the term, cannot go on for very long. It is entrepreneurial profits and rents to unique factors of production (or awards of government monopolies or other privileges), not plain vanilla accumulation, that account for the accumulation of extraordinary amounts of wealth. Moreover, it seems that philanthropy (especially conspicuous philanthropy) provides an excellent outlet for the dissipation of accumulated wealth and can easily be combined with quasi-consumption activities, like art patronage or political activism, as more conventional consumption outlets become exhausted.

Summers backs up his conceptual criticism with a powerful factual argument. Comparing the Forbes list of the 400 richest individuals in 1982 with the Forbes list for 2012 Summers observes:

When Forbes compared its list of the wealthiest Americans in 1982 and 2012, it found that less than one tenth of the 1982 list was still on the list in 2012, despite the fact that a significant majority of members of the 1982 list would have qualified for the 2012 list if they had accumulated wealth at a real rate of even 4 percent a year. They did not, given pressures to spend, donate, or misinvest their wealth. In a similar vein, the data also indicate, contra Piketty, that the share of the Forbes 400 who inherited their wealth is in sharp decline.
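The compounding claim in the quoted passage is easy to verify with a back-of-the-envelope check (the 4 percent real rate and the 1982–2012 span come from the quotation; everything else is arithmetic):

```python
# Real wealth multiplier from compounding at a 4% real rate
# over the 30 years from 1982 to 2012.
multiplier = 1.04 ** 30
print(round(multiplier, 2))   # 3.24
```

So a 1982 Forbes-list fortune merely compounding at 4 percent would have more than tripled in real terms, which is what makes the nine-tenths attrition from the list so striking.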

But something else is also going on here, a misunderstanding, derived from a fundamental ambiguity, about what capital actually means. Capital can refer either to a durable physical asset or to a sum of money. When economists refer to capital as a factor of production, they are thinking of capital as a physical asset. But in most models, economists try to simplify the analysis by collapsing the diversity of the entire stock of heterogeneous capital assets into a single homogeneous substance called “capital” and then measure it not in terms of its physical units (which, given heterogeneity, is strictly impossible) but in terms of its value. This creates all kinds of problems, leading to some mighty arguments among economists ever since the latter part of the nineteenth century, when Carl Menger (the first Austrian economist) turned on his prize pupil Eugen von Bohm-Bawerk, who wrote three dense volumes discussing the theory of capital and interest, and pronounced Bohm-Bawerk’s theory of capital “the greatest blunder in the history of economics.” I remember wanting to ask F. A. Hayek — who, trying to restate Bohm-Bawerk’s theory in a coherent form, wrote a volume about 75 years ago called The Pure Theory of Capital, which probably has been read from cover to cover by fewer than 100 living souls, and probably understood by fewer than 20 of those — what he made of Menger’s remark, but, to my eternal sorrow, I forgot to ask him that question the last time that I saw him.

At any rate, treating capital as a homogeneous substance that can be measured in terms of its value rather than in terms of physical units involves serious, perhaps intractable, problems. For certain purposes, it may be worthwhile to ignore those problems and work with a simplified model (a single output which can be consumed or used as a factor of production), but the magnitude of the simplification is rarely acknowledged. In his discussion, Piketty seems, as best as I could determine using obvious search terms on Amazon, unaware of the conceptual problems involved in speaking about capital as a homogeneous substance measured in terms of its value.

In the real world, capital is anything but homogeneous. It consists of an array of very specialized, often unique, physical embodiments. Once installed, physical capital is usually sunk, and its value is highly uncertain. In contrast to the imaginary model of a homogenous substance that just seems to grow at fixed natural rate, the real physical capital that is deployed in the process of producing goods and services is complex and ever-changing in its physical and economic characteristics, and the economic valuations associated with its various individual components are in perpetual flux. While the total value of all capital may be growing at a fairly steady rate over time, the values of the individual assets that constitute the total stock of capital fluctuate wildly, and few owners of physical capital have any guarantee that the value of their assets will appreciate at a steady rate over time.

Now one would have thought that an eminent scholar like Professor Piketty would, in the course of a 700-page book about capital, have had occasion to comment on the enormous diversity and ever-changing composition of the stock of physical capital. These changes are driven by a competitive process in which entrepreneurs constantly introduce new products and new methods of producing products, a competitive process that enriches some owners of new capital, and, it turns out, impoverishes others — owners of old, suddenly obsolete, capital. It is a process that Joseph Schumpeter analyzed in his first great book, The Theory of Economic Development, and later memorably called “creative destruction.” But the term “creative destruction” does not appear at all in Piketty’s book, and Schumpeter’s name appears only once, in connection not with the notion of creative destruction, but with his, possibly ironic, prediction in a later book, Capitalism, Socialism and Democracy, that socialism would eventually replace capitalism.

Thus, Piketty’s version of capitalist accumulation seems much too abstract and too far removed from the way in which great fortunes are amassed to provide real insight into the sources of increasing inequality. Insofar as such fortunes are associated with accumulation of capital, they are likely to be the result of the creation of new forms of capital associated with new products, or new production processes. The creation of new capital simultaneously destroys old forms of capital. New fortunes are amassed, and old ones dissipated. The model of steady accumulation that is at the heart of Piketty’s account of inexorably increasing inequality misses this essential feature of capitalism.

I don’t say that Schumpeter’s account of creative destruction means that increasing inequality is a trend that should be welcomed. There may well be arguments that capitalist development and creative destruction are socially inefficient. I have explained in previous posts (e.g., here, here, and here) why I think that a lot of financial-market activity is likely to be socially wasteful. Similar arguments might be made about other kinds of activities in non-financial markets where the private gain exceeds the social gain. Winner-take-all markets, which seem to be characterized by this divergence between private and social benefits and costs, and which apparently account for a growing share of economic activity, are an obvious source of inequality. But what I find most disturbing about the growth in inequality over the past 30 years is that great wealth has gained increased social status. That seems to me to be a very unfortunate change in public attitudes. I have no problem with people getting rich, even filthy rich. But don’t expect me to admire them because they are rich.

Finally, you may be wondering what all of this has to do with Gerard Debreu. Well, nothing really, but I couldn’t help noticing that Piketty refers in an endnote (p. 654) to “the work of Adam Smith, Friedrich Hayek, and Kenneth Arrow and Claude Debreu,” apparently forgetting that the name of his famous countryman, winner of the Nobel Memorial Prize for Economics in 1983, is not Claude, but Gerard, Debreu. Perhaps Piketty confused Debreu with another eminent Frenchman, Claude Debussy, but I hope that in the next printing of his book, Piketty will correct this unfortunate error.

UPDATE (5/29 at 9:46 EDST): Thanks to Kevin Donoghue for checking with Arthur Goldhammer, who translated Piketty’s book from the original French. Goldhammer took responsibility for getting Debreu’s first name wrong in the English edition. In the French edition, only Debreu’s last name was mentioned.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
