The Monumental Dishonesty and Appalling Bad Faith of Chief Justice Roberts’s Decision

Noah Feldman brilliantly exposes the moral rot underlying the horrific Supreme Court decision handed down today approving the Muslim ban, truly, as Feldman describes it, a decision that will live in infamy in the company of Dred Scott and Korematsu. Here are the key passages from Feldman's masterful unmasking of the faulty reasoning of the Roberts opinion.

When Chief Justice Roberts comes to the topic of bias, he recounts Trump’s anti-Muslim statements and the history of the travel ban (this is the administration’s third version). Then he balks. “The issue before us is not whether to denounce the statements,” Roberts writes. Rather, Roberts insists, the court’s focus must be on “the significance of those statements in reviewing a presidential directive, neutral on its face, addressing the matter within the core of executive responsibility.”

That is lawyer-speak for saying that, despite its obviousness, the court would ignore Trump’s anti-Muslim bias. Roberts is trying to argue that, when a president is acting within his executive authority, the court should defer to what the president says his intention is, no matter the underlying reality.

That’s more or less what the Supreme Court did in the Korematsu case. There, Justice Hugo Black, a Franklin D. Roosevelt loyalist, denied that the orders requiring the internment of Japanese-Americans were based on racial prejudice. The dissenters, especially Justice Frank Murphy, pointed out that this was preposterous.

Justice Sonia Sotomayor, the court’s most liberal member, played the truth-telling role today. Her dissent, joined by Justice Ruth Bader Ginsburg, states bluntly that a reasonable observer looking at the record would conclude that the ban was “motivated by anti-Muslim animus.”

She properly invokes the Korematsu case — in which, she points out, the government also claimed a national security rationale when it was really relying on stereotypes. And she concludes that “our Constitution demands, and our country deserves, a Judiciary willing to hold the coordinate branches to account when they defy our most sacred legal commitments.”

Roberts tried to dodge the Korematsu comparison by focusing on the narrow text of the order, which, according to Roberts, on its own terms – absent the statements made by the author of the ban himself — is not facially discriminatory. Feldman skewers that attempt.

Roberts certainly knows the consequences of this decision. He tries to deflect the Korematsu comparison by saying that the order as written could have been enacted by any other president — a point that is irrelevant to the reality of the ban. Roberts also takes the opportunity to announce that Korematsu “was gravely wrong the day it was decided [and] has been overruled in the court of history.”

In another context, we might well be celebrating the fact that the Supreme Court had finally and expressly repudiated Korematsu, which it had never fully done before. Instead, Roberts’s declaration reads like a desperate attempt to change the subject. The truth is that this decision and Korematsu are a pair: Prominent instances where the Supreme Court abdicated its claim to moral leadership.

Following up Feldman, I just want to make it absolutely clear how closely, despite Roberts’s bad faith protestations to the contrary, the reasoning of his opinion follows the reasoning of the Korematsu court (opinion by Justice Black).

From the opinion of Chief Justice Roberts, attempting to counter the charge by Justice Sotomayor in her dissent that the majority was repeating the error of Korematsu.

Finally, the dissent invokes Korematsu v. United States, 323 U. S. 214 (1944). Whatever rhetorical advantage the dissent may see in doing so, Korematsu has nothing to do with this case. The forcible relocation of U. S. citizens to concentration camps, solely and explicitly on the basis of race, is objectively unlawful and outside the scope of Presidential authority. But it is wholly inapt to liken that morally repugnant order to a facially neutral policy denying certain foreign nationals the privilege of admission. See post, at 26–28. The entry suspension is an act that is well within executive authority and could have been taken by any other President—the only question is evaluating the actions of this particular President in promulgating an otherwise valid Proclamation.

This statement by the Chief Justice is monumentally false and misleading and utterly betrays either consciousness of wrongdoing or a culpable ignorance of the case he is presuming to distinguish from the one that he is deciding. Here is the concluding paragraph of Justice Black’s opinion in Korematsu.

It is said that we are dealing here with the case of imprisonment of a citizen in a concentration camp solely because of his ancestry, without evidence or inquiry concerning his loyalty and good disposition towards the United States. Our task would be simple, our duty clear, were this a case involving the imprisonment of a loyal citizen in a concentration camp because of racial prejudice.

Justice Black is explicitly denying that the Japanese American citizens being imprisoned were imprisoned because of racial prejudice.

Regardless of the true nature of the assembly and relocation centers — and we deem it unjustifiable to call them concentration camps, with all the ugly connotations that term implies — we are dealing specifically with nothing but an exclusion order.

And Justice Black denies that the Japanese Americans were sent to concentration camps.

To cast this case into outlines of racial prejudice, without reference to the real military dangers which were presented, merely confuses the issue.

Contrary to the assertion of Chief Justice Roberts, the Korematsu court did not acknowledge that U.S. citizens were relocated to concentration camps "solely and explicitly on the basis of race." Justice Black explicitly rejected that contention. So Roberts's attempt to distinguish his opinion from Justice Black's majority opinion fails. Indeed, Justice Black based his decision on the statutory authority given to the President by Congress, on the President's inherent powers as Commander-in-Chief, and on the military's assessment of the danger of a Japanese invasion of the West Coast.

Korematsu was not excluded from the Military Area because of hostility to him or his race. He was excluded because we are at war with the Japanese Empire, because the properly constituted military authorities feared an invasion of our West Coast and felt constrained to take proper security measures, because they decided that the military urgency of the situation demanded that all citizens of Japanese ancestry be segregated from the West Coast temporarily, and, finally, because Congress, reposing its confidence in this time of war in our military leaders — as inevitably it must — determined that they should have the power to do just this. There was evidence of disloyalty on the part of some, the military authorities considered that the need for action was great, and time was short. We cannot — by availing ourselves of the calm perspective of hindsight — now say that, at that time, these actions were unjustified.

In almost every particular, Justice Black's decision employed the exact same reasoning that the Chief Justice now employs to uphold the travel ban. Justice Black argued that the relocation could have been motivated by reasons of national security, just as Chief Justice Roberts now argues that the travel ban was motivated by reasons of national security. Justice Black argued that the military must be trusted to make decisions about which citizens might be disloyal and could pose a national security threat in time of war just as Chief Justice Roberts now argues that the President must be allowed to make national security decisions about who may enter the United States from abroad. Neither Justice Black nor Chief Justice Roberts is prepared to say that singling out a group based on race or religion is unjustified.

The only distinction between the cases is that Korematsu concerned the rights of American citizens not to be imprisoned without due process, while the travel ban primarily affects the rights of non-resident aliens. That is clearly an important distinction, but the rights of American citizens and resident aliens are also implicated: their right to be free from religious discrimination is at issue, and it may not be lightly disregarded.

Chief Justice Roberts concludes by attempting to distract attention from the glaring similarities between his own decision and Justice Black’s in Korematsu.

The dissent’s reference to Korematsu, however, affords this Court the opportunity to make express what is already obvious: Korematsu was gravely wrong the day it was decided, has been overruled in the court of history, and—to be clear—“has no place in law under the Constitution.” (Jackson, J., dissenting).

But in doing so, Chief Justice Roberts only provides further evidence of his own consciousness of wrongdoing and his stunning display of bad faith.


Who’s Afraid of a Flattening Yield Curve?

Last week the Fed again raised its benchmark Federal Funds rate target, now at 2%, up from the 0.25% rate that had been maintained steadily from late 2008 until late 2015, when the Fed, after a few false starts, finally worked up the courage — or caved to the pressure of the banks and the financial community — to start raising rates. The Fed also signaled its intention to continue raising rates – presumably in 0.25% increments – at least twice more this calendar year.

Some commentators have worried that rising short-term interest rates are outpacing increases at the longer end, so that the normally positively-sloped yield curve is flattening. They point out that historically flat or inverted yield curves have often presaged an economic downturn or recession within a year.

What accounts for the normally positive slope of the yield curve? It's usually attributed to the increased risk associated with a lengthening of the duration of a financial instrument, even if default risk is zero. The longer the duration of a financial instrument, the more sensitive the (resale) value of the instrument is to changes in the rate of interest. Because risk falls as the duration of the instrument is shortened, risk-averse asset-holders are willing to accept a lower return on short-dated claims than on riskier long-dated claims.
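The duration point can be illustrated with a small numerical sketch. The 3% and 4% yields below are illustrative, not data: the same one-point rise in rates produces a much larger percentage price decline for a long-dated zero-coupon claim than for a short-dated one.

```python
# Sketch: price sensitivity of zero-coupon bonds to a rate change,
# as a function of maturity. Yields and maturities are illustrative.

def zero_coupon_price(face, annual_yield, years):
    """Present value of a zero-coupon bond under annual compounding."""
    return face / (1 + annual_yield) ** years

face = 100.0
for years in (1, 5, 10, 30):
    p_before = zero_coupon_price(face, 0.03, years)
    p_after = zero_coupon_price(face, 0.04, years)  # rates rise one point
    pct_change = 100 * (p_after - p_before) / p_before
    print(f"{years:2d}-year bond: price falls {abs(pct_change):.1f}%")
```

The one-year bond loses about 1% of its value, while the thirty-year bond loses about a quarter of it, which is why risk-averse holders demand extra yield at the long end.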

If the Fed continues on its current course, it's likely that the yield curve will flatten or become inverted – sloping downward instead of upward – a phenomenon that has frequently presaged recessions within about a year. So the question I want to think through in this post is whether there is anything inherently recessionary about a flat or inverted yield curve, or whether the correlation between recessions and inverted yield curves is merely coincidental.

The beginning of wisdom in this discussion is the advice of Scott Sumner: never reason from a price change. A change in the slope of the yield curve reflects a change in price relationships. Any given change in price relationships – e.g., an inversion of the yield curve – can reflect a variety of possible causes, and the ultimate effects of those various underlying causes need not be the same. So we can't take it for granted that all yield-curve inversions are created equal; just because yield-curve inversions have sometimes, or usually, or always, preceded recessions doesn't mean that recessions must necessarily follow once the yield curve becomes inverted.

Let’s try to sort out some of the possible causes of an inverted yield curve, and see whether those causes are likely to result in a recession if the yield curve remains flat or inverted for a substantial period of time. But it’s also important to realize that the shape of the yield curve reflects a myriad of possible causes in a complex economic system. The yield curve summarizes expectations about the future that are deeply intertwined in the intertemporal structure of an economic system. Interest rates aren’t simply prices determined in specific markets for debt instruments of various durations; interest rates reflect the opportunities to exchange current goods for future goods or to transform current output into future output. Interest rates are actually distillations of relationships between current prices and expected future prices that govern the prices and implied yields at which debt instruments are bought and sold. If the interest rates on debt instruments are out of line with the intricate web of intertemporal price relationships that exist in any complex economy, those discrepancies imply profitable opportunities for exchange and production that tend to eliminate those discrepancies. Interest rates are not set in a vacuum; they are a reflection of innumerable asset valuations and investment opportunities. So there are innumerable possible causes that could lead to the flattening or inversion of the yield curve.

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and in an increase in the demands of the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only (a) by the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) by the discretionary action of a monetary authority that can issue additional units of fiat currency.

Let’s consider first what would happen in case of an increased demand for liquidity by the public. Such an increased demand could have two possible causes. (There might be others, of course, but these two seem fairly commonplace.)

First, the price expectations on the basis of which one or more significant sectors of the economy have made investments have turned out to be overly optimistic (or, alternatively, investments were made on the basis of overly optimistic expectations of low input prices). Given the commitments made on the basis of optimistic expectations, it then turns out that realized sales or revenues fall short of what those firms required to service their debt obligations. Thus, to service their debt obligations, firms may seek short-term loans to cover the shortfall in earnings relative to expectations. Potential lenders, including the banking system, who may already be holding the debt of such firms, must then decide whether to continue extending credit to these firms in hopes that prices will rebound to what they had been expected to be (or that borrowers will be able to cut costs sufficiently to survive if prices don’t recover), or to cut their losses by ceasing to lend further.

The short-run demand for credit will tend to raise short-term rates relative to long-term rates, causing the yield curve to flatten. And the more serious the short-term need for liquidity, the flatter or more inverted the yield curve becomes. In such a period of financial stress, the potential for significant failures of firms that can’t service their financial obligations is an indication that an economic downturn or a recession is likely, so that the extent to which the yield curve flattens or becomes inverted is a measure of the likelihood that a downturn is in the offing.

Aside from sectoral problems affecting particular industries or groups of industries, the demand for liquidity might increase owing to a generalized increase in uncertainty that causes entrepreneurs to hold back from making investments (dampening animal spirits). This is often a response during and immediately following a recession, when the current state of economic activity and uncertainty about its future state discourage entrepreneurs from making investments whose profitability depends on the magnitude and scope of the future recovery. In that case, an increasing demand for liquidity causes firms to hoard their profits as cash rather than undertake new investments, because expected demand is not sufficient to justify commitments that would be remunerative only if future demand exceeds some threshold. Such a flattening of the yield curve can be mitigated if the monetary authority makes liquidity cheaply available by cutting short-term rates to very low levels or even to zero, as the Fed did when it adopted its quantitative-easing policies after the 2008-09 downturn, thereby supporting a recovery – a modest one to be sure, but still a stronger recovery than occurred in Europe after the European Central Bank prematurely raised short-term interest rates.

Such an episode occurred in 2002-03, after the 9-11 attack on the US. The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership in Afghanistan, the Bush Administration was planning for a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003 created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the Administration were clarified and executed and their consequences assessed.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate that demand, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession. The weakness of the recovery is reflected in the modest rate of increase in nominal spending, averaging about 3.9%, and not exceeding 5.1% in any of the seven quarters from 2001-IV, when the recession ended, until 2003-II, when the Saddam Hussein regime was toppled.

Quarter      % change in NGDP
2001-IV      2.34%
2002-I       5.07%
2002-II      3.76%
2002-III     3.80%
2002-IV      2.44%
2003-I       4.63%
2003-II      5.10%
2003-III     9.26%
2003-IV      6.76%
2004-I       5.94%
2004-II      6.60%
2004-III     6.26%
2004-IV      6.44%
2005-I       8.25%
2005-II      5.10%
2005-III     7.33%
2005-IV      5.44%
2006-I       8.23%
2006-II      4.50%
2006-III     3.19%
2006-IV      4.62%
2007-I       4.83%
2007-II      5.42%
2007-III     4.15%
2007-IV      3.21%

The apparent success of the American invasion in the second quarter of 2003 was matched by a quickening expansion from 2003-III through 2006-I, nominal GDP increasing at a 6.8% annual rate over those 11 quarters. As the economy recovered, and spending began increasing rapidly, the Fed gradually raised its Fed Funds target by 25 basis points about every six weeks starting at the end of June 2004, so that in early 2006, the Fed Funds target rate reached 4.25%, peaking at 5.25% in July 2006, where it remained till September 2007. By February 2006, the yield on 3-month Treasury bills reached the yield on 10-year Treasuries, so that the yield curve had become essentially flat, remaining so until October 2008, soon after the start of the financial crisis. Indeed, for most of 2006 and 2007, the Fed Funds target was above the yield on three-month Treasury bills, implying a slight inversion at the short-end of the yield curve, suggesting that the Fed was exacting a slight liquidity surcharge on overnight reserves and that there was a market expectation that the Fed Funds target would be reduced from its 5.25% peak.
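The averages cited in this and the preceding paragraph can be checked against the table above with a quick sketch. These are simple arithmetic means of the annualized quarterly rates; the figures in the text may reflect compounding or rounding, so small differences are expected.

```python
# Sketch: checking the cited average growth figures against the
# quarterly NGDP-growth table above (rates in percent, annualized).

ngdp_growth = {
    "2001-IV": 2.34, "2002-I": 5.07, "2002-II": 3.76, "2002-III": 3.80,
    "2002-IV": 2.44, "2003-I": 4.63, "2003-II": 5.10, "2003-III": 9.26,
    "2003-IV": 6.76, "2004-I": 5.94, "2004-II": 6.60, "2004-III": 6.26,
    "2004-IV": 6.44, "2005-I": 8.25, "2005-II": 5.10, "2005-III": 7.33,
    "2005-IV": 5.44, "2006-I": 8.23,
}

def average(quarters):
    """Arithmetic mean of the annualized growth rates for the given quarters."""
    return sum(ngdp_growth[q] for q in quarters) / len(quarters)

# The seven quarters of weak recovery, 2001-IV through 2003-II
weak_recovery = ["2001-IV", "2002-I", "2002-II", "2002-III",
                 "2002-IV", "2003-I", "2003-II"]
# The eleven quarters of quickening expansion, 2003-III through 2006-I
expansion = ["2003-III", "2003-IV", "2004-I", "2004-II", "2004-III",
             "2004-IV", "2005-I", "2005-II", "2005-III", "2005-IV", "2006-I"]

print(f"2001-IV to 2003-II: {average(weak_recovery):.1f}% average")
print(f"2003-III to 2006-I: {average(expansion):.1f}% average")
```

The first average comes out at about 3.9% and the second at about 6.9%, in line with the roughly 3.9% and 6.8% figures in the text.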

The Fed was probably tardy in waiting until June 2004 to begin increasing its Fed Funds target, nominal spending having increased in 2003-III at an annual rate above 9%, and in the next three quarters at an average annual rate of about 6.5%. In 2005, while the Fed was in auto-pilot mode, automatically raising its Fed Funds target 25 basis points every six weeks, nominal spending continued to increase at a roughly 6% annual rate, the increases becoming slightly more erratic, fluctuating between 5.1% and 8.3%. But by the second quarter of 2006, when the Fed Funds target rose to 5%, the rate of increase in spending slowed, averaging just over 4% in the last three quarters of 2006 and just under 5% in the first three quarters of 2007.

While the rate of increase in spending slowed to less than 5% in the second quarter of 2006, as the yield curve flattened and the Fed Funds target peaked at 5.25%, housing prices also peaked, and concerns about financial stability started to be voiced. The chart below shows the yields on 10-year constant-maturity Treasuries and the yield on 3-month Treasury bills, the two key market rates at opposite ends of the yield curve.

The yields on the two instruments became nearly equal in early 2006, and, with slight variations, remained so till the onset of the financial crisis in September 2008. In retrospect, at least, the continued increases in the Fed Funds rate target seem to have been extremely ill-advised, perhaps triggering the downturn that started at the end of 2007, and leading nine months later to the financial crisis of 2008.
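The flattening and inversion discussed here amount to a simple classification of the 10-year/3-month spread, which can be sketched as follows. The sample yields are illustrative, not actual data, and the 10-basis-point "flat" band is an arbitrary threshold chosen for the illustration.

```python
# Sketch: classifying the shape of the yield curve from its two
# endpoints, the 10-year and 3-month Treasury yields (in percent).

def curve_signal(y10, y3m, flat_band=0.10):
    """Classify the 10y-3m spread: inverted, flat, or normal.

    flat_band is the spread (percentage points) below which the
    curve is treated as essentially flat; 0.10 is an assumption.
    """
    spread = y10 - y3m
    if spread < 0:
        return "inverted"
    if spread <= flat_band:
        return "flat"
    return "normal"

# Hypothetical snapshots loosely patterned on the episode described above
print(curve_signal(y10=4.60, y3m=3.50))  # upward-sloping: "normal"
print(curve_signal(y10=4.65, y3m=4.60))  # early-2006 style: "flat"
print(curve_signal(y10=4.70, y3m=4.90))  # short end above long: "inverted"
```

Per the argument of this post, the signal by itself says nothing about causes: the same "flat" reading can arise from rising liquidity demand or from falling long-term rates, with different implications.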

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Responding to signs of economic weakness and falling long-term rates, the Fed did lower its Fed Funds target late in 2007, cutting it several more times in early 2008. In May 2008, the Fed reduced the target to 2%, but the yield curve remained flat, because the Fed, consistently underestimating the severity of the downturn, kept signaling its concern with inflation, thereby suggesting that an increase in the target might be in the offing. So, even as it reduced its Fed Funds target, the Fed kept the yield curve nearly flat until, and even after, the start of the financial crisis in September 2008, thereby maintaining an excessive liquidity premium while the demand for liquidity was intensifying as total spending contracted rapidly in the third quarter of 2008.

To summarize this discussion of the liquidity premium and the yield curve during the 2001-08 period, the Fed appropriately steepened the yield curve right after the 2001 recession and the 9/11 attacks, but was slow to normalize the slope of the yield curve after the US invasion of Iraq in the second quarter of 2003. When it did begin to normalize the yield curve in a series of automatic 25-basis-point increases in its Fed Funds target rate, the Fed was again slow to reassess the effects of the policy as the yield curve flattened in 2006. Thus by 2006, the Fed had effectively implemented a tight monetary policy in the face of rising demands for liquidity just as the bursting of the housing bubble in mid-2006 began to subject the financial system to steadily increasing stress. The implications of a flat or slightly inverted yield curve were ignored or dismissed by the Fed for at least two years until after the financial panic and crisis in September 2008.

At the beginning of the 2001-08 period, the Fed seemed to be aware that an unusual demand for liquidity justified a policy response to increase the supply of liquidity by reducing the Fed Funds target and steepening the yield curve. But, at the end of the period, the Fed was unwilling to respond to increasing demands for liquidity and instead allowed a flat yield curve to remain in place even when the increasing demand for liquidity was causing a slowdown in aggregate spending growth. One possible reason for the asymmetric response of the Fed to increasing liquidity demands in 2002 and 2006 is that the Fed was sensitive to criticism that, by holding short-term rates too low for too long, it had promoted and prolonged the housing bubble. Even if the criticism contained some element of truth, the Fed’s refusal to respond to increasing demands for liquidity in 2006 was tragically misguided.

The current Fed’s tentative plan to keep increasing the Fed Funds target seems less unreflective than the nearly mindless schedule followed by the Fed from mid-2004 to mid-2006. However, the Fed is playing a weaker hand now than it did in 2004. Nominal GDP has been increasing at a very lackluster annual rate of about 4-4.5% for the past two years. Certainly, further increases in the Fed Funds target would not be warranted if the rate of growth in nominal GDP is any less than 4%, or if the yield curve should flatten for some other reason, like a decline in interest rates at the longer end of the yield curve. Caution, possible inversion ahead.

Keynes and the Fisher Equation

The History of Economics Society is holding its annual meeting in Chicago from Friday, June 15 to Sunday, June 17. Bringing together material from a number of posts over the past five years or so about Keynes and the Fisher equation and the Fisher effect, I will be presenting a new paper called “Keynes and the Fisher Equation.” Here is the abstract of my paper.

One of the most puzzling passages in the General Theory is the attack (GT p. 142) on Fisher’s distinction between the money rate of interest and the real rate of interest “where the latter is equal to the former after correction for changes in the value of money.” Keynes’s attack on the real/nominal distinction is puzzling on its own terms, inasmuch as the distinction is a straightforward and widely accepted one that was hardly unique to Fisher, having been advanced as a fairly obvious proposition by many earlier economists, including Marshall. What makes Keynes’s criticism even more problematic is that Keynes’s own celebrated theorem in the Tract on Monetary Reform about covered interest arbitrage is merely an application of Fisher’s reasoning in Appreciation and Interest. Moreover, Keynes endorsed Fisher’s distinction in the Treatise on Money. But even more puzzling is that Keynes’s analysis in Chapter 17 demonstrates that in equilibrium the returns on alternative assets must reflect the differences in their expected rates of appreciation. Thus Keynes himself, in the General Theory, endorsed the essential reasoning underlying the distinction between the real and money rates of interest. The solution to the puzzle lies in distinguishing between the relationship between the real and nominal rates of interest at a moment in time and the effects of a change in expected rates of appreciation that displaces an existing equilibrium and leads to a new one. Keynes’s criticism of the Fisher effect must be understood in the context of his criticism of the idea of a unique natural rate of interest, the conventional interpretation of the Fisher effect implicitly identifying the Fisherian real rate with a unique natural rate.

And here is the concluding section of my paper.

Keynes’s criticisms of the Fisher effect, especially of the facile assumption that changes in inflation expectations are reflected mostly, if not entirely, in nominal interest rates – an assumption for which neither Fisher himself nor subsequent researchers have found much empirical support – were grounded in well-founded skepticism of the claim that changes in expected inflation leave the real interest rate unaffected. A Fisherian analysis of an increase in expected deflation at the zero lower bound shows that the burden of the adjustment must be borne by an increase in the real interest rate. Of course, such a scenario might be dismissed as a special case, which it certainly is, but I very much doubt that it rests on the only set of assumptions under which a change in expected inflation or deflation affects the real as well as the nominal interest rate.
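The zero-lower-bound scenario can be made explicit with the standard linearized Fisher equation, writing $i$ for the nominal rate, $r$ for the real rate, and $\pi^{e}$ for expected inflation:

```latex
i \;=\; r + \pi^{e}
\qquad\Longrightarrow\qquad
r \;=\; i - \pi^{e}.
```

With $i$ pinned at zero, $r = -\pi^{e}$, so an increase in expected deflation (a more negative $\pi^{e}$) must be matched one-for-one by a rise in the real rate, which is the burden of adjustment referred to above.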

Although Keynes’s criticism of the Fisher equation (or, more precisely, of the conventional simplistic interpretation of it) was not well argued, his intuition was sound. And in his contribution to the Fisher festschrift, Keynes (1937b) correctly identified the two key assumptions leading to the conclusion that changes in inflation expectations are reflected entirely in nominal interest rates: (1) a unique real equilibrium and (2) the neutrality (actually superneutrality) of money. Keynes’s intuition was confirmed by Hirshleifer (1970, 135-38), who derived the Fisher equation as a theorem by performing a comparative-statics exercise in a two-period general-equilibrium model with money balances, in which the money stock in the second period was increased by an exogenous shift factor k. The price level in the second period increases by a factor of k, and the nominal interest rate increases as well by a factor of k, with no change in the real interest rate.
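The comparative-statics result can be sketched with the exact (non-linearized) form of the Fisher relation, where $P_1$ is the first-period price level and $P_2^{e}$ the expected second-period price level:

```latex
1 + i \;=\; (1 + r)\,\frac{P_{2}^{\,e}}{P_{1}}.
```

If the second-period money stock, and hence $P_2^{e}$, is scaled by the factor $k$, the left-hand side must scale by $k$ as well, leaving the real rate $r$ unchanged; strictly speaking, it is the interest factor $1+i$, rather than the rate $i$ itself, that scales exactly by $k$.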

But typical Keynesian and New Keynesian macromodels based on the assumption of no capital or a single capital good drastically oversimplify the analysis, because those highly aggregated models assume that the determination of the real interest rate takes place in a single market. The market-clearing assumption invites the conclusion that the rate of interest, like any other price, is determined by the equality of supply and demand – both of which are functions of that price – in that market.

The equilibrium rate of interest, as C. J. Bliss (1975) explains in the context of an intertemporal general-equilibrium analysis, is not a price; it is an intertemporal rate of exchange characterizing the relationships between all equilibrium prices and expected equilibrium prices in the current and future time periods. To say that the interest rate is determined in any single market, e.g., a market for loanable funds or a market for cash balances, is, at best, a gross oversimplification, verging on fallaciousness. The interest rate or term structure of interest rates is a reflection of the entire intertemporal structure of prices, so a market for something like loanable funds cannot set the rate of interest at a level inconsistent with that intertemporal structure of prices without disrupting and misaligning that structure of intertemporal price relationships. The interest rates quoted in the market for loanable funds are determined and constrained by those intertemporal price relationships, not the other way around.

In the real world, in which current prices, future prices and expected future prices almost certainly never stand in an equilibrium relationship with each other, there is always some scope for second-order variations in the interest rates transacted in markets for loanable funds, but those variations are still tightly constrained by the existing intertemporal relationships between current, future and expected future prices. Because the conditions under which Hirshleifer derived his theorem demonstrating that changes in expected inflation are fully reflected in nominal interest rates are not satisfied, there is no basis for assuming that a change in expected inflation affects only nominal interest rates with no effect on real rates.

There is probably a huge range of possible scenarios of how changes in expected inflation could affect nominal and real interest rates. While one should not disregard the Fisher equation as one possibility, it seems completely unwarranted to assume that it is the most plausible scenario in any actual situation. If we read Keynes at the end of his marvelous Chapter 17 of the General Theory, in which he remarks that he has abandoned the belief he had once held in the existence of a unique natural rate of interest, and has come to believe that there are really different natural rates corresponding to different levels of unemployment, we see that he was indeed, notwithstanding his detour toward a pure liquidity-preference theory of interest, groping his way toward a proper understanding of the Fisher equation.

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest – namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s “natural rate of interest,” which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the “natural” rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus, it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. . . .

If there is any such rate of interest, which is unique and significant, it must be the rate which we might term the neutral rate of interest, namely, the natural rate in the above sense which is consistent with full employment, given the other parameters of the system; though this rate might be better described, perhaps, as the optimum rate. (pp. 242-43)

Because Keynes believed that an increase in the expected future price level implies an increase in the marginal efficiency of capital, it follows that an increase in expected inflation under conditions of less than full employment would increase investment spending and employment, thereby raising the real rate of interest as well as the nominal rate. Cottrell (1994) has attempted to make an argument along such lines within a traditional IS-LM framework. I believe that, in a Fisherian framework, my argument points in a similar direction.

 

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann and had actually been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely – and in my view mistakenly – regarded as an essential pillar of the nineteenth-century liberal order – the international gold standard – collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler — less relentlessly laissez-faire – version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two young ambitious Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, who was then a first-term Senator from New Jersey, having graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980 and had subsequently enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making, improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy; neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism that favored highly progressive income taxation to redistribute income from rich to poor and price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported with little evidence of embarrassment by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and by the decade-long rotting residue of Nixon’s controls — controls on crude oil and gasoline prices — finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform, and establishing Bradley and Gephardt as major figures in the Democratic Party, neoliberalism 2.0 was never embraced by the Democratic grassroots. Gephardt himself abandoned the neo-liberal banner in 1988 when he ran for President as a protectionist, pro-Labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse with which leftists impugn the motives of their ideological and political opponents, in exactly the same way that right-wingers use “liberal” — one of many such terms of abuse, of course — to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing to tolerate the outcomes of market forces than they are.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that they were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward – pardon the cliché – a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end not a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily not because you require him to do so. If you want that assistance you must secure it not by command but by persuasion. Persuasion can be secured in two ways, either by argument — persuading the other person to share your objective — or if you can’t, or won’t, persuade the person to share your objective, you can still secure his or her agreement to help you by offering some form of compensation to induce the person to provide you the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like semantic rules of grammar. Like rules of grammar that don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that the rules allow sufficient space to agents to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life was as much a legal philosopher as an economist, but it is a view that John Rawls, Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by the laws, because the ruler can change the law to permit any action that the ruler wants to take. How then is the rule of law consistent with a ruler that is empowered to make the law to which he is supposedly subject? That is the dilemma that every liberal state must cope with. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling when reducing the top rates would likely entail little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions of lower tax brackets which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income tax rates as inherently discriminatory by imposing a disproportionate burden on a minority — the wealthy — of the population. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could grant to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates that were in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for reasons unrelated to income, but even a highly progressive income tax can’t be discriminatory on its face, as Hayek alleged, when the progressivity is embedded in a schedule of rates applicable to everyone who reaches specified income thresholds.
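The point about facial uniformity can be illustrated with a toy bracket calculation. The marginal rates below borrow the three “Fair Tax” rates quoted earlier in this post (14, 26 and 30 percent); the thresholds are hypothetical, chosen only for illustration. One schedule applies to every taxpayer, with higher rates binding only on the slice of income above each threshold.

```python
# Toy bracket schedule: the marginal rates are the three "Fair Tax"
# rates quoted earlier (14, 26 and 30 percent); the dollar thresholds
# are hypothetical, chosen only for illustration. One schedule applies
# to every taxpayer; higher rates bind only on income above each
# threshold, so the schedule is facially uniform even though the
# average rate rises with income.

BRACKETS = [(0, 0.14), (20_000, 0.26), (50_000, 0.30)]  # (threshold, rate)

def tax(income):
    owed = 0.0
    thresholds = [lo for lo, _ in BRACKETS] + [float("inf")]
    for (lo, rate), hi in zip(BRACKETS, thresholds[1:]):
        if income > lo:
            owed += (min(income, hi) - lo) * rate  # tax only this bracket's slice
    return owed

# Identical treatment of identical slices of income, whoever earns them.
assert abs(tax(30_000) - (20_000 * 0.14 + 10_000 * 0.26)) < 1e-9
assert tax(60_000) / 60_000 > tax(30_000) / 30_000   # average rate rises
```

Every taxpayer’s first dollar above a given threshold is taxed at the same rate, which is why a progressive schedule, whatever else may be said against it, is not discriminatory on its face.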

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are unnecessary to call forth the efforts required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort and output. We also know as a result of a classic 1971 paper by Jack Hirshleifer that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. With the taxation of high incomes so drastically reduced, the number of people earning very high incomes has grown very rapidly since 1986. Does that increase in the number of people earning very high incomes reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, these occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes has risen, as has the social acceptability of conspicuous — even brazen — public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.

Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not been submitted yet to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the Colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a  meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents can make inferences about the beliefs of other agents when observed prices differ from the prices that the agents had expected. Agents attribute the differences between observed and expected prices to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.
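The inference Radner describes can be caricatured in a toy calculation that is entirely my own construction, not Radner’s model: an uninformed agent who knows how heavily informed trading weighs on the price observes a price different from the one she expected, attributes the gap to better-informed agents, and inverts the price rule to recover their information.

```python
# Toy illustration of Radner-style inference (the setup is entirely my
# own construction, not Radner's model). An uninformed agent expects a
# price based on her prior estimate of the fundamental; observing a
# price that differs, she attributes the gap to informed traders'
# superior information and inverts the price rule to recover it.

prior_fundamental = 100.0   # the uninformed agent's prior estimate
true_fundamental = 110.0    # known only to informed traders
informed_weight = 0.6       # assumed share of informed trading in the price

# The observed price mixes the prior with the informed traders' information.
observed_price = ((1 - informed_weight) * prior_fundamental
                  + informed_weight * true_fundamental)

# Knowing the mixing rule, the uninformed agent backs out the signal.
inferred = prior_fundamental + (observed_price - prior_fundamental) / informed_weight

assert abs(inferred - true_fundamental) < 1e-9
```

In this frictionless toy the inference is exact and immediate; Radner’s deeper result, discussed below, is that the real process of revising expectations in light of observed prices need not converge to any equilibrium at all.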

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information that causes prices to differ from the prices that had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.

Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth (1961), he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectation assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
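A minimal simulation, with linear demand and supply curves and parameter values of my own choosing, shows the contrast: static (cobweb) expectations generate the alternating boom-bust price path, while a Muth-style rational expectation, equal to the model’s own equilibrium price, eliminates the cycle at once.

```python
# Minimal cobweb sketch (linear curves; parameter values are mine).
# Demand: p_t = a - b * q_t.  Supply: q_t = c + d * E[p_t], with output
# committed one period in advance on the basis of the expected price.

a, b = 10.0, 1.0    # demand intercept and slope
c, d = 1.0, 0.8     # supply intercept and slope
p_eq = (a - b * c) / (1 + b * d)   # market-clearing equilibrium price

def simulate(periods=12, p0=7.0, expectation="static"):
    prices = [p0]
    for _ in range(periods):
        expected = prices[-1] if expectation == "static" else p_eq
        q = c + d * expected        # producers commit based on the expected price
        prices.append(a - b * q)    # market clears at the realized price
    return prices

static_path = simulate(expectation="static")
rational_path = simulate(expectation="rational")

# Static expectations overshoot and undershoot p_eq in alternation;
# rational expectations hit the equilibrium price immediately.
assert (static_path[1] - p_eq) * (static_path[2] - p_eq) < 0
assert all(abs(p - p_eq) < 1e-12 for p in rational_path[1:])
```

The alternation under static expectations is exactly the boom-bust sequence described above: a high price calls forth large output next period, which drives the price down, and so on. The rational-expectations path, by contrast, involves no systematically mistaken forecasts, which was Muth’s point.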

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – one that Hayek understood better than perhaps anyone else – is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key – but far from the only – error of the rational-expectations methodology that rules modern macroeconomics is the presumption that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering, processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that a process of revising expectations in light of the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that lead to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to utter a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow resulting in equilibrium is nothing but a category error, akin to thinking that a triangle is caused by its angles adding up to 180 degrees. The fact that the angles of a triangle sum to 180 degrees doesn't cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all but one future relative price is known in advance. That, in turn, assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected — with apparently sincere methodological fervor — by modern macroeconomics is merely self-delusional or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting distinction, but a distinction lacking any practical significance.

Four score years since Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

 

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium,” which I presented last June at the History of Economics Society meeting in Toronto, and which I posted piecemeal in a series of posts last May and June. This post corresponds to the first part of the post from last May 21.

Equilibrium is an essential concept in economics. Though equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its meaning cannot be straightforwardly transferred from physics into economics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept, and its essential, though limited, role in economic theory, could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in the sense of all movements being repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance – e.g., in population, technology, or tastes – displaces the system, then there would seem to be no reason for the prices paid and quantities produced in that system to change. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – savings and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state that economic processes would move toward if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end-state towards which natural economic processes were driving an economic system.

This not very satisfactory concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could only be coherently articulated on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed and its significance overlooked for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek who first articulated the concept, and exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of the expectational consistency of plans with which Hayek, Hicks, Radner and the others who developed the idea of intertemporal equilibrium were so profoundly concerned.

The Phillips Curve and the Lucas Critique

With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973!), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be.

The prevailing view seems to be that the thought process of the Federal Open Market Committee (FOMC) in raising interest rates — even before there is any real evidence of an increase in an inflation rate that is still below the Fed’s 2% target — is that a preemptive strike is required to prevent inflation from accelerating and rising above what has become an inflation ceiling — not an inflation target — of 2%.

Why does the Fed believe that inflation is going to rise? That's what the econoblogosphere has, of late, been trying to figure out. And the consensus seems to be that the FOMC has concluded that the risk that inflation will break the 2% ceiling it has implicitly adopted has become unacceptably high. That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. And so the next question is: why is the FOMC fretting about the Phillips Curve?

In a blog post earlier this week, David Andolfatto of the St. Louis Federal Reserve Bank tried to spell out in some detail the kind of reasoning that lay behind the FOMC decision to actively tighten the stance of monetary policy to avoid any increase in inflation. At the same time, Andolfatto expressed his own view that the rate of inflation is determined not by the rate of unemployment, but by the stance of monetary policy.

Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism.

Here are three questions about inflation, unemployment, and Fed policy. Some people may imagine that they’re the same question, but they definitely aren’t:

  1. Does the Fed know how low the unemployment rate can go?
  2. Should the Fed be tightening now, even though inflation is still low?
  3. Is there any relationship between unemployment and inflation?

It seems obvious to me that the answer to (1) is no. We’re currently well above historical estimates of full employment, and inflation remains subdued. Could unemployment fall to 3.5% without accelerating inflation? Honestly, we don’t know.

Agreed.

I would also argue that the Fed is making a mistake by tightening now, for several reasons. One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Another is that the costs of getting it wrong are asymmetric: waiting too long to tighten might be awkward, but tightening too soon increases the risks of falling back into a liquidity trap. Finally, there are very good reasons to believe that the Fed’s 2 percent inflation target is too low; certainly the belief that it was high enough to make the zero lower bound irrelevant has been massively falsified by experience.

Agreed, but the better approach would be to target the price level, or, even better, nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway for inflation to overshoot the target without undermining the credibility of the commitment to price stability.

But should we drop the whole notion that unemployment has anything to do with inflation? Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand.

But one can fully accept that inflation is driven by an excess supply of money without denying that there is a link between inflation and unemployment. In the normal course of events an excess supply of money may lead to increased spending as people attempt to exchange their excess cash balances for real goods and services. The increased spending can induce additional output and additional employment along with rising prices. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, reducing output and employment. So the inflation-unemployment relationship results from the effects induced by a particular causal circumstance. Nor does that mean that an imbalance in the supply of money is the only cause of inflation or of price-level changes.

Inflation can also result from nothing more than the anticipation of inflation. Expected inflation can also affect output and employment, so inflation and unemployment are related not only by both being affected by an excess supply of (or demand for) money, but by both being affected by expected inflation.

Even if you think that inflation is fundamentally a monetary phenomenon (which you shouldn’t, as I’ll explain in a minute), wage- and price-setters don’t care about money demand; they care about their own ability or lack thereof to charge more, which has to – has to – involve the amount of slack in the economy. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense.

There’s no reason for anyone to care about overall money demand in this scenario. Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. (I note parenthetically that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato, because it can be withdrawn from circulation in response to market incentives.) Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing.

And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Take a longer and broader view, and the evidence is obvious.

Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:

That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. Then came the sudden stop after the Greek crisis, which sent unemployment soaring.

Meanwhile, the pre-crisis era was marked by relatively high inflation, well above the euro-area average; the post-crisis era by near-zero inflation, below the rest of the euro area, allowing Spain to achieve (at immense cost) an “internal devaluation” that has driven an export-led recovery.

So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? Really, really?

No one – at least no one who believes in a monetary theory of inflation – should claim that swings in inflation and unemployment are unrelated, but acknowledging the relationship between inflation and unemployment does not entail accepting the proposition that unemployment is a causal determinant of inflation.

But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. But denying that it makes sense to talk about unemployment driving inflation is foolish.

No, it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental one. The level of employment depends on many things, and some of those things also affect inflation. That doesn’t mean that employment causally affects inflation.

When I read Krugman’s post and the Andolfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. The negative relationship between unemployment and inflation that is found by empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped.

It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). But, the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship.
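The distinction between a reduced form and a structural relationship can be illustrated with a toy simulation (purely illustrative, with made-up parameters): let monetary shocks drive both inflation and unemployment, with no causal channel running from unemployment to inflation, and a strong negative "Phillips curve" correlation shows up in the data anyway.

```python
import random

# Toy economy (made-up parameters): monetary shocks m drive inflation
# directly and push unemployment below its natural rate. Unemployment
# has NO causal effect on inflation in this data-generating process.
random.seed(0)
u_nat, k, n = 5.0, 0.8, 5000
m = [random.gauss(0.0, 1.0) for _ in range(n)]
inflation = [s + random.gauss(0.0, 0.3) for s in m]
unemployment = [u_nat - k * s + random.gauss(0.0, 0.3) for s in m]

def corr(x, y):
    """Pearson correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# A strong negative reduced-form "Phillips curve" appears anyway,
# because a common cause moves both series in opposite directions.
phillips_corr = corr(unemployment, inflation)
assert phillips_corr < -0.8
```

A policymaker who read this reduced-form correlation as structural, and tried to raise inflation by raising unemployment (or vice versa), would be acting on a relationship that the underlying causal model does not contain, which is the Lucas Critique's warning in miniature.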

What Hath Merkel Wrought?

In my fifth month of blogging in November 2011, I wrote a post which I called “The Economic Consequences of Mrs. Merkel.” The title, as I explained, was inspired by J. M. Keynes’s famous essay “The Economic Consequences of Mr. Churchill,” which eloquently warned that Britain was courting disaster by restoring the convertibility of sterling into gold at the prewar parity of $4.86 to the pound, the dollar then being the only major currency convertible into gold. The title of Keynes’s essay, in turn, had been inspired by Keynes’s celebrated book The Economic Consequences of the Peace about the disastrous Treaty of Versailles, which accurately foretold the futility of imposing punishing war reparations on Germany.

In his essay, Keynes warned that by restoring the prewar parity, Churchill would force Britain into an untenable deflation at a time when more than 10% of the British labor force was unemployed (i.e., looking for, but unable to find, a job at prevailing wages). Keynes argued that the deflation necessitated by restoration of the prewar parity would impose an intolerable burden of continued and increased unemployment on British workers.

But Churchill's decision turned out to be less disastrous than Keynes had feared. The resulting deflation was quite mild, nominal wages were roughly stable, and real output and employment grew steadily, with unemployment gradually falling below 10% by 1928. The deflationary shock that Keynes had warned against turned out to be less severe than he had feared because the U.S. Federal Reserve, under the leadership of Benjamin Strong, President of the New York Fed and the de facto monetary authority of the US and the world, followed a policy that allowed a slight increase in the world price level in terms of dollars, thereby moderating the deflationary effect on Britain of restoring the prewar sterling/dollar exchange rate.

Thanks to Strong’s enlightened policy, the world economy continued to expand through 1928. I won’t discuss the sequence of events in 1928 and 1929 that led to the 1929 stock market crash, but those events had little, if anything, to do with Churchill’s 1925 decision. I’ve discussed the causes of the 1929 crash and the Great Depression in many other places including my 2011 post about Mrs. Merkel, so I will skip the 1929 story in this post.

The point that I want to make is that even though Keynes’s criticism of Churchill’s decision to restore the prewar dollar/sterling parity was well-taken, the dire consequences that Keynes foretold, although they did arrive a few years thereafter, were not actually caused by Churchill’s decision, but by decisions made in Paris and New York, over which Britain may have had some influence, but little, if any, control.

What I want to discuss in this post is how my warnings about potential disaster almost six and a half years ago have turned out. Here’s how I described the situation in November 2011:

Fast forward some four score years to today’s tragic re-enactment of the deflationary dynamics that nearly destroyed European civilization in the 1930s. But what a role reversal! In 1930 it was Germany that was desperately seeking to avoid defaulting on its obligations by engaging in round after round of futile austerity measures and deflationary wage cuts, causing the collapse of one major European financial institution after another in the annus horribilis of 1931, finally (at least a year too late) forcing Britain off the gold standard in September 1931. Eighty years ago it was France, accumulating huge quantities of gold, in Midas-like self-satisfaction despite the economic wreckage it was inflicting on the rest of Europe and ultimately itself, whose monetary policy was decisive for the international value of gold and the downward course of the international economy. Now, it is Germany, the economic powerhouse of Europe dominating the European Central Bank, which effectively controls the value of the euro. And just as deflation under the gold standard made it impossible for Germany (and its state and local governments) not to default on its obligations in 1931, the policy of the European Central Bank, self-righteously dictated by Germany, has made default by Greece and now Italy and at least three other members of the Eurozone inevitable. . . .

If the European central bank does not soon – and I mean really soon – grasp that there is no exit from the debt crisis without a reversal of monetary policy sufficient to enable nominal incomes in all the economies in the Eurozone to grow more rapidly than does their indebtedness, the downward spiral will overtake even the stronger European economies. (I pointed out three months ago that the European crisis is a NGDP crisis not a debt crisis.) As the weakest countries choose to ditch the euro and revert back to their own national currencies, the euro is likely to start to appreciate as it comes to resemble ever more closely the old deutschmark. At some point the deflationary pressures of a rising euro will cause even the Germans, like the French in 1935, to relent. But one shudders at the economic damage that will be inflicted until the Germans come to their senses. Only then will we be able to assess the full economic consequences of Mrs. Merkel.

Greece did default, but the European Community succeeded in imposing draconian austerity measures on Greece, while Italy, Spain, France, and Portugal, which had all been in some danger, managed to avoid default. That they did so is due, first, to the enormous cost that would have been borne by a country in the Eurozone to extricate itself from the Eurozone and reinstitute its own national currency and, second, to the actions taken by Mario Draghi, who succeeded Jean-Claude Trichet as President of the European Central Bank in November 2011. If monetary secession from the eurozone were less fraught, surely Greece, and perhaps other countries, would have chosen that course rather than absorb the continuing pain of remaining in the eurozone.

But were it not for a decisive change in policy by Draghi, Greece and perhaps other countries would have been compelled to follow that uncharted and potentially catastrophic path. After assuming leadership of the ECB, Draghi immediately reversed the perverse interest-rate hikes imposed by his predecessor and, even more crucially, announced in July 2012 that the ECB “is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough.” Draghi’s reassurance that monetary easing would be sufficient to avoid default calmed markets, alleviating the pressure that had been driving up interest rates on the debt issued by those countries.

But although Draghi’s courageous actions to ease monetary policy in the face of German disapproval averted a complete collapse, Mrs. Merkel’s ferocious anti-inflation policy inflicted irreparable damage, not only on Greece, but, by deepening the European downturn and delaying and suppressing the recovery, on the rest of the European community, inflaming the anti-EU, populist nationalism that helped fuel the campaign for Brexit in the UK, has inspired similar anti-EU movements elsewhere in Europe, and almost prevented Mrs. Merkel from forming a government after the election a few months ago.

Mrs. Merkel is perhaps the most impressive political leader of our time, and her willingness to follow a humanitarian policy toward refugees fleeing the horrors of war and persecution showed an extraordinary degree of political courage and personal decency that ought to serve as a model for other politicians to emulate. But that admirable legacy will be forever tarnished by the damage she inflicted on her own country and the rest of the EU by her misguided battle against the phantom threat of inflation.

What Do Stock Prices Tell Us about the Economy?

Stock prices (as measured by the S&P 500) rose by over 20% in 2017, an impressive amount, and what is perhaps most impressive is that this rise in prices came after eight previous years of steady increases.

Here are the annual year-on-year and cumulative changes in the S&P500 since 2009.

2009             21.1%       21.1%*
2010             12.0%       33.1%*
2011              0.0%       33.1%*
2012             12.2%       45.3%*
2013             25.9%       71.2%*
2014             10.8%       82.0%*
2015             -0.7%       81.3%*
2016              9.1%       90.4%*      (4.5%)**      (85.9%)***
2017             17.7%      108.1%*      (22.3%)****
2018 (YTD)        2.0%      110.1%*      (24.3%)****

* cumulative increase since the end of 2008

** increase from end of 2015 to November 8, 2016

*** cumulative increase from end of 2008 to November 8, 2016

**** cumulative increase since November 8, 2016

So, from the end of 2008 until the start of 2017, approximately coinciding with Obama’s two terms as President, the S&P 500 rose in every year except 2011 and 2015, when the index was essentially unchanged, and rose by more than 10% in five of the eight years (twice by more than 20%), with stock prices nearly doubling during the Obama Presidency.
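The cumulative column of the table above can be reproduced in a few lines of Python. (Note that the table’s cumulative figures are simple sums of the annual percentage changes; compounding the same annual returns would give a larger figure, as the sketch also shows.)

```python
# Reproduce the cumulative column of the S&P 500 table above.
# The table's cumulative figures are simple sums of the annual
# percentage changes, not compounded returns.
annual = {2009: 21.1, 2010: 12.0, 2011: -0.0, 2012: 12.2, 2013: 25.9,
          2014: 10.8, 2015: -0.7, 2016: 9.1, 2017: 17.7, 2018: 2.0}

cumulative, running = {}, 0.0
for year, change in annual.items():
    running += change
    cumulative[year] = round(running, 1)

print(cumulative[2016])  # 90.4, matching the table
print(cumulative[2018])  # 110.1, matching the table

# Compounding the same annual returns gives a larger cumulative figure:
compounded = 1.0
for change in annual.values():
    compounded *= 1 + change / 100
print(round((compounded - 1) * 100, 1))  # 176.1 -- "nearly doubling" and then some
```

Either way of cumulating the annual changes supports the description of stock prices as nearly doubling over the period.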

But what does the doubling of stock prices under Obama really tell us about the well-being of the American economy, and, even more importantly, about the well-being of the American public during those years? Is there any correlation between the performance of the stock market and the well-being of actual people? Does the doubling of stock prices under Obama mean that most Americans were better off at the end of his Presidency than they were at the start of it?

My answer to these questions is a definite — though not very resounding — yes, because we know that the US economy at the end of 2008 was in the middle of the sharpest downturn since the Great Depression. Output was contracting, employment was falling, and the financial system was on the verge of collapse, with stock prices down almost 50% from where they had been at the end of August, and nearly 60% from the previous all-time high reached in 2007. In 2016, after seven years of slow but steady growth, employment and output had recovered and surpassed their previous peaks, though only by small amounts. But the recovery, although disappointingly slow, was real.

That improvement was reflected, albeit with a lag, in changes in median household and median personal income between 2008 and 2016.

Year          Annual change     Cumulative change
2009          -0.7%             -0.7%
2010          -2.6%             -3.3%
2011          -1.6%             -4.9%
2012          -0.1%             -5.0%
2013          3.5%              -1.5%
2014          -1.5%             -3.0%
2015          5.1%              2.0%
2016          3.1%              5.1%

But it’s also striking how weak the correlation was between rapidly rising stock prices and rising median incomes in the Obama years. Given a tepid real recovery from the Little Depression, what accounts for the associated roaring recovery in stock prices? Well, for one thing, much of the improvement in the stock market was simply recovering losses in stock valuations during the downturn. Stock prices having fallen further than incomes in the Little Depression, it’s not surprising that the recovery in stocks was steeper than the recovery in incomes. It took four years for the S&P 500 to reach its pre-Depression peak, so, normalized to their pre-Depression peaks, the correlation between stock prices and median incomes is not as weak as it seems when comparing year-on-year percentage changes.

But considering the improvement in stock prices under Obama in historical context also makes it seem less remarkable than it does when viewed without taking the previous downturn into account. Stock prices simply returned (more or less) to the path that one might have expected them to follow by extrapolating their past performance. Nevertheless, even if we take into account that, during the Little Depression, stock prices fell more sharply than real incomes, stocks have clearly outperformed the real economy during the recovery, real output and income having failed to return to the growth path they had been tracking before the 2008 downturn.

Why have stocks outperformed the real economy? The answer to that question is a straightforward application of the basic theory of asset valuation, according to which the value of real assets – machines, buildings, land — and financial assets — stocks and bonds — reflects the discounted expected future income streams associated with those assets. In particular, stock prices represent the discounted present value of the expected future cash flows (dividends or stock buy-backs) from firms to their shareholders. So, if the economy has “recovered” (more or less) from the 2008-09 downturn, the expected future cash flows from firms have presumably – and on average — surpassed the cash flows that had been expected before the downturn.
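The valuation logic just described can be sketched numerically. The cash flows and discount rates below are purely illustrative assumptions, not estimates of any actual firm:

```python
# Discounted present value of a stream of expected cash flows to
# shareholders. All figures are illustrative.

def present_value(cash_flows, r):
    """PV of cash_flows[t], received t+1 periods from now, at discount rate r."""
    return sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(cash_flows))

# A share expected to pay $5 a year for 30 years:
flows = [5.0] * 30
print(round(present_value(flows, 0.06), 2))  # 68.82 at a 6% discount rate
print(round(present_value(flows, 0.03), 2))  # 98.0 at 3%: same flows, much higher value
```

The same expected cash flows are worth over 40% more when discounted at 3% rather than 6%, which is the mechanism at work in the following paragraphs.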

But the weakness in the recovery suggests that the increase in expected cash flows can’t fully account for the increase in stock prices. Why did stock prices rise by more than the likely increase in expected cash flows? The basic theory of asset valuation tells us that the remainder of the increase in stock prices can be attributed to the decline of real interest rates since the 2008 downturn to historically low levels.
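To see how much work the discount rate does on its own, model a share as a simple perpetuity, P = CF/r (an illustrative simplification, not a claim about how the market actually prices stocks):

```python
# A perpetuity sketch of how a lower discount rate alone can raise
# stock prices: P = CF / r. Illustrative numbers only.

def perpetuity_price(cash_flow, r):
    return cash_flow / r

p_high_rate = perpetuity_price(5.0, 0.04)  # $5 forever at 4%: price 125.0
p_cf_growth = perpetuity_price(5.5, 0.04)  # 10% higher cash flow: price up only 10%, to 137.5
p_low_rate  = perpetuity_price(5.0, 0.02)  # same $5, rate halved to 2%: price doubles to 250.0

print(round(p_high_rate, 2), round(p_cf_growth, 2), round(p_low_rate, 2))
```

In this stylized setting, a modest upward revision of expected cash flows raises the price only modestly, while halving the discount rate doubles it, which is why a weak recovery in cash flows is compatible with a roaring recovery in stock prices.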

Of course, to say that the increase in stock prices is attributable to the decline in real interest rates just raises a different question: what accounts for the decline in real interest rates? The answer, derived from Irving Fisher, is basically that if perceived opportunities for future investment and growth are diminished, the willingness of people to trade future for present income also tends to diminish. What the rate of interest represents in the Fisherian framework is the rate at which people are willing to trade future for present income – i.e., the premium (discount) that is placed on present (future) income.

The Fisherian view is totally at odds with the view that the real interest rate is – or can be — controlled by the monetary authority. According to the latter view, the reason that real interest rates since the 2008 downturn have been at historically low levels is that the Federal Reserve has forced interest rates down to those levels by flooding the economy with huge quantities of printed money. There is a certain sense in which that view has a small element of truth: had the Fed adopted a different set of policy goals concerning inflation and nominal GDP, real interest rates might have risen to more “normal” levels. But given the overall policy framework within which it was operating, the Fed had only minimal control over the real rate of interest.

The essential idea is that in the Fisherian view the real rate of interest is not a single price determined in a single market; it is a distillation of the entire intertemporal structure of price relationships simultaneously determined in the myriad of individual markets in which transactions for present and future delivery are continuously being agreed upon. To imagine that the Fed, or any monetary authority, could control or even modestly influence this almost incomprehensibly complicated structure of price relationships according to its wishes is simply delusional.

If the decline in real interest rates after the 2008 downturn reflected generally reduced optimism about future economic growth, then the increase in stock prices actually reflected declining optimism by most people about their future well-being compared to their pre-downturn expectations. That loss of optimism might have been, at least in part, self-fulfilling insofar as it discouraged potentially worthwhile – i.e., profitable — investments that would have been undertaken had expectations been more optimistic.

Nevertheless, the near doubling of stock prices during the Obama administration did coincide with a not insignificant improvement in the well-being of most Americans. Most Americans were substantially better off at the end of 2016, after about seven years of slow but steady economic growth, than they were at the end of 2008 when total output and employment were contracting at the fastest rate since the Great Depression. But to use the increase in stock prices as a quantitative measure of the improvement in their well-being would be misleading.

I would also mention as an aside that a favorite faux-populist talking point of Obama and Fed critics used to be that rising stock prices during the Obama years revealed the bias of the elitist Fed Governors appointed by Obama in favor of the wealthy owners of corporate stock, and their callous disregard of the small savers who leave their retirement funds in bank savings accounts earning minimal interest and of workers whose wage increases barely kept up with inflation. But those same critics – and I am thinking especially of the Wall Street Journal editorial page – who excoriated the Obama administration and the Fed for trying to raise stock prices by keeping interest rates at abnormally low levels now unblushingly celebrate record-high stock prices as proof that tax cuts mostly benefiting corporations and their stockholders signal the start of a new golden age of accelerating growth.

So the next question to consider is what can we infer about the well-being of Americans and the American economy from the increase in stock prices since November 8, 2016? For purposes of this mental exercise, let me stipulate that the rise in stock prices since the moment when it became clear who had been elected President by the voters on November 8, 2016 was attributable to the policies that the new administration was expected to adopt.

Because interest rates have risen along with stock prices since November 8, 2016, increased stock prices must reflect investors’ growing optimism about the future cash flows to be distributed by corporations to shareholders. So, our question can be restated as follows: which policies — actual or expected — of the new administration could account for the growing optimism of investors since the election? Here are five policy categories to consider: (1) regulation, (2) taxes, (3) international trade, (4) foreign affairs, (5) macroeconomic and monetary policies.
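The inference in the preceding paragraph can be illustrated with the Gordon growth formula, P = CF/(r − g); the numbers below are illustrative assumptions only:

```python
# With the Gordon growth formula P = CF / (r - g), a price increase
# accompanied by a HIGHER discount rate implies an even larger upward
# revision in expected cash flows. Illustrative numbers only.

def gordon_price(cf, r, g):
    return cf / (r - g)

p_before  = gordon_price(5.0, 0.05, 0.02)   # $5 expected, 5% rate, 2% growth
p_rate_up = gordon_price(5.0, 0.055, 0.02)  # rate up, expectations fixed: price falls
p_both_up = gordon_price(6.0, 0.055, 0.02)  # 20% higher expected cash flow: price rises

print(round(p_before, 2), round(p_rate_up, 2), round(p_both_up, 2))
```

Holding expectations fixed, a rise in the discount rate should lower prices; so the observed combination of rising rates and rising prices can only be reconciled by a still larger upward revision in expected cash flows.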

The negative reaction of stock prices to the announcement last week that tariffs will be imposed on steel and aluminum imports suggests that hopes for protectionist trade policies were not the main cause of rising investor optimism since November 2016. And presumably investor hopes for rising corporate cash flows to shareholders were not buoyed up by increasing tensions on the Korean peninsula and various belligerent statements by Administration officials about possible military responses to North Korean provocations.

Macroeconomic and monetary policies being primarily the responsibility of the Federal Reserve, the most important macroeconomic decision made by the new Administration to date was appointing Jay Powell to succeed Janet Yellen as Fed Chair. But this appointment was seen as a decision to keep Fed monetary policy more or less unchanged from what it was under Yellen, so one could hardly ascribe increased investor optimism to a decision not to change the macroeconomic and monetary policies that had been in place for at least the previous four years.

That leaves us with anticipated or actual changes in regulatory and tax policies as reasons for increased optimism about future cash flows from corporations to their shareholders. The two relevant questions to ask about anticipated or actual changes in regulatory and tax policies are: (1) could such changes have raised investor optimism, thereby raising stock prices, and (2), if so, would rising stock prices reflect enhanced well-being on the part of the American economy and the American people?

Briefly, the main idea for regulatory reform that the Administration wants to pursue is to require that whenever an agency adopts a new regulation, it should simultaneously eliminate two old ones. Supposedly such a requirement – sometimes called a regulatory budget – is to limit the total amount of regulation that the government can impose on the economy, the theory being that new regulations would not be adopted unless they were likely to be really effective.

But agencies are already required to show that regulations pass some cost-benefit test before imposing new regulations. So it’s not clear that the economy would be better off if new regulations, which can now be adopted only if they are expected to generate benefits exceeding the costs associated with their adoption, cannot be adopted unless two other regulations are eliminated. Presumably, underlying the new regulatory approach is a theory of bureaucratic behavior positing that the benefits of new regulations are systematically overestimated and their costs systematically underestimated by bureaucrats.

I’m not going to argue the merits of the underlying theory, but obviously it is possible that the new regulatory approach would result in increased profits for businesses that will have fewer regulatory burdens imposed upon them, thereby increasing the value of ownership shares in those firms. So, it’s possible that the new regulatory approach adopted by the Administration is causing stock prices to rise, presumably by more than they would have risen under the old simple cost-benefit regulatory approach that was followed by the Obama Administration.

But even if the new regulatory approach has caused stock prices to rise, it’s not clear that increasing stock valuations represent a net increase in the well-being of the American economy and the American people. If regulations that are costly to the economy in general are eliminated, the benefits of fewer regulations would accrue not just to the businesses whose profits rise as a result; eliminating inefficient regulations would also benefit the rest of the economy by freeing up resources to produce goods and services whose value to consumers would exceed the benefits foregone when the regulations were eliminated. But it’s also possible that regulations are providing benefits greater than the costs of implementing and enforcing them.

If eliminating regulations leads to increased pollution or sickness or consumer fraud, and the value of those foregone benefits exceeds the costs of those regulations, it will not be corporations and their shareholders that suffer; it will be the general public that will bear the burden of their elimination. While corporations increase the cash flows paid to shareholders, members of the public will suffer more-than-offsetting reductions in well-being by being exposed to increased pollution, suffering increased illness and injury, or suffering added fraud and other consumer harms.

Since 1970, when the federal government took serious measures to limit air and water pollution, air and water quality have improved greatly in most of the US. Those improvements, for the most part, have probably not been reflected in stock prices, because environmental improvements, mostly affecting common-property resources, can’t be easily capitalized, though, some of those improvements have likely been reflected in increasing land values in cities and neighborhoods where air and water quality have improved. Foregoing pollution-reducing regulations might actually have led to increased stock prices for many corporations burdened by those regulations, but the US as a whole, and its inhabitants, would not have been better off without those regulations than they are with them.

So, rising stock prices are not necessarily a good indicator of whether the new regulatory approach of the Administration is benefiting or harming the American economy and the American public. Market valuations convey a lot of important information, but there is also a lot of important information that is not conveyed in stock prices.

As for taxes, it is straightforward that reducing corporate-tax liability increases funds available to be paid directly to shareholders as dividends and share buy-backs, or indirectly through investments expected to increase cash flows to shareholders in the more distant future. Does an increase in stock prices caused by a reduction in corporate-tax liability imply any enhancement in the well-being of the American economy and the American people?

The answer, as a first approximation, is no. A reduction in corporate tax liability implies a reduction in the tax liability of shareholders, and that reduction is immediately capitalized into the value of shares. Increased stock prices simply reflect the expected reduction in shareholder tax liability.

Of course, reducing the tax burden on shareholders may improve economic performance, causing an increase in corporate cash flows to shareholders exceeding the reduction in shareholder tax liabilities. But it is unlikely that the difference between the increase in cash flows to shareholders and the reduction in shareholder tax liabilities would be more than a few percent of the total reduction in corporate tax liability, so that any increase in economic performance resulting from a reduction in corporate tax liability would account for only a small fraction of the increase in stock prices.
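The first-approximation capitalization argument can be made concrete with a stylized perpetuity example. The numbers are illustrative; the 35% and 21% rates correspond to the statutory corporate rates before and after the recent tax legislation:

```python
# Capitalization of a corporate-tax cut into share value, to a first
# approximation: value the after-tax cash flow as a perpetuity.
# Illustrative numbers.

def share_value(pretax_cf, tax_rate, r):
    return pretax_cf * (1 - tax_rate) / r

v_old = share_value(10.0, 0.35, 0.05)  # $10 pre-tax per share at a 35% rate
v_new = share_value(10.0, 0.21, 0.05)  # same firm, same cash flow, 21% rate

print(round(v_old, 2), round(v_new, 2))     # 130.0 158.0
print(round((v_new / v_old - 1) * 100, 1))  # 21.5% price rise from the tax cut alone
```

The share price rises about 21.5% with no change whatsoever in the firm’s underlying economic performance, which is why rising stock prices are, by themselves, weak evidence that the tax cut improved the economy.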

The good thing about the corporate-income tax is that it is so easy to collect, and that it is so hard to tell who really bears the tax burden: shareholders, workers or consumers. That’s why governments like taxing corporations. But the really bad thing about the corporate-income tax is that it is so hard to tell who really bears the burden of the corporate tax, shareholders, workers or consumers.

Because it is so hard to tell who bears the burden of the tax, people just think that “corporations” pay the tax, but “corporations” aren’t people, and they don’t really pay taxes; they are just the conduit for a lot of unidentified people to pay unknown amounts of tax. As Adam Winkler has just explained in this article and in an important new book, it is a travesty that the Supreme Court was hoodwinked in the latter part of the nineteenth century into accepting the notion that corporations are Constitutional persons with essentially the same rights as actual persons – indeed, with far greater rights than human beings belonging to disfavored racial or ethnic categories.

As I wrote years ago in one of my early posts on this blog, there are some very good arguments for abolishing the corporate income tax altogether, as Hyman Minsky argued. Forcing corporations to distribute their profits to shareholders would diminish the incentives for corporate empire building, thereby making venture capital more available to start-ups and small businesses. Such a reform might turn out to be an important democratizing and decentralizing change in the way that modern capitalism operates. But even if that were so, it would not mean that the effects of a reduction in the corporate tax rate could be properly measured by looking at the resulting change in corporate stock prices.

Before closing this excessively long post, I will just remark that although I have been using the basic theory of asset pricing that underlies the efficient market hypothesis (EMH), adopting that theory of asset pricing does not imply that I accept the EMH. What separates me from the EMH are its assumptions that there is a single unique equilibrium toward which the economy is tending at any moment in time, and that the expectations of market participants are unbiased and efficient estimates of the equilibrium price vector toward which the price system is moving. I reject those assumptions about the existence and uniqueness of an equilibrium price vector. If there is no equilibrium price vector toward which the economy is tending, the idea that expectations are governed by some objective equilibrium which is already there to be discovered is erroneous; expectations create their own reality, and equilibrium is itself determined by expectations. When the existence of equilibrium depends on expectations, it becomes impossible to assign any meaning to the term “efficient market.”

Milton Friedman’s Rabble-Rousing Case for Abolishing the Fed

I recently came across this excerpt from a longer interview of Milton Friedman conducted by Brian Lamb on C-SPAN in 1994. In this excerpt Lamb asks Friedman what he thinks of the Fed, and Friedman, barely able to contain his ideological fervor, quickly rattles off his version of the history of the Fed, blaming the Fed, at least by implication, for all the bad monetary and macroeconomic events that happened between 1914, when the Fed came into existence, and the 1970s.

Here’s a rough summary of Friedman’s tirade:

I have long been in favor of abolishing [the Fed]. There is no institution in the United States that has such a high public standing and such a poor record of performance. . . . The Federal Reserve began operations in 1914 and presided over a doubling of prices during World War I. It produced a major collapse in 1921. It had a good period from about 1922 to 1928. It took actions in 1928 and 1929 that led to a major recession in 1929 and 1930, and it converted that recession by its actions into the Great Depression. The major villain in the Great Depression in my opinion was unquestionably the Federal Reserve System. Since that time, it presided over a doubling of prices in World War II. It financed the inflation of the 1970s. On the whole it has a very poor record. It’s done far more harm than good.

Let’s go through Friedman’s complaints one at a time.

World War I inflation.

Friedman blames World War I inflation on the Fed. Friedman, as I have shown in many previous posts, had a very shaky understanding of how the gold standard worked. His remark about the Fed’s “presiding over a doubling of prices” during World War I is likely yet another example of Friedman’s incomprehension, though his use of the weasel words “presided over” rather than the straightforward “caused” does suggest that Friedman was merely trying to insinuate that the Fed was blameworthy when he actually understood that the Fed had almost no control over inflation in World War I. The US remained formally on the gold standard until April 6, 1917, when the US declared war on Germany and entered World War I, formally suspending the convertibility of the dollar into gold.

As long as the US remained on a gold standard, the value of the dollar was determined by the value of gold. The US was importing lots of gold during the first two and a half years of the World War I as the belligerents used their gold reserves and demonetized their gold coins to finance imports of war material from the US. The massive demonetization of gold caused gold to depreciate on world markets. Another neutral country, Sweden, actually left the gold standard during World War I to avoid the inevitable inflation associated with the wartime depreciation of gold. So it was either ignorant or disingenuous for Friedman to attribute the World War I inflation to the actions of the Federal Reserve. No country could have remained on the gold standard during World War I without accepting inflation, and the Federal Reserve had no legal authority to abrogate or suspend the legal convertibility of the dollar into a fixed weight of gold.

The Post-War Collapse of 1921

Friedman correctly blames the 1921 collapse on the Fed. However, after a rapid wartime and postwar inflation, the US was trying to recreate a gold standard while holding 40% of the world’s gold reserves. The Fed therefore took steps to stabilize the value of gold, which meant raising interest rates, thereby inducing a further inflow of gold into the US to stop the real value of gold from falling in international markets. The problem was that the Fed went overboard, causing a steep, and probably unnecessary, deflation.

The Great Depression

Friedman is right that the Fed helped cause the Great Depression by its actions in 1928 and 1929, raising interest rates to try to quell rapidly rising stock prices. But the concerns about rising stock-market prices were probably misplaced, and the Fed’s raising of interest rates caused an inflow of gold into the US just when a gold outflow from the US was needed to accommodate the rising demand for gold on the part of the Bank of France and other central banks rejoining the gold standard and accumulating gold reserves. It was the sudden tightening of the world gold market, with the US and France and other countries rejoining the gold standard simultaneously trying to increase their gold holdings, that caused the value of gold to rise (and nominal prices to fall) in 1929 starting the Great Depression. Friedman totally ignored the international context in which the Fed was operating, failing to see that the US price level under the newly established gold standard, being determined by the international value of gold, was beyond the control of the Fed.

World War II Inflation

As with World War I, Friedman blamed the Fed for “presiding over” a doubling of prices in World War II. But unlike World War I, when rising US prices reflected a falling real value of gold caused by events outside the US and beyond the control of the Fed, in World War II rising US prices reflected the falling value of an inconvertible US dollar caused by Fed “money printing” at the behest of the President and the Treasury. But why did Friedman consider Fed money printing in World War II to have been a blameworthy act on the part of the Fed? The US was then engaged in a total war against the Axis powers. Under those circumstances, was the primary duty of the Fed to keep prices stable, or to use its control over the printing press to ensure that the US government had sufficient funds to win the war against Nazi totalitarianism and allied fascist forces, thereby preserving American liberties and values even more fundamental than keeping inflation low and enabling creditors to extract what was owed to them by their debtors in dollars of undiminished real purchasing power?

Now it’s true that many of Friedman’s libertarian allies were appalled by US participation in World War II, but Friedman, to his credit, did not share their disapproval of US participation in World War II. But, given his support for World War II, Friedman should have at least acknowledged the obvious role of inflationary finance in emergency war financing, a role which, as Earl Thompson and I and others have argued, rationalizes the historic legal monopoly on money printing maintained by almost all sovereign states. To condemn the Fed for inflationary policies during World War II without recognizing the critical role of the “printing press” in war finance was a remarkably uninformed and biased judgment on Friedman’s part.

1970s Inflation

The Fed certainly had a major role in the inflation of the 1970s, which as early as 1966 was already starting to creep up from the 1-2% rates that had prevailed from 1953 to 1965. The rise in inflation was again triggered by war-related expenditures, owing to the growing combat role of the US in Vietnam starting in 1965. The Fed’s role in rising inflation in the late 1960s and early 1970s was hardly the Fed’s finest hour, but again, it is unrealistic to expect a public institution like the Fed to withhold the financing necessary to support a military action undertaken by the national government. Certainly, the role of Arthur Burns, appointed by Nixon in 1970 to become Fed Chairman, in encouraging Nixon to impose wage-and-price controls as an anti-inflationary measure was one of the most disreputable chapters in the Fed’s history, and the cluelessness of Carter’s first Fed Chairman, G. William Miller, appointed to succeed Burns, is almost legendary. But given the huge oil-price increases of 1973-74 and 1978-79, a policy of accommodating those supply-side shocks by allowing a temporary increase in inflation was probably optimal. So, given the difficult circumstances under which the Fed was operating, the increased inflation of the 1970s was not entirely undesirable.

But although Friedman was often sensitive to the subtleties and nuances of policy making when rendering scholarly historical and empirical judgments, he rarely allowed subtleties and nuances to encroach on his denunciations when he was operating in full rabble-rousing mode.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
