
Henry Manne and the Dubious Case for Insider Trading

In a recent tweet, my old friend Alan Reynolds plugged a 2003 op-ed article (“The Case for Insider Trading”) by Henry Manne railing against legal prohibitions of insider trading. Reynolds’s tweet followed his earlier tweet railing against the indictment of Rep. Chris Collins for engaging in insider trading after learning that a key clinical trial of a drug being developed by the small pharmaceutical company (Innate Immunotherapeutics) of which he was the largest shareholder had failed, making a substantial decline in the value of the company’s stock inevitable once news of the failed trial became public. Collins informed his own son of the results of the trial, and his son then shared that information with the son’s father-in-law and other friends and acquaintances, all of whom sold their stock in the firm before the news became public and the company’s stock price fell by 92%.

Reynolds thinks that what Collins did was just fine, and he invokes Manne as an authority to support his position. Here is how Manne articulated the case against prohibiting insider trading in his op-ed piece, which summarizes a longer 2005 article (“Insider Trading: Hayek, Virtual Markets and the Dog that Did Not Bark”) published in the Journal of Corporation Law.

Prior to 1968, insider trading was very common, well-known, and generally accepted when it was thought about at all.

A similar observation – albeit somewhat backdated – might be made about slavery and polygamy.

When the time came, the corporate world was neither able nor inclined to mount a defense of the practice, while those who demanded its regulation were strident and successful in its demonization. The business community was as hoodwinked by these frightening arguments as was the public generally.

Note the impressive philosophical detachment with which Manne recounts the historical background.

Since then, however, insider trading has been strongly, if by no means universally, defended in scholarly journals. There have been three primary economic arguments (not counting the show-stopper that the present law simply cannot be effectively enforced). The first and generally undisputed argument is that insider trading does little or no direct harm to any individual trading in the market, even when an insider is on the other side of the trades.

The assertion that insider trading does “little or no direct harm” is patently ridiculous inasmuch as it rests on the weasel word “direct”: the wealth transferred from less-informed to better-informed traders cannot, by definition, result in “direct” harm to the less-informed traders, because “direct harm” is understood to occur only when theft or fraud is used to effect a wealth transfer. Question-begging at its best.

The second argument in favor of allowing insider trading is that it always (fraud aside) helps move the price of a corporation’s shares to its “correct” level. Thus insider trading is one of the most important reasons why we have an “efficient” stock market. While there have been arguments about the relative weight to be attributed to insider trading and to other devices also performing this function, the basic idea that insider trading pushes stock prices in the right direction is largely unquestioned today.

“Efficient” (scare quotes are Manne’s) pricing of stocks and other assets certainly sounds good, but defining “efficient” pricing is not so easy. And even if one were to grant that there is a well-defined efficient price at a moment in time, it is not at all clear how to measure the social gain from an efficient price relative to an inefficient price, or, even more problematically, how to measure the social benefit from arriving at the efficient price sooner rather than later.

The third economic defense has been that it is an efficient and highly desirable form of incentive compensation, especially for corporations dependent on innovation and new developments. This argument has come to the fore recently with the spate of scandals involving stock options. These are the closest substitutes for insider trading in managerial compensation, but they suffer many disadvantages not found with insider trading. The strongest argument against insider trading as compensation is the difficulty of calibrating entitlements and rewards.

“The difficulty of calibrating entitlements and rewards” is simply a euphemism for the incentive of insiders privy to adverse information to trade on that information rather than attempt to counteract an expected decline in the value of the firm.

Critics of insider trading have responded to these arguments principally with two aggregate-harm theories, one psychological and the other economic. The first, the far-and-away favorite of the SEC, is the “market confidence” argument: If investors in the stock market know that insider trading is common, they will refuse to invest in such an “unfair” market.

Using scare quotes around “unfair” as if the idea that trading with asymmetric information might be unfair were illogical or preposterous, Manne stumbles into an inconsistency of his own by abandoning the very efficient market hypothesis that he otherwise steadfastly upholds. According to the efficient market hypothesis, market prices reflect all publicly available information, so movements in stock prices are unpredictable on the basis of publicly available information. Thus, investors who select stocks randomly should, in the aggregate and over time, just break even. However, traders with inside information make profits. But if it is possible to break even by picking stocks randomly, who are the insiders making their profits from? The renowned physicist Niels Bohr, who was fascinated by stock markets and anticipated the efficient market hypothesis, argued that it must be the stock market analysts from whom the profits of insiders are extracted. Whether Bohr was right that insiders extract their profits only from market analysts and not at all from traders with randomized strategies, I am not sure, but clearly Bohr’s basic intuition that profits earned by insiders are necessarily at the expense of other traders is logically unassailable.
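Bohr’s question is easy to dramatize. Here is a minimal simulation – my own illustration, with purely hypothetical prices and trader counts, not anything drawn from Manne’s or Bohr’s arguments – in which uninformed traders pick a side at random while an insider, knowing bad news is coming, sells into any excess of buy orders. The group of random traders does not break even; it loses exactly what the insider gains.

```python
# A minimal sketch, assuming hypothetical prices and a stylized matching rule.
import random

random.seed(7)

PRE_NEWS_PRICE = 100.0   # market price while the bad news is still private
POST_NEWS_VALUE = 50.0   # what the stock is worth once the news is public
N_UNINFORMED = 1_000     # uninformed traders choosing buy or sell at random

buys = sum(random.choice((0, 1)) for _ in range(N_UNINFORMED))
sells = N_UNINFORMED - buys

# Uninformed orders cross with one another first: each matched buy/sell pair
# nets to zero for the group, the buyer's loss being the seller's gain.
# The insider fills any excess buy orders (selling overpriced stock) and
# simply stands aside if sells happen to outnumber buys.
insider_sales = max(buys - sells, 0)

insider_pnl = insider_sales * (PRE_NEWS_PRICE - POST_NEWS_VALUE)
uninformed_group_pnl = -insider_pnl   # matched pairs cancel; only the imbalance remains

print(f"buys={buys}, sells={sells}: insider sells {insider_sales} shares")
print(f"insider P&L {insider_pnl:+,.0f}; uninformed group P&L {uninformed_group_pnl:+,.0f}")
```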

Thus investment and liquidity will be seriously diminished. But there is no evidence that publicity about insider trading ever caused a significant reduction in aggregate stock market activity. It is merely one of many scare arguments that the SEC and others have used over the years as a substitute for sound economics.

Manne’s qualifying adjective “significant” is clearly functioning as a weasel word in this context, because, on Manne’s own EMH premises, the theoretical argument clearly implies that an understanding that insiders may freely trade on their inside information would make stock trading by non-insiders, in the aggregate and over time, unprofitable. So Manne resorts to a hand-waving argument about the size of the effect. The size of the effect depends on how widespread insider trading is and how well-informed the public is about the extent of such trading, so he is in no position to judge its significance.

The more responsible aggregate-harm argument is the “adverse selection” theory. This argument is that specialists and other market makers, when faced with insider trading, will broaden their bid-ask spreads to cover the losses implicit in dealing with insiders. The larger spread in effect becomes a “tax” on all traders, thus impacting investment and liquidity. This is a plausible scenario, but it is of very questionable applicability and significance. Such an effect, while there is some confirming data, is certainly not large enough in aggregate to justify outlawing insider trading.

But the adverse-selection theory credited by Manne is no different in principle from the “market confidence” theory that he dismisses; they are two sides of the same coin, derived from the same premise: that the profits of insider traders must come from the pockets of non-insiders. So he has no basis in theory to dismiss either effect, and his evidence that insider trading provides any efficiency benefit is certainly no stronger than the evidence, which he so blithely dismisses, that insider trading harms non-insiders.
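The adverse-selection mechanism that Manne concedes can be made concrete with a toy calculation in the spirit of the Glosten-Milgrom model – my own sketch, with hypothetical values, not anything in Manne’s article. A market maker who must break even on average sets quotes equal to the expected value of the stock conditional on the direction of the incoming order, and the resulting spread widens one-for-one with the share of order flow coming from informed insiders.

```python
# A minimal Glosten-Milgrom-style sketch, assuming hypothetical values.
def breakeven_quotes(v_high, v_low, p_informed):
    """Zero-profit ask and bid when the asset is worth v_high or v_low with
    equal probability and a fraction p_informed of orders come from traders
    who already know the true value."""
    mid = 0.5 * (v_high + v_low)
    ask = p_informed * v_high + (1 - p_informed) * mid  # E[value | buy order]
    bid = p_informed * v_low + (1 - p_informed) * mid   # E[value | sell order]
    return bid, ask

for pi in (0.0, 0.1, 0.3, 0.5):
    bid, ask = breakeven_quotes(110.0, 90.0, pi)
    print(f"informed share {pi:.0%}: bid={bid:5.1f} ask={ask:5.1f} spread={ask - bid:4.1f}")

# The spread works out to p_informed * (v_high - v_low): the "tax" on all
# traders rises one-for-one with the prevalence of informed trading.
```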

In fact the relevant theoretical point was made very clearly by Jack Hirshleifer in the important article (“The Private and Social Value of Information and the Reward to Inventive Activity”) about which I wrote last week on this blog. Information has social value when it leads to a reconfiguration of resources that increases the total output of society. However, the private value of information may far exceed whatever social value the information has, because privately held information that allows the better-informed to trade with the less-well informed enables the better-informed to profit at the expense of the less-well informed. Prohibiting insider trading prevents such wealth transfers, and insofar as these wealth transfers are not associated with any social benefit from improved resource allocation, an argument that such trading reduces welfare follows as night does day. Insofar as such trading does generate some social benefit, there are also the losses associated with adverse selection and reduced market confidence, so the efficiency effects, though theoretically ambiguous, are still very likely negative.

But Manne posits a different kind of efficiency effect.

No other device can approach knowledgeable trading by insiders for efficiently and accurately pricing endogenous developments in a company. Insiders, driven by self-interest and competition among themselves, will trade until the correct price is reached. This will be true even when the new information involves trading on bad news. You do not need whistleblowers if you have insider trading.

Here again, Manne is assuming that efficient pricing has large social benefits, but that premise depends on how rapidly resource allocation responds to price changes, especially changes in asset prices. The question is how long it takes for insider information to become public information. If insider information quickly becomes public, so that insiders can profit from their inside information only by trading on it before the information becomes public, the social value of speeding up the rate at which inside information is reflected in asset pricing is almost nil. But Manne implicitly assumes that the social value of the information is very high, and it is precisely that implicit assumption that would have to be demonstrated before the efficiency argument for insider trading would come close to being persuasive.

Moreover, allowing insiders to trade on bad news creates precisely the wrong incentive, effectively giving insiders the opportunity to loot a company before it goes belly up, rather than take any steps to mitigate the damage.

While I acknowledge that there are legitimate concerns about whether laws against insider trading can be enforced without excessive arbitrariness, those concerns are entirely distinct from arguments that insider trading actually promotes economic efficiency.


My Paper (with Sean Sullivan) on Defining Relevant Antitrust Markets Now Available on SSRN

Antitrust aficionados may want to have a look at this new paper (“The Logic of Market Definition”) that I have co-authored with Sean Sullivan of the University of Iowa School of Law about defining relevant antitrust markets. The paper is now posted on SSRN.

Here is the abstract:

Despite the voluminous commentary that the topic has attracted in recent years, much confusion still surrounds the proper definition of antitrust markets. This paper seeks to clarify market definition, partly by explaining what should not factor into the exercise. Specifically, we identify and describe three common errors in how courts and advocates approach market definition. The first error is what we call the natural market fallacy: the mistake of treating market boundaries as preexisting features of competition, rather than the purely conceptual abstractions of a particular analytical process. The second is the independent market fallacy: the failure to recognize that antitrust markets must always be defined to reflect a theory of harm, and do not exist independent of a theory of harm. The third is the single market fallacy: the tendency of courts and advocates to seek some single, best relevant market, when in reality there will typically be many relevant markets, all of which could be appropriately drawn to aid in competitive effects analysis. In the process of dispelling these common fallacies, this paper offers a clarifying framework for understanding the fundamental logic of market definition.

Hirshleifer on the Private and Social Value of Information

I have written a number of posts (here, here, here, and here) over the past few years citing an article by one of my favorite UCLA luminaries, Jack Hirshleifer, of the fabled UCLA economics department of the 1950s, 1960s, 1970s and 1980s. Like everything Hirshleifer wrote, the article, “The Private and Social Value of Information and the Reward to Inventive Activity,” published in 1971 in the American Economic Review, is deeply insightful, carefully reasoned, and lucidly explained, reflecting the author’s comprehensive mastery of the whole body of neoclassical microeconomic theory.

Hirshleifer’s article grew out of a whole literature inspired by two of Hayek’s most important articles “Economics and Knowledge” in 1937 and “The Use of Knowledge in Society” in 1945. Both articles were concerned with the fact that, contrary to the assumptions in textbook treatments, economic agents don’t have complete information about all the characteristics of the goods being traded and about the prices at which those goods are available. Hayek was aiming to show that markets are characteristically capable of transmitting information held by some agents in a condensed form to make it usable by other agents. That role is performed by prices. It is prices that provide both information and incentives to economic agents to formulate and tailor their plans, and if necessary, to readjust those plans in response to changed conditions. Agents need not know what those underlying changes are; they need only observe, and act on, the price changes that result from those changes.

Hayek’s argument, though profoundly insightful, was not totally convincing in demonstrating the superiority of the pure “free market,” for three reasons.

First, economic agents base decisions, as Hayek himself was among the first to understand, not just on actual current prices, but also on expected future prices. Although traders sometimes – but usually don’t – know what the current price of something is, one can only guess – not know – what the price of that thing will be in the future. So, the work of providing the information individuals need to make good economic decisions cannot be accomplished – even in principle – just by the adjustment of prices in current markets. People also need enough information to make good guesses – form correct expectations – about future prices.

Second, economic agents don’t automatically know all prices. The assumption that every trader knows exactly what prices are before executing plans to buy and sell is true, if at all, only in highly organized markets where prices are publicly posted and traders can always buy and sell at the posted price. In most other markets, transactors must devote time and effort to find out what prices are and to find out the characteristics of the goods that they are interested in buying. It takes effort or search or advertising or some other, more or less costly, discovery method for economic agents to find out what current prices are and what characteristics those goods have. If agents aren’t fully informed even about current prices, they don’t necessarily make good decisions.

Libertarians, free marketeers, and other Hayek acolytes often like to credit Hayek with having solved or having shown how “the market” solves “the knowledge problem,” a problem that Hayek definitively showed a central-planning regime to be incapable of solving. But the solution at best is only partial, and certainly not robust, because markets never transmit all available relevant information. That’s because markets transmit only information about costs and valuations known to private individuals, but there is a lot of information about public or social valuations and costs that is not known to private individuals and rarely if ever gets fed into, or is transmitted by, the price system — valuations of public goods and the social costs of pollution for example.

Third, a lot of information is not obtained or transmitted unless it is acquired, and acquiring information is costly. Economic agents must search for relevant information about the goods and services that they are interested in obtaining and about the prices at which those goods and services are available. Moreover, agents often engage in transactions with counterparties in which one side has an information advantage over the other. When traders have an information advantage over their counterparties, the opportunity for one party to take advantage of the inferior information of the counterparty may make it impossible for the two parties to reach mutually acceptable terms, because a party who realizes that the counterparty has an information advantage may be unwilling to risk being taken advantage of. Sometimes these problems can be surmounted by creative contractual arrangements or legal interventions, but often they can’t.

To recognize the limitations of Hayek’s insight is not to minimize its importance, either in its own right or as a stimulus to further research. Important early contributions (all published between 1961 and 1970) by Stigler (“The Economics of Information”), Ozga (“Imperfect Markets through Lack of Knowledge”), Arrow (“Economic Welfare and the Allocation of Resources for Invention”), Demsetz (“Information and Efficiency: Another Viewpoint”) and Alchian (“Information Costs, Pricing, and Resource Unemployment”) all analyzed the problem of incomplete and limited information and the incentives for acquiring information, the institutions and market arrangements that arise to cope with limited information, and the implications of these limitations and incentives for economic efficiency. They can all be traced directly or indirectly to Hayek’s early articles. Among the important results that seemed to follow from these early papers was that, because those who discover or create new knowledge cannot fully appropriate the net benefits accruing from that knowledge through patents or other forms of intellectual property, the incentive to create new knowledge is less than optimal.

Here is where Hirshleifer’s paper enters the picture. Is more information always better? It would certainly seem that more of any good is better than less. But how valuable is new information? And are the incentives to create or discover new information aligned with the value of that information? Hayek’s discussion implicitly assumed that the amount of information in existence is a given stock, at least in the aggregate. How can the information that already exists be optimally used? Markets help us make use of the information that already exists. But the problem addressed by Hirshleifer was whether the incentives to discover and create new information call forth the optimal investment of time, effort and resources to make new discoveries and create new knowledge.

Instead of focusing on the incentives to search for information about existing opportunities, Hirshleifer analyzed the incentives to learn about uncertain resource endowments and about the productivity of those resources.

This paper deals with an entirely different aspect of the economics of information. We here revert to the textbook assumption that markets are perfect and costless. The individual is always fully acquainted with the supply-demand offers of all potential traders, and an equilibrium integrating all individuals’ supply-demand offers is attained instantaneously. Individuals are unsure only about the size of their own commodity endowments and/or about the returns attainable from their own productive investments. They are subject to technological uncertainty rather than market uncertainty.

Technological uncertainty brings immediately to mind the economics of research and invention. The traditional position has been that the excess of the social over the private value of new technological knowledge leads to underinvestment in inventive activity. The main reason is that information, viewed as a product, is only imperfectly appropriable by its discoverer. But this paper will show that there is a hitherto unrecognized force operating in the opposite direction. What has been scarcely appreciated in the literature, if recognized at all, is the distributive aspect of access to superior information. It will be seen below how this advantage provides a motivation for the private acquisition and dissemination of technological information that is quite apart from – and may even exist in the absence of – any social usefulness of that information. (p. 561)

The key insight motivating Hirshleifer was that privately held knowledge enables someone possessing that knowledge to anticipate future price movements once the privately held information becomes public. If you can anticipate a future price movement that no one else can, you can confidently trade with others who don’t know what you know, and then wait for the profit to roll in when the less well-informed acquire the knowledge that you have. By assumption the newly obtained knowledge doesn’t affect the quantity of goods available to be traded, so acquiring new knowledge or information provides no social benefit. In a pure-exchange model, newly discovered knowledge provides no net social benefit; it only enables better-informed traders to anticipate price movements that less well-informed traders don’t see coming. Any gains from new knowledge are exactly matched by the losses suffered by those without that knowledge. Hirshleifer called the kind of knowledge that enables one to anticipate future price movements “foreknowledge,” which he distinguished from actual discovery.

The type of information represented by foreknowledge is exemplified by the ability to successfully predict tomorrow’s (or next year’s) weather. Here we have a stochastic situation: with particular probabilities the future weather might be hot or cold, rainy or dry, etc. But whatever does actually occur will, in due time, be evident to all: the only aspect of information that may be of advantage is prior knowledge as to what will happen. Discovery, in contrast, is correct recognition of something that is hidden from view. Examples include the determination of the properties of materials, of physical laws, even of mathematical attributes (e.g., the millionth digit in the decimal expansion of “π”). The essential point is that in such cases nature will not automatically reveal the information; only human action can extract it. (p. 562)

Hirshleifer’s result, though derived in the context of a pure-exchange economy, is very powerful, implying that any expenditure of resources devoted to finding out new information that enables its first possessor to predict price changes and reap trading profits is unambiguously wasteful, because it reduces the total consumption of the community.
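The accounting behind that conclusion is stark. Here is a back-of-the-envelope sketch, with hypothetical magnitudes of my own choosing:

```python
# A minimal sketch, assuming hypothetical magnitudes: foreknowledge in pure
# exchange is a one-for-one transfer, so any real resources spent acquiring
# it are a dead loss to the community.
insider_trading_gain = 1_000_000   # profit from trading on foreknowledge
counterparties_loss = -1_000_000   # matching loss of the less-well-informed
resources_spent = 200_000          # real resources burned acquiring the information

transfer = insider_trading_gain + counterparties_loss  # nets to exactly zero
net_social_return = transfer - resources_spent         # strictly negative

print(f"transfer nets to {transfer:,}; society is poorer by {resources_spent:,}")
```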

[T]he community as a whole obtains no benefit, under pure exchange, from either the acquisition or the dissemination (by resale or otherwise) of private foreknowledge. . . .

[T]he expenditure of real resources for the production of technological information is socially wasteful in pure exchange, as the expenditure of resources for an increase in the quantity of money by mining gold is wasteful, and for essentially the same reason. Just as a smaller quantity of money serves monetary functions as well as a larger, the price level adjusting correspondingly, so a larger amount of foreknowledge serves no social purpose under pure exchange that the smaller amount did not. (pp. 565-66)

Relaxing the assumption that there is no production does not alter the conclusion, even if the information discovered could lead to production decisions that increase the output of goods whose prices, as a result of the new information, rise sooner than they otherwise would have. If the foreknowledge is privately obtained, the private incentive is to use that information by trading with another, less-well-informed, trader at a price to which the other trader would not agree if he weren’t at an information disadvantage. The private incentive, in other words, is not to use foreknowledge to alter production decisions but to use it to trade with, and profit from, those with inferior knowledge.

[A]s under the regime of pure exchange, private foreknowledge makes possible large private profit without leading to socially useful activity. The individual would have just as much incentive as under pure exchange (even more, in fact) to expend real resources in generating socially useless private information. (p. 567)

If the foreknowledge is publicly available, there would be a change in production incentives to shift production toward more valuable products. However, the private gain if the information is kept private greatly exceeds the private value of the information if the information is public. Under some circumstances, private individuals may have an incentive to publicize their private information to cause the price increases in expectation of which they have taken speculative positions. But it is primarily the gain from foreseen price changes, not the gain from more efficient production decisions, that creates the incentive to discover foreknowledge.

The key factor underlying [these] results . . . is the distributive significance of private foreknowledge. When private information fails to lead to improved productive alignments (as must necessarily be the case in a world of pure exchange, and also in a regime of production unless there is dissemination effected in the interest of speculation or resale), it is evident that the individual’s source of gain can only be at the expense of his fellows. But even where information is disseminated and does lead to improved productive commitments, the distributive transfer gain will surely be far greater than the relatively minor productive gain the individual might reap from the redirection of his own real investment commitments. (Id.)

Moreover, better-informed individuals – indeed individuals who wrongly believe themselves to be better informed – will perceive it to be in their self-interest to expend resources to disseminate the information in the expectation that the ensuing price changes will redound to their profit. The private gain expected from disseminating information far exceeds the social benefit from the price changes once the new information is disseminated; the social benefit from those price changes corresponds to an improved allocation of resources, but that improvement will be very small compared to the expected private profit from anticipating the price change and trading with those who don’t anticipate it.

Hirshleifer then turns from the value of foreknowledge to the value of discovering new information about the world or about nature that makes a contribution to total social output by causing a shift of resources to more productive uses. Inasmuch as the discovery of new information about the world reveals previously unknown productive opportunities, it might be thought that the private incentive to devote resources to the discovery of technological information about productive opportunities generates substantial social benefits. But Hirshleifer shows that here, too, because the private discovery of information about the world creates private opportunities for gain by trading based on the consequent knowledge of future price changes, the private incentive to discover technological information always exceeds the social value of the discovery.

We need only consider the more general regime of production and exchange. Given private, prior, and sure information of event A [a state of the world in which a previously unknown natural relationship has been shown to exist] the individual in a world of perfect markets would not adapt his productive decisions if he were sure the information would remain private until after the close of trading. (p. 570)

Hirshleifer is saying that the discovery of a previously unknown property of the world can lead to an increase in total social output only by causing productive resources to be reallocated, but that reallocation can occur only if and when the new information is disclosed. So if someone discovers a previously unknown property of the world, the discoverer can profit from that information by anticipating the price effect likely to result once the information is disseminated and then making a speculative transaction based on the expectation of a price change. A corollary of this argument is that individuals who think that they are better informed about the world will take speculative positions based on their beliefs, but insofar as their investments in discovering properties of the world lead them to incorrect beliefs, their investments in information gathering and discovery will not be rewarded. The net social return to information gathering and discovery is thus almost certainly negative.

The obvious way of acquiring the private information in question is, of course, by performing technological research. By a now familiar argument we can show once again that the distributive advantage of private information provides an incentive for information-generating activity that may quite possibly be in excess of the social value of the information. (Id.)

Finally, Hirshleifer turns to the implications for patent policy of his analysis of the private and social value of information.

The issues involved may be clarified by distinguishing the “technological” and “pecuniary” effects of invention. The technological effects are the improvements in production functions . . . consequent upon the new idea. The pecuniary effects are the wealth shifts due to the price revaluations that take place upon release and/or utilization of the information. The pecuniary effects are purely redistributive.

For concreteness, we can think in terms of a simple cost-reducing innovation. The technological benefit to society is, roughly, the integrated area between the old and new marginal-cost curves for the preinvention level of output plus, for any additional output, the area between the demand curve and the new marginal-cost curve. The holder of a (perpetual) patent could ideally extract, via a perfectly discriminatory fee policy, this entire technological benefit. Equivalence between the social and private benefits of innovation would thus induce the optimal amount of private inventive activity. Presumably it is reasoning of this sort that underlies the economic case for patent protection. (p. 571)
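The geometry Hirshleifer describes is easy to make concrete. Here is a minimal numeric sketch, with a linear inverse demand curve and constant marginal costs of my own choosing (nothing in it comes from Hirshleifer’s text), computing the rectangle-plus-triangle benefit he refers to:

```python
# A minimal sketch, assuming inverse demand P(q) = 100 - q and a constant
# marginal cost reduced by the invention from 60 to 40 (hypothetical numbers).
c_old, c_new = 60.0, 40.0    # marginal cost before and after the invention
q_pre = 100.0 - c_old        # preinvention competitive output (where P = MC)
q_post = 100.0 - c_new       # postinvention competitive output

# Rectangle: area between the old and new MC curves over preinvention output.
saving_on_old_output = (c_old - c_new) * q_pre
# Triangle: area between the demand curve and the new MC curve over the
# additional output that the lower cost calls forth.
surplus_on_new_output = 0.5 * (c_old - c_new) * (q_post - q_pre)

total = saving_on_old_output + surplus_on_new_output
print(f"rectangle: {saving_on_old_output:.0f}, triangle: {surplus_on_new_output:.0f}, "
      f"total technological benefit: {total:.0f}")
# A perfectly discriminating, perpetual patent holder could in principle
# extract this entire sum in fees -- the "ideal" case Hirshleifer describes.
```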

In the passage just quoted, Hirshleifer is uncritically restating the traditional analysis of the social benefit from new technological knowledge. But the analysis overstates the benefit by assuming, incorrectly, that without patent protection the discovery would never be made. If the discovery would be made even without patent protection, then the technological benefit attributable to the patent is only the area indicated over a limited time horizon – the interval by which patent protection hastens the discovery – so a perpetual patent enabling its holder to extract in perpetuity all additional consumer and producer surplus flowing from the invention would overcompensate the patent holder.

Nor does Hirshleifer mention the tendency of patents to increase the costs of invention, research and development, owing to the royalties subsequent inventors must pay existing patent holders for infringing inventions, even when those inventions were, or would have been, discovered with no knowledge of the patented invention. While rewarding some inventions and inventors, patent protection penalizes or blocks subsequent inventions and inventors. Inventions are outputs, but they are also inputs. If the use of past inventions is made more costly for new inventors, it is not clear that the net result will be an increase in the rate of invention.

Moreover, the knowledge that an existing patent, or a patent that issues before a new invention is introduced, may block or penalize an infringing invention may in some cases cause an overinvestment in research, as inventors race to gain the sole right to an invention in order to avoid being excluded while gaining the right to exclude others.

Hirshleifer does mention some reasons why maximally rewarding patent holders for their inventions may lead to suboptimal results, but he fails to acknowledge that the conventional assessment of the social gain from new invention is substantially overstated, or that patents may well have a negative effect on inventive activity in fields in which patent holders have gained the right to exclude potentially infringing inventions, even when the infringing inventions would have been made without the knowledge publicly disclosed by the patent holders in their patent applications.

On the other side are the recognized disadvantages of patents: the social costs of the administrative-judicial process, the possible anti-competitive impact, and restriction of output due to the marginal burden of patent fees. As a second-best kind of judgment, some degree of patent protection has seemed a reasonable compromise among the objectives sought.

Of course, that judgment about the social utility of patents is not universally accepted, and authorities from Arnold Plant to Fritz Machlup and, most recently, Michele Boldrin and David Levine have been extremely skeptical of the arguments in favor of patent protection, copyright and other forms of intellectual property.

However, Hirshleifer advances a different counter-argument against patent protection based on his distinction between the private and social gains derived from information.

But recognition of the unique position of the innovator for forecasting and consequently capturing portions of the pecuniary effects – the wealth transfers due to price revaluation – may put matters in a different light. The “ideal” case of the perfectly discriminating patent holder earning the entire technological benefit is no longer so ideal. (pp. 571-72)

Of course, as I have pointed out, the ‘“ideal” case’ never was ideal.

For the same inventor is in a position to reap speculative profits, too; counting these as well, he would clearly be overcompensated. (p. 572)

Indeed!

Hirshleifer goes on to recognize that the capacity to profit from speculative activity may be beyond the capacity or the ken of many inventors.

Given the inconceivably vast number of potential contingencies and the costs of establishing markets, the prospective speculator will find it costly or even impossible to purchase neutrality from “irrelevant” risks. Eli Whitney [inventor of the cotton gin who obtained one of the first US patents for his invention in 1794] could not be sure that his gin would make cotton prices fall: while a considerable force would clearly be acting in that direction, a multitude of other contingencies might also have possibly affected the price of cotton. Such “uninsurable” risks gravely limit the speculation feasible with any degree of prudence. (Id.)

Hirshleifer concludes that there is no compelling case either for or against patent protection, because the standard discussion of the case for patent protection has not taken into consideration the potential profit that inventors can gain by speculating on the anticipated price effects of their inventions. The argument that inventors are unlikely to be adept at making such speculative plays is a serious one, but we have also seen the rise of patent trolls that buy up patent rights from inventors and then file lawsuits against suspected infringers. In a world without patent protection, it is entirely possible that patent trolls would reinvent themselves as patent speculators, buying up information about new inventions from inventors and using that information to engage in speculative trading. By acquiring a portfolio of inventions, such invention speculators could pool the risks of speculation over their entire portfolio, enabling them to speculate more effectively than any single inventor could on his own invention. Hirshleifer concludes as follows:

Even though practical considerations limit the effective scale and consequent impact of speculation and/or resale [but perhaps not as much as Hirshleifer thought], the gains thus achievable eliminate any a priori anticipation of underinvestment in the generation of new technological knowledge. (p. 574)

And I reiterate one last time that Hirshleifer arrived at his non-endorsement of patent protection even while accepting the overstated estimate of the social value of inventions and neglecting the tendency of patents to increase the cost of inventive activity.

My Paper on Hayek, Hicks and Radner and 3 Equilibrium Concepts Now Available on SSRN

A little over a year ago, I posted a series of posts (here, here, here, here, and here) that came together as a paper (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational-Expectations”) that I presented at the History of Economics Society in Toronto in June 2017. After further revisions I posted the introductory section and the concluding section in April before presenting the paper at the Colloquium on Market Institutions and Economic Processes at NYU.

I have since been making further revisions and tweaks to the paper as well as adding the names of Hicks and Radner to the title, and I have just posted the current version on SSRN where it is available for download.

Here is the abstract:

Along with Erik Lindahl and Gunnar Myrdal, F. A. Hayek was among the first to realize that the necessary conditions for intertemporal, as opposed to stationary, equilibrium could be expressed in terms of correct expectations of future prices, often referred to as perfect foresight. Subsequently, J. R. Hicks further elaborated the concept of intertemporal equilibrium in Value and Capital in which he also developed the related concept of a temporary equilibrium in which future prices are not correctly foreseen. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of his original 1928 paper on intertemporal equilibrium. As a preliminary, the paper explains the significance of Hayek’s 1937 distinction between correct expectations and perfect foresight. In non-chronological order, the three developments of interest are: (1) Roy Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; (2) Hicks’s temporary equilibrium model, and an important extension of that model by C. J. Bliss; (3) the Muth rational-expectations model and its illegitimate extension by Lucas from its original microeconomic application into macroeconomics. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model would seem to have been the natural development of Hayek’s approach. The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and ignores the fundamental Hayekian insights about the meaning of intertemporal equilibrium.

Martin Wolf Reviews Adam Tooze on the 2008 Financial Crisis

The eminent Martin Wolf, a fine economist and the foremost financial journalist of his generation, has written an admiring review of a new book (Crashed: How a Decade of Financial Crises Changed the World) about the financial crisis of 2008 and the ensuing decade of aftershocks, turmoil and upheaval by the distinguished historian Adam Tooze. This is not the first time I have written a post commenting on a review of a book by Tooze; in 2015, I wrote a post about David Frum’s review of Tooze’s book on World War I and its aftermath (The Deluge: The Great War, America and the Remaking of the Global Order, 1916-1931). No need to dwell on the obvious similarities between these two impressive volumes.

Let me admit at the outset that I haven’t read either book. Unquestionably my loss, but I hope at some point to redeem myself by reading both of them. But in this post I don’t intend to comment at length about Tooze’s argument. Judging from Martin Wolf’s review, I fully expect that I will agree with most of what Tooze has to say about the crisis.

My criticism – and I hesitate even to use that word – will be directed toward what, judging from Wolf’s review, Tooze seems to have left out of his book. I am referring to the role of tight monetary policy, motivated by an excessive concern with inflation, when what was causing inflation was a persistent rise in energy and commodity prices that had little to do with monetary policy. Certainly, the failure to fully understand the role of monetary policy during the 2006 to 2008 period in the run-up to the financial crisis doesn’t negate all the excellent qualities that the book undoubtedly has; nevertheless, leaving out that essential part of the story is like watching Hamlet without the prince.

Let me just offer a few examples from Wolf’s review. Early in the review, Wolf provides a clear overview of the nature of the crisis, its scope and the response.

As Tooze explains, the book examines “the struggle to contain the crisis in three interlocking zones of deep private financial integration: the transatlantic dollar-based financial system, the eurozone and the post-Soviet sphere of eastern Europe”. This implosion “entangled both public and private finances in a doom loop”. The failures of banks forced “scandalous government intervention to rescue private oligopolists”. The Federal Reserve even acted to provide liquidity to banks in other countries.

Such a huge crisis, Tooze points out, has inevitably deeply affected international affairs: relations between Germany and Greece, the UK and the eurozone, the US and the EU and the west and Russia were all affected. In all, he adds, the challenges were “mind-bogglingly technical and complex. They were vast in scale. They were fast moving. Between 2007 and 2012, the pressure was relentless.”

Tooze concludes this description of events with the judgment that “In its own terms, . . . the response patched together by the US Treasury and the Fed was remarkably successful.” Yet the success of these technocrats, first with support from the Democratic Congress at the end of the administration of George W Bush, and then under a Democratic president, brought the Democrats no political benefits.

This is all very insightful, and I have no quarrel with any of it. But it mentions not a word about the role of monetary policy. Last month I wrote a post about the implications of a flat or inverted yield curve. The yield curve usually has an upward slope because short-term interest rates tend to be lower than long-term rates. Over the past year the yield curve has been steadily flattening, as short-term rates have been increasing while long-term rates have risen only slightly if at all. Many analysts are voicing concern that the yield curve may go flat or become inverted once again. And one reason they worry is that the last time the yield curve became flat was late in 2006. Here’s how I described what happened to the yield curve in 2006 after the Fed started mechanically raising its Fed-funds target interest rate by 25 basis points every six weeks, beginning in June 2004.

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Despite the weakening economy, the Fed remained focused primarily on inflation. The Fed did begin cutting its Fed Funds target from 5.25% in late 2007 once the downturn began, but it was reluctant to move aggressively to counter a recession that worsened rapidly in the spring and summer of 2008, because it remained fixated on headline inflation, which was consistently higher than the Fed’s 2% target. But inflation was staying above the 2% target simply because of an ongoing supply shock that began in early 2006, when the price of oil was just over $50 a barrel; the price of oil rose steadily, apart from a short dip in late 2006 and early 2007, passing $100 a barrel in early 2008 and peaking at over $140 a barrel in July 2008.

The mistake of tightening monetary policy in response to a supply shock in the midst of a recession would have been egregious under any circumstances, but in the context of a seriously weakened and fragile financial system, the mistake was simply calamitous. And, indeed, the calamitous consequences of that decision are plain. But somehow the connection between the Fed’s focus on inflation, at a time when the economy was contracting and the financial system was in peril, and the ensuing financial collapse has never been fully recognized by most observers, and certainly not by the Federal Reserve officials who made those decisions. A few paragraphs later, Wolf observes:

Furthermore, because the banking systems had become so huge and intertwined, this became, in the words of Ben Bernanke — Fed chairman throughout the worst days of the crisis and a noted academic expert — the “worst financial crisis in global history, including the Great Depression”. The fact that the people who had been running the system had so little notion of these risks inevitably destroyed their claim to competence and, for some, even probity.

I will not agree or disagree with Bernanke that the 2008 crisis was worse than the 1929-30, 1931 or 1933 crises, but it appears that he and his fellow policymakers still have not fully understood their own role in precipitating the crisis. That is a story that remains to be told. I hope we don’t have to wait too much longer.

The Monumental Dishonesty and Appalling Bad Faith of Chief Justice Roberts’s Decision

Noah Feldman brilliantly exposes the moral rot underlying the horrific Supreme Court decision handed down today approving the Muslim ban – truly, as Feldman describes it, a decision that will live in infamy in the company of Dred Scott and Korematsu. Here are the key passages from Feldman’s masterful unmasking of the faulty reasoning of the Roberts opinion:

When Chief Justice Roberts comes to the topic of bias, he recounts Trump’s anti-Muslim statements and the history of the travel ban (this is the administration’s third version). Then he balks. “The issue before us is not whether to denounce the statements,” Roberts writes. Rather, Roberts insists, the court’s focus must be on “the significance of those statements in reviewing a presidential directive, neutral on its face, addressing the matter within the core of executive responsibility.”

That is lawyer-speak for saying that, despite its obviousness, the court would ignore Trump’s anti-Muslim bias. Roberts is trying to argue that, when a president is acting within his executive authority, the court should defer to what the president says his intention is, no matter the underlying reality.

That’s more or less what the Supreme Court did in the Korematsu case. There, Justice Hugo Black, a Franklin D. Roosevelt loyalist, denied that the orders requiring the internment of Japanese-Americans were based on racial prejudice. The dissenters, especially Justice Frank Murphy, pointed out that this was preposterous.

Justice Sonia Sotomayor, the court’s most liberal member, played the truth-telling role today. Her dissent, joined by Justice Ruth Bader Ginsburg, states bluntly that a reasonable observer looking at the record would conclude that the ban was “motivated by anti-Muslim animus.”

She properly invokes the Korematsu case — in which, she points out, the government also claimed a national security rationale when it was really relying on stereotypes. And she concludes that “our Constitution demands, and our country deserves, a Judiciary willing to hold the coordinate branches to account when they defy our most sacred legal commitments.”

Roberts tried to dodge the Korematsu comparison by focusing on the narrow text of the order, which, according to Roberts, on its own terms – absent the statements made by the author of the ban himself – is not facially discriminatory. Feldman skewers that attempt.

Roberts certainly knows the consequences of this decision. He tries to deflect the Korematsu comparison by saying that the order as written could have been enacted by any other president — a point that is irrelevant to the reality of the ban. Roberts also takes the opportunity to announce that Korematsu “was gravely wrong the day it was decided [and] has been overruled in the court of history.”

In another context, we might well be celebrating the fact that the Supreme Court had finally and expressly repudiated Korematsu, which it had never fully done before. Instead, Roberts’s declaration reads like a desperate attempt to change the subject. The truth is that this decision and Korematsu are a pair: Prominent instances where the Supreme Court abdicated its claim to moral leadership.

Following up Feldman, I just want to make it absolutely clear how closely, despite Roberts’s bad faith protestations to the contrary, the reasoning of his opinion follows the reasoning of the Korematsu court (opinion by Justice Black).

From the opinion of Chief Justice Roberts, attempting to counter the charge by Justice Sotomayor in her dissent that the majority was repeating the error of Korematsu.

Finally, the dissent invokes Korematsu v. United States, 323 U. S. 214 (1944). Whatever rhetorical advantage the dissent may see in doing so, Korematsu has nothing to do with this case. The forcible relocation of U. S. citizens to concentration camps, solely and explicitly on the basis of race, is objectively unlawful and outside the scope of Presidential authority. But it is wholly inapt to liken that morally repugnant order to a facially neutral policy denying certain foreign nationals the privilege of admission. See post, at 26–28. The entry suspension is an act that is well within executive authority and could have been taken by any other President—the only question is evaluating the actions of this particular President in promulgating an otherwise valid Proclamation.

This statement by the Chief Justice is monumentally false and misleading and utterly betrays either consciousness of wrongdoing or a culpable ignorance of the case he is presuming to distinguish from the one that he is deciding. Here is the concluding paragraph of Justice Black’s opinion in Korematsu.

It is said that we are dealing here with the case of imprisonment of a citizen in a concentration camp solely because of his ancestry, without evidence or inquiry concerning his loyalty and good disposition towards the United States. Our task would be simple, our duty clear, were this a case involving the imprisonment of a loyal citizen in a concentration camp because of racial prejudice.

Justice Black is explicitly denying that the Japanese American citizens being imprisoned were imprisoned because of racial prejudice.

Regardless of the true nature of the assembly and relocation centers — and we deem it unjustifiable to call them concentration camps, with all the ugly connotations that term implies — we are dealing specifically with nothing but an exclusion order.

And Justice Black denies that the Japanese Americans were sent to concentration camps.

To cast this case into outlines of racial prejudice, without reference to the real military dangers which were presented, merely confuses the issue.

Contrary to the assertion of Chief Justice Roberts, the Korematsu court did not hold that U.S. citizens were being relocated to concentration camps “solely and explicitly on the basis of race.” Justice Black explicitly rejected that contention, so Roberts’s attempt to distinguish his opinion from Justice Black’s majority opinion fails. Indeed, Justice Black based his decision on statutory authority given to the President by Congress, on the President’s inherent powers as Commander-in-Chief, and on his assessment of the military danger of an invasion of the West Coast by the Japanese.

Korematsu was not excluded from the Military Area because of hostility to him or his race. He was excluded because we are at war with the Japanese Empire, because the properly constituted military authorities feared an invasion of our West Coast and felt constrained to take proper security measures, because they decided that the military urgency of the situation demanded that all citizens of Japanese ancestry be segregated from the West Coast temporarily, and, finally, because Congress, reposing its confidence in this time of war in our military leaders — as inevitably it must — determined that they should have the power to do just this. There was evidence of disloyalty on the part of some, the military authorities considered that the need for action was great, and time was short. We cannot — by availing ourselves of the calm perspective of hindsight — now say that, at that time, these actions were unjustified.

In almost every particular, Justice Black’s decision employed the very same reasoning that the Chief Justice now employs to uphold the travel ban. Justice Black argued that the relocation could have been motivated by reasons of national security, just as Chief Justice Roberts now argues that the travel ban was motivated by reasons of national security. Justice Black argued that the military must be trusted to make decisions about which citizens might be disloyal and could pose a national security threat in time of war, just as Chief Justice Roberts now argues that the President must be allowed to make national security decisions about who may enter the United States from abroad. Neither Justice Black then nor Chief Justice Roberts now was prepared to say that singling out a group based on race or religion was unjustified.

The only distinction between the cases is that Korematsu concerned the rights of American citizens not to be imprisoned without due process, and the travel ban primarily affects the rights of non-resident aliens. Clearly an important distinction, but the rights of American citizens and resident aliens are also implicated. Their rights to be free from religious discrimination are also at issue, and those rights may not be lightly disregarded.

Chief Justice Roberts concludes by attempting to distract attention from the glaring similarities between his own decision and Justice Black’s in Korematsu.

The dissent’s reference to Korematsu, however, affords this Court the opportunity to make express what is already obvious: Korematsu was gravely wrong the day it was decided, has been overruled in the court of history, and—to be clear—“has no place in law under the Constitution.” (Jackson, J., dissenting).

But in doing so, Chief Justice Roberts only provides further evidence of his own consciousness of wrongdoing and his stunning display of bad faith.

Who’s Afraid of a Flattening Yield Curve?

Last week the Fed again raised its benchmark Federal Funds rate target, now at 2%, up from the 0.25% rate that had been maintained steadily from late 2008 until late 2015, when the Fed, after a few false starts, finally worked up the courage — or caved to the pressure of the banks and the financial community — to start raising rates. The Fed also signaled its intention last week to continue raising rates – presumably at 0.25% increments – at least twice more this calendar year.

Some commentators have worried that rising short-term interest rates are outpacing increases at the longer end, so that the normally positively-sloped yield curve is flattening. They point out that historically flat or inverted yield curves have often presaged an economic downturn or recession within a year.

What accounts for the normally positive slope of the yield curve? It’s usually attributed to the increased risk associated with a lengthening of the duration of a financial instrument, even if default risk is zero. The longer the duration of a financial instrument, the more sensitive the (resale) value of the instrument to changes in the rate of interest. Because risk falls as the duration of the instrument is shortened, risk-averse asset-holders are willing to accept a lower return on short-dated claims than on riskier long-dated claims.
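To make the duration point concrete, here is a minimal sketch in Python (illustrative numbers of my own, not anything from the post) showing how much more a one-point rise in yields hurts a long-dated zero-coupon claim than a short-dated one:

def zero_price(y, T):
    # value today of a claim to 100 in T years, discounted at annual yield y
    return 100 / (1 + y) ** T

for T in (0.25, 2, 10, 30):                  # maturities in years
    p0 = zero_price(0.03, T)                 # price at a 3% yield
    p1 = zero_price(0.04, T)                 # price after a one-point rise to 4%
    print(f"T = {T:>5} years: price falls {100 * (p0 - p1) / p0:.2f}%")

The three-month claim loses roughly 0.24% of its value, while the thirty-year claim loses roughly 25%; that difference in exposure is the duration risk for which holders of long-dated claims demand a compensating premium.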

If the Fed continues on its current course, it’s likely that the yield curve will flatten or become inverted – sloping downward instead of upward – a phenomenon that has frequently presaged recessions within about a year. So the question I want to think through in this post is whether there is anything inherently recessionary about a flat or inverted yield curve, or whether the correlation between recessions and inverted yield curves is merely coincidental.

The beginning of wisdom in this discussion is the advice of Scott Sumner: never reason from a price change. A change in the slope of the yield curve reflects a change in price relationships. Any given change in price relationships can reflect a variety of possible causes, and the ultimate effects of those various underlying causes, e.g., an inverted yield curve, need not be the same. So we can’t take it for granted that all yield-curve inversions are created equal; just because yield-curve inversions have sometimes, or usually, or always, preceded recessions doesn’t mean that recessions must necessarily follow once the yield curve becomes inverted.

Let’s try to sort out some of the possible causes of an inverted yield curve, and see whether those causes are likely to result in a recession if the yield curve remains flat or inverted for a substantial period of time. But it’s also important to realize that the shape of the yield curve reflects a myriad of possible causes in a complex economic system. The yield curve summarizes expectations about the future that are deeply intertwined in the intertemporal structure of an economic system. Interest rates aren’t simply prices determined in specific markets for debt instruments of various durations; interest rates reflect the opportunities to exchange current goods for future goods or to transform current output into future output. Interest rates are actually distillations of relationships between current prices and expected future prices that govern the prices and implied yields at which debt instruments are bought and sold. If the interest rates on debt instruments are out of line with the intricate web of intertemporal price relationships that exist in any complex economy, those discrepancies imply profitable opportunities for exchange and production that tend to eliminate the discrepancies. Interest rates are not set in a vacuum; they are a reflection of innumerable asset valuations and investment opportunities. So there are innumerable possible causes that could lead to the flattening or inversion of the yield curve.

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and in increased demands by the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only (a) by the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) by the discretionary action of a monetary authority that can issue additional units of fiat currency.

Let’s consider first what would happen in case of an increased demand for liquidity by the public. Such an increased demand could have two possible causes. (There might be others, of course, but these two seem fairly commonplace.)

First, the price expectations on which one or more significant sectors of the economy made their investments turn out to have been overly optimistic (or, alternatively, the investments were made on overly optimistic expectations of low input prices). Given the commitments made on the basis of optimistic expectations, realized sales or revenues then fall short of what those firms require to service their debt obligations. To meet those obligations, firms may seek short-term loans to cover the shortfall in earnings relative to expectations. Potential lenders, including the banking system, who may already be holding the debt of such firms, must then decide whether to continue extending credit to these firms in hopes that prices will rebound to what they had been expected to be (or that borrowers will be able to cut costs sufficiently to survive if prices don’t recover), or to cut their losses by ceasing to lend further.

The short-run demand for credit will tend to raise short-term rates relative to long-term rates, causing the yield curve to flatten. And the more serious the short-term need for liquidity, the flatter or more inverted the yield curve becomes. In such a period of financial stress, the potential for significant failures of firms that can’t service their financial obligations is an indication that an economic downturn or a recession is likely, so that the extent to which the yield curve flattens or becomes inverted is a measure of the likelihood that a downturn is in the offing.

Aside from sectoral problems affecting particular industries or groups of industries, the demand for liquidity might increase owing to a generalized increase in uncertainty that causes entrepreneurs to hold back from making investments (dampening animal spirits). This is often a response during and immediately following a recession, when the current state of economic activity and uncertainty about its future state discourage entrepreneurs from making investments whose profitability depends on the magnitude and scope of the future recovery. In that case, an increasing demand for liquidity causes firms to hoard their profits as cash rather than undertake new investments, because expected demand is not sufficient to justify commitments that would be remunerative only if future demand exceeds some threshold. Such a flattening of the yield curve can be mitigated if the monetary authority makes liquidity cheaply available by cutting short-term rates to very low levels or even to zero, as the Fed did when it adopted its quantitative-easing policies after the 2008-09 downturn, thereby supporting a recovery, a modest one to be sure, but still a stronger recovery than occurred in Europe after the European Central Bank prematurely raised short-term interest rates.

Such an episode occurred in 2002-03, after the 9/11 attacks on the US. The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership there, the Bush administration was planning a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries, in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration, and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003, created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the administration were clarified and executed and their consequences assessed.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate that demand, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession. The weakness of the recovery is reflected in the modest rate of increase in nominal spending, averaging about 3.9%, and not exceeding 5.1% in any of the seven quarters from 2001-IV, when the recession ended, until 2003-II, when the Saddam Hussein regime was toppled.

Quarter      % change in NGDP
2001-IV      2.34%
2002-I       5.07%
2002-II      3.76%
2002-III     3.80%
2002-IV      2.44%
2003-I       4.63%
2003-II      5.10%
2003-III     9.26%
2003-IV      6.76%
2004-I       5.94%
2004-II      6.60%
2004-III     6.26%
2004-IV      6.44%
2005-I       8.25%
2005-II      5.10%
2005-III     7.33%
2005-IV      5.44%
2006-I       8.23%
2006-II      4.50%
2006-III     3.19%
2006-IV      4.62%
2007-I       4.83%
2007-II      5.42%
2007-III     4.15%
2007-IV      3.21%

The apparent success of the American invasion in the second quarter of 2003 was matched by a quickening expansion from 2003-III through 2006-I, nominal GDP increasing at a 6.8% annual rate over those 11 quarters. As the economy recovered, and spending began increasing rapidly, the Fed gradually raised its Fed Funds target by 25 basis points about every six weeks starting at the end of June 2004, so that in early 2006, the Fed Funds target rate reached 4.25%, peaking at 5.25% in July 2006, where it remained till September 2007. By February 2006, the yield on 3-month Treasury bills reached the yield on 10-year Treasuries, so that the yield curve had become essentially flat, remaining so until October 2008, soon after the start of the financial crisis. Indeed, for most of 2006 and 2007, the Fed Funds target was above the yield on three-month Treasury bills, implying a slight inversion at the short-end of the yield curve, suggesting that the Fed was exacting a slight liquidity surcharge on overnight reserves and that there was a market expectation that the Fed Funds target would be reduced from its 5.25% peak.
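As a quick check (my own arithmetic, using the quarterly figures from the table above), the averages cited in the surrounding paragraphs can be recomputed directly; simple means of the annualized quarterly rates come out close to the compounded annual rates quoted in the text:

recovery = [2.34, 5.07, 3.76, 3.80, 2.44, 4.63, 5.10]   # 2001-IV through 2003-II
expansion = [9.26, 6.76, 5.94, 6.60, 6.26, 6.44,        # 2003-III through 2006-I
             8.25, 5.10, 7.33, 5.44, 8.23]

print(f"2001-IV to 2003-II: {sum(recovery) / len(recovery):.1f}% average")    # about 3.9%
print(f"2003-III to 2006-I: {sum(expansion) / len(expansion):.1f}% average")  # about 6.9% as a simple mean,
                                                                              # close to the 6.8% compounded rate cited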

The Fed was probably tardy in waiting until June 2004 to begin increasing its Fed Funds target, nominal spending having increased in 2003-III at an annual rate above 9%, and in the next three quarters at an average annual rate of about 6.5%. In 2005, while the Fed was in auto-pilot mode, automatically raising its Fed Funds target 25 basis points every six weeks, nominal spending continued to increase at a roughly 6% annual rate, the increases becoming slightly more erratic, fluctuating between 5.1% and 8.3%. But by the second quarter of 2006, when the Fed Funds target rose to 5%, the rate of increase in spending slowed to an average of just over 4%, and to just under 5% in the first three quarters of 2007.

While the rate of increase in spending slowed to less than 5% in the second quarter of 2006, as the yield curve flattened and the Fed Funds target peaked at 5.25%, housing prices also peaked, and concerns about financial stability began to be voiced. The chart below shows the yields on 10-year constant-maturity Treasuries and the yield on 3-month Treasury bills, the two key market rates at opposite ends of the yield curve.

The yields on the two instruments became nearly equal in early 2006, and, with slight variations, remained so till the onset of the financial crisis in September 2008. In retrospect, at least, the continued increases in the Fed Funds rate target seem to have been extremely ill-advised, perhaps triggering the downturn that started at the end of 2007, and leading nine months later to the financial crisis of 2008.

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Responding to signs of economic weakness and falling long-term rates, the Fed did lower its Fed Funds target late in 2007, cutting the Fed Funds target several more times in early 2008. In May 2008, the Fed reduced the target to 2%, but the yield curve remained flat, because the Fed, consistently underestimating the severity of the downturn, kept signaling its concern with inflation, thereby suggesting that an increase in the target might be in the offing. So, even as it reduced its Fed Funds target, the Fed kept the yield curve nearly flat until, and even after, the start of the financial crisis in September 2008, thereby maintaining an excessive liquidity premium while the demand for liquidity was intensifying as total spending contracted rapidly in the third quarter of 2008.

To summarize this discussion of the liquidity premium and the yield curve during the 2001-08 period, the Fed appropriately steepened the yield curve right after the 2001 recession and the 9/11 attacks, but was slow to normalize the slope of the yield curve after the US invasion of Iraq in the second quarter of 2003. When it did begin to normalize the yield curve in a series of automatic 25-basis-point increases in its Fed Funds target rate, the Fed was again slow to reassess the effects of the policy as the yield curve flattened in 2006. Thus by 2006, the Fed had effectively implemented a tight monetary policy in the face of rising demands for liquidity just as the bursting of the housing bubble in mid-2006 began to subject the financial system to steadily increasing stress. The implications of a flat or slightly inverted yield curve were ignored or dismissed by the Fed for at least two years, until after the financial panic and crisis in September 2008.

At the beginning of the 2001-08 period, the Fed seemed to be aware that an unusual demand for liquidity justified a policy response to increase the supply of liquidity by reducing the Fed Funds target and steepening the yield curve. But, at the end of the period, the Fed was unwilling to respond to increasing demands for liquidity and instead allowed a flat yield curve to remain in place even when the increasing demand for liquidity was causing a slowdown in aggregate spending growth. One possible reason for the asymmetric response of the Fed to increasing liquidity demands in 2002 and 2006 is that the Fed was sensitive to criticism that, by holding short-term rates too low for too long, it had promoted and prolonged the housing bubble. Even if the criticism contained some element of truth, the Fed’s refusal to respond to increasing demands for liquidity in 2006 was tragically misguided.

The current Fed’s tentative plan to keep increasing the Fed Funds target seems less unreflective than the nearly mindless schedule followed by the Fed from mid-2004 to mid-2006. However, the Fed is playing a weaker hand now than it did in 2004. Nominal GDP has been increasing at a very lackluster annual rate of about 4-4.5% for the past two years. Certainly, further increases in the Fed Funds target would not be warranted if the rate of growth in nominal GDP is any less than 4%, or if the yield curve should flatten for some other reason, like a decline in interest rates at the longer end of the yield curve. Caution, possible inversion ahead.

Keynes and the Fisher Equation

The History of Economics Society is holding its annual meeting in Chicago from Friday June 15 to Sunday June 17. Bringing together material from a number of posts over the past five years or so about Keynes and the Fisher equation and the Fisher effect, I will be presenting a new paper called “Keynes and the Fisher Equation.” Here is the abstract of my paper.

One of the most puzzling passages in the General Theory is the attack (GT p. 142) on Fisher’s distinction between the money rate of interest and the real rate of interest “where the latter is equal to the former after correction for changes in the value of money.” Keynes’s attack on the real/nominal distinction is puzzling on its own terms, inasmuch as the distinction is a straightforward and widely accepted one that was hardly unique to Fisher, having been advanced as a fairly obvious proposition by many earlier economists, including Marshall. What makes Keynes’s criticism even more problematic is that Keynes’s own celebrated theorem in the Tract on Monetary Reform about covered interest arbitrage is merely an application of Fisher’s reasoning in Appreciation and Interest. Moreover, Keynes endorsed Fisher’s distinction in the Treatise on Money. But even more puzzling is that Keynes’s analysis in Chapter 17 demonstrates that in equilibrium the returns on alternative assets must reflect the differences in their expected rates of appreciation. Thus Keynes himself, in the General Theory, endorsed the essential reasoning underlying the distinction between the real and the money rates of interest. The solution to the puzzle lies in understanding the distinction between the relationship between the real and nominal rates of interest at a moment in time and the effects of a change in expected rates of appreciation that displaces an existing equilibrium and leads to a new equilibrium. Keynes’s criticism of the Fisher effect must be understood in the context of his criticism of the idea of a unique natural rate of interest, which implicitly identified the Fisherian real rate with a unique natural rate.
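For reference, the Fisher equation at issue can be written in standard notation (the notation here is mine, not a quotation from either Fisher or Keynes):

\[
(1 + i) = (1 + r)(1 + \pi^{e}), \qquad \text{or, approximately,} \qquad i \approx r + \pi^{e},
\]

where \(i\) is the money (nominal) rate of interest, \(r\) the real rate, and \(\pi^{e}\) the expected rate of inflation. Note that at the zero lower bound, with \(i\) fixed at zero, an increase in expected deflation (a fall in \(\pi^{e}\)) must raise \(r \approx i - \pi^{e}\); that is the case taken up in the concluding section below.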

And here is the concluding section of my paper.

Keynes’s criticisms of the Fisher effect, especially of the facile assumption that changes in inflation expectations are reflected mostly, if not entirely, in nominal interest rates – an assumption for which neither Fisher himself nor subsequent researchers have found much empirical support – were grounded in well-founded skepticism about the claim that changes in expected inflation leave the real interest rate unaffected. A Fisherian analysis of an increase in expected deflation at the zero lower bound shows that the burden of the adjustment must be borne by an increase in the real interest rate. Of course, such a scenario might be dismissed as a special case, which it certainly is, but I very much doubt that it is the only case in which a change in expected inflation or deflation affects the real as well as the nominal interest rate.

Although Keynes’s criticism of the Fisher equation (or, more precisely, of the conventional simplistic interpretation of it) was not well argued, his intuition was sound. And in his contribution to the Fisher festschrift, Keynes (1937b) correctly identified the two key assumptions leading to the conclusion that changes in inflation expectations are reflected entirely in nominal interest rates: (1) a unique real equilibrium and (2) the neutrality (actually superneutrality) of money. Keynes’s intuition was confirmed by Hirshleifer (1970, 135-38), who derived the Fisher equation as a theorem by performing a comparative-statics exercise in a two-period general-equilibrium model with money balances, in which the money stock in the second period was increased by an exogenous shift factor k. The price level in the second period increases by a factor of k, and the nominal interest rate increases as well by a factor of k, with no change in the real interest rate.

But typical Keynesian and New Keynesian macromodels based on the assumption of no capital or a single capital good drastically oversimplify the analysis, because those highly aggregated models assume that the determination of the real interest rate takes place in a single market. The market-clearing assumption invites the conclusion that the rate of interest, like any other price, is determined by the equality of supply and demand – both of which are functions of that price – in that market.

The equilibrium rate of interest, as C. J. Bliss (1975) explains in the context of an intertemporal general-equilibrium analysis, is not a price; it is an intertemporal rate of exchange characterizing the relationships between all equilibrium prices and expected equilibrium prices in the current and future time periods. To say that the interest rate is determined in any single market, e.g., a market for loanable funds or a market for cash balances, is, at best, a gross oversimplification, verging on fallaciousness. The interest rate or term structure of interest rates is a reflection of the entire intertemporal structure of prices, so a market for something like loanable funds cannot set the rate of interest at a level inconsistent with that intertemporal structure of prices without disrupting and misaligning that structure of intertemporal price relationships. The interest rates quoted in the market for loanable funds are determined and constrained by those intertemporal price relationships, not the other way around.

In the real world, in which current prices, future prices and expected future prices are almost certainly never in an equilibrium relationship with each other, there is always some scope for second-order variations in the interest rates transacted in markets for loanable funds, but those variations are still tightly constrained by the existing intertemporal relationships between current, future and expected future prices. Because the conditions under which Hirshleifer derived his theorem demonstrating that changes in expected inflation are fully reflected in nominal interest rates are not satisfied in the real world, there is no basis for assuming that a change in expected inflation affects only nominal interest rates with no effect on real rates.

There is probably a huge range of possible scenarios of how changes in expected inflation could affect nominal and real interest rates. While one should not disregard the Fisher equation as one possibility, it seems completely unwarranted to assume that it describes the most plausible scenario in any actual situation. If we read Keynes at the end of his marvelous Chapter 17 of the General Theory, in which he remarks that he has abandoned the belief he had once held in the existence of a unique natural rate of interest, and has come to believe that there are really different natural rates corresponding to different levels of unemployment, we see that he was indeed, notwithstanding his detour toward a pure liquidity-preference theory of interest, groping his way toward a proper understanding of the Fisher equation.

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest – namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s “natural rate of interest,” which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the “natural” rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus, it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. . . .

If there is any such rate of interest, which is unique and significant, it must be the rate which we might term the neutral rate of interest, namely, the natural rate in the above sense which is consistent with full employment, given the other parameters of the system; though this rate might be better described, perhaps, as the optimum rate. (pp. 242-43)

Because Keynes believed that an increase in the expected future price level implies an increase in the marginal efficiency of capital, it follows that an increase in expected inflation under conditions of less than full employment would increase investment spending and employment, thereby raising the real rate of interest as well as the nominal rate. Cottrell (1994) has attempted to make an argument along such lines within a traditional IS-LM framework. I believe that, in a Fisherian framework, my argument points in a similar direction.

 

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism,” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann and had actually been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely – and in my view mistakenly – regarded as an essential pillar of the nineteenth-century liberal order – the international gold standard – collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler – less relentlessly laissez-faire – version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two ambitious young Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, who was then a first-term Senator from New Jersey, having graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980, which were subsequently enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income-tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making, thereby improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy; neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism that favored highly progressive income taxation to redistribute income from rich to poor and price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported with little evidence of embarrassment by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and by the decade-long rotting residue of Nixon’s controls — controls on crude oil and gasoline prices — finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform, and establishing Bradley and Gephardt as major figures in the Democratic Party, it was never embraced by the Democratic grassroots. Gephardt himself abandoned the neoliberal banner in 1988 when he ran for President as a protectionist, pro-Labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse with which leftists impugn the motives of their ideological and political opponents, in exactly the same way that right-wingers use “liberal” – one of many such terms of abuse, of course – to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing than they are to tolerate the outcomes of market forces.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that both were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward – pardon the cliché – a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end, not a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily, not because you require him to do so. If you want that assistance you must secure it not by command but by persuasion. Persuasion can be achieved in two ways: either by argument – persuading the other person to share your objective – or, if you can’t, or won’t, persuade the person to share your objective, by offering some form of compensation to induce the person to provide you the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like rules of grammar. Just as rules of grammar don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that they allow agents sufficient space to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life was as much a legal philosopher as an economist, but it is a view that John Rawls, Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by the laws, because the ruler can change the law to permit any action that the ruler wants to take. How, then, is the rule of law consistent with a ruler who is empowered to make the law to which he is supposedly subject? That is the dilemma that every liberal state must cope with. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling when reducing those rates would likely entail little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions in lower tax brackets, which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income-tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income-tax rates as inherently discriminatory by imposing a disproportionate burden on a minority — the wealthy — of the population. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could allow to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for other reasons unrelated to income, but even a highly progressive income tax can’t be discriminatory on its face, as Hayek alleged, when the progressivity is embedded in a schedule of rates applicable to everyone whose income reaches the specified thresholds.

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are unnecessary to call forth the efforts required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort or output. We also know, as a result of a classic 1971 paper by Jack Hirshleifer, that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. The taxation of high incomes having been reduced so drastically, the number of people earning very high incomes since 1986 has grown very rapidly. Does that increase reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, the occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes has risen, as has the social acceptability of conspicuous — even brazen — public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.

Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not been submitted yet to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the Colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents can make inferences about the beliefs of other agents when observed prices differ from the prices that the agents had expected. Agents attribute the differences between observed and expected prices to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information on the basis of which other agents are trading, thereby causing prices to differ from the prices that had been expected, does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables, there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.

Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth (1961), he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on a realistic assessment of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
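The contrast can be simulated in a few lines of Python (a sketch with illustrative linear supply and demand schedules of my own choosing, not Muth’s actual model):

# Cobweb model: output is committed one period ahead on the basis of an expected price.
# Demand:  P_t = a - b * Q_t       (realized price clears the market)
# Supply:  Q_t = c + d * E[P_t]    (production decided before P_t is observed)

a, b, c, d = 10.0, 1.0, 1.0, 0.8   # with b*d < 1 the boom-bust oscillations damp out

p_star = (a - b * c) / (1 + b * d) # the price at which expectations are self-fulfilling

def simulate(expect, p0=8.0, periods=8):
    prices, p_prev = [], p0
    for _ in range(periods):
        q = c + d * expect(p_prev) # output chosen under the given expectation rule
        p = a - b * q              # realized market-clearing price
        prices.append(round(p, 2))
        p_prev = p
    return prices

print("static expectations:  ", simulate(lambda p: p))      # alternates above and below 5.0
print("rational expectations:", simulate(lambda p: p_star)) # settles immediately at 5.0

Under static expectations the simulated price ricochets above and below the equilibrium value of 5.0, the boom-bust alternation just described; under Muth’s assumption producers expect the price the model itself implies, and the price settles there at once.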

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – one that Hayek understood better than perhaps anyone else — is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key — but far from the only — error of the rational-expectations methodology that rules modern macroeconomics is the idea that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering, processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that the outcome of a process of information revision based on the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that lead to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to utter a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow resulting in equilibrium is nothing but a category error, akin to thinking that a triangle is caused by its angles adding up to 180 degrees. The 180-degree sum of the angles of a triangle doesn’t cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices except one are known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected — with apparently sincere methodological fervor — by modern macroeconomics is merely self-delusional or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting distinction, but a distinction lacking any practical significance.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is and the difficulties inherent in explaining any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

 


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
