Archive for the 'Jack Hirshleifer' Category

What’s Wrong with DSGE Models Is Not Representative Agency

The basic DSGE macroeconomic model taught to students is based on a representative agent. Many critics of modern macroeconomics and DSGE models have therefore latched on to the representative agent as the key – and disqualifying – feature of DSGE models and, by extension, of modern macroeconomics. Criticism of representative-agent models is certainly appropriate, because, as Alan Kirman admirably explained some 25 years ago, the simplification inherent in a macroeconomic model based on a representative agent renders the model entirely inappropriate and unsuitable for most of the problems that a macroeconomic model might be expected to address, like explaining why economies might suffer from aggregate fluctuations in output, employment and the price level.

While altogether fitting and proper, criticism of the representative-agent model in macroeconomics had an unfortunate unintended consequence: it focused attention on representative agency rather than on the deeper problems with DSGE models, problems that cannot be solved by just throwing the Representative Agent under the bus.

Before explaining why representative agency is not the root problem with DSGE models, let’s take a moment or two to talk about where the idea of representative agency comes from. The idea can be traced back to F. Y. Edgeworth who, in his exposition of the ideas of W. S. Jevons – one of the three marginal revolutionaries of the 1870s – introduced two “representative particulars” to illustrate how trade could maximize the utility of each particular subject to the benchmark utility of the counterparty. That analysis of two different representative particulars, reflected in what is now called the Edgeworth Box, remains one of the outstanding achievements and pedagogical tools of economics. (See a superb account of the historical development of the Box and the many contributions to economic theory that it facilitated by Thomas Humphrey). But Edgeworth’s analysis and its derivatives always focused on the incentives of two representative agents rather than a single isolated representative agent.

Only a few years later, Alfred Marshall in his Principles of Economics, offered an analysis of how the equilibrium price for the product of a competitive industry is determined by the demand for (derived from the marginal utility accruing to consumers from increments of the product) and the supply of that product (derived from the cost of production). The concepts of the marginal cost of an individual firm as a function of quantity produced and the supply of an individual firm as a function of price not yet having been formulated, Marshall, in a kind of hand-waving exercise, introduced a hypothetical representative firm as a stand-in for the entire industry.

The completely ad hoc and artificial concept of a representative firm was not well-received by Marshall’s contemporaries, and the young Lionel Robbins, starting his long career at the London School of Economics, subjected the idea to withering criticism in a 1928 article. Even without Robbins’s criticism, the development of the basic theory of a profit-maximizing firm quickly led to the disappearance of Marshall’s concept from subsequent economics textbooks. James Hartley wrote about the short and unhappy life of Marshall’s Representative Firm in the Journal of Economic Perspectives.

One might have thought that the inauspicious career of Marshall’s Representative Firm would have discouraged modern macroeconomists from resurrecting the Representative Firm in the barely disguised form of a Representative Agent in their DSGE models, but the convenience and relative simplicity of solving a DSGE model for a single agent was too enticing to be resisted.

Therein lies the difference between the theory of the firm and a macroeconomic theory. Whatever gain in convenience the Representative Firm afforded was soon eliminated by Marshall’s Cambridge students and successors, who, dispensing with the representative firm, provided a more rigorous, more satisfying and more flexible exposition of the industry supply curve and the corresponding partial-equilibrium analysis than Marshall had achieved with it. Providing no advantages of realism, logical coherence, analytical versatility or heuristic intuition, the Representative Firm was unceremoniously expelled from the polite company of economists.

However, as a heuristic device for portraying certain properties of an equilibrium state – whose existence is assumed, not derived – even a single representative individual or agent proved to be a serviceable device with which to display the defining first-order conditions: the simultaneous equality of the marginal rates of substitution in consumption and of transformation in production with the rate at which goods exchange at market prices. Unlike the Edgeworth Box, populated by two representative agents whose different endowments or preference maps result in mutually beneficial trade, the representative agent, even if afforded the opportunity to trade, can find no gain from engaging in it.

An excellent example of this heuristic was provided by Jack Hirshleifer in his 1970 textbook Investment, Interest, and Capital, wherein he adapted the basic Fisherian model of intertemporal consumption, production and exchange opportunities, representing the canonical Fisherian exposition in a single basic diagram. But the representative agent necessarily represents a state of no trade, because, for a single isolated agent, production and consumption must coincide, and the equilibrium price vector must have the property that the representative agent chooses not to trade at that price vector. I reproduce Hirshleifer’s diagram (Figure 4-6) in the attached chart.

Here is how Hirshleifer explained what was going on.

Figure 4-6 illustrates a technique that will be used often from now on: the representative-individual device. If one makes the assumption that all individuals have identical tastes and are identically situated with respect to endowments and productive opportunities, it follows that the individual optimum must be a microcosm of the social equilibrium. In this model the productive and consumptive solutions coincide, as in the Robinson Crusoe case. Nevertheless, market opportunities exist, as indicated by the market line M’M’ through the tangency point P* = C*. But the price reflected in the slope of M’M’ is a sustaining price, such that each individual prefers to hold the combination attained by productive transformations rather than engage in market transactions. The representative-individual device is helpful in suggesting how the equilibrium will respond to changes in exogenous data—the proviso being that such changes do not modify the distribution of wealth among individuals.

While not spelling out the limitations of the representative-individual device, Hirshleifer makes it clear that the device is being used as an expository technique to describe, not as an analytical tool to determine, intertemporal equilibrium. The existence of intertemporal equilibrium does not depend on the assumptions necessary to allow a representative individual to serve as a stand-in for all other agents. The representative individual is portrayed only to provide the student with a special case serving as a visual aid with which to gain an intuitive grasp of the necessary conditions characterizing an intertemporal equilibrium in production and consumption.
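Hirshleifer’s no-trade property can be checked numerically. The sketch below is my own construction, not Hirshleifer’s: it assumes a hypothetical log-utility, square-root-technology version of the two-period Fisherian model and verifies that, at the sustaining price, the consumption the representative individual demands along the market line coincides with what he produces, so that the equilibrium involves no trade.

```python
import math

# A numerical sketch of the no-trade property of the representative-individual
# device. The functional forms and numbers are my own (hypothetical) choices,
# not Hirshleifer's: log utility ln(c0) + beta*ln(c1), and a technology that
# turns (w - c0) units invested today into A*sqrt(w - c0) units tomorrow.

w, A, beta = 10.0, 4.0, 0.9   # endowment, productivity, time preference

# The Crusoe (autarky) optimum solves MRS = MRT; for these functional forms
# the first-order condition reduces to c0* = 2w / (2 + beta).
c0_star = 2 * w / (2 + beta)
c1_star = A * math.sqrt(w - c0_star)

# The sustaining price is the marginal rate of transformation at the optimum:
# one unit of present consumption exchanges for 1 + r* units of future
# consumption, with 1 + r* = A / (2 * sqrt(w - c0_star)).
gross_r = A / (2 * math.sqrt(w - c0_star))

# Now offer the agent trade along the market line M'M' at that price. With
# log utility, desired current consumption is wealth / (1 + beta), where
# wealth is valued in period-0 units.
wealth = c0_star + c1_star / gross_r
c0_demand = wealth / (1 + beta)

# Desired consumption coincides with own production: P* = C*, so the
# representative individual chooses not to trade at the sustaining price.
print(round(c0_star, 4), round(c0_demand, 4))
```

The same calculation shows why the device is only a visual aid for macro questions: by construction, no agent ever wants to trade at the equilibrium price, so nothing resembling a coordination failure can arise within it.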

But the role of the representative agent in the DSGE model is very different from that of the representative individual in Hirshleifer’s exposition of the canonical Fisherian theory. In Hirshleifer’s exposition, the representative individual is just a special case and a visual aid with no independent analytical importance. In contrast to Hirshleifer’s deployment of the representative individual, the representative agent in the DSGE model is used as an assumption whereby an analytical solution to the DSGE model can be derived, allowing the modeler to generate quantitative results to be compared with existing time-series data, to generate forecasts of future economic conditions, and to evaluate the effects of alternative policy rules.

The prominent and dubious role of the representative agent in DSGE models provided a convenient target at which critics of DSGE models could direct their criticisms. In Congressional testimony, Robert Solow famously attacked DSGE models and used their reliance on the representative agent to make them seem, well, simply ridiculous.

Most economists are willing to believe that most individual “agents” – consumers, investors, borrowers, lenders, workers, employers – make their decisions so as to do the best that they can for themselves, given their possibilities and their information. Clearly they do not always behave in this rational way, and systematic deviations are well worth studying. But this is not a bad first approximation in many cases. The DSGE school populates its simplified economy – remember that all economics is about simplified economies just as biology is about simplified cells – with exactly one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever. One important consequence of this “representative agent” assumption is that there are no conflicts of interest, no incompatible expectations, no deceptions.

This all-purpose decision-maker essentially runs the economy according to its own preferences. Not directly, of course: the economy has to operate through generally well-behaved markets and prices. Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.

An obvious example is that the DSGE story has no real room for unemployment of the kind we see most of the time, and especially now: unemployment that is pure waste. There are competent workers, willing to work at the prevailing wage or even a bit less, but the potential job is stymied by a market failure. The economy is unable to organize a win-win situation that is apparently there for the taking. This sort of outcome is incompatible with the notion that the economy is in rational pursuit of an intelligible goal. The only way that DSGE and related models can cope with unemployment is to make it somehow voluntary, a choice of current leisure or a desire to retain some kind of flexibility for the future or something like that. But this is exactly the sort of explanation that does not pass the smell test.

While Solow’s criticism of the representative agent was correct, he left himself open to an effective rejoinder by defenders of DSGE models who could point out that the representative agent was adopted by DSGE modelers not because it was an essential feature of the DSGE model but because it enabled DSGE modelers to simplify the task of analytically solving for an equilibrium solution. With enough time and computing power, however, DSGE modelers were able to write down models with a few heterogeneous agents (themselves representative of particular kinds of agents in the model) and then crank out an equilibrium solution for those models.

Unfortunately for Solow, V. V. Chari also testified at the same hearing, and he responded directly to Solow, denying that DSGE models necessarily entail the assumption of a representative agent and identifying numerous examples even in 2010 of DSGE models with heterogeneous agents.

What progress have we made in modern macro? State of the art models in, say, 1982, had a representative agent, no role for unemployment, no role for financial factors, no sticky prices or sticky wages, no role for crises and no role for government. What do modern macroeconomic models look like? The models have all kinds of heterogeneity in behavior and decisions. This heterogeneity arises because people’s objectives differ; they differ by age, by information, by the history of their past experiences. Please look at the seminal work by Rao Aiyagari, Per Krusell and Tony Smith, Tim Kehoe and David Levine, Victor Rios Rull, Nobu Kiyotaki and John Moore. All of them . . . prominent macroeconomists at leading departments . . . much of their work is explicitly about models without representative agents. Any claim that modern macro is dominated by representative-agent models is wrong.

So on the narrow question of whether DSGE models are necessarily members of the representative-agent family, Solow was debunked by Chari. But debunking the claim that DSGE models must be representative-agent models doesn’t mean that DSGE models have the basic property that some of us at least seek in a macro-model: the capacity to explain how and why an economy may deviate from a potential full-employment time path.

Chari actually addressed the charge that DSGE models cannot explain lapses from full employment (to use Pigou’s rather anodyne terminology for depressions). Here is Chari’s response:

In terms of unemployment, the baseline model used in the analysis of labor markets in modern macroeconomics is the Mortensen-Pissarides model. The main point of this model is to focus on the dynamics of unemployment. It is specifically a model in which labor markets are beset with frictions.

Chari’s response was thus to treat lapses from full employment as “frictions.” To treat unemployment as the result of one or more frictions is to take a very narrow view of the potential causes of unemployment. The argument that Keynes made in the General Theory was that unemployment is a systemic failure of a market economy, which lacks an error-correction mechanism capable of returning the economy to a full-employment state, at least within a reasonable period of time.

The basic approach of DSGE is to treat the solution of the model as the optimal solution of a problem. In the representative-agent version of a DSGE model, the optimal solution is the optimal solution for a single agent, so optimality is already baked into the model. With heterogeneous agents, the solution of the model is a set of mutually consistent optimal plans, and optimality is baked into that heterogeneous-agent DSGE model as well. Sophisticated heterogeneous-agent models can incorporate various frictions and constraints that cause the solution to deviate from a hypothetical frictionless, unconstrained first-best optimum.

The policy message emerging from this modeling approach is that unemployment is attributable to frictions and other distortions that prevent the economy from reaching a first-best optimum that would be achieved automatically in their absence. The possibility that the optimal plans of individuals might be incompatible, resulting in a systemic breakdown – that there could be a failure to coordinate – does not even come up for discussion.

One needn’t accept Keynes’s own theoretical explanation of unemployment to find the attribution of cyclical unemployment to frictions deeply problematic. But, as I have asserted in many previous posts (e.g., here and here), a modeling approach that excludes a priori any systemic explanation of cyclical unemployment, attributing all cyclical unemployment instead to frictions or inefficient constraints on market pricing, cannot be regarded as anything but an exercise in question begging.

 

Henry Manne and the Dubious Case for Insider Trading

In a recent tweet, my old friend Alan Reynolds plugged a 2003 op-ed article (“The Case for Insider Trading”) by Henry Manne railing against legal prohibitions of insider trading. Reynolds’s tweet followed his earlier tweet railing against the indictment of Rep. Chris Collins for engaging in insider trading. Upon learning that a key clinical trial of a drug being developed by the small pharmaceutical company (Innate Pharmaceuticals) of which he was the largest shareholder had failed – making a substantial decline in the value of the company’s stock inevitable once news of the failed trial became public – Collins informed his own son of the results of the trial, and his son then shared that information with the son’s father-in-law and other friends and acquaintances, who all sold their stock in the firm, causing the company’s stock price to fall by 92%.

Reynolds thinks that what Collins did was just fine, and invokes Manne as an authority to support his position. Here is how Manne articulated the case for insider trading in his op-ed piece, which summarizes a longer 2005 article (“Insider Trading: Hayek, Virtual Markets and the Dog that Did not Bark”) published in The Journal of Corporate Law.

Prior to 1968, insider trading was very common, well-known, and generally accepted when it was thought about at all.

A similar observation – albeit somewhat backdated — might be made about slavery and polygamy.

When the time came, the corporate world was neither able nor inclined to mount a defense of the practice, while those who demanded its regulation were strident and successful in its demonization. The business community was as hoodwinked by these frightening arguments as was the public generally.

Note the impressive philosophical detachment with which Manne recounts the historical background.

Since then, however, insider trading has been strongly, if by no means universally, defended in scholarly journals. There have been three primary economic arguments (not counting the show-stopper that the present law simply cannot be effectively enforced.) The first and generally undisputed argument is that insider trading does little or no direct harm to any individual trading in the market, even when an insider is on the other side of the trades.

The assertion that insider trading does “little or no direct harm” is patently ridiculous, inasmuch as it rests on the weasel word “direct”: the wealth transferred from less-informed to better-informed traders cannot, by definition, result in “direct” harm to the less-informed traders, “direct harm” being understood to occur only when theft or fraud is used to effect a wealth transfer. Question-begging at its best.

The second argument in favor of allowing insider trading is that it always (fraud aside) helps move the price of a corporation’s shares to its “correct” level. Thus insider trading is one of the most important reasons why we have an “efficient” stock market. While there have been arguments about the relative weight to be attributed to insider trading and to other devices also performing this function, the basic idea that insider trading pushes stock prices in the right direction is largely unquestioned today.

“Efficient” (scare quotes are Manne’s) pricing of stocks and other assets certainly sounds good, but defining “efficient” pricing is not so easy. And even if one were to grant that there is a well-defined efficient price at a moment in time, it is not at all clear how to measure the social gain from an efficient price relative to an inefficient price, or, even more problematically, how to measure the social benefit from arriving at the efficient price sooner rather than later.

The third economic defense has been that it is an efficient and highly desirable form of incentive compensation, especially for corporations dependent on innovation and new developments. This argument has come to the fore recently with the spate of scandals involving stock options. These are the closest substitutes for insider trading in managerial compensation, but they suffer many disadvantages not found with insider trading. The strongest argument against insider trading as compensation is the difficulty of calibrating entitlements and rewards.

“The difficulty of calibrating entitlements and rewards” is simply a euphemism for the incentive of insiders privy to adverse information to trade on that information rather than attempt to counteract an expected decline in the value of the firm.

Critics of insider trading have responded to these arguments principally with two aggregate-harm theories, one psychological and the other economic. The first, the faraway favorite of the SEC, is the “market confidence” argument: If investors in the stock market know that insider trading is common, they will refuse to invest in such an “unfair” market.

Using scare quotes around “unfair,” as if the idea that trading with asymmetric information might be unfair were illogical or preposterous, Manne stumbles into an inconsistency of his own by abandoning the very efficient market hypothesis that he otherwise steadfastly upholds. According to the efficient market hypothesis, under which market prices reflect all publicly available information, movements in stock prices are unpredictable on the basis of publicly available information. Thus, investors who select stocks randomly should, in the aggregate, and over time, just break even. However, traders with inside information make profits. But if it is possible to break even by picking stocks randomly, from whom are the insiders making their profits? The renowned physicist Niels Bohr, who was fascinated by stock markets and anticipated the efficient market hypothesis, argued that it must be the stock-market analysts from whom the profits of insiders are extracted. Whether Bohr was right that insiders extract their profits only from market analysts and not at all from traders with randomized strategies, I am not sure, but clearly Bohr’s basic intuition that profits earned by insiders must come at the expense of other traders is logically unassailable.

Thus investment and liquidity will be seriously diminished. But there is no evidence that publicity about insider trading ever caused a significant reduction in aggregate stock market activity. It is merely one of many scare arguments that the SEC and others have used over the years as a substitute for sound economics.

Manne’s qualifying adjective “significant” is clearly functioning as a weasel word in this context, because, on Manne’s own EMH premises, an understanding that insiders may freely trade on their inside information clearly implies that stock trading by non-insiders would, in the aggregate and over time, be unprofitable. So Manne resorts to a hand-waving argument about the size of the effect. But the size of the effect depends on how widespread insider trading is and how well-informed the public is about the extent of such trading, so Manne is in no position to judge its significance.
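The break-even logic can be made concrete with a toy calculation (my own made-up numbers, not anything in Manne’s or Solow’s texts): when the post-news price is a fair game around today’s price, a random stock-picker’s expected profit is zero, while an insider’s expected profit is exactly the expected loss of whoever takes the other side of the trade.

```python
from itertools import product

# Toy fair-game setup (hypothetical numbers): a stock trading at 100 will
# move to 101 or 99 with equal probability once pending news is announced.
price_today = 100
outcomes = [101, 99]          # equally likely post-news prices

# Uninformed trader: buys (+1) or sells (-1) one share at random,
# independently of the news. Four equally likely (news, action) pairs.
uninformed_profit = 0.0
for news, action in product(outcomes, [+1, -1]):
    uninformed_profit += 0.25 * action * (news - price_today)

# Insider: already knows the news, so buys ahead of good news and sells
# ahead of bad news.
insider_profit = 0.0
for news in outcomes:
    action = +1 if news > price_today else -1
    insider_profit += 0.5 * action * (news - price_today)

# Every insider trade needs a counterparty at today's price, who therefore
# earns the mirror image of the insider's profit.
counterparty_profit = -insider_profit

print(uninformed_profit)    # 0.0: random stock-picking breaks even
print(insider_profit)       # 1.0: informed trading profits
print(counterparty_profit)  # -1.0: the profit comes out of someone's pocket
```

The accounting identity holds however large or small the effect is in practice, which is why the dispute can only be over magnitude, not over whether insider profits are extracted from other traders.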

The more responsible aggregate-harm argument is the “adverse selection” theory. This argument is that specialists and other market makers, when faced with insider trading, will broaden their bid-ask spreads to cover the losses implicit in dealing with insiders. The larger spread in effect becomes a “tax” on all traders, thus impacting investment and liquidity. This is a plausible scenario, but it is of very questionable applicability and significance. Such an effect, while there is some confirming data, is certainly not large enough in aggregate to justify outlawing insider trading.

But the adverse-selection theory credited by Manne is no different in principle from the “market confidence” theory that he dismisses; they are two sides of the same coin, and are equally derived from the same premise: that the profits of insider traders must come from the pockets of non-insiders. So he has no basis in theory to dismiss either effect, and his evidence that insider trading provides any efficiency benefit is certainly no stronger than the evidence he dismisses so blithely that insider trading harms non-insiders.

In fact the relevant theoretical point was made very clearly by Jack Hirshleifer in the important article (“The Private and Social Value of Information and the Reward to Inventive Activity”) about which I wrote last week on this blog. Information has social value when it leads to a reconfiguration of resources that increases the total output of society. However, the private value of information may far exceed whatever social value the information has, because privately held information that allows the better-informed to trade with the less-well-informed enables the better-informed to profit at the expense of the less-well-informed. Prohibiting insider trading prevents such wealth transfers, and insofar as these wealth transfers are not associated with any social benefit from improved resource allocation, an argument that such trading reduces welfare follows as night follows day. Insofar as such trading does generate some social benefit, there are also the losses associated with adverse selection and reduced market confidence to weigh against it, so the efficiency effects, though theoretically ambiguous, are still very likely negative.
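Hirshleifer’s distinction between the private and social value of information reduces to simple arithmetic in the insider-trading case. The figures below are hypothetical, chosen only to illustrate the logic: if the news would become public a day later anyway, and no real resource allocation can respond within that day, early knowledge of it yields a large private gain, an equal loss to the counterparties, and no social gain at all.

```python
# Hypothetical figures for illustration only.
pre_news_price = 100     # market price while the bad news is still private
true_value = 50          # value per share once the failed trial is announced
shares_sold = 1_000      # shares the insider unloads before the announcement

# Private value of the foreknowledge: the insider sells at 100 what the
# imminent announcement will reveal to be worth 50.
insider_gain = shares_sold * (pre_news_price - true_value)

# The buyers' loss mirrors the insider's gain: a pure wealth transfer.
buyers_loss = shares_sold * (pre_news_price - true_value)

# Social value: no production or consumption plan changed during the one-day
# head start, so the early information added nothing to total output.
social_value = 0

print(insider_gain, buyers_loss, social_value)  # 50000 50000 0
```

The private gain of 50,000 would justify considerable expenditure on acquiring the information a day early, even though its social value is nil, which is precisely Hirshleifer’s point about the private incentive to acquire foreknowledge exceeding its social return.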

But Manne posits a different kind of efficiency effect.

No other device can approach knowledgeable trading by insiders for efficiently and accurately pricing endogenous developments in a company. Insiders, driven by self-interest and competition among themselves, will trade until the correct price is reached. This will be true even when the new information involves trading on bad news. You do not need whistleblowers if you have insider trading.

Here again, Manne is assuming that efficient pricing has large social benefits, but that premise depends on how rapidly resource allocation responds to price changes, especially changes in asset prices. The question is how long it takes for insider information to become public information. If insider information quickly becomes public, so that insiders can profit from their inside information only by trading on it before the information becomes public, the social value of speeding up the rate at which inside information is reflected in asset prices is almost nil. But Manne implicitly assumes that the social value of the information is very high, and it is precisely that implicit assumption that would have to be demonstrated before the efficiency argument for insider trading would come close to being persuasive.

Moreover, allowing insiders to trade on bad news creates precisely the wrong incentive, effectively giving insiders the opportunity to loot a company before it goes belly up, rather than take any steps to mitigate the damage.

While I acknowledge that there are legitimate concerns about whether laws against insider trading can be enforced without excessive arbitrariness, those concerns are entirely distinct from arguments that insider trading actually promotes economic efficiency.

Hirshleifer on the Private and Social Value of Information

I have written a number of posts (here, here, here, and here) over the past few years citing an article by one of my favorite UCLA luminaries, Jack Hirshleifer, of the fabled UCLA economics department of the 1950s, 1960s, 1970s and 1980s. Like everything Hirshleifer wrote, the article, “The Private and Social Value of Information and the Reward to Inventive Activity,” published in 1971 in the American Economic Review, is deeply insightful, carefully reasoned, and lucidly explained, reflecting the author’s comprehensive mastery of the whole body of neoclassical microeconomic theory.

Hirshleifer’s article grew out of a whole literature inspired by two of Hayek’s most important articles, “Economics and Knowledge” (1937) and “The Use of Knowledge in Society” (1945). Both articles were concerned with the fact that, contrary to the assumptions in textbook treatments, economic agents don’t have complete information about all the characteristics of the goods being traded and about the prices at which those goods are available. Hayek was aiming to show that markets are characteristically capable of transmitting information held by some agents in a condensed form that makes it usable by other agents. That role is performed by prices. It is prices that provide both information and incentives to economic agents to formulate and tailor their plans, and if necessary, to readjust those plans in response to changed conditions. Agents need not know what those underlying changes are; they need only observe, and act on, the price changes that result from those changes.

Hayek’s argument, though profoundly insightful, was not totally convincing in demonstrating the superiority of the pure “free market,” for three reasons.

First, economic agents base decisions, as Hayek himself was among the first to understand, not just on actual current prices, but also on expected future prices. Although traders sometimes know – though often they don’t – what the current price of something is, one can only guess – not know – what the price of that thing will be in the future. So the work of providing the information individuals need to make good economic decisions cannot be accomplished – even in principle – just by the adjustment of prices in current markets. People also need enough information to make good guesses – to form correct expectations – about future prices.

Second, economic agents don’t automatically know all prices. The assumption that every trader knows exactly what prices are before executing plans to buy and sell is true, if at all, only in highly organized markets where prices are publicly posted and traders can always buy and sell at the posted price. In most other markets, transactors must devote time and effort to find out what prices are and to find out the characteristics of the goods that they are interested in buying. It takes effort or search or advertising or some other, more or less costly, discovery method for economic agents to find out what current prices are and what characteristics those goods have. If agents aren’t fully informed even about current prices, they don’t necessarily make good decisions.

Libertarians, free marketeers, and other Hayek acolytes often like to credit Hayek with having solved or having shown how “the market” solves “the knowledge problem,” a problem that Hayek definitively showed a central-planning regime to be incapable of solving. But the solution at best is only partial, and certainly not robust, because markets never transmit all available relevant information. That’s because markets transmit only information about costs and valuations known to private individuals, but there is a lot of information about public or social valuations and costs that is not known to private individuals and rarely if ever gets fed into, or is transmitted by, the price system — valuations of public goods and the social costs of pollution for example.

Third, a lot of information is not obtained or transmitted unless it is acquired, and acquiring information is costly. Economic agents must search for relevant information about the goods and services that they are interested in obtaining and about the prices at which those goods and services are available. Moreover, agents often engage in transactions with counterparties in which one side has an information advantage over the other. When traders have an information advantage over their counterparties, the opportunity for one party to take advantage of the inferior information of the counterparty may make it impossible for the two parties to reach mutually acceptable terms, because a party who realizes that the counterparty has an information advantage may be unwilling to risk being taken advantage of. Sometimes these problems can be surmounted by creative contractual arrangements or legal interventions, but often they can’t.

To recognize the limitations of Hayek’s insight is not to minimize its importance, either in its own right or as a stimulus to further research. Important early contributions (all published between 1961 and 1970) by Stigler (“The Economics of Information”), Ozga (“Imperfect Markets through Lack of Knowledge”), Arrow (“Economic Welfare and the Allocation of Resources for Invention”), Demsetz (“Information and Efficiency: Another Viewpoint”) and Alchian (“Information Costs, Pricing, and Resource Unemployment”) all analyzed the problem of incomplete and limited information, the incentives for acquiring information, the institutions and market arrangements that arise to cope with limited information, and the implications of these limitations and incentives for economic efficiency. They can all be traced directly or indirectly to Hayek’s early contributions. Among the important results that seem to follow from these early papers is that, because those who discover or create new knowledge cannot claim full property rights over it through patents or other forms of intellectual property, they cannot appropriate the net benefits accruing from the knowledge, so the incentive to create new knowledge is less than optimal.

Here is where Hirshleifer’s paper enters the picture. Is more information always better? It would certainly seem that more of any good is better than less. But how valuable is new information? And are the incentives to create or discover new information aligned with the value of that information? Hayek’s discussion implicitly assumed that the amount of information in existence is a given stock, at least in the aggregate. How can the information that already exists be optimally used? Markets help us make use of the information that already exists. But the problem addressed by Hirshleifer was whether the incentives to discover and create new information call forth the optimal investment of time, effort and resources to make new discoveries and create new knowledge.

Instead of focusing on the incentives to search for information about existing opportunities, Hirshleifer analyzed the incentives to learn about uncertain resource endowments and about the productivity of those resources.

This paper deals with an entirely different aspect of the economics of information. We here revert to the textbook assumption that markets are perfect and costless. The individual is always fully acquainted with the supply-demand offers of all potential traders, and an equilibrium integrating all individuals’ supply-demand offers is attained instantaneously. Individuals are unsure only about the size of their own commodity endowments and/or about the returns attainable from their own productive investments. They are subject to technological uncertainty rather than market uncertainty.

Technological uncertainty brings immediately to mind the economics of research and invention. The traditional position has been that the excess of the social over the private value of new technological knowledge leads to underinvestment in inventive activity. The main reason is that information, viewed as a product, is only imperfectly appropriable by its discoverer. But this paper will show that there is a hitherto unrecognized force operating in the opposite direction. What has been scarcely appreciated in the literature, if recognized at all, is the distributive aspect of access to superior information. It will be seen below how this advantage provides a motivation for the private acquisition and dissemination of technological information that is quite apart from – and may even exist in the absence of – any social usefulness of that information. (p. 561)

The key insight motivating Hirshleifer was that privately held knowledge enables its possessor to anticipate the future price movements that will occur once the privately held information becomes public. If you can anticipate a future price movement that no one else can, you can confidently trade with others who don’t know what you know, and then wait for the profit to roll in when the less well-informed acquire the knowledge that you have. By assumption, the newly obtained knowledge doesn’t affect the quantity of goods available to be traded, so in a pure-exchange model acquiring new knowledge provides no net social benefit; it only enables better-informed traders to anticipate price movements that less well-informed traders don’t see coming. Any gains from new knowledge are exactly matched by the losses suffered by those without that knowledge. Hirshleifer called the kind of knowledge that enables one to anticipate future price movements “foreknowledge,” which he distinguished from actual discovery.

The type of information represented by foreknowledge is exemplified by ability to successfully predict tomorrow’s (or next year’s) weather. Here we have a stochastic situation: with particular probabilities the future weather might be hot or cold, rainy or dry, etc. But whatever does actually occur will, in due time, be evident to all: the only aspect of information that may be of advantage is prior knowledge as to what will happen. Discovery, in contrast, is correct recognition of something that is hidden from view. Examples include the determination of the properties of materials, of physical laws, even of mathematical attributes (e.g., the millionth digit in the decimal expansion of “π”). The essential point is that in such cases nature will not automatically reveal the information; only human action can extract it. (562)

Hirshleifer’s result, though derived in the context of a pure-exchange economy, is very powerful, implying that any expenditure of resources devoted to discovering new information that enables its first possessor to predict price changes and reap trading profits is unambiguously wasteful, because it reduces the total consumption of the community.

[T]he community as a whole obtains no benefit, under pure exchange, from either the acquisition or the dissemination (by resale or otherwise) of private foreknowledge. . . .

[T]he expenditure of real resources for the production of technological information is socially wasteful in pure exchange, as the expenditure of resources for an increase in the quantity of money by mining gold is wasteful, and for essentially the same reason. Just as a smaller quantity of money serves monetary functions as well as a larger, the price level adjusting correspondingly, so a larger amount of foreknowledge serves no social purpose under pure exchange that the smaller amount did not. (pp. 565-66)
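The zero-sum arithmetic behind Hirshleifer’s pure-exchange result can be illustrated with a stylized numerical sketch (the prices and quantity below are hypothetical, chosen only for illustration):

```python
# Stylized pure-exchange illustration of Hirshleifer's point: foreknowledge
# lets the informed trader profit, but the aggregate endowment of goods is
# unchanged, so the informed trader's gain is a pure transfer.

p_before = 10.0   # price before the information becomes public
p_after = 15.0    # price once the information is disseminated
q = 100           # units the informed trader buys before the news gets out

informed_gain = (p_after - p_before) * q       # profit from reselling at p_after
uninformed_loss = (p_after - p_before) * q     # sellers parted with units too cheaply

# The community as a whole gains nothing from the foreknowledge ...
net_social_gain = informed_gain - uninformed_loss
print(informed_gain, uninformed_loss, net_social_gain)  # 500.0 500.0 0.0
```

And because the resources spent acquiring the foreknowledge are themselves diverted from consumption, the net social return is negative, not merely zero.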

Relaxing the assumption that there is no production does not alter the conclusion, even though the kind of information that is discovered could, in principle, lead to efficient production decisions that increase the output of goods whose prices, as a result of the new information, rise sooner than they otherwise would have. If the foreknowledge is privately obtained, the private incentive is to use that information by trading with another, less-well-informed, trader at a price to which the other trader would not agree if he weren’t at an information disadvantage. The private incentive created by foreknowledge that might warrant a change in production decisions is not to use the information to alter production decisions but to use it to trade with, and profit from, those with inferior knowledge.

[A]s under the regime of pure exchange, private foreknowledge makes possible large private profit without leading to socially useful activity. The individual would have just as much incentive as under pure exchange (even more, in fact) to expend real resources in generating socially useless private information. (p. 567)

If the foreknowledge is publicly available, production incentives would shift toward more valuable products. However, the private gain from keeping the information private greatly exceeds the private value of the information once it becomes public. Under some circumstances, private individuals may have an incentive to publicize their private information in order to cause the price increases in expectation of which they have taken speculative positions. But it is primarily the gain from foreseen price changes, not the gain from more efficient production decisions, that creates the incentive to discover foreknowledge.

The key factor underlying [these] results . . . is the distributive significance of private foreknowledge. When private information fails to lead to improved productive alignments (as must necessarily be the case in a world of pure exchange, and also in a regime of production unless there is dissemination effected in the interest of speculation or resale), it is evident that the individual’s source of gain can only be at the expense of his fellows. But even where information is disseminated and does lead to improved productive commitments, the distributive transfer gain will surely be far greater than the relatively minor productive gain the individual might reap from the redirection of his own real investment commitments. (Id.)

Moreover, better-informed individuals – indeed, individuals who merely wrongly believe themselves to be better informed – will perceive it to be in their self-interest to expend resources to disseminate the information in the expectation that the ensuing price changes will redound to their profit. The private gain expected from disseminating information far exceeds the social benefit from the price changes once the new information is disseminated; the social benefit corresponds to an improved allocation of resources, but that improvement will be very small compared to the expected private profit from anticipating the price change and trading with those who don’t anticipate it.

Hirshleifer then turns from the value of foreknowledge to the value of discovering new information about the world or about nature that makes a contribution to total social output by causing a shift of resources to more productive uses. Inasmuch as the discovery of new information about the world reveals previously unknown productive opportunities, it might be thought that the private incentive to devote resources to the discovery of technological information about productive opportunities generates substantial social benefits. But Hirshleifer shows that here, too, because the private discovery of information about the world creates private opportunities for gain by trading based on the consequent knowledge of future price changes, the private incentive to discover technological information always exceeds the social value of the discovery.

We need only consider the more general regime of production and exchange. Given private, prior, and sure information of event A [a state of the world in which a previously unknown natural relationship has been shown to exist] the individual in a world of perfect markets would not adapt his productive decisions if he were sure the information would remain private until after the close of trading. (p. 570)

Hirshleifer is saying that the discovery of a previously unknown property of the world can lead to an increase in total social output only by causing productive resources to be reallocated, but that reallocation can occur only if and when the new information is disclosed. So if someone discovers a previously unknown property of the world, the discoverer can profit from that information by anticipating the price effect likely to result once the information is disseminated and then making a speculative transaction based on the expectation of a price change. A corollary of this argument is that individuals who think that they are better informed about the world will take speculative positions based on their beliefs, but insofar as their investments in discovering properties of the world lead them to incorrect beliefs, their investments in information gathering and discovery will not be rewarded. The net social return to information gathering and discovery is thus almost certainly negative.

The obvious way of acquiring the private information in question is, of course, by performing technological research. By a now familiar argument we can show once again that the distributive advantage of private information provides an incentive for information-generating activity that may quite possibly be in excess of the social value of the information. (Id.)

Finally, Hirshleifer turns to the implications of his analysis of the private and social value of information for patent policy.

The issues involved may be clarified by distinguishing the “technological” and “pecuniary” effects of invention. The technological effects are the improvements in production functions . . . consequent upon the new idea. The pecuniary effects are the wealth shifts due to the price revaluations that take place upon release and/or utilization of the information. The pecuniary effects are purely redistributive.

For concreteness, we can think in terms of a simple cost-reducing innovation. The technological benefit to society is, roughly, the integrated area between the old and new marginal-cost curves for the preinvention level of output plus, for any additional output, the area between the demand curve and the new marginal-cost curve. The holder of a (perpetual) patent could ideally extract, via a perfectly discriminatory fee policy, this entire technological benefit. Equivalence between the social and private benefits of innovation would thus induce the optimal amount of private inventive activity. Presumably it is reasoning of this sort that underlies the economic case for patent protection. (p. 571)
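Hirshleifer’s verbal description of the technological benefit can be written out explicitly (the notation here is mine, not Hirshleifer’s):

```latex
B \;=\; \underbrace{\int_{0}^{Q_{0}} \bigl[\,MC_{\mathrm{old}}(q) - MC_{\mathrm{new}}(q)\,\bigr]\,dq}_{\text{cost saving on preinvention output}}
\;+\;
\underbrace{\int_{Q_{0}}^{Q_{1}} \bigl[\,D(q) - MC_{\mathrm{new}}(q)\,\bigr]\,dq}_{\text{surplus on additional output}}
```

where \(Q_{0}\) is the preinvention level of output, \(Q_{1}\) the postinvention level, and \(D\) the demand curve. A perfectly discriminating holder of a perpetual patent could, on this reasoning, extract the present value of the entire stream \(B\).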

Here Hirshleifer is uncritically restating the traditional analysis for the social benefit from new technological knowledge. But the analysis overstates the benefit, by assuming incorrectly that, with no patent protection, the discovery would never be made. If the discovery would be made without patent protection, then obviously the technological benefit to society is only the area indicated over a limited time horizon, so a perpetual patent enabling the holder of the patent to extract all additional consumer and producer surplus flowing from invention in perpetuity would overcompensate the patent holder for the invention.

Nor does Hirshleifer mention the tendency of patents to increase the costs of invention, research and development owing to the royalties subsequent inventors would have to pay existing patent holders for infringing inventions, even if those inventions were, or would have been, discovered with no knowledge of the patented invention. While rewarding some inventions and inventors, patent protection penalizes or blocks subsequent inventions and inventors. Inventions are outputs, but they are also inputs. If the use of past inventions is made more costly to new inventors, it is not clear that the net result will be an increase in the rate of invention.

Moreover, the knowledge that an existing patent, or a patent that issues before a new invention is introduced, may block or penalize an infringing invention can in some cases cause an overinvestment in research, as inventors race to gain the sole right to an invention, seeking the right to exclude others while avoiding being excluded themselves.

Hirshleifer does mention some reasons why maximally rewarding patent holders for their inventions may lead to suboptimal results, but he fails to acknowledge that the conventional assessment of the social gain from new invention is substantially overstated, or that patents may well have a negative effect on inventive activity in fields in which patent holders have gained the right to exclude potentially infringing inventions, even when those inventions would have been made without the knowledge publicly disclosed in the patent holders’ applications.

On the other side are the recognized disadvantages of patents: the social costs of the administrative-judicial process, the possible anti-competitive impact, and restriction of output due to the marginal burden of patent fees. As a second-best kind of judgment, some degree of patent protection has seemed a reasonable compromise among the objectives sought.

Of course, that judgment about the social utility of patents is not universally accepted, and authorities from Arnold Plant, to Fritz Machlup, and most recently Michele Boldrin and David Levine have been extremely skeptical of the arguments in favor of patent protection, copyright and other forms of intellectual property.

However, Hirshleifer advances a different counter-argument against patent protection based on his distinction between the private and social gains derived from information.

But recognition of the unique position of the innovator for forecasting and consequently capturing portions of the pecuniary effects – the wealth transfers due to price revaluation – may put matters in a different light. The “ideal” case of the perfectly discriminating patent holder earning the entire technological benefit is no longer so ideal. (pp. 571-72)

Of course, as I have pointed out, the “ideal” case never was ideal.

For the same inventor is in a position to reap speculative profits, too; counting these as well, he would clearly be overcompensated. (p. 572)

Indeed!

Hirshleifer goes on to recognize that the capacity to profit from speculative activity may be beyond the capacity or the ken of many inventors.

Given the inconceivably vast number of potential contingencies and the costs of establishing markets, the prospective speculator will find it costly or even impossible to purchase neutrality from “irrelevant” risks. Eli Whitney [inventor of the cotton gin who obtained one of the first US patents for his invention in 1794] could not be sure that his gin would make cotton prices fall: while a considerable force would clearly be acting in that direction, a multitude of other contingencies might also have possibly affected the price of cotton. Such “uninsurable” risks gravely limit the speculation feasible with any degree of prudence. (Id.)

Hirshleifer concludes that there is no compelling case either for or against patent protection, because the standard discussion of the case for patent protection has not taken into consideration the potential profit that inventors can gain by speculating on the anticipated price effects of their patents. The argument that inventors are unlikely to be adept at making such speculative plays is a serious one, but we have also seen the rise of patent trolls that buy up patent rights from inventors and then file lawsuits against suspected infringers. In a world without patent protection, it is entirely possible that patent trolls would reinvent themselves as patent speculators, buying up information about new inventions from inventors and using that information to engage in speculative trading. By acquiring a portfolio of inventions, such invention speculators could pool the risks of speculation over their entire portfolio, enabling them to speculate more effectively than any single inventor could on his own invention. Hirshleifer concludes as follows:

Even though practical considerations limit the effective scale and consequent impact of speculation and/or resale [but perhaps not as much as Hirshleifer thought], the gains thus achievable eliminate any a priori anticipation of underinvestment in the generation of new technological knowledge. (p. 574)

And I reiterate one last time that Hirshleifer arrived at his non-endorsement of patent protection even while accepting the overstated estimate of the social value of inventions and neglecting the tendency of patents to increase the cost of inventive activity.

Keynes and the Fisher Equation

The History of Economics Society is holding its annual meeting in Chicago from Friday, June 15 to Sunday, June 17. Bringing together material from a number of posts over the past five years or so about Keynes and the Fisher equation and the Fisher effect, I will be presenting a new paper called “Keynes and the Fisher Equation.” Here is the abstract of my paper.

One of the most puzzling passages in the General Theory is the attack (GT p. 142) on Fisher’s distinction between the money rate of interest and the real rate of interest “where the latter is equal to the former after correction for changes in the value of money.” Keynes’s attack on the real/nominal distinction is puzzling on its own terms, inasmuch as the distinction is a straightforward and widely accepted one that was hardly unique to Fisher, having been advanced as a fairly obvious proposition by many earlier economists, including Marshall. What makes Keynes’s criticism even more problematic is that his own celebrated theorem in the Tract on Monetary Reform about covered interest arbitrage is merely an application of Fisher’s reasoning in Appreciation and Interest. Moreover, Keynes endorsed Fisher’s distinction in the Treatise on Money. But even more puzzling is that Keynes’s analysis in Chapter 17 demonstrates that in equilibrium the returns on alternative assets must reflect the differences in their expected rates of appreciation. Thus Keynes himself, in the General Theory, endorsed the essential reasoning underlying the distinction between the real and the money rates of interest. The solution to the puzzle lies in distinguishing between the relationship between the real and nominal rates of interest at a moment in time and the effects of a change in expected rates of appreciation that displaces an existing equilibrium and leads to a new one. Keynes’s criticism of the Fisher effect must be understood in the context of his criticism of the idea of a unique natural rate of interest, which implicitly identifies the Fisherian real rate with a unique natural rate.

And here is the concluding section of my paper.

Keynes’s criticisms of the Fisher effect, especially the facile assumption that changes in inflation expectations are reflected mostly, if not entirely, in nominal interest rates – an assumption for which neither Fisher himself nor subsequent researchers have found much empirical support – were grounded in well-founded skepticism of the idea that changes in expected inflation leave the real interest rate unaffected. A Fisherian analysis of an increase in expected deflation at the zero lower bound shows that the burden of the adjustment must be borne by an increase in the real interest rate. Such a scenario might be dismissed as a special case, which it certainly is, but I very much doubt that it is the only case in which a change in expected inflation or deflation affects the real as well as the nominal interest rate.
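The zero-lower-bound case can be spelled out with the Fisher equation itself (standard notation, not taken from the paper):

```latex
(1+i) \;=\; (1+r)\,(1+\pi^{e})
\quad\Longrightarrow\quad
r \;=\; \frac{1+i}{1+\pi^{e}} \;-\; 1 ,
```

where \(i\) is the nominal rate, \(r\) the real rate, and \(\pi^{e}\) expected inflation. With the nominal rate pinned at the zero lower bound (\(i = 0\)), a rise in expected deflation – say, a fall in \(\pi^{e}\) from 0 to \(-2\%\) – raises the real rate from 0 to \(1/0.98 - 1 \approx 2.04\%\): the entire burden of adjustment falls on \(r\).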

Although Keynes’s criticism of the Fisher equation (or, more precisely, of the conventional simplistic interpretation of it) was not well argued, his intuition was sound. And in his contribution to the Fisher festschrift, Keynes (1937b) correctly identified the two key assumptions leading to the conclusion that changes in inflation expectations are reflected entirely in nominal interest rates: (1) a unique real equilibrium and (2) the neutrality (actually superneutrality) of money. Keynes’s intuition was confirmed by Hirshleifer (1970, 135-38), who derived the Fisher equation as a theorem by performing a comparative-statics exercise in a two-period general-equilibrium model with money balances, in which the money stock in the second period is increased by an exogenous shift factor k. The price level in the second period increases by a factor of k, and the nominal interest rate increases as well by a factor of k, with no change in the real interest rate.

But typical Keynesian and New Keynesian macromodels based on the assumption of no capital or a single capital good drastically oversimplify the analysis, because those highly aggregated models assume that the determination of the real interest rate takes place in a single market. The market-clearing assumption invites the conclusion that the rate of interest, like any other price, is determined by the equality of supply and demand – both of which are functions of that price – in that market.

The equilibrium rate of interest, as C. J. Bliss (1975) explains in the context of an intertemporal general-equilibrium analysis, is not a price; it is an intertemporal rate of exchange characterizing the relationships between all equilibrium prices and expected equilibrium prices in the current and future time periods. To say that the interest rate is determined in any single market, e.g., a market for loanable funds or a market for cash balances, is, at best, a gross oversimplification, verging on fallaciousness. The interest rate or term structure of interest rates is a reflection of the entire intertemporal structure of prices, so a market for something like loanable funds cannot set the rate of interest at a level inconsistent with that intertemporal structure of prices without disrupting and misaligning that structure of intertemporal price relationships. The interest rates quoted in the market for loanable funds are determined and constrained by those intertemporal price relationships, not the other way around.
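Bliss’s point can be stated compactly (the notation is mine): in an intertemporal equilibrium the money rate of interest \(i\) and the own rate of interest \(r_{j}\) on any commodity \(j\) are linked through current and expected future prices,

```latex
(1+i) \;=\; (1+r_{j})\,\frac{p^{e}_{j,\,t+1}}{p_{j,\,t}}
\qquad \text{for every commodity } j ,
```

so the interest rate is a property of the entire structure of current and expected prices, not a price that could be determined within any single market without disturbing all the others.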

In the real world, in which current prices, future prices and expected future prices are almost certainly never in an equilibrium relationship with each other, there is always some scope for second-order variations in the interest rates transacted in markets for loanable funds, but those variations are still tightly constrained by the existing intertemporal relationships between current, future and expected future prices. Because the conditions under which Hirshleifer derived his theorem demonstrating that changes in expected inflation are fully reflected in nominal interest rates are not satisfied, there is no basis for assuming that a change in expected inflation affects only nominal interest rates, with no effect on real rates.

There is probably a huge range of possible scenarios in which changes in expected inflation could affect nominal and real interest rates. While one should not disregard the Fisher equation as one possibility, it seems completely unwarranted to assume that it is the most plausible scenario in any actual situation. If we read Keynes at the end of his marvelous Chapter 17 of the General Theory, in which he remarks that he has abandoned the belief he once held in the existence of a unique natural rate of interest and has come to believe that there are really different natural rates corresponding to different levels of unemployment, we see that he was indeed, notwithstanding his detour toward a pure liquidity-preference theory of interest, groping his way toward a proper understanding of the Fisher equation.

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest – namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s “natural rate of interest,” which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the “natural” rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus, it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. . . .

If there is any such rate of interest, which is unique and significant, it must be the rate which we might term the neutral rate of interest, namely, the natural rate in the above sense which is consistent with full employment, given the other parameters of the system; though this rate might be better described, perhaps, as the optimum rate. (pp. 242-43)

Because Keynes believed that an increase in the expected future price level implies an increase in the marginal efficiency of capital, it follows that an increase in expected inflation under conditions of less than full employment would increase investment spending and employment, thereby raising the real rate of interest as well as the nominal rate. Cottrell (1994) has attempted to make an argument along such lines within a traditional IS-LM framework. I believe that, in a Fisherian framework, my argument points in a similar direction.


Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious – and a disturbing – question, because it suggests that the free-market ideology which is a powerful – though not necessarily the most powerful – force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A very broad range of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both Thatcher and Reagan to lose popular support. But the growing prestige of free-market economics and economists provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close as the Nobel Prize winner was quickly ushered out of shouting range of White House reporters. On the whole, however, Republican politicians have not been lacking for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and few questioned his prediction. One opponent described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.
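The entitlements arithmetic can be sketched with a stylized calculation (all numbers here are hypothetical, chosen only to illustrate the mechanism, not historical data): because entitlements blend the cheap controlled domestic price into the cost of each imported barrel, a given change in the world price translates into a smaller change in the refiner’s net cost, which is what reduced the elasticity of refiners’ demand for imported crude.

```python
def blended_cost(world_price, controlled_price=6.0, domestic_share=0.6):
    """Entitlement-adjusted cost of crude to a refiner: a weighted
    average of price-controlled domestic crude and world-priced
    imports. All numbers are hypothetical illustrations."""
    return domestic_share * controlled_price + (1 - domestic_share) * world_price

# With 60% of crude at the controlled price, a $1.00 rise in the
# world price raises the refiner's entitlement-adjusted cost by
# only $0.40, muting the demand response to OPEC price changes.
delta = blended_cost(41.0) - blended_cost(40.0)
```

Since refiners responded to the blended cost rather than the full world price, OPEC faced a less elastic demand curve, raising its profit-maximizing price, which is why decontrol and falling world prices were no coincidence.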

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the implicit assumption underlying the idea that freely determined market prices optimally guide the decentralized decisions of economic agents is that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy and sell and produce closely correspond to the social costs and benefits that an omniscient central planner – if such a being actually did exist – would take into account in making his plans. In the real world, however, the private costs and benefits considered by individual agents when making their plans and decisions often don’t reflect all the relevant costs and benefits, so the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad. We – i.e., those of us who are not philosophical anarchists – all acknowledge as much, in practice and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those that are subject to, and heavily – even mainly – influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat: insofar as some people can earn income by exploiting knowledge advantages over the counterparties with whom they trade, incentives are created to seek out the kinds of knowledge that can be exploited in trades with less well-informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society, inasmuch as the gains accruing to exploiters are largely achieved at the expense of their knowledge-disadvantaged counterparties.

For example, substantial resources are now almost certainly wasted on various forms of financial research aiming to gain, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on individuals’ tendency to underestimate the probability of adverse events (e.g., late-repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity, insofar as patent-protected monopolies enable holders to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.
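Hirshleifer’s wedge between private and social returns can be illustrated with a stylized calculation (all numbers hypothetical): an asset’s value will be publicly revealed tomorrow anyway, so learning it today creates no new output, yet trading on the early knowledge is privately profitable, making the research privately worthwhile but socially wasteful.

```python
def research_payoffs(cost_of_research=5.0, price_today=100.0,
                     outcomes=(120.0, 80.0)):
    """Stylized version of Hirshleifer's point: the asset's true value
    (either element of `outcomes`, equally likely) becomes public
    tomorrow regardless, so early knowledge is purely redistributive."""
    # Private expected gain: buy at today's price if the value is high,
    # sell if it is low, profiting either way at the counterparty's expense.
    expected_trading_gain = sum(abs(v - price_today) for v in outcomes) / len(outcomes)
    private_net = expected_trading_gain - cost_of_research
    # Social gain: zero gross (a pure transfer between the two traders),
    # minus the real resources burned on the research.
    social_net = 0.0 - cost_of_research
    return private_net, social_net

# Privately worthwhile (+15), socially wasteful (-5).
```

The positive private return induces the investment even though the social return is negative, which is exactly the excessive investment in exploitable knowledge described above.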

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, its counterparty could easily be multiplied. Because many, if not most, of the highest incomes earned are associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have shifted resources from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which private and social benefits and costs differ greatly, the private benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite holding a Harvard Ph.D. when he arrived back at UCLA – where he had been an undergraduate student of Armen Alchian – as an assistant professor in the early 1960s, he too started out as a pro-free-market Friedman acolyte. But gradually adopting the Buchanan public-choice paradigm – Nancy Maclean, please take note – of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions, to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously more ideological and more dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds – as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causality assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that starts from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causality assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

Stuart Dreyfus on Richard Bellman, Dynamic Programming, Quants and Financial Engineering

Last week, looking for some information about the mathematician Richard Bellman who, among other feats and achievements, developed dynamic programming, I came across a film called The Bellman Equation, which you can watch on the internet. It was written, produced, and narrated by Bellman’s grandson, Gabriel Bellman, and features, among others, Gabriel’s father (Bellman’s son), Gabriel’s aunt (Bellman’s daughter), Bellman’s first and second wives, and numerous friends and colleagues. You learn how brilliant, driven, arrogant, charming, and difficult Bellman was, and how he cast a shadow over the lives of his children and grandchildren. There are also stories about his life, his work on the atomic bomb in World War II, his meeting with Einstein when he was a young assistant professor at Princeton, and his run-in with Julius and Ethel Rosenberg at Los Alamos and, as a result, with Joe McCarthy. And on top of all the family history, family dynamics, and psychological theorizing, you also get an interesting little account of the intuitive logic underlying the theory of dynamic programming. You can watch it for free with commercials on snagfilms.

But I especially wanted to draw attention to the brief appearance in the video of Bellman’s colleague at Rand Corporation in the 1950s, Stuart Dreyfus, with whom Bellman collaborated in developing the theory of dynamic programming, and with whom Bellman co-wrote Applied Dynamic Programming. At 14:17 into the film, one hears the voice of Stuart Dreyfus saying just before he comes into view on the screen:

The world is full of problems where what is required of the person making the decision is not to just face a static situation and make one single decision, but to make a sequence of decisions as the situation evolves. If you stop to think about it, almost everything in the world falls in that category. So that is the kind of situation that dynamic programming addressed. The principle on which it is based is such an intuitively obvious principle that it drives some mathematicians crazy, because it’s really kind of impossible to prove that it’s an intuitive principle, and pure mathematicians don’t like intuition.

Then a few moments later, Dreyfus continues:

So this principle of optimality is: why would you ever make a decision now which puts you into a position one step from now where you couldn’t do as well as [if you were in] some other position? Obviously, you would never do that if you knew the value of these other positions.

And a few moments after that:

The place that [dynamic programming] is used the most upsets me greatly — and I don’t know how Dick would feel — but that’s in the so-called “quants” doing so-called “financial engineering” that designed derivatives that brought down the financial system. That’s all dynamic programming mathematics basically. I have a feeling Dick would have thought that’s immoral. The financial world doesn’t produce any useful thing. It’s just like poker; it’s just a game. You’re taking money away from other people and getting yourself things. And to encourage our graduate students to learn how to apply dynamic programming in that area, I think is a sin.
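The principle of optimality that Dreyfus describes can be sketched in a few lines (the graph and costs below are hypothetical): the best cost from any position equals the cheapest one-step cost plus the best cost from the position that step leads to, and iterating that recursion until nothing improves yields the optimal values.

```python
def optimal_costs(edges, goal):
    """Minimal dynamic-programming sketch: compute the least cost to
    reach `goal` from every node by iterating the Bellman recursion
    value[src] = min(value[src], step_cost + value[dst])."""
    nodes = {goal} | {n for n, _, _ in edges} | {m for _, m, _ in edges}
    value = {n: float("inf") for n in nodes}
    value[goal] = 0.0
    for _ in range(len(nodes)):  # enough passes to converge on a finite graph
        for src, dst, cost in edges:
            value[src] = min(value[src], cost + value[dst])
    return value

# Hypothetical graph: going A -> B -> C (total cost 3) beats A -> C directly (cost 5),
# so you would never "make a decision now" (A -> C) that leaves you worse off.
edges = [("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 5.0)]
costs = optimal_costs(edges, goal="C")
```

Once the value of every position is known, Dreyfus’s question answers itself: a decision is optimal exactly when it leads to the position whose value, plus the step cost, is lowest.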

Allowing for some hyperbole on Dreyfus’s part, I think he is making an important point, a point I’ve made before in several posts about finance. A great deal of the income earned by the financial industry does not represent real output; it represents trading based on gaining information advantages over trading partners. So the more money the financial industry makes from financial engineering, the more money someone else is losing to the financial industry, because every trade has two sides.

Not all trading has this characteristic. A lot of trading involves exchanges that are mutually beneficial, and middlemen who facilitate such trading contribute to the welfare of society by improving the allocation of goods, services and resources. But trading that takes place in order to exploit an information advantage over a counterparty, and devoting resources to creating the information advantages that make such trading profitable, are socially wasteful. That is the intuitive principle insightfully grasped and articulated by Dreyfus.

As I have also pointed out in previous posts (e.g., here, here and here), the principle, intuitively grasped on some level but not properly articulated or applied by people like Thorstein Veblen, was first correctly explicated by Jack Hirshleifer, who, like Bellman and Dreyfus, worked for the Rand Corporation in the 1950s and 1960s, in his classic article “The Private and Social Value of Information and the Reward to Inventive Activity.”

Susan Woodward Remembers Armen Alchian

Susan Woodward, a former colleague and co-author of the late great Armen Alchian, was kind enough to share with me an article of hers forthcoming in a special issue of the Journal of Corporate Finance dedicated to Alchian’s memory. I thank Susan and Harold Mulherin, co-editor of the Journal of Corporate Finance for allowing me to post this wonderful tribute to Alchian.

Memories of Armen

Susan Woodward

Sand Hill Econometrics

Armen Alchian approached economics with constructive eccentricity. One aspect of that eccentricity became apparent long ago when I taught intermediate price theory, a two-quarter course. Jack Hirshleifer’s new text (Hirshleifer (1976)) was just out, and his approach was the foundation of my own training, so that was an obvious choice. But Alchian and Allen’s University Economics (Alchian and Allen (1964)) had also been usefully separated into parts, of which Exchange and Production: Competition, Coordination, and Control (Alchian and Allen (1977)), the “price theory” part, was available in paperback. I used both books.

Somewhere in the second quarter we got to the topic of rent. Rent is such a difficult topic because it’s a word in everyone’s vocabulary but to which economists give a special, second meaning. To prepare a discussion, I looked up “rent” in the index of both texts. In Hirshleifer (1976), it appeared for the first time on some page like 417. In Alchian & Allen (1977), it appeared, say, on page 99, and page 102, and page 188, and pages 87-88, 336-338, and 364-365. It was peppered all through the book.

Hirshleifer approached price theory as geometry. Lay out the axioms, prove the theorems. And never introduce a new idea, especially one like “rent” that collides with standard usage, without a solid foundation. The Alchian approach is more exploratory. “Oh, here’s an idea. Let’s walk around the idea and see what it looks like from all sides. Let’s tip it over and see what’s under it and what kind of noise it makes. Let’s light a fire under it and just see what happens. Drop it ten stories.” The books were complements, not substitutes.

While this textbook story illustrates one aspect of Armen’s thinking, the big epiphanies came working on our joint papers. Unusually for students at UCLA in that era, I didn’t have Armen as a teacher. My first year, Armen was away, and Jack Hirshleifer taught the entire first-year price theory course. Entranced by the finance segment of that year, I found the lure of finance in business school irresistible. But fortune did not abandon me.

I came back to UCLA to teach at the dawn of personal computers. Oh, they were feeble! There was a little room on the eighth floor of Bunche Hall where there were three little Compaq computers—the ones with really tiny green-on-black screens. Portable, sort of, but not like a purse. Armen and I were regulars in this word-processing cave. Armen would get bored and start a conversation by asking some profound question. I’d flounder a bit, tell him I didn’t know, and go back to work. But one day he asked why corporations limit liability. Whew, something to say. It is not a risk story; it is about facilitating transferable shares. Limit liability, and shareholders and contracting creditors can price possible recovery, and the wealth and resources of individual shareholders are then irrelevant. When liability tries to reach beyond the firm’s assets to those of individual shareholders, shareholder wealth matters to value, and this creates reasons for inhibiting share transfers. You can limit liability and still address concern about tort creditors by having the firm carry insurance for torts.

Armen asked “How did you figure this out?” I said, “I don’t know.” “Have you written it down?” “No, it doesn’t seem important enough, it would only be two pages.” “Oh, no, of course it is!” He was right. What I wrote at Armen’s insistence, Woodward (1985), is now in two books of readings on the modern corporation, still in print, still on reading lists, and yes it was more than two pages. The paper by Bargeron and Lehn (2015) in this volume provides empirical confirmation about the impact of limited liability on share transferability. After our conversations about limited liability, Armen never again called me “Joanne,” as in the actress, Joanne Woodward, wife of Paul Newman.

This led to many more discussions about the organization of firms. I was dismayed by the seeming mysticism of “teamwork” as discussed in the old Alchian & Demsetz paper. Does it not all boil down to moral hazard and hold-up, both aspects of information costs, and the potential for the residual claimant to manage these? Armen came to agree, and decided that this, too, was worth writing up. So we started writing. I scribbled down my thoughts. Armen read them and said, “Well, this is right, but it will make Harold (Demsetz) mad. We can’t say it that way. We’ll say it another way.” Armen saw it as his job to bring Harold around.

As we started working on this paper (Alchian and Woodward (1987)), I asked Armen, “What journal should we be thinking of?” Armen said “Oh, don’t worry about that, something will come along”. It went to Rolf Richter’s journal because Armen admired Rolf’s efforts to promote economic analysis of institutions. There are accounts of Armen pulling accepted papers from journals in order to put them into books of readings in honor of his friends, and these stories are true. No journal impressed Armen very much. He thought that if something was good, people would find it and read it.

Soon after the first paper was circulating, Orley Ashenfelter asked Armen to write a book review of Oliver Williamson’s The Economic Institutions of Capitalism (such a brilliant title!). I got enlisted for that project too (Alchian and Woodward (1988)). Armen began writing, but I went back to reread Institutions of Capitalism. Armen gave me what he had written, and I was baffled. “Armen, this stuff isn’t in Williamson.” He asked, “Well, did he get it wrong?” I said, “No, it’s not that he got it wrong. These issues just aren’t there at all. You attribute these ideas to him, but they really come from our other paper.” And he said “Oh, well, don’t worry about that. Some historian will sort it out later. It’s a good place to promote these ideas, and they’ll get the right story eventually.” So, dear reader, now you know.

This from someone who spent his life discussing the efficiencies of private property and property rights—to basically give ideas away in order to promote them? It was a good lesson. I was just starting my ten years in the federal government. In academia, thinkers try to establish property rights in ideas. “This is mine. I thought of this. You must cite me.” In government this is not a winning strategy. Instead, you need to plant an idea, then convince others that it’s their idea so they will help you.

And it was sometimes Armen’s strategy in the academic world too. Only someone who was very confident would do this. Or someone who just cared more about promoting ideas he thought were right than he cared about getting credit for them. Or someone who did not have so much respect for the refereeing process. He was so cavalier!

Armen had no use for formal models that did not teach us to look somewhere new in the known world, nor had he any patience for findings that relied on fancy econometrics. What was Armen’s idea of econometrics? Merton Miller told me. We were chatting about limited liability. Merton asked about evidence. Well, all public firms with transferable shares now have limited liability. But in private, closely held firms, loans nearly always explicitly specify which of the owner’s personal assets are pledged against bank loans. “How do you know?” “From conversations with bankers.” Merton said, “Ah, this sounds like UCLA econometrics! You go to Armen Alchian and you ask, ‘Armen, is this number about right?’ And Armen says, ‘Yeah, that sounds right.’ So you use that number.”

Why is Armen loved so much? It’s not just his contributions to our understanding because many great thinkers are hardly loved at all. Several things stand out. As noted above, Armen’s sense of what is important is very appealing. Ideas are important. Ideas are more important than being important. Don’t fuss over the small stuff or the small-minded stuff, just work on the ideas and get them right. Armen worked at inhibiting inefficient behavior, but never in an obvious way. He would be the first to agree that not all competition is efficient, and in particular that status competition is inefficient. Lunches and dinners with Armen never included conversations about who was getting tenure where or why various papers got in or did not get in to certain journals. He thought it just did not matter very much or deserve much attention.

Armen was intensely curious about the world and interested in things outside of himself. He was one of the least self-indulgent people that I have ever met. It cheered everybody up. Everyone was in a better mood for the often silly questions that Armen would ask about everything, such as, “Why do they use decorations in the sushi bar and not anywhere else? Is there some optimality story here?” Armen recognized his own limitations and was not afraid of them.

Armen’s views on inefficient behavior came out in an interesting way when we were working on the Williamson book review. What does the word “fair” mean? In the early 1970s at UCLA, no one was very comfortable with “fair”. Many would even have said “fair” has no meaning in economics. But then we got to pondering the car-repair person in the desert (remember, Los Angeles is next to a big desert), who is in a position to hold up unlucky motorists whose vehicles break down in a remote place. Why would the mechanic not hold up the motorist and charge a high price? The mechanic has the power. Information about occasional holdups would provoke inefficient avoidance of travel or the taking of ridiculous precautions. But from the individual perspective, why wouldn’t the mechanic engage in opportunistic behavior, on the spot? “Well,” Armen said, “probably he doesn’t do it because he was raised right.” Armen knew what “fair” meant, and was willing to take a stand on it being efficient.

For all his reputation as a conservative, Armen was very interested in Earl Thompson’s ideas about socially efficient institutions, and the useful constraints that collective action could and does impose on us (see, for example, Thompson (1968, 1974)). He had more patience for Earl than any of Earl’s other senior colleagues except possibly Jim Buchanan. Earl could go on all evening and longer about the welfare cost of the status rat race, of militarism and how to discourage it, the brilliance of agricultural subsidies, how no one should listen to corrupt elites, and Armen would smile and nod and ponder.

Armen was a happy teacher. As others attest in this issue, he brought great energy, engagement, and generosity to the classroom. He might have been dressed for golf, but he gave students his complete attention. He especially enjoyed teaching the judges in Henry Manne’s Economics & Law program. One former pupil sought him out and, at dinner, brought up the Apple v. Microsoft copyright dispute. He wanted to discuss the merits of the issues. Armen said oh no, simply get the thing over with ASAP. Armen said that he was a shareholder in both companies, and consequently did not care who won, but cared very much about what resources were squandered on the battle. Though the economics of this perspective was not novel (it was aired in Texaco v. Pennzoil a few years earlier), Armen provided in that conversation a view that neither side had an interest in promoting in court. The reaction was: Oh! Those who followed this case might have been puzzled at the subsequent proceedings in this dispute, but those who heard the conversation at dinner were not.

And Armen was a warm and sentimental person. When I moved to Washington, I left my roller skates in the extra bedroom where I slept when I visited Armen and Pauline. These were old-fashioned skates with two wheels in the front and two in the back, Riedell boots and kryptonite wheels, bought at Rip City Skates on Santa Monica Boulevard (which is still there in 2015! I just looked it up; the proprietor knows all the empty swimming pools within 75 miles). I would take my skates down to the beach and skate from Santa Monica to Venice and back, then go buy some cinnamon rolls at the Pioneer bakery, and bring them back to Mar Vista, and Armen and Pauline and I would eat them. Armen loved this ritual. Is she back yet? When I married Bob Hall and moved back to California, Armen did not want me to take the skates away. So I didn’t.

And here is a story Armen loved: Ron Batchelder was a student at UCLA who is also a great tennis player, a professional tennis player who had to be lured out of tennis and into economics, and who has written some fine economic history and more. He played tennis with Armen regularly for many years. On one occasion before dinner Armen said to Ron, “I played really well today.” Ron said, “Yes, you did; you played quite well today.” And Armen said, “But you know what? When I play better, you play better.” And Ron smiled and shrugged his shoulders. I said, “Ron, is it true?” He shrugged again and said, “Well, a long time ago, I learned to play the customer’s game.” And of course Armen just loved that line. He re-told that story many times.

Armen’s enthusiasm for that story is a reflection of his enthusiasm for life. It was a rare enthusiasm, an extraordinary enthusiasm. We all give him credit for it and we should, because it was an act of choice; it was an act of will, a gift to us all. Armen would have never said so, though, because he was raised right.

References

Alchian, Armen A., William R. Allen, 1964. University Economics. Wadsworth Publishing Company, Belmont, CA.

Alchian, Armen A., William R. Allen, 1977. Exchange and Production: Competition, Coordination, and Control. Wadsworth Publishing Company, Belmont, CA, 2nd edition.

Alchian, Armen A., Woodward, Susan, 1987. “Reflections on the theory of the firm.” Journal of Institutional and Theoretical Economics. 143, 110-136.

Alchian, Armen A., Woodward, Susan, 1988. “The firm is dead: Long live the firm: A review of Oliver E. Williamson’s The Economic Institutions of Capitalism.” Journal of Economic Literature. 26, 65-79.

Bargeron, Leonce, Lehn, Kenneth, 2015. “Limited liability and share transferability: An analysis of California firms, 1920-1940.” Journal of Corporate Finance, this volume.

Hirshleifer, Jack, 1976. Price Theory and Applications. Prentice Hall, Englewood Cliffs, NJ.

Thompson, Earl A., 1968. “The perfectly competitive production of public goods.” Review of Economics and Statistics. 50, 1-12.

Thompson, Earl A., 1974. “Taxation and national defense.” Journal of Political Economy. 82, 755-782.

Woodward, Susan E., 1985. “Limited liability in the theory of the firm.” Journal of Institutional and Theoretical Economics. 141, 601-611.

Repeat after Me: Inflation’s the Cure not the Disease

Last week Martin Feldstein triggered a fascinating four-way exchange with a post explaining yet again why we still need to be worried about inflation. Tony Yates responded first with an explanation of why money printing doesn’t work at the zero lower bound (aka liquidity trap), leading Paul Krugman to comment wearily about the obtuseness of all those right-wingers who just can’t stop obsessing about the non-existent inflation threat when, all along, it was crystal clear that in a liquidity trap, printing money is useless.

I’m still not sure why relatively moderate conservatives like Feldstein didn’t find all this convincing back in 2009. I get, I think, why politics might predispose them to see inflation risks everywhere, but this was as crystal-clear a proposition as I’ve ever seen. Still, even if you managed to convince yourself that the liquidity-trap analysis was wrong six years ago, by now you should surely have realized that Bernanke, Woodford, Eggertsson, and, yes, me got it right.

But no — it’s a complete puzzle. Maybe it’s because those tricksy Fed officials started paying all of 25 basis points on reserves (Japan never paid such interest). Anyway, inflation is just around the corner, the same way it has been all these years.

Which surprisingly (not least to Krugman) led Brad DeLong to rise to Feldstein’s defense (well, sort of), pointing out that there is a respectable argument to be made for why even if money printing is not immediately effective at the zero lower bound, it could still be effective down the road, so that the mere fact that inflation has been consistently below 2% since the crash (except for a short blip when oil prices spiked in 2011-12) doesn’t mean that inflation might not pick up quickly once inflation expectations pick up a bit, triggering an accelerating and self-sustaining inflation as all those hitherto idle balances start gushing into circulation.

That argument drew a slightly dyspeptic response from Krugman, who again pointed out, as had Tony Yates, that at the zero lower bound the demand for cash is virtually unlimited, so that there is no tendency for monetary expansion to raise prices, as if DeLong did not already know that. For some reason, Krugman seems unwilling to accept the implication of the argument in his own 1998 paper that he cites frequently: that for an increase in the money stock to raise the price level – note that there is an implicit assumption that the real demand for money does not change – the increase must be expected to be permanent. (I also note that the argument had been made almost 30 years earlier by Jack Hirshleifer, in his Fisherian text on capital theory, Investment, Interest and Capital.) Thus, on Krugman’s own analysis, the effect of an increase in the money stock is expectations-dependent. A change in monetary policy will be inflationary if it is expected to be inflationary, and it will not be inflationary if it is not expected to be inflationary. And Krugman even quotes himself on the point, referring to

my call for the Bank of Japan to “credibly promise to be irresponsible” — to make the expansion of the base permanent, by committing to a relatively high inflation target. That was the main point of my 1998 paper!

So the question whether the monetary expansion since 2008 will ever turn out to be inflationary depends not on an abstract argument about the shape of the LM curve, but on the evolution of inflation expectations over time. I’m not sure that I’m persuaded by DeLong’s backward-induction argument – an argument that I like enough to have used myself on occasion, while conceding that the logic may not hold in the real world – but there is no logical inconsistency between the backward-induction argument and Krugman’s credibility argument; they simply reflect different conjectures about the evolution of inflation expectations in a world in which there is uncertainty about what the future monetary policy of the central bank is going to be (in other words, a world like the one we inhabit).

Which brings me to the real point of this post: the problem with monetary policy since 2008 has been that the Fed has credibly adopted a 2% inflation target, a target that, it is generally understood, the Fed prefers to undershoot rather than overshoot. Thus, in operational terms, the actual goal is really less than 2%. As long as the inflation target credibly remains less than 2%, the argument about inflation risk is about the risk that the Fed will credibly revise its target upwards.

With both the Wicksellian natural real rate and the natural nominal short-term rate of interest probably below zero, it would have made sense to raise the inflation target to get the natural nominal short-term rate above zero. There were other reasons to raise the inflation target as well, e.g., providing debt relief to debtors, thereby benefitting not only debtors but also those creditors whose debtors would otherwise simply have defaulted.

Krugman takes it for granted that monetary policy is impotent at the zero lower bound, but that impotence is not inherent; it is self-imposed by the credibility of the Fed’s own inflation target. To be sure, changing the inflation target is not a decision that we would want the Fed to take lightly, because it opens up some very tricky time-inconsistency problems. However, in a crisis, you may have to take a chance and hope that credibility can be restored by future responsible behavior once things get back to normal.

In this vein, I am reminded of the 1930 exchange between Hawtrey and Hugh Pattison Macmillan, chairman of the Committee on Finance and Industry, when Hawtrey, testifying before the Committee, suggested that the Bank of England reduce Bank Rate even at the risk of endangering the convertibility of sterling into gold (England eventually left the gold standard a little over a year later):

MACMILLAN. . . . the course you suggest would not have been consistent with what one may call orthodox Central Banking, would it?

HAWTREY. I do not know what orthodox Central Banking is.

MACMILLAN. . . . when gold ebbs away you must restrict credit as a general principle?

HAWTREY. . . . that kind of orthodoxy is like conventions at bridge; you have to break them when the circumstances call for it. I think that a gold reserve exists to be used. . . . Perhaps once in a century the time comes when you can use your gold reserve for the governing purpose, provided you have the courage to use practically all of it.

Of course the best evidence for the effectiveness of monetary policy at the zero lower bound was provided three years later, in April 1933, when FDR suspended the gold standard in the US, causing the dollar to depreciate against gold, triggering an immediate rise in US prices (wholesale prices rising 14% from April through July) and the fastest real recovery in US history (industrial output rising by over 50% over the same period). A recent paper by Andrew Jalil and Gisela Rua documents this amazing recovery from the depths of the Great Depression and the crucial role that changing inflation expectations played in stimulating the recovery. They also make a further important point: that by announcing a price level target, FDR both accelerated the recovery and prevented expectations of inflation from increasing without limit. The 1933 episode suggests that a sharp, but limited, increase in the price-level target would generate a faster and more powerful output response than an incremental increase in the inflation target. Unfortunately, after the 2008 downturn we got neither.
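The speed of that 1933 turnaround is easier to appreciate when the three-month changes cited above are annualized. Here is a rough back-of-the-envelope sketch, using only the figures quoted in the text and standard compounding (the within-period path is not modeled):

```python
# Annualize the April-July 1933 three-month changes cited above:
# a 14% rise in wholesale prices and a 50%+ rise in industrial output.
def annualized(growth, months=3):
    """Convert a growth rate over `months` months to a compound annual rate."""
    return (1 + growth) ** (12 / months) - 1

print(f"Wholesale prices, annualized: {annualized(0.14):.0%}")   # → 69%
print(f"Industrial output, annualized: {annualized(0.50):.0%}")  # → 406%
```

Even allowing for the crudeness of the compounding assumption, nothing else in US macroeconomic history comes close to those rates of recovery.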

Maybe it’s too much to expect that an unelected central bank would take it upon itself to adopt as a policy goal a substantial increase in the price level. Had the Fed announced such a goal after the 2008 crisis, it would have invited a potentially fatal attack, and not just from the usual right-wing suspects, on its institutional independence. Price stability is, after all, part of the dual mandate that the Fed is legally bound to pursue. And it was FDR, not the Fed, that took the US off the gold standard.

But even so, we at least ought to be clear that if monetary policy is impotent at the zero lower bound, the impotence is not caused by any inherent weakness, but by the institutional and political constraints under which it operates in a constitutional system. And maybe there is no better argument for nominal GDP level targeting than that it offers a practical and politically acceptable way of allowing monetary policy to be effective at the zero lower bound.

Is Finance Parasitic?

We all know what a parasite is: an organism that attaches itself to another organism, derives nourishment from the host, and in so doing weakens it, possibly rendering the host unviable and thereby undermining its own existence. Ayn Rand and her all too numerous acolytes were and remain obsessed with parasitism, considering every form of voluntary charity, and especially government assistance to the poor and needy, a form of parasitism whereby the undeserving weak live off of and sap the strength and industry of their betters: the able, the productive, and the creative.

In earlier posts, I have observed that a lot of what the financial industry does is not really productive of net benefits to society, the gains of some coming at the expense of others. This insight was developed by Jack Hirshleifer in his classic 1971 paper “The Private and Social Value of Information and the Reward to Inventive Activity.” Financial trading to a large extent involves nothing but the exchange of existing assets, real or financial, and the profit made by one trader is largely at the expense of the other party to the trade. Because the potential gain to one side of the transaction exceeds the net gain to society, there is a substantial incentive to devote resources to gaining any small and transient informational advantage that can help a trader buy or sell at the right time, making a profit at the expense of another. The social benefit from these valuable, but minimal and transitory, informational advantages is far less than the value of the resources devoted to obtaining them. Thus, much of what the financial sector is doing just drains resources from the rest of society, resources that could be put to far better and more productive use in other sectors of the economy.

So I was really interested to see Timothy Taylor’s recent blog post about Luigi Zingales’s Presidential Address to the American Finance Association, in which Zingales, professor of finance at the University of Chicago Business School, lectured his colleagues about taking a detached and objective view of the financial industry rather than acting as cheerleaders for it, as, he believes, they have been all too inclined to do. Rather than discussing the incentive of the financial industry to over-invest in research in search of transient informational advantages that can be exploited, or to invest billions in high-frequency trading cables to make transient informational advantages more readily exploitable, Zingales mentions a number of other ways that the finance industry uses informational advantages to profit at the expense of the rest of society.

A couple of examples from Zingales.

Financial innovations. Every new product introduced by the financial industry is better understood by the supplier than the customer or client. How many clients or customers have been warned about the latent defects or risks in the products or instruments that they are buying? The doctrine of caveat emptor almost always applies, especially because the customers and clients are often considered to be informed and sophisticated. Informed and sophisticated? Perhaps, but that still doesn’t mean that there is no information asymmetry between such customers and the financial institution that creates financial innovations with the specific intent of exploiting the resulting informational advantage it gains over its clients.

As Zingales points out, we understand that doctors often exploit the informational asymmetry that they enjoy over their patients by overtreating, overmedicating, and overtesting their patients. They do so, notwithstanding the ethical obligations that they have sworn to observe when they become doctors. Are we to assume that the bankers and investment bankers and their cohorts in the financial industry, who have not sworn to uphold even minimal ethical standards, are any less inclined than doctors to exploit informational asymmetries that are no less extreme than those that exist between doctors and patients?

Another example. Payday loans are a routine part of life for many low-income people who live from paycheck to paycheck, and who are in constant danger of being drawn into a downward spiral of overindebtedness, rising interest costs, and financial ruin. Zingales points out that the ruinous effects of payday loans might be mitigated if borrowers chose installment loans instead of loans due in full at maturity. Unsophisticated borrowers seem to prefer single-repayment loans even though such loans in practice are more likely to lead to disaster than installment loans. Because total interest paid is greater under single-repayment loans, the payday-loan industry resists legislation requiring that payday loans be installment loans. Such legislation has been enacted in Colorado with favorable results. Zingales sums up the results of recent research about payday loans:

Given such a drastic reduction in fees paid to lenders, it is entirely relevant to consider what happened to the payday lending supply. In fact, supply of loans increased. The explanation relies upon the elimination of two inefficiencies. First, less bankruptcies. Second, the reduction of excessive entry in the sector. Half of Colorado’s stores closed in the three years following the reform, but each remaining store served 80 percent more customers, with no evidence of a reduced access to funds. This result is consistent with Avery and Samolyk (2010), who find that states with no rate limits tend to have more payday loan stores per capita. In other words, when payday lenders can charge very high rates, too many lenders enter the sector, reducing the profitability of each one of them. Similar to the real estate brokers, in the presence of free entry, the possibility of charging abnormal profit margins leads to too many firms in the industry, each operating below the optimal scale (Flannery and Samolyk, 2007), and thus making only normal profits. Interestingly, the efficient outcome cannot be achieved without mandatory regulation. Customers who are charged the very high rates do not fully appreciate that the cost is higher than if they were in a loan product which does not induce the spiral of unnecessary loan float and thus higher default. In the presence of this distortion, lenders find the opportunity to charge very high fees to be irresistible, a form of catering products to profit from cognitive limitations of the customers (Campbell, 2006). Hence, the payday loan industry has excessive entry and firms operating below the efficient scale. Competition alone will not fix the problem, in fact it might make it worse, because payday lenders will compete in finding more sophisticated ways to charge very high fees to naïve customers, exacerbating both the over-borrowing and the excessive entry.
Competition works only if we restrict the dimension in which competition takes place: if unsecured lending to lower income people can take place only in the form of installment loans, competition will lower the cost of these loans.

One more example of my own. A favorite tactic of the credit-card industry is to offer customers zero-interest loans on transferred balances. Now you might think that banks were competing hard to drive down the excessive cost of borrowing incurred by many credit-card holders for whom borrowing via their credit card is their best way of obtaining unsecured credit. But you would be wrong. Credit-card issuers offer the zero-interest loans because a) they typically charge a 3 or 4 percent service fee off the top; b) they charge a $35 penalty for a late payment; and c) under the fine print of the loan agreement, a late payment also terminates the promotional rate on the transferred balance, raising the interest rate on that balance to some exorbitant level in the range of 20 to 30 percent. Most customers, especially if they haven’t tried a balance transfer before, will not even read the fine print, and so won’t know that a single late payment results in a penalty and the loss of the promotional rate. But even if they are aware of the fine print, they will almost certainly underestimate the likelihood that they will sooner or later miss an installment-payment deadline. I don’t know whether any studies have looked into the profitability of promotional rates for credit-card issuers, but I suspect, given how widespread such offers are, that they are very profitable. Information asymmetry strikes again.
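The arithmetic of these promotional offers is easy to sketch. The parameters below (a 3 percent transfer fee, a $35 late fee, a 25 percent penalty APR, a $5,000 balance) are illustrative assumptions drawn from the ranges mentioned above, not the terms of any actual card, and the balance is treated as constant for simplicity:

```python
# Illustrative cost of a "zero-interest" balance transfer. All terms
# (fee_rate, late_fee, penalty_apr) are assumptions taken from the ranges
# in the text; the balance is held constant to keep the sketch simple.
def transfer_cost(balance, months, fee_rate=0.03, late_fee=35.0,
                  penalty_apr=0.25, missed_payment_month=None):
    cost = balance * fee_rate                 # upfront service fee
    if missed_payment_month is not None:
        cost += late_fee                      # one-time late penalty
        # promotional rate is lost; remaining months accrue penalty interest
        cost += balance * (penalty_apr / 12) * (months - missed_payment_month)
    return cost

balance = 5000.0
print(round(transfer_cost(balance, 12), 2))                          # → 150.0
print(round(transfer_cost(balance, 12, missed_payment_month=3), 2))  # → 1122.5
```

On these assumptions, a single missed payment in month three turns a nominally free loan into one costing over 22 percent of the balance for the year.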

Forget the Monetary Base and Just Pay Attention to the Price Level

Kudos to David Beckworth for eliciting a welcome concession or clarification from Paul Krugman that monetary policy is not necessarily ineffectual at the zero lower bound. The clarification is welcome because Krugman and Simon Wren-Lewis seemed to be making a big deal about insisting that monetary policy at the zero lower bound is useless if it affects only the current, but not the future, money supply, touting the discovery as if it were a point that was not already well understood.

Now it’s true that Krugman is entitled to take credit for having come up with an elegant way of showing the difference between a permanent and a temporary increase in the monetary base, but it’s a point that, WADR, was understood even before Krugman. See, for example, the discussion in chapter 5 of Jack Hirshleifer’s textbook on capital theory (published in 1970), Investment, Interest and Capital, showing that the Fisher equation follows straightforwardly in an intertemporal equilibrium model, so that the nominal interest rate can be decomposed into a real component and an expected-inflation component. If holding money is costless, then the nominal rate of interest cannot be negative, and expected deflation cannot exceed the equilibrium real rate of interest. This implies that, at the zero lower bound, the current price level cannot be raised without raising the future price level proportionately. That is all Krugman was saying in asserting that monetary policy is ineffective at the zero lower bound, even though he couched the analysis in terms of the current and future money supplies rather than in terms of the current and future price levels. But the entire argument is implicit in the Fisher equation. And contrary to Krugman, the IS-LM model (with which I am certainly willing to coexist) offers no unique insight into this proposition; it would be remarkable if it did, because the IS-LM model in essence is a static model that has to be re-engineered to be used in an intertemporal setting.
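The Fisher-equation logic can be made explicit. In standard notation (a restatement of the argument, not Hirshleifer’s own notation), with i the nominal rate, r the real rate, and E[π] expected inflation:

```latex
i \;=\; r + E[\pi], \qquad E[\pi] \;\equiv\; \frac{E[P_{t+1}]}{P_t} - 1 .
```

If holding money is costless, then i ≥ 0, so E[π] ≥ −r: expected deflation cannot exceed the equilibrium real rate. And at the zero lower bound, i = 0 pins down E[π] = −r, so the current price level P(t) can rise only if the expected future price level E[P(t+1)] rises in the same proportion, which is the proposition in the preceding paragraph.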

Here is how Hirshleifer concludes his discussion:

The simple two-period model of choice between dated consumptive goods and dated real liquidities has been shown to be sufficiently comprehensive as to display both the quantity theorists’ and the Keynesian theorists’ predicted results consequent upon “changes in the money supply.” The seeming contradiction is resolved by noting that one result or the other follows, or possibly some mixture of the two, depending upon the precise meaning of the phrase “changes in the quantity of money.” More exactly, the result follows from the assumption made about changes in the time-distributed endowments of money and consumption goods.  pp. 150-51

Another passage from Hirshleifer is also worth quoting:

Imagine a financial “panic.” Current money is very scarce relative to future money – and so monetary interest rates are very high. The monetary authorities might then provide an increment [to the money stock] while announcing that an equal aggregate amount of money would be retired at some date thereafter. Such a change making current money relatively more plentiful (or less scarce) than before in comparison with future money, would clearly tend to reduce the monetary rate of interest. (p. 149)

In this passage Hirshleifer accurately describes the objective of Fed policy since the crisis: provide as much liquidity as needed to prevent a panic, but without even trying to generate a substantial increase in aggregate demand by increasing inflation or expected inflation. The refusal to increase aggregate demand was implicit in the Fed’s refusal to increase its inflation target.

However, I do want to make explicit a point of disagreement between me and Hirshleifer, Krugman and Beckworth. The point is more conceptual than analytical. Although the analysis of monetary policy can formally be carried out either in terms of current and future money supplies, as Hirshleifer, Krugman and Beckworth do, or in terms of current and future price levels, I prefer to carry it out in terms of price levels. For one thing, reasoning in terms of price levels immediately puts you in the framework of the Fisher equation, while thinking in terms of current and future money supplies puts you in the framework of the quantity theory, which I always prefer to avoid.

The problem with the quantity-theory framework is that it assumes that the quantity of money is a policy variable over which a monetary authority can exercise effective control. That mistake, imprinted in our economic intuition by two or three centuries of quantity-theorizing, was regrettably reinforced in the second half of the twentieth century by the preposterous theoretical detour of monomaniacal Friedmanian Monetarism, as if there were no such thing as an identification problem. Thus, to analyze monetary policy by doing thought experiments that change the quantity of money is likely to mislead or confuse.

I can’t think of an effective monetary policy that was ever implemented by targeting a monetary aggregate. The optimal time path of a monetary aggregate can never be specified in advance, so that trying to target any monetary aggregate will inevitably fail, thereby undermining the credibility of the monetary authority. Effective monetary policies have instead tried to target some nominal price while allowing monetary aggregates to adjust automatically given that price. Sometimes the price being targeted has been the conversion price of money into a real asset, as was the case under the gold standard, or an exchange rate between one currency and another, as the Swiss National Bank is now doing with the franc/euro exchange rate. Monetary policies aimed at stabilizing a single price are easy to implement and can therefore be highly credible, but they are vulnerable to sudden changes with highly deflationary or inflationary implications. Nineteenth century bimetallism was an attempt to avoid or at least mitigate such risks. We now prefer inflation targeting, but we have learned (or at least we should have) from the Fed’s focus on inflation in 2008 that inflation targeting can also lead to disastrous consequences.

I emphasize the distinction between targeting monetary aggregates and targeting the price level because David Beckworth in his post is so focused on showing 1) that the expansion of the Fed’s balance sheet under QE has been temporary and 2) that, to have been effective in raising aggregate demand at the zero lower bound, the increase in the monetary base needed to be permanent. And I say: both of the facts cited by David are implied by the fact that the Fed did not raise its inflation target or, preferably, replace its inflation target with a sufficiently high price-level target. With a higher inflation target or a suitable price-level target, the monetary base would have taken care of itself.
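The difference between the two targeting regimes can be sketched numerically. The following is a minimal illustration with toy numbers of my own (not a model from any of the posts discussed): after a one-year 2% undershoot, an inflation targeter lets the miss stand, while a price-level targeter makes it up, so only the latter anchors the future price level:

```python
# Toy comparison of inflation targeting vs price-level targeting after a
# one-time 2% undershoot in year 1. All numbers are illustrative.
def price_paths(years=5, target_infl=0.02, shock=-0.02):
    path = [100.0 * (1 + target_infl) ** t for t in range(years + 1)]
    infl_target, level_target = [100.0], [100.0]
    for t in range(1, years + 1):
        hit = shock if t == 1 else 0.0
        # inflation targeter: aims for 2% from wherever the level now is
        infl_target.append(infl_target[-1] * (1 + target_infl + hit))
        # price-level targeter: returns to the pre-announced path
        level_target.append(path[t] * (1 + hit) if t == 1 else path[t])
    return path, infl_target, level_target

path, it, lt = price_paths()
print(round(it[-1], 1))  # → 108.2  (the miss is never made up: "base drift")
print(round(lt[-1], 1))  # → 110.4  (back on the announced path)
```

Under the level target, a shortfall automatically implies catch-up inflation later, which is exactly the expectations channel through which the monetary base takes care of itself.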

PS If your name is Scott Sumner, you have my permission to insert “NGDP” wherever “price level” appears in this post.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
