Archive for the 'Hayek' Category

Hirshleifer on the Private and Social Value of Information

I have written a number of posts (here, here, here, and here) over the past few years citing an article by one of my favorite UCLA luminaries, Jack Hirshleifer, of the fabled UCLA economics department of the 1950s, 1960s, 1970s and 1980s. Like everything Hirshleifer wrote, the article, “The Private and Social Value of Information and the Reward to Inventive Activity,” published in 1971 in the American Economic Review, is deeply insightful, carefully reasoned, and lucidly explained, reflecting the author’s comprehensive mastery of the whole body of neoclassical microeconomic theory.

Hirshleifer’s article grew out of a whole literature inspired by two of Hayek’s most important articles, “Economics and Knowledge” (1937) and “The Use of Knowledge in Society” (1945). Both articles were concerned with the fact that, contrary to the assumptions in textbook treatments, economic agents don’t have complete information about all the characteristics of the goods being traded or about the prices at which those goods are available. Hayek was aiming to show that markets are characteristically capable of transmitting information held by some agents, in a condensed form, to make it usable by other agents. That role is performed by prices. It is prices that provide both the information and the incentives for economic agents to formulate and tailor their plans and, if necessary, to readjust those plans in response to changed conditions. Agents need not know what those underlying changes are; they need only observe, and act on, the price changes that result from them.

Hayek’s argument, though profoundly insightful, was not totally convincing in demonstrating the superiority of the pure “free market,” for three reasons.

First, economic agents base decisions, as Hayek himself was among the first to understand, not just on actual current prices, but also on expected future prices. And while traders sometimes (though usually don’t) know what the current price of something is, one can only guess, not know, what the price of that thing will be in the future. So, the work of providing the information individuals need to make good economic decisions cannot be accomplished, even in principle, just by the adjustment of prices in current markets. People also need enough information to make good guesses, that is, to form correct expectations, about future prices.

Second, economic agents don’t automatically know all prices. The assumption that every trader knows exactly what prices are before executing plans to buy and sell is true, if at all, only in highly organized markets where prices are publicly posted and traders can always buy and sell at the posted price. In most other markets, transactors must devote time and effort, through search, advertising, or some other more or less costly discovery method, to find out what current prices are and what the characteristics are of the goods they are interested in buying. If agents aren’t fully informed even about current prices, they don’t necessarily make good decisions.

Libertarians, free marketeers, and other Hayek acolytes often like to credit Hayek with having solved, or having shown how “the market” solves, “the knowledge problem,” a problem that Hayek definitively showed a central-planning regime to be incapable of solving. But the solution at best is only partial, and certainly not robust, because markets never transmit all available relevant information. That’s because markets transmit only information about costs and valuations known to private individuals, but there is a lot of information about public or social valuations and costs that is not known to private individuals and rarely if ever gets fed into, or is transmitted by, the price system: valuations of public goods and the social costs of pollution, for example.

Third, a lot of information is not obtained or transmitted unless it is acquired, and acquiring information is costly. Economic agents must search for relevant information about the goods and services that they are interested in obtaining and about the prices at which those goods and services are available. Moreover, agents often engage in transactions in which one side has an information advantage over the other. That advantage may make it impossible for the two parties to reach mutually acceptable terms, because a party who realizes that the counterparty is better informed may be unwilling to risk being taken advantage of. Sometimes these problems can be surmounted by creative contractual arrangements or legal interventions, but often they can’t.

To recognize the limitations of Hayek’s insight is not to minimize its importance, either in its own right or as a stimulus to further research. Important early contributions (all published between 1961 and 1970) by Stigler (“The Economics of Information”), Ozga (“Imperfect Markets through Lack of Knowledge”), Arrow (“Economic Welfare and the Allocation of Resources for Invention”), Demsetz (“Information and Efficiency: Another Viewpoint”) and Alchian (“Information Costs, Pricing, and Resource Unemployment”) all analyzed the problem of incomplete and limited information and the incentives for acquiring information, the institutions and market arrangements that arise to cope with limited information, and the implications of these limitations and incentives for economic efficiency. They can all be traced directly or indirectly to Hayek’s early contributions. Among the important results that seem to follow from these early papers was that, because those discovering or creating new knowledge cannot claim full property rights over it through patents or other forms of intellectual property, and so cannot appropriate the net benefits accruing from it, the incentive to create new knowledge is less than optimal.

Here is where Hirshleifer’s paper enters the picture. Is more information always better? It would certainly seem that more of any good is better than less. But how valuable is new information? And are the incentives to create or discover new information aligned with the value of that information? Hayek’s discussion implicitly assumed that the amount of information in existence is a given stock, at least in the aggregate. How can the information that already exists be optimally used? Markets help us make use of the information that already exists. But the problem addressed by Hirshleifer was whether the incentives to discover and create new information call forth the optimal investment of time, effort and resources to make new discoveries and create new knowledge.

Instead of focusing on the incentives to search for information about existing opportunities, Hirshleifer analyzed the incentives to learn about uncertain resource endowments and about the productivity of those resources.

This paper deals with an entirely different aspect of the economics of information. We here revert to the textbook assumption that markets are perfect and costless. The individual is always fully acquainted with the supply-demand offers of all potential traders, and an equilibrium integrating all individuals’ supply-demand offers is attained instantaneously. Individuals are unsure only about the size of their own commodity endowments and/or about the returns attainable from their own productive investments. They are subject to technological uncertainty rather than market uncertainty.

Technological uncertainty brings immediately to mind the economics of research and invention. The traditional position has been that the excess of the social over the private value of new technological knowledge leads to underinvestment in inventive activity. The main reason is that information, viewed as a product, is only imperfectly appropriable by its discoverer. But this paper will show that there is a hitherto unrecognized force operating in the opposite direction. What has been scarcely appreciated in the literature, if recognized at all, is the distributive aspect of access to superior information. It will be seen below how this advantage provides a motivation for the private acquisition and dissemination of technological information that is quite apart from – and may even exist in the absence of – any social usefulness of that information. (p. 561)

The key insight motivating Hirshleifer was that privately held knowledge enables someone possessing that knowledge to anticipate future price movements once the privately held information becomes public. If you can anticipate a future price movement that no one else can, you can confidently trade with others who don’t know what you know, and then wait for the profit to roll in when the less well-informed acquire the knowledge that you have. By assumption, the newly obtained knowledge doesn’t affect the quantity of goods available to be traded, so acquiring new knowledge or information provides no social benefit. In a pure-exchange model, newly discovered knowledge provides no net social benefit; it only enables better-informed traders to anticipate price movements that less well-informed traders don’t see coming. Any gains from new knowledge are exactly matched by the losses suffered by those without that knowledge. Hirshleifer called the kind of knowledge that enables one to anticipate future price movements “foreknowledge,” which he distinguished from actual discovery.

The type of information represented by foreknowledge is exemplified by ability to successfully predict tomorrow’s (or next year’s) weather. Here we have a stochastic situation: with particular probabilities the future weather might be hot or cold, rainy or dry, etc. But whatever does actually occur will, in due time, be evident to all: the only aspect of information that may be of advantage is prior knowledge as to what will happen. Discovery, in contrast, is correct recognition of something that is hidden from view. Examples include the determination of the properties of materials, of physical laws, even of mathematical attributes (e.g., the millionth digit in the decimal expansion of “π”). The essential point is that in such cases nature will not automatically reveal the information; only human action can extract it. (562)

Hirshleifer’s result, though derived in the context of a pure-exchange economy, is very powerful, implying that any expenditure of resources devoted to finding out new information that enables its first possessor to predict price changes and reap profits from trading is unambiguously wasteful, reducing the total consumption of the community.

[T]he community as a whole obtains no benefit, under pure exchange, from either the acquisition or the dissemination (by resale or otherwise) of private foreknowledge. . . .

[T]he expenditure of real resources for the production of technological information is socially wasteful in pure exchange, as the expenditure of resources for an increase in the quantity of money by mining gold is wasteful, and for essentially the same reason. Just as a smaller quantity of money serves monetary functions as well as a larger, the price level adjusting correspondingly, so a larger amount of foreknowledge serves no social purpose under pure exchange that the smaller amount did not. (pp. 565-66)
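To make the purely distributive logic concrete, here is a minimal numerical sketch (my own illustration, not a model from Hirshleifer’s paper): an informed trader who foresees tomorrow’s price buys today from an uninformed trader, and, valued at the post-revelation price, the informed trader’s gain is exactly the uninformed trader’s loss.

```python
# A minimal two-trader sketch (my illustration, not Hirshleifer's own model)
# of why foreknowledge is purely distributive under pure exchange.

P_TODAY = 1.00      # price at which the trade takes place
P_TOMORROW = 1.50   # price once the information becomes public
QTY = 40            # units the informed trader buys from the uninformed one

# Wealth changes valued at tomorrow's (post-revelation) price:
informed_gain = QTY * (P_TOMORROW - P_TODAY)      # bought cheap, holds dear
uninformed_gain = -QTY * (P_TOMORROW - P_TODAY)   # sold cheap, forwent the gain

# The gains net out to zero: the trade redistributes wealth but creates none,
# since the total endowment of goods is unchanged by the trade.
assert informed_gain + uninformed_gain == 0
print(f"informed: {informed_gain:+.2f}, uninformed: {uninformed_gain:+.2f}, "
      f"social total: {informed_gain + uninformed_gain:+.2f}")
```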

Relaxing the assumption that there is no production does not alter the conclusion, even though the kind of information discovered could, in principle, lead to efficient production decisions that increase the output of goods whose prices, as a result of the new information, rise sooner than they otherwise would have. For if the foreknowledge is privately obtained, the private incentive is to use that information by trading with another, less-well-informed, trader at a price to which the other trader would not agree were he not at an information disadvantage. The private incentive, in other words, is not to use the foreknowledge to alter production decisions but to use it to trade with, and profit from, those with inferior knowledge.

[A]s under the regime of pure exchange, private foreknowledge makes possible large private profit without leading to socially useful activity. The individual would have just as much incentive as under pure exchange (even more, in fact) to expend real resources in generating socially useless private information. (p. 567)

If the foreknowledge were publicly available, production incentives would shift production toward more valuable products. However, the private gain from keeping the information private greatly exceeds the private value of the information once it is public. Under some circumstances, private individuals may have an incentive to publicize their private information in order to cause the price increases in expectation of which they have taken speculative positions. But it is primarily the gain from foreseen price changes, not the gain from more efficient production decisions, that creates the incentive to discover foreknowledge.

The key factor underlying [these] results . . . is the distributive significance of private foreknowledge. When private information fails to lead to improved productive alignments (as must necessarily be the case in a world of pure exchange, and also in a regime of production unless there is dissemination effected in the interest of speculation or resale), it is evident that the individual’s source of gain can only be at the expense of his fellows. But even where information is disseminated and does lead to improved productive commitments, the distributive transfer gain will surely be far greater than the relatively minor productive gain the individual might reap from the redirection of his own real investment commitments. (Id.)

Moreover, better-informed individuals (indeed, individuals who merely wrongly believe themselves to be better informed) will perceive it to be in their self-interest to expend resources to disseminate the information in the expectation that the ensuing price changes will redound to their profit. The private gain expected from disseminating information far exceeds the social benefit from the price changes once the new information is disseminated; the social benefit from those price changes corresponds to an improved allocation of resources, but that improvement will be very small compared to the expected private profit from anticipating the price change and trading with those who don’t anticipate it.

Hirshleifer then turns from the value of foreknowledge to the value of discovering new information about the world or about nature that makes a contribution to total social output by causing a shift of resources to more productive uses. Inasmuch as the discovery of new information about the world reveals previously unknown productive opportunities, it might be thought that the private incentive to devote resources to the discovery of technological information about productive opportunities generates substantial social benefits. But Hirshleifer shows that here, too, because the private discovery of information about the world creates private opportunities for gain by trading on the consequent knowledge of future price changes, the private incentive to discover technological information may well exceed the social value of the discovery.

We need only consider the more general regime of production and exchange. Given private, prior, and sure information of event A [a state of the world in which a previously unknown natural relationship has been shown to exist] the individual in a world of perfect markets would not adapt his productive decisions if he were sure the information would remain private until after the close of trading. (p. 570)

Hirshleifer is saying that the discovery of a previously unknown property of the world can lead to an increase in total social output only by causing productive resources to be reallocated, but that reallocation can occur only if and when the new information is disclosed. So if someone discovers a previously unknown property of the world, the discoverer can profit from that information by anticipating the price effect likely to result once the information is disseminated and then making a speculative transaction based on the expectation of a price change. A corollary of this argument is that individuals who think that they are better informed about the world will take speculative positions based on their beliefs; insofar as their investments in discovering properties of the world lead them to incorrect beliefs, those investments will go unrewarded. The net social return to information gathering and discovery is thus almost certainly negative.

The obvious way of acquiring the private information in question is, of course, by performing technological research. By a now familiar argument we can show once again that the distributive advantage of private information provides an incentive for information-generating activity that may quite possibly be in excess of the social value of the information. (Id.)

Finally, Hirshleifer turns to the implications for patent policy of his analysis of the private and social value of information.

The issues involved may be clarified by distinguishing the “technological” and “pecuniary” effects of invention. The technological effects are the improvements in production functions . . . consequent upon the new idea. The pecuniary effects are the wealth shifts due to the price revaluations that take place upon release and/or utilization of the information. The pecuniary effects are purely redistributive.

For concreteness, we can think in terms of a simple cost-reducing innovation. The technological benefit to society is, roughly, the integrated area between the old and new marginal-cost curves for the preinvention level of output plus, for any additional output, the area between the demand curve and the new marginal-cost curve. The holder of a (perpetual) patent could ideally extract, via a perfectly discriminatory fee policy, this entire technological benefit. Equivalence between the social and private benefits of innovation would thus induce the optimal amount of private inventive activity. Presumably it is reasoning of this sort that underlies the economic case for patent protection. (p. 571)
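In symbols (my notation, not Hirshleifer’s), letting $MC_0$ and $MC_1$ denote the old and new marginal-cost curves, $D$ the inverse demand curve, $q_0$ the preinvention output, and $q_1$ the postinvention output, the technological benefit Hirshleifer describes is

$$B = \int_0^{q_0}\left[MC_0(q) - MC_1(q)\right]dq \;+\; \int_{q_0}^{q_1}\left[D(q) - MC_1(q)\right]dq,$$

the first integral being the cost saving on the preinvention output, the second the surplus generated on the additional output; a perfectly discriminating perpetual patent holder would, on this reasoning, extract the entire flow $B$.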

Here Hirshleifer is uncritically restating the traditional analysis of the social benefit from new technological knowledge. But the analysis overstates the benefit by assuming, incorrectly, that with no patent protection the discovery would never be made. If the discovery would eventually be made without patent protection, then the social benefit attributable to the patent is only the area indicated over the limited time horizon by which the patent accelerated the discovery, so a perpetual patent enabling its holder to extract in perpetuity all the additional consumer and producer surplus flowing from the invention would overcompensate the patent holder.

Nor does Hirshleifer mention the tendency of patents to increase the costs of invention, research and development owing to the royalties that subsequent inventors must pay existing patent holders for infringing inventions, even when those inventions were, or would have been, discovered with no knowledge of the patented invention. While rewarding some inventions and inventors, patent protection penalizes or blocks subsequent inventions and inventors. Inventions are outputs, but they are also inputs. If the use of past inventions is made more costly to new inventors, it is not clear that the net result will be an increase in the rate of invention.

Moreover, the knowledge that an existing patent, or a patent that issues before a new invention is introduced, may block or penalize that invention can in some cases cause an overinvestment in research, as inventors race to gain the sole right to an invention, seeking the right to exclude others while avoiding being excluded themselves.

Hirshleifer does mention some reasons why maximally rewarding patent holders for their inventions may lead to suboptimal results, but he fails to acknowledge that the conventional assessment of the social gain from new invention is substantially overstated, or that patents may well have a negative effect on inventive activity in fields in which patent holders have gained the right to exclude potentially infringing inventions, even when the infringing inventions would have been made without the knowledge publicly disclosed by the patent holders in their patent applications.

On the other side are the recognized disadvantages of patents: the social costs of the administrative-judicial process, the possible anti-competitive impact, and restriction of output due to the marginal burden of patent fees. As a second-best kind of judgment, some degree of patent protection has seemed a reasonable compromise among the objectives sought.

Of course, that judgment about the social utility of patents is not universally accepted, and authorities from Arnold Plant to Fritz Machlup and, most recently, Michele Boldrin and David Levine have been extremely skeptical of the arguments in favor of patent protection, copyright, and other forms of intellectual property.

However, Hirshleifer advances a different counter-argument against patent protection based on his distinction between the private and social gains derived from information.

But recognition of the unique position of the innovator for forecasting and consequently capturing portions of the pecuniary effects – the wealth transfers due to price revaluation – may put matters in a different light. The “ideal” case of the perfectly discriminating patent holder earning the entire technological benefit is no longer so ideal. (pp. 571-72)

Of course, as I have pointed out, the “ideal” case never was ideal.

For the same inventor is in a position to reap speculative profits, too; counting these as well, he would clearly be overcompensated. (p. 572)

Indeed!

Hirshleifer goes on to recognize that the capacity to profit from speculative activity may be beyond the capacity or the ken of many inventors.

Given the inconceivably vast number of potential contingencies and the costs of establishing markets, the prospective speculator will find it costly or even impossible to purchase neutrality from “irrelevant” risks. Eli Whitney [inventor of the cotton gin, who obtained one of the first US patents for his invention in 1794] could not be sure that his gin would make cotton prices fall: while a considerable force would clearly be acting in that direction, a multitude of other contingencies might also have possibly affected the price of cotton. Such “uninsurable” risks gravely limit the speculation feasible with any degree of prudence. (Id.)

Hirshleifer concludes that there is no compelling case either for or against patent protection, because the standard discussion of the case for patent protection has not taken into consideration the potential profit that inventors can gain by speculating on the anticipated price effects of their patents. And while the argument that inventors are unlikely to be adept at making such speculative plays is a serious one, we have also seen the rise of patent trolls that buy up patent rights from inventors and then file lawsuits against suspected infringers. In a world without patent protection, it is entirely possible that patent trolls would reinvent themselves as patent speculators, buying up information about new inventions from inventors and using that information to engage in speculative trading. By acquiring a portfolio of inventions, such invention speculators could pool the risks of speculation over their entire portfolio, enabling them to speculate more effectively than any single inventor could on his own invention. Hirshleifer concludes as follows:

Even though practical considerations limit the effective scale and consequent impact of speculation and/or resale [but perhaps not as much as Hirshleifer thought], the gains thus achievable eliminate any a priori anticipation of underinvestment in the generation of new technological knowledge. (p. 574)

And I reiterate one last time that Hirshleifer arrived at his non-endorsement of patent protection even while accepting the overstated estimate of the social value of inventions and neglecting the tendency of patents to increase the cost of inventive activity.


My Paper on Hayek, Hicks and Radner and 3 Equilibrium Concepts Now Available on SSRN

A little over a year ago, I posted a series of posts (here, here, here, here, and here) that came together as a paper (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational-Expectations”) that I presented at the History of Economics Society meeting in Toronto in June 2017. After further revisions, I posted the introductory section and the concluding section in April before presenting the paper at the Colloquium on Market Institutions and Economic Processes at NYU.

I have since been making further revisions and tweaks to the paper as well as adding the names of Hicks and Radner to the title, and I have just posted the current version on SSRN where it is available for download.

Here is the abstract:

Along with Erik Lindahl and Gunnar Myrdal, F. A. Hayek was among the first to realize that the necessary conditions for intertemporal, as opposed to stationary, equilibrium could be expressed in terms of correct expectations of future prices, often referred to as perfect foresight. Subsequently, J. R. Hicks further elaborated the concept of intertemporal equilibrium in Value and Capital in which he also developed the related concept of a temporary equilibrium in which future prices are not correctly foreseen. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of his original 1928 paper on intertemporal equilibrium. As a preliminary, the paper explains the significance of Hayek’s 1937 distinction between correct expectations and perfect foresight. In non-chronological order, the three developments of interest are: (1) Roy Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; (2) Hicks’s temporary equilibrium model, and an important extension of that model by C. J. Bliss; (3) the Muth rational-expectations model and its illegitimate extension by Lucas from its original microeconomic application into macroeconomics. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model would seem to have been the natural development of Hayek’s approach. The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and ignores the fundamental Hayekian insights about the meaning of intertemporal equilibrium.

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism,” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann and had been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely (and in my view mistakenly) regarded as an essential pillar of the nineteenth-century liberal order, the international gold standard, collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler, less relentlessly laissez-faire, version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two young, ambitious Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, then a first-term Senator from New Jersey who had graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980 and had subsequently gotten enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making, thereby improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy, while neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism, which favored highly progressive income taxation to redistribute income from rich to poor, along with price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and to serve as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported, with little evidence of embarrassment, by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and the decade-long rotting residue of Nixon’s controls, the controls on crude oil and gasoline prices, finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform and establishing Bradley and Gephardt as major figures in the Democratic Party, it was never embraced by the Democratic grassroots. Gephardt himself abandoned the neoliberal banner in 1988 when he ran for President as a protectionist, pro-Labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse for leftists to impugn the motives of their ideological and political opponents, in exactly the same way that right-wingers use “liberal” as a term of abuse (one among many, of course) with which to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing than they are to tolerate the outcomes of market forces.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that they were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward, pardon the cliché, a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end, not a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily, not because you require him to do so. If you want that assistance you must secure it not by command but by persuasion. Persuasion can be accomplished in two ways: either by argument, persuading the other person to share your objective, or, if you can’t or won’t persuade the person to share your objective, by offering some form of compensation to induce the person to provide you the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like rules of grammar. Just as rules of grammar don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that they allow agents sufficient space to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life was as much a legal philosopher as an economist, but it is a view that John Rawls, Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by them, because the ruler can change the law to permit any action that the ruler wants to take. How, then, is the rule of law consistent with a ruler empowered to make the very law to which he is supposedly subject? That is the dilemma with which every liberal state must cope. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling, because reducing them would likely have entailed little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions in lower tax brackets, which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income tax rates as inherently discriminatory, imposing a disproportionate burden on a minority of the population, the wealthy. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could grant to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for other reasons unrelated to income, but even a highly progressive income tax can’t be discriminatory on its face, as Hayek alleged, when the progressivity is embedded in a schedule of rates applicable to everyone whose income reaches the specified thresholds, as the sketch below illustrates.
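Here is a minimal bracket calculator (an illustrative schedule loosely echoing the Bradley 14/26/30 rates quoted above, not any actual tax code) showing how a progressive schedule is facially uniform: every taxpayer faces the identical marginal rate on the identical slice of income.

```python
# Illustrative progressive schedule (hypothetical brackets, not any
# actual tax code): the same rules apply to every taxpayer alike.

BRACKETS = [          # (lower threshold of bracket, marginal rate)
    (0,       0.14),
    (30_000,  0.26),
    (70_000,  0.30),
]

def tax_due(income: float) -> float:
    """Each rate applies only to the slice of income falling within its
    bracket, and the schedule is the same for every taxpayer."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            tax += (min(income, upper) - lower) * rate
    return tax

# A middle and a very high earner face the identical schedule: both pay
# 14% on their first $30,000 and 26% on the next $40,000.
for income in (50_000, 500_000):
    print(f"income {income:>7,}: tax {tax_due(income):>10,.2f}")
```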

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the earner’s incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are not necessary to call forth the efforts required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort or output. Second, we know, as a result of a classic 1971 paper by Jack Hirshleifer, that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. The taxation of high incomes having been reduced so drastically, the number of people earning very high incomes since 1986 has grown very rapidly. Does that increase reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, the occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes has risen, as has the social acceptability of conspicuous — even brazen — public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.

Hayek, Radner and Rational-Expectations Equilibrium

In revising my paper on Hayek and Three Equilibrium Concepts, I have made some substantial changes to the last section which I originally posted last June. So I thought I would post my new updated version of the last section. The new version of the paper has not been submitted yet to a journal; I will give a talk about it at the colloquium on Economic Institutions and Market Processes at the NYU economics department next Monday. Depending on the reaction I get at the Colloquium and from some other people I will send the paper to, I may, or may not, post the new version on SSRN and submit to a journal.

In this section, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. It is noteworthy that in his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the one normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents can make inferences about the beliefs of other agents when observed prices differ from the prices that the agents had expected. Agents attribute the differences between observed and expected prices to the superior information held by better-informed agents. As they assimilate the information that must have caused observed prices to deviate from their expectations, agents revise their own expectations accordingly, which, in turn, leads to further revisions in plans, expectations and outcomes.

There is a somewhat famous historical episode of inferring otherwise unknown or even secret information from publicly available data about prices. In 1954, one very rational agent, Armen Alchian, was able to identify which chemicals were being used in making the newly developed hydrogen bomb by looking for companies whose stock prices had risen too rapidly to be otherwise explained. Alchian, who spent almost his entire career at UCLA while moonlighting at the nearby Rand Corporation, wrote a paper at Rand listing the chemicals used in making the hydrogen bomb. When news of his unpublished paper reached officials at the Defense Department – the Rand Corporation (from whose files Daniel Ellsberg took the Pentagon Papers) having been started as a think tank with funding by the Department of Defense to do research on behalf of the U.S. military – the paper was confiscated from Alchian’s office at Rand and destroyed. (See Newhard’s paper for an account of the episode and a reconstruction of Alchian’s event study.)

But Radner also showed that the ability of some agents to infer the information on which other agents are acting, and which is causing prices to differ from the prices that had been expected, does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of future prices based on common knowledge. Radner’s result reinforces Hayek’s insight, upon which I remarked above, that although expectations are equilibrating variables, there is no economic mechanism that tends to bring expectations toward their equilibrium values. There is no feedback mechanism, corresponding to the normal mechanism for adjusting market prices in response to perceived excess demands or supplies, that operates on price expectations. The heavy lifting of bringing expectations into correspondence with what the future holds must be done by the agents themselves; the magic of the market goes only so far.

Although Radner’s conception of rational expectations differs from the more commonly used meaning of the term, his conception helps us understand the limitations of the conventional “rational expectations” assumption in modern macroeconomics, which is that the price expectations formed by the agents populating a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is an important property of any model. If one assumes that the outcome expected by agents in a model is the equilibrium predicted by the model, then, under those expectations, the solution of the model ought to be the equilibrium of the model. If the solution of the model is somehow different from what agents in the model expect, then there is something really wrong with the model.

What kind of crazy model would have the property that correct expectations turn out not to be self-fulfilling? A model in which correct expectations are not self-fulfilling is a nonsensical model. But there is a huge difference between saying (a) that a model should have the property that correct expectations are self-fulfilling and saying (b) that the agents populating the model understand how the model works and, based on their knowledge of the model, form expectations of the equilibrium predicted by the model.

Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t credibly claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a methodological imperative. But the current sacrosanct status of rational expectations in modern macroeconomics has been achieved largely through methodological tyrannizing.

In his 1937 paper, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most faithful description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that, over time, expectations somehow do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth (1961), he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the cobweb-cycle model in which producers base current decisions about how much to produce for the following period on the currently observed price. But with a one-period time lag between production decisions and realized output, as is the case in agricultural markets in which the initial application of inputs does not result in output until a subsequent time period, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would instead form more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture about how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
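A small simulation (my own sketch, with made-up linear demand and supply parameters rather than anything from Muth’s paper) illustrates the contrast: under naive, static expectations the cobweb price path oscillates around the equilibrium, while under rational expectations the expected price is simply the fixed point the model itself implies.

```python
# Cobweb model sketch (illustrative linear parameters, not Muth's own
# numbers): demand q = A - B*p, supply q = C + D*E[p], with a one-period
# production lag, so supply depends on the price expected last period.

A, B = 100.0, 2.0   # demand intercept and slope
C, D = 10.0, 1.5    # supply intercept and slope (D < B, so the cobweb converges)

def market_price(expected_price: float) -> float:
    """Price that clears the market given supply built on the expected price."""
    quantity_supplied = C + D * expected_price
    return (A - quantity_supplied) / B   # invert demand: p = (A - q) / B

# Naive (static) expectations: producers expect last period's price.
p = 40.0                                 # arbitrary starting price
naive_path = []
for _ in range(8):
    p = market_price(p)                  # expectation = previous price
    naive_path.append(round(p, 2))

# Rational expectations: the expected price is the model's own fixed
# point, E[p] = p*, found by solving A - B*p = C + D*p directly.
p_star = (A - C) / (B + D)

print("naive-expectations path:", naive_path)      # oscillates toward p*
print("rational-expectations price:", round(p_star, 2))
```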

So, while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point, one that Hayek understood better than perhaps anyone else, is that there is a difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. It is those subtle interactions that allow the kinds of informational inferences that, based on differences between expected and realized prices of the sort contemplated by Alchian and Radner, can sometimes be made. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

The key, but far from the only, error of the rational-expectations methodology that rules modern macroeconomics is the presumption that rational expectations somehow cause or bring about an intertemporal equilibrium. It is certainly a fact that people try very hard to use all the information available to them to predict what the future has in store, and any new bit of information not previously possessed will be rapidly assessed and assimilated and will inform a possibly revised set of expectations of the future. But there is no reason to think that this ongoing process of information gathering, processing and evaluation leads people to formulate correct expectations of the future or of future prices. Indeed, Radner proved that, even under strong assumptions, there is no necessity that the outcome of a process of information revision based on the differences between observed and expected prices leads to an equilibrium.

So it cannot be rational expectations that lead to equilibrium. On the contrary, rational expectations are a property of equilibrium. To speak of a “rational-expectations equilibrium” is to utter a truism. There can be no rational expectations in the macroeconomy except in an equilibrium state, because correct expectations, as Hayek showed, are a defining characteristic of equilibrium. Outside of equilibrium, expectations cannot be rational. Failure to grasp that point is what led Morgenstern astray in thinking that the Holmes-Moriarty story demonstrated the nonsensical nature of equilibrium. It simply demonstrated that Holmes and Moriarty were playing a non-repeated game in which an equilibrium did not exist.

To think of rational expectations as somehow bringing about equilibrium is nothing but a category error, akin to thinking that a triangle is caused by having angles that add up to 180 degrees. The 180-degree sum of the angles of a triangle doesn’t cause the triangle; it is a property of the triangle.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all but one future relative price is known in advance. That assumption leaves incorrect expectations possible for only two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists).

Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are just a couple of variables about which agents have to form their rational expectations. The radical simplification of the expectational requirements for achieving a supposedly micro-founded equilibrium belies the claim to have achieved anything of the sort. Whether the micro-foundational pretense affected — with apparently sincere methodological fervor — by modern macroeconomics is merely self-delusional or a deliberate hoax perpetrated on a generation of unsuspecting students is an interesting question, but one lacking any practical significance.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is, and how difficult it is to explain any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

 

On Equilibrium in Economic Theory

Here is the introduction to a new version of my paper, “Hayek and Three Concepts of Intertemporal Equilibrium” which I presented last June at the History of Economics Society meeting in Toronto, and which I presented piecemeal in a series of posts last May and June. This post corresponds to the first part of this post from last May 21.

Equilibrium is an essential concept in economics. While equilibrium is an essential concept in other sciences as well, and was probably imported into economics from physics, its physical meaning cannot be straightforwardly carried over into economics. The dissonance between the physical meaning of equilibrium and its economic interpretation required a lengthy process of explication and clarification before the concept, and its essential, though limited, role in economic theory, could be coherently explained.

The concept of equilibrium having originally been imported from physics at some point in the nineteenth century, economists probably thought it natural to think of an economic system in equilibrium as analogous to a physical system at rest, in the sense of a system in which there was no movement or in which all movements were repetitive. But what would it mean for an economic system to be at rest? The obvious answer was to say that the prices of goods and the quantities produced, exchanged and consumed would not change. If supply equals demand in every market, and if no exogenous disturbance — e.g., in population, technology, or tastes — displaces the system, then there would seem to be no reason for the prices paid and quantities produced in that system to change. But that conception of an economic system at rest was understood to be overly restrictive, given the large, and perhaps causally important, share of economic activity – savings and investment – that is predicated on the assumption and expectation that prices and quantities will not remain constant.

The model of a stationary economy at rest, in which all economic activity simply repeats what has happened before, did not seem very satisfying or informative to economists, but that view of equilibrium remained dominant in the nineteenth century and for perhaps the first quarter of the twentieth. Equilibrium was not an actual state that an economy could achieve; it was just an end state that economic processes would move toward if given sufficient time to play themselves out with no disturbing influences. This idea of a stationary timeless equilibrium is found in the writings of the classical economists, especially Ricardo and Mill, who used the idea of a stationary state as the end-state toward which natural economic processes were driving an economic system.

This, not very satisfactory, concept of equilibrium was undermined when Jevons, Menger, Walras, and their followers began to develop the idea of optimizing decisions by rational consumers and producers. The notion of optimality provided the key insight that made it possible to refashion the earlier classical equilibrium concept into a new, more fruitful and robust, version.

If each economic agent (household or business firm) is viewed as making optimal choices, based on some scale of preferences, and subject to limitations or constraints imposed by their capacities, endowments, technologies, and the legal system, then the equilibrium of an economy can be understood as a state in which each agent, given his subjective ranking of the feasible alternatives, is making an optimal decision, and each optimal decision is both consistent with, and contingent upon, those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell. But every decision, just like every piece in a jig-saw puzzle, must fit perfectly with every other decision. If any decision is suboptimal, none of the other decisions contingent upon that decision can be optimal.
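The notion of mutually consistent optimal plans can be stated compactly in modern notation (mine, not that of any of the authors discussed here). For an exchange economy with m agents, an equilibrium is a price vector p* and a set of plans (x_1*, …, x_m*) such that

\[ x_i^* \in \arg\max_x \{\, u_i(x) : p^* \cdot x \le p^* \cdot e_i \,\} \ \text{ for each agent } i, \qquad \sum_{i=1}^m x_i^* = \sum_{i=1}^m e_i , \]

where e_i is agent i’s endowment: each plan is optimal for the agent choosing it at the prices p*, and the plans are mutually consistent in the sense that markets clear.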

The idea of an equilibrium as a set of independently conceived, mutually consistent, optimal plans was latent in the earlier notions of equilibrium, but it could be coherently articulated only on the basis of a notion of optimality. Originally framed in terms of utility maximization, the notion was gradually extended to encompass the ideas of cost minimization and profit maximization. The general concept of an optimal plan having been grasped, it then became possible to formulate a generically economic idea of equilibrium, not in terms of a system at rest, but in terms of the mutual consistency of optimal plans. Once equilibrium was conceived as the mutual consistency of optimal plans, the needless restrictiveness of defining equilibrium as a system at rest became readily apparent, though it remained little noticed, and its significance overlooked, for quite some time.

Because the defining characteristics of economic equilibrium are optimality and mutual consistency, change, even non-repetitive change, is not logically excluded from the concept of equilibrium as it was from the idea of an equilibrium as a stationary state. An optimal plan may be carried out, not just at a single moment, but over a period of time. Indeed, the idea of an optimal plan is, at the very least, suggestive of a future that need not simply repeat the present. So, once the idea of equilibrium as a set of mutually consistent optimal plans was grasped, it was to be expected that the concept of equilibrium could be formulated in a manner that accommodates the existence of change and development over time.

But the manner in which change and development could be incorporated into an equilibrium framework of optimality was not entirely straightforward, and it required an extended process of further intellectual reflection to formulate the idea of equilibrium in a way that gives meaning and relevance to the processes of change and development that make the passage of time something more than merely a name assigned to one of the n dimensions in vector space.

This paper examines the slow process by which the concept of equilibrium was transformed from a timeless or static concept into an intertemporal one by focusing on the pathbreaking contribution of F. A. Hayek who first articulated the concept, and exploring the connection between his articulation and three noteworthy, but very different, versions of intertemporal equilibrium: (1) an equilibrium of plans, prices, and expectations, (2) temporary equilibrium, and (3) rational-expectations equilibrium.

But before discussing these three versions of intertemporal equilibrium, I summarize in section two Hayek’s seminal 1937 contribution clarifying the necessary conditions for the existence of an intertemporal equilibrium. Then, in section three, I elaborate on an important, and often neglected, distinction, first stated and clarified by Hayek in his 1937 paper, between perfect foresight and what I call contingently correct foresight. That distinction is essential for an understanding of the distinction between the canonical Arrow-Debreu-McKenzie (ADM) model of general equilibrium, and Roy Radner’s 1972 generalization of that model as an equilibrium of plans, prices and price expectations, which I describe in section four.

Radner’s important generalization of the ADM model captured the spirit and formalized Hayek’s insights about the nature and empirical relevance of intertemporal equilibrium. But to be able to prove the existence of an equilibrium of plans, prices and price expectations, Radner had to make assumptions about agents that Hayek, in his philosophically parsimonious view of human knowledge and reason, had been unwilling to accept. In section five, I explore how J. R. Hicks’s concept of temporary equilibrium, clearly inspired by Hayek, though credited by Hicks to Erik Lindahl, provides an important bridge connecting the pure hypothetical equilibrium of correct expectations and perfect consistency of plans with the messy real world in which expectations are inevitably disappointed and plans routinely – and sometimes radically – revised. The advantage of the temporary-equilibrium framework is to provide the conceptual tools with which to understand how financial crises can occur and how such crises can be propagated and transformed into economic depressions, thereby making possible the kind of business-cycle model that Hayek tried unsuccessfully to create. But just as Hicks unaccountably failed to credit Hayek for the insights that inspired his temporary-equilibrium approach, Hayek failed to see the potential of temporary equilibrium as a modeling strategy that combines the theoretical discipline of the equilibrium method with the reality of expectational inconsistency across individual agents.

In section six, I discuss the Lucasian idea of rational expectations in macroeconomic models, mainly to point out that, in many ways, it simply assumes away the problem of the consistency of plans and expectations with which Hayek, Hicks, Radner and the others who developed the idea of intertemporal equilibrium were so profoundly concerned.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year’s, in contrast to the more recent practice of holding the convention in the week after New Year’s. That’s why the anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemorating that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was written and presented by the outgoing AEA president, Olivier Blanchard. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Eric Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved, and the economy would proceed exactly – and optimally – as before, as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you were to change just a single price, not only would the market in which that price is determined be disequilibrated, but at least one, and potentially more than one, other market would be disequilibrated as well. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.
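The doubling thought experiment can be expressed formally (my notation). If agents care only about relative prices, every excess-demand function is homogeneous of degree zero in the money prices:

\[ z_i(\lambda p_1, \lambda p_2, \ldots, \lambda p_n) = z_i(p_1, p_2, \ldots, p_n) \quad \text{for all } \lambda > 0 , \]

so setting λ = 2 doubles every money price while leaving every excess demand, and hence the equilibrium, undisturbed. (The complication about the quantity of money mentioned above is that real money balances must be scaled up along with prices for the equivalence to hold exactly.)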

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but only insofar as actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.
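Friedman’s reasoning is often summarized in the expectations-augmented Phillips Curve (a later reconstruction in conventional notation, not Friedman’s own formula):

\[ \pi_t = \pi_t^e - \kappa (u_t - u^*), \qquad \kappa > 0 . \]

For a given expected rate of inflation π_t^e — the time period during which expectations are given — lower unemployment is associated with higher inflation, so the short-run curve slopes downward. But a sustainable policy must eventually satisfy π_t = π_t^e, and imposing that condition forces u_t = u^* whatever the rate of inflation: the long-run curve is vertical at the natural rate u^*.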

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to Alvin Hansen and Herbert Tout, who, in their 1933 article reviewing recent business-cycle literature, had criticized the assertion in Hayek’s Prices and Production that a monetary expansion financing investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that, in real terms, the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary-neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let us say that (Y - Y*) is the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) - β((Y - Y*)/Y*).

This equation says that the inflation target in any period would be the sum of two terms: the default inflation target of 2%, multiplied by an adjustment coefficient α designed to keep successively chosen inflation targets from deviating from the long-term price-level path corresponding to 2% annual inflation, minus some fraction β of the output gap expressed as a percentage of potential GDP, so that a negative gap raises the target. Thus, for example, if the output gap was -5% and β was 0.5, the short-term inflation target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert to a target price-level path corresponding to a 2% annual inflation rate.
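Here is a minimal sketch of the rule in code (the function name and default parameter values are mine, purely for illustration):

    # Proposed rule: IT = alpha*(2%) - beta*(Y - Y*)/Y*, so that a negative
    # output gap raises the short-term inflation target.
    def inflation_target(output_gap, alpha=1.0, beta=0.5, long_run_target=0.02):
        # output_gap is (Y - Y*)/Y*, e.g., -0.05 when output is 5% below potential
        return alpha * long_run_target - beta * output_gap

    print(inflation_target(-0.05))             # 0.045: the 4.5% target in the example above
    print(inflation_target(0.0))               # 0.02: at potential, the default 2% target
    print(inflation_target(-0.05, alpha=0.9))  # 0.043: alpha < 1 when gaps are usually negative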

Such a procedure would fit well with the Federal Reserve’s current dual mandate regarding inflation and employment. The long-term price-level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would promote the goal of maximum employment by raising the inflation target, as a countercyclical measure, when unemployment was high. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively high inflation in periods of high unemployment would be compensated for by relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.
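As a rough sketch of the arithmetic (the 2% trend-growth figure is an assumption of mine, purely for illustration), the target path for nominal GDP would grow at the sum of the inflation target and trend real growth:

    # Hypothetical NGDP level-target path: a 2% average inflation target plus an
    # assumed 2% trend rate of real growth implies roughly 4% annual NGDP growth.
    def ngdp_target_path(base_ngdp, years, real_trend=0.02, inflation_target=0.02):
        growth = real_trend + inflation_target
        return [round(base_ngdp * (1 + growth) ** t, 1) for t in range(years + 1)]

    print(ngdp_target_path(100.0, 5))  # [100.0, 104.0, 108.2, 112.5, 117.0, 121.7]

The path itself, rather than the period-by-period growth rate, would then be adjusted over time, as suggested above, to keep average inflation at 2%.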

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious — and a disturbing – question, because it suggests that the free-market ideology which is a powerful – though not necessarily the most powerful — force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A very broad range of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both Thatcher and Reagan to lose popular support. But the growing prestige of free-market economics and economists provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close as the Nobel Prize winner was quickly ushered out of the shouting range of White House reporters. On the whole, however, Republican politicians have not lacked for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, but which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent of decontrol described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the idea that freely determined market prices optimally guide the decentralized decisions of economic agents rests on an implicit assumption: that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy, sell, and produce closely correspond to the social costs and benefits that an omniscient central planner — if such a being actually did exist — would take into account in making his plans. In the real world, however, the private costs and benefits considered by individual agents when making their plans and decisions often don’t reflect all the relevant costs and benefits. So the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad. We – i.e., those of us who are not philosophical anarchists – all acknowledge as much, in practice and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and for domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those decisions that are subject to, and heavily — even mainly — influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat: insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek out the kinds of knowledge that can be exploited in trades with less well-informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society, inasmuch as the gains accruing to the exploiters are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted on various forms of financial research aimed at learning, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on individuals’ weakness in underestimating the probability of adverse events (e.g., late-repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity, by enabling patent-protected monopolies to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.
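Hirshleifer’s foreknowledge point can be put in miniature (the numbers are purely illustrative):

    # A trader can spend real resources to learn today what the market will
    # learn on its own tomorrow. Numbers are purely illustrative.
    price_today, price_tomorrow = 100.0, 110.0

    private_gain = price_tomorrow - price_today  # 10.0 per unit, earned by trading
                                                 # against less-informed counterparties
    social_gain = 0.0  # the news would have been revealed tomorrow anyway;
                       # no additional output results from learning it a day early

    # Competition for the private gain makes it rational to spend up to its full
    # value on research -- real resources dissipated in pursuit of a pure transfer.
    max_rational_research_spend = private_gain
    print(private_gain, social_gain, max_rational_research_spend)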

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, its counterparty could easily be multiplied. Because many, if not most, of the highest incomes earned are associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have shifted resources from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which private benefits and costs differ greatly from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite having a Harvard Ph.D. when he arrived back at UCLA as an assistant professor in the early 1960s, where he had been an undergraduate student of Armen Alchian, he too started out as a pro-free-market Friedman acolyte. But gradually adopting the Buchanan public-choice paradigm – Nancy Maclean, please take note — of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously increasingly ideological and increasingly dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds — as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters, in the context of the question posed by Dina Pomeranz, is the economic logic that gets you from a set of facts and a set of behavioral and causal assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that starts from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causal assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

Hayek’s Rapid Rise to Stardom

For a month or so, I have been working on a paper about Hayek’s early pro-deflationary policy recommendations which seem to be at odds with his own idea of neutral money which he articulated in a way that implied or at least suggested that the ideal monetary policy would aim to keep nominal spending or nominal income constant. In the Great Depression, prices and real output were both falling, so that nominal spending and income were also falling at a rate equal to the rate of decline in real output plus the rate of decline in the price level. So in a depression, the monetary policy implied by Hayek’s neutral money criterion would have been to print money like crazy to generate enough inflation to keep nominal spending and nominal income constant. But Hayek denounced any monetary policy that aimed to raise prices during the depression, arguing that such a policy would treat the disease of depression with the drug that had caused the disease in the first place. Decades later, Hayek acknowledged his mistake and made clear that he favored a policy that would prevent the flow of nominal spending from ever shrinking. In this post, I am excerpting the introductory section of the current draft of my paper.

Few economists, if any, ever experienced as rapid a rise to stardom as F. A. Hayek did upon arriving in London in January 1931, at the invitation of Lionel Robbins, to deliver a series of four lectures on the theory of industrial fluctuations. The Great Depression having started about 15 months earlier, British economists were desperately seeking new insights into the unfolding and deteriorating economic catastrophe. The subject on which Hayek was to expound was of more than academic interest; it was of the most urgent economic, political, and social import.

Only 31 years old, Hayek, then director of the Austrian Institute of Business Cycle Research, founded by his mentor Ludwig von Mises, had never held an academic position. Upon completing his doctorate at the University of Vienna, writing his doctoral thesis under Friedrich von Wieser, one of the eminent figures of the Austrian School of Economics, Hayek, through financial assistance secured by Mises, spent over a year in the United States doing research on business cycles and meeting such leading American experts on business cycles as W. C. Mitchell. While in the US, Hayek also exhaustively studied the English-language literature on the monetary history of the eighteenth and nineteenth centuries and the, mostly British, monetary doctrines of that era.

Even without an academic position, Hayek’s productivity upon returning to Vienna was impressive. Aside from writing a monthly digest of statistical reports, financial news, and analysis of business conditions for the Institute, Hayek published several important theoretical papers, gaining a reputation as a young economist of considerable promise. Moreover, Hayek’s immersion in the English monetary literature and his sojourn in the United States gave him an excellent command of English, so that when Robbins, newly installed as head of the economics department at LSE, and having fallen under the influence of the Austrian school of economics, was seeking a replacement for Edwin Cannan, who before his retirement had been the leading monetary economist at LSE, he thought of Hayek as a candidate for Cannan’s position.

Hoping that Hayek’s performance would be sufficiently impressive to justify the offer of a position at LSE, Robbins undoubtedly made clear to Hayek that if his lectures were well received, his chances of receiving an offer to replace Cannan were quite good. A secure academic position for a young economist, even one as talented as Hayek, was then hard to come by in Austria or Germany. Realizing how much depended on the impression he would make, Hayek, despite having undertaken to write a textbook on monetary theory for which he had already written several chapters, dropped everything else to compose the four lectures that he would present at LSE.

When he arrived in England in January 1931, Hayek actually went first to Cambridge to give a lecture, a condensed version of the four LSE lectures. Hayek was not feeling well when he came to Cambridge to face an unsympathetic, if not hostile, audience, and the lecture was not a success. However, either despite, or because of, his inauspicious debut at Cambridge, Hayek’s performance at LSE turned out to be an immediate sensation. In his History of Economic Analysis, Joseph Schumpeter, who, although an Austrian with a background in economics similar to Hayek’s, was neither a personal friend nor an ideological ally of Hayek’s, wrote that Hayek’s theory

on being presented to the Anglo-American community of economists, met with a sweeping success that has never been equaled by any strictly theoretical book that failed to make amends for its rigors by including plans and policy recommendations or to make contact in other ways with its readers’ loves or hates. A strong critical reaction followed that, at first, but served to underline the success, and then the profession turned away to other leaders and interests.

The four lectures provided a masterful survey of business-cycle theory and the role of monetary analysis in business-cycle theory, including a lucid summary of the Austrian capital-theoretic approach to business-cycle theory and of the equilibrium price relationships that are conducive to economic stability, an explanation of how those equilibrium price relationships are disturbed by monetary disturbances giving rise to cyclical effects, and some comments on the appropriate policies for avoiding or minimizing such disturbances. The goal of monetary policy should be to set the money interest rate equal to the hypothetical equilibrium interest rate determined by strictly real factors. The only policy implication that Hayek could extract from this rarified analysis was that monetary policy should aim not to stabilize the price level as recommended by such distinguished monetary theorists as Alfred Marshall and Knut Wicksell, but to stabilize total spending or total money income.

This objective would be achieved, Hayek argued, only if injections of new money preserved the equilibrium relationship between savings and investment, investment being financed entirely by voluntary savings, not by money newly created for that purpose. Insofar as new investment projects were financed by newly created money, the additional expenditure thereby financed would entail a deviation from the real equilibrium that would obtain in a hypothetical barter economy or in an economy in which money had no distortionary effect. The interest rate consistent with that real equilibrium — the rate at which the demand for investment funds just equals voluntary saving — was called by Hayek, following Wicksell, the natural (or equilibrium) rate of interest.

But according to Hayek, Wicksell failed to see that, in a progressive economy with real investment financed by voluntary saving, the increasing output of goods and services over time implies generally falling prices, as the increasing productivity of factors of production progressively reduces costs of production. A stable price level would therefore require ongoing increases in the quantity of money, the new money being used to finance additional investment over and above voluntary saving, thereby causing the economy to deviate from its equilibrium time path by inducing investment that would not otherwise have been undertaken.
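In quantity-theoretic terms (a standard reconstruction, not Hayek’s own notation), the disagreement is easy to state. Writing MV = PY in growth rates,

\[ \frac{\dot M}{M} + \frac{\dot V}{V} = \frac{\dot P}{P} + \frac{\dot Y}{Y} , \]

with velocity constant and output growing at rate g, stabilizing the price level (so that \(\dot P / P = 0\)) requires the quantity of money to grow at rate g, whereas Hayek’s norm of constant total spending (\(\dot M/M + \dot V/V = 0\)) requires the price level to fall at rate g, in step with the growth of productivity.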

As Paul Zimmerman and I have pointed out in our paper on Hayek’s response to Piero Sraffa’s devastating, but flawed, review of Prices and Production (the published version of Hayek’s LSE lectures), Hayek’s argument that only an economy in which no money is created to finance investment is consistent with the real equilibrium of a pure barter economy depends on the assumption that money is non-interest-bearing and that the rate of inflation is not correctly foreseen. If money bears competitive interest and inflation is correctly foreseen, the economy can attain its real equilibrium regardless of the rate of inflation – provided, at least, that the rate of deflation is not greater than the real rate of interest. The real equilibrium is defined by a system of n-1 relative prices per time period, which can be multiplied by any scalar representing the expected price level or the expected rate of inflation between time periods.

So Hayek’s assumption that the real equilibrium requires a rate of deflation equal to the rate of increase in factor productivity was arbitrary and unfounded, reflecting his failure to see that the real equilibrium of the economy is independent of the price levels in different time periods and of the rates of inflation between time periods, provided that those price levels and rates of inflation are correctly anticipated. If inflation is correctly foreseen, nominal wages will rise commensurately with inflation, and real wages with productivity increases, so that the increase in nominal money supplied by banks will not induce or finance investment beyond voluntary savings. Hayek’s argument was based on a failure to work through the full implications of his own equilibrium method. As Hayek would later come to recognize, disequilibrium is the result not of money creation by banks but of mistaken expectations about the future.

Thus, Hayek’s argument mistakenly identified monetary expansion of any sort that moderated or reversed what Hayek considered the natural tendency of prices to fall in a progressively expanding economy as the disturbing and distorting impulse responsible for business-cycle fluctuations. Although he did not offer a detailed account of the origins of the Great Depression, Hayek’s diagnosis of its causes, made explicit in various other writings, was clear: monetary expansion by the Federal Reserve during the 1920s — especially in 1927 — undertaken to keep the US price level from falling and to moderate deflationary pressure on Britain (sterling having been overvalued at the prewar dollar-sterling parity when Britain restored gold convertibility in March 1925), distorted relative prices and the capital structure. When the distortions eventually became unsustainable, unprofitable investment projects would have to be liquidated, supposedly freeing those resources to be re-employed in more productive activities. Why the Depression continued to deepen rather than recover more than a year after the downturn had started was another question.

Despite warning of the dangers of a policy of price-level stabilization, Hayek was reluctant to advance an alternative policy goal or criterion beyond the general maxim that policy should avoid any disturbing or distorting effect — in particular monetary expansion — on the economic system. But Hayek was unable, or unwilling, to translate this abstract precept into a definite policy norm.

The simplest implementation of Hayek’s objective would be to hold the quantity of money constant. But that policy, as Hayek acknowledged, was beset with both practical and conceptual difficulties. Under a gold standard, which Hayek, at least in the early 1930s, still favored, the relevant area within which to keep the quantity of money constant would be the entire world (or, more precisely, the set of countries linked to the gold standard). But national differences between the currencies on the gold standard would make it virtually impossible to coordinate those national currencies to keep some aggregate measure of the quantity of money convertible into gold constant. And Hayek also recognized that fluctuations in the demand to hold money (the reciprocal of the velocity of circulation) produce monetary disturbances analogous to variations in the quantity of money, so that the relevant policy objective was not to hold the quantity of money constant, but to change the quantity of money proportionately (inversely) with the demand to hold money (the velocity of circulation).

Hayek therefore suggested that the appropriate criterion for the neutrality of money might be to hold total spending (or alternatively total factor income) constant. With constant total spending, neither an increase nor a decrease in the amount of money the public desired to hold would lead to disequilibrium. This was a compelling argument for constant total spending as the goal of policy, but Hayek was unwilling to adopt it as a practical guide for monetary policy.

In the final paragraph of his final LSE lecture, Hayek made his most explicit, though still equivocal, policy recommendation:

[T]he only practical maxim for monetary policy to be derived from our considerations is probably . . . that the simple fact of an increase of production and trade forms no justification for an expansion of credit, and that—save in an acute crisis—bankers need not be afraid to harm production by overcaution. . . . It is probably an illusion to suppose that we shall ever be able entirely to eliminate industrial fluctuations by means of monetary policy. The most we may hope for is that the growing information of the public may make it easier for central banks both to follow a cautious policy during the upward swing of the cycle, and so to mitigate the following depression, and to resist the well-meaning but dangerous proposals to fight depression by “a little inflation.”

Thus, Hayek concluded his series of lectures by implicitly rejecting his own idea of neutral money as a policy criterion, warning instead against the “well-meaning but dangerous proposals to fight depression by ‘a little inflation.’” The only sensible interpretation of Hayek’s counsel of “resistance” is an icy expression of indifference to falling nominal spending in a deep depression.

Larry White has defended Hayek against the charge that his policy advice in the depression was liquidationist, encouraging policy makers to take a “hands-off” approach to the unfolding economic catastrophe. In making this argument, White relies on Hayek’s neutral-money concept as well as Hayek’s disavowals decades later of his early pro-deflation policy advice. However, White omitted any mention of Hayek’s explicit rejection of neutral money as a policy norm at the conclusion of his LSE lectures. White also disputes that Hayek was a liquidationist, arguing that Hayek supported liquidation not for its own sake but only as a means to reallocate resources from lower- to higher-valued uses. Although that is certainly true, White does not establish that any of the other liquidationists he mentions favored liquidation as an end and not, like Hayek, as a means.

Hayek’s policy stance in the early 1930s was characterized by David Laidler as a skepticism bordering on nihilism in opposing any monetary- or fiscal-policy responses to mitigate the suffering of the general public caused by the Depression. White’s efforts at rehabilitation notwithstanding, Laidler’s characterization seems to be on the mark. The perplexing and disturbing question raised by Hayek’s policy stance in the early 1930s is why, given the availability of his neutral-money criterion as a justification for favoring at least a mildly inflationary (or reflationary) policy to promote economic recovery from the Depression, did Hayek remain, during the 1930s at any rate, implacably opposed to expansionary monetary policies? Hayek’s later disavowals of his early position actually provide some insight into his reasoning in the early 1930s, but to understand the reasons for his advocacy of a policy inconsistent with his own theoretical understanding of the situation for which he was offering policy advice, it is necessary to understand the intellectual and doctrinal background that set the boundaries on what kinds of policies Hayek was prepared to entertain. The source of that intellectual and doctrinal background was David Hume and the intermediary through which it was transmitted was none other than Hayek’s mentor Ludwig von Mises.

The Understanding and Misunderstanding of Imperfect Information

Last Friday on his blog, Timothy Taylor, editor of the Journal of Economic Perspectives, wrote about whether imperfect information strengthens or weakens the case for free markets and for deregulation. Taylor frames his discussion by comparing and contrasting two recent papers. One paper, “Friedrich Hayek and the Market Algorithm,” by Samuel Bowles, Alan Kirman and Rajiv Sethi, appeared in the Journal of Economic Perspectives; the other, “The Revolution of Information Economics: the Past and the Future,” by Joseph Stiglitz, is an NBER working paper. Although I agree with much of what Taylor has to say, I think he, like many others, misses some important distinctions and nuances in Hayek’s thought. Although Hayek’s instincts were indeed very much opposed to any form of government intervention, that did not prevent him from acknowledging that there is a very wide range of government action that is not inconsistent with his understanding of liberal principles. He was, in fact, very far from being the dogmatic libertarian anti-interventionist for whom he is often mistaken. So I am going to try to put things in a clearer perspective.

Taylor begins by referencing Hayek and the paper by Bowles, Kirman and Sethi.

Friedrich von Hayek (Nobel 1974) is among the most prominent of those who have made the case that imperfect information strengthens the case for free markets. . . .

In one much-quoted example, Hayek offers a discussion of what happens in the market for some raw material, like tin, when “somewhere in the world a new opportunity for the use” arises, or “one of the sources of supply of tin has been eliminated.” Either of these changes (rise in demand, or a fall in supply) will lead to a higher market price. But as Hayek points out, no company that uses tin, nor any consumer who uses products made with tin as an ingredient, needs to know any details about what happened. No commission of government officials needs to meet to discuss how every firm and consumer should be required to react to this change in the price of tin. No government quota system for allocation of tin supplies needs to be established. No special government program for research and development into cheaper substitutes for tin, and no government-subsidized producers for potential-but-still-costly substitutes needs to be created. Instead, the shifts in demand or supply, and the corresponding changes in price, work themselves out with a large number of small-scale shifts in the market.

A government agency might collect information on who currently produces and uses tin. But that government lacks the granular information about all the different alternatives that might possibly be used for tin, and any sense of when a user of tin would be willing to pay twice as much, or when a user of tin would shift to a substitute if the price rose even a little. Indeed, this granular information about the tin market is not even theoretically available to a government planner or regulator! Many users of tin, or potential suppliers of additional tin, or potential suppliers of substitutes, don’t actually know just how they would react to the higher price until after it happens. Their reactions emerge through a process of trial and error.

Hayek’s point becomes even more acute if one considers not just existing basic products, like tin, but the potential for innovative new products or services. One can make a guess about whether a certain type of new smartphone, headache remedy, spicy sauce, alternative energy source, or water-in-a-bottle will be popular and desired. But government planners–especially given that they are operating under political constraints–won’t have the knowledge to make these decisions. Hayek’s point is not only that government planners lack perfect information, but that it is not even theoretically possible for them to have perfect information–because much of the information about production, consumption, and prices does not exist. Thus, Hayek wrote:

[The market is] a system of the utilization of knowledge which nobody can possess as a whole, which. . . leads people to aim at the needs of people whom they do not know, make use of facilities about which they have no direct information; all this condensed in abstract signals. . . [T]hat our whole modern wealth and production could arise only thanks to this mechanism is, I believe, the basis not only of my economics but also much of my political views. . .

Taylor, channeling Bowles, Kirman and Sethi, is here quoting from a passage in Hayek’s classic paper, “The Use of Knowledge in Society,” in which Hayek explained how markets automatically accomplish the task of transmitting and processing the dispersed knowledge held by disparate agents who otherwise would have no way of communicating with each other to coordinate and reconcile their separate plans. The market thereby achieves a coherence and mutual consistency of plans that all decision-makers take for granted, but which none of them deliberately sought. The key point that Hayek was making is not so much that this “market order” is optimal in any static sense, but that if a central planner tried to replicate it, he would have to collect, process, and constantly update an impossibly huge quantity of information.
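To make the informational economy of the price mechanism a bit more concrete, here is a toy computational sketch of my own devising – nothing in it comes from Hayek, Taylor, or Bowles, Kirman and Sethi – in which each tin user knows only his own private valuation and the market price, yet the market response to a supply shock emerges without anyone needing to know its cause:

```python
# A toy illustration (my own construction) of the informational economy of the
# price system: each tin user knows only his own willingness to pay and the
# market price, yet a supply shock is absorbed through many small adjustments.
import random

def quantity_demanded(price, valuations):
    """Each user buys one unit iff the price is below his private valuation."""
    return sum(1 for v in valuations if v > price)

def clearing_price(supply, valuations, lo=0.0, hi=100.0, tol=1e-6):
    """Find the price equating quantity demanded to available supply by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if quantity_demanded(mid, valuations) > supply:
            lo = mid   # excess demand: the price must rise
        else:
            hi = mid   # excess supply: the price must fall
    return (lo + hi) / 2

# 1,000 tin users with dispersed private valuations known to no planner.
random.seed(1)
valuations = [random.uniform(10, 60) for _ in range(1000)]

p0 = clearing_price(supply=600, valuations=valuations)
p1 = clearing_price(supply=450, valuations=valuations)  # a source of supply is eliminated
print(f"price before the shock: {p0:.2f}; after: {p1:.2f}")
# The ~150 marginal users who drop out self-select simply by comparing the new
# price with their own valuations; none of them needs to know why the price rose.
```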

After describing Hayek’s explanation of why imperfect information – a term that for Hayek involved both the dispersal of existing knowledge and the discovery of new knowledge – implies that markets are a better mechanism than central planning for coordinating a complex network of interrelated activities, Taylor turns to Stiglitz’s paper on imperfect information.

Joseph Stiglitz (Nobel, 2001) is among the best-known of those who have explained how imperfect information can hinder the functioning of a market, and thus offer a justification for government intervention or regulation. Stiglitz offers a readable overview of his perspective in “The Revolution of Information Economics: The Past and the Future” (September 2017, National Bureau of Economic Research Working Paper 23780). The paper isn’t freely available online, although readers may have access through a library subscription, but a set of slides from when he presented a talk on this topic at the World Bank in 2016 is available here. Stiglitz emphasizes two particular aspects of imperfect information: it leads to a lack of competition and especially to problems in the financial sector. He writes:

The imperfections of competition and the absence of risk markets with which they are marked matter a great deal. . . . And in those sectors where information and its imperfections play a particularly important role, there is an even greater presumption of the need for public policy. The financial sector is, above all else, about gathering and processing information, on the basis of which capital resources can be efficiently allocated. Information is central. And that is at least part of the reason that financial sector regulation is so important. Markets where information is imperfect are also typically far from perfectly competitive. . . In markets with some, but imperfect competition, firms strive to increase their market power and to increase the extraction of rents from existing market power, giving rise to widespread distortions. In such circumstances, institutions and the rules of the game matter. Public policy is critical in setting the rules of the game.

There’s a lot going on here, and I think it’s a mistake to set up Hayek and Stiglitz as polar opposites. Although the two were surely not in total agreement, Hayek himself acknowledged that the perfect-competition model is not descriptive of most actual markets. Hayek may have had a more benign view of the operation of “imperfect” competition than Stiglitz, but he certainly did not view perfect competition as a normative ideal in terms of which the performance of actual economies should be assessed. It is certainly true that imperfectly competitive firms attempt to increase their market power, either by colluding or by tacit understandings to refrain from “ruinous” competition, but perfectly competitive firms also seek to collude on their own, or try to enlist the government to help restrain competition, when competition drives profits down to – or even below – zero.

And it would be hard to think of a statement with which Hayek would have been less likely to disagree than this one: “public policy is critical in setting the rules of the game.” To suggest that Hayek conceived of a market economy as a system operating independently of the constraints of an evolving and increasingly sophisticated system of rules is to completely misunderstand Hayek’s conception of a market order and the legal underpinnings without which no such order could come into existence. The ideal of a free market is not for businesses and entrepreneurs to be able to do whatever they want, but for all agents to be subject to a system of general rules that lays out the acceptable means by which every individual may pursue his interests and try to achieve goals of his own choosing. Taylor continues:

Stiglitz also argues that in a modern economy, concerns over information are likely to become more acute.

Looking forward, changes in structure of demand (that is, as a country gets richer, the mix of goods purchased changes) and in technology may lead to an increased role of information and increased consequences of information imperfections, decreased competition, and increasing inequality. Many key battles will be about information and knowledge (implicitly or explicitly)—and the governance of information. Already, there are big debates going on about privacy (the rights of individuals to keep their own information) and transparency (requirements that government and corporations, for instance, reveal critical information about what they are doing). In many sectors, most especially, the financial sector, there are ongoing debates about disclosure—obligations on the part of individuals or firms to reveal certain things about their products.

Taylor misses an opportunity here to dig deeper into Stiglitz’s analysis of what makes imperfect information so problematic. The most serious problems arise when substantial information asymmetries exist, allowing better-informed agents to make trades that exploit the ignorance or gullibility of their counterparties. Such asymmetries are not confined to the financial sector – the health sector is another area in which they are especially acute and potentially disastrous to the relatively uninformed party – but in finance they create both opportunities and incentives for reprehensible behavior, encouraging institutions to devote valuable resources to tireless efforts to find or create still further informational advantages.
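A stylized numerical sketch, in the spirit of Akerlof’s famous “lemons” model (the numbers are mine and purely illustrative), shows how an information asymmetry can unravel a market even when trade would benefit both sides:

```python
# A stylized adverse-selection sketch in the spirit of Akerlof's "lemons" model
# (illustrative numbers of my own). Sellers know the quality of what they are
# selling; buyers know only the distribution of qualities actually on offer.
qualities = [100, 200, 300, 400]   # true values, known only to the sellers

def average_quality_offered(price):
    """A seller offers his asset only if the price covers its true value to him."""
    offered = [q for q in qualities if q <= price]
    return sum(offered) / len(offered) if offered else None

# Buyers value any asset at 1.2x its true worth, so every trade would be
# mutually beneficial under full information. But at a candidate price p,
# rational buyers will pay only 1.2 * E[quality | offered at p]:
for p in [400, 300, 200, 100]:
    avg = average_quality_offered(p)
    print(f"price {p}: average quality offered {avg:.0f}, buyers will pay {1.2 * avg:.0f}")
# At 400 the average quality offered is 250, so buyers bid only 300 and the
# best assets withdraw; the price ratchets down until only the lowest quality trades.
```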

In many previous posts, I have discussed how the financial sector, in seeking to profit from transitory informational advantages by anticipating short-term price movements, or by creating new financial products that counterparties do not understand as well as their creators do, wastes resources on a massive scale. The net social product of such activity is far less than the private gains reaped from those fleeting informational advantages, yet Wall Street banks and other financial institutions pay huge salaries to the very bright people who help create those momentary advantages and those new financial products. The actual and potential harms created by the existence – and, even worse, the pursuit – of such information asymmetries call for serious analysis and creative thinking about how to correct, or at least mitigate, the malincentives that lead to such socially wasteful activity. And I can’t think of any reason why Hayek would have opposed changing “the rules of the game” to correct those malincentives. So the idea – which seems to underlie much of what Taylor and Stiglitz are saying – that reforming the legal framework within which markets operate to eliminate inefficient malincentives is somehow indicative of hostility to, or skepticism about, free markets is entirely misplaced.

Which is not to say that it is easy to change the rules to fix every malincentive besetting the market economy; some malincentives may be truly intractable. But when malincentives truly are intractable – a state of affairs that, unfortunately, is closer to being the rule than the exception – it is usually not obvious what the appropriate policy response is. The problem is compounded many times over, because the theory of second best teaches us that, as soon as there is a single unavoidable departure from optimality, satisfying all the other optimality conditions will not achieve the next best outcome. A single departure from optimality in one market generally requires departures from optimality in all related markets, so satisfying the optimality conditions in the remaining n-1 out of n markets doesn’t get you to the second-best outcome.
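For the record, the Lipsey-Lancaster result can be stated schematically as follows; the notation is my own gloss, not a quotation from their paper.

```latex
% Schematic statement of the theory of second best (my gloss on Lipsey-Lancaster):
% a first-best optimum satisfies n optimality conditions, e.g., price equal to
% marginal cost in every market i:
\[
  P_i = MC_i, \qquad i = 1, \dots, n.
\]
% Suppose one condition is unattainable because of an irremovable distortion,
% say a fixed markup in market 1:
\[
  P_1 = (1 + \mu)\, MC_1, \qquad \mu > 0.
\]
% The second-best optimum solves the welfare-maximization problem subject to
% that constraint, and its first-order conditions generally violate P_i = MC_i
% in the other n-1 markets too: a substitute for good 1, for example, may
% optimally be priced above its marginal cost. Enforcing the remaining n-1
% "competitive" conditions while the distortion persists is therefore not,
% in general, the second-best policy.
```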

In the end, Taylor suggests an awkward reconciliation between the supposedly opposing visions of Hayek and Stiglitz.

Both Hayek and Stiglitz use a similar “straw man” argumentative tactic: that is, set up a weak position as the opposing view, and then set it on fire. Hayek’s preferred straw man is government economic planners who seek to dictate every economic decision. He was writing in part with economic systems like the Communist Soviet Union in mind. But arguing that a market is better than wildly intrusive and weirdly over-precise old-time Soviet-style economic planning doesn’t make a case against more restrained and better-aimed forms of economic regulation. Indeed, Hayek occasionally expressed support for a universal basic income and for certain kinds of bank regulation.

I get what Taylor is trying to say, but I’m afraid he has phrased it rather badly. Accusing someone of attacking a straw man suggests that he has invented an opposing position that no one really holds just so that he can refute it. But, as Taylor himself seems to recognize, that was hardly what Hayek was doing in the 1930s and 1940s, when he was first making his systematic arguments against central planning by thinking carefully about what knowledge individual agents are actually assumed to possess in standard economic models, and what knowledge a central planner would need in order to replicate the optimal state of affairs associated with the equilibrium of the standard economic model. And in the post-neoliberal political environment in which we now find ourselves, it is not clear that what not so long ago seemed like a straw man has not come back to life.

However, Taylor’s assessment of Stiglitz seems to me to be pretty much on target.

Stiglitz’s straw man is a free market that operates essentially without government intervention or regulation. He likes to emphasize that in the real world of imperfect information, there is no conceptual reason to presume that markets are efficient. But arguing that imperfect information can offer a potential justification for government regulation doesn’t make a case that all or most government regulation is justified, especially given that the real-world government regulators labor with their own problems of political constraints and limited information. And indeed, while Stiglitz tends to favor an increase in US economic regulations in a number of specific areas, his vision of the economy always leaves a substantial role for private sector ownership, decision-making, and innovation.

Taylor sums up this confused state of affairs with two quotations, the first from F. Scott Fitzgerald: “The true test of a first-rate mind is the ability to hold two contradictory ideas at the same time.” Taylor adds:

In this case, the contradictory ideas are that markets can often be a substantial improvement on government regulators, and government regulators can often be a substantial improvement on unconstrained market outcomes.

Taylor then quotes Joan Robinson: “[E]conomic theory, in itself, preaches no doctrines and cannot establish any universally valid laws. It is a method of ordering ideas and formulating questions.” And, if we are lucky, coming up with some conjectures that might answer those questions.

But before closing, I would add another quote from the paper by Bowles, Kirman and Sethi, which seems to me to penetrate to the core of the problem of imperfect information:

[W]e wish to call into question Hayek’s belief that his advocacy of free market policies follows as a matter of logic from his economic vision. The very usefulness of prices (and other economic variables) as informative messages—which is the centerpiece of Hayek’s economics—creates incentives to extract information from signals in ways that can be destabilizing. Markets can promote prosperity but can also generate crises. We will argue, accordingly, that a Hayekian understanding of the economy as an information-processing system does not support the type of policy positions that he favored. Thus, we find considerable lasting value in Hayek’s economic analysis while nonetheless questioning the connection of this analysis to his political philosophy.

My only quibble with their insightful comment is that Hayek’s political philosophy did not necessarily exclude a role for government intervention and regulation, provided that interventions and regulations satisfied appropriate procedural standards of generality and non-arbitrariness. Hayek’s main concern was not to make government small, but to subject all laws and regulations enacted by government to procedural conditions ensuring that the substantive content of legislation and regulation does not aim at achieving specific concrete objectives, e.g., a particular distribution of income or the advancement of a particular special interest, but at making markets function more smoothly and more predictably, e.g., by prohibiting anticompetitive or collusive agreements between business firms. In principle, measures such as guaranteeing a minimum income to all citizens, providing universal medical care, or prohibiting or taxing pollution by manufacturers and unduly risky behavior by financial institutions are not incompatible with that philosophy. The advisability of any specific law or regulation would of course depend on an appropriate weighing of its expected costs and benefits.

Hayek, Deflation and Nihilism: A Popperian Postscript

In my previous post about Hayek’s support for deflationary monetary policy in the early 1930s, I wrote that Hayek’s support for deflation – in the hope that it would break the rigidities that (he thought) were blocking the relative-price adjustments whereby self-correcting market forces would induce a spontaneous recovery from the Great Depression – reminded me of the epigram attributed to Lenin: “you can’t make an omelet without breaking eggs.” I had actually believed that it was a line I had seen Karl Popper use somewhere. But in searching unsuccessfully for that quotation in Popper, I did find the following passage in Popper’s autobiography (Unended Quest), which seems to me to be worth reproducing. Popper describes the circumstances that led him, while still a teenager, to renounce his youthful Marxism.

The incident that turned me against communism, and that soon led me away from Marxism altogether, was one of the most important incidents in my life. It happened shortly before my seventeenth birthday. In Vienna, shooting broke out during a demonstration by unarmed young socialists who, instigated by the communists, tried to help some communists to escape who were under arrest in the central police station in Vienna. Several young socialist and communist workers were killed. I was horrified and shocked by the brutality of the police, but also by myself. For I felt that as a Marxist I bore part of the responsibility for the tragedy – at least in principle. Marxist theory demands that the class struggle be intensified, in order to speed up the coming of socialism. Its thesis is that although the revolution may claim some victims, capitalism is claiming more victims than the whole socialist revolution.

That was the Marxist theory – part of so-called “scientific socialism”. I now asked myself whether such a calculation could ever be supported by “science”. The whole experience, and especially this question, produced in me a life-long revulsion of feeling.

Communism is a creed which promises to bring about a better world. It claims to be based on knowledge: knowledge of the laws of historical development. I still hoped for a better world, a less violent and more just world, but I questioned whether I really knew – whether what I thought was knowledge was perhaps not more than mere pretence. I had, of course, read some Marx and Engels – but had I really understood it? Had I examined it critically, as anybody should do before he accepts a creed which justifies its means by a somewhat distant end?

I was shocked to have to admit to myself that not only had I accepted a complex theory somewhat uncritically, but that I had also actually noticed quite a bit of what was wrong, in the theory as well as in the practice of communism. But I had repressed this – partly out of loyalty to my friends, partly out of loyalty to “the cause”, and partly because there is a mechanism of getting oneself more and more deeply involved: once one has sacrificed one’s intellectual conscience over a minor point one does not wish to give in too easily; one wishes to justify the self-sacrifice by convincing oneself of the fundamental goodness of the cause, which is seen to outweigh any little moral or intellectual compromise that may be required. With every such moral or intellectual sacrifice one gets more deeply involved. One becomes ready to back one’s moral or intellectual investments in the cause with further investments. It is like being eager to throw good money after bad.

I saw how this mechanism had been working in my case, and I was horrified. I also saw it at work in others, especially my communist friends. And the experience enabled me to understand later many things which otherwise I would not have understood.

I had accepted a dangerous creed uncritically, dogmatically. The reaction made me first a sceptic; then it led me, though only for a very short time, to react against all rationalism. (As I found later, this is a typical reaction of a disappointed Marxist.)

By the time I was seventeen I had become an anti-Marxist. I realized the dogmatic character of the creed, and its incredible intellectual arrogance. It was a terrible thing to arrogate to oneself a kind of knowledge which made it a duty to risk the lives of other people for an uncritically accepted dogma or for a dream which might turn out not to be realizable. (pp. 32-34)

Popper’s description of the process whereby emotional investment in a futile, but seemingly noble, cause leads to moral self-corruption is both chilling and frighteningly familiar to anyone paying attention to the news.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
