Archive for the 'Hayek' Category

The Explanatory Gap and Mengerian Subjectivism

My last several posts have focused on Marshall and Walras: the relationship and differences between Marshall's partial-equilibrium approach and Walras's general-equilibrium approach, and how the current state of neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I'm not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I've been writing a paper, drawing on some of my recent musings, for a conference later this month celebrating the 150th anniversary of Menger's great work, because I think his work offers at least some hints about how to go about developing an improved neoclassical theory. Here is a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge the explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1959 paper "Toward a Theory of Price Adjustment."

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all markets but one (or perhaps a subset of closely related markets), so that the Marshallian equilibrium simply represents how the equilibrium price is determined in a single small or isolated market, under suitable ceteris-paribus conditions that leave the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced single-period model, inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways, as opposed to merely playing out a script already written, with agents going through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of future prices. But if their stock of knowledge consists of both common and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of the equilibrium prices of all commodities. But in intertemporal equilibrium, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stocks of private knowledge. However, an equilibrium may still come about if, based on their private knowledge, they arrive at sufficiently similar expectations of future prices for their plans for their current and future purchases and sales to be mutually compatible.

Thus, in the Mengerian view articulated by Hayek, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, though not inconceivable, state of affairs, a view that stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982), who argued that, in a rational-expectations equilibrium, there is no private knowledge, only common knowledge, and that it would be impossible for any trader to profit from private knowledge, because no other trader with rational expectations would be willing to trade at any price other than the equilibrium price.

Some twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices change, Milgrom and Stokey thus turned Arrow's argument on its head by arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, making it impossible for a trader with private information to take advantage of that information. This argument seems to rest on a widely shared misunderstanding of what rational expectations signify.

Rational expectations is not a property of individual agents making rational and efficient use of information, from whatever source it is acquired. As I have previously explained here (and in a revised version here), rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles of a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations, and they are rational. That agents hold rational expectations in equilibrium does not mean that agents possess the power to calculate equilibrium prices or even to know whether their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren't satisfied. See Blume, Curry and Easley (2006).
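
The point can be put schematically (my notation, purely illustrative). If agent i's expectation of the future price is p_i^e, the expectations are rational just in case

$$p_1^e = p_2^e = \cdots = p_n^e = p^*,$$

where p^* is the price that clears the market conditional on those very expectations. Rational expectations is thus a fixed-point property of the whole collection of expectations, not of any individual's reasoning; no agent can verify, from his own knowledge alone, that his expectation coincides with p^*.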

The assumption, now routinely regarded as axiomatic, that rational expectations suffice to ensure that equilibrium is automatically achieved, and that agents' price expectations necessarily correspond to equilibrium price expectations, is a form of question-begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger's contribution, Hayek (1970), commenting on the inexactness of Menger's account of economic theory, focused on Menger's reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this reluctance may have been an aspect of Menger's skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural, more akin to a range of potential outcomes of differing probabilities, may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation accounting for any tendency toward intertemporal equilibrium, and instead merely relied (and in 1937!) on an empirical tendency of economies to move in the direction of equilibrium as a justification for considering economic theory to have any practical relevance.

On the Price Specie Flow Mechanism

I have been working on a paper tentatively titled "The Smithian and Humean Traditions in Monetary Theory." One section of the paper is on the price-specie-flow mechanism, about which I wrote last month in my previous post. This section develops the arguments of the previous post at greater length, drawing on a number of earlier posts that I've written about PSFM (e.g., here and here); it provides more detailed criticisms of both PSFM and sterilization and some further historical evidence to support the theoretical arguments. I will be grateful for any comments and feedback.

The price-specie-flow mechanism (PSFM) received its still-classic exposition in an essay by Hume (1752), and it has remained a staple of the theory of international adjustment under the gold standard, or under any international system of fixed exchange rates, ever since. Regrettably, the tortured two-and-a-half-century intellectual history of PSFM provides no ground for optimism about the prospects for progress in what some are pleased to call, without irony, economic science.

PSFM describes how, under a gold standard, national price levels tend to be equalized, deviations between the price levels of any two countries inducing gold to be shipped from the country with higher prices to the one with lower prices until prices are equalized. Premised on a version of the quantity theory of money in which (1) the price level in each country on the gold standard is determined by the quantity of money in that country, and (2) money consists entirely of gold coin or bullion, Hume elegantly articulated a model of disturbance and equilibration after an exogenous change in the gold stock in one country.
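
To see the mechanics Hume had in mind, here is a deliberately naive toy simulation (my own illustrative sketch, not Hume's formalization, and certainly not an endorsement of the model): two countries whose price levels are strictly proportional to their gold stocks, with a fraction of the price differential shipped as gold each period from the dearer country to the cheaper one.

# Toy sketch of Hume's price-specie-flow story. All assumptions are
# illustrative: strict quantity-theory pricing (P = k * gold) and a simple
# partial-adjustment rule for gold shipments.

def psfm_toy(gold_a=100.0, gold_b=100.0, k=0.01, flow_speed=0.25, periods=30):
    """Two gold-standard countries after a shock to country A's gold stock."""
    for _ in range(periods):
        p_a, p_b = k * gold_a, k * gold_b      # naive quantity-theory price levels
        flow = flow_speed * (p_a - p_b) / k    # gold shipped from dear to cheap country
        gold_a -= flow
        gold_b += flow
    return (gold_a, gold_b), (k * gold_a, k * gold_b)

# Hume's thought experiment: an exogenous doubling of A's gold stock.
print(psfm_toy(gold_a=200.0))   # gold stocks and price levels converge toward equality

On these assumptions, the shocked country's prices rise, gold drains away, and the two gold stocks and price levels converge, which is precisely the equilibration Hume described, and precisely what the criticisms rehearsed below deny is an accurate description of how the gold standard actually worked.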

Viewing banks as inflationary engines of financial disorder, Hume disregarded banks and their convertible monetary liabilities in his account of PSFM, leaving to others the task of describing the international adjustment process under a gold standard with fractional-reserve banking. Devising an institutional framework within which PSFM could operate in a system of fractional-reserve banking proved problematic and ultimately unsuccessful.

For three-quarters of a century, PSFM served a purely theoretical function. During the Bullionist debates of the first two decades of the nineteenth century, triggered by the 1797 suspension of the convertibility of the pound sterling into gold, PSFM served as a theoretical benchmark, not a guide for policy, it being generally assumed that, once convertibility was resumed, international monetary equilibrium would be restored automatically.

However, the 1821 resumption was followed by severe and recurring monetary disorders, leading some economists, who formed what became known as the Currency School, to view PSFM as a normative criterion for ensuring smooth adjustment to international gold flows. That criterion, the Currency Principle, stated that the total currency in circulation in Britain should increase or decrease by exactly as much as the amount of gold flowing into or out of Britain.[1]

The Currency Principle was codified by the Bank Charter Act of 1844. To mimic the Humean mechanism, the Act restricted, but did not suppress, the right of note issue. Note-issuing banks in England and Wales were allowed to continue issuing notes at current, but no higher, levels without holding equivalent gold reserves. Scottish and Irish note-issuing banks could increase their note issue only if the increase was matched by increased holdings of gold or government debt. In England and Wales, the total note issue could increase only if gold was exchanged for Bank of England notes, so that a 100-percent marginal gold-reserve requirement was imposed on additional banknotes.

Opposition to the Bank Charter Act was led by the Banking School, notably John Fullarton and Thomas Tooke. Rejecting the Humean quantity-theoretic underpinnings of the Currency School, the Banking School regarded the quantitative limits of the Bank Charter Act as both unnecessary and counterproductive, because banks, obligated to redeem their liabilities directly or indirectly in gold, issue liabilities only insofar as they expect those liabilities to be willingly held by the public or, if not, are capable of redeeming any liabilities no longer willingly held. Rather than the Humean view that banks issue banknotes or create deposits without constraint, the Banking School took Smith's view that banks issue money in a form more convenient to hold and to transact with than metallic money, so that bank money allows an equivalent amount of gold to be shifted from monetary to real (non-monetary) uses, providing a net social saving. For a small open economy, the diversion (and likely export) of gold bullion from monetary to non-monetary uses has a negligible effect on prices, which are internationally, not locally, determined.

The quarter century following enactment of the Bank Charter Act showed that the Act had not eliminated monetary disturbances, the government having been compelled to suspend the Act in 1847, 1857 and 1866 to prevent incipient crises from causing financial collapse. Indeed, it was precisely the fear that liquidity might not be forthcoming that precipitated the increased demands for liquidity that the Act made it impossible to accommodate. Suspending the Act was sufficient to end the crises with limited intervention by the Bank.

It may seem surprising that the disappointing results of the Bank Charter Act provided so little vindication to the Banking School. The Act's failure led only to a partial, uneasy, and not entirely coherent accommodation between PSFM doctrine and the reality of a monetary system in which the money stock consists mostly of banknotes and bank deposits issued by fractional-reserve banks. Despite the failure of the Bank Charter Act, PSFM achieved almost canonical status, continuing, albeit with some notable exceptions, to serve as the textbook model of the gold standard.

The requirement that gold flows induce equal changes in the quantity of money in the country into (or from) which gold is flowing was replaced by an admonition that gold flows lead to "appropriate" changes in the central-bank discount rate or some alternative monetary instrument, so as to cause the quantity of money to change in the same direction as the gold flow. While such vague maxims, sometimes described as "the rules of the game," gave only directional guidance about how to respond to changes in gold reserves, their hortatory character and avoidance of quantitative guidance allowed monetary authorities the latitude to avoid the self-inflicted crises that had resulted from the quantitative limits of the Bank Charter Act.

Nevertheless, the myth of vague “rules” relating the quantity of money in a country to changes in gold reserves, whose observance ensured the smooth functioning of the international gold standard before its collapse at the start of World War I, enshrined PSFM as the theoretical paradigm for international monetary adjustment under the gold standard.

That paradigm was misconceived in four ways that can be briefly summarized.

  • Contrary to PSFM, changes in the quantity of money in a gold-standard country cannot change local prices proportionately, because the prices of tradable goods in that country are constrained by arbitrage to equal the prices of those goods in other countries.
  • Contrary to PSFM, changes in local gold reserves are not necessarily caused either by non-monetary disturbances, such as shifts in the terms of trade between countries, or by local monetary disturbances (e.g., overissue by local banks) that must be reversed or counteracted by central-bank policy.
  • Contrary to PSFM, changes in the national price levels of gold-standard countries were uncorrelated with gold flows, and changes in national price levels across countries were positively, not negatively, correlated with one another.
  • Local banks and monetary authorities exhibit their own demands for gold reserves, whether by choice (i.e., independent of legally required gold holdings) or by law (i.e., a legal requirement to hold gold reserves equal to some fraction of the banknotes issued by banks or monetary authorities). Changes in gold reserves may therefore be caused by changes in the local demands for gold of banks and monetary authorities in one or more countries.

Many of the misconceptions underlying PSFM were identified by Fullarton in his refutation of the Currency School. In articulating the classical Law of Reflux, he established the logical independence of the quantity of convertible money in a country from the quantity of gold reserves held by the monetary authority. The gold reserves held by individual banks, or their deposits with the Bank of England, are not the raw material from which banks create money, whether banknotes or deposits. Rather, it is their creation of banknotes and deposits when extending credit to customers that generates a derived demand to hold liquid assets (i.e., gold), allowing them to accommodate the demands of customers and other banks to redeem those banknotes and deposits. Causality runs from creating banknotes and deposits to holding reserves, not vice versa.

The misconceptions inherent in PSFM, and the resulting misunderstanding of gold flows under the gold standard, led to a further misconception known as sterilization: the idea that central banks, violating the obligations imposed by "the rules of the game," do not allow, or deliberately prevent, local money stocks from changing as their gold holdings change. The misconception lies in the presumption that gold inflows ought necessarily to cause increases in local money stocks. The mechanisms causing local money stocks to change are entirely different from those causing gold flows. And insofar as those mechanisms are related, causality flows from the local money stock to gold reserves, not vice versa.

Gold flows also result when monetary authorities transform their own asset holdings into gold. Notable examples of such transformations occurred in the 1870s, when a number of countries abandoned their de jure bimetallic (and de facto silver) standards for the gold standard. Monetary authorities in those countries transformed silver holdings into gold, driving the value of gold up and that of silver down. Similarly, but with more catastrophic consequences, the Bank of France, after France restored the gold standard in 1928, began converting its holdings of foreign-exchange reserves (financial claims on the United States or Britain, payable in gold) into gold. Following the French example, other countries rejoining the gold standard redeemed foreign exchange for gold, causing gold appreciation and the deflation that led to the Great Depression.

Friedman himself, in his foreword to the English translation of the memoirs of Emile Moreau, the interwar governor of the Bank of France, came to acknowledge the French role:

Rereading the memoirs of this splendid translation . . . has impressed me with important subtleties that I missed when I read the memoirs in a language not my own and in which I am far from completely fluent. Had I fully appreciated those subtleties when Anna Schwartz and I were writing our A Monetary History of the United States, we would likely have assessed responsibility for the international character of the Great Depression somewhat differently. We attributed responsibility for the initiation of a worldwide contraction to the United States and I would not alter that judgment now. However, we also remarked, “The international effects were severe and the transmission rapid, not only because the gold-exchange standard had rendered the international financial system more vulnerable to disturbances, but also because the United States did not follow gold-standard rules.” Were I writing that sentence today, I would say “because the United States and France did not follow gold-standard rules.”

I pause to note for the record Friedman’s assertion that the United States and France did not follow “gold-standard rules.” Warming up to the idea, he then accused them of sterilization.

Benjamin Strong and Emile Moreau were admirable characters of personal force and integrity. But . . . the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community. We stressed that the U.S. "did not permit the inflow of gold to expand the U.S. money stock. We not only sterilized it, we went much further. Our money stock moved perversely, going down as the gold stock went up" from 1929 to 1931.

Strong and Moreau tried to reconcile two ultimately incompatible objectives: fixed exchange rates and internal price stability. Thanks to the level at which Britain returned to gold in 1925, the U.S. dollar was undervalued, and thanks to the level at which France returned to gold at the end of 1926, so was the French franc. Both countries as a result experienced substantial gold inflows. Gold-standard rules called for letting the stock of money rise in response to the gold inflows and for price inflation in the U.S. and France, and deflation in Britain, to end the over- and under-valuations. But both Strong and Moreau were determined to prevent inflation and accordingly both sterilized the gold inflows, preventing them from providing the required increase in the quantity of money.

Friedman’s discussion of sterilization is at odds with basic theory. Working with a naïve version of PSFM, he imagines that gold flows passively respond to trade balances independent of monetary forces, and that the monetary authority under a gold standard is supposed to ensure that the domestic money stock varies roughly in proportion to its gold reserves. Ignoring the international deflationary dynamic, he asserts that the US money stock perversely declined from 1929 to 1931, while its gold stock increased. With a faltering banking system, the public shifted from holding demand deposits to currency. Gold reserves were legally required against currency, but not against demand deposits, so the shift from deposits to currency entailed an increase gold reserves. To be sure the increased US demand for gold added to upward pressure on value of gold, and to worldwide deflationary pressure. But US gold holdings rose by only $150 million from December 1929 to December 1931 compared with an increase of $1.06 billion in French gold holdings over the same period. Gold accumulation by the US and its direct contribution to world deflation during the first two years of the Depression was small relative to that of France.

Friedman also erred in stating “the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community.” The shock to the international community clearly originated not in the US but in France. The Fed could have absorbed and mitigated the shock by allowing a substantial outflow of its huge gold reserves, but instead amplified the shock by raising interest rates to nearly unprecedented levels, causing gold to flow into the US.

After correctly noting the incompatibility between fixed exchange rates and internal price stability, Friedman contradicts himself by asserting that, in seeking to stabilize their internal price levels, Strong and Moreau violated the gold-standard "rules," as if it were rules, not arbitrage, that constrain national price levels to converge toward a common level under a gold standard.

Friedman’s assertion that, after 1925, the dollar was undervalued and sterling overvalued was not wrong. But he misunderstood the consequences of currency undervaluation and overvaluation under the gold standard, a confusion stemming from the underlying misconception, derived from PSFM, that foreign exchange rates adjust to balance trade flows, so that, in equilibrium, no country runs a trade deficit or trade surplus.

Thus, in Friedman’s view, dollar undervaluation and sterling overvaluation implied a US trade surplus and British trade deficit, causing gold to flow from Britain to the US. Under gold-standard “rules,” the US money stock and US prices were supposed to rise and the British money stock and British prices were supposed to fall until undervaluation and overvaluation were eliminated. Friedman therefore blamed sterilization of gold inflows by the Fed for preventing the necessary increase in the US money stock and price level to restore equilibrium. But, in fact, from 1925 through 1928, prices in the US were roughly stable and prices in Britain fell slightly. Violating gold-standard “rules” did not prevent the US and British price levels from converging, a convergence driven by market forces, not “rules.”

The stance of monetary policy in a gold-standard country had minimal effect on either the quantity of money or the price level in that country, both of which were mainly determined by the internationally determined value of gold. What the stance of national monetary policy determines under the gold standard is whether the quantity of money in the country adjusts to the quantity demanded through domestic monetary creation or withdrawal or through the inflow or outflow of gold. Sufficiently tight domestic monetary policy, restricting the quantity of domestic money, causes a compensatory gold inflow that increases the domestic money stock, while sufficiently easy money causes a compensatory outflow of gold that reduces the domestic money stock. Tightness or ease of domestic monetary policy under the gold standard thus mainly affected gold and foreign-exchange reserves and, only minimally, the quantity of domestic money and the domestic price level.
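
The logic can be summarized in a schematic Cambridge-equation identity (my notation, offered as a sketch rather than a formal model). With the price level P pinned down internationally, the quantity of money demanded domestically, M^d, is given, and gold flows supply whatever domestic credit creation D does not:

$$M^d = kPy = D + G \qquad\Longrightarrow\qquad G = kPy - D,$$

where G is the gold-backed component of the money stock, y is real income, and k is the Cambridge coefficient. Tight policy (a small D) implies a large compensating inflow G; easy policy implies an outflow. Either way the money stock settles at M^d; only its division between domestic and gold backing is affected.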

However, the combined effect of many countries simultaneously tightening monetary policy in a deliberate, or even inadvertent, attempt to accumulate, or at least prevent the loss of, gold reserves could indeed drive up the international value of gold through a deflationary process affecting prices in all gold-standard countries. Friedman, even while admitting that, in his Monetary History, he had understated the effect of the Bank of France on the Great Depression, referred only to the overvaluation of sterling and the undervaluation of the dollar and the franc as causes of the Great Depression, remaining oblivious to the deflationary effects of gold accumulation and appreciation.

It was thus nonsensical for Friedman to argue that the mistake of the Bank of France during the Great Depression was not to increase the quantity of francs in proportion to the increase of its gold reserves. The problem was not that the quantity of francs was too low; it was that the Bank of France prevented the French public from collectively increasing the quantity of francs that they held except by importing gold.

Unlike Friedman, F. A. Hayek actually defended the policy of the Bank of France, and denied that the Bank of France had violated “the rules of the game” after nearly quadrupling its gold reserves between 1928 and 1932. Under his interpretation of those “rules,” because the Bank of France increased the quantity of banknotes after the 1928 restoration of convertibility by about as much as its gold reserves increased, it had fully complied with the “rules.” Hayek’s defense was incoherent; under its legal obligation to convert gold into francs at the official conversion rate, the Bank of France had no choice but to increase the quantity of francs by as much as its gold reserves increased.

That eminent economists like Hayek and Friedman could defend, or criticize, the conduct of the Bank of France during the Great Depression, because the Bank either did, or did not, follow “the rules of the game” under which the gold standard operated, shows the uselessness and irrelevance of the “rules of the game” as a guide to policy. For that reason alone, the failure of empirical studies to find evidence that “the rules of the game” were followed during the heyday of the gold standard is unsurprising. But the deeper reason for that lack of evidence is that PSFM, whose implementation “the rules of the game” were supposed to guarantee, was based on a misunderstanding of the international-adjustment mechanism under either the gold standard or any fixed-exchange-rates system.

Despite the grip of PSFM on most of the profession, a few economists did show a deeper understanding of the adjustment mechanism. The idea that the price level in terms of gold directly constrained the movements of national price levels was recognized by writers as diverse as Keynes, Mises, and Hawtrey, who all pointed out that the prices of internationally traded commodities were constrained by arbitrage and that the free movement of capital across countries would limit discrepancies in interest rates across countries attached to the gold standard, observations that had already been made by Smith, Thornton, Ricardo, Fullarton and Mill in the classical period. But until the Monetary Approach to the Balance of Payments became popular in the 1970s, only Hawtrey consistently and systematically deduced the implications of those insights, in analyzing both the Great Depression and the Bretton Woods system of fixed, but adjustable, exchange rates that followed World War II.

The inconsistencies and internal contradictions of PSFM were sometimes recognized, but usually overlooked, by business-cycle theorists who focused on the disturbing influence of central banks, perpetuating the mistakes of the Humean Currency School doctrine that attributed cyclical disturbances to the misbehavior of local banking systems inherently disposed to overissue their liabilities.

White and Hogan on Hayek and Cassel on the Causes of the Great Depression

Lawrence White and Thomas Hogan have just published a new paper in the Journal of Economic Behavior and Organization ("Hayek, Cassel, and the origins of the great depression"). Since White is a leading Hayek scholar who has written extensively on Hayek's economic writings (e.g., his important 2008 article "Did Hayek and Robbins Deepen the Great Depression?") and edited the new edition of Hayek's notoriously difficult volume, The Pure Theory of Capital, when it was published as volume 12 of the Collected Works of F. A. Hayek, the conclusion reached by the new paper, that Hayek had a better understanding than Cassel of what caused the Great Depression, is not, in and of itself, surprising.

However, I admit to being taken aback by the abstract of the paper:

We revisit the origins of the Great Depression by contrasting the accounts of two contemporary economists, Friedrich A. Hayek and Gustav Cassel. Their distinct theories highlight important, but often unacknowledged, differences between the international depression and the Great Depression in the United States. Hayek's business cycle theory offered a monetary overexpansion account for the 1920s investment boom, the collapse of which initiated the Great Depression in the United States. Cassel's warnings about a scarcity of gold reserves related to the international character of the downturn, but the mechanisms he emphasized contributed little to the deflation or depression in the United States.

I wouldn't deny that there are differences between the way the Great Depression played out in the United States and in the rest of the world, e.g., Britain and France, which, to be sure, suffered less severely than did the US or, say, Germany. It is both possible, and important, to explore and understand the differential effects of the Great Depression in various countries. I am sorry to say that White and Hogan do neither. Instead, taking at face value the dubious authority of Friedman and Schwartz's treatment of the Great Depression in the Monetary History of the United States, they assert that the cause of the Great Depression in the US was fundamentally different from the cause of the Great Depression in many or all other countries.

Taking that insupportable premise from Friedman and Schwartz, they simply invoke various numerical facts from the Monetary History as if those facts, in and of themselves, demonstrate what needs to be demonstrated: that the causes of the Great Depression in the US were different from the causes of the Great Depression in the rest of the world. That assumption vitiated the entire treatment of the Great Depression in the Monetary History, and it vitiates the conclusions that White and Hogan reach about the merits of the conflicting explanations of the Great Depression offered by Cassel and Hayek.

I've discussed the failings of Friedman's treatment of the Great Depression and of other episodes he analyzed in the Monetary History in previous posts (e.g., here, here, here, here, and here). The common failing in all the episodes treated by Friedman in the Monetary History and elsewhere is that he misunderstood how the gold standard operated, because his model of the gold standard was a primitive version of the price-specie-flow mechanism, in which the monetary authority determines the quantity of money, which determines the price level, which in turn determines the balance of payments, the balance of payments being a function of the relative price levels of the different countries on the gold standard. Countries with relatively high price levels experience trade deficits and outflows of gold, and countries with relatively low price levels experience trade surpluses and inflows of gold. Under the mythical "rules of the game" of the gold standard, countries with gold inflows were supposed to expand their money supplies, so that prices would rise, and countries with outflows were supposed to reduce their money supplies, so that prices would fall. If countries followed the rules, an international monetary equilibrium would eventually be reached.

That is the model of the gold standard that Friedman used throughout his career. He was not alone; Hayek and Mises and many others also used that model, following Hume's treatment in his essay on the balance of trade. But it is the wrong model. The correct model is the one originating with Adam Smith, based on the law of one price, which says that the prices of all commodities in terms of gold are equalized by arbitrage across all countries on the gold standard.

As a first approximation, under the Smithian model there is only one price level, adjusted for currency parities, for all countries on the gold standard. So if there is deflation in one country on the gold standard, there is deflation in all countries on the gold standard. If the rest of the world was suffering from deflation under the gold standard, the US was also suffering a deflation of approximately the same magnitude as every other country on the gold standard.
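
In schematic terms (a first approximation, in my notation, abstracting from transport costs and non-tradables):

$$P_i = r_i P_g,$$

where P_g is the internationally determined price level in terms of gold and r_i is country i's fixed gold parity (currency units per unit of gold). With parities fixed, any change in P_g, such as a deflation caused by an increased monetary demand for gold, is transmitted simultaneously to every country on the gold standard.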

The entire premise of the Friedman account of the Great Depression, adopted unquestioningly by White and Hogan, is that there was a different causal mechanism for the Great Depression in the United States from the mechanism operating in the rest of the world. That premise is flatly wrong. The causation assumed by Friedman in the Monetary History was the exact opposite of the actual causation. It wasn’t, as Friedman assumed, that the decline in the quantity of money in the US was causing deflation; it was the common deflation in all gold-standard countries that was causing the quantity of money in the US to decline.

To be sure, there was a banking collapse in the US that exacerbated the catastrophe, but the collapse was an effect of the underlying cause, deflation, not an independent cause. Absent the deflationary collapse, there is no reason to assume that the investment boom in the most advanced and most productive economy in the world after World War I was unsustainable, as the Hayekian overinvestment/malinvestment hypothesis posits, with no evidence of unsustainability other than the subsequent economic collapse itself.

So what did cause deflation under the gold standard? It was the rapid increase in the monetary demand for gold resulting from the insane policy of the Bank of France (disgracefully endorsed by Hayek as late as 1932), a policy that Cassel, along with Ralph Hawtrey (whose writings on the danger of postwar deflation closely parallel Cassel's, while avoiding all the ancillary mistakes that White and Hogan attribute to Cassel), was warning would lead to catastrophe.

It is true that Cassel also believed that, over the long run, not enough gold was being produced to avoid deflation. White and Hogan devote inordinate space and attention to that issue, inordinate because that secular tendency toward deflation is entirely different from the catastrophic effects of the increase in gold demand in the late 1920s triggered by the insane policy of the Bank of France.

The US could have mitigated the effects of the Bank of France's policy had it been willing to accommodate the French demand to increase its gold holdings. Of course, mitigating the effects of that insane policy would have rewarded the French for their catastrophic conduct, but, under the circumstances, some other means of addressing French misconduct would have spared the world incalculable suffering. Misled by an inordinate fear of stock-market speculation, however, the Fed tightened policy in 1928-29 and began accumulating gold rather than accommodating the French demand.

And the Depression came.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger's Grundsätze (Principles of Economics), which, along with Jevons's Theory of Political Economy and Walras's Elements of Pure Economics, ushered in the marginal revolution on which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary, Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons's work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger's unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that can be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist, famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger's specific contribution and its lasting imprint on the Austrian School.
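
Menger's imputation principle can be stated compactly (in my notation, not Menger's). The value of a unit of a factor's services is the value of the incremental output imputed to it,

$$v_f = p \cdot \frac{\partial q}{\partial f},$$

where q is output and p its (expected) price, while the cost of using the factor to produce one good is the value of the next most highly valued output thereby foregone. Cost is thus derived from expected output values rather than determining them.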

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: his two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, were brothers-in-law. [Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek's maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk's famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was "a social network first and last."

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple, von Wieser, who died in 1926, not long after supervising Hayek's doctoral dissertation, later published in 1929 and translated into English in 1933 as Monetary Theory and the Trade Cycle. Moreover, a gap of about 16 years separated Mises (born in 1881) and Schumpeter (born in 1883), who were near contemporaries, from Hayek (born in 1899), who was in turn a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian School wound up in either the US or Britain after World War II. Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts in the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, but Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek's interests, however, shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek's appointment to the Committee on Social Thought at the University of Chicago and Mises's appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not discredit or invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises's two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard's political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, and with no political or personal agenda other than the advancement of his own and his students' scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner's students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point where extreme libertarians and white nationalists congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct subspecies. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream's misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable, and trivial, conclusion entailed by them. In the Kirznerian view, the focus of mainstream models on equilibrium states reached under unrealistic assumptions results from a preoccupation with mathematical formalism, in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now-routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is common knowledge among the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model's other, equally semi-omniscient, agents.

Although shreds of subjectivism, i.e., the premise that agents make choices based on their own preference orderings, are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations among economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate, much less communicate to a central planner: mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites this mistrust of statistical aggregates and price indices as a factor in Hayek's disastrous policy advice warning against anti-deflationary, or reflationary, measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn't what led Hayek and the Austrians astray in the Great Depression; what led them astray was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek's own business-cycle theory explicitly held that a neutral (stable) monetary policy ought to aim at keeping the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek's policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.
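
The inconsistency can be stated in equation-of-exchange terms (a stylized summary in my notation, not Hayek's own). Hayek's neutrality norm amounts to holding nominal spending constant:

$$MV = Py = \text{constant} \qquad\Longrightarrow\qquad \hat{P} = -\hat{y},$$

where hats denote growth rates. "Good deflation" is thus deflation at the rate of real output growth, with total spending MV unchanged; the deflation of 1929-32 instead reflected a collapsing MV, precisely what Hayek's own criterion condemned.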

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of his research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student, Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what is not was unfortunate not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants – that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector, and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)
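The parenthetical point is easy to see in a toy example. The following minimal sketch uses hypothetical linear demand and supply schedules of my own choosing (nothing in it comes from the New Classical or New Keynesian literature): if every agent expects the market-clearing price and trades at it, all plans mesh and the market clears; a shared expectation of any other price leaves some plans unexecutable.

```python
# Hypothetical linear schedules: D(p) = 20 - 2p, S(p) = 4 + 2p.
# The equilibrium price p* solves D(p*) = S(p*), i.e., p* = 4.
def excess_demand(p):
    return (20 - 2 * p) - (4 + 2 * p)

print(excess_demand(4.0))  # 0.0: if all agents expect p* = 4 and trade at it,
                           # plans are mutually consistent -- the market clears
print(excess_demand(5.0))  # -4.0: with a shared expectation of p = 5, planned
                           # sales exceed planned purchases; some plans fail
```

The example is trivial by construction, which is precisely the point: positing that everyone rationally expects the market-clearing price builds the clearing of the market into the assumptions.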

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants are a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald, himself, went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who shared an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent — in equilibrium. Under Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
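For readers curious what such a mixed-strategy solution looks like, here is a minimal sketch in Python. The payoff numbers are my own illustrative assumptions, not von Neumann and Morgenstern’s; the point is only that, once payoffs are specified, each player can randomize so as to leave the other indifferent, which is the precise sense in which the two interdependent choices are rendered mutually consistent.

```python
# The Holmes-Moriarty interaction as a 2x2 zero-sum game. Rows are Holmes's
# choices (exit at Canterbury, ride on to Dover); columns are Moriarty's.
# Entries are payoffs to Holmes; Moriarty's payoffs are the negatives.
# These payoff numbers are illustrative assumptions only.
A = [[-100,  50],    # Holmes exits at Canterbury: caught (-100) or partial escape (50)
     [ 100, -100]]   # Holmes rides to Dover: reaches the Continent (100) or caught

# The game has no saddle point, so equilibrium requires mixed strategies.
# Holmes randomizes so that Moriarty is indifferent between his two columns:
#   p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1]
d = A[0][0] + A[1][1] - A[0][1] - A[1][0]
p = (A[1][1] - A[1][0]) / d      # Holmes's probability of exiting at Canterbury
q = (A[1][1] - A[0][1]) / d      # Moriarty's probability of searching Canterbury
value = p * A[0][0] + (1 - p) * A[1][0]   # expected payoff to Holmes

print(f"Holmes exits at Canterbury with probability {p:.3f}")
print(f"Moriarty searches Canterbury with probability {q:.3f}")
print(f"Expected payoff to Holmes: {value:.1f}")
```

Note what the construction does and does not deliver: it defines a pair of mutually consistent randomized choices, but it says nothing about how Holmes and Moriarty would ever discover those probabilities, which is just the explanatory gap at issue throughout this post.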

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations could describe market price adjustments, as functions of market excess demands, that would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
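Samuelson’s adjustment rule is easy to state concretely. Here is a minimal numerical sketch with a hypothetical one-good excess-demand function of my own devising (not Samuelson’s own example): the fictional auctioneer revises the cried price in proportion to excess demand until it converges on the equilibrium.

```python
# Tatonnement dynamics dp/dt = k * z(p) for one market, integrated by Euler steps.
# Hypothetical schedules: D(p) = 10 - p, S(p) = 2 + p, so z(p*) = 0 at p* = 4.
def z(p):
    return (10 - p) - (2 + p)   # excess demand

p, k, dt = 1.0, 0.5, 0.1        # initial cried price, adjustment speed, time step
for _ in range(100):
    p += k * z(p) * dt          # auctioneer raises price when demand exceeds supply
print(f"price after tatonnement: {p:.4f} (equilibrium p* = 4)")
# No trades occur along the adjustment path; that is exactly the fictional
# feature that makes such stability results hard to apply to real economies
# in which trading takes place at non-equilibrium prices.
```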

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had grappled with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent: equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline, to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding it conceals its violation in practice.
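DeVroey’s distinction between optimizing plans and optimizing behavior can be made concrete with a toy example of my own (the numbers are assumptions, not DeVroey’s): each side of a market forms an individually optimal plan at a commonly expected price, but only the short side of the market can execute its plan.

```python
# Both agents plan optimally given the same expected price; the plans don't mesh.
expected_price = 5.0   # hypothetical price both agents expect
planned_sale = 10      # seller's optimal planned quantity at that price (assumed)
planned_purchase = 6   # buyer's optimal planned quantity at that price (assumed)

executed = min(planned_sale, planned_purchase)   # the short side determines trade
print(f"planned sale: {planned_sale}, executed trade: {executed}")
# The seller's plan was optimal, yet 4 units go unsold: optimizing plans do not
# by themselves yield optimizing behavior unless the plans are mutually consistent.
```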

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents interact with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.


My Paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” Is Now Available Online.

The paper, forthcoming in The Review of Austrian Economics, can be read online.

Here is the abstract:

Hayek was among the first to realize that for intertemporal equilibrium to obtain all agents must have correct expectations of future prices. Before comparing four categories of intertemporal equilibrium, the paper explains Hayek’s distinction between correct expectations and perfect foresight. The four equilibrium concepts considered are: (1) perfect-foresight equilibrium, of which the Arrow-Debreu-McKenzie (ADM) model of equilibrium with complete markets is an alternative version; (2) Radner’s sequential equilibrium with incomplete markets; (3) Hicks’s temporary equilibrium, as extended by Bliss; (4) the Muth rational-expectations equilibrium, as extended by Lucas into macroeconomics. While Hayek’s understanding closely resembles Radner’s sequential equilibrium, described by Radner as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium seems to have been the natural extension of Hayek’s approach. The now dominant Lucas rational-expectations equilibrium misconceives intertemporal equilibrium, suppressing Hayek’s insights and thereby retreating to a sterile perfect-foresight equilibrium.

And here is my concluding paragraph:

Four score and three years after Hayek explained how challenging were the subtleties of the notion of intertemporal equilibrium, and how elusive any theoretical account of an empirical tendency toward intertemporal equilibrium, modern macroeconomics has now built a formidable theoretical apparatus founded on a methodological principle that rejects all the concerns that Hayek found so vexing and denies that those difficulties even exist. Many macroeconomists feel proud of what modern macroeconomics has achieved, but there is reason to think that the path trod by Hayek, Hicks and Radner could have led macroeconomics in a more fruitful direction than the one on which it has been led by Lucas and his associates.

Cleaning Up After Burns’s Mess

In my two recent posts (here and here) about Arthur Burns’s lamentable tenure as Chairman of the Federal Reserve System from 1970 to 1978, my main criticism of Burns has been that, apart from his willingness to subordinate monetary policy to the political interests of the President who appointed him, Burns failed to understand that an incomes policy to restrain wages, thereby minimizing the tendency of disinflation to reduce employment, could not, in principle, reduce inflation if monetary restraint did not correspondingly reduce the growth of total spending and income. Inflationary (or employment-reducing) wage increases can’t be prevented by an incomes policy if the rate of increase in total spending, and hence total income, isn’t controlled. King Canute couldn’t prevent the tide from coming in, and neither Arthur Burns nor the Wage and Price Council could slow the increase in wages when total spending was increasing at a rate faster than was consistent with the 3% inflation rate that Burns was aiming for.
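The arithmetic behind that claim is nothing more than an accounting identity, which a minimal sketch (with illustrative numbers of my own, not Burns-era data) makes plain:

```python
# To a close approximation, growth of nominal spending = real growth + inflation.
nominal_spending_growth = 0.11   # hypothetical: total spending rising 11% per year
real_capacity_growth = 0.035     # hypothetical growth of real productive capacity
implied_inflation = nominal_spending_growth - real_capacity_growth
print(f"implied inflation: {implied_inflation:.1%}")   # 7.5%
# No incomes policy can hold wage and price increases below what the growth of
# total spending implies, any more than Canute could hold back the tide.
```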

In this post, I’m going to discuss how the mess left behind by Burns, upon his departure from the Fed in 1978, had to be cleaned up. The mess got even worse under Burns’s successor, G. William Miller. The cleanup didn’t begin until Carter appointed Paul Volcker in 1979, when it became obvious that the monetary policy of the Fed had failed to cope with the problems left behind by Burns. After unleashing powerful inflationary forces under the cover of the wage-and-price controls he had persuaded Nixon to impose in 1971 as a precondition for delivering the monetary stimulus so desperately desired by Nixon to ensure his reelection, Burns continued providing that stimulus even after Nixon’s reelection, when it might still have been possible to taper off the stimulus before inflation flared up, and without aborting the expansion then under way. In his arrogance or ignorance, Burns chose not to adjust the policy that had already accomplished its intended result.

Not until the end of 1973, after crude oil prices quadrupled owing to a cutback in OPEC oil output, driving inflation above 10% in 1974, did Burns withdraw the monetary stimulus that had been administered in increasing doses since early 1971. Shocked out of his complacency by the outcry against 10% inflation, Burns shifted monetary policy toward restraint, bringing down the growth in nominal spending and income from over 11% in Q4 1973 to only 8% in Q1 1974.

After prolonging monetary stimulus unnecessarily for a year, Burns erred grievously by applying monetary restraint in response to the rise in oil prices. The largely exogenous rise in oil prices would most likely have caused a recession even with no change in monetary policy. By subjecting the economy to the added shock of reducing aggregate demand, Burns turned a mild recession into the worst recession since the 1937-38 recession at the end of the Great Depression, with unemployment peaking at 8.8% in Q2 1975. Nor did the reduction in aggregate demand have much anti-inflationary effect, because the incremental reduction in total spending occasioned by the monetary tightening was reflected mainly in reduced output and employment rather than in reduced inflation.

But even with unemployment reaching the highest level in almost 40 years, inflation did not fall below 5% – and then only briefly – until a year after the bottom of the recession. When President Carter took office in 1977, Burns, hoping to be reappointed to another term, provided Carter with a monetary expansion to hasten the reduction in unemployment that Carter had promised in his Presidential campaign. However, Burns’s accommodative policy did not sufficiently endear him to Carter to secure the coveted reappointment.

The short and unhappy tenure of Carter’s first appointee, G. William Miller, during which inflation rose from 6.5% to 10%, ended abruptly when Carter, with his Administration in crisis, sacked his Treasury Secretary, replacing him with Miller. Under pressure from the financial community to address the seemingly intractable inflation, which was accelerating in the wake of a second oil shock following the Iranian Revolution and hostage taking, Carter felt constrained to appoint Volcker, formerly a high official in the Treasury under both Kennedy and Nixon, then serving as President of the New York Federal Reserve Bank, who was known to be the favored choice of the financial community.

A year after leaving the Fed, Burns gave the annual Per Jacobsson Lecture to the International Monetary Fund. Calling his lecture “The Anguish of Central Banking,” Burns offered a defense of his tenure, arguing, in effect, that he should not be blamed for his poor performance, because the job of central banking is so very hard. Central bankers could control inflation, but only by inflicting unacceptably high unemployment. The political authorities and the public to whom central bankers are ultimately accountable would simply not tolerate the high unemployment that would be necessary for inflation to be controlled.

Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted money supply and created sufficient strains in the financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture.

Burns’s framing of the choices facing a central bank was tendentious; no policy maker had suggested that, after years of inflation had convinced the public to expect inflation to continue indefinitely, the Fed should “terminate inflation with little delay.” And Burns was hardly a disinterested actor as Fed chairman, having orchestrated a monetary expansion to promote the re-election chances of his benefactor Richard Nixon after securing, in return for that service, Nixon’s agreement to implement an incomes policy to limit the growth of wages, a policy that Burns believed would contain the inflationary consequences of the monetary expansion.

However, as I explained in my post on Hawtrey and Burns, the conceptual rationale for an incomes policy was not to allow monetary expansion to increase total spending, output and employment without causing increased inflation, but to allow monetary restraint to be administered without increasing unemployment. But under the circumstances in the summer of 1971, when a recovery from the 1970 recession was just starting, and unemployment was still high, monetary expansion might have hastened a recovery in output and employment, the resulting increase in total spending and income being reflected in increased output and employment rather than absorbed in higher wages and prices.

But using controls over wages and prices to speed the return to full employment could succeed only while substantial unemployment and unused capacity allowed output and employment to increase; the faster the recovery, the sooner increased spending would show up in rising prices and wages, or in supply shortages, rather than in increased output. An incomes policy to enable monetary expansion to speed the recovery from recession and restore full employment might theoretically be successful, but only if the monetary stimulus were promptly tapered off before driving up inflation.

Thus, if Burns wanted an incomes policy to be able to hasten the recovery through monetary expansion and maximize the political benefit to Nixon in time for the 1972 election, he ought to have recognized the need to withdraw the stimulus after the election. But for a year after Nixon’s reelection, Burns continued the monetary expansion without letup. Burns’s expression of anguish at the dilemma foisted upon him by circumstances beyond his control hardly evokes sympathy, sounding more like an attempt to deflect responsibility for his own mistakes or malfeasance in serving as an instrument of the criminal Committee to Re-elect the President without bothering to alter that politically motivated policy after its dishonorable mission had been accomplished.

But it was not until Burns’s successor, G. William Miller, was succeeded by Paul Volcker in August 1979 that the Fed was willing to adopt — and maintain — an anti-inflationary policy. In his recently published memoir Volcker recounts how, responding to President Carter’s request in July 1979 that he accept appointment as Fed chairman, he told Mr. Carter that, to bring down inflation, he would adopt a tighter monetary policy than had been followed by his predecessor. He also writes that, although he did not regard himself as a Friedmanite Monetarist, he had become convinced that to control inflation it was necessary to control the quantity of money, though he did not agree with Friedman that a rigid rule was required to keep the quantity of money growing at a constant rate. To what extent the Fed would set its policy in terms of a fixed target rate of growth in the quantity of money became the dominant issue in Fed policy during Volcker’s first term as Fed chairman.

In a review of Volcker’s memoir widely cited in the econ blogosphere, Tim Barker decried Volcker’s tenure, especially his determination to control inflation even at the cost of spilling blood — other people’s blood – if that was necessary to eradicate the inflationary psychology of the 1970s, which had become a seemingly permanent feature of the economic environment by the time of Volcker’s appointment.

If someone were to make a movie about neoliberalism, there would need to be a starring role for the character of Paul Volcker. As chair of the Federal Reserve from 1979 to 1987, Volcker was the most powerful central banker in the world. These were the years when the industrial workers movement was defeated in the United States and United Kingdom, and third world debt crises exploded. Both of these owe something to Volcker. On October 6, 1979, after an unscheduled meeting of the Fed’s Open Market Committee, Volcker announced that he would start limiting the growth of the nation’s money supply. This would be accomplished by limiting the growth of bank reserves, which the Fed influenced by buying and selling government securities to member banks. As money became more scarce, banks would raise interest rates, limiting the amount of liquidity available in the overall economy. Though the interest rates were a result of Fed policy, the money supply target let Volcker avoid the politically explosive appearance of directly raising rates himself. The experiment—known as the Volcker Shock—lasted until 1982, inducing what remains the worst unemployment since the Great Depression and finally ending the inflation that had troubled the world economy since the late 1960s. To catalog all the results of the Volcker Shock—shuttered factories, broken unions, dizzying financialization—is to describe the whirlwind we are still reaping in 2019. . . .

Barker is correct that Volcker had been persuaded that to tighten monetary policy the quantity of reserves that the Fed was providing to the banking system had to be controlled. But making the quantity of bank reserves the policy instrument was a technical change. Monetary policy had been — and could still have been — conducted using an interest-rate instrument, and it would have been entirely possible for Volcker to tighten monetary policy using the traditional interest-rate instrument.

It is possible that, as Barker asserts, it was politically easier to tighten policy using a quantity instrument than an interest-rate instrument. But even so, the real difficulty was not the instrument used, but the economic and political consequences of a tight monetary policy. The choice of the instrument to carry out the policy could hardly have made more than a marginal difference on the balance of political forces favoring or opposing that policy. The real issue was whether a tight monetary policy aimed at reducing inflation was more effectively conducted using the traditional interest-rate instrument or the quantity-instrument that Volcker adopted. More on this point below.

Those who praise Volcker like to say he “broke the back” of inflation. Nancy Teeters, the lone dissenter on the Fed Board of Governors, had a different metaphor: “I told them, ‘You are pulling the financial fabric of this country so tight that it’s going to rip. You should understand that once you tear a piece of fabric, it’s very difficult, almost impossible, to put it back together again.” (Teeters, also the first woman on the Fed board, told journalist William Greider that “None of these guys has ever sewn anything in his life.”) Fabric or backbone: both images convey violence. In any case, a price index doesn’t have a spine or a seam; the broken bodies and rent garments of the early 1980s belonged to people. Reagan economic adviser Michael Mussa was nearer the truth when he said that “to establish its credibility, the Federal Reserve had to demonstrate its willingness to spill blood, lots of blood, other people’s blood.”

Did Volcker consciously see unemployment as the instrument of price stability? A Rhode Island representative asked him “Is it a necessary result to have a large increase in unemployment?” Volcker responded, “I don’t know what policies you would have to follow to avoid that result in the short run . . . We can’t undertake a policy now that will cure that problem [unemployment] in 1981.” Call this the necessary byproduct view: defeating inflation is the number one priority, and any action to put people back to work would raise inflationary expectations. Growth and full employment could be pursued once inflation was licked. But there was more to it than that. Even after prices stabilized, full employment would not mean what it once had. As late as 1986, unemployment was still 6.6 percent, the Reagan boom notwithstanding. This was the practical embodiment of Milton Friedman’s idea that there was a natural rate of unemployment, and attempts to go below it would always cause inflation (for this reason, the concept is known as NAIRU or non-accelerating inflation rate of unemployment). The logic here is plain: there needed to be millions of unemployed workers for the economy to work as it should.

I want to make two points about Volcker’s policy. The first, which I made in my book Free Banking and Monetary Reform over 30 years ago, which I have reiterated in several posts on this blog, and which I discussed in my recent paper “Rules versus Discretion in Monetary Policy Historically Contemplated” (for an ungated version click here), is that using a quantity instrument to tighten monetary policy, as advocated by Milton Friedman and acquiesced in by Volcker, induces expectations about the future actions of the monetary authority that undermine the policy, rendering it untenable. Volcker eventually realized the perverse expectational consequences of trying to implement a monetary policy using a fixed rule for the quantity instrument, but his learning experience in following Friedman’s advice needlessly exacerbated and prolonged the agony of the 1982 downturn for months after inflationary expectations had been broken.

The problem was well-known in the nineteenth century thanks to British experience under the Bank Charter Act that imposed a fixed quantity limit on the total quantity of banknotes issued by the Bank of England. When the total of banknotes approached the legal maximum, a precautionary demand for banknotes was immediately induced by those who feared that they might not later be able to obtain credit if it were needed because the Bank of England would be barred from making additional credit available.

Here is how I described Volcker’s Monetarist experiment in my book.

The danger lurking in any Monetarist rule has been perhaps best summarized by F. A. Hayek, who wrote:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if under such a provision it ever became known that the amount of cash in circulation was approaching the upper limit and therefore a need for increased liquidity could not be met.

Hayek’s warnings were subsequently borne out after the Federal Reserve Board shifted its policy from targeting interest rates to targeting the monetary aggregates. The apparent shift toward a less inflationary monetary policy, reinforced by the election of a conservative, anti-inflationary president in 1980, induced an international shift from other currencies into the dollar. That shift caused the dollar to appreciate by almost 30 percent against other major currencies.

At the same time the domestic demand for deposits was increasing as deregulation of the banking system reduced the cost of holding deposits. But instead of accommodating the increase in the foreign and domestic demands for dollars, the Fed tightened monetary policy. . . . The deflationary impact of that tightening overwhelmed the fiscal stimulus of tax cuts and defense buildup, which, many had predicted, would cause inflation to speed up. Instead the economy fell into the deepest recession since the 1930s, while inflation, by 1982, was brought down to the lowest levels since the early 1960s. The contraction, which began in July 1981, accelerated in the fourth quarter of 1981 and the first quarter of 1982.

The rapid disinflation was bringing interest rates down from the record high levels of mid-1981 and the economy seemed to bottom out in the second quarter, showing a slight rise in real GNP over the first quarter. Sticking to its Monetarist strategy, the Fed reduced its targets for monetary growth in 1982 to between 2.5 and 5.5 percent. But in January and February, the money supply increased at a rapid rate, perhaps in anticipation of an incipient expansion. Whatever its cause, the early burst of the money supply pushed M-1 way over its target range.

For the next several months, as M-1 remained above its target, financial and commodity markets were preoccupied with what the Fed was going to do next. The fear that the Fed would tighten further to bring M-1 back within its target range reversed the slide in interest rates that began in the fall of 1981. A striking feature of the behavior of interest rates at that time was that credit markets seemed to be heavily influenced by the announcements every week of the change in M-1 during the previous week. Unexpectedly large increases in the money supply put upward pressure on interest rates.

The Monetarist explanation was that the announcements caused people to raise their expectations of inflation. But if the increase in interest rates had been associated with a rising inflation premium, the announcements should have been associated with weakness in the dollar on foreign exchange markets and rising commodities prices. In fact, the dollar was rising and commodities prices were falling consistently throughout this period – even immediately after an unexpectedly large jump in M-1 was announced. . . . (pp. 218-19)

I pause in my own earlier narrative to add the further comment that the increase in interest rates in early 1982 clearly reflected an increasing liquidity premium, caused by the reduced availability of bank reserves, making cash more desirable to hold than real assets, thereby inducing further declines in asset values.

However, increases in M-1 during July turned out to be far smaller than anticipated, relieving some of the pressure on credit and commodities markets and allowing interest rates to begin to fall again. The decline in interest rates may have been eased slightly by . . . Volcker’s statement to Congress on July 20 that monetary growth at the upper range of the Fed’s targets would be acceptable. More important, he added that the Fed was willing to let M-1 remain above its target range for a while if the reason seemed to be a precautionary demand for liquidity. By August, M-1 had actually fallen back within its target range. As fears of further tightening by the Fed subsided, the stage was set for the decline in interest rates to accelerate, [and] the great stock market rally began on August 17, when the Dow . . . rose over 38 points [almost 5%].

But anticipation of an incipient recovery again fed monetary growth. From the middle of August through the end of September, M-1 grew at an annual rate of over 15 percent. Fears that rapid monetary growth would induce the Fed to tighten monetary policy slowed down the decline in interest rates and led to renewed declines in commodities prices and the stock market, while pushing up the dollar to new highs. On October 5 . . . the Wall Street Journal reported that bond prices had fallen amid fears that the Fed might tighten credit conditions to slow the recent strong growth in the money supply. But on the very next day it was reported that the Fed expected inflation to stay low and would therefore allow M-1 to exceed its targets. The report sparked a major decline in interest rates and the Dow . . . soared another 37 points. (pp. 219-20)

The subsequent recovery, which began at the end of 1982, quickly became very powerful, but persistent fears that the Fed would backslide, at the urging of Milton Friedman and his Monetarist followers, into its bad old Monetarist habits periodically caused interest-rate spikes reflecting rising liquidity premiums as the public built up precautionary cash balances. Luckily, Volcker was astute enough to shrug off the overwrought warnings of Friedman and other Monetarists that rapid increases in the monetary aggregates foreshadowed the imminent return of double-digit inflation.

Thus, the Monetarist obsession with controlling the monetary aggregates senselessly prolonged an already deep recession that, by Q1 1982, had already slain the inflationary dragon, inflation having fallen to less than half its 1981 peak while GDP actually contracted in nominal terms. But because the money supply was expanding at a faster rate than was acceptable to Monetarist ideology, the Fed continued its futile, but destructive, effort to keep the monetary aggregates from overshooting their arbitrary Monetarist target range. It was not until the summer of 1982 that Volcker finally and belatedly decided that enough was enough, announcing that the Fed would declare victory over inflation and call off its Monetarist crusade, even if doing so meant incurring Friedman’s wrath and condemnation for abandoning the true Monetarist doctrine.

Which brings me to my second point about Volcker’s policy. While it’s clear that Volcker’s decision to adopt control over the monetary aggregates as the focus of monetary policy was disastrously misguided, monetary policy can’t be conducted without some target. Although the Fed’s interest rate can serve as a policy instrument, it is not a plausible policy target. The preferred policy target is generally thought to be the rate of inflation. The Fed, after all, is mandated to achieve price stability, which is usually understood to mean targeting a rate of inflation of about 2%. A more sophisticated alternative would be to aim at a suitable price-level path, thereby allowing some upward movement, say, at a 2% annual rate. The difference between an inflation target and a moving price-level target is that an inflation target is unaffected by past deviations of actual from targeted inflation, while a moving price-level target would require some catch-up inflation to make up for past below-target inflation, and reduced inflation to compensate for past above-target inflation.
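To make the difference concrete, here is a minimal numerical sketch of my own, with assumed numbers (a 2% target and a year in which inflation comes in at zero); it is an illustration, not anything drawn from the Fed’s actual framework:

```python
# A minimal sketch, with assumed numbers, contrasting an inflation target
# with a moving price-level target after one year of zero inflation.

target_inflation = 0.02

# Inflation targeting: past misses are bygones. After a year of 0%
# inflation, the target for the coming year is still just 2%.
inflation_target_next_year = target_inflation

# Moving price-level targeting: the target is a 2%-per-year price-level
# path. After a year of 0% inflation the price level is 2% below the
# path, so returning to the path next year requires catch-up inflation.
path_level_next_year = 1.02 ** 2      # where the path says prices should be
actual_price_level = 1.00             # where prices are after 0% inflation
catch_up_inflation = path_level_next_year / actual_price_level - 1

print(f"inflation target:           {inflation_target_next_year:.2%}")  # 2.00%
print(f"price-level target implies: {catch_up_inflation:.2%}")          # 4.04%
```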

However, the 1981-82 recession shows exactly why an inflation target, and even a moving price-level target, are bad ideas. By almost any comprehensive measure, inflation was still positive throughout the 1981-82 recession, though the producer price index was nearly flat. Thus, during the 1981-82 recession, inflation would have been almost as bad a target for monetary policy as the monetary aggregates, with most measures showing inflation running between 3 and 5 percent even at the depth of the recession. Inflation targeting is thus, on its face, an unreliable basis for conducting monetary policy.

But the deeper problem with targeting inflation is that seeking to achieve an inflation target during a recession, when the very existence of a recession is presumptive evidence of the need for monetary stimulus, is actually a recipe for disaster, or, at the very least, for needlessly prolonging a recession. In a recession, the goal of monetary policy should be to stabilize the rate of increase in nominal spending along a time path consistent with the desired rate of inflation. Thus, as long as output is contracting or increasing very slowly, the desired rate of inflation should be higher than the desired long-term rate. The appropriate strategy for achieving an inflation target ought to be to let inflation fall as the accelerating expansion of output and employment characteristic of most recoveries outpaces a stable expansion of nominal spending.

The true goal of monetary policy should always be to maintain a time path of total spending consistent with a desired price-level path over time. But it should not be the objective of monetary policy always to be as close as possible to the desired path, because trying to stay on that path would likely destabilize the real economy. Market monetarists argue that the goal of monetary policy ought to be to keep nominal GDP expanding at whatever rate is consistent with maintaining the desired long-run price-level path. That is certainly a reasonable practical rule for monetary policy, but the policy criterion I have discussed here would, at least in principle, be consistent with a more activist approach in which the monetary authority would seek to hasten the restoration of full employment during recessions by temporarily increasing the rate of monetary expansion and of nominal-GDP growth as long as real output and employment remained below the maximum levels consistent with the desired price-level path over time. But such a strategy would require the monetary authority to be able to fine-tune its monetary expansion so that it was tapered off just as the economy was reaching its maximum sustainable output and employment path. Whether such fine-tuning would be possible in practice is a question to which I don’t think we now know the answer.
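The contrast between the stable-growth rule and the more activist variant can be put in a minimal sketch. The numbers here are assumptions of my own (a 5% nominal-spending path and an arbitrary feedback coefficient), not a rule anyone has calibrated:

```python
# A sketch, under assumed parameters, of the two rules discussed above.
# Both aim at a nominal-spending path growing 5% a year; the activist
# variant temporarily raises nominal-GDP growth while output is below
# its maximum sustainable path, tapering off as the gap closes.

def ngdp_growth_target(output_gap: float,
                       trend_growth: float = 0.05,
                       activism: float = 0.5) -> float:
    """Target growth rate of nominal GDP.

    output_gap -- fraction by which real output is below its maximum
                  sustainable path (0.04 means 4% below).
    activism   -- 0 gives the market-monetarist stable-growth rule;
                  positive values add temporary stimulus.
    """
    return trend_growth + activism * output_gap

# Market-monetarist rule: 5% nominal spending growth, gap or no gap.
print(f"stable rule: {ngdp_growth_target(0.04, activism=0.0):.2%}")  # 5.00%

# Activist rule: extra stimulus that tapers as the gap closes -- the
# fine-tuning problem is getting the taper right near full employment.
for gap in (0.04, 0.02, 0.0):
    print(f"gap {gap:.0%}: target NGDP growth {ngdp_growth_target(gap):.2%}")
```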

 

Hayek v. Rawls on Social Justice: Correcting the False Narrative

Matt Yglesias, citing an article (“John Rawls, Socialist?”) by Ed Quish in the Jacobin arguing that Rawls, in his later years, drifted from his welfare-state liberalism to democratic socialism, tweeted about it a little while ago.

I’m an admirer of, but no expert on, Rawls, so I won’t weigh in on where to pigeon-hole Rawls on the ideological spectrum. In general, I think such pigeon-holing is as likely to mislead as to clarify, because it tends to obscure the individuality of the thinker being pigeon-holed. Rawls was above all a Rawlsian, and to reduce his complex and nuanced philosophy to a simple catch-phrase like “socialism” or even “welfare-state liberalism” cannot possibly do his rich philosophical contributions justice (no pun intended).

A good way to illustrate both the complexity of Rawls’s philosophy and that of someone like F. A. Hayek, often regarded as standing on the opposite end of the philosophical spectrum from Rawls, is to quote from two passages of volume 2 of Law, Legislation and Liberty. Hayek entitled this volume The Mirage of Social Justice, and its main thesis is that the term “justice” is meaningful only in the context of the foreseen or foreseeable consequences of deliberate decisions taken by responsible individual agents. Social justice, because it refers to the outcomes of complex social processes that no one is deliberately aiming at, is not a meaningful concept.

Because Rawls argued in favor of the difference principle, which says that unequal outcomes are only justifiable insofar as they promote the absolute (though not the relative) well-being of the least well-off individuals in society, most libertarians, including famously Robert Nozick whose book Anarchy, State and Utopia was a kind of rejoinder to Rawls’s book A Theory of Justice, viewed Rawls as an ideological opponent.

Hayek, however, had a very different take on Rawls. At the end of his preface to volume 2, explaining why he had not discussed various recent philosophical contributions on the subject of social justice, Hayek wrote:

[A]fter careful consideration I have come to the conclusion that what I might have to say about John Rawls’ A theory of Justice would not assist in the pursuit of my immediate object because the differences between us seemed more verbal than substantial. Though the first impression of readers may be different, Rawls’ statement which I quote later in this volume (p. 100) seems to me to show that we agree on what is to me the essential point. Indeed, as I indicate in a note to that passage, it appears to me that Rawls has been widely misunderstood on this central issue. (pp. xii-xiii)

Here is what Hayek says about Rawls in the cited passage.

Before leaving this subject I want to point out once more that the recognition that in such combinations as “social”, “economic”, “distributive”, or “retributive” justice the term “justice” is wholly empty should not lead us to throw the baby out with the bath water. Not only as the basis of the legal rules of just conduct is the justice which the courts of justice administer exceedingly important; there unquestionably also exists a genuine problem of justice in connection with the deliberate design of political institutions, the problem to which Professor John Rawls has recently devoted an important book. The fact which I regret and regard as confusing is merely that in this connection he employs the term “social justice”. But I have no basic quarrel with an author who, before he proceeds to that problem, acknowledges that the task of selecting specific systems or distributions of desired things as just must be “abandoned as mistaken in principle and it is, in any case, not capable of a definite answer. Rather, the principles of justice define the crucial constraints which institutions and joint activities must satisfy if persons engaging in them are to have no complaints against them. If these constraints are satisfied, the resulting distribution, whatever it is, may be accepted as just (or at least not unjust).” This is more or less what I have been trying to argue in this chapter.

In the footnote at the end of the quotation, Hayek cites the source from which he takes the quotation and then continues:

John Rawls, “Constitutional Liberty and the Concept of Justice,” Nomos IV, Justice (New York, 1963), p. 102, where the passage quoted is preceded by the statement that “It is the system of institutions which has to be judged and judged from a general point of view.” I am not aware that Professor Rawls’ later more widely read work A Theory of Justice contains a comparatively clear statement of the main point, which may explain why this work seems often, but as it appears to me wrongly, to have been interpreted as lending support to socialist demands, e.g., by Daniel Bell, “On Meritocracy and Equality”, Public Interest, Autumn 1972, p. 72, who describes Rawls’ theory as “the most comprehensive effort in modern philosophy to justify a socialist ethic.”

Hirshleifer on the Private and Social Value of Information

I have written a number of posts (here, here, here, and here) over the past few years citing an article by one of my favorite UCLA luminaries, Jack Hirshleifer, of the fabled UCLA economics department of the 1950s, 1960s, 1970s and 1980s. Like everything Hirshleifer wrote, the article, “The Private and Social Value of Information and the Reward to Inventive Activity,” published in 1971 in the American Economic Review, is deeply insightful, carefully reasoned, and lucidly explained, reflecting the author’s comprehensive mastery of the whole body of neoclassical microeconomic theory.

Hirshleifer’s article grew out of a whole literature inspired by two of Hayek’s most important articles, “Economics and Knowledge” (1937) and “The Use of Knowledge in Society” (1945). Both articles were concerned with the fact that, contrary to the assumptions of textbook treatments, economic agents don’t have complete information about all the characteristics of the goods being traded or about the prices at which those goods are available. Hayek was aiming to show that markets are characteristically capable of transmitting information held by some agents, in a condensed form, to make it usable by other agents. That role is performed by prices. It is prices that provide both information and incentives to economic agents to formulate and tailor their plans, and, if necessary, to readjust those plans in response to changed conditions. Agents need not know what those underlying changes are; they need only observe, and act on, the price changes that result from them.

Hayek’s argument, though profoundly insightful, was not totally convincing in demonstrating the superiority of the pure “free market,” for three reasons.

First, economic agents base decisions, as Hayek himself was among the first to understand, not just on actual current prices, but also on expected future prices. Although traders sometimes – though usually they don’t – know what the current price of something is, one can only guess – not know – what the price of that thing will be in the future. So the work of providing the information individuals need to make good economic decisions cannot be accomplished – even in principle – just by the adjustment of prices in current markets. People also need enough information to make good guesses – to form correct expectations – about future prices.

Second, economic agents don’t automatically know all prices. The assumption that every trader knows exactly what prices are before executing plans to buy and sell is true, if at all, only in highly organized markets where prices are publicly posted and traders can always buy and sell at the posted price. In most other markets, transactors must devote time and effort to find out what prices are and to find out the characteristics of the goods that they are interested in buying. It takes effort or search or advertising or some other, more or less costly, discovery method for economic agents to find out what current prices are and what characteristics those goods have. If agents aren’t fully informed even about current prices, they don’t necessarily make good decisions.

Libertarians, free marketeers, and other Hayek acolytes often like to credit Hayek with having solved or having shown how “the market” solves “the knowledge problem,” a problem that Hayek definitively showed a central-planning regime to be incapable of solving. But the solution at best is only partial, and certainly not robust, because markets never transmit all available relevant information. That’s because markets transmit only information about costs and valuations known to private individuals, but there is a lot of information about public or social valuations and costs that is not known to private individuals and rarely if ever gets fed into, or is transmitted by, the price system — valuations of public goods and the social costs of pollution for example.

Third, a lot of information is not obtained or transmitted unless it is acquired, and acquiring information is costly. Economic agents must search for relevant information about the goods and services that they are interested in obtaining and about the prices at which those goods and services are available. Moreover, agents often engage in transactions with counterparties in which one side has an information advantage over the other. When traders have an information advantage over their counterparties, the opportunity for one party to take advantage of the inferior information of the counterparty may make it impossible for the two parties to reach mutually acceptable terms, because a party who realizes that the counterparty has an information advantage may be unwilling to risk being taken advantage of. Sometimes these problems can be surmounted by creative contractual arrangements or legal interventions, but often they can’t.

To recognize the limitations of Hayek’s insight is not to minimize its importance, either in its own right or as a stimulus to further research. Important early contributions (all published between 1961 and 1970) by Stigler (“The Economics of Information”), Ozga (“Imperfect Markets through Lack of Knowledge”), Arrow (“Economic Welfare and the Allocation of Resources for Invention”), Demsetz (“Information and Efficiency: Another Viewpoint”) and Alchian (“Information Costs, Pricing, and Resource Unemployment”) all analyzed the problem of incomplete and limited information, the incentives for acquiring information, the institutions and market arrangements that arise to cope with limited information, and the implications of these limitations and incentives for economic efficiency. They can all be traced directly or indirectly to Hayek’s early contributions. Among the important results that seemed to follow from these early papers was that, because those discovering or creating new knowledge cannot claim full property rights over it through patents or other forms of intellectual property, and so cannot appropriate the net benefits accruing from it, the incentive to create new knowledge is less than optimal.

Here is where Hirshleifer’s paper enters the picture. Is more information always better? It would certainly seem that more of any good is better than less. But how valuable is new information? And are the incentives to create or discover new information aligned with the value of that information? Hayek’s discussion implicitly assumed that the amount of information in existence is a given stock, at least in the aggregate. How can the information that already exists be optimally used? Markets help us make use of the information that already exists. But the problem addressed by Hirshleifer was whether the incentives to discover and create new information call forth the optimal investment of time, effort and resources to make new discoveries and create new knowledge.

Instead of focusing on the incentives to search for information about existing opportunities, Hirshleifer analyzed the incentives to learn about uncertain resource endowments and about the productivity of those resources.

This paper deals with an entirely different aspect of the economics of information. We here revert to the textbook assumption that markets are perfect and costless. The individual is always fully acquainted with the supply-demand offers of all potential traders, and an equilibrium integrating all individuals’ supply-demand offers is attained instantaneously. Individuals are unsure only about the size of their own commodity endowments and/or about the returns attainable from their own productive investments. They are subject to technological uncertainty rather than market uncertainty.

Technological uncertainty brings immediately to mind the economics of research and invention. The traditional position has been that the excess of the social over the private value of new technological knowledge leads to underinvestment in inventive activity. The main reason is that information, viewed as a product, is only imperfectly appropriable by its discoverer. But this paper will show that there is a hitherto unrecognized force operating in the opposite direction. What has been scarcely appreciated in the literature, if recognized at all, is the distributive aspect of access to superior information. It will be seen below how this advantage provides a motivation for the private acquisition and dissemination of technological information that is quite apart from – and may even exist in the absence of – any social usefulness of that information. (p. 561)

The key insight motivating Hirshleifer was that privately held knowledge enables its possessor to anticipate future price movements once the privately held information becomes public. If you can anticipate a future price movement that no one else can, you can confidently trade with others who don’t know what you know, and then wait for the profit to roll in when the less well-informed acquire the knowledge that you have. By assumption, the newly obtained knowledge doesn’t affect the quantity of goods available to be traded, so in a pure-exchange model newly discovered knowledge provides no net social benefit; it only enables better-informed traders to anticipate price movements that less well-informed traders don’t see coming. Any gains from new knowledge are exactly matched by the losses suffered by those without that knowledge. Hirshleifer called the kind of knowledge that enables one to anticipate future price movements “foreknowledge,” which he distinguished from actual discovery.

The type of information represented by foreknowledge is exemplified by ability to successfully predict tomorrow’s (or next year’s) weather. Here we have a stochastic situation: with particular probabilities the future weather might be hot or cold, rainy or dry, etc. But whatever does actually occur will, in due time, be evident to all: the only aspect of information that may be of advantage is prior knowledge as to what will happen. Discovery, in contrast, is correct recognition of something that is hidden from view. Examples include the determination of the properties of materials, of physical laws, even of mathematical attributes (e.g., the millionth digit in the decimal expansion of “π”). The essential point is that in such cases nature will not automatically reveal the information; only human action can extract it. (p. 562)

Hirshleifer’s result, though derived in the context of a pure-exchange economy, is very powerful, implying that any expenditure of resources devoted to finding out new information that enables the first possessor of the information to predict price changes and reap profits from trading is unambiguously wasteful by reducing total consumption of the community.

[T]he community as a whole obtains no benefit, under pure exchange, from either the acquisition or the dissemination (by resale or otherwise) of private foreknowledge. . . .

[T]he expenditure of real resources for the production of technological information is socially wasteful in pure exchange, as the expenditure of resources for an increase in the quantity of money by mining gold is wasteful, and for essentially the same reason. Just as a smaller quantity of money serves monetary functions as well as a larger, the price level adjusting correspondingly, so a larger amount of foreknowledge serves no social purpose under pure exchange that the smaller amount did not. (pp. 565-66)
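The zero-sum logic of the pure-exchange case can be reduced to a toy numerical example. The figures below are my own assumptions, not Hirshleifer's:

```python
# A toy example, with assumed figures, of foreknowledge in pure exchange:
# the informed trader's gain is exactly the uninformed trader's loss.

price_today = 1.00       # current price of wheat
price_tomorrow = 1.50    # price once the news becomes public
units_traded = 100       # wheat the informed trader buys today

informed_gain = (price_tomorrow - price_today) * units_traded
uninformed_loss = (price_tomorrow - price_today) * units_traded

# The endowment of wheat is unchanged by the trade, so the transfer nets
# to zero: the foreknowledge has private value but no social value.
print(informed_gain - uninformed_loss)        # 0.0

# And if real resources were spent acquiring the foreknowledge, the
# community as a whole is worse off by exactly that expenditure.
resources_spent_on_information = 20.0
print(0.0 - resources_spent_on_information)   # -20.0
```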

Relaxing the assumption that there is no production does not alter the conclusion. The kind of information in question could, in principle, lead to more efficient production decisions, increasing the output of the goods whose prices, because of the new information, rise sooner than they otherwise would have. But if the foreknowledge is privately obtained, the private incentive is to use that information by trading with another, less-well-informed, trader at a price to which the other trader would not agree were he not at an information disadvantage. The private incentive, in other words, is not to use foreknowledge to alter production decisions, but to use it to trade with, and profit from, those with inferior knowledge.

[A]s under the regime of pure exchange, private foreknowledge makes possible large private profit without leading to socially useful activity. The individual would have just as much incentive as under pure exchange (even more, in fact) to expend real resources in generating socially useless private information. (p. 567)

If the foreknowledge is publicly available, there would be a change in production incentives to shift production toward more valuable products. However, the private gain if the information is kept private greatly exceeds the private value of the information if the information is public. Under some circumstances, private individuals may have an incentive to publicize their private information to cause the price increases in expectation of which they have taken speculative positions. But it is primarily the gain from foreseen price changes, not the gain from more efficient production decisions, that creates the incentive to discover foreknowledge.

The key factor underlying [these] results . . . is the distributive significance of private foreknowledge. When private information fails to lead to improved productive alignments (as must necessarily be the case in a world of pure exchange, and also in a regime of production unless there is dissemination effected in the interest of speculation or resale), it is evident that the individual’s source of gain can only be at the expense of his fellows. But even where information is disseminated and does lead to improved productive commitments, the distributive transfer gain will surely be far greater than the relatively minor productive gain the individual might reap from the redirection of his own real investment commitments. (Id.)

Moreover, better-informed individuals – indeed, individuals who wrongly believe themselves to be better informed – will perceive it to be in their self-interest to expend resources to disseminate the information in the expectation that the ensuing price changes will redound to their profit. But the expected private gain from disseminating information far exceeds the social benefit of the resulting price changes; that social benefit corresponds to an improved allocation of resources, and the improvement will be very small compared to the expected private profit from anticipating the price change and trading with those who don’t anticipate it.

Hirshleifer then turns from the value of foreknowledge to the value of discovering new information about the world or about nature that makes a contribution to total social output by causing a shift of resources to more productive uses. Inasmuch as the discovery of new information about the world reveals previously unknown productive opportunities, it might be thought that the private incentive to devote resources to the discovery of technological information about productive opportunities generates substantial social benefits. But Hirshleifer shows that here, too, because the private discovery of information about the world creates private opportunities for gain by trading based on the consequent knowledge of future price changes, the private incentive to discover technological information always exceeds the social value of the discovery.

We need only consider the more general regime of production and exchange. Given private, prior, and sure information of event A [a state of the world in which a previously unknown natural relationship has been shown to exist] the individual in a world of perfect markets would not adapt his productive decisions if he were sure the information would remain private until after the close of trading. (p. 570)

Hirshleifer is saying that the discovery of a previously unknown property of the world can lead to an increase in total social output only by causing productive resources to be reallocated, but that reallocation can occur only if and when the new information is disclosed. So if someone discovers a previously unknown property of the world, the discoverer can profit from that information by anticipating the price effect likely to result once the information is disseminated and then making a speculative transaction based on the expectation of a price change. A corollary of this argument is that individuals who think that they are better informed about the world will take speculative positions based on their beliefs, but insofar as their investments in discovering properties of the world lead them to incorrect beliefs, their investments in information gathering and discovery will not be rewarded. The net social return to information gathering and discovery is thus almost certainly negative.

The obvious way of acquiring the private information in question is, of course, by performing technological research. By a now familiar argument we can show once again that the distributive advantage of private information provides an incentive for information-generating activity that may quite possibly be in excess of the social value of the information. (Id.)

Finally, Hirshleifer turns to the implications of his analysis of the private and social value of information for patent policy.

The issues involved may be clarified by distinguishing the “technological” and “pecuniary” effects of invention. The technological effects are the improvements in production functions . . . consequent upon the new idea. The pecuniary effects are the wealth shifts due to the price revaluations that take place upon release and/or utilization of the information. The pecuniary effects are purely redistributive.

For concreteness, we can think in terms of a simple cost-reducing innovation. The technological benefit to society is, roughly, the integrated area between the old and new marginal-cost curves for the preinvention level of output plus, for any additional output, the area between the demand curve and the new marginal-cost curve. The holder of a (perpetual) patent could ideally extract, via a perfectly discriminatory fee policy, this entire technological benefit. Equivalence between the social and private benefits of innovation would thus induce the optimal amount of private inventive activity. Presumably it is reasoning of this sort that underlies the economic case for patent protection. (p. 571)

Here Hirshleifer is uncritically restating the traditional analysis of the social benefit from new technological knowledge. But the analysis overstates the benefit by assuming, incorrectly, that without patent protection the discovery would never be made. If the discovery would have been made even without patent protection, then the technological benefit attributable to the patent is only the area indicated over a limited time horizon, so a perpetual patent enabling its holder to extract in perpetuity all the additional consumer and producer surplus flowing from the invention would overcompensate the patent holder.
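A back-of-the-envelope calculation, using assumed numbers of my own, shows how large the overcompensation can be:

```python
# Assumed numbers: an invention yields a technological benefit of 100 a
# year, the discount rate is 5%, and the patent accelerates a discovery
# that would otherwise have arrived on its own in five years.

annual_benefit = 100.0
discount_rate = 0.05
years_accelerated = 5

# What a perpetual, perfectly discriminating patent holder could extract:
pv_perpetual = annual_benefit / discount_rate

# What the patent actually adds if the discovery would have been made
# anyway after five years: only the first five years of benefit.
pv_attributable = sum(annual_benefit / (1 + discount_rate) ** t
                      for t in range(1, years_accelerated + 1))

print(f"perpetual-patent reward: {pv_perpetual:,.0f}")     # 2,000
print(f"benefit the patent adds: {pv_attributable:,.0f}")  # ~433
```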

Nor does Hirshleifer mention the tendency of patents to increase the costs of invention, research and development, owing to the royalties subsequent inventors must pay existing patent holders for infringing inventions, even when those inventions were, or would have been, discovered with no knowledge of the patented invention. While rewarding some inventions and inventors, patent protection penalizes or blocks subsequent inventions and inventors. Inventions are outputs, but they are also inputs. If patents make the use of past inventions more costly to new inventors, it is not clear that the net result will be an increase in the rate of invention.

Moreover, the knowledge that an existing patent, or a patent that issues before a new invention is introduced, may block or penalize an infringing new invention may in some cases cause overinvestment in research, as inventors race to gain the sole right to an invention in order to avoid being excluded while gaining the right to exclude others.

Hirshleifer does mention some reasons why maximally rewarding patent holders for their inventions may lead to suboptimal results, but he fails to acknowledge that the conventional assessment of the social gain from new invention is substantially overstated, or that patents may well have a negative effect on inventive activity in fields in which patent holders have gained the right to exclude potentially infringing inventions, even when the infringing inventions would have been made without the knowledge publicly disclosed by the patent holders in their patent applications.

On the other side are the recognized disadvantages of patents: the social costs of the administrative-judicial process, the possible anti-competitive impact, and restriction of output due to the marginal burden of patent fees. As a second-best kind of judgment, some degree of patent protection has seemed a reasonable compromise among the objectives sought.

Of course, that judgment about the social utility of patents is not universally accepted, and authorities from Arnold Plant to Fritz Machlup and, most recently, Michele Boldrin and David Levine have been extremely skeptical of the arguments in favor of patent protection, copyright, and other forms of intellectual property.

However, Hirshleifer advances a different counter-argument against patent protection based on his distinction between the private and social gains derived from information.

But recognition of the unique position of the innovator for forecasting and consequently capturing portions of the pecuniary effects – the wealth transfers due to price revaluation – may put matters in a different light. The “ideal” case of the perfectly discriminating patent holder earning the entire technological benefit is no longer so ideal. (pp. 571-72)

Of course, as I have pointed out, the ‘“ideal” case’ never was ideal.

For the same inventor is in a position to reap speculative profits, too; counting these as well, he would clearly be overcompensated. (p. 572)

Indeed!

Hirshleifer goes on to recognize that the capacity to profit from speculative activity may be beyond the capacity or the ken of many inventors.

Given the inconceivably vast number of potential contingencies and the costs of establishing markets, the prospective speculator will find it costly or even impossible to purchase neutrality from “irrelevant” risks. Eli Whitney [inventor of the cotton gin who obtained one of the first US patents for his invention in 1794] could not be sure that his gin would make cotton prices fall: while a considerable force would clearly be acting in that direction, a multitude of other contingencies might also have possibly affected the price of cotton. Such “uninsurable” risks gravely limit the speculation feasible with any degree of prudence. (Id.)

Hirshleifer concludes that there is no compelling case either for or against patent protection, because the standard discussion of the case for patent protection has not taken into consideration the potential profit that inventors can gain by speculating on the anticipated price effects of their patents. The argument that inventors are unlikely to be adept at making such speculative plays is a serious one, but we have also seen the rise of patent trolls that buy up patent rights from inventors and then file lawsuits against suspected infringers. In a world without patent protection, it is entirely possible that patent trolls would reinvent themselves as patent speculators, buying up information about new inventions from inventors and using that information to engage in speculative activity. By acquiring a portfolio of inventions, such invention speculators could pool the risks of speculation over their entire portfolio, enabling them to speculate more effectively than any single inventor could on his own invention (a toy illustration of this pooling follows the quotation below). Hirshleifer concludes as follows:

Even though practical considerations limit the effective scale and consequent impact of speculation and/or resale [but perhaps not as much as Hirshleifer thought], the gains thus achievable eliminate any a priori anticipation of underinvestment in the generation of new technological knowledge. (p. 574)
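A toy simulation, with payoffs assumed by me purely for illustration, shows why pooling matters: the per-invention risk of a portfolio of independent speculative bets falls roughly as one over the square root of the number of inventions, while the expected gain is unchanged.

```python
# A toy simulation with assumed payoffs: each invention offers a small
# expected speculative gain swamped by noise, standing in for the
# "uninsurable" contingencies Hirshleifer describes. Pooling many such
# independent bets leaves the mean gain intact while shrinking the risk.
import random
import statistics

random.seed(0)

def speculative_payoff() -> float:
    # Expected gain of 10 buried in noise with standard deviation 100.
    return 10 + random.gauss(0, 100)

def per_invention_outcome(n_inventions: int, trials: int = 5_000):
    averages = [
        statistics.fmean(speculative_payoff() for _ in range(n_inventions))
        for _ in range(trials)
    ]
    return statistics.fmean(averages), statistics.stdev(averages)

for n in (1, 25, 100):
    mean, sd = per_invention_outcome(n)
    print(f"{n:>3} inventions: mean payoff {mean:6.1f}, risk (sd) {sd:5.1f}")
# risk per invention falls from ~100 to ~20 to ~10 as the portfolio grows
```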

And I reiterate one last time that Hirshleifer arrived at his non-endorsement of patent protection even while accepting the overstated estimate of the social value of inventions and neglecting the tendency of patents to increase the cost of inventive activity.

My Paper on Hayek, Hicks and Radner and 3 Equilibrium Concepts Now Available on SSRN

A little over a year ago, I posted a series of posts (here, here, here, here, and here) that came together as a paper (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational-Expectations”) that I presented at the History of Economics Society in Toronto in June 2017. After further revisions I posted the introductory section and the concluding section in April before presenting the paper at the Colloquium on Market Institutions and Economic Processes at NYU.

I have since been making further revisions and tweaks to the paper as well as adding the names of Hicks and Radner to the title, and I have just posted the current version on SSRN where it is available for download.

Here is the abstract:

Along with Erik Lindahl and Gunnar Myrdal, F. A. Hayek was among the first to realize that the necessary conditions for intertemporal, as opposed to stationary, equilibrium could be expressed in terms of correct expectations of future prices, often referred to as perfect foresight. Subsequently, J. R. Hicks further elaborated the concept of intertemporal equilibrium in Value and Capital in which he also developed the related concept of a temporary equilibrium in which future prices are not correctly foreseen. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of his original 1928 paper on intertemporal equilibrium. As a preliminary, the paper explains the significance of Hayek’s 1937 distinction between correct expectations and perfect foresight. In non-chronological order, the three developments of interest are: (1) Roy Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; (2) Hicks’s temporary equilibrium model, and an important extension of that model by C. J. Bliss; (3) the Muth rational-expectations model and its illegitimate extension by Lucas from its original microeconomic application into macroeconomics. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model would seem to have been the natural development of Hayek’s approach. The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and ignores the fundamental Hayekian insights about the meaning of intertemporal equilibrium.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

