Archive for the 'Milton Friedman' Category

Monetarism v. Hawtrey and Cassel

The following is an updated and revised version of the penultimate section of my paper with Ron Batchelder, “Pre-Keynesian Theories of the Great Depression: What Ever Happened to Hawtrey and Cassel?” which I am now preparing for publication. The previous version is available on SSRN.

In the 1950s and early 1960s, empirical studies of the effects of money and monetary policy by Milton Friedman, his students and followers, rehabilitated the idea that monetary policy had significant macroeconomic effects. Most importantly, in research with Anna Schwartz, Friedman advanced the seemingly remarkable claim that the chief cause of the Great Depression had been a series of policy mistakes by the Federal Reserve. Although Hawtrey and Cassel had also implicated the Federal Reserve in their explanation of the Great Depression, they went unmentioned by Friedman and Schwartz and by other Monetarists.[1]

The chief difference between the Monetarist and the Hawtrey-Cassel explanations of the Great Depression is that Monetarists posited a monetary shock (bank failures) specific to the U.S. as the primary, if not sole, cause of the Depression, while Hawtrey and Cassel considered the Depression a global phenomenon reflecting a rapidly increasing international demand for gold, bank failures being merely an incidental and aggravating symptom, specific to the U.S., of a more general monetary disorder.

Arguing that the Great Depression originated in the United States following a typical business-cycle downturn, Friedman and Schwartz (1963) attributed the depth of the downturn not to the unexplained initial shock, but to the contraction of the U.S. money stock caused by the bank failures. Dismissing any causal role for the gold standard in the Depression, Friedman and Schwartz (359-60) acknowledged only its role in propagating, via the price-specie-flow mechanism (PSFM), an exogenous, policy-driven contraction of the U.S. money stock to other gold-standard countries. According to Friedman and Schwartz, the monetary contraction was the cause, and deflation the effect.

But the causation posited by Friedman and Schwartz is the exact opposite of the true causation. Under the gold standard, deflation (i.e., gold appreciation) was the cause and the decline in the quantity of money the effect. Deflation in an international gold standard is not a local phenomenon originating in any single country; it occurs simultaneously in all gold standard countries.

To be sure, the banking collapse in the U.S. exacerbated the catastrophe. But the collapse was the localized effect of a more general cause: deflation. Without deflation, neither the unexplained 1929 downturn nor the subsequent banking collapse would have occurred. Nor was the investment boom in the most advanced and most productive economy in the world unsustainable, as the Austrian malinvestment hypothesis posits with no evidence of unsustainability other than the subsequent economic collapse.

Friedman and Schwartz based their assertion that the monetary disturbance that caused the Great Depression occurred in the U.S. on the observation that, from 1929 to 1931, gold flowed into, not out of, the U.S. Had the disturbance occurred elsewhere, they argued, gold would have flowed out of, not into, the U.S.

Table 1 shows the half-year changes in U.S., French, and world gold reserves starting in June 1928, when the French monetary law re-establishing the gold standard was enacted.

TABLE 1: Gold Reserves in the US, France, and the World, June 1928-December 1931 (millions of dollars)

Date | World Reserves | US Reserves | US Share (%) | French Reserves | French Share (%)
June 1928 | 9,749 | 3,732 | 38.3 | 1,136 | 11.7
Dec. 1928 | 10,057 | 3,746 | 37.2 | 1,254 | 12.4
2nd half 1928 change | 312 | 14 | -1.1 | 118 | 0.7
June 1929 | 10,126 | 3,956 | 39.1 | 1,436 | 14.2
1st half 1929 change | 69 | 210 | 1.9 | 182 | 1.8
Dec. 1929 | 10,336 | 3,900 | 37.7 | 1,633 | 15.8
2nd half 1929 change | 210 | -56 | -1.4 | 197 | 1.6
June 1930 | 10,671 | 4,178 | 39.2 | 1,727 | 16.2
1st half 1930 change | 335 | 278 | 1.5 | 94 | 0.4
Dec. 1930 | 10,944 | 4,225 | 38.7 | 2,100 | 19.2
2nd half 1930 change | 273 | 47 | -0.5 | 373 | 3.0
June 1931 | 11,264 | 4,593 | 40.8 | 2,212 | 19.6
1st half 1931 change | 320 | 368 | 2.1 | 112 | 0.4
Dec. 1931 | 11,323 | 4,051 | 35.8 | 2,699 | 23.8
2nd half 1931 change | 59 | -542 | -5.0 | 487 | 4.2
June 1928-Dec. 1931 change | 1,574 | 319 | -2.5 | 1,563 | 12.1
Note: Share figures in the change rows are changes in percentage points.
Source: H. C. Johnson, Gold, France and the Great Depression

In the three-and-a-half years from June 1928 (when gold convertibility of the franc was restored) to December 1931, gold inflows into France exceeded gold inflows into the United States. The total gold inflow into France over that period was $1.563 billion, compared to only $319 million into the United States.

However, much of the difference in the totals stems from the gold outflow from the U.S. into France in the second half of 1931, reflecting fears of a possible U.S. devaluation or suspension of convertibility after Great Britain and other countries suspended the gold standard in September 1931 (Hamilton 2012). From June 1928 through June 1931, the total gold inflow into the U.S. was $861 million and the total gold inflow into France was $1.076 billion, the U.S. share of total reserves increasing from 38.3 percent to 40.8 percent, while the French share increased from 11.7 percent to 19.6 percent.[2]

In the first half of 1931, following the first two waves of U.S. bank failures, the increase in U.S. gold reserves exceeded the increase in world gold reserves. The shift by the public from holding bank deposits to holding currency increased required gold reserves, an increase reflected in the gold reserves held by the U.S. The increased U.S. demand for gold likely exacerbated the deflationary pressures affecting all gold-standard countries, perhaps contributing to the failure of the Credit-Anstalt in May 1931, which intensified the European crisis that forced Britain off the gold standard in September.

Between June 1928 and June 1931, the combined increase in U.S. and French gold reserves was $1.937 billion, compared to an increase of only $1.515 billion in total world reserves, indicating that the U.S. and France were drawing reserves either from other central banks or from privately held gold stocks. Clearly, both the U.S. and France were exerting powerful deflationary pressure on the world economy, before and during the downward spiral of the Great Depression.[3]
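The gold-flow arithmetic in the preceding paragraphs can be checked directly against Table 1. The following minimal calculation (figures in millions of dollars, taken from the table; the variable and function names are mine) reproduces the inflows and share comparisons cited above:

```python
# Gold reserves in millions of dollars, from Table 1
# (H. C. Johnson, Gold, France and the Great Depression).
reserves = {
    "Jun 1928": {"world": 9_749,  "us": 3_732, "france": 1_136},
    "Jun 1931": {"world": 11_264, "us": 4_593, "france": 2_212},
    "Dec 1931": {"world": 11_323, "us": 4_051, "france": 2_699},
}

def inflow(start, end, country):
    """Change in gold reserves between two dates, in millions of dollars."""
    return reserves[end][country] - reserves[start][country]

# Full period, June 1928 - December 1931
print(inflow("Jun 1928", "Dec 1931", "us"))      # 319  (i.e., $319 million)
print(inflow("Jun 1928", "Dec 1931", "france"))  # 1563 (i.e., $1.563 billion)

# Excluding the gold outflow from the US in the second half of 1931
us_net = inflow("Jun 1928", "Jun 1931", "us")        # 861
fr_net = inflow("Jun 1928", "Jun 1931", "france")    # 1076
world_net = inflow("Jun 1928", "Jun 1931", "world")  # 1515
print(us_net + fr_net, "vs a world increase of", world_net)  # 1937 vs 1515
```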

Deflationary forces were operating directly on prices before the quantity of money adjusted to the decline in prices. In some countries the adjustment of the quantity of money was relatively smooth; in the U.S. it was exceptionally difficult. But not even in the U.S. was the adjustment the source of the disturbance. Hawtrey and Cassel understood that; Friedman did not.

In explaining the sources of his interest in monetary theory and the role of monetary policy, Friedman (1970) pointedly distinguished between the monetary tradition from which his work emerged and the dominant tradition in London circa 1930, citing Robbins’s (1934) Austrian-deflationist book on the Great Depression, while ignoring Hawtrey and Cassel. Friedman linked his work to the Chicago oral tradition, citing a lecture by Jacob Viner (1933) as foreshadowing his own explanation of the Great Depression, and attributing the wider profession’s loss of interest in monetary theory and policy to the deflationism of LSE monetary economists. Friedman went on to suggest that the anti-deflationism of the Chicago monetary tradition immunized it against the broader reaction that the Austro-London pro-deflation bias had provoked against monetary theory and policy.

Though perhaps superficially plausible, Friedman’s argument ignores, as he did throughout a half-century of scholarship and research, the contributions of Hawtrey and Cassel and especially their explanation of the Great Depression. Unfortunately, Friedman’s outsized influence on economists trained after the Keynesian Revolution distracted their attention from contributions outside the crude Keynesian-Monetarist dichotomy that shaped his approach to monetary economics.

Eclectics like Hawtrey and Cassel were neither natural sources of authority nor obvious ideological foils for Friedman to focus upon. Already forgotten, providing neither convenient targets nor ideological support, Hawtrey and Cassel could be easily and conveniently ignored.


[1] Meltzer (2001) did mention Hawtrey, but the reference was perfunctory and did not address the substance of his and Cassel’s explanation of the Great Depression.

[2] By far the largest six-month increase in U.S. gold reserves came in the first half of 1931, following the two waves of bank failures at the end of 1930 and in March 1931, which caused a substantial shift from deposits to currency. That shift required an increase in gold reserves, owing to the higher ratio of required gold reserves against currency than against bank deposits.

[3] Fremling (1985) noted that, even during the 1929-31 period, the U.S. share of world gold reserves actually declined. However, her calculation includes the extraordinary outflow of gold from the U.S. in the second half of 1931. The U.S. share of global gold reserves rose from June 1928 to June 1931.

On the Price Specie Flow Mechanism

I have been working on a paper tentatively titled “The Smithian and Humean Traditions in Monetary Theory.” One section of the paper is on the price-specie-flow mechanism, about which I wrote last month in my previous post. This section develops the arguments of the previous post at greater length, drawing on a number of earlier posts that I’ve written about PSFM (e.g., here and here); it offers more detailed criticisms of both PSFM and sterilization and provides some further historical evidence to support the theoretical arguments. I will be grateful for any comments and feedback.

The price-specie-flow mechanism (PSFM) received its still-classic exposition in an essay by Hume (1752), and it has remained a staple of the theory of international adjustment under the gold standard, or under any international system of fixed exchange rates. Regrettably, the tortured two-and-a-half-century intellectual history of PSFM provides no ground for optimism about the prospects for progress in what some are pleased to call, without irony, economic science.

PSFM describes how, under a gold standard, national price levels tend to be equalized, with deviations between the national price levels in any two countries inducing gold to be shipped from the country with higher prices to the one with lower prices until prices are equalized. Premised on a version of the quantity theory of money in which (1) the price level in each country on the gold standard is determined by the quantity of money in that country, and (2) money consists entirely in gold coin or bullion, Hume elegantly articulated a model of disturbance and equilibration after an exogenous change in the gold stock in one country.
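As a rough illustration of the adjustment process just described (this is not Hume’s own formulation, and the numbers, function name, and parameters are all made up for the example), the following sketch treats each country’s price level as proportional to its gold stock and ships gold from the higher-price to the lower-price country until price levels are equalized:

```python
# Illustrative two-country sketch of the Humean PSFM adjustment described
# above: money consists entirely of gold, each price level is proportional
# to the local gold stock, and gold is shipped from the high-price to the
# low-price country until prices converge.

def psfm_adjustment(gold_a, gold_b, k_a=1.0, k_b=1.0, shipment=1.0, tol=0.01):
    """Iterate gold shipments until the two national price levels converge."""
    history = []
    for _ in range(10_000):
        p_a, p_b = k_a * gold_a, k_b * gold_b   # crude quantity theory: P proportional to M
        history.append((gold_a, gold_b, p_a, p_b))
        if abs(p_a - p_b) < tol:
            break
        if p_a > p_b:                            # gold flows from high-price to low-price country
            gold_a -= shipment
            gold_b += shipment
        else:
            gold_a += shipment
            gold_b -= shipment
    return history

# Example: an exogenous increase in country A's gold stock is eliminated by
# gold shipments that equalize the two price levels.
steps = psfm_adjustment(gold_a=200.0, gold_b=100.0)
print(f"converged after {len(steps)} steps; final stocks "
      f"A={steps[-1][0]:.0f}, B={steps[-1][1]:.0f}")
```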

Viewing banks as inflationary engines of financial disorder, Hume disregarded banks and their convertible monetary liabilities in his account of PSFM, leaving to others the task of describing the international adjustment process under a gold standard with fractional-reserve banking. Devising an institutional framework for a system of fractional-reserve banking within which PSFM could operate proved to be problematic and ultimately unsuccessful.

For three-quarters of a century, PSFM served a purely theoretical function. During the Bullionist debates of the first two decades of the nineteenth century, triggered by the suspension of the convertibility of the pound sterling into gold in 1797, PSFM served as a theoretical benchmark, not a guide for policy, it being generally assumed that, when convertibility was resumed, international monetary equilibrium would be restored automatically.

However, the 1821 resumption was followed by severe and recurring monetary disorders, leading some economists, who formed what became known as the Currency School, to view PSFM as a normative criterion for ensuring smooth adjustment to international gold flows. That criterion, the Currency Principle, stated that the total currency in circulation in Britain should increase or decrease by exactly as much as the amount of gold flowing into or out of Britain.[1]

The Currency Principle was codified by the Bank Charter Act of 1844. To mimic the Humean mechanism, the Act restricted, but did not suppress, the note issues of existing banks of issue. Note-issuing banks in England and Wales were allowed to continue issuing notes at current, but no higher, levels without holding equivalent gold reserves; Scottish and Irish note-issuing banks were allowed to continue issuing notes, but could increase their note issues only if the increase was matched by increased holdings of gold or government debt. Beyond those limits, the note issue in England and Wales could increase only if gold was exchanged for Bank of England notes, so that a 100-percent marginal gold reserve requirement was imposed on additional banknotes.
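In symbols, the marginal rule the Act imposed can be stated compactly (a stylized sketch: $N$ is the note issue, $G$ the gold held against it, and $F$ stands for whatever notes were permitted without gold backing under the Act):

$$ N \;\le\; F + G, \qquad\text{so that, at the margin,}\qquad \Delta N \;=\; \Delta G. $$

Any addition to the note issue beyond $F$ thus required an equal addition to gold holdings, reproducing the one-for-one link between gold and money assumed in Hume’s version of PSFM.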

Opposition to the Bank Charter Act was led by the Banking School, notably John Fullarton and Thomas Tooke. Rejecting the Humean quantity-theoretic underpinnings of the Currency School, the Banking School regarded the quantitative limits of the Bank Charter Act as both unnecessary and counterproductive, because banks, obligated to redeem their liabilities directly or indirectly in gold, issue liabilities only insofar as they expect those liabilities to be willingly held by the public or, if not, are capable of redeeming any liabilities no longer willingly held. Rather than the Humean view that banks issue banknotes or create deposits without constraint, the Banking School held Smith’s view that banks issue money in a form more convenient to hold and to transact with than metallic money, so that bank money allows an equivalent amount of gold to be shifted from monetary to real (non-monetary) uses, providing a net social savings. For a small open economy, the diversion (and likely export) of gold bullion from monetary to non-monetary uses has a negligible effect on prices (which are internationally, not locally, determined).

The quarter century following enactment of the Bank Charter Act showed that the Act had not eliminated monetary disturbances: the government was compelled to suspend the Act in 1847, 1857 and 1866 to prevent incipient crises from causing financial collapse. Indeed, it was precisely the fear that liquidity might not be forthcoming that precipitated the increased demands for liquidity that the Act made it impossible to accommodate. In each case, suspending the Act was sufficient to end the crisis with limited intervention by the Bank.

It may seem surprising, but the disappointing results of the Bank Charter Act provided little vindication for the Banking School. Those results led only to a partial, uneasy, and not entirely coherent accommodation between PSFM doctrine and the reality of a monetary system in which the money stock consists mostly of banknotes and bank deposits issued by fractional-reserve banks. But despite the failure of the Bank Charter Act, PSFM achieved almost canonical status, continuing, albeit with some notable exceptions, to serve as the textbook model of the gold standard.

The requirement that gold flows induce equal changes in the quantity of money in the country into (or from) which gold is flowing was replaced by an admonition that gold flows lead to “appropriate” changes in the central-bank discount rate or some alternative monetary instrument, so as to cause the quantity of money to change in the same direction as the gold flow. While such vague maxims, sometimes described as “the rules of the game,” gave only directional guidance about how to respond to changes in gold reserves, their hortatory character, and their avoidance of quantitative guidance, allowed monetary authorities the latitude to avoid the self-inflicted crises that had resulted from the quantitative limits of the Bank Charter Act.

Nevertheless, the myth of vague “rules” relating the quantity of money in a country to changes in gold reserves, whose observance ensured the smooth functioning of the international gold standard before its collapse at the start of World War I, enshrined PSFM as the theoretical paradigm for international monetary adjustment under the gold standard.

That paradigm was misconceived in four ways that can be briefly summarized.

  • Contrary to PSFM, changes in the quantity of money in a gold-standard country cannot change local prices proportionately, because prices of tradable goods in that country are constrained by arbitrage to equal the prices of those goods in other countries.
  • Contrary to PSFM, changes in local gold reserves are not necessarily caused either by non-monetary disturbances such as shifts in the terms of trade between countries or by local monetary disturbances (e.g. overissue by local banks) that must be reversed or counteracted by central-bank policy.
  • Contrary to PSFM, changes in the national price levels of gold-standard countries were uncorrelated with gold flows, and changes in national price levels across countries were positively, not negatively, correlated with one another.
  • Local banks and monetary authorities have their own demands for gold reserves, whether exercised by choice (i.e., independently of any legal requirement) or imposed by law (i.e., by a legal requirement to hold gold reserves equal to some fraction of the banknotes issued by banks or the monetary authority). Changes in gold reserves may therefore be caused by changes in the demands for gold of local banks and monetary authorities in one or more countries.

Many of the misconceptions underlying PSFM were identified by Fullarton in his refutation of the Currency School. In articulating the classical Law of Reflux, he established the logical independence of the quantity of convertible money in a country from the quantity of gold reserves held by the monetary authority. The gold reserves held by individual banks, or their deposits with the Bank of England, are not the raw material from which banks create money, either banknotes or deposits. Rather, it is their creation of banknotes or deposits when extending credit to customers that generates a derived demand to hold liquid assets (i.e., gold) to allow them to accommodate the demands of customers and other banks to redeem banknotes and deposits. Causality runs from the creation of banknotes and deposits to the holding of reserves, not vice versa.

The misconceptions inherent in PSFM, and the resulting misunderstanding of gold flows under the gold standard, led to a further misconception known as sterilization: the idea that central banks, violating the obligations imposed by “the rules of the game,” do not allow local money stocks to change as their gold holdings change, or deliberately prevent them from doing so. The underlying misconception is the presumption that gold inflows must necessarily cause increases in local money stocks. The mechanisms causing local money stocks to change are entirely different from those causing gold flows. And insofar as those mechanisms are related, causality flows from the local money stock to gold reserves, not vice versa.

Gold flows also result when monetary authorities transform their own asset holdings into gold. Notable examples of such transformations occurred in the 1870s, when a number of countries switched from their de jure bimetallic (and de facto silver) standards to the gold standard, their monetary authorities converting silver holdings into gold and thereby driving the value of gold up and that of silver down. Similarly, but with more catastrophic consequences, the Bank of France, after France restored the gold standard in 1928, began converting its holdings of foreign-exchange reserves (financial claims on the United States or Britain, payable in gold) into gold. Following the French example, other countries rejoining the gold standard redeemed foreign exchange for gold, causing the gold appreciation and deflation that led to the Great Depression.

Rereading the memoirs of this splendid translation . . . has impressed me with important subtleties that I missed when I read the memoirs in a language not my own and in which I am far from completely fluent. Had I fully appreciated those subtleties when Anna Schwartz and I were writing our A Monetary History of the United States, we would likely have assessed responsibility for the international character of the Great Depression somewhat differently. We attributed responsibility for the initiation of a worldwide contraction to the United States and I would not alter that judgment now. However, we also remarked, “The international effects were severe and the transmission rapid, not only because the gold-exchange standard had rendered the international financial system more vulnerable to disturbances, but also because the United States did not follow gold-standard rules.” Were I writing that sentence today, I would say “because the United States and France did not follow gold-standard rules.”

I pause to note for the record Friedman’s assertion that the United States and France did not follow “gold-standard rules.” Warming up to the idea, he then accused them of sterilization.

Benjamin Strong and Emile Moreau were admirable characters of personal force and integrity. But . . . the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community. We stressed that the U.S. “did not permit the inflow of gold to expand the U.S. money stock. We not only sterilized it, we went much further. Our money stock moved perversely, going down as the gold stock went up” from 1929 to 1931.

Strong and Moreau tried to reconcile two ultimately incompatible objectives: fixed exchange rates and internal price stability. Thanks to the level at which Britain returned to gold in 1925, the U.S. dollar was undervalued, and thanks to the level at which France returned to gold at the end of 1926, so was the French franc. Both countries as a result experienced substantial gold inflows. Gold-standard rules called for letting the stock of money rise in response to the gold inflows and for price inflation in the U.S. and France, and deflation in Britain, to end the over- and under-valuations. But both Strong and Moreau were determined to prevent inflation and accordingly both sterilized the gold inflows, preventing them from providing the required increase in the quantity of money.

Friedman’s discussion of sterilization is at odds with basic theory. Working with a naïve version of PSFM, he imagines that gold flows passively respond to trade balances independently of monetary forces, and that the monetary authority under a gold standard is supposed to ensure that the domestic money stock varies roughly in proportion to its gold reserves. Ignoring the international deflationary dynamic, he asserts that the US money stock perversely declined from 1929 to 1931 while its gold stock increased. With a faltering banking system, the public shifted from holding demand deposits to holding currency. Gold reserves were legally required against currency, but not against demand deposits, so the shift from deposits to currency entailed an increase in gold reserves. To be sure, the increased US demand for gold added to the upward pressure on the value of gold, and thus to worldwide deflationary pressure. But US gold holdings rose by only $150 million from December 1929 to December 1931, compared with an increase of $1.06 billion in French gold holdings over the same period. Gold accumulation by the US, and its direct contribution to world deflation during the first two years of the Depression, was small relative to that of France.
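These two figures can be read straight off Table 1 in the first post above (millions of dollars); a two-line check, assuming the table and the figures cited here use comparable definitions of gold holdings:

```python
# From Table 1: US and French gold reserves, December 1929 and December 1931,
# in millions of dollars.
us_change = 4_051 - 3_900      # +151, roughly the $150 million cited above
france_change = 2_699 - 1_633  # +1,066, roughly the $1.06 billion cited above
print(us_change, france_change)
```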

Friedman also erred in stating “the common policies they followed were misguided and contributed to the severity and rapidity of transmission of the U.S. shock to the international community.” The shock to the international community clearly originated not in the US but in France. The Fed could have absorbed and mitigated the shock by allowing a substantial outflow of its huge gold reserves, but instead amplified the shock by raising interest rates to nearly unprecedented levels, causing gold to flow into the US.

After correctly noting the incompatibility between fixed exchange rates and internal price stability, Friedman contradicts himself by asserting that, in seeking to stabilize their internal price levels, Strong and Moreau violated the gold-standard “rules,” as if it were rules, not arbitrage, that constrain national price levels to converge toward a common level under a gold standard.

Friedman’s assertion that, after 1925, the dollar was undervalued and sterling overvalued was not wrong. But he misunderstood the consequences of currency undervaluation and overvaluation under the gold standard, a confusion stemming from the underlying misconception, derived from PSFM, that national price levels adjust to balance trade flows, so that, in equilibrium, no country runs a trade deficit or trade surplus.

Thus, in Friedman’s view, dollar undervaluation and sterling overvaluation implied a US trade surplus and British trade deficit, causing gold to flow from Britain to the US. Under gold-standard “rules,” the US money stock and US prices were supposed to rise and the British money stock and British prices were supposed to fall until undervaluation and overvaluation were eliminated. Friedman therefore blamed sterilization of gold inflows by the Fed for preventing the necessary increase in the US money stock and price level to restore equilibrium. But, in fact, from 1925 through 1928, prices in the US were roughly stable and prices in Britain fell slightly. Violating gold-standard “rules” did not prevent the US and British price levels from converging, a convergence driven by market forces, not “rules.”

The stance of monetary policy in a gold-standard country had minimal effect on either the quantity of money or the price level in that country, both being mainly determined by the internationally determined value of gold. What the stance of national monetary policy determines under a gold standard is whether the quantity of money in the country adjusts to the quantity demanded by a process of domestic monetary creation or withdrawal or by the inflow or outflow of gold. Sufficiently tight domestic monetary policy restricting the quantity of domestic money causes a compensatory gold inflow increasing the domestic money stock, while sufficiently easy money causes a compensatory outflow of gold reducing the domestic money stock. Tightness or ease of domestic monetary policy under the gold standard mainly affected gold and foreign-exchange reserves, and only minimally the quantity of domestic money and the domestic price level.
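The logic can be put compactly in the spirit of the monetary approach to the balance of payments mentioned below (a stylized sketch, not a formulation taken from the paper). Write the domestic money stock as the sum of domestically created money $D$ and money created against gold and foreign-exchange reserves $R$, and let the demand for money be pinned down by the internationally determined price level $P^{*}$ and domestic income $y$:

$$ M = D + R, \qquad M^{d} = k\,P^{*}y. $$

With convertibility forcing $M$ to adjust to $M^{d}$,

$$ R = k\,P^{*}y - D, \qquad\text{so that}\qquad \Delta R \approx -\,\Delta D. $$

A tightening that reduces $D$ is offset by a gold inflow that raises $R$, which is the sense in which domestic policy under the gold standard mainly determined the composition, not the size, of the money stock.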

However, the combined effect of many countries simultaneously tightening monetary policy in a deliberate, or even inadvertent, attempt to accumulate gold reserves, or at least to prevent their loss, could indeed drive up the international value of gold through a deflationary process affecting prices in all gold-standard countries. Friedman, even while admitting that, in his Monetary History, he had understated the effect of the Bank of France on the Great Depression, referred only to the overvaluation of sterling and the undervaluation of the dollar and franc as causes of the Great Depression, remaining oblivious to the deflationary effects of gold accumulation and gold appreciation.

It was thus nonsensical for Friedman to argue that the mistake of the Bank of France during the Great Depression was not to increase the quantity of francs in proportion to the increase of its gold reserves. The problem was not that the quantity of francs was too low; it was that the Bank of France prevented the French public from collectively increasing the quantity of francs that they held except by importing gold.

Unlike Friedman, F. A. Hayek actually defended the policy of the Bank of France, and denied that the Bank of France had violated “the rules of the game” after nearly quadrupling its gold reserves between 1928 and 1932. Under his interpretation of those “rules,” because the Bank of France increased the quantity of banknotes after the 1928 restoration of convertibility by about as much as its gold reserves increased, it had fully complied with the “rules.” Hayek’s defense was incoherent; under its legal obligation to convert gold into francs at the official conversion rate, the Bank of France had no choice but to increase the quantity of francs by as much as its gold reserves increased.

That eminent economists like Hayek and Friedman could defend, or criticize, the conduct of the Bank of France during the Great Depression, because the Bank either did, or did not, follow “the rules of the game” under which the gold standard operated, shows the uselessness and irrelevance of the “rules of the game” as a guide to policy. For that reason alone, the failure of empirical studies to find evidence that “the rules of the game” were followed during the heyday of the gold standard is unsurprising. But the deeper reason for that lack of evidence is that PSFM, whose implementation “the rules of the game” were supposed to guarantee, was based on a misunderstanding of the international-adjustment mechanism under either the gold standard or any fixed-exchange-rates system.

Despite the grip of PSFM over most of the profession, a few economists did show a deeper understanding of the adjustment mechanism. The idea that the price level in terms of gold directly constrained the movements of national price levels was recognized by writers as diverse as Keynes, Mises, and Hawtrey, who all pointed out that the prices of internationally traded commodities were constrained by arbitrage and that the free movement of capital across countries would limit discrepancies in interest rates across countries attached to the gold standard, observations that had already been made by Smith, Thornton, Ricardo, Fullarton and Mill in the classical period. But, until the Monetary Approach to the Balance of Payments became popular in the 1970s, only Hawtrey consistently and systematically deduced the implications of those insights in analyzing both the Great Depression and the Bretton Woods system of fixed, but adjustable, exchange rates that followed World War II.

The inconsistencies and internal contradictions of PSFM were sometimes recognized, but usually overlooked, by business-cycle theorists focusing on the disturbing influence of central banks, thereby perpetuating the mistakes of the Humean Currency School doctrine that attributed cyclical disturbances to the misbehavior of local banking systems inherently disposed to overissue their liabilities.

White and Hogan on Hayek and Cassel on the Causes of the Great Depression

Lawrence White and Thomas Hogan have just published a new paper in the Journal of Economic Behavior and Organization (“Hayek, Cassel, and the origins of the great depression”). Since White is a leading Hayek scholar, who has written extensively on Hayek’s economic writings (e.g., his important 2008 article “Did Hayek and Robbins Deepen the Great Depression?”) and edited the new edition of Hayek’s notoriously difficult volume, The Pure Theory of Capital, when it was published as volume 11 of the Collected Works of F. A. Hayek, the conclusion reached by the new paper that Hayek had a better understanding than Cassel of what caused the Great Depression is not, in and of itself, surprising.

However, I admit to being taken aback by the abstract of the paper:

We revisit the origins of the Great Depression by contrasting the accounts of two contemporary economists, Friedrich A. Hayek and Gustav Cassel. Their distinct theories highlight important, but often unacknowledged, differences between the international depression and the Great Depression in the United States. Hayek’s business cycle theory offered a monetary overexpansion account for the 1920s investment boom, the collapse of which initiated the Great Depression in the United States. Cassel’s warnings about a scarcity of gold reserves related to the international character of the downturn, but the mechanisms he emphasized contributed little to the deflation or depression in the United States.

I wouldn’t deny that there are differences between the way the Great Depression played out in the United States and in the rest of the world, e.g., Britain and France, which, to be sure, suffered less severely than did the US or, say, Germany. It is both possible, and important, to explore and understand the differential effects of the Great Depression in various countries. I am sorry to say that White and Hogan do neither. Instead, taking at face value the dubious authority of Friedman and Schwartz’s treatment of the Great Depression in the Monetary History of the United States, they assert that the cause of the Great Depression in the US was fundamentally different from the cause of the Great Depression in many or all other countries.

Taking that insupportable premise from Friedman and Schwartz, they simply invoke various numerical facts from the Monetary History as if those facts, in and of themselves, demonstrate what needs to be demonstrated: that the causes of the Great Depression in the US were different from those of the Great Depression in the rest of the world. That assumption vitiated the entire treatment of the Great Depression in the Monetary History, and it vitiates the results that White and Hogan reach about the merits of the conflicting explanations of the Great Depression offered by Cassel and Hayek.

I’ve discussed the failings of Friedman’s treatment of the Great Depression and of other episodes he analyzed in the Monetary History in previous posts (e.g., here, here, here, here, and here). The common failing of all the episodes treated by Friedman in the Monetary History and elsewhere is that he misunderstood how the gold standard operated, because his model of the gold standard was a primitive version of the price-specie-flow mechanism in which the monetary authority determines the quantity of money, which then determines the price level, which then determines the balance of payments, the balance of payments being a function of the relative price levels of the different countries on the gold standard. Countries with relatively high price levels experience trade deficits and outflows of gold, and countries with relatively low price levels experience trade surpluses and inflows of gold. Under the mythical “rules of the game” of the gold standard, countries with gold inflows were supposed to expand their money supplies, so that prices would rise, and countries with outflows were supposed to reduce their money supplies, so that prices would fall. If countries followed the rules, then an international monetary equilibrium would eventually be reached.

That is the model of the gold standard that Friedman used throughout his career. He was not alone; Hayek and Mises and many others also used that model, following Hume’s treatment in his essay on the balance of trade. But it’s the wrong model. The correct model is the one originating with Adam Smith, based on the law of one price, which says that prices of all commodities in terms of gold are equalized by arbitrage in all countries on the gold standard.

As a first approximation, under the Smithian model there is only one price level, adjusted for the different currency parities, for all countries on the gold standard. So if there is deflation in one country on the gold standard, there is deflation in all countries on the gold standard. If the rest of the world was suffering from deflation under the gold standard, the US was suffering a deflation of approximately the same magnitude as every other gold-standard country.
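In symbols (a stylized statement of the arbitrage constraint, not a quotation from Smith or from the paper): if $P_{g}$ is the internationally determined price level measured in gold and $\epsilon_{i}$ the fixed gold parity of country $i$’s currency, then

$$ P_{i} = \epsilon_{i}\,P_{g}, \qquad\text{so}\qquad \hat{P}_{i} = \hat{P}_{g} \quad\text{for every gold-standard country } i, $$

where hats denote rates of change. A given rate of gold appreciation (deflation measured in gold) therefore shows up as roughly the same rate of deflation in every country on the gold standard, which is the sense in which the deflation of 1929-31 was a common, not a country-specific, phenomenon.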

The entire premise of the Friedman account of the Great Depression, adopted unquestioningly by White and Hogan, is that there was a different causal mechanism for the Great Depression in the United States from the mechanism operating in the rest of the world. That premise is flatly wrong. The causation assumed by Friedman in the Monetary History was the exact opposite of the actual causation. It wasn’t, as Friedman assumed, that the decline in the quantity of money in the US was causing deflation; it was the common deflation in all gold-standard countries that was causing the quantity of money in the US to decline.

To be sure, there was a banking collapse in the US that exacerbated the catastrophe, but the collapse was an effect of the underlying cause, deflation, not an independent cause. Absent the deflationary collapse, there is no reason to assume that the investment boom in the most advanced and most productive economy in the world after World War I was unsustainable, as the Hayekian overinvestment/malinvestment hypothesis posits with no evidence of unsustainability other than the subsequent economic collapse.

So what did cause deflation under the gold standard? It was the rapid increase in the monetary demand for gold resulting from the insane policy of the Bank of France (disgracefully endorsed by Hayek as late as 1932) which Cassel, along with Ralph Hawtrey (whose writings, closely parallel to Cassel’s on the danger of postwar deflation, avoid all of the ancillary mistakes White and Hogan attribute to Cassel), was warning would lead to catastrophe.

It is true that Cassel also believed that over the long run not enough gold was being produced to avoid deflation. White and Hogan spend inordinate space and attention on that issue, because that secular tendency toward deflation is entirely different from the catastrophic effects of the increase in gold demand in the late 1920s triggered by the insane policy of the Bank of France.

The US could have mitigated the effects if it had been willing to accommodate the Bank of France’s demand to increase its gold holdings. Of course, mitigating the effects of the insane policy of the Bank of France would have rewarded the French for their catastrophic policy, but, under the circumstances, some other means of addressing French misconduct would have spared the world incalculable suffering. But misled by an inordinate fear of stock market speculation, the Fed tightened policy in 1928-29 and began accumulating gold rather than accommodate the French demand.

And the Depression came.

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making a prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism which, only under duress, would be toned down to meet the objections raised against the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, or because one of the auxiliary assumptions required to make the prediction is false, or even because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of their theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement that any ad hoc assumption be independently testable would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are always legitimate, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth seems flat and that the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that reconciled the theory with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, one that predicts correctly whenever the current theory predicts correctly and predicts more accurately when the current theory fails, the alternative theory is surely preferable; but that simply underscores the point that evaluating any theory in isolation is not very meaningful. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (who was Oakeshott’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of the differing attitudes toward using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were widely viewed as too extreme to make probability theory an acceptable basis for empirical inferences from the data. Those early negative attitudes toward accepting probability theory as the basis for making statistical inferences from data were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data in regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.
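Before turning to Biddle’s explanation, it may help to see concretely what the “mechanically objective” routine at issue looks like in practice. The following sketch is purely illustrative (simulated data, a hand-rolled least-squares fit, and the conventional 5-percent threshold); nothing in it comes from Biddle’s paper:

```python
import numpy as np

# A minimal, purely illustrative sketch of the "mechanically objective" routine
# discussed below: estimate a regression slope by ordinary least squares,
# compute a conventional standard error, and apply a fixed 5-percent
# significance threshold. The data are simulated.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)             # true slope of 0.5 plus noise

X = np.column_stack([np.ones(n), x])         # regressors: intercept and x
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS coefficient estimates
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])    # unbiased residual variance
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

t_stat = beta[1] / se[1]
print(f"slope = {beta[1]:.3f}, s.e. = {se[1]:.3f}, t = {t_stat:.2f}")
# Under the post-1970s convention, the estimate "counts" as evidence only if
# |t| exceeds the conventional critical value (about 1.96 at the 5% level).
print("statistically significant at 5 percent:", abs(t_stat) > 1.96)
```

Biddle’s explanation follows.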

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s discussion of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships is strikingly similar to Oakeshott’s discussion, and it is depressing in its implications for the decline of expert judgment among economists. Expert judgment has been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition against the kinds of expert judgments that statistically knowledgeable economists, including Nobel Prize winners such as Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research that does not obey the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that have almost totally been displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake even Popper was never guilty of.
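To make the convention being criticized here concrete, consider a minimal sketch, with simulated data of my own invention rather than anything drawn from the studies mentioned above, of how the significance requirement operates in practice: an estimate may be sizable and positive, yet the conventional test tells the researcher to set it aside.

```python
# Illustrative sketch: a sizable positive estimate that fails the conventional
# 5% significance test. Simulated data; not taken from any study discussed above.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                        # small sample, as in much early empirical work
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=2.0, size=n)   # a real but noisy relationship

X = np.column_stack([np.ones(n), x])          # regressors: intercept and x
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS estimates
resid = y - X @ beta
s2 = resid @ resid / (n - 2)                  # residual variance
se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
t_stat = beta[1] / se

print(f"estimated slope = {beta[1]:.2f}, t-statistic = {t_stat:.2f}")
# If |t| falls short of roughly 2.1 (the 5% critical value with 18 degrees of
# freedom), the estimate is declared "insignificant" and, under the post-1970s
# convention, discounted, whatever an expert's judgment about the data suggests.
```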

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only add that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That is the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts rather than empirically verifiable objects, were themselves, on the Positivists’ own criterion of meaning, meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, the judgment by which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.

Cleaning Up After Burns’s Mess

In my two recent posts (here and here) about Arthur Burns’s lamentable tenure as Chairman of the Federal Reserve System from 1970 to 1978, my main criticism of Burns has been that, apart from his willingness to subordinate monetary policy to the political interests of the man who appointed him, Burns failed to understand that an incomes policy to restrain wages, thereby minimizing the tendency of disinflation to reduce employment, could not, in principle, reduce inflation unless monetary restraint correspondingly reduced the growth of total spending and income. Inflationary (or employment-reducing) wage increases can’t be prevented by an incomes policy if the rate of increase in total spending, and hence total income, isn’t controlled. King Canute couldn’t prevent the tide from coming in, and neither Arthur Burns nor the Wage and Price Council could slow the increase in wages when total spending was increasing at a rate faster than was consistent with the 3% inflation rate that Burns was aiming for.

In this post, I’m going to discuss how the mess left behind by Burns, upon his departure from the Fed in 1978, had to be cleaned up. The mess got even worse under Burns’s successor, G. William Miller. The cleanup didn’t begin until Carter appointed Paul Volcker in 1979, when it had become obvious that the Fed’s monetary policy had failed to cope with the problems left behind by Burns. After unleashing powerful inflationary forces under the cover of the wage-and-price controls he had persuaded Nixon to impose in 1971 as a precondition for delivering the monetary stimulus so desperately desired by Nixon to ensure his reelection, Burns continued providing that stimulus even after Nixon’s reelection, when it might still have been possible to taper off the stimulus before inflation flared up, and without aborting the expansion then under way. In his arrogance or ignorance, Burns chose not to adjust a policy that had already accomplished its intended result.

Not until the end of 1973, after crude oil prices quadrupled owing to a cutback in OPEC oil output, driving inflation above 10% in 1974, did Burns withdraw the monetary stimulus that had been administered in increasing doses since early 1971. Shocked out of his complacency by the outcry against 10% inflation, Burns shifted monetary policy toward restraint, bringing down the growth in nominal spending and income from over 11% in Q4 1973 to only 8% in Q1 1974.

After prolonging monetary stimulus unnecessarily for a year, Burns erred grievously by applying monetary restraint in response to the rise in oil prices. The largely exogenous rise in oil prices would most likely have caused a recession even with no change in monetary policy. By subjecting the economy to the added shock of reduced aggregate demand, Burns turned a mild recession into the worst recession since the 1937-38 recession at the end of the Great Depression, with unemployment peaking at 8.8% in Q2 1975. Nor did the reduction in aggregate demand have much anti-inflationary effect, because the incremental reduction in total spending occasioned by the monetary tightening was reflected mainly in reduced output and employment rather than in reduced inflation.

But even with unemployment reaching its highest level in almost 40 years, inflation did not fall below 5% – and then only briefly – until a year after the bottom of the recession. When President Carter took office in 1977, Burns, hoping to be reappointed to another term, provided Carter with a monetary expansion to hasten the reduction in unemployment that Carter had promised in his Presidential campaign. However, Burns’s accommodative policy did not sufficiently endear him to Carter to secure the coveted reappointment.

The short and unhappy tenure of Carter’s first appointee, G. William Miller, during which inflation rose from 6.5% to 10%, ended abruptly when Carter, with his Administration in crisis, sacked his Treasury Secretary and replaced him with Miller. Under pressure from the financial community to address the intractable inflation that seemed to be accelerating in the wake of a second oil shock following the Iranian Revolution and hostage taking, Carter felt constrained to appoint Volcker, formerly a high Treasury official under both Kennedy and Nixon and then serving as President of the New York Federal Reserve Bank, who was known to be the favored choice of the financial community.

A year after leaving the Fed, Burns gave the annual Per Jacobsson Lecture to the International Monetary Fund. Calling his lecture “The Anguish of Central Banking,” Burns offered a defense of his tenure, arguing, in effect, that he should not be blamed for his poor performance, because the job of central banking is so very hard. Central bankers could control inflation, but only by inflicting unacceptably high unemployment. The political authorities and the public to whom central bankers are ultimately accountable would simply not tolerate the high unemployment that would be necessary for inflation to be controlled.

Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted money supply and created sufficient strains in the financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture.

Burns’s framing of the choices facing a central bank was tendentious; no policy maker had suggested that, after years of inflation had convinced the public to expect inflation to continue indefinitely, the Fed should “terminate inflation with little delay.” And Burns was hardly a disinterested actor as Fed chairman, having orchestrated a monetary expansion to promote the re-election chances of his benefactor Richard Nixon after securing, in return for that service, Nixon’s agreement to implement an incomes policy to limit the growth of wages, a policy that Burns believed would contain the inflationary consequences of the monetary expansion.

However, as I explained in my post on Hawtrey and Burns, the conceptual rationale for an incomes policy was not to allow monetary expansion to increase total spending, output and employment without causing increased inflation, but to allow monetary restraint to be administered without increasing unemployment. Under the circumstances in the summer of 1971, however, when a recovery from the 1970 recession was just starting and unemployment was still high, monetary expansion might have hastened a recovery in output and employment, because the resulting increase in total spending and income might still have been absorbed in increased output and employment rather than in higher wages and prices.

But using controls over wages and prices to speed the return to full employment could succeed only while substantial unemployment and unused capacity allowed output and employment to increase; the faster the recovery, the sooner increased spending would show up in rising prices and wages, or in supply shortages, rather than in increased output. An incomes policy to enable monetary expansion to speed the recovery from recession and restore full employment might theoretically be successful, but only if the monetary stimulus were promptly tapered off before driving up inflation.

Thus, if Burns wanted an incomes policy to enable monetary expansion to hasten the recovery and maximize the political benefit to Nixon in time for the 1972 election, he ought to have recognized the need to withdraw the stimulus after the election. But for a year after Nixon’s reelection, Burns continued the monetary expansion without letup. Burns’s expression of anguish at the dilemma foisted upon him by circumstances beyond his control hardly evokes sympathy, sounding more like an attempt to deflect responsibility for his own mistakes or malfeasance in serving as an instrument of the criminal Committee to Re-elect the President without bothering to alter that politically motivated policy after its dishonorable mission had been accomplished.

But it was not until Burns’s successor, G. William Miller, was succeeded by Paul Volcker in August 1979 that the Fed was willing to adopt — and maintain — an anti-inflationary policy. In his recently published memoir Volcker recounts how, responding to President Carter’s request in July 1979 that he accept appointment as Fed chairman, he told Mr. Carter that, to bring down inflation, he would adopt a tighter monetary policy than had been followed by his predecessor. He also writes that, although he did not regard himself as a Friedmanite Monetarist, he had become convinced that to control inflation it was necessary to control the quantity of money, though he did not agree with Friedman that a rigid rule was required to keep the quantity of money growing at a constant rate. To what extent the Fed would set its policy in terms of a fixed target rate of growth in the quantity of money became the dominant issue in Fed policy during Volcker’s first term as Fed chairman.

In a review of Volcker’s memoir widely cited in the econ blogosphere, Tim Barker decried Volcker’s tenure, especially his determination to control inflation even at the cost of spilling blood – other people’s blood – if that was what was necessary to eradicate the inflationary psychology of the 1970s, which had become a seemingly permanent feature of the economic environment by the time of Volcker’s appointment.

If someone were to make a movie about neoliberalism, there would need to be a starring role for the character of Paul Volcker. As chair of the Federal Reserve from 1979 to 1987, Volcker was the most powerful central banker in the world. These were the years when the industrial workers movement was defeated in the United States and United Kingdom, and third world debt crises exploded. Both of these owe something to Volcker. On October 6, 1979, after an unscheduled meeting of the Fed’s Open Market Committee, Volcker announced that he would start limiting the growth of the nation’s money supply. This would be accomplished by limiting the growth of bank reserves, which the Fed influenced by buying and selling government securities to member banks. As money became more scarce, banks would raise interest rates, limiting the amount of liquidity available in the overall economy. Though the interest rates were a result of Fed policy, the money supply target let Volcker avoid the politically explosive appearance of directly raising rates himself. The experiment—known as the Volcker Shock—lasted until 1982, inducing what remains the worst unemployment since the Great Depression and finally ending the inflation that had troubled the world economy since the late 1960s. To catalog all the results of the Volcker Shock—shuttered factories, broken unions, dizzying financialization—is to describe the whirlwind we are still reaping in 2019. . . .

Barker is correct that Volcker had been persuaded that, to tighten monetary policy, the quantity of reserves that the Fed was providing to the banking system had to be controlled. But making the quantity of bank reserves the policy instrument was a technical change. Monetary policy had been, and could still have been, conducted using the traditional interest-rate instrument, and it would have been entirely possible for Volcker to tighten monetary policy with that instrument.

It is possible that, as Barker asserts, it was politically easier to tighten policy using a quantity instrument than an interest-rate instrument. But even so, the real difficulty was not the instrument used, but the economic and political consequences of a tight monetary policy. The choice of the instrument to carry out the policy could hardly have made more than a marginal difference on the balance of political forces favoring or opposing that policy. The real issue was whether a tight monetary policy aimed at reducing inflation was more effectively conducted using the traditional interest-rate instrument or the quantity-instrument that Volcker adopted. More on this point below.

Those who praise Volcker like to say he “broke the back” of inflation. Nancy Teeters, the lone dissenter on the Fed Board of Governors, had a different metaphor: “I told them, ‘You are pulling the financial fabric of this country so tight that it’s going to rip. You should understand that once you tear a piece of fabric, it’s very difficult, almost impossible, to put it back together again.’” (Teeters, also the first woman on the Fed board, told journalist William Greider that “None of these guys has ever sewn anything in his life.”) Fabric or backbone: both images convey violence. In any case, a price index doesn’t have a spine or a seam; the broken bodies and rent garments of the early 1980s belonged to people. Reagan economic adviser Michael Mussa was nearer the truth when he said that “to establish its credibility, the Federal Reserve had to demonstrate its willingness to spill blood, lots of blood, other people’s blood.”

Did Volcker consciously see unemployment as the instrument of price stability? A Rhode Island representative asked him “Is it a necessary result to have a large increase in unemployment?” Volcker responded, “I don’t know what policies you would have to follow to avoid that result in the short run . . . We can’t undertake a policy now that will cure that problem [unemployment] in 1981.” Call this the necessary byproduct view: defeating inflation is the number one priority, and any action to put people back to work would raise inflationary expectations. Growth and full employment could be pursued once inflation was licked. But there was more to it than that. Even after prices stabilized, full employment would not mean what it once had. As late as 1986, unemployment was still 6.6 percent, the Reagan boom notwithstanding. This was the practical embodiment of Milton Friedman’s idea that there was a natural rate of unemployment, and attempts to go below it would always cause inflation (for this reason, the concept is known as NAIRU or non-accelerating inflation rate of unemployment). The logic here is plain: there needed to be millions of unemployed workers for the economy to work as it should.

I want to make two points about Volcker’s policy. The first, which I made in my book Free Banking and Monetary Reform over 30 years ago, which I have reiterated in several posts on this blog, and which I discussed in my recent paper “Rules versus Discretion in Monetary Policy Historically Contemplated” (for an ungated version click here), is that using a quantity instrument to tighten monetary policy, as advocated by Milton Friedman and acquiesced in by Volcker, induces expectations about the future actions of the monetary authority that undermine the policy, rendering it untenable. Volcker eventually realized the perverse expectational consequences of trying to implement a monetary policy using a fixed rule for the quantity instrument, but his learning experience in following Friedman’s advice needlessly exacerbated and prolonged the agony of the 1982 downturn for months after inflationary expectations had been broken.

The problem was well-known in the nineteenth century thanks to British experience under the Bank Charter Act that imposed a fixed quantity limit on the total quantity of banknotes issued by the Bank of England. When the total of banknotes approached the legal maximum, a precautionary demand for banknotes was immediately induced by those who feared that they might not later be able to obtain credit if it were needed because the Bank of England would be barred from making additional credit available.

Here is how I described Volcker’s Monetarist experiment in my book.

The danger lurking in any Monetarist rule has been perhaps best summarized by F. A. Hayek, who wrote:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if under such a provision it ever became known that the amount of cash in circulation was approaching the upper limit and therefore a need for increased liquidity could not be met.

Hayek’s warnings were subsequently borne out after the Federal Reserve Board shifted its policy from targeting interest rates to targeting the monetary aggregates. The apparent shift toward a less inflationary monetary policy, reinforced by the election of a conservative, antiinflationary president in 1980, induced an international shift from other currencies into the dollar. That shift caused the dollar to appreciate by almost 30 percent against other major currencies.

At the same time the domestic demand for deposits was increasing as deregulation of the banking system reduced the cost of holding deposits. But instead of accommodating the increase in the foreign and domestic demands for dollars, the Fed tightened monetary policy. . . . The deflationary impact of that tightening overwhelmed the fiscal stimulus of tax cuts and defense buildup, which, many had predicted, would cause inflation to speed up. Instead the economy fell into the deepest recession since the 1930s, while inflation, by 1982, was brought down to the lowest levels since the early 1960s. The contraction, which began in July 1981, accelerated in the fourth quarter of 1981 and the first quarter of 1982.

The rapid disinflation was bringing interest rates down from the record high levels of mid-1981 and the economy seemed to bottom out in the second quarter, showing a slight rise in real GNP over the first quarter. Sticking to its Monetarist strategy, the Fed reduced its targets for monetary growth in 1982 to between 2.5 and 5.5 percent. But in January and February, the money supply increased at a rapid rate, perhaps in anticipation of an incipient expansion. Whatever its cause, the early burst of the money supply pushed M-1 way over its target range.

For the next several months, as M-1 remained above its target, financial and commodity markets were preoccupied with what the Fed was going to do next. The fear that the Fed would tighten further to bring M-1 back within its target range reversed the slide in interest rates that began in the fall of 1981. A striking feature of the behavior of interest rates at that time was that credit markets seemed to be heavily influenced by the announcements every week of the change in M-1 during the previous week. Unexpectedly large increases in the money supply put upward pressure on interest rates.

The Monetarist explanation was that the announcements caused people to raise their expectations of inflation. But if the increase in interest rates had been associated with a rising inflation premium, the announcements should have been associated with weakness in the dollar on foreign exchange markets and rising commodities prices. In fact, the dollar was rising and commodities prices were falling consistently throughout this period – even immediately after an unexpectedly large jump in M-1 was announced. . . . (pp. 218-19)

I pause in my own earlier narrative to add the further comment that the increase in interest rates in early 1982 clearly reflected an increasing liquidity premium, caused by the reduced availability of bank reserves, making cash more desirable to hold than real assets, thereby inducing further declines in asset values.

However, increases in M-1 during July turned out to be far smaller than anticipated, relieving some of the pressure on credit and commodities markets and allowing interest rates to begin to fall again. The decline in interest rates may have been eased slightly by . . . Volcker’s statement to Congress on July 20 that monetary growth at the upper range of the Fed’s targets would be acceptable. More important, he added that the Fed was willing to let M-1 remain above its target range for a while if the reason seemed to be a precautionary demand for liquidity. By August, M-1 had actually fallen back within its target range. As fears of further tightening by the Fed subsided, the stage was set for the decline in interest rates to accelerate, [and] the great stock market rally began on August 17, when the Dow . . . rose over 38 points [almost 5%].

But anticipation of an incipient recovery again fed monetary growth. From the middle of August through the end of September, M-1 grew at an annual rate of over 15 percent. Fears that rapid monetary growth would induce the Fed to tighten monetary policy slowed down the decline in interest rates and led to renewed declines in commodities prices and the stock market, while pushing up the dollar to new highs. On October 5 . . . the Wall Street Journal reported that bond prices had fallen amid fears that the Fed might tighten credit conditions to slow the recent strong growth in the money supply. But on the very next day it was reported that the Fed expected inflation to stay low and would therefore allow M-1 to exceed its targets. The report sparked a major decline in interest rates and the Dow . . . soared another 37 points. (pp. 219-20)

The subsequent recovery, which began at the end of 1982, quickly became very powerful, but persistent fears that the Fed would backslide, at the urging of Milton Friedman and his Monetarist followers, into its bad old Monetarist habits periodically caused interest-rate spikes reflecting rising liquidity premiums as the public built up precautionary cash balances. Luckily, Volcker was astute enough to shrug off the overwrought warnings of Friedman and other Monetarists that rapid increases in the monetary aggregates foreshadowed the imminent return of double-digit inflation.

Thus, the Monetarist obsession with controlling the monetary aggregates senselessly prolonged an already deep recession that, by Q1 1982, had already slain the inflationary dragon, inflation having fallen to less than half its 1981 peak while GDP actually contracted in nominal terms. But because the money supply was expanding at a faster rate than was acceptable to Monetarist ideology, the Fed continued its futile, but destructive, effort to keep the monetary aggregates from overshooting their arbitrary Monetarist target range. Not until the summer of 1982 did Volcker finally and belatedly decide that enough was enough, announcing that the Fed would declare victory over inflation and call off its Monetarist crusade, even if doing so meant incurring Friedman’s wrath and condemnation for abandoning the true Monetarist doctrine.

Which brings me to my second point about Volcker’s policy. While it’s clear that Volcker’s decision to adopt control over the monetary aggregates as the focus of monetary policy was disastrously misguided, monetary policy can’t be conducted without some target. Although the Fed’s interest rate can serve as a policy instrument, it is not a plausible policy target. The preferred policy target is generally thought to be the rate of inflation. The Fed, after all, is mandated to achieve price stability, which is usually understood to mean targeting a rate of inflation of about 2%. A more sophisticated alternative would be to aim at a suitable price-level path, thereby allowing some upward movement, say, at a 2% annual rate. The difference between an inflation target and a moving price-level target is that an inflation target is unaffected by past deviations of actual from targeted inflation, while a moving price-level target would require some catch-up inflation to make up for past below-target inflation and reduced inflation to compensate for past above-target inflation.
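A stylized numerical sketch, using invented magnitudes of my own rather than anything in the sources discussed here, may help fix the distinction: under a 2% inflation target a year of zero inflation is simply forgotten, whereas under a 2% price-level path the shortfall has to be made up.

```python
# Stylized comparison of a 2% inflation target with a 2% price-level path target.
# Purely illustrative numbers; not drawn from any data discussed in the text.

target_inflation = 0.02
price_level = 100.0        # actual price level, starting on target
target_path = 100.0        # price level implied by the 2% path

# Year 1: inflation undershoots (0% instead of 2%)
price_level *= 1.00
target_path *= 1.02

# Inflation targeting: bygones are bygones; next year's target is still 2%.
inflation_target_next = target_inflation

# Price-level path targeting: the target is the path itself, so the shortfall
# must be made up with above-2% "catch-up" inflation.
catch_up_inflation = target_path * (1 + target_inflation) / price_level - 1

print(f"next-year inflation under inflation targeting:   {inflation_target_next:.2%}")
print(f"next-year inflation under price-level targeting: {catch_up_inflation:.2%}")
# roughly 2% versus roughly 4%: the price-level target "remembers" past misses.
```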

However, the 1981-82 recession shows exactly why an inflation target, and even a moving price-level target, are bad ideas. By almost any comprehensive measure, inflation was still positive throughout the 1981-82 recession, though the producer price index was nearly flat. Thus, an inflation target during the 1981-82 recession would have been almost as bad a guide for monetary policy as the monetary aggregates, because most measures of inflation showed inflation running between 3 and 5 percent even at the depth of the recession. Inflation targeting is thus, on its face, an unreliable basis for conducting monetary policy.

But the deeper problem with targeting inflation is that seeking to achieve an inflation target during a recession, when the very existence of a recession is presumptive evidence of the need for monetary stimulus, is actually a recipe for disaster, or, at the very least, for needlessly prolonging a recession. In a recession, the goal of monetary policy should be to stabilize the rate of increase in nominal spending along a time path consistent with the desired rate of inflation. Thus, as long as output is contracting or increasing very slowly, the desired rate of inflation should be higher than the desired long-term rate. The appropriate strategy for achieving an inflation target ought to be to let inflation fall as the accelerating expansion of output and employment characteristic of most recoveries outpaces a stable expansion of nominal spending.

The true goal of monetary policy should always be to maintain a time path of total spending consistent with a desired price-level path over time. But it should not be the objective of monetary policy always to be as close as possible to the desired path, because trying to stay on that path would likely destabilize the real economy. Market monetarists argue that the goal of monetary policy ought to be to keep nominal GDP expanding at whatever rate is consistent with maintaining the desired long-run price-level path. That is certainly a reasonable practical rule for monetary policy, but the policy criterion I have discussed here would, at least in principle, be consistent with a more activist approach in which the monetary authority would seek to hasten the restoration of full employment during recessions by temporarily increasing the rate of monetary expansion, and of growth in nominal GDP, as long as real output and employment remained below the maximum levels consistent with the desired price-level path over time. But such a strategy would require the monetary authority to be able to fine-tune its monetary expansion so that it was tapered off just as the economy was reaching its maximum sustainable output and employment path. Whether such fine-tuning would be possible in practice is a question to which I don’t think we now know the answer.
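As a rough illustration of the kind of rule being described, here is a sketch of my own that assumes, purely for the sake of the example, a 5% nominal-spending growth path: the policy gap the monetary authority responds to is the deviation of actual nominal GDP from the target path, not the deviation of measured inflation from 2%.

```python
# Minimal sketch of a nominal-spending (NGDP) level path as a policy target.
# All numbers are hypothetical; the 5% growth path is an assumption for
# illustration, not a figure taken from the text.

initial_ngdp = 1000.0      # nominal GDP in the base year (arbitrary units)
path_growth = 0.05         # assumed target growth of total spending per year

def target_path(year):
    """Nominal GDP implied by the level path in a given year."""
    return initial_ngdp * (1 + path_growth) ** year

# Suppose a recession leaves actual nominal GDP short of the path in year 2.
actual_ngdp = {0: 1000.0, 1: 1050.0, 2: 1070.0}

for year, actual in actual_ngdp.items():
    gap = actual / target_path(year) - 1
    print(f"year {year}: actual {actual:.0f}, target {target_path(year):.1f}, gap {gap:+.1%}")

# Under the level-path rule, the negative gap in year 2 (about -3%) calls for
# faster nominal-spending growth until the path is regained, even though
# measured inflation may still be positive, as it was in 1981-82.
```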

 

Friedman and Schwartz, Eichengreen and Temin, Hawtrey and Cassel

Barry Eichengreen and Peter Temin are two of the great economic historians of our time, writing, in the splendid tradition of Charles Kindleberger, profound and economically acute studies of the economic and financial history of the nineteenth and early twentieth centuries. Most notably, they have focused on periods of panic, crisis and depression, of which by far the best-known and most important episode is the Great Depression, which started late in 1929, bottomed out early in 1933, but lingered on for most of the 1930s. They are rightly acclaimed for having emphasized and highlighted the critical role of the gold standard in the Great Depression, a role largely overlooked in the early Keynesian accounts of the Great Depression. Those accounts identified a variety of specific shocks, amplified by the volatile entrepreneurial expectations and animal spirits that drive, or dampen, business investment, and further exacerbated by inherent instabilities in market economies that lack self-stabilizing mechanisms for maintaining or restoring full employment.

That Keynesian vision of an unstable market economy vulnerable to episodic, but prolonged, lapses from full employment was vigorously, but at first unsuccessfully, disputed by advocates of free-market economics. It wasn’t until Milton Friedman provided an alternative narrative explaining the depth and duration of the Great Depression that the post-war dominance of Keynesian theory among academic economists was seriously challenged. Friedman’s alternative narrative of the Great Depression was first laid out in the longest chapter (“The Great Contraction”) of his magnum opus, co-authored with Anna Schwartz, A Monetary History of the United States. In Friedman’s telling, the decline in the US money stock was the critical independent causal factor that directly led to the decline in prices, output, and employment. The contraction in the quantity of money was caused not by the inherent instability of free-market capitalism, but, owing to a combination of incompetence and dereliction of duty, by the Federal Reserve.

In the Monetary History of the United States, all the heavy lifting necessary to account for both secular and cyclical movements in the price level, output and employment is done by supposedly exogenous changes in the nominal quantity of money, Friedman having considered it to be of the utmost significance that the largest movements in the quantity of money, and in prices, output and employment, occurred during the Great Depression. The narrative arc of the Monetary History was designed to impress on the mind of the reader the axiomatic premise that the monetary authority has virtually absolute control over the quantity of money, a premise that served as the basis for inferring that changes in the quantity of money are what cause changes in prices, output and employment.

Friedman’s treatment of the gold standard (which I have discussed here, here and here) was both perfunctory and theoretically confused. Unable to reconcile the notion that the monetary authority has absolute control over the nominal quantity of money with the proposition that the price level in any country on the gold standard cannot deviate from the price levels of other gold-standard countries without triggering arbitrage transactions that restore the equality between those price levels, Friedman dodged the inconsistency by repeatedly invoking his favorite fudge factor: long and variable lags between changes in the quantity of money and changes in prices, output and employment. Despite its vacuity, the long-and-variable-lag dodge allowed Friedman to ignore the inconvenient fact that the US price level in the Great Depression did not and could not vary independently of the price levels of all other countries then on the gold standard.

I’ll note parenthetically that Keynes himself was also responsible for this unnecessary and distracting detour, because the General Theory was written almost entirely in the context of a closed-economy model with an exogenously determined quantity of money, thereby unwittingly providing Friedman with a useful tool with which to propagate his Monetarist narrative. The difference, of course, is that Keynes, as demonstrated in his brilliant early works, Indian Currency and Finance, A Tract on Monetary Reform, and The Economic Consequences of Mr. Churchill, had a correct understanding of the basic theory of the gold standard, an understanding that, owing to his obsessive fixation on the nominal quantity of money, eluded Friedman over his whole career. Why Keynes, who had a perfectly good theory of what was happening in the Great Depression available to him, as it was to others, was diverted to an unnecessary, but not uninteresting, new theory is a topic that I wrote about a very long time ago here, though I’m not so sure that I came up with a good or even adequate explanation.

So it does not speak well of the economics profession that it took nearly a quarter of a century before the basic internal inconsistency underlying Friedman’s account of the Great Depression was sufficiently recognized to call for an alternative theoretical account that placed the gold standard at the heart of the narrative. It was Peter Temin and Barry Eichengreen who, both in their own separate works (e.g., Lessons of the Great Depression by Temin and Golden Fetters by Eichengreen) and in an important paper they co-authored and published in 2000, reminded both economists and historians how important a role the gold standard must play in any historical account of the Great Depression.

All credit is due to Temin and Eichengreen for having brought the critical role of the gold standard in the Great Depression to the attention of economists who had largely derived their understanding of what had caused the Great Depression from either some variant of the Keynesian narrative or Friedman’s Monetarist indictment of the Federal Reserve System. But it’s unfortunate that neither Temin nor Eichengreen gave sufficient credit to either R. G. Hawtrey or Gustav Cassel for having anticipated almost all of their key findings about the causes of the Great Depression. And I think that what prevented Eichengreen and Temin from realizing that Hawtrey in particular had anticipated their explanation of the Great Depression by more than half a century was that they did not fully grasp the key theoretical insight underlying Hawtrey’s explanation.

That insight was that the key to understanding the common world price level in terms of gold under a gold standard is to think in terms of a given world stock of gold and of a total world demand to hold gold, consisting of the real demands to hold gold for commercial, industrial and decorative uses, the private demand to hold gold as an asset, and the monetary demand for gold to be held either as currency or as a reserve for currency. The combined demand to hold gold for all such purposes, given the existing stock of gold, determines a real relative price of gold in terms of all other commodities. That relative price, when expressed in terms of a currency unit convertible into gold, corresponds to an equivalent set of commodity prices in terms of those convertible currency units.

This way of thinking about the world price level under the gold standard was what underlay Hawtrey’s monetary analysis and his application of that analysis in explaining the Great Depression. Given that the world output of gold in any year is generally only about 2 or 3 percent of the existing stock of gold, it is fluctuations in the demand for gold, of which the monetary demand for gold in the period after the outbreak of World War I was clearly the least stable, that cause short-term fluctuations in the value of gold. Hawtrey’s efforts after the end of World War I were therefore focused on the necessity of stabilizing the world’s monetary demand for gold in order to avoid fluctuations in the value of gold as the world moved toward the restoration of the gold standard, a restoration that then seemed, to most monetary and financial experts and most monetary authorities and political leaders, to be both inevitable and desirable.
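A bare-bones numerical sketch of the Hawtreyan logic, a stylization of my own with made-up magnitudes rather than anything taken from Hawtrey or Cassel, runs as follows: with the world gold stock roughly fixed in the short run, an increase in the monetary demand for gold raises the real value of gold and therefore lowers the price level in every gold-standard country at once.

```python
# Stylized sketch of Hawtrey's framework: a (nearly) fixed world gold stock
# and a world demand to hold gold for non-monetary and monetary purposes.
# The value of gold adjusts to equate desired holdings to the stock; the price
# level in gold-standard countries moves inversely with the value of gold.
# All magnitudes are invented for illustration only.

gold_stock = 1000.0                 # world gold stock; annual output adds only ~2-3%

def value_of_gold(monetary_demand, nonmonetary_demand=400.0):
    # Assume, purely for illustration, that desired holdings scale with the
    # real value of gold, so the equilibrium value is proportional to demand.
    return (monetary_demand + nonmonetary_demand) / gold_stock

base_value = value_of_gold(monetary_demand=500.0)
new_value = value_of_gold(monetary_demand=650.0)   # e.g., central banks accumulate gold

# Under a gold standard the commodity price level is (roughly) the reciprocal
# of the real value of gold in terms of commodities.
price_level_change = base_value / new_value - 1
print(f"value of gold rises by {new_value / base_value - 1:.0%}; "
      f"world price level falls by about {-price_level_change:.0%}")
```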

In the opening pages of Golden Fetters, Eichengreen beautifully describes the backdrop against which the attempt to reconstitute the gold standard was about to be made after World War I.

For more than a quarter of a century before World War I, the gold standard provided the framework for domestic and international monetary relations. . .  The gold standard had been a remarkably efficient mechanism for organizing financial affairs. No global crises comparable to the one that began in 1929 had disrupted the operation of financial markets. No economic slump had so depressed output and employment.

The central elements of this system were shattered by . . . World War I. More than a decade was required to complete their reconstruction. Quickly it became evident that the reconstructed gold standard was less resilient than its prewar predecessor. As early as 1929 the new international monetary system began to crumble. Rapid deflation forced countries producing primary commodities to suspend gold convertibility and depreciate their currencies. Payments problems spread next to the industrialized world. . . Britain, along with the United States and France, one of the countries at the center of the international monetary system, was next to experience a crisis, abandoning the gold standard in the autumn of 1931. Some two dozen countries followed suit. The United States dropped the gold standard in 1933; France hung on till the bitter end, which came in 1936.

The collapse of the international monetary system is commonly indicted for triggering the financial crisis that transformed a modest economic downturn into an unprecedented slump. So long as the gold standard was maintained, it is argued, the post-1929 recession remained just another cyclical contraction. But the collapse of the gold standard destroyed confidence in financial stability, prompting capital flight which undermined the solvency of financial institutions. . . Removing the gold standard, the argument continues, further intensified the crisis. Having suspended gold convertibility, policymakers manipulated currencies, engaging in beggar thy neighbor depreciations that purportedly did nothing to stimulate economic recovery at home while only worsening the Depression abroad.

The gold standard, then, is conventionally portrayed as synonymous with financial stability. Its downfall starting in 1929 is implicated in the global financial crisis and the worldwide depression. A central message of this book is that precisely the opposite was true. (Golden Fetters, pp. 3-4).

That is about as clear and succinct and accurate a description of the basic facts leading up to and surrounding the Great Depression as one could ask for, save for the omission of one important causal factor: the world monetary demand for gold.

Eichengreen was certainly not unaware of the importance of the monetary demand for gold, and in the pages that immediately follow, he attempts to fill in that part of the story, adding to our understanding of how the gold standard worked by penetrating deeply into the nature and role of the expectations that supported the gold standard during its heyday, and into the difficulty of restoring those stabilizing expectations after the havoc of World War I, the unexpected post-war inflation, and the subsequent deep 1920-21 depression. Those stabilizing expectations, Eichengreen argued, were the result of the credibility of the commitment to the gold standard and the international cooperation between governments and monetary authorities to ensure that the international gold standard would be maintained notwithstanding the occasional stresses and strains to which so complex an institution would inevitably be subjected.

The stability of the prewar gold standard was instead the result of two very different factors: credibility and cooperation. Credibility is the confidence invested by the public in the government’s commitment to a policy. The credibility of the gold standard derived from the priority attached by governments to the maintenance of balance-of-payments equilibrium. In the core countries – Britain, France and Germany – there was little doubt that the authorities would take whatever steps were required to defend the central bank’s gold reserves and maintain the convertibility of the currency into gold. If one of these central banks lost gold reserves and its exchange rate weakened, funds would flow in from abroad in anticipation of the capital gains investors in domestic assets would reap once the authorities adopted measures to stem reserve losses and strengthen the exchange rate. . . The exchange rate consequently strengthened on its own, and stabilizing capital flows minimized the need for government intervention. The very credibility of the official commitment to gold meant that this commitment was rarely tested. (p. 5)

But credibility also required cooperation among the various countries on the gold standard, especially the major countries at its center, of which Britain was the most important.

Ultimately, however, the credibility of the prewar gold standard rested on international cooperation. When the stabilizing speculation and domestic intervention proved incapable of accommodating a disturbance, the system was stabilized through cooperation among governments and central banks. Minor problems could be solved by tacit cooperation, generally achieved without open communication among the parties involved. . .  Under such circumstances, the most prominent central bank, the Bank of England, signaled the need for coordinated action. When it lowered its discount rate, other central banks usually responded in kind. In effect, the Bank of England provided a focal point for the harmonization of national monetary policies. . .

Major crises in contrast typically required different responses from different countries. The country losing gold and threatened by a convertibility crisis had to raise interest rates to attract funds from abroad; other countries had to loosen domestic credit conditions to make funds available to the central bank experiencing difficulties. The follow-the-leader approach did not suffice. . . . Such crises were instead contained through overt, conscious cooperation among central banks and governments. . . Consequently, the resources any one country could draw on when its gold parity was under attack far exceeded its own reserves; they included the resources of the other gold standard countries. . . .

What rendered the commitment to the gold standard credible, then, was that the commitment was international, not merely national. That commitment was achieved through international cooperation. (pp. 7-8)

Eichengreen uses this excellent conceptual framework to explain the dysfunction of the newly restored gold standard in the 1920s. Because of the monetary dislocation and demonetization of gold during World War I, the value of gold had fallen to about half of its prewar level, so that reestablishing the gold standard required not only restoring gold as a currency standard but also readjusting – sometimes massively – the prewar relative values of the various national currency units. And preventing the natural tendency of gold to revert to its prewar value as gold was remonetized would have required an unprecedented level of international cooperation among the various countries as they restored the gold standard. Thus, the gold standard was being restored in the 1920s under conditions in which neither the credibility of the prewar commitment to the gold standard nor the level of international cooperation among countries necessary to sustain that commitment had been restored.

An important further contribution that Eichengreen, following Temin, brings to the historical narrative of the Great Depression is to incorporate the political forces that affected and often determined the decisions of policy makers directly into the narrative rather than treat those decisions as being somehow exogenous to the purely economic forces that were controlling the unfolding catastrophe.

The connection between domestic politics and international economics is at the center of this book. The stability of the prewar gold standard was attributable to a particular constellation of political as well as economic forces. Similarly, the instability of the interwar gold standard is explicable in terms of political as well as economic changes. Politics enters at two levels. First, domestic political pressures influence governments’ choices of international economic policies. Second, domestic political pressures influence the credibility of governments’ commitments to policies and hence their economic effects. . . (p. 10)

The argument, in a nutshell, is that credibility and cooperation were central to the smooth operation of the classical gold standard. The scope for both declined abruptly with the intervention of World War I. The instability of the interwar gold standard was the inevitable result. (p. 11)

Having explained and focused attention on the necessity for credibility and cooperation for a gold standard to function smoothly, Eichengreen then begins his introductory account of how the lack of credibility and cooperation led to the breakdown of the gold standard that precipitated the Great Depression, starting with the structural shift after World War I that made the rest of the world highly dependent on the US as a source of goods and services and as a source of credit, rendering the rest of the world chronically disposed to run balance-of-payments deficits with the US, deficits that could be financed only by the extension of credit by the US.

[I]f U.S. lending were interrupted, the underlying weakness of other countries’ external positions . . . would be revealed. As they lost gold and foreign exchange reserves, the convertibility of their currencies into gold would be threatened. Their central banks would be forced to  restrict domestic credit, their fiscal authorities to compress public spending, even if doing so threatened to plunge their economies into recession.

This is what happened when U.S. lending was curtailed in the summer of 1928 as a result of increasingly stringent Federal Reserve monetary policy. Inauspiciously, the monetary contraction in the United States coincided with a massive flow of gold to France, where monetary policy was tight for independent reasons. Thus, gold and financial capital were drained by the United States and France from other parts of the world. Superimposed on already weak foreign balances of payments, these events provoked a greatly magnified monetary contraction abroad. In addition they caused a tightening of fiscal policies in parts of Europe and much of Latin America. This shift in policy worldwide, and not merely the relatively modest shift in the United States, provided the contractionary impulse that set the stage for the 1929 downturn. The minor shift in American policy had such dramatic effects because of the foreign reaction it provoked through its interactions with existing imbalances in the pattern of international settlements and with the gold standard constraints. (pp. 12-13)

Eichengreen then makes a rather bold claim to which, despite my agreement with, and admiration for, everything he has written to this point, I would take exception.

This explanation for the onset of the Depression, which emphasizes concurrent shifts in economic policy in the United States and abroad, the gold standard as the connection between them, and the combined impact of U.S. and foreign economic policies on the level of activity, has not previously appeared in the literature. Its elements are familiar, but they have not been fit together into a coherent account of the causes of the 1929 downturn. (p. 13)

I don’t think that Eichengreen’s claim of priority for his explanation of the onset of the 1929 downturn can be defended, though I certainly wouldn’t suggest that he did not arrive at his understanding of what caused the Great Depression largely on his own. But it is abundantly clear from the writings of Hawtrey and Cassel, starting as early as 1919, that the basic scenario outlined by Eichengreen had been spelled out by Hawtrey and Cassel well before the Great Depression started, as papers by Ron Batchelder and me and by Doug Irwin have thoroughly documented. Undoubtedly Eichengreen has added a great deal of additional insight and depth and done important quantitative and documentary empirical research to buttress his narrative account of the causes of the Great Depression, but the basic underlying theory has not changed.

Eichengreen is not unaware of Hawtrey’s contribution and in a footnote to the last quoted paragraph, Eichengreen writes as follows.

The closest precedents lie in the work of the British economists Lionel Robbins and Ralph Hawtrey, in the writings of German historians concerned with the causes of their economy’s precocious slump, and in Temin (1989). Robbins (1934) hinted at many of the mechanisms emphasized here but failed to develop the argument fully. Hawtrey emphasized how the contractionary shift in U.S. monetary policy, superimposed on an already weak British balance of payments position, forced a draconian contraction on the Bank of England, plunging the world into recession. See Hawtrey (1933), especially chapter 2. But Hawtrey’s account focused almost entirely on the United States and the United Kingdom, neglecting the reaction of other central banks, notably the Bank of France, whose role was equally important. (p. 13, n. 17)

Unfortunately, this footnote neither clarifies nor supports Eichengreen’s claim of priority for his account of the role of the gold standard in the Great Depression. First, the bare citation of Robbins’s 1934 book The Great Depression is confusing at best, because Robbins’s explanation of the cause of the Great Depression, which he himself later disavowed, is largely a recapitulation of the Austrian business-cycle theory that attributed the downturn to a crisis caused by monetary expansion by the Fed and the Bank of England. Eichengreen correctly credits Hawtrey for attributing the Great Depression, in almost diametric opposition to Robbins, to contractionary monetary policy by the Fed and the Bank of England, but then seeks to distinguish Hawtrey’s explanation from his own by suggesting that Hawtrey neglected the role of the Bank of France.

Eichengreen mentions Hawtrey’s account of the Great Depression in his 1933 book, Trade Depression and the Way Out, 2nd edition. I no longer have a copy of that work accessible to me, but in the first edition of this work published in 1931, Hawtrey included a brief section under the heading “The Demand for Gold as Money since 1914.”

[S]ince 1914 arbitrary changes in monetary policy and in the demand for gold as money have been greater and more numerous than ever before. First came the general abandonment of the gold standard by the belligerent countries in favour of inconvertible paper, and the release of hundreds of millions of gold. By 1920 the wealth value of gold had fallen to two-fifths of what it had been in 1913. The United States, which was almost alone at that time in maintaining a gold standard, thereupon started contracting credit and absorbing gold on a vast scale. In June 1924 the wealth value of gold was seventy per cent higher than at its lowest point in 1920, and the amount of gold held for monetary purposes in the United States had grown from $2,840,000,000 in 1920 to $4,488,000,000.

Other countries were then beginning to return to the gold standard, Germany in 1924, England in 1925, besides several of the smaller countries of Europe. In the years 1924-8 Germany absorbed over £100,000,000 of gold. France stabilized her currency in 1927 and re-established the gold standard in 1928, and absorbed over £60,000,000 in 1927-8. But meanwhile, the United States had been parting with gold freely and her holding had fallen to $4,109,000,000 in June 1928. Large as these movements had been, they had not seriously disturbed the world value of gold. . . .

But from 1929 to the present time has been a period of immense and disastrous instability. France has added more than £200,000,000 to her gold holding, and the United States more than $800,000,000. In the two and a half years the world’s gold output has been a little over £200,000,000, but a part of this has been required for the normal demands of industry. The gold absorbed by France and America has exceeded the fresh supply of gold for monetary purposes by some £200,000,000.

This has had to be wrung from other countries, and much of it has come from new countries such as Australia, Argentina and Brazil, which have been driven off the gold standard and have used their gold reserves to pay their external liabilities, such as interest on loans payable in foreign currencies. (pp. 20-21)

The idea that Hawtrey neglected the role of the Bank of France is clearly inconsistent with the work that Eichengreen himself cites as evidence for that neglect. Moreover, in Hawtrey’s 1932 work, The Art of Central Banking, the first chapter, entitled “French Monetary Policy,” directly addresses the issues supposedly neglected by Hawtrey. Here is an example.

I am inclined therefore to say that while the French absorption of gold in the period from January 1929 to May 1931 was in fact one of the most powerful causes of the world depression, that is only because it was allowed to react to an unnecessary degree upon the monetary policy of other countries. (p. 38)

In his foreword to the 1962 reprinting of his volume, Hawtrey mentions his chapter on French Monetary Policy in a section under the heading “Gold and the Great Depression.”

Conspicuous among countries accumulating reserves of foreign exchange was France. Chapter 1 of this book records how, in the course of stabilizing the franc in the years 1926-8, the Bank of France accumulated a vast holding of foreign exchange [i.e., foreign bank liabilities payable in gold], and in the ensuing years proceeded to liquidate it [for gold]. Chapter IV . . . shows the bearing of the French absorption of gold upon the starting of the great depression of the 1930s. . . . The catastrophe foreseen in 1922 [!] had come to pass, and the moment had come to point to the moral. The disaster was due to the restoration of the gold standard without any provision for international cooperation to prevent undue fluctuations in the purchasing power of gold. (pp. xiv-xv)

Moreover, on p. 254 of Golden Fetters, Eichengreen himself cites Hawtrey as one of the “foreign critics” of Emile Moreau, Governor of the Bank of France during the 1920s and 1930s “for failing to build “a structure of credit” on their gold imports. By failing to expand domestic credit and to repel gold inflows, they argued, the French had violated the rules of the gold standard game.” In the same paragraph Eichengreen also cites Hawtrey’s recommendation that the Bank of France change its statutes to allow for the creation of domestically supplied money and credit that would have obviated the need for continuing imports of gold.

Finally, writers such as Clark Johnson and Kenneth Mouré, who have written widely respected works on French monetary policy during the 1920s and 1930s, cite Hawtrey extensively as one of the leading contemporary critics of French monetary policy.

PS I showed Barry Eichengreen a draft of this post a short while ago, and he agrees with my conclusion that Hawtrey, and presumably Cassel also, had anticipated the key elements of his explanation of how the breakdown of the gold standard, resulting largely from the breakdown of international cooperation, was the primary cause of the Great Depression. I am grateful to Barry for his quick and generous response to my query.

Milton Friedman’s Rabble-Rousing Case for Abolishing the Fed

I recently came across this excerpt from a longer interview of Milton Friedman conducted by Brian Lamb on Cspan in 1994. In this excerpt Lamb asks Friedman what he thinks of the Fed, and Friedman, barely able to contain his ideological fervor, quickly rattles off his version of the history of the Fed, blaming the Fed, at least by implication, for all the bad monetary and macroeconomic events that happened between 1914, when the Fed came into existence, and the 1970s.

Here’s a rough summary of Friedman’s tirade:

I have long been in favor of abolishing [the Fed]. There is no institution in the United States that has such a high public standing and such a poor record of performance. . . . The Federal Reserve began operations in 1914 and presided over a doubling of prices during World War I. It produced a major collapse in 1921. It had a good period from about 1922 to 1928. It took actions in 1928 and 1929 that led to a major recession in 1929 and 1930, and it converted that recession by its actions into the Great Depression. The major villain in the Great Depression in my opinion was unquestionably the Federal Reserve System. Since that time, it presided over a doubling of prices in World War II. It financed the inflation of the 1970s. On the whole it has a very poor record. It’s done far more harm than good.

Let’s go through Friedman’s complaints one at a time.

World War I inflation.

Friedman blames World War I inflation on the Fed. Friedman, as I have shown in many previous posts, had a very shaky understanding of how the gold standard worked. His remark about the Fed’s “presiding over a doubling of prices” during World War I is likely yet another example of that incomprehension, though his use of the weasel words “presided over,” rather than the straightforward “caused,” does suggest that Friedman was merely trying to insinuate that the Fed was blameworthy while actually understanding that the Fed had almost no control over inflation in World War I. The US remained formally on the gold standard until April 6, 1917, when the US declared war on Germany and entered World War I, formally suspending the convertibility of the dollar into gold.

As long as the US remained on a gold standard, the value of the dollar was determined by the value of gold. The US was importing lots of gold during the first two and a half years of the World War I as the belligerents used their gold reserves and demonetized their gold coins to finance imports of war material from the US. The massive demonetization of gold caused gold to depreciate on world markets. Another neutral country, Sweden, actually left the gold standard during World War I to avoid the inevitable inflation associated with the wartime depreciation of gold. So it was either ignorant or disingenuous for Friedman to attribute the World War I inflation to the actions of the Federal Reserve. No country could have remained on the gold standard during World War I without accepting inflation, and the Federal Reserve had no legal authority to abrogate or suspend the legal convertibility of the dollar into a fixed weight of gold.

The Post-War Collapse of 1921

Friedman correctly blames the 1921 collapse on the Fed. However, after a rapid wartime and postwar inflation, the US was trying to recreate a gold standard while holding 40% of the world’s gold reserves. The Fed therefore took steps to stabilize the value of gold, which meant raising interest rates, thereby inducing a further inflow of gold into the US to stop the real value of gold from falling in international markets. The problem was that the Fed went overboard, causing a really steep, and probably unnecessary, deflation.

The Great Depression

Friedman is right that the Fed helped cause the Great Depression by its actions in 1928 and 1929, raising interest rates to try to quell rapidly rising stock prices. But the concerns about rising stock-market prices were probably misplaced, and the Fed’s raising of interest rates caused an inflow of gold into the US just when a gold outflow from the US was needed to accommodate the rising demand for gold on the part of the Bank of France and other central banks rejoining the gold standard and accumulating gold reserves. It was the sudden tightening of the world gold market, with the US and France and other countries rejoining the gold standard simultaneously trying to increase their gold holdings, that caused the value of gold to rise (and nominal prices to fall) in 1929 starting the Great Depression. Friedman totally ignored the international context in which the Fed was operating, failing to see that the US price level under the newly established gold standard, being determined by the international value of gold, was beyond the control of the Fed.

World War II Inflation

As with World War I, Friedman blamed the Fed for “presiding over” a doubling of prices in World War II. But unlike World War I, when rising US prices reflected a falling real value of gold caused by events outside the US and beyond the control of the Fed, in World War II rising US prices reflected the falling value of an inconvertible US dollar caused by Fed “money printing” at the behest of the President and the Treasury. But why did Friedman consider Fed money printing in World War II to have been a blameworthy act on the part of the Fed? The US was then engaged in a total war against the Axis powers. Under those circumstances, was the primary duty of the Fed to keep prices stable, or to use its control over the “printing press” to ensure that the US government had sufficient funds to win the war against Nazi totalitarianism and allied fascist forces, thereby preserving American liberties and values even more fundamental than keeping inflation low and enabling creditors to extract what was owed to them by their debtors in dollars of undiminished real purchasing power?

Now it’s true that many of Friedman’s libertarian allies were appalled by US participation in World War II, but Friedman, to his credit, did not share their disapproval of US participation in World War II. But, given his support for World War II, Friedman should have at least acknowledged the obvious role of inflationary finance in emergency war financing, a role which, as Earl Thompson and I and others have argued, rationalizes the historic legal monopoly on money printing maintained by almost all sovereign states. To condemn the Fed for inflationary policies during World War II without recognizing the critical role of the “printing press” in war finance was a remarkably uninformed and biased judgment on Friedman’s part.

1970s Inflation

The Fed certainly had a major role in the inflation of the 1970s, which was already starting to creep up as early as 1966 from the 1-2% rates that had prevailed from 1953 to 1965. The rise in inflation was again triggered by war-related expenditures, owing to the growing combat role of the US in Vietnam starting in 1965. The Fed’s role in rising inflation in the late 1960s and early 1970s was hardly the Fed’s finest hour, but again, it is unrealistic to expect a public institution like the Fed to withhold the financing necessary to support a military action undertaken by the national government. Certainly, the role of Arthur Burns, appointed by Nixon in 1970 to become Fed Chairman, in encouraging Nixon to impose wage-and-price controls as an anti-inflationary measure was one of the most disreputable chapters in the Fed’s history, and the cluelessness of Carter’s first Fed Chairman, G. William Miller, appointed to succeed Burns, is almost legendary. But given the huge oil-price increases of 1973-74 and 1978-79, a policy of accommodating those supply-side shocks by allowing a temporary increase in inflation was probably optimal. So, given the difficult circumstances under which the Fed was operating, the increased inflation of the 1970s was not entirely undesirable.

But although Friedman was often sensitive to the subtleties and nuances of policy making when rendering scholarly historical and empirical judgments, he rarely allowed subtleties and nuances to encroach on his denunciations when he was operating in full rabble-rousing mode.

Pedantry and Mastery in Following Rules

From George Polya’s classic How to Solve It (p. 148).

To apply a rule to the letter, rigidly, unquestioningly, in cases where it fits and cases where it does not fit, is pedantry. Some pedants are poor fools; they never did understand the rule which they apply so conscientiously and so indiscriminately. Some pedants are quite successful; they understood their rule, at least in the beginning (before they became pedants), and chose a good one that fits in many cases and fails only occasionally.

To apply a rule with natural ease, with judgment, noticing the cases where it fits, and without ever letting the words of the rule obscure the purpose of the action or the opportunities of the situation, is mastery.

Polya, of course, was distinguishing between pedantry and mastery in applying rules for problem solving, but his distinction can be applied more generally: a distinction between following rules using judgment (aka discretion) and following rules mechanically without exercising judgment (i.e., without using discretion). Following rules by rote need not be dangerous when circumstances are more or less those envisioned when the rules were originally articulated, but, when unforeseen circumstances arise,  making the rule unsuitable to the new circumstances, following rules mindlessly can lead to really bad outcomes.

In the real world, the rules that we live by have to be revised and reinterpreted constantly in the light of experience and of new circumstances and changing values. Rules are supposed to conform to deeper principles, but the specific rules that we try to articulate to guide our actions are in need of periodic revision and adjustment to changing circumstances.

In deciding cases, judges change the legal rules that they apply by recognizing subtle — and relevant — distinctions that need to be taken into account in rendering decisions. They do not adjust rules willfully and arbitrarily. Instead, relying on deeper principles of justice and humanity, they adjust or bend the rules to temper the injustices that would result from a mechanical and unthinking application of the rules. By exercising judgment — in other words, by doing what judges are supposed to do — they uphold, rather than subvert, the rule of law in the process of modifying the existing rules. The modern fetish for depriving judges of the discretion to exercise judgment in rendering decisions is antithetical to the concept of the rule of law.

A similar fetish for rules-based monetary policy, i.e., a monetary system requiring the monetary authority to follow some numerical rule mechanically, is an equally outlandish misapplication of the idea that law is nothing more than a system of rules and that judges should do no more than select the relevant rule to be applied and render a decision based on that rule, without considering whether the decision is consistent with the deeper underlying principles of justice on which the legal system as a whole is based.

Because judges exercise coercive power over the lives and property of individuals, the rule of law requires their decisions to be justified in terms of the explicit rules and implicit and explicit principles of the legal system judges apply. And litigants have a right to appeal judgments rendered if they can argue that the judge misapplied the relevant legal rules. Having no coercive power over the lives or property of individuals, the monetary authority need not be bound by the kind of legal constraints to which judges are subject in rendering decisions that directly affect the lives and property of individuals.

The apotheosis of the fetish for blindly following rules in monetary policy was the ideal expressed by Henry Simons in his famous essay “Rules versus Authorities in Monetary Policy” in which he pleaded for a monetary rule that “would work mechanically, with the chips falling where they may. We need to design and establish a system good enough so that, hereafter, we may hold to it unrationally — on faith — as a religion, if you please.”

However, Simons, recovering from this momentary lapse into irrationality, quickly conceded that his plea for a monetary system good enough to be held on faith was impractical, abandoning it in favor of the more modest goal of stabilizing the price level. But Simons’s student Milton Friedman, surpassing his teacher in pedantry, invented what came to be known as his k-percent rule, under which the Federal Reserve was to be required to make the total quantity of money in the economy increase continuously at an annual rate of growth equal to k percent. Friedman actually believed that his rule could be implemented by a computer, so that he confidently — and foolishly — recommended abolishing the Fed.
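To make concrete what such a mechanical rule amounts to, here is a minimal sketch, in Python, of the money-stock path a k-percent rule would mandate; the starting money stock and the value of k below are my own illustrative assumptions, not figures proposed by Friedman.

```python
# Minimal sketch of a k-percent money-growth rule.
# The starting money stock and the value of k are illustrative assumptions,
# not figures proposed by Friedman.

def k_percent_path(m0: float, k: float, years: int) -> list[float]:
    """Money stock mandated by the rule: grow m0 by k percent every year."""
    return [m0 * (1 + k / 100) ** t for t in range(years + 1)]

if __name__ == "__main__":
    # e.g., a money stock of 1,000 (in billions) growing at 4% per year
    for year, m in enumerate(k_percent_path(m0=1000.0, k=4.0, years=5)):
        print(f"year {year}: mandated money stock = {m:,.1f}")
```

The point of the sketch is only that the rule leaves nothing to judgment: the entire future path of the money stock follows from two numbers.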

Eventually, after erroneously forecasting the return of double-digit inflation for nearly two decades, Friedman, a fervent ideologue but also a superb empirical economist, reluctantly allowed his ideological predispositions to give way in the face of contradictory empirical evidence and abandoned his k-percent rule. That was a good, if long overdue, call on Friedman’s part, and it should serve as a lesson and a warning to advocates of imposing overly rigid rules on the monetary authorities.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Years, in contrast to the more recent practice of holding the convention in the week after New Years. That’s why the anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemoration of that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president Olivier Blanchard, who also wrote one of the papers discussed at the session. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Eric Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved and the economy would proceed exactly – and optimally – as before as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, at least one, and potentially more than one, other market would be disequilibrated. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.
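A trivial numerical sketch of that point, using hypothetical goods and prices of my own choosing, shows that doubling every money price leaves every relative price unchanged:

```python
# Hypothetical money prices; doubling them all leaves relative prices unchanged.
prices = {"bread": 2.0, "wine": 8.0, "cloth": 4.0}
doubled = {good: 2 * p for good, p in prices.items()}

def relative_to(numeraire: str, p: dict) -> dict:
    """Express all prices relative to a chosen numeraire good."""
    return {good: price / p[numeraire] for good, price in p.items()}

print(relative_to("bread", prices))   # {'bread': 1.0, 'wine': 4.0, 'cloth': 2.0}
print(relative_to("bread", doubled))  # identical relative prices
```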

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that simply reflects that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.
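To illustrate the short-run/long-run distinction, here is a minimal simulation using a standard expectations-augmented Phillips curve with adaptive expectations; the functional form and all parameter values are my own illustrative assumptions, not anything taken from Friedman's address.

```python
# Expectations-augmented Phillips curve with adaptive expectations (illustrative):
#   u_t = u_star - a * (pi_t - pi_e_t)          (short-run Phillips curve)
#   pi_e_{t+1} = pi_e_t + g * (pi_t - pi_e_t)   (expectations adapt)
# Holding actual inflation permanently above what was initially expected lowers
# unemployment at first, but unemployment drifts back to u_star as expectations
# catch up -- the long-run curve is vertical.

U_STAR = 5.0   # "natural" unemployment rate, percent (assumed)
A = 0.5        # short-run trade-off coefficient (assumed)
G = 0.4        # speed of adjustment of expectations (assumed)

pi, pi_e = 6.0, 2.0    # actual inflation raised above initially expected inflation
for t in range(10):
    u = U_STAR - A * (pi - pi_e)
    print(f"t={t}: expected inflation={pi_e:.2f}%, unemployment={u:.2f}%")
    pi_e += G * (pi - pi_e)
```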

Two quick parenthetical remarks. Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. But the earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to a discussion of Prices and Production by Alvin Hansen and Herbert Tout in their 1933 article reviewing recent business-cycle literature in Econometrica in which they criticized Hayek’s assertion that a monetary expansion that financed investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what will the new equilibrium of the economy look like? The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let us say that (Y – Y*) is the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) + β((Y* – Y)/Y*).

This equation says that the inflation target in any period would be a linear combination of the default Inflation Target of 2%, times an adjustment coefficient α designed to keep successively chosen Inflation Targets from deviating from the long-term price-level path corresponding to 2% annual inflation, and some fraction β of the output shortfall (Y* – Y) expressed as a percentage of potential GDP. Thus, for example, if the output gap were -5% and β were 0.5, the short-term Inflation Target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert back to a target price-level corresponding to a 2% annual rate.
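Here is a minimal sketch of the rule just described, using the illustrative parameter values from the example above (α = 1, β = 0.5); it is meant only to make the arithmetic explicit, not as a definitive implementation.

```python
# Sketch of the inflation-target rule described above:
#   IT = alpha * 2% + beta * (Y* - Y)/Y*
# Parameter values are the illustrative ones used in the text.

def inflation_target(y: float, y_star: float, alpha: float = 1.0,
                     beta: float = 0.5, base_target: float = 2.0) -> float:
    """Short-run inflation target (percent), raised when output falls short of potential."""
    shortfall = (y_star - y) / y_star * 100   # output shortfall as % of potential GDP
    return alpha * base_target + beta * shortfall

print(inflation_target(y=95.0, y_star=100.0))    # -5% output gap  -> IT = 4.5%
print(inflation_target(y=100.0, y_star=100.0))   # no output gap   -> IT = 2.0%
```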

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would correspond to and promote the goal of maximum employment by raising the inflation target when unemployment was high as a countercyclical policy for promoting recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively higher inflation in periods of high unemployment would be compensated for by periods of relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.
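A minimal sketch of that alternative, with an assumed potential real growth rate and starting level of nominal GDP (both my own illustrative numbers): the target path simply grows at potential real growth plus the 2% average inflation target, and the level path serves as the benchmark against which any period's shortfall or overshoot would subsequently be made up.

```python
# Illustrative nominal-GDP level-target path consistent with 2% average inflation.
# Potential real growth and the starting level of NGDP are assumed numbers.

REAL_GROWTH = 2.0       # assumed potential real GDP growth, percent per year
INFLATION_TARGET = 2.0  # long-run average inflation target, percent per year
NGDP_0 = 20_000.0       # assumed starting nominal GDP (say, $ billions)

ngdp_growth = REAL_GROWTH + INFLATION_TARGET   # target nominal growth rate, ~4%
for year in range(6):
    target_level = NGDP_0 * (1 + ngdp_growth / 100) ** year
    print(f"year {year}: NGDP target = {target_level:,.0f}")
```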

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious — and a disturbing – question, because it suggests that the free-market ideology which is a powerful – though not necessarily the most powerful — force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A very broad range of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both Thatcher and Reagan to lose popular support. But the growing prestige of free-market economics and economists provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close as the Nobel Prize winner was quickly ushered out of shouting range of White House reporters. On the whole, however, Republican politicians have not lacked for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, but which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent of decontrol described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.
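To make the entitlements mechanism concrete, here is a stylized numerical sketch; all prices and the entitlement fraction are hypothetical numbers of my own choosing, intended only to illustrate how access to price-controlled domestic crude could push the net cost of an imported barrel below the world price and damp the effect of changes in that price.

```python
# Stylized sketch of the crude-oil "entitlements" mechanism. All numbers are
# hypothetical, chosen only to illustrate the logic described in the text.

WORLD_PRICE = 40.0          # world price of crude, $/barrel (hypothetical)
CONTROLLED_PRICE = 10.0     # controlled price of domestic crude, $/barrel (hypothetical)
ENTITLEMENT_FRACTION = 0.5  # barrels of controlled-price domestic crude obtainable
                            # per barrel of imported crude (hypothetical)

def net_cost_of_import(world_price: float) -> float:
    """Net cost per imported barrel once the value of the entitlement is counted."""
    subsidy = ENTITLEMENT_FRACTION * (world_price - CONTROLLED_PRICE)
    return world_price - subsidy

print(net_cost_of_import(WORLD_PRICE))         # 25.0: below the $40 world price
# A $10 rise in the world price raises the refiner's net cost by only $5,
# illustrating the reduced sensitivity of refiners to the international price.
print(net_cost_of_import(WORLD_PRICE + 10.0))  # 30.0
```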

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the implicit assumption underlying the idea that freely determined market prices optimally guide the decentralized decisions of economic agents is that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy and sell and produce closely correspond to the social costs and benefits that an omniscient central planner — if such a being actually did exist — would take into account in making his plans. But in the real world, the private costs and benefits considered by individual agents when making their plans and decisions often don’t reflect all relevant costs and benefits, so the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad, as we – i.e., those of us who are not philosophical anarchists – all acknowledge in practice, and in theory, when we affirm that competing private armies and competing private police forces and competing judicial systems would not provide for common defense and for domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those decisions that are subject to, and heavily — even mainly — influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat, because insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek the kinds of knowledge that can be exploited in trades with less-well informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society from the acquisition of such knowledge, inasmuch as gains accruing to the exploiter are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted on various forms of financial research aimed at gaining, slightly sooner than others gain it, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products and the structuring of other kinds of transactions designed mainly to capitalize on and exploit individuals’ tendency to underestimate the probability of adverse events (e.g., late repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity by enabling patent-protected monopolies to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, his counterparty could easily be multiplied. Because much, if not most, of the highest income earned is associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have led resources to be shifted from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which the private benefits and costs are very different from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite having a Harvard Ph.D. when he arrived back at UCLA as an assistant professor in the early 1960s, where he had been an undergraduate student of Armen Alchian, he too started out as a pro-free-market Friedman acolyte. But gradually adopting the Buchanan public-choice paradigm – Nancy Maclean, please take note — of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously increasingly ideological and increasingly dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds — as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causality assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that takes you from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causality assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
