Dr. Shelton Remains Outspoken: She Should Have Known Better

I started blogging in July 2011, and in one of my first blogposts I discussed an article in the now defunct Weekly Standard by Dr. Judy Shelton entitled “Gold Standard or Bust.” I wrote then:

I don’t know, and have never met Dr. Shelton, but she has been a frequent op-ed contributor to the Wall Street Journal and various other publications of a like ideological orientation for 20 years or more, invariably advocating a return to the gold standard.  In 1994, she published a book Money Meltdown touting the gold standard as a cure for all our monetary ills.

I was tempted to provide a line-by-line commentary on Dr. Shelton’s Weekly Standard piece, but it would be tedious and churlish to dwell excessively on her deficiencies as a wordsmith or lapses from lucidity.

So I was not very impressed by Dr. Shelton then. I have had occasion to write about her again a few times since, and I cannot report that I have detected any improvement in the lucidity of her thought or the clarity of her exposition.

Aside from, or perhaps owing to, her infatuation with the gold standard, Dr. Shelton seems to have developed a deep aversion to what is commonly, and usually misleadingly, known as currency manipulation. Deploying her modest entrepreneurial skills as a monetary-policy pundit, Dr. Shelton has tried to use the specter of currency manipulation as a talking point for gold-standard advocacy. So, in 2017 Dr. Shelton wrote an op-ed about currency manipulation for the Wall Street Journal that was so woefully uninformed and unintelligible that I felt obligated to write a blogpost just for her, a tutorial on the ABCs of currency manipulation, as I called it then. Here’s an excerpt from my tutorial:

[i]t was no surprise to see in Tuesday’s Wall Street Journal that monetary-policy entrepreneur Dr. Judy Shelton has written another one of her screeds promoting the gold standard, in which, showing no awareness of the necessary conditions for currency manipulation, she assures us that a) currency manipulation is a real problem and b) that restoring the gold standard would solve it.

Certainly the rules regarding international exchange-rate arrangements are not working. Monetary integrity was the key to making Bretton Woods institutions work when they were created after World War II to prevent future breakdowns in world order due to trade. The international monetary system, devised in 1944, was based on fixed exchange rates linked to a gold-convertible dollar.

No such system exists today. And no real leader can aspire to champion both the logic and the morality of free trade without confronting the practice that undermines both: currency manipulation.

Ahem, pray tell, which rules relating to exchange-rate arrangements does Dr. Shelton believe are not working? She doesn’t cite any. And what on earth does “monetary integrity” even mean, and what does that high-minded, but totally amorphous, concept have to do with the rules of exchange-rate arrangements that aren’t working?

Dr. Shelton mentions “monetary integrity” in the context of the Bretton Woods system, a system based — well, sort of — on fixed exchange rates, forgetting – or choosing not — to acknowledge that, under the Bretton Woods system, exchange rates were also unilaterally adjustable by participating countries. Not only were they adjustable, but currency devaluations were implemented on numerous occasions as a strategy for export promotion, the most notorious example being Britain’s 30% devaluation of sterling in 1949, just five years after the Bretton Woods agreement had been signed. Indeed, many other countries, including West Germany, Italy, and Japan, also had chronically undervalued currencies under the Bretton Woods system, as did France after it rejoined the gold standard in 1926 at a devalued rate deliberately chosen to ensure that its export industries would enjoy a competitive advantage.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange-rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard. And the most egregious recent example of currency manipulation was undertaken by the Chinese central bank when it effectively pegged the yuan to the dollar at a fixed rate. Keeping its exchange rate fixed against the dollar was precisely the offense that the currency-manipulation police accused the Chinese of committing.

I leave it to interested readers to go back and finish the rest of my tutorial for Dr. Shelton. And if you read carefully and attentively, you are likely to understand the concept of currency manipulation a lot more clearly than when you started.
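For readers who prefer to see the offsetting mechanism described in the tutorial in symbols, here is a minimal formalization (the notation is mine, not the tutorial’s). Let $e$ be the nominal exchange rate (units of domestic currency per unit of foreign currency), $P$ the domestic price level, and $P^*$ the foreign price level, so that the real exchange rate is

$$q = \frac{eP^*}{P}.$$

A nominal depreciation raises $e$ and, at first, $q$, giving domestic tradables a competitive edge. But the resulting export surplus and the accompanying cash inflow then either push $e$ back down or, if the nominal rate is maintained, push $P$ up, so that $q$ drifts back toward its original level. Only a central bank willing to accumulate reserves without limit, or to engineer a chronic excess demand for money, can keep $q$ depressed for long.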

Alas, it’s obvious that Dr. Shelton has either not read or not understood the tutorial I wrote for her, because, in her latest pronouncement on the subject she covers substantially the same ground as she did two years ago, with no sign of increased comprehension of the subject on which she expounds with such misplaced self-assurance. Here are some samples of Dr. Shelton’s conceptual confusion and historical ignorance.

History can be especially informative when it comes to evaluating the relationship between optimal economic performance and monetary regimes. In the 1930s, for example, the “beggar thy neighbor” tactic of devaluing currencies against gold to gain a trade export advantage hampered a global economic recovery.

Beggar-thy-neighbor policies were indeed adopted by the United States, but they were adopted first in 1922 (the Fordney-McCumber Act) and again in 1930 (the Smoot-Hawley Act), when the US was on the gold standard with the value of the dollar pegged at $20.67 per ounce of gold. The Great Depression started in late 1929, but the stock market crash of 1929 may have been in part precipitated by fears that the Smoot-Hawley Act would be passed by Congress and signed into law by President Hoover.

At any rate, exchange rates among most major countries were pegged to either gold or the dollar until September 1931 when Britain suspended the convertibility of the pound into gold. The Great Depression was the result of a rapid deflation caused by gold accumulation by central banks as they rejoined the gold standard that had been almost universally suspended during World War I. Countries that remained on the gold standard during the Great Depression were condemned to suffer deflation as gold became ever more valuable in real terms, so that currency depreciation against gold was the only pathway to recovery. Thus, once convertibility was suspended and the pound allowed to depreciate, the British economy stopped contracting and began a modest recovery with slowly expanding output and employment.

The United States, however, kept the dollar pegged to gold at its $20.67-an-ounce parity until April 1933, when FDR saved the American economy by suspending convertibility and commencing a policy of deliberate reflation (i.e., inflation to restore the 1926 price level). An unprecedented expansion of output, employment and income accompanied the rise in prices following the suspension of the gold standard. Currency depreciation was the key to recovery from, not the cause of, depression.

Having exposed her ignorance of the causes of the Great Depression, Dr. Shelton then begins a descent into her confusion about the subject of currency manipulation, about which I had tried to tutor her, evidently without success.

The absence of rules aimed at maintaining a level monetary playing field invites currency manipulation that could spark a backlash against the concept of free trade. Countries engaged in competitive depreciation undermine the principles of genuine competition, and those that have sought to participate in good faith in the global marketplace are unfairly penalized by the monetary sleight of hand executed through central banks.

Currency manipulation is possible only under specific conditions. A depreciating currency is not normally a manipulated currency. Currencies fluctuate in relative values for many different reasons, but if prices adjust in rough proportion to the change in exchange rates, the competitive positions of the countries are only temporarily affected by the change in exchange rates. For a country to gain a sustained advantage for its export and import-competing industries by depreciating its exchange rate, it must adopt a monetary policy that consistently provides less cash than the public demands to satisfy its liquidity needs, forcing the public to obtain the desired cash balances through a balance-of-payments surplus and an inflow of foreign-exchange reserves into the country’s central bank or treasury.
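One compact way to state that necessary condition (my gloss, using the standard central-bank balance-sheet identity rather than anything in the op-ed): under a pegged or managed exchange rate, the change in the monetary base can be written as

$$\Delta M = \Delta D + \Delta R,$$

where $D$ is domestic credit extended by the central bank and $R$ is its holdings of foreign-exchange reserves. If the central bank persistently holds the growth of domestic credit $\Delta D$ below the growth in the public’s demand for money, the only way the public can obtain the cash balances it wants is through a balance-of-payments surplus that shows up as reserve accumulation, $\Delta R > 0$. That combination of monetary stringency and exchange-rate policy is what the paragraph above describes as currency manipulation.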

U.S. leadership is necessary to address this fundamental violation of free-trade practices and its distortionary impact on free-market outcomes. When the United States’ trading partners engage in currency manipulation, it is not competing — it’s cheating.

That is why it is vital to weigh the implications of U.S. monetary policy on the dollar’s exchange-rate value against other currencies. Trade and financial flows can be substantially altered by speculative market forces responding to the public comments of officials at the helm of the European Central Bank, the Bank of Japan or the People’s Bank of China — with calls for “additional stimulus” alerting currency players to impending devaluation policies.

Dr. Shelton here reveals a comprehensive misunderstanding of the difference between a monetary policy that aims to stimulate economic activity in general by raising the price level, or increasing the rate of inflation, to stimulate expenditure, and a policy of monetary restraint that aims to raise the prices of domestic export and import-competing products relative to the prices of domestic non-tradable goods and services, e.g., new homes and apartments. It is only the latter combination of tight monetary policy and exchange-rate intervention to depreciate a currency in foreign-exchange markets that qualifies as currency manipulation.

And, under that understanding, it is obvious that currency manipulation is possible under a fixed-exchange-rate system, as France demonstrated in the 1920s and 1930s, and as most European countries and Japan demonstrated in the 1950s and early 1960s under the Bretton Woods system so well loved by Dr. Shelton.

In the 1950s and early 1960s, the US dollar was chronically overvalued. The situation was not remedied until the 1960s, under the Kennedy administration, when consistently loose monetary policy by the Fed made currency manipulation so costly for the Germans and Japanese that they revalued their currencies upward to avoid the inflationary consequences of US monetary expansion.

And then, in a final flourish, Dr. Shelton puts her ignorance of what happened in the Great Depression on public display with the following observation.

When currencies shift downward against the dollar, it makes U.S. exports more expensive for consumers in other nations. It also discounts the cost of imported goods compared with domestic U.S. products. Downshifting currencies against the dollar has the same punishing impact as a tariff. That is why, as in the 1930s during the Great Depression, currency devaluation prompts retaliatory tariffs.

The retaliatory tariffs were imposed in response to the US tariffs that preceded, or were imposed at the outset of, the Great Depression in 1930. The devaluations against gold promoted economic recovery, and were accompanied by a general reduction in tariff levels under FDR after the US devalued the dollar against gold and the remaining gold-standard currencies. Whereof she knows nothing, thereof Dr. Shelton would do better to remain silent.


Phillips Curve Musings: Second Addendum on Keynes and the Rate of Interest

In my two previous posts (here and here), I have argued that the partial-equilibrium analysis of a single market, like the labor market, is inappropriate and not particularly relevant, in situations in which the market under analysis is large relative to other markets, and likely to have repercussions on those markets, which, in turn, will have further repercussions on the market under analysis, violating the standard ceteris paribus condition applicable to partial-equilibrium analysis. When the standard ceteris paribus condition of partial equilibrium is violated, as it surely is in analyzing the overall labor market, the analysis is, at least, suspect, or, more likely, useless and misleading.

I suggested that Keynes in chapter 19 of the General Theory was aiming at something like this sort of argument, and I think he was largely right in his argument. But, in all modesty, I think that Keynes would have done better to have couched his argument in terms of the distinction between partial-equilibrium and general-equilibrium analysis. But his Marshallian training, which he simultaneously embraced and rejected, may have made it difficult for him to adopt the Walrasian general-equilibrium approach that Marshall and the Marshallians regarded as overly abstract and unrealistic.

In my next post, I suggested that the standard argument about the tendency of public-sector budget deficits to raise interest rates by competing with private-sector borrowers for loanable funds is fundamentally misguided, because it, too, inappropriately applies partial-equilibrium analysis to a narrow market for government securities, or even to a more broadly defined market for loanable funds in general.

That is a gross mistake, because the rate of interest is determined in a general-equilibrium system along with markets for all long-lived assets, embodying expected flows of income that must be discounted to the present to determine an estimated present value. Some assets are riskier than others and that risk is reflected in those valuations. But the rate of interest is distilled from the combination of all of those valuations, not prior to, or apart from, those valuations. Interest rates of different duration and different risk are embedded in the entire structure of current and expected prices for all long-lived assets. To focus solely on a very narrow subset of markets for newly issued securities, whose combined value is only a small fraction of the total value of all existing long-lived assets, is to miss the forest for the trees.

What I want to point out in this post is that Keynes, whom I credit for having recognized that partial-equilibrium analysis is inappropriate and misleading when applied to an overall market for labor, committed exactly the same mistake that he condemned in the context of the labor market, by asserting that the rate of interest is determined in a single market: the market for money. According to Keynes, the market rate of interest is that rate which equates the stock of money in existence with the amount of money demanded by the public. The higher the rate of interest, Keynes argued, the less money the public wants to hold.

Keynes, applying the analysis of Marshall and his other Cambridge predecessors, provided a wonderful analysis of the factors influencing the amount of money that people want to hold (usually expressed in terms of a fraction of their income). However, as superb as his analysis of the demand for money was, it was a partial-equilibrium analysis, and there was no recognition on his part that other markets in the economy are influenced by, and exert influence upon, the rate of interest.

What makes Keynes’s partial-equilibrium analysis of the interest rate so difficult to understand is that chapter 17 of the General Theory, a magnificent tour de force of verbal general-equilibrium theorizing, explains the relationships that must exist between the expected returns on alternative long-lived assets that are held in equilibrium. Yet, disregarding his own analysis of the equilibrium relationship between returns on alternative assets, Keynes insisted on explaining the rate of interest in a one-period model (a model roughly corresponding to IS-LM) with only two alternative assets: money and bonds, but no real capital asset.

A general-equilibrium analysis of the rate of interest ought to have at least two periods, and it ought to have a real capital good that may be held in the present for use or consumption in the future, a possibility entirely missing from the Keynesian model. I have discussed this major gap in the Keynesian model in a series of posts (here, here, here, here, and here) about Earl Thompson’s 1976 paper “A Reformulation of Macroeconomic Theory.”

Although Thompson’s model seems to me too simple to account for many macroeconomic phenomena, it would have been a far better starting point for the development of macroeconomics than any of the models from which modern macroeconomic theory has evolved.

Phillips Curve Musings: Addendum on Budget Deficits and Interest Rates

In my previous post, I discussed a whole bunch of stuff, but I spent a lot of time discussing the inappropriate use of partial-equilibrium supply-demand analysis to explain price and quantity movements when price and quantity movements in those markets are dominated by precisely those forces that are supposed to be held constant — the old ceteris paribus qualification — in doing partial equilibrium analysis. Thus, the idea that in a depression or deep recession, high unemployment can be cured by cutting nominal wages is a classic misapplication of partial equilibrium analysis in a situation in which the forces primarily affecting wages and employment are not confined to a supposed “labor market,” but reflect broader macro-economic conditions. As Keynes understood, but did not explain well to his economist readers, analyzing unemployment in terms of the wage rate is futile, because wage changes induce further macroeconomic effects that may counteract whatever effects resulted from the wage changes.

Well, driving home this afternoon, I was listening to Marketplace on NPR with Kai Ryssdal interviewing Neil Irwin. Ryssdal asked Irwin why there is so much nervousness about the economy when unemployment and inflation are both about as low as they have ever been — certainly at the same time — in the last 50 years. Irwin’s response was that it is unsettling to many people that, with budget deficits high and rising, we observe stable inflation and falling interest rates on long-term Treasuries. This, after we have been told for so long that budget deficits drive up the cost of borrowing money and are also a major cause of inflation. The cognitive dissonance of stable inflation, falling interest rates and rapidly rising budget deficits, Irwin suggested, accounts for a vague feeling of disorientation, and gives rise to fears that the current apparent stability can’t last very long and will lead to some sort of distress or crisis in the future.

I’m not going to try to reassure Ryssdal and Irwin that there will never be another crisis. I certainly wouldn’t venture to say that all is now well with the Republic, much less with the rest of the world. I will just stick to the narrow observation that the bad habit of predicting the future course of interest rates by the size of the current budget deficit has no basis in economic theory, and reflects a colossal misunderstanding of how interest rates are determined. And that misunderstanding is precisely the one I discussed in my previous post about the misuse of partial-equilibrium analysis when general-equilibrium analysis is required.

To infer anything about interest rates from the market for government debt is a category error. Government debt is a long-lived financial asset providing an income stream, and its price reflects the current value of the promised income stream. Based on the price of a particular instrument with a given duration, it is possible to calculate a corresponding interest rate. That calculation is just a fairly simple mathematical exercise.
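Since the post calls backing out that interest rate a fairly simple mathematical exercise, here is a minimal sketch of the calculation in Python; the bond in the example is hypothetical, chosen only to illustrate the arithmetic of inverting the discounted-cash-flow formula.

```python
# A minimal sketch (hypothetical numbers) of the "simple mathematical exercise":
# backing out the yield implied by the observed price of a coupon bond,
# by solving price = sum of discounted cash flows for the yield y.

def bond_price(face, coupon_rate, years, y, freq=2):
    """Present value of a bond's cash flows at an annualized yield y."""
    c = face * coupon_rate / freq          # periodic coupon payment
    n = years * freq                       # number of coupon periods
    pv = sum(c / (1 + y / freq) ** t for t in range(1, n + 1))
    pv += face / (1 + y / freq) ** n       # discounted principal repayment
    return pv

def implied_yield(price, face, coupon_rate, years, lo=-0.05, hi=1.0, tol=1e-10):
    """Bisection: find y such that bond_price(...) equals the observed price."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid) > price:
            lo = mid                       # computed price too high -> yield must be higher
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# Hypothetical 10-year, 2% coupon bond trading at 95 per 100 of face value:
print(round(implied_yield(95.0, 100.0, 0.02, 10) * 100, 3), "% yield")
```

Bisection is used only because it is simple and robust; any root-finder would do, since the price is monotonically decreasing in the yield.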

But it is a mistake to think that the interest rate for that duration is determined in the market for government debt of that duration. Why? Because there are many other physical assets or financial instruments that could be held instead of government debt of any particular duration. And asset holders in a financially sophisticated economy can easily shift from one type of asset to another at will, at fairly minimal transactions costs. So it is very unlikely that any long-lived asset is so special that the expected yield from holding that asset varies independently of the expected yields from holding the alternative assets that could be held instead.

That’s not to say that there are no differences in the expected yields from different assets, just that at the margin, taking into account the different characteristics of different assets, their expected returns must be fairly closely connected, so that any large change in the conditions in the market for any single asset is unlikely to have a large effect on the price of that asset alone. Rather, any change in one market will cause shifts in asset-holdings across different markets that will tend to offset the immediate effect that would have been reflected in a single market viewed in isolation.

This holds true as long as each specific market is relatively small compared to the entire economy. That is certainly true for the US economy and the world economy into which the US economy is very closely integrated. The value of all assets — real and financial — dwarfs the total outstanding value of US Treasuries. Interest rates are a measure of the relationship between expected flows of income and the value of the underlying assets.

To assume that increased borrowing by the US government to fund a substantial increase in the US budget deficit will substantially affect the overall economy-wide relationship between current and expected future income flows on the one hand and asset values on the other is wildly implausible. So no one should be surprised to find that the recent sharp increase in the US budget deficit has had no perceptible effect on the yields on US government debt.

A more likely cause of a change in interest rates would be an increase in expected inflation, but inflation expectations are not necessarily correlated with the budget deficit, and changing inflation expectations aren’t necessarily reflected in corresponding changes in nominal interest rates, as Monetarist economists have often maintained they are.

So it’s about time that we disabused ourselves of the simplistic notion that changes in the budget deficit have any substantial effect on interest rates.

Phillips Curve Musings

There’s a lot of talk about the Phillips Curve these days; people wonder why, with the unemployment rate reaching historically low levels, nominal and real wages have increased minimally, with inflation remaining securely between 1.5 and 2%. The Phillips Curve, for those untutored in basic macroeconomics, depicts a relationship between inflation and unemployment. The original empirical Phillips Curve relationship showed that high rates of unemployment were associated with low or negative rates of wage inflation while low rates of unemployment were associated with high rates of wage inflation. This empirical relationship suggested a causal theory that the rate of wage increase tends to rise when unemployment is low and tends to fall when unemployment is high, a causal theory that seems to follow from a simple supply-demand model in which wages rise when there is an excess demand for labor (unemployment is low) and wages fall when there is an excess supply of labor (unemployment is high).
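For readers who want that relationship in symbols, the textbook shorthand (not anything in Phillips’s original scatter diagram) is

$$\dot{w}_t = f(u_t), \qquad f'(u) < 0,$$

with $\dot{w}_t$ the rate of wage inflation and $u_t$ the unemployment rate; the familiar expectations-augmented version replaces this with $\pi_t = \pi_t^e - \alpha\,(u_t - u^*)$, where $\pi_t^e$ is expected inflation and $u^*$ is the unemployment rate at which inflation neither accelerates nor decelerates.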

Viewed in this light, low unemployment, signifying a tight labor market, signals that inflation is likely to rise, providing a rationale for monetary policy to be tightened to prevent inflation from rising as it normally does when unemployment is low. Seeming to accept that rationale, the Fed has gradually raised interest rates for the past two years or so. But the increase in interest rates has now slowed the expansion of employment and the decline in unemployment to historic lows. Nor has the improving employment situation resulted in any increase in price inflation, and it has produced at most a minimal increase in the rate of increase in wages.

In a couple of previous posts about sticky wages (here and here), I’ve questioned whether the simple supply-demand model of the labor market motivating the standard interpretation of the Phillips Curve is a useful way to think about wage adjustment and inflation-employment dynamics. I’ve offered a few reasons why the supply-demand model, though applicable in some situations, is not useful for understanding how wages adjust.

The particular reason that I want to focus on here is Keynes’s argument in chapter 19 of the General Theory (though I express it in terms different from his) that supply-demand analysis can’t explain how wages and employment are determined. The upshot of his argument, I believe, is that supply-demand analysis only works in a partial-equilibrium setting in which feedback effects from the price changes in the market under consideration don’t affect equilibrium prices in other markets, so that the position of the supply and demand curves in the market of interest can be assumed stable even as price and quantity in that market adjust from one equilibrium to another (the comparative-statics method).

Because the labor market, affecting almost every other market, is not a small part of the economy, partial-equilibrium analysis is unsuitable for understanding that market, the normal stability assumption being untenable if we attempt to trace the adjustment from one labor-market equilibrium to another after an exogenous disturbance. In the supply-demand paradigm, unemployment is a measure of the disequilibrium in the labor market, a disequilibrium that could – at least in principle — be eliminated by a wage reduction sufficient to equate the quantity of labor services supplied with the amount demanded. Viewed from this supply-demand perspective, the failure of the wage to fall to a supposed equilibrium level is attributable to some sort of endogenous stickiness or some external impediment to wage adjustment (minimum-wage legislation or union intransigence) that prevents the normal equilibrating free-market adjustment mechanism from operating. But the habitual resort to supply-demand analysis by economists, reinforced and rewarded by years of training and professionalization, is actually misleading when applied in an inappropriate context.

So Keynes was right to challenge this view of a potentially equilibrating market mechanism that is somehow stymied from behaving in the manner described in the textbook version of supply-demand analysis. Instead, Keynes argued that the level of employment is determined by the level of spending and income at an exogenously given wage level, an approach that seems to be deeply at odds with the idea that price adjustments are an essential part of the process whereby a complex economic system arrives at, or at least tends to move toward, an equilibrium.

One of the main motivations for a search for microfoundations in the decades after the General Theory was published was to be able to articulate a convincing microeconomic rationale for persistent unemployment that was not eliminated by the usual tendency of market prices to adjust to eliminate excess supplies of any commodity or service. But Keynes was right to question whether there is any automatic market mechanism that adjusts nominal or real wages in a manner even remotely analogous to the adjustment of prices in organized commodity or stock exchanges – the sort of markets that serve as exemplars of automatic price adjustments in response to excess demands or supplies.

Keynes was also correct to argue that, even if there were a mechanism causing automatic wage adjustments in response to unemployment, the labor market, accounting for roughly 60 percent of total income, is so large that any change in wages necessarily affects all other markets, causing system-wide repercussions that might well offset any employment-increasing tendency of the prior wage adjustment.

But what I want to suggest in this post is that Keynes’s criticism of the supply-demand paradigm is relevant to any general-equilibrium system in the following sense: if a general-equilibrium system is considered from an initial non-equilibrium position, does the system have any tendency to move toward equilibrium? And to make the analysis relatively tractable, assume that the system is such that a unique equilibrium exists. Before proceeding, I also want to note that I am not arguing that traditional supply-demand analysis is necessarily flawed; I am just emphasizing that traditional supply-demand analysis is predicated on a macroeconomic foundation: that all markets but the one under consideration are in, or are in the neighborhood of, equilibrium. It is only because the system as a whole is in the neighborhood of equilibrium, that the microeconomic forces on which traditional supply-demand analysis relies appear to be so powerful and so stabilizing.

However, if our focus is a general-equilibrium system, microeconomic supply-demand analysis of a single market in isolation provides no basis on which to argue that the system as a whole has a self-correcting tendency toward equilibrium. To make such an argument is to commit a fallacy of composition. The tendency of any single market toward equilibrium is premised on an assumption that all markets but the one under analysis are already at, or in the neighborhood of, equilibrium. But when the system as a whole is in a disequilibrium state, the method of partial equilibrium analysis is misplaced; partial-equilibrium analysis provides no ground – no micro-foundation — for an argument that the adjustment of market prices in response to excess demands and excess supplies will ever – much less rapidly — guide the entire system back to an equilibrium state.

The lack of automatic market forces that return a system not in the neighborhood — for purposes of this discussion “neighborhood” is left undefined – of equilibrium back to equilibrium is implied by the Sonnenschein-Mantel-Debreu Theorem, which shows that, even if a unique general equilibrium exists, there may be no rule or algorithm for increasing (decreasing) prices in markets with excess demands (supplies) by which the general-equilibrium price vector would be discovered in a finite number of steps.

The theorem holds even under a Walrasian tatonnement mechanism in which no trading at disequilibrium prices is allowed. The reason is that the interactions between individual markets may be so complicated that a price-adjustment rule will not eliminate all excess demands, because even if a price adjustment reduces excess demand in one market, that price adjustment may cause offsetting disturbances in one or more other markets. So, unless the equilibrium price vector is somehow hit upon by accident, no rule or algorithm for price adjustment based on the excess demand in each market will necessarily lead to discovery of the equilibrium price vector.
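To make the price-adjustment rule concrete, here is a bare-bones sketch of a tatonnement iteration in Python. The excess-demand function is a made-up, well-behaved Cobb-Douglas example that does converge; the point of the Sonnenschein-Mantel-Debreu results is that nothing guarantees an admissible excess-demand function will be so well behaved, so the same loop may cycle or wander indefinitely for other economies.

```python
import numpy as np

def excess_demand(p):
    """Hypothetical Cobb-Douglas excess demand for 3 goods (expenditure shares
    0.5, 0.3, 0.2) with an aggregate endowment of one unit of each good;
    homogeneous of degree zero in prices."""
    p = p / p.sum()                         # only relative prices matter
    shares = np.array([0.5, 0.3, 0.2])
    return shares / p - 1.0                 # demand minus endowment, good by good

def tatonnement(p0, step=0.05, max_iter=10_000, tol=1e-8):
    """Adjust each price in proportion to its excess demand until markets (roughly) clear."""
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        z = excess_demand(p)
        if np.max(np.abs(z)) < tol:         # approximate clearing in every market
            return p / p.sum(), True
        p = np.maximum(p + step * z, 1e-12) # raise prices where z > 0, lower where z < 0
    return p / p.sum(), False               # no equilibrium found within the iteration limit

prices, converged = tatonnement([1.0, 1.0, 1.0])
print(prices, converged)                    # converges to (0.5, 0.3, 0.2) for this tame example
```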

The Sonnenschein-Mantel-Debreu Theorem reinforces the insight of Kenneth Arrow in an important 1959 paper “Toward a Theory of Price Adjustment,” which posed the question: how does the theory of perfect competition account for the determination of the equilibrium price at which all agents can buy or sell as much as they want to at the equilibrium (“market-clearing”) price? As Arrow observed, “there exists a logical gap in the usual formulations of the theory of perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities.”

Prices in perfect competition are taken as parameters by all agents in the model, and optimization by agents consists in choosing optimal quantities. The equilibrium solution allows the mutually consistent optimization by all agents at the equilibrium price vector. This is true for the general-equilibrium system as a whole, and for partial equilibrium in every market. Not only is there no positive theory of price adjustment within the competitive general-equilibrium model, as pointed out by Arrow, but the Sonnenschein-Mantel-Debreu Theorem shows that there’s no guarantee that even the notional tatonnement method of price adjustment can ensure that a unique equilibrium price vector will be discovered.

While acknowledging his inability to fill the gap, Arrow suggested that, because perfect competition and price taking are properties of general equilibrium, there are inevitably pockets of market power in non-equilibrium states, so that some transactors in those states are price searchers rather than price takers, choosing both an optimal quantity and an optimal price. I have no problem with Arrow’s insight as far as it goes, but it still doesn’t really solve his problem, because he couldn’t explain, even intuitively, how a disequilibrium system with some agents possessing market power (either as sellers or buyers) transitions into an equilibrium system in which all agents are price takers who can execute their planned optimal purchases and sales at the parametric prices.

One of the few helpful, but, as far as I can tell, totally overlooked, contributions of the rational-expectations revolution was to solve (in a very narrow sense) the problem that Arrow identified and puzzled over, although Hayek, Lindahl and Myrdal, in their original independent formulations of the concept of intertemporal equilibrium, had already provided the key to the solution. Hayek, Lindahl, and Myrdal showed that an intertemporal equilibrium is possible only insofar as agents form expectations of future prices that are so similar to each other that, if future prices turn out as expected, the agents would be able to execute their planned sales and purchases as expected.

But if agents have different expectations about the future price(s) of some commodity(ies), and if their plans for future purchases and sales are conditioned on those expectations, then when the expectations of at least some agents are inevitably disappointed, those agents will necessarily have to abandon (or revise) their previously formulated plans.

What led to Arrow’s confusion about how equilibrium prices are arrived at was the habit of thinking that market prices are determined by way of a Walrasian tatonnement process (supposedly mimicking the haggling over price by traders). But the notion that a mythical market auctioneer, who first calls out prices at random (prix criés au hasard) and then, based on the tallied market excess demands and supplies, adjusts those prices until all markets “clear,” is untenable, because continual trading at disequilibrium prices keeps changing the solution of the general-equilibrium system. An actual system with trading at non-equilibrium prices may therefore be moving away from, rather than converging on, an equilibrium state.

Here is where the rational-expectations hypothesis comes in. The rational-expectations assumption posits that revisions of previously formulated plans are never necessary, because all agents actually do correctly anticipate the equilibrium price vector in advance. That is indeed a remarkable assumption to make; it is an assumption that all agents in the model have the capacity to anticipate, insofar as their future plans to buy and sell require them to anticipate, the equilibrium prices that will prevail for the products and services that they plan to purchase or sell. Of course, in a general-equilibrium system, all prices being determined simultaneously, the equilibrium future prices of some products cannot generally be forecast in isolation from the equilibrium prices of all other products. So, in effect, the rational-expectations hypothesis supposes that each agent in the model is an omniscient central planner able to solve an entire general-equilibrium system for all future prices!

But let us not be overly nitpicky about details. So forget about false trading, and forget about the Sonnenschein-Mantel-Debreu theorem. Instead, just assume that, at time t, agents form rational expectations of the future equilibrium price vector in period (t+1). If agents at time t form rational expectations of the equilibrium price vector in period (t+1), then they may well assume that the equilibrium price vector in period t is equal to the expected price vector in period (t+1).
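In symbols (the notation is mine): write $p_{t+1}^{e,i}$ for agent $i$’s expectation, formed at time $t$, of the period-$(t+1)$ equilibrium price vector $p_{t+1}^*$. The rational-expectations hypothesis requires

$$p_{t+1}^{e,i} = E\left[p_{t+1}^* \mid I_t\right] \quad \text{for every agent } i,$$

with a common information set $I_t$, so that expectations coincide across agents; and, in the case just described, current transactions are executed at prices satisfying $p_t = p_{t+1}^{e}$.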

Now, the expected price vector in period (t+1) may or may not be an equilibrium price vector in period t. If it is an equilibrium price vector in period t as well as in period (t+1), then all is right with the world, and everyone will succeed in buying and selling as much of each commodity as he or she desires. If not, prices may or may not adjust in response to that disequilibrium, and expectations may or may not change accordingly.

Thus, instead of positing a mythical auctioneer in a contrived tatonnement process as the mechanism whereby prices are determined for currently executed transactions, the rational-expectations hypothesis posits expected future prices as the basis for the prices at which current transactions are executed, providing a straightforward solution to Arrow’s problem. The prices at which agents are willing to purchase or sell correspond to their expectations of prices in the future. If they find trading partners with similar expectations of future prices, they will reach agreement and execute transactions at those prices. If they don’t find traders with similar expectations, they will either be unable to transact, or will revise their price expectations, or they will assume that current market conditions are abnormal and then decide whether to transact at prices different from those they had expected.

When current prices are more favorable than expected, agents will want to buy or sell more than they would have if current prices were equal to their expectations for the future. If current prices are less favorable than they expect future prices to be, they will not transact at all or will seek to buy or sell less than they would have bought or sold if current prices had equaled expected future prices. The dichotomy between observed current prices, dictated by current demands and supplies, and expected future prices is unrealistic; all current transactions are made with an eye to expected future prices and to their opportunities to postpone current transactions until the future, or to advance future transactions into the present.

If current prices for similar commodities are not uniform in all current transactions, a circumstance that Arrow attributes to the existence of varying degrees of market power across imperfectly competitive suppliers, price dispersion may actually be caused, not by market power, but by dispersion in the expectations of future prices held by agents. Sellers expecting future prices to rise will be less willing to sell at relatively low prices now than are suppliers with pessimistic expectations about future prices. Equilibrium occurs when all transactors share the same expectations of future prices and expected future prices correspond to equilibrium prices in the current period.

Of course, that isn’t the only possible equilibrium situation. There may be situations in which a future event that will change a subset of prices can be anticipated. If the anticipation of the future event affects expected future prices, it must also necessarily affect current prices insofar as current supplies can be carried from the present into the future, or current purchases can be postponed until the future, or future consumption shifted into the present.

The practical upshot of these somewhat disjointed reflections is, I think, primarily to reinforce skepticism about the traditional Phillips Curve supposition that low and falling unemployment necessarily presages an increase in inflation. Wages are not primarily governed by the current state of the labor market, whatever the labor market might even mean in a macroeconomic context.

Expectations rule! And the rational-expectations revolution to the contrary notwithstanding, we have no good theory of how expectations are actually formed and there is certainly no reason to assume that, as a general matter, all agents share the same set of expectations.

The current fairly benign state of the economy reflects the absence of any serious disappointment of price expectations. If an economy is operating not very far from an equilibrium, although expectations are not the same, they likely are not very different. They will only be very different after the unexpected strikes. When that happens, borrowers and traders who had taken positions based on overly optimistic expectations find themselves unable to meet their obligations. It is only then that we will see whether the economy is really as strong and resilient as it now seems.

Expecting the unexpected is hard to do, but you can be sure that, sooner or later, the unexpected is going to happen.

Say’s (and Walras’s) Law Revisited

Update (6/18/2019): The current draft of my paper is now available on SSRN. Here is a link.

The annual meeting of the History of Economics Society is coming up in two weeks. It will be held at Columbia University in New York, and I will be presenting an unpublished paper of mine, “Say’s Law and the Classical Theory of Depressions.” I began writing this paper about 20 years ago, but never finished it. My thinking about Say’s Law goes back to my first paper on classical monetary theory, and I have previously written blog-posts about Say’s Law (here and here). And more recently I realized that in a temporary-equilibrium framework, both Say’s Law and Walras’s Law, however understood, may be violated.

Here’s the abstract from my paper:

Say’s Law occupies a prominent, but equivocal, position in the history of economics, having been the object of repeated controversies about its meaning and significance since it was first propounded early in the nineteenth century. It has been variously defined, and arguments about its meaning and validity have not reached consensus about what was being attacked or defended. This paper proposes a unifying interpretation of Say’s Law based on the idea that the monetary sector of an economy with a competitively supplied money involves at least two distinct markets, not just one. Thus, contrary to the Lange-Patinkin interpretation of Say’s Law, an excess supply or demand for money does not necessarily imply an excess supply or demand for goods in a Walrasian GE model. Beyond modifying the standard interpretation of the inconsistency between Say’s Law and a monetary economy, the paper challenges another standard interpretation of Say’s Law as being empirically refuted by the existence of lapses from full employment and economic depressions. Under the alternative interpretation, originally suggested by Clower and Leijonhufvud and by Hutt, Say’s Law provides a theory whereby disequilibrium in one market, causing the amount actually supplied to fall short of what had been planned to be supplied, reduces demand in other markets, initiating a cumulative process of shrinking demand and supply. This cumulative process of contracting supply is analogous to the Keynesian multiplier whereby a reduction in demand initiates a cumulative process of declining demand. Finally, it is shown that in a temporary-equilibrium context, Walras’s Law (and a fortiori Say’s Law) may be violated.

Here is the Introduction of my paper.

I. Introduction

Say’s Law occupies a prominent, but uncertain, position in the history of economics, having been the object of repeated controversies since the early nineteenth century. Despite a formidable secondary literature, the recurring controversies still demand a clear resolution. Say’s Law has been variously defined, and arguments about its meaning and validity have failed to achieve any clear consensus about just what is being defended or attacked. So, I propose in this paper to reconsider Say’s Law in a way that is faithful in spirit to how it was understood by its principal architects, J. B. Say, James Mill, and David Ricardo as well as their contemporary critics, and to provide a conceptual framework within which to assess the views of subsequent commentators.

In doing so, I hope to dispel perhaps the oldest and certainly the most enduring misunderstanding about Say’s Law: that it somehow was meant to assert that depressions cannot occur, or that they are necessarily self-correcting if market forces are allowed to operate freely. As I have tried to suggest with the title of this paper, Say’s Law was actually an element of Classical insights into the causes of depressions. Indeed, a version of the same idea expressed by Say’s Law implicitly underlies those modern explanations of depressions that emphasize coordination failures, though Say’s Law actually conveys an additional insight missing from most modern explanations.

The conception of Say’s Law articulated in this paper bears a strong resemblance to what Clower (1965, 1967) and Leijonhufvud (1968, 1981) called Say’s Principle. However, their artificial distinction between Say’s Law and Say’s Principle suggests a narrower conception and application of Say’s Principle than, I believe, is warranted. Moreover, their apparent endorsement of the idea that the validity of Say’s Law somehow depends in a critical way on the absence of money reflected a straightforward misinterpretation of Say’s Law earlier propounded by, among others, Hayek, Lange and Patinkin, in which only what became known as Walras’s Law, and not Say’s Law, is a logically necessary property of a general-equilibrium system. Finally, it is appropriate to note at the outset that, in most respects, the conception of Say’s Law for which I shall be arguing was anticipated in a quirky, but unjustly neglected, work by Hutt (1975) and by the important, and similarly neglected, work of Earl Thompson (1974).

In the next section, I offer a restatement of the Classical conception of Say’s Law. That conception was indeed based on the insight that, in the now familiar formulation, supply creates its own demand. But to grasp how this insight was originally understood, one must first understand the problem for which Say’s Law was proposed as a solution. The problem concerns the relationship between a depression and a general glut of all goods, but it has two aspects. First, is a depression in some sense caused by a general glut of all goods? Second, is a general glut of all goods logically conceivable in a market economy? In section three, I shall consider the Classical objections to Say’s Law and the responses offered by the Classical originators of the doctrine in reply to those objections. In section four, I discuss the modern objections offered to Say’s Law, their relation to the earlier objections, and the validity of the modern objections to the doctrine. In section five, I re-examine the Classical doctrine, relating it explicitly to a theory of depressions characterized by “inadequate aggregate demand.” I also elaborate on the subtle, but important, differences between my understanding of Say’s Law and what Clower and Leijonhufvud have called Say’s Principle. In section six, I show that, when considered in the context of a temporary-equilibrium model in which there is an incomplete set of forward and state-contingent markets, not even Walras’s Law, let alone Say’s Law, is a logically necessary property of the model. An understanding of the conditions in which neither Walras’s Law nor Say’s Law is satisfied provides an important insight into financial crises and the systemic coordination failures that are characteristic of the deep depressions to which they lead.

And here are the last two sections of the paper.

VI. Say’s Law Violated

            I have just argued that Clower, Leijonhufvud and Hutt explained in detail how Say’s Law provides insight into the mechanism whereby disturbances causing disequilibrium in one market or sector can be propagated and amplified into broader and deeper economy-wide disturbances and disequilibria. I now want to argue that, by relaxing the strict Walrasian framework in which Lange (1942) articulated Walras’s Law and Say’s Law, it is possible to show conditions under which neither Walras’s Law nor Say’s Law is satisfied.

            I relax the Walrasian framework by assuming that there is not a complete set of forward and state-contingent markets in which future transactions can be undertaken in the present. Because there is no complete set of markets in which future prices are determined and made visible to everyone, economic agents must formulate their intertemporal plans for production and consumption relying not only on observed current prices, but also on their expectations of currently unobservable future prices. As already noted, the standard proofs of Walras’s Law and a fortiori of Say’s Law (or Identity) are premised on the assumption that all agents make their decisions about purchases and sales based on their common knowledge of all prices.

            Thus, in the temporary-equilibrium framework, economic agents make their production and consumption decisions not on the basis of common knowledge of future market prices, but on their own conjectural expectations of those prices, expectations that may, or may not, be correct, and may, or may not, be aligned with the expectations of other agents. Unless the agents’ expectations of future prices are aligned, the expectations of some, or all, agents must be disappointed, and the plans to buy and sell formulated on the basis of those expectations will have to be revised, or abandoned, once agents realize that their expectations were incorrect.

            Consider a simple two-person, two-good, two-period model in which agents make plans based on current prices observed in period 1 and their expectations of what prices will be in period 2. Given price expectations for period 2, period-1 prices are determined in a tatonnement process, so that no trading occurs until a temporary-equilibrium price vector for period 1 is found. Assume, further, that price expectations for period 2 do not change in the course of the tatonnement. Once a period-1 equilibrium price vector is found, the two budget constraints subject to which the agents make their optimal decisions need not have the same values for expected prices in period 2, because it is not assumed that the period-2 price expectations of the two agents are aligned. Because the proof of Walras’s Law depends on agents basing their decisions to buy and sell each commodity on prices for each commodity in each period that are common to both agents, Walras’s Law cannot be proved unless the period-2 price expectations of both agents are aligned.
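A compact way to see where the usual summation argument breaks down (the notation is mine, not the paper’s): let $z_t^i$ be agent $i$’s planned excess-demand vector for period $t$, $p_1$ the observed period-1 price vector, and $\hat{p}_2^i$ agent $i$’s expected period-2 price vector. Each agent’s plan satisfies his own intertemporal budget constraint,

$$p_1 \cdot z_1^i + \hat{p}_2^i \cdot z_2^i = 0, \qquad i = 1, 2.$$

Summing over the two agents gives

$$p_1 \cdot \left(z_1^1 + z_1^2\right) = -\left(\hat{p}_2^1 \cdot z_2^1 + \hat{p}_2^2 \cdot z_2^2\right),$$

and only if $\hat{p}_2^1 = \hat{p}_2^2 = \hat{p}_2$ does the right-hand side collapse to $-\hat{p}_2 \cdot (z_2^1 + z_2^2)$, yielding the usual Walras’s Law identity over current and (expected) future markets. With divergent expectations there is no single price vector at which the aggregate value of planned excess demands can be shown to vanish.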

            The implication of the potential violation of Walras’s Law is that when actual prices turn out to be different from what they were expected to be, economic agents who previously assumed obligations that are about to come due may be unable to discharge those obligations. In standard general-equilibrium models, the tatonnement process assures that no trading takes place unless equilibrium prices have been identified. But in a temporary-equilibrium model, when decisions to purchase and sell are based not on equilibrium prices, but on actual prices that may not have been expected, the discharge of commitments is not certain.

            Of course, if Walras’s Law cannot be proved, neither can Say’s Law. Supply cannot create demand when the insolvency of agents with negative net worth obstructs mutually advantageous transactions between agents. And negative net worth can be transmitted to other agents holding obligations undertaken by the insolvent agents.

            Moreover, because the private supply of a medium of exchange by banks depends on the value of the money-backing assets held by banks, the monetary system may cease to function in an economy in which the net worth of agents whose obligations are held by banks becomes negative. Thus, the argument made in section IV.A for the validity of Say’s Law in the Identity sense breaks down once a sufficient number of agents no longer have positive net worth.

VII. Conclusion

            My aim in this paper has been to explain and clarify a number of the different ways in which Say’s Law has been understood and misunderstood. A fair reading of the primary and secondary literature shows that many of the criticisms of Say’s Law have not properly understood the argument that Say’s Law was intended to make, or could reasonably be interpreted as making. Indeed, Say’s Law, properly understood, can actually help one understand the cumulative process of economic contraction whose existence supposedly proved its invalidity. However, I have also been able to show that there are plausible conditions, associated with financial crises in which substantial losses of net worth lead to widespread and contagious insolvency, under which even Walras’s Law, and a fortiori Say’s Law, no longer holds. Understanding how Say’s Law may be violated may thus help in understanding the dynamics of financial crises and the cumulative systemic coordination failures of deep depressions.

I will soon be posting the paper on SSRN. When it’s posted, I will add a link in an update to this post.

 

Michael Oakeshott Exposes Originalism’s Puerile Rationalistic Pretension to Jurisprudential Profundity

Last week in my post about Popperian Falsificationism, I quoted at length from Michael Oakeshott’s essay “Rationalism in Politics.” Rereading Oakeshott’s essay reminded me that Oakeshott’s work also casts an unflattering light on the faux-conservative jurisprudential doctrine of Originalism, of which right-wing pretend-populists masquerading as conservatives have become so enamored under the expert tutelage of their idol Justice Scalia.

The faux-conservative nature of Originalism was nowhere made so obvious as in Scalia’s own Tanner Lectures at the University of Utah College of Law, “Common-Law Courts in a Civil-Law System” in which Scalia made plain his utter contempt for the common-law jurisprudence upon which the American legal system is founded. Here is that contempt on display in a mocking description of how law is taught in American law schools.

It is difficult to convey to someone who has not attended law school the enormous impact of the first year of study. Many students remark upon the phenomenon: It is like a mental rebirth, the acquisition of what seems like a whole new mode of perceiving and thinking. Thereafter, even if one does not yet know much law, he – as the expression goes – “thinks like a lawyer.”

The overwhelming majority of the courses taught in that first year of law school, and surely the ones that have the most impact, are courses that teach the substance, and the methodology, of the common law – torts, for example; contracts; property; criminal law. We lawyers cut our teeth upon the common law. To understand what an effect that must have, you must appreciate that the common law is not really common law, except insofar as judges can be regarded as common. That is to say, it is not “customary law,” or a reflection of the people’s practices, but is rather law developed by the judges. Perhaps in the very infancy of the common law it could have been thought that the courts were mere expositors of generally accepted social practices ; and certainly, even in the full maturity of the common law, a well established commercial or social practice could form the basis for a court’s decision. But from an early time – as early as the Year Books, which record English judicial decisions from the end of the thirteenth century to the beginning of the sixteenth – any equivalence between custom and common law had ceased to exist, except in the sense that the doctrine of stare decisis rendered prior judicial decisions “custom.” The issues coming before the courts involved, more and more, refined questions that customary practice gave no answer to.

Oliver Wendell Holmes’s influential book The Common Law – which is still suggested reading for entering law students – talks a little bit about Germanic and early English custom. . . . Holmes’s book is a paean to reason, and to the men who brought that faculty to bear in order to create Anglo-American law. This is the image of the law – the common law – to which an aspiring lawyer is first exposed, even if he hasn’t read Holmes over the previous summer as he was supposed to. (pp. 79-80)

What intellectual fun all of this is! I describe it to you, not – please believe me – to induce those of you in the audience who are not yet lawyers to go to law school. But rather, to explain why first-year law school is so exhilarating: because it consists of playing common-law judge. Which in turn consists of playing king – devising, out of the brilliance of one’s own mind, those laws that ought to govern mankind. What a thrill! And no wonder so many lawyers, having tasted this heady brew, aspire to be judges!

Besides learning how to think about, and devise, the “best” legal rule, there is another skill imparted in the first year of law school that is essential to the making of a good common-law judge. It is the technique of what is called “distinguishing” cases. It is a necessary skill, because an absolute prerequisite to common-law lawmaking is the doctrine of stare decisis – that is, the principle that a decision made in one case will be followed in the next. Quite obviously, without such a principle common-law courts would not be making any “law”; they would just be resolving the particular dispute before them. It is the requirement that future courts adhere to the principle underlying a judicial decision which causes that decision to be a legal rule. (There is no such requirement in the civil-law system, where it is the text of the law rather than any prior judicial interpretation of that text which is authoritative. Prior judicial opinions are consulted for their persuasive effect, much as academic commentary would be; but they are not binding.)

Within such a precedent-bound common-law system, it is obviously critical for the lawyer, or the judge, to establish whether the case at hand falls within a principle that has already been decided. Hence the technique – or the art, or the game – of “distinguishing” earlier cases. A whole series of lectures could be devoted to this subject, and I do not want to get into it too deeply here. Suffice to say that there is a good deal of wiggle-room as to what an earlier case “holds.” In the strictest sense, the holding of a decision cannot go beyond the facts that were before the court. . . .

As I have described, this system of making law by judicial opinion, and making law by distinguishing earlier cases, is what every American law student, what every newborn American lawyer, first sees when he opens his eyes. And the impression remains with him for life. His image of the great judge — the Holmes, the Cardozo — is the man (or woman) who has the intelligence to know what is the best rule of law to govern the case at hand, and then the skill to perform the broken-field running through earlier cases that leaves him free to impose that rule — distinguishing one prior case on his left, straight-arming another one on his right, high-stepping away from another precedent about to tackle him from the rear, until (bravo!) he reaches his goal: good law. That image of the great judge remains with the former law student when he himself becomes a judge, and thus the common-law tradition is passed on and on. (pp. 83-85)

In place of common law judging, Scalia argues that the judicial function should be confined to the parsing of statutory or Constitutional texts to find their meaning, contrasting that limited undertaking to the anything-goes practice of common-law judging.

[T]he subject of statutory interpretation deserves study and attention in its own right, as the principal business of lawyers and judges. It will not do to treat the enterprise as simply an inconvenient modern add-on to the judges’ primary role of common-law lawmaking. Indeed, attacking the enterprise with the Mr. Fix-it mentality of the common-law judge is a sure recipe for incompetence and usurpation.

The state of the science of statutory interpretation in American law is accurately described by Professors Henry Hart and Albert Sacks (or by Professors William Eskridge and Philip Frickey, editors of the famous often-taught-but-never-published Hart-Sacks materials on the legal process) as follows:

Do not expect anybody’s theory of statutory interpretation, whether it is your own or somebody else’s, to be an accurate statement of what courts actually do with statutes. The hard truth of the matter is that American courts have no intelligible, generally accepted, and consistently applied theory of statutory interpretation.

Surely this is a sad commentary: We American judges have no intelligible theory of what we do most. (pp. 89-90)

But the Great Divide with regard to constitutional interpretation is not that between Framers’ intent and objective meaning; but rather that between original meaning (whether derived from Framers’ intent or not) and current meaning. The ascendant school of constitutional interpretation affirms the existence of what is called the “living Constitution,” a body of law that (unlike normal statutes) grows and changes from age to age, in order to meet the needs of a changing society. And it is the judges who determine those needs and “find” that changing law. Seems familiar, doesn’t it? Yes, it is the common law returned, but infinitely more powerful than what the old common law ever pretended to be, for now it trumps even the statutes of democratic legislatures.

If you go into a constitutional law class, or study a constitutional-law casebook, or read a brief filed in a constitutional-law case, you will rarely find the discussion addressed to the text of the constitutional provision that is at issue, or to the question of what was the originally understood or even the originally intended meaning of that text. Judges simply ask themselves (as a good common-law judge would) what ought the result to be, and then proceed to the task of distinguishing (or, if necessary, overruling) any prior Supreme Court cases that stand in the way. Should there be (to take one of the less controversial examples) a constitutional right to die? If so, there is. Should there be a constitutional right to reclaim a biological child put out for adoption by the other parent? Again, if so, there is. If it is good, it is so. Never mind the text that we are supposedly construing; we will smuggle these in, if all else fails, under the Due Process Clause (which, as I have described, is textually incapable of containing them). Moreover, what the Constitution meant yesterday it does not necessarily mean today. As our opinions say in the context of our Eighth Amendment jurisprudence (the Cruel and Unusual Punishments Clause), its meaning changes to reflect “the evolving standards of decency that mark the progress of a maturing society.”

This is preeminently a common-law way of making law, and not the way of construing a democratically adopted text. . . . The Constitution, however, even though a democratically adopted text, we formally treat like the common law. What, it is fair to ask, is our justification for doing so? (pp. 112-14)

Aside from engaging in the most ridiculous caricature of how common-law judging is conducted by actual courts, Scalia, in describing statutory interpretation as a science, either deliberately misrepresents or simply betrays his own misunderstanding of what science is all about. Scientists seek out anomalies, contradictions, and gaps within a received body of conjectural knowledge, and then try to resolve those anomalies and contradictions and to formulate new hypotheses that fill the gaps in knowledge. And they evaluate their work by criticizing the logic of their solutions and hypotheses and by testing those solutions and hypotheses against empirical evidence.

What Scalia calls a science of statutory interpretation seems to be nothing more than a set of exegetical or hermeneutic rules, passively and mechanically applied to arrive at a supposedly authoritative reading of the statute, without regard to the substantive meaning or practical implications of applying the statute once those rules have been faithfully followed. In other words, the role of the judge is to read and interpret legal texts skillfully, not to render a just verdict or decision, unless, that is, justice is tautologically defined as whatever results from the Scalia-sanctioned exegetical/hermeneutic exercise. Scalia fraudulently attempts to endow this purely formal approach to textual exegesis with scientific authority, as if by so doing he could invoke the authority of science to override, or annihilate, the authority of judging.

Here is where I want to invite Michael Oakeshott into the conversation. I quote from his essay “Political Education” reprinted as chapter two of his Rationalism in Politics and Other Essays.

[A] tradition of behaviour is a tricky thing to get to know. Indeed, it may even appear to be essentially unintelligible. It is neither fixed nor finished; it has no changeless centre to which understanding can anchor itself; there is no sovereign purpose to be perceived or inevitable direction to be detected; there is no model to be copied, idea to be realized, or rule to be followed. Some parts of it may change more slowly than others, but none is immune from change. Everything is temporary. Nevertheless, though a tradition of behaviour is flimsy and elusive, it is not without identity, and what makes it a possible object of knowledge is the fact that all its parts do not change at the same time and that the changes it undergoes are potential within it. Its principle is a principle of continuity: authority is diffused between past, present, and future; between the old, the new, and what is to come. It is steady because, though it moves, it is never wholly in motion; and though it is tranquil, it is never wholly at rest. Nothing that ever belonged to it is completely lost; we are always swerving back to recover and make something topical out of even its remotest moments; and nothing for long remains unmodified. Everything is temporary, but nothing is arbitrary. Everything figures by comparison, not with what stands next to it, but with the whole. And since a tradition of behaviour is not susceptible of the distinction between essence and accident, knowledge of it is unavoidable knowledge of its detail: to know only the gist is to know nothing. What has to be learned is not an abstract idea, or a set of tricks, not even a ritual, but a concrete, coherent manner of living in all its intricateness. (pp. 61-62).

In a footnote to this passage, Oakeshott added the following comment.

The critic who found “some mystical qualities” in this passage leaves me puzzled: it seems to me an exceedingly matter-of-fact description of the characteristics of any tradition — the Common Law of England, for example, the so-called British Constitution, the Christian religion, modern physics, the game of cricket, shipbuilding.

I will close with another passage from Oakeshott, this time from his essay “Rationalism in Politics,” but with certain terms placed in parentheses to be replaced by corresponding substitute terms placed in brackets.

The heart of the matter is the pre-occupation of the [Originalist] (Rationalist) with certainty. Technique and certainty are, for him, inseparably joined because certain knowledge is, for him, knowledge which does not require to look beyond itself for its certainty; knowledge, that is, which not only ends with certainty but begins with certainty and is certain throughout. And this is precisely what [textual exegesis] (technical knowledge) appears to be. It seems to be a self-complete sort of knowledge because it seems to range between an identifiable initial point (where it breaks in upon sheer ignorance) and an identifiable terminal point, where it is complete, as in learning the rules of a new game. It has the aspect of knowledge that can be contained wholly between the covers of a [written statutory code], whose application is, as nearly as possible, purely mechanical, and which does not assume knowledge not itself provided in the [exegetical] technique. For example, the superiority of an ideology over a tradition of thought lies in the appearance of being self-contained. It can be taught best to those whose minds are empty: and if it is to be taught to one who already believes something, the first step of the teacher must be to administer a purge, to make certain that all prejudices and preconceptions are removed, to lay his foundation upon the unshakeable rock of absolute ignorance. In short, [textual exegesis] (technical knowledge) appears to be the only kind of knowledge which satisfies the standard of certainty which the [Originalist] (Rationalist) has chosen. (p. 16)

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th-century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and it partly accounts for the positive reception of Popper’s work among them.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making the prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.
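The logic of the point can be put in one line (my own gloss, not a formula taken from Popper or from Lemoine): if a theory T together with auxiliary assumptions A_1, …, A_n entails a prediction P, then a failed prediction impugns only the conjunction as a whole,

\[ (T \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow P, \qquad \neg P \;\Rightarrow\; \neg T \vee \neg A_1 \vee \cdots \vee \neg A_n, \]

telling us that at least one member of the conjunction is false, but not which one, and in particular not that T itself is the guilty member.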

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed such criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, them. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism which, only under duress, would be toned down to meet the objections raised against the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false, because one of the auxiliary assumptions required to make the prediction is false, or because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of those theories, and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 because, rather than conclude that Newtonian theory had been falsified by the failure of Uranus to follow its predicted orbital path, the French astronomer Urbain Le Verrier posited (ad hoc) the existence of another planet that would account for the path Uranus actually followed. In this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement that any ad hoc assumption be independently testable would then have meant accepting the refutation of Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are unobjectionable, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless that assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth is flat and that the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent, and obvious, flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (who was Oakeshott’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires, in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of the rules of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University, at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of economists’ changing attitudes toward using formal probability theory as the basis for drawing empirical inferences from statistical data. The assumptions that probability theory makes about the nature of the data were long viewed as too extreme for probability theory to serve as an acceptable basis for such inferences. Those early negative attitudes were gradually overcome (or disregarded), but, as late as the 1960s, even as econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess data analyzed by regression. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitudes toward the use of probabilistic assumptions about data distributions as the required method of empirically assessing economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude toward how inferences about empirical relationships should be drawn from data corresponds strikingly to Oakeshott’s discussion, and it is depressing in its implications for the decline of expert judgment by economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data; it is to challenge the methodological prohibition against the kinds of expert judgment that statistically knowledgeable economists, including Nobel Prize winners such as Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely exercised in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research that does not obey the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense of scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that they have almost totally displaced. This is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake of which even Popper was never guilty.
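To make concrete what this sort of mechanically objective procedure looks like in practice, here is a minimal illustrative sketch in Python (my own construction, not drawn from Biddle or from any of the economists mentioned above): an ordinary least-squares slope is computed and then admitted or excluded as evidence purely on whether its t-statistic clears a conventional threshold. The function name, the simulated data, and the 1.96 cutoff are all assumptions chosen for illustration.

```python
# A minimal sketch of a "mechanically objective" decision rule: estimate a regression
# slope, compute its t-statistic, and admit or exclude the estimate as evidence purely
# by a conventional significance threshold. Anyone applying the same rule to the same
# data reaches the same verdict; no judgment about how the data were generated enters.
# All names, data, and the 1.96 cutoff are illustrative assumptions.
import numpy as np


def mechanical_slope_verdict(x, y, critical_t=1.96):
    """OLS estimate of y = a + b*x, judged solely by the t-statistic on b."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    x_dev = x - x.mean()
    b = (x_dev @ (y - y.mean())) / (x_dev @ x_dev)   # slope estimate
    a = y.mean() - b * x.mean()                      # intercept estimate
    resid = y - (a + b * x)
    s2 = (resid @ resid) / (n - 2)                   # residual variance
    se_b = np.sqrt(s2 / (x_dev @ x_dev))             # standard error of the slope
    t_stat = b / se_b
    if t_stat > critical_t:
        verdict = "accepted as evidence of a positive relationship"
    else:
        verdict = "not accepted, however large the point estimate"
    return b, t_stat, verdict


rng = np.random.default_rng(0)
x = rng.normal(size=30)
y = 0.4 * x + rng.normal(scale=1.5, size=30)  # a real but noisy positive relation
print(mechanical_slope_verdict(x, y))
```

The point of the sketch is not the arithmetic, which is trivial, but that the procedure leaves no opening for the kind of judgment about the provenance and peculiarities of the data that economists like Kuznets and Friedman routinely exercised.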

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model is in any way micro-founded is a pretense, the solution being derived only through the fiction of a single representative agent, which renders the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only make the further comment that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves, on the Positivists’ own criterion of meaning, meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, a judgment whereby Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.

Cleaning Up After Burns’s Mess

In my two recent posts (here and here) about Arthur Burns’s lamentable tenure as Chairman of the Federal Reserve System from 1970 to 1978, my main criticism of Burns has been that, apart from his willingness to subordinate monetary policy to the political interests of the man who appointed him, Burns failed to understand that an incomes policy to restrain wages, thereby minimizing the tendency of disinflation to reduce employment, could not, in principle, reduce inflation if monetary restraint did not correspondingly reduce the growth of total spending and income. Inflationary (or employment-reducing) wage increases can’t be prevented by an incomes policy if the rate of increase in total spending, and hence total income, isn’t controlled. King Canute couldn’t prevent the tide from coming in, and neither Arthur Burns nor the Wage and Price Council could slow the increase in wages when total spending was increasing at a rate faster than was consistent with the 3% inflation rate that Burns was aiming for.
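The arithmetic behind that claim can be made explicit with a back-of-the-envelope sketch (the figures below are my own illustrative assumptions, not numbers taken from this or the earlier posts): to a first approximation, growth in nominal spending equals real output growth plus inflation, so spending growth far in excess of the economy’s real growth must eventually show up in wages and prices no matter what an incomes policy decrees.

```python
# Back-of-the-envelope identity: nominal spending growth ~= real output growth + inflation.
# The figures are illustrative assumptions, not data cited in the post.
nominal_spending_growth = 0.11   # suppose total spending and income grow ~11% a year
real_output_growth = 0.03        # suppose real output can grow only ~3% a year
implied_inflation = nominal_spending_growth - real_output_growth
# Roughly 8% inflation is implied, far above a 3% target, and no wage-price board
# can suppress that gap for long without creating shortages.
print(f"implied inflation: {implied_inflation:.0%}")
```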

In this post, I’m going to discuss how the mess Burns left behind upon leaving the Fed in 1978 had to be cleaned up. The mess got even worse under Burns’s successor, G. William Miller. The cleanup did not begin until Carter appointed Paul Volcker in 1979, by which time it had become obvious that the Fed’s monetary policy had failed to cope with the problems left behind by Burns. After unleashing powerful inflationary forces under the cover of the wage-and-price controls he had persuaded Nixon to impose in 1971 as a precondition for delivering the monetary stimulus so desperately desired by Nixon to ensure his reelection, Burns continued providing that stimulus even after Nixon’s reelection, when it might still have been possible to taper off the stimulus before inflation flared up, and without aborting the expansion then under way. In his arrogance or ignorance, Burns chose not to adjust the policy that had so splendidly accomplished its intended result.

Not until the end of 1973, after crude oil prices quadrupled owing to a cutback in OPEC oil output, driving inflation above 10% in 1974, did Burns withdraw the monetary stimulus that had been administered in increasing doses since early 1971. Shocked out of his complacency by the outcry against 10% inflation, Burns shifted monetary policy toward restraint, bringing down the growth in nominal spending and income from over 11% in Q4 1973 to only 8% in Q1 1974.

After prolonging monetary stimulus unnecessarily for a year, Burns erred grievously by applying monetary restraint in response to the rise in oil prices. The largely exogenous rise in oil prices would most likely have caused a recession even with no change in monetary policy. By subjecting the economy to the added shock of reduced aggregate demand, Burns turned a mild recession into the worst recession since the 1937-38 recession at the end of the Great Depression, with unemployment peaking at 8.8% in Q2 1975. Nor did the reduction in aggregate demand have much anti-inflationary effect, because the incremental reduction in total spending occasioned by the monetary tightening was reflected mainly in reduced output and employment rather than in reduced inflation.

But even with unemployment reaching the highest level in almost 40 years, inflation did not fall below 5% – and then only briefly – until a year after the bottom of the recession. When President Carter took office in 1977, Burns, hoping to be reappointed to another term, provided Carter with a monetary expansion to hasten the reduction in unemployment that Carter had promised in his Presidential campaign. However, Burns’s accommodative policy did not sufficiently endear him to Carter to secure the coveted reappointment.

The short and unhappy tenure of Carter’s first appointee, G. William Miller, during which inflation rose from 6.5% to 10%, ended abruptly when Carter, with his Administration in crisis, sacked his Treasury Secretary and replaced him with Miller. Under pressure from the financial community to address an intractable inflation that was accelerating in the wake of a second oil shock following the Iranian Revolution and hostage-taking, Carter felt constrained to appoint Volcker, formerly a high official in the Treasury in both the Kennedy and Nixon administrations and then serving as President of the New York Federal Reserve Bank, who was known to be the favored choice of the financial community.

A year after leaving the Fed, Burns gave the annual Per Jacobsson Lecture to the International Monetary Fund. Calling his lecture “The Anguish of Central Banking,” Burns offered a defense of his tenure, arguing, in effect, that he should not be blamed for his poor performance, because the job of central banking is so very hard. Central bankers could control inflation, but only by inflicting unacceptably high unemployment. The political authorities and the public to whom central bankers are ultimately accountable would simply not tolerate the high unemployment that would be necessary for inflation to be controlled.

Viewed in the abstract, the Federal Reserve System had the power to abort the inflation at its incipient stage fifteen years ago or at any later point, and it has the power to end it today. At any time within that period, it could have restricted money supply and created sufficient strains in the financial and industrial markets to terminate inflation with little delay. It did not do so because the Federal Reserve was itself caught up in the philosophic and political currents that were transforming American life and culture.

Burns’s framing of the choices facing a central bank was tendentious; no policy maker had suggested that, after years of inflation had convinced the public to expect inflation to continue indefinitely, the Fed should “terminate inflation with little delay.” And Burns was hardly a disinterested actor as Fed chairman, having orchestrated a monetary expansion to promote the re-election chances of his benefactor Richard Nixon after securing, in return for that service, Nixon’s agreement to implement an incomes policy to limit the growth of wages, a policy that Burns believed would contain the inflationary consequences of the monetary expansion.

However, as I explained in my post on Hawtrey and Burns, the conceptual rationale for an incomes policy was not to allow monetary expansion to increase total spending, output and employment without causing increased inflation, but to allow monetary restraint to be administered without increasing unemployment. But under the circumstances in the summer of 1971, when a recovery from the 1970 recession was just starting and unemployment was still high, monetary expansion might well have hastened the recovery in output and employment, because the resulting increase in total spending and income might still have been absorbed in increased output and employment rather than in higher wages and prices.

But using controls over wages and prices to speed the return to full employment could succeed only while substantial unemployment and unused capacity allowed output and employment to increase; the faster the recovery, the sooner increased spending would show up in rising prices and wages, or in supply shortages, rather than in increased output. So an incomes policy to enable monetary expansion to speed the recovery from recession and restore full employment might theoretically be successful, but only if the monetary stimulus were promptly tapered off before driving up inflation.

Thus, if Burns wanted an incomes policy to be able to hasten the recovery through monetary expansion and maximize the political benefit to Nixon in time for the 1972 election, he ought to have recognized the need to withdraw the stimulus after the election. But for a year after Nixon’s reelection, Burns continued the monetary expansion without letup. Burns’s expression of anguish at the dilemma foisted upon him by circumstances beyond his control hardly evokes sympathy, sounding more like an attempt to deflect responsibility for his own mistakes, or malfeasance, in serving as an instrument of the criminal Committee to Re-elect the President without bothering to alter that politically motivated policy after accomplishing his dishonorable mission.

But it was not until Burns’s successor, G. William Miller, was succeeded by Paul Volcker in August 1979 that the Fed was willing to adopt — and maintain — an anti-inflationary policy. In his recently published memoir Volcker recounts how, responding to President Carter’s request in July 1979 that he accept appointment as Fed chairman, he told Mr. Carter that, to bring down inflation, he would adopt a tighter monetary policy than had been followed by his predecessor. He also writes that, although he did not regard himself as a Friedmanite Monetarist, he had become convinced that to control inflation it was necessary to control the quantity of money, though he did not agree with Friedman that a rigid rule was required to keep the quantity of money growing at a constant rate. To what extent the Fed would set its policy in terms of a fixed target rate of growth in the quantity of money became the dominant issue in Fed policy during Volcker’s first term as Fed chairman.

In a review of Volcker's memoir widely cited in the econ blogosphere, Tim Barker decried Volcker's tenure, especially his determination to control inflation even at the cost of spilling blood (other people's blood) if that was necessary to eradicate the inflationary psychology of the 1970s, which had become a seemingly permanent feature of the economic environment by the time of Volcker's appointment.

If someone were to make a movie about neoliberalism, there would need to be a starring role for the character of Paul Volcker. As chair of the Federal Reserve from 1979 to 1987, Volcker was the most powerful central banker in the world. These were the years when the industrial workers movement was defeated in the United States and United Kingdom, and third world debt crises exploded. Both of these owe something to Volcker. On October 6, 1979, after an unscheduled meeting of the Fed’s Open Market Committee, Volcker announced that he would start limiting the growth of the nation’s money supply. This would be accomplished by limiting the growth of bank reserves, which the Fed influenced by buying and selling government securities to member banks. As money became more scarce, banks would raise interest rates, limiting the amount of liquidity available in the overall economy. Though the interest rates were a result of Fed policy, the money supply target let Volcker avoid the politically explosive appearance of directly raising rates himself. The experiment—known as the Volcker Shock—lasted until 1982, inducing what remains the worst unemployment since the Great Depression and finally ending the inflation that had troubled the world economy since the late 1960s. To catalog all the results of the Volcker Shock—shuttered factories, broken unions, dizzying financialization—is to describe the whirlwind we are still reaping in 2019. . . .

Barker is correct that Volcker had been persuaded that, to tighten monetary policy, the quantity of reserves that the Fed was providing to the banking system had to be controlled. But making the quantity of bank reserves the policy instrument was merely a technical change. Monetary policy had been, and could still have been, conducted using an interest-rate instrument, and it would have been entirely possible for Volcker to tighten monetary policy with the traditional interest-rate instrument. It is possible that, as Barker asserts, it was politically easier to tighten policy using a quantity instrument than an interest-rate instrument.

But even if so, the real difficulty was not the instrument used, but the economic and political consequences of a tight monetary policy. The choice of the instrument to carry out the policy could hardly have made more than a marginal difference on the balance of political forces favoring or opposing that policy. The real issue was whether a tight monetary policy aimed at reducing inflation was more effectively conducted using the traditional interest-rate instrument or the quantity-instrument that Volcker adopted. More on this point below.

Those who praise Volcker like to say he “broke the back” of inflation. Nancy Teeters, the lone dissenter on the Fed Board of Governors, had a different metaphor: “I told them, ‘You are pulling the financial fabric of this country so tight that it’s going to rip. You should understand that once you tear a piece of fabric, it’s very difficult, almost impossible, to put it back together again.’” (Teeters, also the first woman on the Fed board, told journalist William Greider that “None of these guys has ever sewn anything in his life.”) Fabric or backbone: both images convey violence. In any case, a price index doesn’t have a spine or a seam; the broken bodies and rent garments of the early 1980s belonged to people. Reagan economic adviser Michael Mussa was nearer the truth when he said that “to establish its credibility, the Federal Reserve had to demonstrate its willingness to spill blood, lots of blood, other people’s blood.”

Did Volcker consciously see unemployment as the instrument of price stability? A Rhode Island representative asked him “Is it a necessary result to have a large increase in unemployment?” Volcker responded, “I don’t know what policies you would have to follow to avoid that result in the short run . . . We can’t undertake a policy now that will cure that problem [unemployment] in 1981.” Call this the necessary byproduct view: defeating inflation is the number one priority, and any action to put people back to work would raise inflationary expectations. Growth and full employment could be pursued once inflation was licked. But there was more to it than that. Even after prices stabilized, full employment would not mean what it once had. As late as 1986, unemployment was still 6.6 percent, the Reagan boom notwithstanding. This was the practical embodiment of Milton Friedman’s idea that there was a natural rate of unemployment, and attempts to go below it would always cause inflation (for this reason, the concept is known as NAIRU or non-accelerating inflation rate of unemployment). The logic here is plain: there need to be millions of unemployed workers for the economy to work as it should.

I want to make two points about Volcker's policy. The first, which I made in my book Free Banking and Monetary Reform over 30 years ago, which I have reiterated in several posts on this blog, and which I discussed in my recent paper “Rules versus Discretion in Monetary Policy Historically Contemplated” (for an ungated version click here), is that using a quantity instrument to tighten monetary policy, as advocated by Milton Friedman and acquiesced in by Volcker, induces expectations about the future actions of the monetary authority that undermine the policy and render it untenable. Volcker eventually realized the perverse expectational consequences of trying to implement a monetary policy using a fixed rule for the quantity instrument, but his learning experience in following Friedman's advice needlessly exacerbated and prolonged the agony of the 1982 downturn for months after inflationary expectations had been broken.

The problem was well-known in the nineteenth century thanks to British experience under the Bank Charter Act that imposed a fixed quantity limit on the total quantity of banknotes issued by the Bank of England. When the total of banknotes approached the legal maximum, a precautionary demand for banknotes was immediately induced by those who feared that they might not later be able to obtain credit if it were needed because the Bank of England would be barred from making additional credit available.

Here is how I described Volcker’s Monetarist experiment in my book.

The danger lurking in any Monetarist rule has been perhaps best summarized by F. A. Hayek, who wrote:

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if under such a provision it ever became known that the amount of cash in circulation was approaching the upper limit and therefore a need for increased liquidity could not be met.

Hayek’s warnings were subsequently borne out after the Federal Reserve Board shifted its policy from targeting interest rates to targeting the monetary aggregates. The apparent shift toward a less inflationary monetary policy, reinforced by the election of a conservative, antiinflationary president in 1980, induced an international shift from other currencies into the dollar. That shift caused the dollar to appreciate by almost 30 percent against other major currencies.

At the same time the domestic demand for deposits was increasing as deregulation of the banking system reduced the cost of holding deposits. But instead of accommodating the increase in the foreign and domestic demands for dollars, the Fed tightened monetary policy. . . . The deflationary impact of that tightening overwhelmed the fiscal stimulus of tax cuts and defense buildup, which, many had predicted, would cause inflation to speed up. Instead the economy fell into the deepest recession since the 1930s, while inflation, by 1982, was brought down to the lowest levels since the early 1960s. The contraction, which began in July 1981, accelerated in the fourth quarter of 1981 and the first quarter of 1982.

The rapid disinflation was bringing interest rates down from the record high levels of mid-1981 and the economy seemed to bottom out in the second quarter, showing a slight rise in real GNP over the first quarter. Sticking to its Monetarist strategy, the Fed reduced its targets for monetary growth in 1982 to between 2.5 and 5.5 percent. But in January and February, the money supply increased at a rapid rate, perhaps in anticipation of an incipient expansion. Whatever its cause, the early burst of the money supply pushed M-1 way over its target range.

For the next several months, as M-1 remained above its target, financial and commodity markets were preoccupied with what the Fed was going to do next. The fear that the Fed would tighten further to bring M-1 back within its target range reversed the slide in interest rates that began in the fall of 1981. A striking feature of the behavior of interest rates at that time was that credit markets seemed to be heavily influenced by the announcements every week of the change in M-1 during the previous week. Unexpectedly large increases in the money supply put upward pressure on interest rates.

The Monetarist explanation was that the announcements caused people to raise their expectations of inflation. But if the increase in interest rates had been associated with a rising inflation premium, the announcements should have been associated with weakness in the dollar on foreign exchange markets and rising commodities prices. In fact, the dollar was rising and commodities prices were falling consistently throughout this period – even immediately after an unexpectedly large jump in M-1 was announced. . . . (pp. 218-19)

I pause in my own earlier narrative to add the further comment that the increase in interest rates in early 1982 clearly reflected an increasing liquidity premium, caused by the reduced availability of bank reserves, which made cash more desirable to hold than real assets, thereby inducing further declines in asset values.

However, increases in M-1 during July turned out to be far smaller than anticipated, relieving some of the pressure on credit and commodities markets and allowing interest rates to begin to fall again. The decline in interest rates may have been eased slightly by . . . Volcker’s statement to Congress on July 20 that monetary growth at the upper range of the Fed’s targets would be acceptable. More important, he added that the Fed was willing to let M-1 remain above its target range for a while if the reason seemed to be a precautionary demand for liquidity. By August, M-1 had actually fallen back within its target range. As fears of further tightening by the Fed subsided, the stage was set for the decline in interest rates to accelerate, [and] the great stock market rally began on August 17, when the Dow . . . rose over 38 points [almost 5%].

But anticipation of an incipient recovery again fed monetary growth. From the middle of August through the end of September, M-1 grew at an annual rate of over 15 percent. Fears that rapid monetary growth would induce the Fed to tighten monetary policy slowed down the decline in interest rates and led to renewed declines in commodities prices and the stock market, while pushing up the dollar to new highs. On October 5 . . . the Wall Street Journal reported that bond prices had fallen amid fears that the Fed might tighten credit conditions to slow the recent strong growth in the money supply. But on the very next day it was reported that the Fed expected inflation to stay low and would therefore allow M-1 to exceed its targets. The report sparked a major decline in interest rates and the Dow . . . soared another 37 points. (pp. 219-20)

The subsequent recovery, which began at the end of 1982, quickly became very powerful, but persistent fears that the Fed would backslide, at the urging of Milton Friedman and his Monetarist followers, into its bad old Monetarist habits periodically caused interest-rate spikes reflecting rising liquidity premiums as the public built up precautionary cash balances. Luckily, Volcker was astute enough to shrug off the overwrought warnings of Friedman and other Monetarists that rapid increases in the monetary aggregates foreshadowed the imminent return of double-digit inflation.

Thus, the Monetarist obsession with controlling the monetary aggregates senselessly prolonged an already deep recession that, by Q1 1982, had already slain the inflationary dragon, inflation having fallen to less than half its 1981 peak while GDP actually contracted in nominal terms. But because the money supply was expanding at a faster rate than was acceptable to Monetarist ideology, the Fed continued its futile but destructive campaign to keep the monetary aggregates from overshooting their arbitrary Monetarist target range. It was not until the summer of 1982 that Volcker finally and belatedly decided that enough was enough, announcing that the Fed would declare victory over inflation and call off its Monetarist campaign, even if doing so meant incurring Friedman's wrath and condemnation for abandoning the true Monetarist doctrine.

Which brings me to my second point about Volcker's policy. While it's clear that Volcker's decision to adopt control over the monetary aggregates as the focus of monetary policy was disastrously misguided, monetary policy can't be conducted without some target. Although the Fed's interest rate can serve as a policy instrument, it is not a plausible policy target. The preferred policy target is generally thought to be the rate of inflation; the Fed, after all, is mandated to achieve price stability, which is usually understood to mean targeting a rate of inflation of about 2%. A more sophisticated alternative would be to aim at a suitable price-level path, one allowing some upward movement, say, at a 2% annual rate. The difference between an inflation target and a moving price-level target is that an inflation target is unaffected by past deviations of actual from targeted inflation, whereas a moving price-level target requires some catch-up inflation to make up for past below-target inflation and reduced inflation to compensate for past above-target inflation.
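
To make that distinction concrete, here is a minimal numerical sketch. It is my own illustration with made-up numbers, not anything from the original discussion: suppose the target is 2% a year and inflation comes in at zero in year one. An inflation-targeting central bank simply aims for 2% again from wherever the price level happens to be, while a central bank targeting a 2% price-level path must engineer roughly 4% catch-up inflation in year two to return to the path.

    # Minimal sketch (my illustration, hypothetical numbers): inflation target
    # versus a moving price-level target after a one-year shortfall.
    target = 0.02                                  # 2% annual objective
    p_start = 100.0
    p_year1 = p_start * (1 + 0.0)                  # year-1 inflation undershoots at 0%

    # Inflation targeting: bygones are bygones; aim for 2% from the actual level.
    p_year2_inflation_target = p_year1 * (1 + target)        # 102.0

    # Price-level targeting: aim for the original 2%-per-year path,
    # so the year-1 shortfall must be made up with catch-up inflation.
    p_year2_level_target = p_start * (1 + target) ** 2        # 104.04
    catch_up = p_year2_level_target / p_year1 - 1              # about 4.04% in year 2

    print(p_year2_inflation_target, p_year2_level_target, round(100 * catch_up, 2))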

However, the 1981-82 recession shows exactly why an inflation target, or even a moving price-level target, is a bad idea. By almost any comprehensive measure, inflation was still positive throughout the 1981-82 recession, though the producer price index was nearly flat. Targeting inflation during the 1981-82 recession would therefore have provided almost as bad a guide for monetary policy as the monetary aggregates, because most measures of inflation were still running between 3 and 5 percent even at the depth of the recession. Inflation targeting is thus, on its face, an unreliable basis for conducting monetary policy.

But the deeper problem with targeting inflation is that seeking to achieve an inflation target during a recession, when the very existence of a recession is presumptive evidence of the need for monetary stimulus, is actually a recipe for disaster, or, at the very least, for needlessly prolonging a recession. In a recession, the goal of monetary policy should be to stabilize the rate of increase in nominal spending along a time path consistent with the desired rate of inflation. Thus, as long as output is contracting or increasing only very slowly, the desired rate of inflation should be higher than the desired long-term rate. The appropriate strategy for achieving an inflation target ought to be to let inflation be brought down by the accelerating expansion of output and employment characteristic of most recoveries, relative to a stable expansion of nominal spending.

The true goal of monetary policy should always be to maintain a time path of total spending consistent with a desired price-level path over time. But it should not be the objective of monetary policy always to be as close as possible to the desired path, because trying to stay on that path would likely destabilize the real economy. Market monetarists argue that the goal of monetary policy ought to be to keep nominal GDP expanding at whatever rate is consistent with maintaining the desired long-run price-level path. That is certainly a reasonable practical rule for monetary policy, but the policy criterion I have discussed here would, at least in principle, be consistent with a more activist approach in which the monetary authority would seek to hasten the restoration of full employment during recessions by temporarily increasing the rate of monetary expansion and of nominal GDP growth as long as real output and employment remained below the maximum levels consistent with the desired price-level path over time. But such a strategy would require the monetary authority to be able to fine-tune its monetary expansion so that it was tapered off just as the economy was reaching its maximum sustainable output and employment path. Whether such fine-tuning would be possible in practice is a question to which I don't think we now know the answer.
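
For concreteness, here is a minimal sketch of the kind of level-path rule with temporary catch-up growth described above. It is my own stylized illustration, with an assumed 5% target path and an arbitrary catch-up coefficient, not a rule proposed anywhere in this post:

    # Minimal sketch (my illustration, assumed numbers): a nominal-spending rule
    # that follows a 5%-per-year target path and temporarily prescribes faster
    # nominal-GDP growth while spending is below the path, tapering as the gap closes.
    TREND_GROWTH = 0.05          # assumed target growth rate of nominal spending
    BASE_PATH = 100.0            # nominal GDP on the target path at year 0

    def target_path(year: int) -> float:
        """Desired level of nominal GDP in a given year."""
        return BASE_PATH * (1 + TREND_GROWTH) ** year

    def prescribed_growth(year: int, actual_ngdp: float, catch_up_share: float = 0.5) -> float:
        """Trend growth plus a fraction of any shortfall from the target path."""
        shortfall = max(target_path(year) - actual_ngdp, 0.0)
        return TREND_GROWTH + catch_up_share * shortfall / actual_ngdp

    # Example: in year 2 nominal GDP is 104 instead of the 110.25 on the path,
    # so the rule prescribes roughly 5% plus half of the ~6% gap, i.e. about 8% growth.
    print(round(100 * prescribed_growth(2, 104.0), 1))

Whether the tapering such a rule requires could be timed accurately enough in practice is, as noted above, an open question.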

 

Judy Shelton Speaks Up for the Gold Standard

I have been working on a third installment in my series on how, with a huge assist from Arthur Burns, things fell apart in the 1970s. In my third installment, I will discuss the sad denouement of Burns's misunderstandings and mistakes, when Paul Volcker administered a brutal dose of tight money that, in the 1981-82 recession, caused the worst downturn and highest unemployment since the Great Depression. But having seen another one of Judy Shelton's less than enlightening op-eds arguing for a gold standard in the formerly respectable editorial section of the Wall Street Journal, I am going to pause my account of Volcker's monetary policy in the early 1980s to give Dr. Shelton my undivided attention.

The opening paragraph of Dr. Shelton’s op-ed is a less than auspicious start.

Since President Trump announced his intention to nominate Herman Cain and Stephen Moore to serve on the Federal Reserve’s board of governors, mainstream commentators have made a point of dismissing anyone sympathetic to a gold standard as crankish or unqualified.

That is a totally false charge. Since Herman Cain and Stephen Moore were nominated, they have been exposed as incompetent and unqualified to serve on the Board of Governors of the world’s most important central bank. It is not support for reestablishing the gold standard that demonstrates their incompetence and lack of qualifications. It is true that most economists, myself included, oppose restoring the gold standard. It is also true that most supporters of the gold standard, like, say — to choose a name more or less at random — Ron Paul, are indeed cranks and unqualified to hold high office, but there is indeed a minority of economists, including some outstanding ones like Larry White, George Selgin, Richard Timberlake and Nobel Laureate Robert Mundell, who do favor restoring the gold standard, at least under certain conditions.

But Cain and Moore are so unqualified and so incompetent that they are incapable of doing more than mouthing platitudes about how wonderful it would be to have a dollar as good as gold by restoring some unspecified link between the dollar and gold. Because of their manifest ignorance about how a gold standard would work now or how it did work when it was in operation, they were unprepared to defend their support of a gold standard when called upon to do so by inquisitive reporters. So they just lied and denied that they had ever supported returning to the gold standard. Thus, in addition to being ignorant, incompetent and unqualified to serve on the Board of Governors of the Federal Reserve, Cain and Moore exposed their own foolishness and stupidity, because it was easy for reporters to dig up multiple statements by both aspiring central bankers explicitly calling for a gold standard to be restored, along with muddled utterances bearing at least a vague resemblance to support for the gold standard.

So Dr. Shelton, in accusing mainstream commentators of dismissing anyone sympathetic to a gold standard as crankish or unqualified, is accusing them of a level of intolerance and closed-mindedness for which she supplies not a shred of evidence.

After making a defamatory accusation with no basis in fact, Dr. Shelton turns her attention to a strawman whom she slays mercilessly.

But it is wholly legitimate, and entirely prudent, to question the infallibility of the Federal Reserve in calibrating the money supply to the needs of the economy. No other government institution had more influence over the creation of money and credit in the lead-up to the devastating 2008 global meltdown.

Where to begin? The Federal Reserve has not been targeting the quantity of money in the economy as a policy instrument since the early 1980s, when the Fed misguidedly used the quantity of money as the target of its anti-inflation strategy. After acknowledging that mistake, the Fed has ever since eschewed attempts to conduct monetary policy by targeting any monetary aggregate. It is through the independent choices and decisions of individual agents and of many competing private banking institutions, not the dictate of the Federal Reserve, that the quantity of money in the economy at any given time is determined. It is indeed true that the Federal Reserve played a great role in the run-up to the 2008 financial crisis, but its mistake had nothing to do with the amount of money being created. Rather, the problem was that the Fed kept its policy interest rate at too high a level throughout 2008 because of misplaced inflation fears, fueled by a temporary increase in commodity prices, that deterred it from providing the monetary stimulus needed to counter a rapidly deepening recession.

But guess who was urging the Fed to raise its interest rate in 2008 exactly when a cut in interest rates was what the economy needed? None other than the Wall Street Journal editorial page. And guess who was the lead editorial writer on the Wall Street Journal in 2008 for economic policy? None other than Stephen Moore himself. Isn’t that special?

I will forbear from discussing Dr. Shelton’s comments on the Fed’s policy of paying interest on reserves, because I actually agree with her criticism of the policy. But I do want to say a word about her discussion of currency manipulation and the supposed role of the gold standard in minimizing such currency manipulation.

The classical gold standard established an international benchmark for currency values, consistent with free-trade principles. Today’s arrangements permit governments to manipulate their currencies to gain an export advantage.

Having previously explained to Dr. Shelton that currency manipulation to gain an export advantage depends not just on the exchange rate, but also on the monetary policy associated with that exchange rate, I have to admit some disappointment that my previous efforts to instruct her don't seem to have improved her understanding of the ABCs of currency manipulation. But I will try again. Let me just quote from my last attempt to educate her.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard.
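
To make the mechanism in the quoted passage concrete, here is a toy simulation. It is entirely my own stylized illustration with made-up numbers (the proportionality constants are arbitrary), not anything from the original tutorial: a 10% undervaluation generates an export surplus, and unless the central bank is willing to keep absorbing the resulting money inflow as foreign-exchange reserves, the inflow raises domestic prices and steadily erodes the competitive edge.

    # Toy sketch (my illustration, arbitrary numbers): a depressed exchange rate
    # gives only a transitory advantage unless the central bank keeps absorbing
    # the money inflow as foreign-exchange reserves.
    undervaluation = 0.10        # initial real undervaluation of the currency
    price_level = 100.0          # domestic price level
    reserves = 0.0               # central bank's foreign-exchange reserves
    absorb_inflow = False        # set True for unlimited reserve accumulation

    for period in range(1, 6):
        # competitive edge remaining after domestic prices have adjusted
        real_edge = (1 + undervaluation) * 100.0 / price_level - 1
        surplus = max(real_edge, 0.0) * 50.0     # stylized: surplus proportional to edge
        if absorb_inflow:
            reserves += surplus                  # inflow parked as reserves; prices unchanged
        else:
            price_level *= 1 + surplus / 200.0   # inflow raises domestic prices and wages
        print(period, round(100 * real_edge, 2), round(price_level, 2), round(reserves, 1))

With absorb_inflow set to True the edge persists, but only because reserves accumulate without limit, which is precisely the point of the passage quoted above.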

Dr. Shelton believes that restoring a gold standard would usher in a period of economic growth like the one that followed World War II under the Bretton Woods System. Well, Dr. Shelton might want to reconsider how well the Bretton Woods system worked to the advantage of the United States.

The fact is that, as Ralph Hawtrey pointed out in his Incomes and Money, the US dollar was overvalued relative to the currencies of most of its European trading partners, which is why unemployment in the US was chronically above 5% from 1954 to 1965. With undervalued currencies, West Germany, Italy, Belgium, Britain, France and Japan all had much lower unemployment than the US. It was only after John Kennedy became President in 1961, when the Federal Reserve systematically loosened monetary policy, forcing Germany and other countries to revalue their currencies upward to avoid importing US inflation, that the US was able to redress the overvaluation of the dollar. But in doing so, the US also gradually rendered the $35/ounce price of gold, at which it maintained a kind of semi-convertibility of the dollar, unsustainable, leading a decade later to the final abandonment of the gold-dollar peg.

Dr. Shelton is obviously dedicated to restoring the gold standard, but she really ought to study up on how the gold standard actually worked in its previous incarnations and semi-incarnations, before she opines any further about how it might work in the future. At present, she doesn’t seem to be knowledgeable about how the gold standard worked in the past, and her confidence that it would work well in the future is entirely misplaced.

Ralph Hawtrey Wrote the Book that Arthur Burns Should Have Read — but Didn’t

In my previous post I wrote about the mistakes made by Arthur Burns after Nixon appointed him Chairman of the Federal Reserve Board. Here are the critical missteps of Burns’s unfortunate tenure.

1. Upon becoming chairman in January 1970, with inflation running at over 5% despite a modest tightening by his predecessor in 1969, Burns further tightened monetary policy, causing a downturn and a recession lasting the whole of 1970. The recession was politically damaging to Nixon, leading to sizable Republican losses in the November midterm elections, and causing Nixon to panic about losing his re-election bid in 1972. In his agitation, Nixon then began badgering Burns to loosen monetary policy.

2. Yielding to Nixon's demands, Burns eased monetary policy sufficiently to allow a modest recovery to get under way in 1971. But the recovery was too tepid to suit Nixon. Fearing the inflationary implications of a further monetary loosening, Burns began publicly lobbying for the adoption of an incomes policy to limit the increase of wages set by collective bargaining between labor unions and major businesses.

3. Burns's unwillingness to provide the powerful stimulus desired by Nixon until an incomes policy was in place to hold down inflation led Nixon to abandon his earlier opposition to wage-and-price controls. On August 15, 1971, Nixon imposed a 90-day freeze on all wages and prices, to be followed by comprehensive wage-and-price controls. With controls in place, Burns felt secure in accelerating the rate of monetary expansion, leaving it to those controlling wages and prices to keep inflation within acceptable bounds.

4. With controls in place, monetary expansion at first fueled rapid growth of output, but as time passed, the increase in spending was increasingly reflected in inflation rather than output growth. By Q4 1973, inflation had risen to 7%, a rate only marginally affected by the Arab embargo on oil shipments to the United States and the accompanying general reduction in oil output, which led to a quadrupling of oil prices by early 1974.

5. The sharp oil-price increase simultaneously pushed inflation above the 7% rate it had reached at the end of 1973 even as it caused a deep downturn and recession in the first quarter of 1974. Rather than accommodate the increase in oil prices by tolerating a temporary increase in inflation, Burns sharply tightened monetary policy, reducing the rate of monetary expansion so that the rate of growth of total spending dropped precipitously. Given the increase in oil prices, the drop in total spending caused a major contraction in output and employment, resulting in the deepest recession since 1937-38.

These mistakes all stemmed from a failure by Burns to understand the rationale of an incomes policy. Burns was not alone in that failure, which was actually widespread at the time. But the rationale for such a policy and the key to its implementation had already been spelled out cogently by Ralph Hawtrey in his 1967 diagnosis of the persistent failures of British monetary policy and macroeconomic performance in the post World War II period, failures that had also been deeply tied up in the misunderstanding of the rationale for – and the implementation of — an incomes policy. Unlike Burns, Hawtrey did not view an incomes policy as a substitute for, or an alternative to, monetary policy to reduce inflation. Rather, an incomes policy was precisely the use of monetary policy to achieve a rate of growth in total spending and income that could be compatible with full employment, provided the rate of growth of wages was consistent with full employment.

In Burns's understanding, the role of an incomes policy was to prevent wage increases from driving production costs up so high that businesses could not operate profitably at full capacity unless the Federal Reserve allowed inflation to increase further. If the wage increases negotiated by the unions exceeded the level compatible with full employment at the targeted rate of inflation, businesses would reduce output and lay off workers. Faced with the choice between accommodating those excessive wage increases with higher inflation and tolerating increased unemployment, the Fed, or any monetary authority, would be caught in the dreaded straits of Scylla and Charybdis (aka between a rock and a hard place).

What Burns evidently didn't understand, or chose to ignore, was that adopting an incomes policy to restrain wage increases did not allow the monetary authority to implement a monetary policy that would cause nominal GDP to rise at a rate faster than was consistent with full employment at the target rate of inflation. If, for example, the growth of the labor force and the expected increase in productivity were consistent with a 4% rate of real GDP growth over time, and the monetary authority was aiming for an inflation rate no greater than 3%, the monetary authority could not allow nominal GDP to grow at a rate above 7%.
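
As a back-of-the-envelope check of that arithmetic (my own illustration, using the numbers in the paragraph above), nominal GDP growth is approximately real GDP growth plus inflation:

    # Nominal GDP growth is (approximately) real growth plus inflation.
    real_growth = 0.04        # labor-force growth plus productivity growth
    inflation_ceiling = 0.03  # the monetary authority's inflation objective

    max_ngdp_growth = (1 + real_growth) * (1 + inflation_ceiling) - 1
    print(round(100 * max_ngdp_growth, 2))   # 7.12, i.e. roughly the 7% ceiling in the text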

This conclusion is subject to the following qualification. During a transition from high unemployment to full employment, a faster rate of nominal GDP growth than the posited 7% rate could hasten the restoration of full employment. But temporarily speeding nominal GDP growth would also require that, as a state of full employment was approached, the growth of nominal GDP be tapered off and brought down to a sustainable rate.

But what if an incomes policy does keep the rate of increase in wages below the rate consistent with 3% inflation? Could the monetary authority then safely conduct a monetary policy that increased the rate of nominal GDP growth in order to accelerate real economic growth without breaching the 3% inflation target? Once again, the answer is that real GDP growth can be accelerated only as long as sufficient slack remains in an economy with less than full employment so that accelerating spending growth does not result in shortages of labor or intermediate products. Once shortages emerge, wages or prices of products in short supply must be raised to allocate resources efficiently and to prevent shortages from causing production breakdowns.

Burns might have pulled off a remarkable feat by ensuring Nixon's re-election in 1972 with a massive monetary stimulus that produced, in Q4 of 1972, the fastest increase in nominal GDP since the Korean War, while wage-and-price controls ensured that the monetary stimulus would be channeled into increased output rather than accelerating inflation. But that strategy was viable only while sufficient slack remained to allow additional spending to call forth further increases in output rather than cause either price increases or, if wages and prices are subject to binding controls, shortages of supply. Early in 1973, as inflation began to increase and real GDP growth began to diminish, the time to slow down monetary expansion had arrived. But Burns was insensible to the obvious change in conditions.

Here is where we need to focus the discussion directly on Hawtrey's book Incomes and Money. By the time Hawtrey wrote this book, his last, at the age of 87, he had long been eclipsed, not only in the public eye but in the economics profession, by his slightly younger contemporary and fellow Cambridge graduate, J. M. Keynes. For a while in the 1920s, Hawtrey might have been the more influential of the two, but after The General Theory was published, Hawtrey was increasingly marginalized as new students no longer studied his writings, while older economists who still remembered Hawtrey and were familiar with his work gradually left the scene. Moreover, as a civil servant for most of his career, Hawtrey never collected around himself a group of disciples who, because they themselves had a personal stake in the ideas of their mentor, would carry on and propagate those ideas. By the end of World War II, Hawtrey was largely unknown to younger economists.

When I was a graduate student in the early 1970s, Hawtrey's name came to my attention only occasionally, mostly in the context of his having been a notable pre-Keynesian monetary theorist whose ideas were of interest mainly to historians of thought. My most notable recollection relating to Hawtrey is of a conversation with Hayek, whose specific context I no longer recall, in which Hayek mentioned Hawtrey to me as an economist whose work had been unduly neglected and whose importance was insufficiently recognized, even while acknowledging that he himself had written critically about what he regarded as Hawtrey's overemphasis on changes in the value of money as the chief cause of business-cycle fluctuations.

It was probably because I remembered that recommendation that when I was in Manhattan years later and happened upon a brand new copy of Incomes and Money on sale in a Barnes and Noble bookstore, I picked it up and bought it. But buying it on the strength of Hayek’s recommendation didn’t lead me to actually read it. I actually can’t remember when I finally did read the book, but it was likely not until after I discovered that Hawtrey had anticipated the gold-appreciation theory of the Great Depression that I had first heard, as a graduate student, from Earl Thompson.

In Incomes and Money, Hawtrey focused not on the Great Depression, which he notably had discussed in earlier books like The Gold Standard and The Art of Central Banking, but on the experience of Great Britain after World War II. That experience was conditioned on the transition from the wartime controls under which Britain had operated in World War II to the partial peacetime decontrol under the Labour government that assumed power at the close of World War II. One feature of wartime controls was that, owing to the shortages and rationing caused by price controls, substantial unwanted holdings of cash were accumulating in the hands of individuals unable to use their cash to purchase desired goods and services.

The US dollar and the British pound were then the two primary currencies used in international trade, but as long as products were in short supply because of price controls, neither currency could serve as an effective medium of exchange for international transactions, which were largely conducted via managed exchange or barter between governments. After the war, the US moved quickly to decontrol prices, allowing prices to rise sufficiently to eliminate excess cash, thereby enabling the dollar to again function as an international medium of exchange and creating a ready demand to hold dollar balances outside the US. The Labour government being ideologically unwilling to scrap price controls, excess holdings of pounds within Britain could only be disposed of insofar as they could be exchanged for dollars with which products could be procured from abroad.

There was therefore intense British demand for dollars but little or no American demand for pounds, an imbalance reflected in a mounting balance-of-payments deficit. The balance-of-payments deficit was misunderstood and misinterpreted as an indication that British products were uncompetitive, British production costs (owing to excessive British wages) supposedly being too high to allow the British products to be competitive in international markets. If British production costs were excessive, then the appropriate remedy was either to cut British wages or to devalue the pound to reduce the real wages paid to British workers. But Hawtrey maintained that the balance-of-payments deficit was a purely monetary phenomenon — an excess supply of pounds and an excess demand for dollars — that could properly be remedied either by withdrawing excess pounds from the holdings of the British public or by decontrolling prices so that excess pounds could be used to buy desired goods and services at market-clearing prices.

Thus, almost two decades before the Monetary Approach to the Balance of Payments was developed by Harry Johnson, Robert Mundell and associates, Hawtrey had already in the 1940s anticipated its principal conclusion that a chronic balance-of-payments disequilibrium results from a monetary policy that creates either more or less cash than the public wishes to hold rather than a disequilibrium in its exchange rate. If so, the remedy for the disequilibrium is not a change in the exchange rate, but a change in monetary policy.

In his preface to Incomes and Money, Hawtrey set forth the main outlines of his argument.

This book is primarily a criticism of British monetary policy since 1945, along with an application of the criticism to questions of future policy.

The aims of policy were indicated to the Radcliffe Committee in 1957 in a paper on Monetary Policy and the Control of Economic Conditions: “The primary object of policy has been to combine a high and stable level of employment with a satisfactory state of the balance of payments”. When Sir Robert Hall was giving oral evidence on behalf of the Treasury, Lord Radcliffe asked, “Where does sound money as an objective stand?” The reply was that “there may well be a conflict between the objective of high employment and the objective of sound money”, a dilemma which Treasury did not claim to have solved.

Sound money here meant price stability, and Sir Robert Hall admitted that “there has been a practically continuous rise in the price level. The rise in prices of manufactures since 1949 had in fact been 40 percent. The wage level had risen 70 percent.

Government pronouncements ever since 1944 had repeatedly insisted that wages ought not to rise more than in proportion to productivity. This formula, meaning in effect a stable price level of home production, embodies the incomes policy which is now professed by all parties. But it has never been enforced through monetary policy. It has only been enjoined by exhortation and persuasion. (p. ix)

The lack of commitment to a policy of stabilizing the price level was the key point for Hawtrey. If policy makers desired to control the rise in the price level by controlling the increase in incomes, they could, in Hawtrey’s view, only do so by way of a monetary policy whose goal was to keep total spending (and hence total income) at a level – or on a path – that was consistent with the price-level objective that policy-makers were aiming for. If there was also a goal of full employment, then the full-employment goal could be achieved only insofar as the wage rates arrived at in bargaining between labor and management were consistent with the targeted level of spending and income.

Incomes policy and monetary policy cannot be separated. Monetary policy includes all those measures by which the flow of money can be accelerated or retarded, and it is by them that the money value of a given structure of incomes is determined. If monetary policy is directed by some other criterion than the desired incomes policy, the income policy gives way to the other criterion. In particular, if monetary policy is directed to maintaining the money unit at a prescribed exchange rate parity, the level of incomes will adapt itself to this parity and not to the desired policy.

When the exchange parity of sterling was fixed in 1949 at $2.80, the pound had already been undervalued at the previous rate of $4.03. The British wage level was tied by the rate of exchange to the American. The level of incomes was predetermined, and there was no way for an incomes policy to depart from it. Economic forces came into operation to correct the undervaluation by an increase in the wage level. . . .

It was a paradox that the devaluation, which had been intended as a remedy for an adverse balance of payments, induced an inflation which was liable itself to cause an adverse balance. The undervaluation did indeed swell the demand for British exports, but when production passed the limit of capacity, and output could not be further increased, the monetary expansion continued by its own momentum. Demand expanded beyond output and attracted an excess of imports. There was no dilemma, because the employment situation and the balance of payments situation both required the same treatment, a monetary contraction. The contraction would not cause unemployment, provided it went no further than to eliminate over-employment.

The White Paper of 1956 on the Economic Implications of Full Employment, while confirming the Incomes Policy of price stabilization, placed definitely on the Government the responsibility for regulating the pressure of demand through “fiscal, monetary and social policies”. The Radcliffe Committee obtained from the Treasury the admission that this was not being done. No measures other than persuasion and exhortation were being taken to give effect to the incomes policy. Reluctant as the authorities were to resort to deflation, they nevertheless imposed a Bank rate of 7 per cent and other contractive measures to cope with a balance of payments crisis at the very moment when the Treasury representatives were appearing before the Committee. But that did not mean that they were prepared to pursue a contractive policy in support of the incomes policy. The crises of 1957 and 1961 were no more than episodes, temporarily interfering with the policy of easy credit and expansion. The crisis of 1964-6 has been more than an episode, only because the deflationary measures were long delayed, and when taken, were half-hearted.

It would be unfair to impute the entire responsibility for these faults of policy to Ministers. They are guided by their advisers, and they can plead in their defence that their misconceptions have been shared by the vast majority of economists. . . .

The fault of traditional monetary theory has been that it is static, and that is still true of Keynes’s theory. But a peculiarity of monetary policy is that, whenever practical measures have to be taken, the situation is always one of transition, when the conditions of static equilibrium have been departed from. The task of policy is to decide the best way to get back to equilibrium, and very likely to choose which of several alternative equilibrium positions to aim at. . . .

An incomes policy, or a wages policy, is the indispensable means of stabilizing the money unit when an independent metallic standard has failed us. Such a policy can only be given effect by a regulation of credit. The world has had long experience of the regulation of credit for the maintenance of a metallic standard. Maintenance of a wages standard requires the same instruments but will be more exacting because it will be guided by many symptoms instead of exclusively by movements of gold, and because it will require unremitting vigilance instead of occasional interference. (pp. ix-xii)

Hawtrey identified a confusion between two conceptions of an incomes policy: one that aims, through the appropriate conduct of monetary policy, at a level of income consistent with full employment at a given level of wages, and another that aims at the direct control of wages. That confusion was precisely what led to the consistent failure of British monetary policy after World War II and to the failure of Arthur Burns. The essence of an incomes policy was to control total spending by way of monetary policy while gaining the cooperation of labor unions and business to prevent wage increases that would be inconsistent with full employment at the targeted level of income. Only monetary policy could determine the level of income; the only role of exhortation and persuasion, or of direct controls, was to prevent excessive wage increases that would keep full employment from being achieved at the targeted income level.

After the 1949 devaluation, the Labour government appealed to the labour unions, its chief constituency, not to demand wage increases larger than productivity increases, so that British exporters could maintain the competitive advantage provided them by devaluation. Understanding that the protectionist motive for the devaluation was to undervalue the pound with a view to promoting exports and discouraging imports, Hawtrey also explained why that protectionist goal had been subverted by the low-interest-rate, expansionary monetary policy adopted by the Labour government to keep unemployment well below 2 percent.

British wages therefore rose not only because the pound was undervalued, but also because monetary expansion increased aggregate demand faster than British productive capacity was increasing, adding further upward pressure on British wages and labor costs. Excess aggregate demand in Britain also meant that domestic output that might have been exported was instead sold to domestic customers, while imports were drawn in to satisfy the unmet demands of domestic consumers, so that the British trade balance showed little improvement notwithstanding a 30% devaluation.

In this analysis, Hawtrey anticipated Max Corden's theory of exchange-rate protection, identifying the essential mechanism by which a manipulated nominal exchange rate can subsidize the tradable-goods sector (domestic export industries and domestic import-competing industries): a tight-money policy that creates an excess demand for cash, forcing the public to reduce spending as it tries to accumulate the desired additional cash holdings. The resulting reduction in demand for home production shifts productive resources from the non-tradable-goods sector to the tradable-goods sector.

To sum up, what Burns might have learned from Hawtrey was that even if some form of control of wages was essential for maintaining full employment in an economic environment in which strong labor unions could bargain effectively with employers, that control over wages did not — and could not — free the central bank from its responsibility to control aggregate demand and the growth of total spending and income.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
