The Rises and Falls of Keynesianism and Monetarism

The following is extracted from a paper on the history of macroeconomics that I’m now writing. I don’t know yet where or when it will be published and there may or may not be further installments, but I would be interested in any comments or suggestions that readers might have. Regular readers, if there are any, will probably recognize some familiar themes that I’ve been writing about in a number of my posts over the past several months. So despite the diminished frequency of my posting, I haven’t been entirely idle.

Recognizing the cognitive dissonance between the vision of the optimal equilibrium of a competitive market economy described by Marshallian economic theory and the massive unemployment of the Great Depression, Keynes offered an alternative, and, in his view, more general, theory, the optimal neoclassical equilibrium being a special case.[1] The explanatory barrier that Keynes struggled, not quite successfully, to overcome in the dire circumstances of the 1930s, was why market-price adjustments do not have the equilibrating tendencies attributed to them by Marshallian theory. The power of Keynes’s analysis, enhanced by his rhetorical gifts, enabled him to persuade much of the economics profession, especially many of the most gifted younger economists at the time, that he was right. But his argument, failing to expose the key weakness in the neoclassical orthodoxy, was incomplete.

The full title of Keynes’s book, The General Theory of Employment, Interest and Money, identifies the key elements of his revision of neoclassical theory. First, contrary to a simplistic application of Marshallian theory, the mass unemployment of the Great Depression would not be substantially reduced by cutting wages to “clear” the labor market. The reason, according to Keynes, is that the levels of output and unemployment depend not on money wages, but on planned total spending (aggregate demand). Mass unemployment is the result of too little spending, not excessive wages. Reducing wages would simply cause a corresponding decline in total spending, without increasing output or employment.

If wage cuts do not increase output and employment, the ensuing high unemployment, Keynes argued, is involuntary, not the outcome of optimizing choices made by workers and employers. Ever since, the notion that unemployment can be involuntary has remained a contested issue between Keynesians and neoclassicists, a contest requiring resolution in favor of one or the other theory or some reconciliation of the two.

Besides rejecting the neoclassical theory of employment, Keynes also famously disputed the neoclassical theory of interest by arguing that the rate of interest is not, as in the neoclassical theory, a reward for saving, but a reward for sacrificing liquidity. In Keynes’s view, rather than equilibrate savings and investment, interest equilibrates the demand to hold the money issued by the monetary authority with the amount issued by the monetary authority. Under the neoclassical theory, it is the price level that adjusts to equilibrate the demand for money with the quantity issued.

Had Keynes been more attuned to the Walrasian paradigm, he might have recast his argument that cutting wages would not eliminate unemployment by noting the inapplicability of a Marshallian supply-demand analysis to the labor market (accounting for over 50 percent of national income), because wage cuts would shift demand and supply curves in almost every other input and output market, grossly violating the ceteris-paribus assumption underlying the Marshallian supply-demand paradigm. When every change in the wage shifts supply and demand curves in all markets for goods and services, which in turn causes the labor-demand and labor-supply curves to shift, a supply-demand analysis of aggregate unemployment becomes a futile exercise.

Keynes’s work had two immediate effects on economics and economists. First, it immediately opened up a new field of research – macroeconomics – based on his theory that total output and employment are determined by aggregate demand. Representing only one element of Keynes’s argument, the simplified Keynesian model, on which macroeconomic theory was founded, seemed disconnected from either the Marshallian or Walrasian versions of neoclassical theory.

Second, the apparent disconnect between the simple Keynesian macro-model and neoclassical theory provoked an ongoing debate about the extent to which Keynesian theory could be deduced, or even reconciled, with the premises of neoclassical theory. Initial steps toward a reconciliation were provided when a model incorporating the quantity of money and the interest rate into the Keynesian analysis was introduced, soon becoming the canonical macroeconomic model of undergraduate and graduate textbooks.

Critics of Keynesian theory, usually those opposed to its support for deficit spending as a tool of aggregate demand management, its supposed inflationary bias, and its encouragement or toleration of government intervention in the free-market economy, tried to debunk Keynesianism by pointing out its inconsistencies with the neoclassical doctrine of a self-regulating market economy. But proponents of Keynesian precepts were also trying to reconcile Keynesian analysis with neoclassical theory. Future Nobel Prize winners like J. R. Hicks, J. E. Meade, Paul Samuelson, Franco Modigliani, James Tobin, and Lawrence Klein all derived various Keynesian propositions from neoclassical assumptions, usually by resorting to the un-Keynesian assumption of rigid or sticky prices and wages.

What both Keynesian and neoclassical economists failed to see is that neoclassical theory, in either its Walrasian or its Marshallian version, notwithstanding the optimality of an economy with equilibrium market prices, cannot explain either how that set of equilibrium prices is, or can be, found, or how it results automatically from the routine operation of free markets.

The assumption made implicitly by both Keynesians and neoclassicals was that, in an ideal perfectly competitive free-market economy, prices would adjust, if not instantaneously, at least eventually, to their equilibrium, market-clearing, levels so that the economy would achieve an equilibrium state. Not all Keynesians, of course, agreed that a perfectly competitive economy would reach that outcome, even in the long-run. But, according to neoclassical theory, equilibrium is the state toward which a competitive economy is drawn.

Keynesian policy could therefore be rationalized as an instrument for reversing departures from equilibrium and ensuring that such departures are relatively small and transitory. Notwithstanding Keynes’s explicit argument that wage cuts cannot eliminate involuntary unemployment, the sticky-prices-and-wages story was too convenient not to be adopted as a rationalization of Keynesian policy while also reconciling that policy with the neoclassical orthodoxy associated with the postwar ascendancy of the Walrasian paradigm.

The Walrasian ascendancy in neoclassical theory was the culmination of a silent revolution beginning in the late 1920s when the work of Walras and his successors was taken up by a younger generation of mathematically trained economists. The revolution proceeded along many fronts, of which the most important was proving the existence of a solution of the system of equations describing a general equilibrium for a competitive economy — a proof that Walras himself had not provided. The sophisticated mathematics used to describe the relevant general-equilibrium models and derive mathematically rigorous proofs encouraged the process of rapid development, adoption and application of mathematical techniques by subsequent generations of economists.

Despite the early success of the Walrasian paradigm, Kenneth Arrow, perhaps the most important Walrasian theorist of the second half of the twentieth century, drew attention to the explanatory gap within the paradigm: how the adjustment of disequilibrium prices is possible in a model of perfect competition in which every transactor takes market price as given. The Walrasian theory shows that a competitive equilibrium ensuring the consistency of agents’ plans to buy and sell results from an equilibrium set of prices for all goods and services. But the theory is silent about how those equilibrium prices are found and communicated to the agents of the model, the Walrasian tâtonnement process being an empirically empty heuristic artifact.

In fact, the explanatory gap identified by Arrow was even wider than he had suggested or realized, for another aspect of the Walrasian revolution of the late 1920s and 1930s was the extension of the equilibrium concept from a single-period equilibrium to an intertemporal equilibrium. Although earlier works by Irving Fisher and Frank Knight laid a foundation for this extension, the explicit articulation of intertemporal-equilibrium analysis was the nearly simultaneous contribution of three young economists, two Swedes (Myrdal and Lindahl) and an Austrian (Hayek), whose contribution, despite being partially incorporated into the canonical Arrow-Debreu-McKenzie version of the Walrasian model, remains insufficiently recognized.

These three economists transformed the concept of equilibrium from an unchanging static economic system at rest to a dynamic system changing from period to period. While Walras and Marshall had conceived of a single-period equilibrium with no tendency to change barring an exogenous change in underlying conditions, Myrdal, Lindahl and Hayek conceived of an equilibrium unfolding through time, defined by the mutual consistency of the optimal plans of disparate agents to buy and sell in the present and in the future.

In formulating optimal plans that extend through time, agents consider both the current prices at which they can buy and sell, and the prices at which they will (or expect to) be able to buy and sell in the future. Although it may sometimes be possible to buy or sell forward at a currently quoted price for future delivery, agents planning to buy and sell goods or services rely, for the most part, on their expectations of future prices. Those expectations, of course, need not always turn out to have been accurate.

The dynamic equilibrium described by Myrdal, Lindahl and Hayek is a contingent event in which all agents have correctly anticipated the future prices on which they have based their plans. In the event that some, if not all, agents have incorrectly anticipated future prices, those agents whose plans were based on incorrect expectations may have to revise their plans or be unable to execute them. But unless all agents share the same expectations of future prices, their expectations cannot all be correct, and some of those plans may not be realized.

The impossibility of an intertemporal equilibrium of optimal plans if agents do not share the same expectations of future prices implies that the adjustment of perfectly flexible market prices is not sufficient for an optimal equilibrium to be achieved. I shall have more to say about this point below, but for now I want to note that the growing interest in the quiet Walrasian revolution in neoclassical theory, which occurred almost simultaneously with the Keynesian revolution, made it inevitable that Keynesian models would be recast in explicitly Walrasian terms.

What emerged from the Walrasian reformulation of Keynesian analysis was the neoclassical synthesis that became the textbook version of macroeconomics in the 1960s and 1970s. But the seemingly anomalous conjunction of both inflation and unemployment during the 1970s led to a reconsideration and widespread rejection of the Keynesian proposition that output and employment are directly related to aggregate demand.

Indeed, supporters of the Monetarist views of Milton Friedman argued that the high inflation and unemployment of the 1970s amounted to an empirical refutation of the Keynesian system. But Friedman’s political conservatism, free-market ideology, and acerbic criticism of Keynesian policies obscured the extent to which his largely atheoretical monetary thinking was influenced by Keynesian and Marshallian concepts, an influence that rendered his version of Monetarism an unattractive alternative for younger monetary theorists, schooled in the Walrasian version of neoclassicism, who were seeking a clear theoretical contrast with the Keynesian macro model.

The brief Monetarist ascendancy following the 1970s inflation conveniently collapsed in the early 1980s, after Friedman’s Monetarist policy advice for controlling the quantity of money proved unworkable. Central banks, foolishly trying to implement that advice, prolonged a needlessly deep recession while consistently overshooting their monetary targets, thereby provoking a long series of embarrassing warnings from Friedman about the imminent return of double-digit inflation.


[1] Hayek, both a friend and a foe of Keynes, would chide Keynes decades after Keynes’s death for calling his theory a general theory when, in Hayek’s view, it was a special theory relevant only in periods of substantially less than full employment, when increasing aggregate demand could increase total output. But in making this criticism, Hayek implicitly assumed the very automatic equilibration mechanism that, in his own theory of intertemporal equilibrium, he had admitted does not exist: no mechanism ensures that general equilibrium obtains.

Sic Transit Inflatio Mundi

Larry Summers continues to lead the charge for a quick, decisive tightening of monetary policy by the Federal Reserve to head off an inflationary surge that, he believes, is about to overtake us. Undoubtedly one of the most capable economists of his generation, Summers also had a long career as a policy maker at the highest levels, so his advice cannot be casually dismissed. Even aside from Summers’s warning, the current economic environment fully justifies heightened concern caused by the recent uptick in inflation.

I am, nevertheless, not inclined to share Summers’s confidence in his oft-repeated predictions of resurgent inflation unless monetary policy is substantially tightened soon to prevent current inflation from becoming entrenched in the expectations of households and businesses. Summers’s latest warning came in a Washington Post op-ed following the statement by the FOMC and by Chairman Jay Powell that Fed policy would shift to give priority to maintaining price stability.

After welcoming the FOMC statement, Summers immediately segued into a critique of the Fed position on every substantive point.

There have been few, if any, instances in which inflation has been successfully stabilized without recession. Every U.S. economic expansion between the Korean War and Paul A. Volcker’s slaying of inflation after 1979 ended as the Federal Reserve tried to put the brakes on inflation and the economy skidded into recession. Since Volcker’s victory, there have been no major outbreaks of inflation until this year, and so no need for monetary policy to engineer a soft landing of the kind that the Fed hopes for over the next several years.

The not-very-encouraging history of disinflation efforts suggests that the Fed will need to be both skillful and lucky as it seeks to apply sufficient restraint to cause inflation to come down to its 2 percent target without pushing the economy into recession. Unfortunately, several aspects of the Open Market Committee statement and Powell’s news conference suggest that the Fed may not yet fully grasp either the current economic situation or the implications of current monetary policy.

Summers cites the recessions between the Korean War and the 1979-82 Volcker Monetarist experiment to support his anti-inflationary diagnosis and remedy. But none of the three recessions in the 1950s during the Eisenhower Presidency was needed to cope with any significant inflationary threat. There was no substantial inflation in the US during the 1950s: the inflation rate never reached 3% in any year between 1953 and 1960, and rarely exceeded 2%.

Inflation during the late 1960s and 1970s was caused by a combination of factors, including both excess demand fueled by Vietnam War spending and politically motivated monetary expansion, plus two oil shocks in 1973-74 and 1979-80, an economic environment with only modest similarity to the current economic situation.

But the important lesson from the disastrous Volcker-Friedman recession is that most of the reduction in inflation following Volcker’s decisive move to tighten monetary policy in early 1981 did not come until a year and a half later, when, with the US unemployment rate above 10%, Volcker finally abandoned the futile and counterproductive Monetarist policy of making the monetary aggregates policy instruments. Had it not been for the Monetarist obsession with controlling the monetary aggregates, a recovery could have started six months to a year earlier than it did, with inflation continuing on its downward trajectory as output and employment expanded.

The key point is that falling output, in and of itself, tends to cause rising, not falling, prices, so that postponing the start of a recovery actually delays, rather than hastens, the reduction of inflation. As I explained in another post, rather than focusing on the monetary aggregates, monetary policy ought to have aimed to reduce the rate of growth of total nominal spending from well over 12% in 1980-81 to a rate of about 7%, which would have been consistent with the informal 4% inflation target that Volcker and Reagan had set for themselves.

The appropriate lesson to take away from the Volcker-Friedman recession of 1981-82 is therefore that a central bank can meet its inflation target by reducing the rate of increase in total nominal spending and income to the rate that, given anticipated real expansion of capacity and productivity, is consistent with its inflation target. The rate of growth in nominal spending and income cannot be controlled with great accuracy, but rates of increase in spending above or below the target rate provide the central bank with real-time indications of whether policy needs to be tightened or loosened to meet the inflation target. That approach would avoid the inordinate cost of reducing inflation associated with the Volcker-Friedman episode.
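The feedback rule just described can be illustrated with a minimal numerical sketch. The code and its names are mine, purely for illustration, and the 3% real capacity-growth figure is an assumption chosen to square with the 7% nominal-spending and 4% inflation figures mentioned earlier; nothing here purports to describe an actual central-bank procedure:

```python
# Hypothetical sketch of a nominal-spending-growth feedback rule.
# Target nominal spending growth = anticipated real capacity growth + inflation target.

def target_nominal_growth(real_capacity_growth: float, inflation_target: float) -> float:
    """Rate of nominal spending growth consistent with the inflation target."""
    return real_capacity_growth + inflation_target

def policy_signal(observed_nominal_growth: float, target: float,
                  tolerance: float = 0.5) -> str:
    """Qualitative real-time indication: tighten, loosen, or hold.

    Growth cannot be controlled with great accuracy, so a tolerance band
    (an arbitrary illustrative value here) avoids reacting to small misses.
    """
    gap = observed_nominal_growth - target
    if gap > tolerance:
        return "tighten"
    if gap < -tolerance:
        return "loosen"
    return "hold"

# 1980-81: nominal spending growing at well over 12%; a 4% inflation target
# plus roughly 3% anticipated real growth implies a target of about 7%.
target = target_nominal_growth(3.0, 4.0)   # 7.0
print(policy_signal(12.0, target))         # tighten
print(policy_signal(7.2, target))          # hold
```

The point of the sketch is only that observed nominal-spending growth relative to the target supplies the real-time tightening or loosening signal, without any attempt to control the monetary aggregates themselves.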

A further aggravating factor in the 1981-82 recession was that interest rates had risen to double-digit levels even before Volcker embarked on his Monetarist anti-inflation strategy, showing how deeply embedded inflation expectations had become in the plans of households and businesses. By contrast, interest rates have actually been falling for months, suggesting that Summers’s warnings about inflation expectations becoming entrenched are overstated.
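The inference from nominal interest rates to embedded inflation expectations rests on the familiar Fisher relation: the ex-ante real rate is approximately the nominal rate minus expected inflation. A minimal sketch, using illustrative round numbers rather than historical data:

```python
# Approximate Fisher relation: real rate = nominal rate - expected inflation.

def ex_ante_real_rate(nominal_rate: float, expected_inflation: float) -> float:
    """Ex-ante real interest rate implied by a nominal rate and expected inflation."""
    return nominal_rate - expected_inflation

# Circa 1980 (illustrative figures): double-digit nominal rates largely
# reflected double-digit expected inflation, leaving only a modest real rate.
print(ex_ante_real_rate(14.0, 12.0))  # 2.0
# By contrast, falling nominal rates alongside steady expected inflation
# suggest expectations are not becoming more deeply entrenched.
print(ex_ante_real_rate(2.0, 2.0))    # 0.0
```

On this arithmetic, persistently high nominal rates before 1981 signaled how deeply inflation expectations had become embedded, while falling nominal rates today point the other way.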

The Fed forecast calls for inflation to significantly subside even as the economy sustains 3.5 percent unemployment — a development without precedent in U.S. economic history. The Fed believes this even though it regards the sustainable level of unemployment as 4 percent. This only makes sense if the Fed is clinging to the idea that current inflation is transitory and expects it to subside of its own accord.

Summers’s factual assertion that the US unemployment rate has never fallen to 3.5% without inflationary stimulus, an assertion predicated on the assumption that the natural (or non-accelerating-inflation) rate of unemployment is firmly fixed at 4%, is not well supported by the data. In 2019 and early 2020, the unemployment rate dropped to 3.5% without evident inflationary pressure. In the late 1990s, unemployment also dropped below 4% without inflationary pressure. So the expectation that a 3.5% unemployment rate could be restored without inflationary pressure may be optimistic, but it’s hardly unprecedented.

Summers suggests that the Fed is confused because it expects the unemployment rate to fall back to the 3.5% rate of 2019 even while supposedly regarding a 4%, not a 3.5%, rate of unemployment as sustainable. According to Summers, reaching a 3.5% rate of unemployment would be possible only if the current increase in the inflation rate is temporary. The bond market seems to share that view with the Fed, given the recent decreases in the yields on Treasury bonds of 5- to 30-year maturities. But Summers takes a different view.

In fact, there is solid reason to think inflation may accelerate. The consumer price index’s shelter component, which represents one-third of the index, has gone up by less than 4 percent, even as private calculations without exception suggest increases of 10 to 20 percent in rent and home prices. Catch-up is likely. More fundamentally, job vacancies are at record levels and the labor market is still heating up, according to the Fed forecast. This portends acceleration rather than deceleration in labor costs — by far the largest cost for the business sector.

Projecting how increases in rent and home prices that have already occurred will affect reported inflation in the future is a tricky exercise. It is nearly certain that those effects will show up in future inflation reports, but precisely because they reflect price increases that have already occurred, they are already baked into those future reports, so they provide an uneasy basis on which to conduct monetary policy. Insofar as inflation is a problem, it is a problem not because of short-term fluctuations in the prices of specific goods, even home prices and rents, or of whole sectors of the economy, but because of generalized and potentially continuing long-term trends affecting the whole structure of prices.

The current number of job vacancies reflects both the demand for, and the supply of, labor. The labor-force participation rate is still well below the pre-pandemic level, reflecting withdrawal from the labor force by workers afraid of contracting the COVID virus, unable to find day care for their children, or deterred by other pandemic-related concerns from seeking or accepting employment. Under such circumstances, the re-allocations associated with high job-vacancy rates are likely to enhance the efficiency and productivity of the workers that are re-employed, and need not exacerbate inflationary pressures.

Presumably, the Fed has judged that current aggregate-demand increases have less to do with observed inflation than labor-supply constraints or other supply-side bottlenecks whose effects on prices are likely self-limiting. This judgment is neither obviously right nor obviously wrong. But, for now at least, it is not unreasonable for the Fed to remain cautious before making a drastic policy change, neither committing itself to an immediate tightening, as Summers is proposing, nor doubling down on a commitment to its current accommodative stance.

Meanwhile, the pandemic-related bottlenecks central to the transitory argument are exaggerated. Prices for more than 80 percent of goods in the CPI have increased more than 3 percent in the past year. With the economy’s capacity growing 2 percent a year and the Fed’s own forecast calling for 4 percent growth in 2022, price pressures seem more likely to grow than to abate.

This argument makes no sense. We have, to be sure, gone through a period of actual broad-based inflation, so pointing out that 80% of goods in the CPI have increased in price by more than 3% in the past year is unsurprising. The bottleneck point is that supply constraints have prevented the real economy from growing as fast as nominal spending has grown. As I’ve pointed out recently, there’s an overhang of cash and liquid assets, accumulated rather than spent during the pandemic, which has amplified aggregate-demand growth since the economy began to recover from the pandemic, opening up previously closed opportunities for spending. The mismatch between the growth of demand and the growth of supply has been manifested in rising inflation. If the bottleneck theory of inflation is true, then the short-term growth potential of the economy is greater than the 2% rate posited by Summers. As bottlenecks are removed and workers that withdrew from the labor force during the pandemic are re-employed, the economy could easily grow faster than Summers is willing to acknowledge. Summers simply assumes, but doesn’t demonstrate, his conclusion.

This all suggests that policy will need to restrain demand to restore price stability.

No, it does not suggest that at all. It only suggests the possibility that demand may have to be restrained to keep prices stable. Recent inflation may have been a delayed response to an expansive monetary policy designed to prevent a contraction of demand during the pandemic. A temporary increase in inflation does not necessarily call for an immediate contractionary response. It’s too early to tell with confidence whether preventing future inflation requires, as Summers asserts, monetary policy to be tightened immediately. That option shouldn’t be taken off the table, but the Fed clearly hasn’t done so.

How much tightening is required? No one knows, and the Fed is right to insist that it will monitor the economy and adjust. We do know, however, that monetary policy is far looser today — in a high-inflation, low-unemployment economy — than it was about a year ago when inflation was below the Fed’s target and unemployment was around 8 percent. With relatively constant nominal interest rates, higher inflation and the expectation of future inflation have led to dramatic reductions in real interest rates over the past year. This is why bubbles are increasingly pervasive in asset markets ranging from crypto to beachfront properties and meme stocks to tech start-ups.

Summers, again, is just assuming, not demonstrating, his own preferred conclusion. A year ago, high unemployment was caused by the unique confluence of essentially simultaneous negative demand and supply shocks. The unprecedented coincidence of two simultaneous shocks posed a unique policy challenge to which the Fed has so far responded with remarkable skill. But the unfamiliar and challenging economic environment remains murky, and premature responses to unclear conditions may not yield the anticipated results. Undaunted by any doubt about his own reading of an opaque situation, Summers displays a self-assurance that is characteristic and impressive, but his argument is less than compelling.

The implication is that restoring monetary policy to a normal posture, let alone to applying restraint to the economy, will require far more than the three quarter-point rate increases the Fed has predicted for next year. This point takes on particular force once it is recognized that, contrary to Powell’s assertion, almost all economists believe there is a lag of about a year between the application of a rate change and its effect. Failure to restore policy neutrality next year means allowing two more years of highly inflationary monetary policy.

All of this suggests that even with its actions this week, the Fed remains well behind the curve in its commitment to fighting inflation. If its statements reflect its convictions, this is a matter of serious concern.

The idea that there is a one-year lag between applying a policy and its effect is hardly credible. The problem is not the length of the lag, but the uncertain effects of policy in a given set of circumstances. The effects of a change in the money stock or a change in the policy rate may not be apparent if they are offset by other changes. The ceteris-paribus proviso that qualifies every analysis of the effects of monetary policy is rarely satisfied in the real world; almost every policy action by the central bank is an uncertain bet. Under current circumstances, the Fed response to the recent increase in inflation seems eminently sensible: signal that the Fed is anticipating the likelihood that monetary policy will have to be tightened if the current rate of increase in nominal spending remains substantially above the rate consistent with the Fed’s average inflation target of 2%, but wait for further evidence before deciding about the magnitude of any changes in the Fed’s policy instruments.

My Paper “Between Walras and Marshall: Menger’s Third Way” Is Now Posted on SSRN

As regular readers of this blog will realize, several of my recent posts (here, here, here, here, and here) have been incorporated in my new paper, which I have been writing for the upcoming Carl Menger 2021 Conference next week in Nice, France. The paper is now available on SSRN.

Here is the abstract to the paper:

Neoclassical economics is bifurcated between Marshall’s partial-equilibrium and Walras’s general-equilibrium analyses. Given the failure of neoclassical theory to explain the Great Depression, Keynes proposed an explanation of involuntary unemployment. Keynes’s contribution was later subsumed under the neoclassical synthesis of the Keynesian and Walrasian theories. Lacking microfoundations consistent with Walrasian theory, the neoclassical synthesis collapsed. But Walrasian GE theory provides no plausible account of how GE is achieved. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian PE analysis, with prices equilibrating supply and demand. But Marshallian PE analysis presumes that all markets, but the small one being analyzed, are at equilibrium, so that price adjustments in the analyzed market neither affect nor are affected by other markets. The demand and cost (curves) of PE analysis are drawn on the assumption that all other prices reflect Walrasian GE values. While based on Walrasian assumptions, modern macroeconomics relies on the Marshallian intuition that agents know or anticipate the prices consistent with GE. Menger’s third way offers an alternative to this conceptual impasse by recognizing that nearly all economic activity is subjective and guided by expectations of the future. Current prices are set based on expectations of future prices, so equilibrium is possible only if agents share the same expectations of future prices. If current prices are set based on differing expectations, arbitrage opportunities are created, causing prices and expectations to change, leading to further arbitrage, expectational change, and so on, but not necessarily to equilibrium.

Here is a link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964127

The current draft is preliminary, and any comments, suggestions, or criticisms from readers would be greatly appreciated.

High-Inflation Anxiety

UPDATE (9:25am 11/16/2021): Thanks to philipji for catching some problematic passages in my initially posted version. I have also revised the opening paragraph, which was less than clear. Apologies for my sloppy late-night editing before posting.

When I started blogging ten-plus years ago, most of my posts were about monetary policy, because I then felt that the Fed was not doing as much as it could, and should, have been doing to promote a recovery from the Little Depression (aka Great Recession), for which the Fed’s policy mistakes bore heavy responsibility. The 2008 financial crisis and the ensuing deep downturn were largely the product of an overly tight monetary policy starting in 2006; despite interest-rate cuts in 2007 and 2008, the Fed’s policy, owing to concerns about rising oil and commodity prices, continued to err on the side of tightness for almost two months after the start of the crisis. The stance of monetary policy cannot be assessed just by looking at the level of the central bank’s policy rate; the stance depends on the relationship between the policy rate and the economic conditions of the moment. The 2-percent Fed Funds target in the summer of 2008, given economic conditions at the time, meant that monetary policy was tight, not easy.

Although, after the crisis, the Fed never did as much as it could — and should — have to promote recovery, it at least took small measures to avoid a lapse into a ruinous deflation, even as many of the sound-money types, egged on by deranged right-wing scare mongers, warned of runaway inflation.

Slowly but surely, a pathetically slow recovery had, by the end of Obama’s second term, brought us back to near full employment. By then, my interest in the conduct of monetary policy had given way to a variety of other concerns as we dove into the anni horribiles of the maladministration of Obama’s successor.

Riding a recovery that started seven and a half years before he took office, and buoyed by a right-wing propaganda juggernaut and a pathetically obscene personality cult that broadcast and amplified his brazenly megalomaniacal self-congratulations for the inherited recovery over which he presided, Obama’s successor watched incompetently as the Covid-19 virus spread through the United States, causing the sharpest drop in output and employment in US history.

Ten months after Obama’s successor departed from the White House, the US economy has recovered much, but not all, of the ground lost during the pandemic, with employment still below its peak at the start of 2020 and real output still lagging the roughly 2% real growth path along which the economy had been moving for most of the preceding decade.

However, the very rapid increase in output in Q2 2021 and the less rapid, but still substantial, increase in output in Q3 2021, combined with inflation that has risen to the highest rates in 30 years, have provoked ominous warnings of resurgent inflation similar to the inflation that lasted from the late 1960s until the early 1980s, ending only with the deep 1981-82 recession caused by the resolute anti-inflation policy administered by Fed Chairman Paul Volcker with the backing of the newly elected Ronald Reagan.

It’s worth briefly revisiting that history (which I have discussed previously on this blog here, here, and especially in the following series (1), (2) and (3) from 2020) to understand the nature of the theoretical misunderstanding and the resulting policy errors in the 1970s and 1980s. While I agree that the recent increase in inflation is worrisome, it’s far from clear that inflation is likely, as many now predict, to get worse, although the inflation risk can’t be dismissed.

What I find equally if not more worrisome about the anti-inflation commentary that we are hearing now from very serious people, like Larry Summers in today’s Washington Post, is how much it sounds like the inflation talk of 2008, which frightened the Fed, then presided over by a truly eminent economist, Ben Bernanke, into thinking that the chief risk facing the economy was rising headline inflation that would cause inflation expectations to become “unanchored.”

So, rather than provide support to an economy rapidly sliding into recession, the FOMC, focused on rapid increases in oil and commodity prices, refused to loosen monetary policy in the summer of 2008, even though the pace of growth in nominal gross domestic product (NGDP), measured year on year, had steadily decelerated. The accompanying table shows the steady decline in the quarterly year-on-year growth of NGDP in each successive quarter between Q1 2004 and Q4 2008. Between 2004 and 2006, the decline was gradual, but it accelerated in 2007, leading to the start of a recession in December 2007.

Source: https://fred.stlouisfed.org/series/DFEDTAR

The decline in the rate of NGDP growth was associated with a gradual increase in the Fed funds target rate from the very low level of 1% that had been maintained until Q2 2004, but by Q2 2006, when the rate reached 5%, the slowdown in the growth of total spending had quickened. As the rate of spending declined, the Fed eased interest rates in the second half of 2007, but that easing was insufficient to prevent an economy, already suffering financial distress after the housing bubble burst in 2006, from lapsing into recession.

Although the Fed cut its interest-rate target substantially in March 2008, the FOMC refused to reduce rates further during the summer of 2008, even as the recession was visibly worsening, for fear that rising headline inflation, associated with very large increases in crude-oil prices (which climbed to a record $130 a barrel) and in commodity prices, would cause inflation expectations to become “unanchored”.

The Fed, to be sure, was confronted with a difficult policy dilemma, but it was a disastrous error to prioritize a speculative concern about the “unanchoring” of long-term inflation expectations over the reality of a fragile and clearly weakening financial system in an economy already clearly in recession. The Fed made the wrong choice, and the crisis came.

That was then, and now is now. The choices are different, but once again, on one side, there is pressure to prioritize the speculative concern about the “unanchoring” of long-term inflation expectations over promoting recovery and increased employment after a recession and a punishing pandemic. And, once again, the concerns about inflation are driven by a likely transitory increase in crude-oil and gasoline prices.

The case for prioritizing the fight against inflation was just made by none other than Larry Summers in an op-ed in the Washington Post. Let’s have a look at Summers’s case for fighting inflation now.

Fed Chair Jerome H. Powell’s Jackson Hole speech in late August provided a clear, comprehensive and authoritative statement, enumerated in five pillars, of the widespread team “transitory” view of inflation that prevailed at that time and shaped policy thinking at the central bank and in the administration. Today, all five pillars are wobbly at best.

First, there was a claim that price increases were confined to a few sectors. No longer. In October, prices for commodity goods outside of food and energy rose at more than a 12 percent annual rate. Various Federal Reserve system indexes that exclude sectors with extreme price movements are now at record highs.

https://www.washingtonpost.com/opinions/2021/11/15/inflation-its-past-time-team-transitory-stand-down/

Summers has a point. Price increases are spreading throughout the economy. However, that doesn’t mean that increasing oil prices are not causing the prices of many other products to increase as well, inasmuch as oil and other substitute forms of energy are so widely used throughout the economy. If the increase in oil prices, and likely in food prices, has peaked, or will soon do so, it does not necessarily make sense to fight a war against an enemy that has retreated or is about to do so.

Second, Powell suggested that high inflation in key sectors, such as used cars and durable goods more broadly, was coming under control and would start falling again. In October, used-car prices accelerated to more than a 30 percent annual inflation rate, new cars to a 17 percent rate and household furnishings by an annualized rate of just above 10 percent.

Id.

Again, citing large increases in the price of cars, when it’s clear that there are special circumstances causing new-car prices to rise rapidly, bringing used-car prices along with them, is not very persuasive, especially when those special circumstances appear likely to be short-lived. To be sure, other durable-goods prices are also rising, but in the absence of a deeper source of inflation, the atmospherics cited by Summers are not that compelling.

Third, the speech pointed out that there was “little evidence of wage increases that might threaten excessive inflation.” This claim is untenable today with vacancy and quit rates at record highs, workers who switch jobs in sectors ranging from fast food to investment banking getting double-digit pay increases, and ominous Employment Cost Index increases.

Id.

Wage increases are usually an indicator of inflation, though, again, the withdrawal, permanent or temporary, of many workers from employment over the past two years is a likely cause of increased wages that is independent of an underlying, ongoing inflationary trend.

Fourth, the speech argued that inflation expectations remained anchored. When Powell spoke, market inflation expectations for the term of the next Federal Reserve chair were around 2.5 percent. Now they are about 3.1 percent, up half a percentage point in the past month alone. And consumer sentiment is at a 10-year low due to inflation fears.

Id.

Clearly inflation expectations have increased over the short term for a variety of reasons that we have just been considering. But the curve of inflation expectations still seems to be reverting toward a lower level over the medium and long term.

Fifth, Powell emphasized global deflationary trends. In the same week the United States learned of the fastest annual inflation rate in 30 years, Japan, China and Germany all reported their highest inflation in more than a decade. And the price of oil, the most important global determinant of inflation, is very high and not expected by forward markets to decline rapidly.

Id.

Again, Summers is simply recycling the same argument. We know that there has been a short-term increase in inflation. The question we need to grapple with is whether this short-term inflationary blip is likely to be self-limiting, or will feed on itself, causing inflation expectations to become “unanchored”. Forward prices of oil may not be showing that the price of oil will decline rapidly, but neither are they showing expectations of further increases. Without further increases in oil prices, it is fair to ask what the source of the further, ongoing inflation that would cause “unanchoring” is supposed to be.

As it has in the past, the threat of “unanchoring” is doing an awful lot of work. And it is not clear how that work is being done except by begging the very question that needs to be answered, not begged.

After his windup, Summers offers fairly mild suggestions for his anti-inflation program, and only one of his comments seems mistaken.

Because of inflation, real interest rates are lower, as money is easier than a year ago. The Fed should signal that this is unacceptable and will be reversed.

Id.

The real interest rate about which the Fed should be concerned is the ex ante real interest rate, reflecting both the expected yield from real capital and the expected rate of inflation (which may, and often does, have feedback effects on the expected yield from real capital). Past inflation does not automatically get transformed into an increase in expected inflation, and it is not necessarily the case that past inflation has left the expected yield from real capital unaffected, so Summers’s inference that the recent blip in inflation necessarily implies that monetary policy has been eased could well be mistaken. Yet again, these are judgments (or even guesses) that policymakers have to make about the subjective judgments of market participants. Those policy judgments can’t be made simply by reading data off a computer screen.
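To make the distinction concrete, here is a minimal sketch, with purely hypothetical numbers (none are drawn from the post), of the difference between an ex post real rate (nominal rate minus past inflation, the basis of Summers’s inference) and an ex ante real rate (nominal rate minus expected inflation):

```python
# Sketch of ex post vs. ex ante real interest rates.
# All numbers are hypothetical illustrations, not data from the post.

def real_rate(nominal: float, inflation: float) -> float:
    """Approximate real rate: nominal rate minus an inflation rate."""
    return nominal - inflation

nominal_rate = 0.015        # hypothetical nominal policy rate (1.5%)
past_inflation = 0.06       # hypothetical realized past inflation (6%)
expected_inflation = 0.025  # hypothetical expected future inflation (2.5%)

ex_post = real_rate(nominal_rate, past_inflation)      # -0.045, i.e. -4.5%
ex_ante = real_rate(nominal_rate, expected_inflation)  # -0.010, i.e. -1.0%

# Reasoning from ex_post suggests money is very easy; the stance of policy
# depends on ex_ante, which looks much less easy if expectations revert.
print(f"ex post real rate: {ex_post:.3f}")
print(f"ex ante real rate: {ex_ante:.3f}")
```

The gap between the two rates is exactly the gap between realized and expected inflation, which is why a past inflation blip need not imply that policy has been eased.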

While I’m not overly concerned by Summers’s list of inflation danger signs, there’s no doubt that inflation risk has risen. Yet, at least for now, that risk seems to be manageable. The risk may require the Fed to take pre-emptive measures against inflation down the road, but I don’t think we have reached that point yet.

The main reason why I think the inflation risk has been overblown is that inflation is a common occurrence in postwar economies, as it was in the US after both World Wars, the Korean War and the Vietnam War. It is widely recognized that war itself is inflationary owing, among other reasons, to the usual practice of governments of financing wartime expenditures by printing money, but inflationary pressures tend to persist even after wars end.

Why does inflation persist after wars come to an end? The main reason is that, during wartime, resources, labor and capital, are shifted from producing goods for civilian purposes to waging war and producing and transporting supplies to support the war effort. Because total spending, financed by printing money, increases during the war, money income goes up even though the production of goods and services for civilian purposes goes down.

The output of goods and services for civilian purposes having been reduced, the increased money income accruing to the civilian population implies rising prices of the civilian goods and services that are produced. The tendency for prices to rise during wartime is mitigated by the reduced availability of outlets for private spending, people normally postponing much of their non-essential spending while the war is ongoing. Consequently, the public accumulates cash and liquid assets during wartime with the intention of spending the accumulated cash and liquid assets when life returns to normal after the war.

The lack of outlets for private spending is reinforced when, as happened in World War I, World War II, the Korean War and the late stages of the Vietnam War, price controls prevent the prices of civilian goods still being produced from rising, so that consumers can’t buy goods – either at all or as much as they would like – that they would willingly have paid for. The result is suppressed inflation until wartime price controls are lifted, and the deferred price increases are allowed to occur. As prices rise, the excess cash that had been accumulated while the goods people demanded were unavailable is absorbed by purchases made at the belatedly increased prices.

In his last book, Incomes and Money, Ralph Hawtrey described with characteristic clarity the process by which postwar inflation absorbed the redundant cash balances accumulated during World War II once price controls were lifted.

America, like Britain, had imposed price controls during the war, and had accumulated a great amount of redundant money. So long as the price controls continued, the American manufacturers were precluded from checking demand by raising their prices. But the price controls were abandoned in the latter half of 1946, and there resulted a rise of prices reaching 30 per cent on manufactured goods in the latter part of 1947. That meant that American industry was able to defend itself against the excess demand. By the end of 1947 the rise of prices had nearly eliminated the redundant money; that is to say, the quantity of money (currency and bank deposits) was little more than in a normal proportion to the national income. There was no longer over-employment in American industry, and there was no reluctance to take export orders.

Hawtrey, Incomes and Money, p. 7

Responding to Paul Krugman’s similar claim that there was high inflation following World War II, Summers posted the following twitter thread.

@paulkrugman continues his efforts to minimize the inflation threat to the American economy and progressive politics by pointing to the fact that inflation surged and then there was a year of deflation after World War 2.

If this is the best argument for not being alarmed that someone as smart, rhetorically effective and committed as Paul can make, my anxiety about inflation is increased.

Pervasive price controls were removed after the war. Economists know that measured prices with controls are artificial, so subsequent inflation proves little.

Millions of soldiers were returning home and a massive demobilization was in effect. Nothing like the current pervasive labor shortage was present.

https://twitter.com/LHSummers/status/1459992638170583041

Summers is surely correct that the situation today is not perfectly analogous to the post-WWII situation, but post-WWII inflation, as Hawtrey explained, was only partially attributable to the lifting of price controls. He ignores the effect of excess cash balances, which ultimately had to be spent or somehow withdrawn from circulation through a deliberate policy of deflation, which neither Summers nor most economists would think advisable or even acceptable. While the inflationary effect of absorbing excess cash balances is therefore almost inevitable, the duration of the inflation is limited and need not cause inflation expectations to become “unanchored.”

With the advent of highly effective Covid vaccines, we are now gradually emerging from the worst horrors of the Covid pandemic, when a substantial fraction of the labor force was either laid off or chose to withdraw from employment. As formerly idle workers return to work, we are in a prolonged quasi-postwar situation.

Just as the demand for civilian products declines during wartime, the demand for a broad range of private goods declined during the pandemic as people stopped going to restaurants, taking vacations and attending public gatherings, and limited their driving and travel. Thus, the fraction of earnings that was saved increased as outlets for private spending became unavailable, inappropriate or undesirable.

As the pandemic has receded, restoring outlets for private spending, pent-up suppressed private demands have re-emerged, financed by households drawing down accumulated cash balances or drawing on credit lines augmented by the paying down of indebtedness. For many goods, like cars, the release of pent-up private demand has outpaced the increase in supply, leading to substantial price increases that are unlikely to be sustained once short-term supply bottlenecks are eliminated. But such imbalances between rapid increases in demand and sluggish increases in supply do not seem like a reliable basis on which to make policy choices.

So what are we to do now? As always, Ralph Hawtrey offers the best advice. The control of inflation, he taught, ultimately depends on controlling the relationship between the rate of growth in total nominal spending (and income) and the rate of growth of total real output. If total nominal spending (and income) is increasing faster than the increase in total real output, the difference will be reflected in the prices at which goods and services are provided.

In the five years from 2015 to 2019, the average growth rate of nominal spending (and income) was about 3.9%. During that period the average rate of growth of real output was 2.2% annually, and the average rate of inflation was 1.7%. It has been suggested, reasonably, that extrapolating the 3.9% annual growth in nominal spending of the previous five years provides a baseline against which to compare actual spending in 2020 and 2021.
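The identity underlying Hawtrey’s rule can be checked against the 2015-2019 averages just cited; a minimal sketch:

```python
# Inflation is approximately the growth of nominal spending (NGDP)
# minus the growth of real output. Figures are the 2015-2019 averages
# cited in the text.
ngdp_growth = 0.039  # average growth of nominal spending (and income)
real_growth = 0.022  # average growth of real output
implied_inflation = ngdp_growth - real_growth
print(f"implied inflation: {implied_inflation:.3f}")  # 0.017, i.e. 1.7%
```

The cited 1.7% average inflation is just the residual left after real growth is subtracted from nominal-spending growth, which is why controlling the growth of total nominal spending relative to real output is equivalent to controlling inflation.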

Actual nominal spending in Q3 2021 was slightly below what nominal GDP would have been had it continued growing along the extrapolated 3.9% growth path. But for nominal GDP not to exceed that extrapolated growth path in Q4, spending could increase at an annual rate of no more than 4.3%. Inasmuch as spending in Q3 2021 was growing at 7.8%, the growth rate of nominal spending would have to slow substantially in Q4 from its Q3 rate.

But it is not clear that a 3.9% growth rate of nominal spending is the appropriate baseline. From 2015 to 2019, the average growth rate of real output was only 2.2% annually and the average inflation rate only 1.7%. The Fed has long announced that its inflation target is 2%, and in the 2015-2019 period it consistently failed to meet that target. If the inflation target was 2% rather than 1.7%, the Fed presumably believed that real growth would have been no less with 2% inflation than with 1.7%, so there is no reason to believe that the Fed should not have been aiming for more than 3.9% growth in total spending. If so, a baseline for extrapolating the growth path of nominal spending should certainly be no less than 4.2%. Even a 4.5% baseline seems reasonable, and a baseline as high as 5% does not seem unreasonable.

With a 5% baseline, total nominal spending in Q4 could increase by as much as 5.4% without rising above its target path. But I think the more important point is not whether total spending does or does not rise above its growth path. The important goal is for the growth of nominal spending to decline steadily toward a reasonable growth path of about 4.5 to 5% and for this goal to be communicated to the public in a convincing manner. The 13.4% increase in total spending in Q2, when it appeared that the pandemic might soon be over, was likely a one-off outlier reflecting the release of pent-up demand. The 7.8% increase in Q3 was excessive, but substantially less than the Q2 rate of increase. If the Q4 increase does not continue the downward trend in the rate of increase of nominal spending, it will be time to re-evaluate policy to ensure that the growth of spending is brought down to a non-inflationary range.
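The growth-path arithmetic above can be sketched as follows; the 0.1% Q3 shortfall below the extrapolated path is my assumption, chosen because it reproduces the 4.3% and 5.4% figures in the text, not a number the post supplies:

```python
# Maximum annualized Q4 growth that keeps nominal GDP on an extrapolated
# baseline path, given how far below the path Q3 spending already is.
# The 0.1% Q3 shortfall is an assumed figure chosen to match the text.

def max_q4_growth(baseline: float, q3_shortfall: float) -> float:
    """Annualized Q4 growth rate that lands exactly on the baseline path.

    baseline:     annual growth rate of the extrapolated path (e.g. 0.039)
    q3_shortfall: fraction by which Q3 actual is below the path (e.g. 0.001)
    """
    # One quarter along the path raises it by (1 + baseline)**0.25;
    # starting below the path allows Q4 to grow slightly faster.
    quarterly_factor = (1 + baseline) ** 0.25 / (1 - q3_shortfall)
    return quarterly_factor ** 4 - 1  # annualized rate

for baseline in (0.039, 0.05):
    rate = max_q4_growth(baseline, 0.001)
    print(f"baseline {baseline:.1%}: max Q4 growth {rate:.1%}")
```

With the assumed 0.1% shortfall, the 3.9% baseline permits roughly 4.3% annualized Q4 growth and the 5% baseline roughly 5.4%, matching the figures in the text.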

More on Arrow’s Explanatory Gap and the Milgrom-Stokey Argument

In my post yesterday, I discussed what I call Kenneth Arrow’s explanatory gap: the absence of any account in neoclassical economic theory of how the equilibrium price vector is actually arrived at and how changes in that equilibrium price vector result when changes in underlying conditions imply changes in equilibrium prices. I post below some revisions to several paragraphs in yesterday’s post supplemented by a more detailed discussion of the Milgrom-Stokey “no-trade theorem” and its significance. The following is drawn from a work in progress to be presented later this month at a conference celebrating the 150th anniversary of the publication of Carl Menger’s Grundsätze der Volkswirtschaftslehre.

Thus, just twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that neoclassical theory provides no explanation of how competitive prices can change, Paul Milgrom and Nancy Stokey (1982) turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at disequilibrium prices, because every potential trader would realize that an offer to trade at disequilibrium prices would not be made unless the offer was based on private knowledge and would therefore lead to a wealth transfer to the trader relying on private knowledge. Because no traders with rational expectations would agree to a trade at a disequilibrium price, there would be no incentive to seek or exploit private information, and all trades would occur at equilibrium prices.

This would have been a profound and important argument had it been made as a reductio ad absurdum showing the untenability of rational expectations as a theory of expectation formation, inasmuch as it leads to the obviously false factual implication that private information is never valuable and that no profitable trades are made by those possessing private information. In concluding their paper, Milgrom and Stokey (1982) acknowledge the troubling implication of their argument:

Our results concerning rational expectations market equilibria raise anew the disturbing questions expressed by Beja (1977), Grossman and Stiglitz (1980), and Tirole (1980): Why do traders bother to gather information if they cannot profit from it? How does information come to be reflected in prices if informed traders do not trade or if they ignore their private information in making inferences? These questions can be answered satisfactorily only in the context of models of the price formation process; and our central result, the no-trade theorem, applies to all such models when rational expectations are assumed. (p. 17)

What Milgrom and Stokey seem not to have grasped is that the rational-expectations assumption dispenses with the need for a theory of price formation, inasmuch as every agent is assumed to be able to calculate the equilibrium price. They attempt to mitigate the extremity of this assumption by arguing that, by observing price changes, traders can infer what changes in common knowledge would have implied the observed changes. That argument seems insufficient, because any given price change could have more than one potential cause. As Scott Sumner has often argued, one can’t reason from a price change. If one doesn’t have independent knowledge of the cause of a price change, one can’t use the price change as a basis for further inference.

The Explanatory Gap and Mengerian Subjectivism

My last several posts have focused on Marshall and Walras: on the relationships and differences between Marshall’s partial-equilibrium approach and Walras’s general-equilibrium approach, and on how the current state of neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I’m not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I’ve been writing a paper, drawing on some of my recent musings, for a conference later this month celebrating the 150th anniversary of Menger’s great work, because I think it offers at least some hints about how to go about developing an improved neoclassical theory. Here’s a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge an explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1958 paper “Toward a Theory of Price Adjustment.”

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all but a single market (or perhaps a subset of closely related markets), so that the Marshallian equilibrium simply represents how an equilibrium price is determined in a single small or isolated market, under suitable ceteris-paribus conditions, thereby leaving the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced version of a single-period model inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways as opposed to just playing out a script already written in which agents go through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of the future prices. But if their stock of knowledge consists of both common knowledge and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not necessarily inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of the equilibrium prices of all commodities. In intertemporal equilibrium, by contrast, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stocks of private knowledge. Nevertheless, an equilibrium may still come about if, based on their private knowledge, agents arrive at sufficiently similar expectations of future prices for their plans for current and future purchases and sales to be mutually compatible.

Thus, just twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices can change, Milgrom and Stokey turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, so that it would be impossible for a trader with private information to take advantage of that information. This argument seems to suffer from a widely shared misunderstanding of what rational expectations signify.

Thus, in the Mengerian view articulated by Hayek, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, but not inconceivable, state of affairs, a view that stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982), in which they argue that under a rational-expectations equilibrium there is no private knowledge, only common knowledge, and that it would be impossible for any trader to trade on private knowledge, because no other trader with rational expectations would be willing to trade with anyone at a price other than the equilibrium price.

Rational expectations is not a property of individual agents making rational and efficient use of the information from whatever source it is acquired. As I have previously explained here (and a revised version here) rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles in a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations and they are rational. That the agents hold rational expectations in equilibrium, does not mean that the agents are possessed of the power to calculate equilibrium prices or even to know if their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren’t satisfied. See Blume, Curry and Easley (2006).

The assumption, now routinely regarded as axiomatic, that rational expectations are sufficient to ensure that equilibrium is automatically achieved, and that agents’ price expectations necessarily correspond to equilibrium price expectations, is a form of question-begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger’s contribution, Hayek (1970), commenting on the inexactness of Menger’s account of economic theory, focused on Menger’s reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this may have been an aspect of Menger’s skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural, more akin to a range of potential outcomes of differing probabilities, may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation that accounts for any tendency toward intertemporal equilibrium, and instead merely relied (and in 1937!) on an empirical tendency of economies to move in the direction of equilibrium as a justification for considering economic theory to have any practical relevance.

My Conversation with Hendrickson and Albrecht on the Economic Forces Podcast

Josh Hendrickson and Brian Albrecht have just posted our conversation about UCLA economics and economists, price theory vs. microfoundations, and my new book on their new Economic Forces Podcast. It was a really interesting conversation. Below are links to the podcast and to my book, which is now available online, and can be pre-ordered. The print version should be available in December.


https://link.springer.com/book/10.1007/978-3-030-83426-5

The Walras-Marshall Divide in Neoclassical Theory, Part II

In my previous post, which itself followed up an earlier post “General Equilibrium, Partial Equilibrium and Costs,” I laid out the serious difficulties with neoclassical theory in either its Walrasian or Marshallian versions: its exclusive focus on equilibrium states with no plausible explanation of any economic process that leads from disequilibrium to equilibrium.

The Walrasian approach treats general equilibrium as the primary equilibrium concept, because no equilibrium solution in a single market can be isolated from the equilibrium solutions for all other markets. Marshall understood that no single market could be in isolated equilibrium independent of all other markets, but the practical difficulty of framing an analysis of the simultaneous equilibration of all markets made focusing on general equilibrium unappealing to Marshall, who wanted economic analysis to be relevant to the concerns of the public, i.e., policy makers and men of affairs whom he regarded as his primary audience.

Nevertheless, in doing partial-equilibrium analysis, Marshall conceded that it had to be embedded within a general-equilibrium context, so he was careful to specify the ceteris-paribus conditions under which partial-equilibrium analysis could be undertaken. In particular, any market under analysis had to be sufficiently small, or the disturbance to which that market was subject had to be sufficiently small, for the repercussions of the disturbance in that market to have only a minimal effect on other markets, or, if substantial, those effects had to be concentrated on a specific market (e.g., the market for a substitute, or complementary, good).

By focusing on equilibrium in a single market, Marshall believed he was making the analysis of equilibrium more tractable than the Walrasian alternative of focusing on the analysis of simultaneous equilibrium in all markets. Walras chose to make his approach to general equilibrium, if not tractable, at least intuitive by appealing to the fiction of tatonnement conducted by an imaginary auctioneer adjusting prices in all markets in response to any inconsistencies in the plans of transactors preventing them from executing their plans at the announced prices.

But it eventually became clear, to Walras and to others, that tatonnement could not be considered a realistic representation of actual market behavior, because the tatonnement fiction disallows trading at disequilibrium prices by pausing all transactions while a complete set of equilibrium prices for all desired transactions is sought by a process of trial and error. Not only is all economic activity and the passage of time suspended during the tatonnement process, there is not even a price-adjustment algorithm that can be relied on to find a complete set of equilibrium prices in a finite number of iterations.
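The iterative logic of tatonnement can be made concrete with a toy sketch. The two-agent, two-good Cobb-Douglas endowment economy below is entirely my own illustrative construction, with made-up parameter values, not anything found in Walras; the point is only that the auctioneer adjusts each price in proportion to excess demand, and no trade occurs unless and until every market clears.

```python
# Toy tatonnement in a two-good, two-agent Cobb-Douglas exchange economy.
# All endowments and expenditure shares are illustrative assumptions.
# No transaction takes place until the loop terminates: prices are merely
# announced, plans are reported, and prices are revised.

endowments = [(1.0, 0.0), (0.0, 1.0)]   # agent h's holdings of goods 1 and 2
shares     = [(0.6, 0.4), (0.3, 0.7)]   # Cobb-Douglas expenditure shares

def excess_demand(p):
    """Aggregate excess demand for each good at the announced prices p."""
    z = [0.0, 0.0]
    for w, a in zip(endowments, shares):
        wealth = p[0] * w[0] + p[1] * w[1]
        for i in range(2):
            z[i] += a[i] * wealth / p[i] - w[i]   # demand minus endowment
    return z

def tatonnement(p=(1.0, 1.0), step=0.5, tol=1e-10, max_iter=10_000):
    """Auctioneer's trial-and-error search for market-clearing prices."""
    p = list(p)
    for _ in range(max_iter):
        z = excess_demand(p)
        if max(abs(zi) for zi in z) < tol:        # all markets clear: stop
            return p
        for i in range(2):
            p[i] *= 1 + step * z[i]               # raise price where z > 0
    raise RuntimeError("no equilibrium price vector found")

p = tatonnement()
p = [pi / p[0] for pi in p]    # normalize with good 1 as numeraire
```

In this well-behaved (gross-substitutes) example the loop happens to converge quickly, but that is a property of the chosen parameters; as noted above, no price-adjustment algorithm can be relied on to find a complete set of equilibrium prices in a finite number of iterations for an arbitrary economy.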

Despite its seeming realism, the Marshallian approach, piecemeal market-by-market equilibration of each distinct market, is no more tenable theoretically than tatonnement, the partial-equilibrium method being premised on a ceteris-paribus assumption in which all prices and all other endogenous variables determined in markets other than the one under analysis are held constant. That assumption can be maintained only on the condition that all markets are in equilibrium. So the implicit assumption of partial-equilibrium analysis is no less theoretically extreme than Walras’s tatonnement fiction.

In my previous post, I quoted Michel De Vroey’s dismissal of Keynes’s rationale for the existence of involuntary unemployment, a violation in De Vroey’s estimation, of Marshallian partial-equilibrium premises. Let me quote De Vroey again.

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

A History of Macroeconomics from Keynes to Lucas and Beyond, pp. 22-23

My interpretation of De Vroey’s argument is that the strict Marshallian viewpoint requires that firms correctly anticipate the wages that they will have to pay in making their hiring and production decisions, while presumably also correctly anticipating the future demand for their products. I am unable to make sense of this argument unless it means that firms — and why should firm owners or managers be the only agents endowed with perfect or correct foresight? — correctly foresee the prices of the products that they sell and of the inputs that they purchase or hire. In other words, the strict Marshallian viewpoint invoked by De Vroey assumes that each transactor foresees, without the intervention of a timeless tatonnement process guided by a fictional auctioneer, the equilibrium price vector. So when the strict Marshallian viewpoint is adopted, everything is indeed simple; every transactor is a Walrasian auctioneer.

My interpretation of Keynes – and perhaps I’m just reading my own criticism of partial-equilibrium analysis into Keynes – is that he understood that the aggregate labor market can’t be analyzed in a partial-equilibrium setting, because Marshall’s ceteris-paribus proviso can’t be maintained for a market that accounts for roughly half the earnings of the economy. When conditions change in the labor market, everything else also changes. So the equilibrium conditions of the labor market must be governed by aggregate equilibrium conditions that can’t be captured in, or accounted for by, a Marshallian partial-equilibrium framework. Because something other than supply and demand in the labor market determines the equilibrium, what happens in the labor market can’t, by itself, restore an equilibrium.

That, I think, was Keynes’s intuition. But while identifying a serious defect in the Marshallian viewpoint, that intuition did not provide an adequate theory of adjustment. The inadequacy of Keynes’s critique, however, doesn’t rehabilitate the Marshallian viewpoint, certainly not in the form in which De Vroey represents it.

But there’s a deeper problem with the Marshallian viewpoint than just the interdependence of all markets. Although Marshall accepted marginal-utility theory in principle and used it to explain consumer demand, he tried to limit its application to demand while retaining the classical theory of the cost of production as a coordinate factor explaining the relative prices of goods and services. Marginal utility determines demand while cost determines supply, so that the interaction of supply and demand (cost and utility) jointly determines price, just as the two blades of a scissors jointly cut a piece of cloth or paper.

This view of the role of cost could be maintained only in the context of the typical Marshallian partial-equilibrium exercise in which all prices — including input prices — except the price of a single output are held fixed at their general-equilibrium values. But the equilibrium prices of inputs are not determined independently of the values of the outputs they produce, so their equilibrium market values are derived exclusively from the value of whatever outputs they produce.

This was a point that Marshall, desiring to minimize the extent to which the Marginal Revolution overturned the classical theory of value, either failed to grasp or obscured: that both prices and costs are simultaneously determined. By focusing on partial-equilibrium analysis, in which input prices are treated as exogenous variables rather than, as in general-equilibrium analysis, endogenously determined variables, Marshall was able to argue as if the classical theory that the cost incurred to produce something determines its value or market price had not been overturned.

The absolute dependence of input prices on the value of the outputs that they are being used to produce was grasped more clearly by Carl Menger than by Walras and certainly more clearly than by Marshall. What’s more, unlike either Walras or Marshall, Menger explicitly recognized the time lapse between the purchasing and hiring of inputs by a firm and the sale of the final output, inputs having been purchased or hired in expectation of the future sale of the output. But expected future sales are at prices anticipated, but not known, in advance, making the valuation of inputs equally conjectural and forcing producers to make commitments without knowing either their costs or their revenues before undertaking those commitments.

It is precisely this contingent relationship between the expectation of future sales at unknown, but anticipated, prices and the valuations that firms attach to the inputs they purchase or hire that provides an alternative to the problematic Marshallian and Walrasian accounts of how equilibrium market prices are actually reached.

The critical role of expected future prices in determining equilibrium prices was missing from both the Marshallian and the Walrasian theories of price determination. In the Walrasian theory, price determination was attributed to a fictional tatonnement process that Walras originally thought might serve as a kind of oversimplified and idealized version of actual market behavior. But Walras seems eventually to have recognized and acknowledged how far removed from reality his tatonnement invention actually was.

The seemingly more realistic Marshallian account of price determination avoided the unrealism of the Walrasian auctioneer, but only by attributing equally, if not more, unrealistic powers of foreknowledge to the transactors than Walras had attributed to his auctioneer. Only Menger, who realistically avoided attributing extraordinary knowledge either to transactors or to an imaginary auctioneer, instead attributing to transactors only an imperfect and fallible ability to anticipate future prices, provided a realistic account, or at least a conceptual approach toward a realistic account, of how prices are actually formed.

In a future post, I will try to spell out in greater detail my version of a Mengerian account of price formation and what this account might tell us about the process by which a set of equilibrium prices might be realized.

The Walras-Marshall Divide in Neoclassical Theory, Part I

This year, 2021, puts us squarely in the midst of the sesquicentennial period of the great marginal revolution in economics that began with the almost simultaneous appearance in 1871 of Menger’s Grundsätze der Volkswirtschaftslehre and Jevons’s Theory of Political Economy, followed in 1874 by Walras’s Elements d’Economie Politique Pure. Jevons left few students behind to continue his work, so his influence pales in comparison with that of his younger contemporary Alfred Marshall, who, working along similar lines, published his Principles of Economics in 1890. It was Marshall’s version of marginal-utility theory that defined for more than a generation what became known as neoclassical theory in the Anglophone world. Menger’s work, via his disciples Bohm-Bawerk and Wieser, was actually the most influential work on marginal-utility theory for at least 50 years, the work of Walras and his successor, Vilfredo Pareto, being too mathematical, even for professional economists, to become influential before the 1930s.

But after Walras’s work was restated by J. R. Hicks, in his immensely influential treatise Value and Capital, in a form not only more accessible but also more coherent and more sophisticated, it became the standard for rigorous formal economic analysis. Although the Walrasian paradigm became the standard for formal theoretical work, the Marshallian paradigm remained influential in applied microeconomic theory and empirical research, especially in fields like industrial organization, labor economics and international trade. Neoclassical economics, the corpus of mainstream economic theory that grew out of the marginal revolution, was therefore built almost entirely on the works of Marshall and Walras, the influence of Menger, like that of Jevons, having been largely, but not entirely, assimilated into the main body of neoclassical theory.

The subsequent development of monetary theory and macroeconomics, especially after the Keynesian Revolution swept the economics profession, was also influenced by both Marshall and Walras. And the question whether Keynes belonged to the Marshallian tradition in which he was trained, or became, either consciously or unconsciously, a Walrasian has been an ongoing dispute among historians of macroeconomics since the late 1940s.

The first attempt to merge Keynes into the Walrasian paradigm led to the first neoclassical synthesis, which gained a brief ascendancy in the 1960s and early 1970s before being eclipsed by the New Classical rational expectations macroeconomics of Lucas and Sargent that led to a transformation of macroeconomics.

With that in mind, I’ve been reading Michel De Vroey’s excellent History of Macroeconomics from Keynes to Lucas and Beyond. An important feature of De Vroey’s book is its classification of macrotheories as either Marshallian or Walrasian in structure and orientation. I believe that the Walras vs. Marshall distinction is important, but I would frame that distinction differently from how De Vroey does. To be sure, De Vroey identifies some key differences between the Marshallian and Walrasian schemas, but I question whether he focuses on the differences between Marshall and Walras that really matter. And I also believe that he fails to address adequately the important problem that both Marshall and Walras left unresolved: their inability to describe a market mechanism that actually does, or even might, lead an economy toward an equilibrium position.

One reason for De Vroey’s misplaced emphasis is that he focuses on the different stories told by Walras and Marshall to explain how equilibrium — either for the entire system (Walras) or for a single market (Marshall) — is achieved. The story that Walras famously told was the tatonnement stratagem conceived by Walras to provide an account of how market forces, left undisturbed, would automatically bring an economy to a state of rest (general equilibrium). But Walras eventually realized that tatonnement could never be realistic for an economy with both exchange and production. The point of tatonnement is to prevent trading at disequilibrium prices, but assuming that production is suspended during tatonnement is untenable, because production cannot realistically be halted while the trial-and-error search for the equilibrium price vector proceeds.

Nevertheless, De Vroey treats tatonnement, despite its hopeless unrealism, as a sine qua non for any model to be classified as Walrasian. In chapter 19 (“The History of Macroeconomics through the lens of the Marshall-Walras Divide”), De Vroey provides a comprehensive list of differences between the Marshallian and Walrasian modeling approaches, a list that makes tatonnement a key distinction between the two approaches. I will discuss the three differences that seem most important.

1 Price formation: Walras assumes all exchange occurs at equilibrium prices found through tatonnement conducted by a deus-ex-machina auctioneer. All agents are therefore price takers even in “markets” in which, absent the auctioneer, market power could be exercised. Marshall assumes that prices are determined in the course of interaction of suppliers and demanders in distinct markets, so that the mix of price-taking and price-setting agents depends on the characteristics of those distinct markets.

This dichotomy between the Walrasian and Marshallian accounts of how prices are determined sheds light on the motivations that led Marshall and Walras to adopt their differing modeling approaches, but there is an important distinction between a model and the intuition that motivates or rationalizes the model. The model stands on its own whatever the intuition motivating it. The motivation behind the model can inform how the model is assessed, but the substance of the model and its implications remain intact even if the intuition behind the model is rejected.

2 Market equilibrium: Walras assumes that no market is in equilibrium unless general equilibrium obtains. Marshall assumes that partial equilibrium is reached separately in each market. General equilibrium is achieved when all markets are in partial equilibrium. The Walrasian approach is top-down, the Marshallian bottom-up.

3 Realism: Marshall is more realistic than Walras in depicting individual markets in which transactors themselves engage in the price-setting process, assessing market conditions, and gaining information about supply-and-demand conditions; Walras assumes that all agents are passive price takers merely calculating their optimal, but provisional, plans to buy and sell at any price vector announced by the auctioneer who then processes those plans to determine whether the plans are mutually consistent or whether a new price vector must be tried. But whatever the gain in realism, it comes at a cost, because, except in obvious cases of complementarity or close substitutability between products or services, the Marshallian paradigm ignores the less obvious, but not necessarily negligible, interactions between markets. Those interactions render the Marshallian ceteris-paribus proviso for partial-equilibrium analysis logically dubious, except under the most stringent assumptions.

The absence of an auctioneer from Marshall’s schema leads De Vroey to infer that market participants in that schema must be endowed with knowledge of market demand-and-supply conditions. I claim no expertise as a Marshallian scholar, but I find it hard to accept that, given his emphasis on realism, Marshall would have attributed perfect knowledge to market participants. The implausibility of the Walrasian assumptions is thus matched, in De Vroey’s view, by different, but scarcely less implausible, Marshallian assumptions.

De Vroey proceeds to argue that Keynes himself was squarely on the Marshallian, not the Walrasian, side of the divide. Here’s how, focusing on the IS-LM model, he puts it:

As far as the representation of the economy is concerned, the economy that the IS-LM model analyzes is composed of markets that function separately, each of them being an autonomous locus of equilibrium. Turning to trade technology, no auctioneer is supposedly present. As for the information assumption, it is true that economists using the IS-LM model scarcely evoke the possibility that it might rest on the assumption that agents are omniscient. But then nobody seems to have raised the issue of how equilibrium is reached in this model. Once raised, I see no other explanation than assuming agents’ ability to reconstruct the equilibrium values of the economy, that is, their being omniscient. On all these scores, the IS-LM model is Marshallian.

A History of Macroeconomics from Keynes to Lucas and Beyond, p. 350

De Vroey’s dichotomy between the Walrasian and Marshallian modeling approaches leads him to make needlessly sharp distinctions between them. The basic IS-LM model determines the quantity of money, consumption, saving and investment, income and the rate of interest. Presumably, by “autonomous locus of equilibrium,” De Vroey means that some variable determined in one of the IS-LM markets adjusts in response to disequilibrium in that market alone, but even so, the markets are not isolated from each other as they are in Marshallian partial-equilibrium analysis. The equilibrium values of the variables in the IS-LM model are simultaneously determined in all markets, so the autonomy of each market does not preclude simultaneous determination. Nor does the equilibrium of the model depend, as De Vroey seems to suggest, on the existence of an auctioneer; the role of the auctioneer is merely to provide a story (however implausible) about how the equilibrium is, or might be, reached.

Elsewhere De Vroey faults Keynes for characterizing cyclical unemployment as involuntary, because that characterization is incompatible with a Marshallian analysis of the labor market. Without endorsing Keynes’s reasoning, I cannot accept De Vroey’s argument against Keynes, because the argument is based explicitly on the assumption of perfect foresight. Describing the difference between a strict Marshallian approach and that taken by Keynes, De Vroey writes as follows:

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

Id. pp. 22-23

De Vroey seems to be saying that if firms anticipate an equilibrium outcome, the equilibrium outcome will be realized. This is not an argument; it is question-begging, question-begging which De Vroey justifies by warning that the alternative to question-begging is to “fall into a theoretical wilderness.” Thus, Keynes’s argument for involuntary unemployment is rejected on the ground that, under the assumption of perfect foresight, an equilibrium outcome is the only foreseeable outcome, so unemployment cannot be involuntary.

Because neither the Walrasian nor the Marshallian modeling approach gives a plausible account of how an equilibrium is reached, De Vroey’s insistence that either implausible story is somehow essential to the corresponding modeling approach is misplaced, each approach committing the fallacy of misplaced concreteness in focusing on an equilibrium solution that cannot plausibly be realized. For De Vroey instead to argue that, because the Marshallian approach cannot otherwise explain how equilibrium is realized, the agents must be omniscient is akin to the advice of one Senator during the Vietnam war for President Nixon to declare victory and then withdraw all American troops.

I will have more to say about the Walras-Marshall divide and how to surmount the difficulties with both in a future post (or posts).

The Demise of Bretton Woods Fifty Years On

Today, Sunday, August 15, 2021, marks the 50th anniversary of the closing of the gold window at the US Treasury, at which a small set of privileged entities had been at least legally entitled to demand redemption of dollar claims issued by the US government at the official gold price of $35 an ounce. (In 1971, as in 2021, August 15 fell on a Sunday.) When I started blogging in July 2011, I wrote one of my early posts about the 40th anniversary of that inauspicious event. My attention in that post was directed more at the horrific consequences of Nixon’s decision to combine a freeze on wages and prices with the closing of the gold window, the controls being clearly far more damaging than the largely symbolic effect of closing the gold window. I am also re-upping my original post with some further comments, but in this post my attention is directed solely at the closing of the gold window.

The advent of cryptocurrencies and the continuing agitprop aiming to restore the gold standard apparently suggest to some people that the intrinsically trivial decision to do away with the final vestige of the last remnant of the short-lived international gold standard is somehow laden with cosmic significance. See for example the new book by Jeffrey Garten (Three Days at Camp David) marking the 50th anniversary.

About 10 years before the gold window was closed, Milton Friedman gave a lecture at the Mont Pelerin Society, which he called “Real and Pseudo-Gold Standards,” and which I previously wrote about here. Many if not most of the older members of the Mont Pelerin Society, notably L. v. Mises and Jacques Rueff, were die-hard supporters of the gold standard who regarded the Bretton Woods system as a deplorable counterfeit imitation of the real gold standard and longed for restoration of that old-time standard. In his lecture, Friedman bowed in their direction by faintly praising what he called a real gold standard, which he described as a state of affairs in which the quantity of money could be increased only by minting gold or by exchanging gold for banknotes representing an equivalent value of gold. Friedman argued that although a real gold standard was an admirable monetary system, the Bretton Woods system was nothing of the sort, calling it a pseudo-gold standard. Given that the then-existing Bretton Woods system was not a real gold standard, but merely a system of artificially controlling the price of a particular commodity, Friedman argued that the next-best alternative would be to impose a quantitative limit on the increase in the quantity of fiat money, by enacting a law prohibiting the quantity of money from growing by more than some prescribed amount, or by some percentage (k percent per year) of the existing stock, in any given time period.
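The arithmetic of a k-percent rule is simple compound growth. The sketch below uses a 3 percent growth rate and an initial stock indexed to 100 purely for illustration; these are not Friedman’s actual proposed figures.

```python
# Money-stock path under a hypothetical k-percent rule: the stock grows
# mechanically at k percent per year, leaving no discretion to the
# monetary authority. Numbers here are illustrative only.

def money_stock(m0, k, years):
    """Money stock after `years` years of growth at k percent per year."""
    return m0 * (1 + k / 100) ** years

m0 = 100.0   # index the initial money stock at 100
# After a decade at 3 percent, the stock rises to roughly 134.4:
print(round(money_stock(m0, 3, 10), 1))
```

The rule’s appeal was precisely this mechanical predictability; its defects, as noted above, lay elsewhere, in the instability of the relationship between the money stock so defined and the variables the rule was meant to stabilize.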

While failing to win over the die-hard supporters of the gold standard, Friedman’s gambit was remarkably successful, and for many years, it actually was the rule of choice among most like-minded libertarians and self-styled classical liberals and small-government conservatives. Eventually, the underlying theoretical and practical defects in Friedman’s k-percent rule became sufficiently obvious to cause even Friedman, however reluctantly, to abandon his single-minded quest for a supposedly automatic non-discretionary quantitative monetary rule.

Nevertheless, Friedman ultimately did succeed in undermining support among most right-wing conservative, libertarian and many centrist or left-leaning economists and decision makers for the Bretton Woods system of fixed, but adjustable, exchange rates anchored by a fixed dollar price of gold. And a major reason for his success was his argument that it was only by shifting to flexible exchange rates and abandoning a fixed gold price that the exchange controls and restrictions on capital movements that were in place for a quarter of a century after World War II could be lifted, a rationale congenial and persuasive to many who might otherwise have been unwilling to experiment with a system of flexible exchange rates among fiat currencies that had never previously been implemented.

Indeed, the neoliberal economic and financial globalization that followed the closing of the gold window and freeing of exchange rates after the demise of the Bretton Woods system, whether one applauds or reviles it, can largely be attributed to Friedman’s influence both as an economic theorist and as a propagandist. As much as Friedman deplored the imposition of wage and price controls on August 15, 1971, he had reason to feel vindicated by the closing of the gold window, the freeing of exchange rates, and, eventually, the lifting of all capital controls and the legalization of gold ownership by private individuals, all of which followed from the Camp David meeting.

But, the objective economic situation confronted by those at Camp David was such that the Bretton Woods System could not be salvaged. As I wrote in my 2011 post, the Bretton Woods system built on the foundation of a fixed gold price of $35 an ounce was not a true gold standard because a free market in gold did not exist and could not be maintained at the official price. Trade in gold was sharply restricted, and only privileged central banks and governments were legally entitled to buy or sell gold at the official price. Even the formal right of the privileged foreign governments and central banks was subject to the informal, but unwelcome and potentially dangerous, disapproval of the United States.

The gold standard is predicated on the idea that gold has an ascertainable value, so that if money is made exchangeable for gold at a fixed rate, money and gold will have an identical value owing to arbitrage transactions. Such arbitrage transactions can occur only if, and so long as, no barriers prevent effective arbitrage. The unquestioned convertibility of a unit of currency into gold ensured that arbitrage would constrain the value of money to equal the value of gold. But under Bretton Woods the opportunities for arbitrage were so drastically limited that the value of the dollar was never clearly equal to the value of gold, which was governed by, pardon the expression, fiat rather than by free-market transactions.
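The arbitrage mechanism just described can be sketched in a few lines. The official $35 price is from the text; the market prices below are assumed round numbers for illustration, and the point is only that an unobstructed redemption right makes any gap between the two prices a riskless profit opportunity that tends to close the gap.

```python
# Sketch of the arbitrage that ties a convertible currency to gold.
# If the market price of gold exceeds the official redemption price,
# redeeming dollars for gold at the window and selling the gold at
# market yields a riskless profit per ounce; absent barriers to
# redemption, this drains gold until the two prices are bid together.

OFFICIAL_PRICE = 35.0   # dollars per ounce at the Treasury window

def arbitrage_profit_per_ounce(market_price):
    """Profit from redeeming $35 for an ounce of gold and selling it at market."""
    return max(market_price - OFFICIAL_PRICE, 0.0)

print(arbitrage_profit_per_ounce(40.0))  # positive: gold drains from the window
print(arbitrage_profit_per_ounce(35.0))  # zero: no incentive, prices aligned
```

Under Bretton Woods, of course, the barriers described above meant this arbitrage could be exercised only by a privileged few, which is why the dollar’s value and gold’s value were never tightly linked.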

The lack of a tight link between the value of gold and the value of the dollar was not a serious problem as long as the value of the dollar was kept essentially stable and there was a functioning (albeit not freely) gold market. After its closure during World War II, the gold market did not function at all until 1954, so the wartime and postwar inflation and the brief Korean War inflation did not undermine the official gold price of $35 an ounce that had been set in 1934 and was maintained under Bretton Woods. Even after a functioning, but not entirely free, gold market was reopened in 1954, the official price was easily sustained until the late 1960s thanks to central-bank cooperation, whose formalization through the International Monetary Fund (IMF) was one of the positive achievements of Bretton Woods. The London gold price was hardly a free-market price, because of central bank intervention and restrictions imposed on access to the market, but the gold holdings of the central banks were so large that it had always been in their power to control the market price if they were sufficiently determined to do so. But over the course of the 1960s, their cohesion gradually came undone. Why was that?

The first point to note is that the gold standard evolved over the course of the eighteenth and nineteenth centuries, first as a British institution and much later as an international institution, largely by accident, from a system of simultaneous gold and silver coinages that were closely but imperfectly linked by a relative price of between 15 and 16 ounces of silver per ounce of gold. Depending on the precise legal price ratio of silver coins to gold coins in any particular country, the legally undervalued metal would flow out of that country and the legally overvalued metal would flow into it.
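Schematically, with \(R_{\mathrm{mint}}\) denoting the legal mint ratio (ounces of silver per ounce of gold) and \(R_{\mathrm{mkt}}\) the market ratio (again, the notation is mine):

```latex
\[
R_{\mathrm{mint}} < R_{\mathrm{mkt}} \;\Longrightarrow\; \text{gold is undervalued at the mint} \;\Longrightarrow\; \text{gold flows out, silver flows in},
\]
\[
R_{\mathrm{mint}} > R_{\mathrm{mkt}} \;\Longrightarrow\; \text{silver is undervalued at the mint} \;\Longrightarrow\; \text{silver flows out, gold flows in}.
\]
```

A country minting at 15 to 1 while the market ratio stood at 15.5 to 1 would thus see its gold exported to wherever an ounce of gold bought more silver.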

When Britain overvalued gold at the turn of the 18th century, gold flowed into Britain, leading to the birth of the British gold standard. In most other countries, silver and gold coins circulated simultaneously at a ratio of 15.5 ounces of silver per ounce of gold. It was only when the US, after the Civil War, formally adopted a gold standard, and the newly formed German Reich also shifted from a bimetallic to a gold standard, that the increased demand for gold caused gold to appreciate relative to silver. To avoid the resulting inflation, countries with bimetallic systems based on a 15.5-to-1 silver/gold ratio suspended the free coinage of silver and shifted to the gold standard, further raising the silver/gold price ratio. Thus, the gold standard became an international, not just a British, system only in the 1870s, and it happened not by design or international consensus but through a series of piecemeal decisions by individual countries.

The important takeaway from this short digression into monetary history is that the relative values of the gold-standard currencies were largely inherited from the historical definitions of each country's currency unit, not chosen by deliberate policy decisions about what currency value to adopt in establishing the gold standard. But when the gold standard collapsed in August 1914 at the start of World War I, it had to be recreated more or less from scratch after the War. The US, holding 40% of the world's monetary gold reserves, was in a position to determine the value of gold, so it could easily restore convertibility at the prewar gold price of $20.67 an ounce. For other countries, the choice of the value at which to restore gold convertibility was really a decision about the dollar exchange rate at which to peg their currencies.

Before the war, the dollar-pound exchange rate was $4.86 per pound. The postwar dollar-pound exchange rate was just barely close enough to the prewar rate to make restoring the convertibility of the pound at the prewar rate seem doable. Many, including Keynes, argued that Britain would be better off with an exchange rate in the neighborhood of $4.40 or less, but Winston Churchill, then Chancellor of the Exchequer, was persuaded to restore convertibility at the prewar parity. That decision may or may not have been a good one, but I believe that its significance for the world economy at the time and subsequently has been overstated. After convertibility was restored at the prewar parity, chronically high postwar British unemployment increased only slightly in 1925-26 before declining modestly until the onset of the Great Deflation and Great Depression in late 1929. The British economy would have gotten a boost if the prewar dollar-pound parity had not been restored (or if the Fed had accommodated the prewar parity by domestic monetary expansion), but the drag on the British economy after 1925 was a negligible factor compared to the other factors, primarily gold accumulation by the US and France, that triggered the Great Deflation in late 1929.

The cause of that deflation was largely centered in France (with a major assist from the Federal Reserve). Before the war the French franc was worth about 20 cents, but disastrous French postwar economic policies caused the franc to fall to just 2 cents in 1926 when Raymond Poincaré was called upon to lead a national-unity government to stabilize the situation. His success was remarkable, the franc rising to over 4 cents within a few months. However, despite earlier solemn pledges to restore the franc to its prewar value of 20 cents, he was persuaded to stabilize the franc at just 3.92 cents when convertibility into gold was reestablished in June 1928, undervaluing the franc against both the dollar and the pound.

Not only was the franc undervalued, but the Bank of France, which under previous governments had been persuaded or compelled to supply francs to finance deficit spending, was prohibited by the new Monetary Law that restored convertibility at the fixed rate of 3.92 cents from increasing the quantity of francs except in exchange for gold or for foreign exchange convertible into gold. While protecting the independence of the Bank of France from government fiscal demands, the law also prevented the French money stock from increasing to accommodate increases in the French demand for money except by way of a current-account surplus or a capital inflow.

Meanwhile, the Bank of France began converting foreign-exchange reserves into gold. The resulting increase in French gold holdings led to gold appreciation. Under the gold standard, gold appreciation is manifested in price deflation affecting all gold-standard countries. That deflation was the direct and primary cause of the Great Depression, which led, over a period of five brutal years, to the failure and demise of the newly restored international gold standard.
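The mechanism can be summarized in a single relation. Under a gold standard, each currency unit is a claim to a fixed weight of gold, so the price level in every gold-standard country varies inversely with the real value of gold. Schematically (my notation):

```latex
\[
P \;\propto\; \frac{1}{v_{g}},
\]
```

where \(P\) is the price level and \(v_{g}\) the real value of gold. French gold accumulation raised \(v_{g}\), forcing \(P\) down in every country tied to gold.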

These painful lessons were not widely or properly understood at the time, or for a long time afterward, but the clear takeaway from that experience was that trying to restore the gold standard again would be a dangerous undertaking. Another lesson that was intuited, if not fully understood, is that if a country pegs its exchange rate to gold or to another currency, it is safer to err on the side of undervaluation than overvaluation. So, when the task of recreating an international monetary system was undertaken at Bretton Woods in July 1944, the architects of the system tried to retain the formal trappings of the gold standard while eliminating the deflationary biases and incentives that had doomed the interwar gold standard. To prevent increasing demand for gold from causing deflation, the obligation to convert cash into gold was limited to the United States, and access to the US gold window was restricted to other central banks via the newly formed International Monetary Fund. Each country could, in consultation with the IMF, determine its exchange rate with the dollar.

Given the earlier experience, countries had an incentive to set exchange rates that undervalued their currencies relative to the dollar. Thus, for most of the 1950s and early 1960s, the US had to contend with a currency that was overvalued relative to the currencies of its principal trading partners: Germany and Italy (the two fastest-growing economies in Europe) and, in Asia, Japan (later joined by South Korea and Taiwan). In one sense, the overvaluation was beneficial to the US, because access to low-cost and increasingly high-quality imports was a form of repayment to the US for its foreign-aid assistance and its ongoing defense protection against the threat of Communist expansionism, but the benefit came with a competitive disadvantage to US tradable-goods industries.

When West Germany took control of its economic policy from the US military in 1948, most price-and-wage controls were lifted and the new deutschmark was devalued by a third relative to the official value of the old reichsmark. A further devaluation of almost 25% followed a year later. Great Britain, perhaps influenced by the success of the German devaluation, devalued the pound by 30% in 1949, from its old parity of $4.03 to $2.80. But unlike Germany, Britain, under the postwar Labour government, attempting to avoid postwar inflation, maintained wartime exchange controls and price controls. The underlying assumption at the time was that Britain's balance-of-payments deficit reflected an overvalued currency, so that devaluation would avoid repeating the mistake made two decades earlier when the dollar-pound parity had overvalued the pound.

That assumption, as Ralph Hawtrey had argued in lonely opposition to the devaluation, was misguided; the idea that the current account depends only, or even primarily, on the exchange rate abstracts from the monetary forces that affect the balance of payments and the current account. Worse, because British monetary policy was committed to the goal of maximizing short-term employment, the resulting excess supply of cash inevitably increased domestic spending, thereby attracting imports and diverting domestically produced products from export markets and preventing the devaluation from achieving the goal of improving the trade balance and promoting expansion of the tradable-goods sector.

Other countries, like Germany and Italy, combined currency undervaluation with monetary restraint, allowing only the monetary expansion occasioned by current-account surpluses. This became the classic strategy, later called exchange-rate protection by Max Corden, of combining currency undervaluation with tight monetary policy. British attempts to use monetary policy to promote (over)full employment subject to the balance-of-payments constraint imposed by an exchange rate pegged to the dollar proved unsustainable, while Germany, Italy, and France (after de Gaulle came to power in 1958 and devalued the franc) found the combination of monetary restraint and currency undervaluation a successful economic strategy until the United States increased monetary expansion to counter the chronic overvaluation of the dollar.

Because the dollar was the key currency of the world monetary system, and the US had committed itself to maintain the $35-an-ounce price of gold, the US, unlike other countries whose currencies were pegged to the dollar, could not adjust the dollar exchange rate to reduce or alleviate the overvaluation of the dollar relative to the currencies of its trading partners. Mindful of its duties as supplier of the world's reserve currency, US monetary authorities kept US inflation close to zero after the 1953 Korean War armistice.

However, that restrained monetary policy led to three recessions during the Eisenhower administration (1953-54, 1957-58, and 1960-61). The latter two recessions led to disastrous Republican losses in the 1958 midterm elections and to Richard Nixon's razor-thin loss in 1960 to John Kennedy, who had campaigned on a pledge to get the US economy moving again. The loss to Kennedy was a lesson that Nixon never forgot, and he was determined never to allow himself to lose another election merely because of scruples about US obligations as supplier of the world's reserve currency.

Upon taking office, the Kennedy administration pressed for an easing of Fed policy to end the recession and to promote accelerated economic expansion. The result was a rapid recovery from the 1960-61 recession and the start of a nearly nine-year period of unbroken economic growth at perhaps the highest average growth rate in US history. While credit for the economic expansion is often given to the across-the-board tax cuts proposed by Kennedy in 1963 and enacted in 1964 under Lyndon Johnson, the expansion was already well under way by mid-1961, three years before the tax cuts became effective.

The international aim of monetary policy was to increase nominal domestic spending and to force US trading partners with undervalued currencies either to accept increased holdings of US liabilities or to revalue their exchange rates relative to the dollar to diminish their undervaluation relative to the dollar. Easier US monetary policy led to increasing complaints from Europeans, especially the Germans, that the US was exporting inflation and to charges that the US was taking advantage of the exorbitant privilege of its position as supplier of the world’s reserve currency.

The aggressive response of the Kennedy administration to undervaluation of most other currencies led to predictable pushback from France under de Gaulle who, like many other conservative and right-wing French politicians, was fixated on the gold standard and deeply resented Anglo-American monetary pre-eminence after World War I and American dominance after World War II. Like France under Poincaré, France under de Gaulle sought to increase its gold holdings as it accumulated dollar-denominated foreign exchange. But under Bretton Woods, French gold accumulation had little immediate economic effect other than to enhance the French and Gaullist pretensions to grandiosity.

Already in 1961, Robert Triffin predicted that the Bretton Woods system could not endure permanently, because the growing world demand for liquidity could not be satisfied by the United States in a world with a relatively fixed gold stock and a stable or rising price level. The problem identified by Triffin was not unlike the one raised by Gustav Cassel in the 1920s, when he predicted that the world gold stock would likely not increase enough to prevent a worldwide deflation. That long-term gold shortage feared by Cassel was distinct from the problem that actually caused the Great Depression: the substantial increase in gold demand associated with the restoration of the gold standard, which triggered the deflationary collapse of late 1929.

The problem Triffin identified was likewise a long-term one: the international gold stock was failing to increase enough to provide the additional gold reserves that the US needed to commit credibly to maintaining the convertibility of the dollar into gold, short of relying on deflation to raise the real value of its existing gold reserves.

Had it not been for the Vietnam War, Bretton Woods might have survived for several more years, but the rise of US inflation to over 4% in 1968-69, coupled with the 1969-70 recession in an unsuccessful attempt to reduce inflation, followed by a weak recovery in 1971, made it clear that the US would not undertake a deflationary policy to make the official $35 gold price credible. Although de Gaulle's unexpected retirement in 1969 removed the fiercest opponent of US monetary domination, waning confidence that the US could maintain the official gold peg, when the London gold price was already 10% higher than the official price, caused other central banks to fear that they would be stuck with devalued dollar claims once the US raised the official gold price. Not only the French but other central banks as well were already demanding redemption in gold of the dollar claims they were holding.

An eleventh-hour policy reversal by the administration to save the official gold price was not in the cards, and everyone knew it. So all the handwringing about the abandonment of Bretton Woods on August 15, 1971 is either simple foolishness or gaslighting. The system was already broken, and it couldn't be fixed at any price worth pondering for even half an instant. Nixon and his accomplices tried to sugarcoat their scrapping of the Bretton Woods System by pretending that they were announcing a plan that was the first step toward its reform and rejuvenation. But that pretense led only to the so-called Smithsonian Agreement, with a new gold-price peg of $38 an ounce, which lasted hardly a year before it died, not with a bang but a whimper.

What can we learn from this story? For me the real lesson is that the original international gold standard was, to borrow (via Hayek) a phrase from Adam Ferguson, "the [accidental] result of human action, not human design." The gold standard, as it existed for those 40 years, was not an intuitively obvious or well-understood mechanism working according to a clear blueprint; it was an improvised set of practices, partly legislated, partly customary, and partly nothing more than conventional, but not very profound, wisdom.

The original gold standard collapsed with the outbreak of World War I, and the attempt to recreate it after World War I, based on an imperfect understanding of how it had actually functioned, ended catastrophically with the Great Depression, a second collapse, and another, even more catastrophic, World War. The new monetary system created at Bretton Woods, which used a modified feature of the earlier gold standard as a kind of window dressing, was certainly not a real gold standard, and perhaps not even a pseudo-gold standard; those who profess to mourn its demise are either fooling themselves or trying to fool the rest of us.

We are now stuck with a fiat system that has evolved and been tinkered with over centuries. We have learned how to manage it, at least so far, to avoid catastrophe. With hard work and good luck, perhaps we will continue to learn how to manage it better than we have so far. But to seek to recreate a system that functioned fairly successfully for at most 40 years under conditions not even remotely likely ever again to be approximated, is hardly likely to lead to an outcome that will enhance human well-being. Even worse, if that system were recreated, the resulting outcome might be far worse than anything we have experienced in the last half century.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

