My Paper “Between Walras and Marshall: Menger’s Third Way” Is Now Posted on SSRN

As regular readers of this blog will realize, several of my recent posts (here, here, here, here, and here) have been incorporated in my new paper, which I have been writing for the upcoming Carl Menger 2021 Conference next week in Nice, France. The paper is now available on SSRN.

Here is the abstract to the paper:

Neoclassical economics is bifurcated between Marshall’s partial-equilibrium and Walras’s general-equilibrium analyses. Given the failure of neoclassical theory to explain the Great Depression, Keynes proposed an explanation of involuntary unemployment. Keynes’s contribution was later subsumed under the neoclassical synthesis of the Keynesian and Walrasian theories. Lacking microfoundations consistent with Walrasian theory, the neoclassical synthesis collapsed. But Walrasian GE theory provides no plausible account of how GE is achieved. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian PE analysis, with prices equilibrating supply and demand. But Marshallian PE analysis presumes that all markets but the small one being analyzed are in equilibrium, so that price adjustments in the analyzed market neither affect nor are affected by other markets. The demand and cost curves of PE analysis are drawn on the assumption that all other prices reflect Walrasian GE values. While based on Walrasian assumptions, modern macroeconomics relies on the Marshallian intuition that agents know or anticipate the prices consistent with GE. Menger’s third way offers an alternative to this conceptual impasse by recognizing that nearly all economic activity is subjective and guided by expectations of the future. Current prices are set based on expectations of future prices, so equilibrium is possible only if agents share the same expectations of future prices. If current prices are set based on differing expectations, arbitrage opportunities are created, causing prices and expectations to change, leading to further arbitrage, expectational change, and so on, but not necessarily to equilibrium.

Here is a link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964127

The current draft is preliminary, and any comments, suggestions, or criticisms from readers would be greatly appreciated.

High-Inflation Anxiety

UPDATE (9:25am 11/16/2021): Thanks to philipji for catching some problematic passages in my initially posted version. I have also revised the opening paragraph, which was less than clear. Apologies for my sloppy late-night editing before posting.

When I started blogging ten-plus years ago, most of my posts were about monetary policy, because I then felt that the Fed was not doing as much as it could, and should, have been doing to promote recovery from the Little Depression (aka Great Recession), for which the Fed’s policy mistakes bore heavy responsibility. The 2008 financial crisis and the ensuing deep downturn were largely the product of an overly tight monetary policy starting in 2006, and, despite interest-rate cuts in 2007 and 2008, the Fed, concerned about rising oil and commodity prices, consistently erred on the side of tightness, even for almost two months after the start of the crisis. The stance of monetary policy cannot be assessed just by looking at the level of the central-bank policy rate; the stance depends on the relationship between the policy rate and the economic conditions at any particular moment. The 2-percent Fed Funds target in the summer of 2008, given economic conditions at the time, meant that monetary policy was tight, not easy.

Although, after the crisis, the Fed never did as much as it could — and should — have to promote recovery, it at least took small measures to avoid a lapse into a ruinous deflation, even as many of the sound-money types, egged on by deranged right-wing scare mongers, warned of runaway inflation.

Slowly but surely, a pathetically slow recovery had, by the end of Obama’s second term, brought us back to near full employment. By then, my interest in the conduct of monetary policy had given way to a variety of other concerns as we dove into the anni horribiles of the maladministration of Obama’s successor.

Riding a recovery that started seven and a half years before he took office, and buoyed by a right-wing propaganda juggernaut and a pathetically obscene personality cult that broadcast and amplified his brazenly megalomaniacal self-congratulation for the inherited recovery over which he presided, Obama’s successor watched incompetently as the Covid-19 virus spread through the United States, causing the sharpest drop in output and employment in US history.

Ten months after Obama’s successor departed from the White House, the US economy has recovered much, but not all, of the ground lost during the pandemic, with employment still below its peak at the start of 2020 and real output still lagging the roughly 2% real growth path along which the economy had been moving for most of the preceding decade.

However, the very rapid increase in output in Q2 2021 and the less rapid, but still substantial, increase in output in Q3 2021, combined with inflation that has risen to the highest rates in 30 years, have provoked ominous warnings of resurgent inflation similar to the inflation that ran from the late 1960s until the early 1980s, ending only with the deep 1981-82 recession caused by the resolute anti-inflation policy administered by Fed Chairman Paul Volcker with the backing of the newly elected Ronald Reagan.

It’s worth briefly revisiting that history (which I have discussed previously on this blog here, here, and especially in the following series (1), (2) and (3) from 2020) to understand the nature of the theoretical misunderstanding and the resulting policy errors in the 1970s and 1980s. While I agree that the recent increase in inflation is worrisome, it’s far from clear that inflation is likely, as many now predict, to get worse, although the inflation risk can’t be dismissed.

What I find equally if not more worrisome is how much the anti-inflation commentary that we are hearing now from very serious people, like Larry Summers in today’s Washington Post, sounds like the inflation talk of 2008, which frightened the Fed, then presided over by a truly eminent economist, Ben Bernanke, into thinking that the chief risk facing the economy was rising headline inflation that would cause inflation expectations to become “unanchored.”

So, rather than provide support to an economy rapidly sliding into recession, the FOMC, focused on rapid increases in oil and commodity prices, refused to loosen monetary policy in the summer of 2008, even though the pace of growth in nominal gross domestic product (NGDP), measured year on year, had been steadily decelerating. The accompanying table shows the steady decline in the quarterly year-on-year growth of NGDP in each successive quarter between Q1 2004 and Q4 2008. Between 2004 and 2006, the decline was gradual, but it accelerated in 2007, leading to the start of a recession in December 2007.

[Table not reproduced here. Source: https://fred.stlouisfed.org/series/DFEDTAR]

The decline in the rate of NGDP growth was associated with a gradual increase in the Fed funds target rate from the very low level of 1%, at which it had been held until Q2 2004; by Q2 2006, when the target reached 5%, the slowdown in the growth of total spending quickened. As spending growth declined, the Fed cut interest rates in the second half of 2007, but that easing was insufficient to prevent an economy, already suffering financial distress after the housing bubble burst in 2006, from lapsing into recession.

Although the Fed cut its interest-rate target substantially in March 2008, the FOMC refused to reduce rates further during the summer of 2008, even as the recession was visibly worsening, for fear that rising headline inflation, associated with very large increases in crude-oil prices (which climbed to a record $130 a barrel) and in commodity prices, would cause inflation expectations to become “unanchored”.

The Fed, to be sure, was confronted with a difficult policy dilemma, but it was a disastrous error to prioritize a speculative concern about the “unanchoring” of long-term inflation expectations over the reality of a fragile and clearly weakening financial system in a contracting economy already in recession. The Fed made the wrong choice, and the crisis came.

That was then, and now is now. The choices are different, but once again, on one side, there is pressure to prioritize the speculative concern about the “unanchoring” of long-term inflation expectations over promoting recovery and increased employment after a recession and a punishing pandemic. And, once again, the concerns about inflation are driven by a likely transitory increase in crude-oil and gasoline prices.

The case for prioritizing the fight against inflation was just made by none other than Larry Summers in an op-ed in the Washington Post. Let’s have a look at Summers’s case for fighting inflation now.

Fed Chair Jerome H. Powell’s Jackson Hole speech in late August provided a clear, comprehensive and authoritative statement, enumerated in five pillars, of the widespread team “transitory” view of inflation that prevailed at that time and shaped policy thinking at the central bank and in the administration. Today, all five pillars are wobbly at best.

First, there was a claim that price increases were confined to a few sectors. No longer. In October, prices for commodity goods outside of food and energy rose at more than a 12 percent annual rate. Various Federal Reserve system indexes that exclude sectors with extreme price movements are now at record highs.

https://www.washingtonpost.com/opinions/2021/11/15/inflation-its-past-time-team-transitory-stand-down/

Summers has a point. Price increases are spreading throughout the economy. But that spread does not rule out the possibility that increasing oil prices are causing the prices of many other products to increase as well, inasmuch as oil and substitute forms of energy are so widely used throughout the economy. If the increase in oil prices, and likely in food prices, has peaked, or soon will, it does not necessarily make sense to fight a war against an enemy that has retreated or is about to do so.

Second, Powell suggested that high inflation in key sectors, such as used cars and durable goods more broadly, was coming under control and would start falling again. In October, used-car prices accelerated to more than a 30 percent annual inflation rate, new cars to a 17 percent rate and household furnishings by an annualized rate of just above 10 percent.

Id.

Again, citing large increases in the price of cars, when it’s clear that special circumstances are causing new-car prices to rise rapidly, with used-car prices tagging along, is not very persuasive, especially when those special circumstances appear likely to be short-lived. To be sure, other durable-goods prices are also rising, but in the absence of a deeper source of inflation, the atmospherics cited by Summers are not that compelling.

Third, the speech pointed out that there was “little evidence of wage increases that might threaten excessive inflation.” This claim is untenable today with vacancy and quit rates at record highs, workers who switch jobs in sectors ranging from fast food to investment banking getting double-digit pay increases, and ominous Employment Cost Index increases.

Id.

Wage increases are usually an indicator of inflation, though, again, the withdrawal, permanent or temporary, of many workers from employment in the past two years is a likely cause of increased wages that is independent of an underlying and ongoing inflationary trend.

Fourth, the speech argued that inflation expectations remained anchored. When Powell spoke, market inflation expectations for the term of the next Federal Reserve chair were around 2.5 percent. Now they are about 3.1 percent, up half a percentage point in the past month alone. And consumer sentiment is at a 10-year low due to inflation fears.

Id.

Clearly, inflation expectations have increased over the short term for the variety of reasons that we have just been considering. But the curve of inflation expectations still seems to revert toward a lower level in the medium term and the long term.

Fifth, Powell emphasized global deflationary trends. In the same week the United States learned of the fastest annual inflation rate in 30 years, Japan, China and Germany all reported their highest inflation in more than a decade. And the price of oil, the most important global determinant of inflation, is very high and not expected by forward markets to decline rapidly.

Id.

Again, Summers is simply recycling the same argument. We know that there has been a short-term increase in inflation. The question we need to grapple with is whether this short-term inflationary blip is likely to be self-limiting, or whether it will feed on itself, causing inflation expectations to become “unanchored”. Forward prices of oil may not be showing that the price of oil will decline rapidly, but neither are they showing expectations of further increases. Without further increases in oil prices, it is fair to ask what the source of the further, ongoing inflation that will cause “unanchoring” is supposed to be.

As it has in the past, the threat of “unanchoring” is doing an awful lot of work. And it is not clear how the work is being done, except by begging the very question that needs to be answered, not begged.

After his windup, Summers offers fairly mild suggestions for his anti-inflation program, and only one of his comments seems mistaken.

Because of inflation, real interest rates are lower, as money is easier than a year ago. The Fed should signal that this is unacceptable and will be reversed.

Id.

The real interest rate about which the Fed should be concerned is the ex ante real interest rate, reflecting both the expected yield from real capital and the expected rate of inflation (which may, and often does, have feedback effects on the expected yield from real capital). Past inflation does not automatically get transformed into an increase in expected inflation, and it is not necessarily the case that past inflation has left the expected yield from real capital unaffected, so Summers’s inference that the recent blip in inflation necessarily implies that monetary policy has been eased could well be mistaken. Yet again, these are judgments (or even guesses) that policymakers have to make about the subjective judgments of market participants. Those are policy judgments that can’t be made simply by reading numbers off a computer screen.

While I’m not overly concerned by Summers’s list of inflation danger signs, there’s no doubt that inflation risk has risen. Yet, at least for now, that risk seems to be manageable. It may require the Fed to take pre-emptive measures against inflation down the road, but I don’t think that we have reached that point yet.

The main reason why I think that inflation risk has been overblown is that inflation is a common occurrence in postwar economies, as it was in the US after both World Wars and after the Korean and Vietnam Wars. It is widely recognized that war itself is inflationary, owing, among other reasons, to the usual practice of governments of financing wartime expenditures by printing money, but inflationary pressures tend to persist even after the wars end.

Why does inflation persist after wars come to an end? The main reason is that, during wartime, resources, labor and capital, are shifted from producing goods for civilian purposes to waging war and producing and transporting supplies to support the war effort. Because total spending, financed by printing money, increases during the war, money income goes up even though the production of goods and services for civilian purposes goes down.

The output of goods and services for civilian purposes having been reduced, the increased money income accruing to the civilian population implies rising prices of the civilian goods and services that are produced. The tendency for prices to rise during wartime is mitigated by the reduced availability of outlets for private spending, people normally postponing much of their non-essential spending while the war is ongoing. Consequently, the public accumulates cash and liquid assets during wartime with the intention of spending the accumulated cash and liquid assets when life returns to normal after the war.

The lack of outlets for private spending is reinforced when, as happened in World War I, World War II, the Korean War and the late stages of the Vietnam War, price controls prevent the prices of civilian goods still being produced from rising, so that consumers can’t buy goods – either at all or as much as they would like – that they would willingly have paid for. The result is suppressed inflation until wartime price controls are lifted, and deferred price increases are allowed to occur. As prices rise, the excess cash that had been accumulated while the goods people demanded were unavailable is absorbed by purchases made at the postponed increases in price.

In his last book, Incomes and Money, Ralph Hawtrey described with characteristic clarity the process by which postwar inflation, once price controls were lifted, absorbed the redundant cash balances accumulated during World War II.

America, like Britain, had imposed price controls during the war, and had accumulated a great amount of redundant money. So long as the price controls continued, the American manufacturers were precluded from checking demand by raising their prices. But the price controls were abandoned in the latter half of 1946, and there resulted a rise of prices reaching 30 per cent on manufactured goods in the latter part of 1947. That meant that American industry was able to defend itself against the excess demand. By the end of 1947 the rise of prices had nearly eliminated the redundant money; that is to say, the quantity of money (currency and bank deposits) was little more than in a normal proportion to the national income. There was no longer over-employment in American industry, and there was no reluctance to take export orders.

Hawtrey, Incomes and Money, p. 7

Responding to Paul Krugman’s similar point about the high inflation following World War II, Summers posted the following Twitter thread.

@paulkrugman continues his efforts to minimize the inflation threat to the American economy and progressive politics by pointing to the fact that inflation surged and then there was a year of deflation after World War 2.

If this is the best argument for not being alarmed that someone as smart, rhetorically effective and committed as Paul can make, my anxiety about inflation is increased.

Pervasive price controls were removed after the war. Economists know that measured prices with controls are artificial, so subsequent inflation proves little.

Millions of soldiers were returning home and a massive demobilization was in effect. Nothing like the current pervasive labor shortage was present.

https://twitter.com/LHSummers/status/1459992638170583041

Summers is surely correct that the situation today is not perfectly analogous to the post-WWII situation, but post-WWII inflation, as Hawtrey explained, was only partially attributable to the lifting of price controls. He ignores the effect of excess cash balances, which ultimately had to be spent or somehow withdrawn from circulation through a deliberate policy of deflation, which neither Summers nor most economists would think advisable or even acceptable. While the inflationary effect of absorbing excess cash balances is therefore almost inevitable, the duration of the inflation is limited and need not cause inflation expectations to become “unanchored.”

With the advent of highly effective Covid vaccines, we are now gradually emerging from the worst horrors of the Covid pandemic, when a substantial fraction of the labor force was either laid off or chose to withdraw from employment. As formerly idle workers return to work, we are in a prolonged quasi-postwar situation.

Just as the demand for civilian products declines during wartime, the demand for a broad range of private goods declined during the pandemic, as people stopped going to restaurants, going on vacations, and attending public gatherings, and limited their driving and travel. Thus, the fraction of earnings that was saved increased as outlets for private spending became unavailable, inappropriate or undesirable.

As the pandemic has receded, restoring outlets for private spending, pent-up, suppressed private demands have re-emerged, financed by households drawing down accumulated cash balances or drawing on credit lines that were augmented as indebtedness was paid down during the pandemic. For many goods, like cars, the release of pent-up private demand has outpaced the increase in supply, leading to substantial price increases that are unlikely to be sustained once short-term supply bottlenecks are eliminated. But such imbalances between rapid increases in demand and sluggish increases in supply do not seem like a reliable basis on which to make policy choices.

So what are we to do now? As always, Ralph Hawtrey offers the best advice. The control of inflation, he taught, ultimately depends on controlling the relationship between the rate of growth in total nominal spending (and income) and the rate of growth of total real output. If total nominal spending (and income) is increasing faster than the increase in total real output, the difference will be reflected in the prices at which goods and services are provided.
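Hawtrey’s relation can be restated in one line of algebra. A minimal sketch: writing N for total nominal spending, y for total real output, and P for the price level, the identity N = Py implies, in growth rates,

```latex
% Growth-rate form of the identity N = P*y (nominal spending = price level x real output).
% Exact: (1 + g_N) = (1 + g_P)(1 + g_y); for small rates this is approximately
\[
g_P \;\approx\; g_N - g_y
\]
```

so inflation emerges, approximately, as the gap between nominal-spending growth and real-output growth. The 2015-2019 averages cited in the next paragraph fit the relation: 3.9% nominal growth less 2.2% real growth is roughly the observed 1.7% inflation.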

In the five years from 2015 to 2019, the average growth rate of nominal spending (and income) was about 3.9%. During that period, the average rate of growth of real output was 2.2% annually, and the average rate of inflation was 1.7%. It has plausibly been suggested that extrapolating the 3.9% annual growth of nominal spending over those five years provides a reasonable baseline against which to compare actual spending in 2020 and 2021.

Actual nominal spending in Q3 2021 was slightly below what it would have been had nominal GDP continued along the extrapolated 3.9% growth path. But for nominal GDP not to exceed that extrapolated growth path in Q4, spending in Q4 could increase at an annual rate of no more than 4.3%. Inasmuch as spending in Q3 2021 grew at a 7.8% annual rate, the growth of nominal spending would have to slow substantially in Q4 from its Q3 rate.

But it is not clear that a 3.9% growth rate of nominal spending is the appropriate baseline to use. From 2015 to 2019, the average growth rate of real output was only 2.2% annually, and the average inflation rate was only 1.7%. The Fed has long announced that its inflation target is 2%, and in the 2015-2019 period it consistently failed to meet that target. If the inflation target was 2% rather than 1.7%, and if, presumably, the Fed did not believe that real growth would have been lower with 2% inflation than with 1.7%, then the Fed should have been aiming for more than 3.9% growth in total spending. If so, a baseline for extrapolating the growth path of nominal spending should certainly not be less than 4.2%. Even a 4.5% baseline seems reasonable, and a baseline as high as 5% does not seem unreasonable.

With a 5% baseline, total nominal spending in Q4 could increase by as much as 5.4% without raising total nominal spending above its target path. But I think the more important point is not whether total spending does or does not rise above its growth path. The important goal is for the growth of nominal spending to decline steadily toward a reasonable growth path of about 4.5 to 5% and for this goal to be communicated to the public in a convincing manner. The 13.4% increase in total spending in Q2, when it appeared that the pandemic might soon be over, was likely a one-off outlier reflecting the release of pent-up demand. The 7.8% increase in Q3 was excessive, but substantially less than the Q2 rate of increase. If the Q4 increase does not continue the downward trend in the rate of increase of nominal spending, it will be time to re-evaluate policy to ensure that the growth of spending is brought down to a non-inflationary range.
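To make the arithmetic of the last few paragraphs concrete, here is a minimal sketch of the baseline-path calculation. The assumption that actual Q3 2021 spending sits about 0.1% below the extrapolated path is inferred from the 4.3% figure given above, not taken from any official series; with that assumed gap, the sketch reproduces both the 4.3% ceiling under a 3.9% baseline and the 5.4% ceiling under a 5% baseline.

```python
# A minimal sketch of the baseline-path arithmetic discussed above.
# Assumption (inferred from the post's own 4.3% figure, not official data):
# actual Q3 2021 nominal spending is about 0.1% below the extrapolated path.

def max_q4_growth(baseline_annual, gap_below_path=0.001):
    """Largest annualized Q4 growth rate that keeps nominal spending
    at or below a path growing at baseline_annual per year."""
    quarterly_path_step = (1 + baseline_annual) ** 0.25        # one quarter along the path
    max_quarterly = quarterly_path_step * (1 + gap_below_path) # path step plus catch-up gap
    return max_quarterly ** 4 - 1                              # annualized

for baseline in (0.039, 0.045, 0.05):
    print(f"baseline {baseline:.1%} -> max Q4 growth {max_q4_growth(baseline):.1%}")
# baseline 3.9% -> max Q4 growth 4.3%
# baseline 4.5% -> max Q4 growth 4.9%
# baseline 5.0% -> max Q4 growth 5.4%
```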

More on Arrow’s Explanatory Gap and the Milgrom-Stokey Argument

In my post yesterday, I discussed what I call Kenneth Arrow’s explanatory gap: the absence of any account in neoclassical economic theory of how the equilibrium price vector is actually arrived at, and of how that equilibrium price vector changes when changes in underlying conditions imply new equilibrium prices. I post below some revisions to several paragraphs of yesterday’s post, supplemented by a more detailed discussion of the Milgrom-Stokey “no-trade theorem” and its significance. The following is drawn from a work in progress to be presented later this month at a conference celebrating the 150th anniversary of the publication of Carl Menger’s Grundsätze der Volkswirtschaftslehre.

Thus, just twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that neoclassical theory provides no explanation of how competitive prices can change, Paul Milgrom and Nancy Stokey (1982) turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at disequilibrium prices, because every potential trader would realize that an offer to trade at disequilibrium prices would not be made unless the offer was based on private knowledge and would therefore lead to a wealth transfer to the trader relying on private knowledge. Because no traders with rational expectations would agree to a trade at a disequilibrium price, there would be no incentive to seek or exploit private information, and all trades would occur at equilibrium prices.

This would have been a profound and important argument had it been made as a reductio ad absurdum to show the untenability of rational expectations as a theory of expectation formation, inasmuch as it leads to the obviously false factual implication that private information is never valuable and that no profitable trades are made by those possessed of private information. In concluding their paper, Milgrom and Stokey (1982) acknowledge the troubling implication of their argument:

Our results concerning rational expectations market equilibria raise anew the disturbing questions expressed by Beja (1977), Grossman and Stiglitz (1980), and Tirole (1980): Why do traders bother to gather information if they cannot profit from it? How does information come to be reflected in prices if informed traders do not trade or if they ignore their private information in making inferences? These questions can be answered satisfactorily only in the context of models of the price formation process; and our central result, the no-trade theorem, applies to all such models when rational expectations are assumed. (p. 17)

What Milgrom and Stokey seem not to have grasped is that the rational-expectations assumption dispenses with the need for a theory of price formation, inasmuch as every agent is assumed to be able to calculate what the equilibrium price is. They attempt to mitigate the extreme nature of this assumption by arguing that, by observing price changes, traders can infer what changes in common knowledge would have implied the observed changes. That argument seems insufficient, because any given price change could have more than one potential cause. As Scott Sumner has often argued, one can’t reason from a price change. If one doesn’t have independent knowledge of the cause of a price change, one can’t use the price change as a basis for further inference.
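Sumner’s dictum is easy to illustrate numerically. A minimal sketch, with invented linear demand and supply curves: the same observed price increase is consistent both with a demand shock (quantity rises) and with a supply shock (quantity falls), so the price change by itself licenses no inference about its cause.

```python
# "Never reason from a price change": the same price rise can come from
# a demand shift or a supply shift, with opposite quantity movements.
# The linear curves below are invented for illustration.

def equilibrium(a, b, c, d):
    """Solve demand q = a - b*p against supply q = c + d*p."""
    p = (a - c) / (b + d)
    return p, a - b * p

base = equilibrium(a=10, b=1, c=2, d=1)           # p = 4, q = 6
demand_shock = equilibrium(a=12, b=1, c=2, d=1)   # p = 5, q = 7: price up, quantity up
supply_shock = equilibrium(a=10, b=1, c=0, d=1)   # p = 5, q = 5: price up, quantity down

print(base, demand_shock, supply_shock)
# An observer who sees only the price move from 4 to 5 cannot tell
# which shock occurred without independent knowledge of its cause.
```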

The Explanatory Gap and Mengerian Subjectivism

My last several posts have focused on Marshall and Walras: on the relationships and differences between Marshall’s partial-equilibrium approach and Walras’s general-equilibrium approach, and on how the current state of neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I’m not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I’ve been writing a paper, drawing on some of my recent musings, for a conference later this month celebrating the 150th anniversary of Menger’s great work, because I think his work offers at least some hints about how to go about developing an improved neoclassical theory. Here’s a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge an explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1958 paper “Toward a Theory of Price Adjustment.”

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all but a single market (or perhaps a subset of closely related markets), so that the Marshallian equilibrium simply represents how, in a single small or isolated market, an equilibrium price in that market is determined under suitable ceteris-paribus conditions, thereby leaving the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced version of a single-period model inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways as opposed to just playing out a script already written in which agents go through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of the future prices. But if their stock of knowledge consists of both common knowledge and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not necessarily inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of the equilibrium prices of all commodities. But in intertemporal equilibrium, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stocks of private knowledge. Nevertheless, an equilibrium may still come about if, based on their private knowledge, they arrive at sufficiently similar expectations of future prices for their plans for current and future purchases and sales to be mutually compatible.

Thus, just twenty years after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices can change, Milgrom and Stokey turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, so that it would be impossible for a trader with private information to take advantage of that information. This argument seems to suffer from a widely shared misunderstanding of what rational expectations signify.

Thus, in the Mengerian view articulated by Hayek, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, but not inconceivable, state of affairs, a view that stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982), in which they argue that under a rational-expectations equilibrium there is no private knowledge, only common knowledge, and that it would be impossible for any trader to trade on private knowledge, because no other trader with rational expectations would be willing to trade with anyone at a price other than the equilibrium price.

Rational expectations is not a property of individual agents making rational and efficient use of information from whatever source it is acquired. As I have previously explained here (and in a revised version here), rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles of a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations, and they are rational. That agents hold rational expectations in equilibrium does not mean that the agents are possessed of the power to calculate equilibrium prices or even to know whether their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren’t satisfied. See Blume, Curry and Easley (2006).

The assumption, now routinely regarded as axiomatic, that rational expectations is sufficient to ensure that equilibrium is automatically achieved, and that agents’ price expectations necessarily correspond to equilibrium price expectations, is a form of question-begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger’s contribution, Hayek (1970), commenting on the inexactness of Menger’s account of economic theory, focused on Menger’s reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this may have been an aspect of Menger’s skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural and more akin to a range of potential outcomes of different probability may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation accounting for any tendency toward intertemporal equilibrium; instead, he merely relied (and in 1937!) on an empirical tendency of economies to move in the direction of equilibrium as a justification for considering economic theory to have any practical relevance.

My Conversation with Hendrickson and Albrecht on the Economic Forces Podcast

Josh Hendrickson and Brian Albrecht have just posted our conversation about UCLA economics and economists, price theory vs. microfoundations, and my new book on their new Economic Forces Podcast. It was a really interesting conversation. Below are links to the podcast and to my book, which is now available online, and can be pre-ordered. The print version should be available in December.


https://link.springer.com/book/10.1007/978-3-030-83426-5

The Walras-Marshall Divide in Neoclassical Theory, Part II

In my previous post, which itself followed up an earlier post “General Equilibrium, Partial Equilibrium and Costs,” I laid out the serious difficulties with neoclassical theory in either its Walrasian or Marshallian versions: its exclusive focus on equilibrium states with no plausible explanation of any economic process that leads from disequilibrium to equilibrium.

The Walrasian approach treats general equilibrium as the primary equilibrium concept, because no equilibrium solution in a single market can be isolated from the equilibrium solutions for all other markets. Marshall understood that no single market could be in isolated equilibrium independent of all other markets, but the practical difficulty of framing an analysis of the simultaneous equilibration of all markets made focusing on general equilibrium unappealing to Marshall, who wanted economic analysis to be relevant to the concerns of the public, i.e., policy makers and men of affairs whom he regarded as his primary audience.

Nevertheless, in doing partial-equilibrium analysis, Marshall conceded that it had to be embedded within a general-equilibrium context, so he was careful to specify the ceteris-paribus conditions under which partial-equilibrium analysis could be undertaken. In particular, the market under analysis had to be sufficiently small, or the disturbance to which it was subject had to be sufficiently small, for the repercussions of the disturbance in that market to have only a minimal effect on other markets, or, if substantial, those effects had to be concentrated on a specific market (e.g., the market for a substitute, or complementary, good).

By focusing on equilibrium in a single market, Marshall believed he was making the analysis of equilibrium more tractable than the Walrasian alternative of focusing on the analysis of simultaneous equilibrium in all markets. Walras chose to make his approach to general equilibrium, if not tractable, at least intuitive by appealing to the fiction of tatonnement conducted by an imaginary auctioneer adjusting prices in all markets in response to any inconsistencies in the plans of transactors preventing them from executing their plans at the announced prices.

But it eventually became clear, to Walras and to others, that tatonnement could not be considered a realistic representation of actual market behavior, because the tatonnement fiction disallows trading at disequilibrium prices, pausing all transactions while a complete set of equilibrium prices for all desired transactions is sought by a process of trial and error. Not only are all economic activity and the passage of time suspended during the tatonnement process, but there is not even a price-adjustment algorithm that can be relied on to find a complete set of equilibrium prices in a finite number of iterations.
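For concreteness, here is a minimal sketch of the tatonnement rule in an invented two-consumer, two-good Cobb-Douglas exchange economy: the auctioneer raises the price of any good in excess demand, lowers the price of any good in excess supply, and permits no trade until the process stops. This toy example happens to converge, but, as noted above, nothing guarantees that such a price-adjustment process finds equilibrium prices in general; Scarf’s well-known examples show that it can cycle forever.

```python
# A toy Walrasian tatonnement: prices adjust in proportion to excess
# demand, and no trade takes place until (unless) the process converges.
# Preferences and endowments are invented for illustration.
import numpy as np

def excess_demand(p):
    """Aggregate excess demand for two Cobb-Douglas consumers with
    endowments (1, 0) and (0, 1) and expenditure shares 0.3 and 0.7."""
    a1, a2 = 0.3, 0.7
    w1, w2 = p[0], p[1]                       # wealth = market value of endowment
    d1 = np.array([a1 * w1 / p[0], (1 - a1) * w1 / p[1]])
    d2 = np.array([a2 * w2 / p[0], (1 - a2) * w2 / p[1]])
    return d1 + d2 - np.array([1.0, 1.0])     # total demand minus total endowment

p = np.array([0.2, 0.8])                      # the auctioneer's opening cry
for _ in range(1000):
    z = excess_demand(p)
    if np.abs(z).max() < 1e-10:               # all markets (nearly) clear: stop
        break
    p = np.maximum(p + 0.1 * z, 1e-6)         # raise price where demand exceeds supply
    p = p / p.sum()                           # normalize: only relative prices matter

print(p)  # converges to [0.5, 0.5] here, but convergence is not guaranteed in general
```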

Despite its seeming realism, the Marshallian approach of piecemeal, market-by-market equilibration of each distinct market is no more tenable theoretically than tatonnement, the partial-equilibrium method being premised on a ceteris-paribus assumption under which all prices and all other endogenous variables determined in markets other than the one under analysis are held constant. That assumption can be maintained only on the condition that all those other markets are in equilibrium. So the implicit assumption of partial-equilibrium analysis is no less theoretically extreme than Walras’s tatonnement fiction.

In my previous post, I quoted Michel De Vroey’s dismissal of Keynes’s rationale for the existence of involuntary unemployment, a violation, in De Vroey’s estimation, of Marshallian partial-equilibrium premises. Let me quote De Vroey again.

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

A History of Macroeconomics from Keynes to Lucas and Beyond, pp. 22-23

My interpretation of De Vroey’s argument is that the strict Marshallian viewpoint requires that firms correctly anticipate the wages that they will have to pay in making their hiring and production decisions, while presumably also correctly anticipating the future demand for their products. I am unable to make sense of this argument unless it means that firms — and why should firm owners or managers be the only agents endowed with perfect or correct foresight? — correctly foresee the prices of the products that they sell and of the inputs that they purchase or hire. In other words, the strict Marshallian viewpoint invoked by De Vroey assumes that each transactor foresees, without the intervention of a timeless tatonnement process guided by a fictional auctioneer, the equilibrium price vector. When the strict Marshallian viewpoint is adopted, everything is simple indeed: every transactor is a Walrasian auctioneer.

My interpretation of Keynes – and perhaps I’m just reading my own criticism of partial-equilibrium analysis into Keynes – is that he understood that the aggregate labor market can’t be analyzed in a partial-equilibrium setting, because Marshall’s ceteris-paribus proviso can’t be maintained for a market that accounts for roughly half the earnings of the economy. When conditions change in the labor market, everything else also changes. So the equilibrium conditions of the labor market must be governed by aggregate equilibrium conditions that can’t be captured in, or accounted for by, a Marshallian partial-equilibrium framework. Because something other than supply and demand in the labor market determines the equilibrium, what happens in the labor market can’t, by itself, restore an equilibrium.

That, I think, was Keynes’s intuition. But while identifying a serious defect in the Marshallian viewpoint, that intuition did not provide an adequate theory of adjustment. But the inadequacy of Keynes’s critique doesn’t rehabilitate the Marshallian viewpoint, certainly not in the form in which De Vroey represents it.

But there’s a deeper problem with the Marshallian viewpoint than the interdependence of all markets. Although Marshall accepted marginal-utility theory in principle and used it to explain consumer demand, he tried to limit its application to demand, while retaining the classical theory of the cost of production as a coordinate factor explaining the relative prices of goods and services. Marginal utility determines demand while cost determines supply, so that the interaction of supply and demand (cost and utility) jointly determines price, just as the two blades of a scissors jointly cut a piece of cloth or paper.

This view of the role of cost could be maintained only in the context of the typical Marshallian partial-equilibrium exercise in which all prices — including input prices — except the price of a single output are held fixed at their general-equilibrium values. But the equilibrium prices of inputs are not determined independently of the values of the outputs they produce, so their equilibrium market values are derived exclusively from the value of whatever outputs they produce.

This was a point that Marshall, desiring to minimize the extent to which the Marginal Revolution overturned the classical theory of value, either failed to grasp or obscured: both prices and costs are simultaneously determined. By focusing on partial-equilibrium analysis, in which input prices are treated as exogenous variables rather than, as in general-equilibrium analysis, endogenously determined variables, Marshall was able to argue as if the classical theory, according to which the cost incurred to produce something determines its value or market price, had not been overturned.

The absolute dependence of input prices on the value of the outputs that they are being used to produce was grasped more clearly by Carl Menger than by Walras and certainly more clearly than by Marshall. What’s more, unlike either Walras or Marshall, Menger explicitly recognized the time lapse between the purchasing and hiring of inputs by a firm and the sale of the final output, inputs having been purchased or hired in expectation of the future sale of the output. But expected future sales are at prices anticipated, but not known, in advance, making the valuation of inputs equally conjectural and forcing producers to make commitments without knowing either their costs or their revenues before undertaking those commitments.

It is precisely this contingent relationship between the expectation of future sales at unknown, but anticipated, prices and the valuations that firms attach to the inputs they purchase or hire that provides an alternative to the problematic Marshallian and Walrasian accounts of how equilibrium market prices are actually reached.

The critical role of expected future prices in determining equilibrium prices was missing from both the Marshallian and the Walrasian theories of price determination. In the Walrasian theory, price determination was attributed to a fictional tatonnement process that Walras originally thought might serve as a kind of oversimplified and idealized version of actual market behavior. But Walras seems eventually to have recognized and acknowledged how far removed from reality his tatonnement invention actually was.

The seemingly more realistic Marshallian account of price determination avoided the unrealism of the Walrasian auctioneer, but only by attributing equally, if not more, unrealistic powers of foreknowledge to the transactors than Walras had attributed to his auctioneer. Only Menger, who realistically avoided attributing extraordinary knowledge either to transactors or to an imaginary auctioneer, instead attributing to transactors only an imperfect and fallible ability to anticipate future prices, provided a realistic account, or at least a conceptual approach toward a realistic account, of how prices are actually formed.

In a future post, I will try to spell out in greater detail my version of a Mengerian account of price formation and what this account might tell us about the process by which a set of equilibrium prices might be realized.

The Walras-Marshall Divide in Neoclassical Theory, Part I

This year, 2021, puts us squarely in the midst of the sesquicentennial of the great marginal revolution in economics that began with the almost simultaneous appearance in 1871 of Menger’s Grundsätze der Volkswirtschaftslehre and Jevons’s Theory of Political Economy, followed in 1874 by Walras’s Éléments d’économie politique pure. Jevons left few students behind to continue his work, so his influence pales in comparison with that of his younger contemporary Alfred Marshall, who, working along similar lines, published his Principles of Economics in 1890. It was Marshall’s version of marginal-utility theory that defined for more than a generation what became known as neoclassical theory in the Anglophone world. Menger’s work, via his disciples Böhm-Bawerk and Wieser, was actually the most influential work on marginal-utility theory for at least 50 years, the work of Walras and his successor, Vilfredo Pareto, being too mathematical, even for professional economists, to become influential before the 1930s.

But after Walras’s work was restated by J. R. Hicks, in his immensely influential treatise Value and Capital, in a form not only more accessible but also more coherent and more sophisticated, it became the standard for rigorous formal economic analysis. Although the Walrasian paradigm became the standard for formal theoretical work, the Marshallian paradigm remained influential for applied microeconomic theory and empirical research, especially in fields like industrial organization, labor economics and international trade. Neoclassical economics, the corpus of mainstream economic theory that grew out of the marginal revolution, was therefore built almost entirely on the works of Marshall and Walras, the influence of Menger, like that of Jevons, having been largely, but not entirely, assimilated into the main body of neoclassical theory.

The subsequent development of monetary theory and macroeconomics, especially after the Keynesian Revolution swept the economics profession, was also influenced by both Marshall and Walras. And the question whether Keynes belonged to the Marshallian tradition in which he was trained, or became, either consciously or unconsciously, a Walrasian has been an ongoing dispute among historians of macroeconomics since the late 1940s.

The first attempt to merge Keynes into the Walrasian paradigm led to the first neoclassical synthesis, which gained a brief ascendancy in the 1960s and early 1970s before being eclipsed by the New Classical rational expectations macroeconomics of Lucas and Sargent that led to a transformation of macroeconomics.

With that in mind, I’ve been reading Michel De Vroey’s excellent History of Macroeconomics from Keynes to Lucas and Beyond. An important feature of De Vroey’s book is its classification of macrotheories as either Marshallian or Walrasian in structure and orientation. I believe that the Walras vs. Marshall distinction is important, but I would frame it differently from how De Vroey does. To be sure, De Vroey identifies some key differences between the Marshallian and Walrasian schemas, but I question whether he focuses on the differences between Marshall and Walras that really matter. And I also believe that he fails to address adequately the important problem that neither Marshall nor Walras solved, namely their inability to describe adequately a market mechanism that actually does, or even might, lead an economy toward an equilibrium position.

One reason for De Vroey’s misplaced emphasis is that he focuses on the different stories told by Walras and Marshall to explain how equilibrium — either for the entire system (Walras) or for a single market (Marshall) — is achieved. The story that Walras famously told was the tatonnement stratagem, conceived by Walras to provide an account of how market forces, left undisturbed, would automatically bring an economy to a state of rest (general equilibrium). But Walras eventually realized that tatonnement could never be realistic for an economy with both exchange and production. The point of tatonnement is to prevent trading at disequilibrium prices, but assuming that production is suspended during tatonnement is untenable, because production cannot realistically be interrupted while the search for the equilibrium price vector is carried to completion.

Nevertheless, De Vroey treats tatonnement, despite its hopeless unrealism, as a sine qua non for any model to be classified as Walrasian. In chapter 19 (“The History of Macroeconomics through the lens of the Marshall-Walras Divide”), De Vroey provides a comprehensive list of differences between the Marshallian and Walrasian modeling approaches, a list that makes tatonnement a key distinction between the two approaches. I will discuss the three differences that seem most important.

1 Price formation: Walras assumes all exchange occurs at equilibrium prices found through tatonnement conducted by a deus-ex-machina auctioneer. All agents are therefore price takers even in “markets” in which, absent the auctioneer, market power could be exercised. Marshall assumes that prices are determined in the course of interaction of suppliers and demanders in distinct markets, so that the mix of price-taking and price-setting agents depends on the characteristics of those distinct markets.

This dichotomy between the Walrasian and Marshallian accounts of how prices are determined sheds light on the motivations that led Marshall and Walras to adopt their differing modeling approaches, but there is an important distinction between a model and the intuition that motivates or rationalizes it. The model stands on its own whatever the intuition motivating it. The motivation behind the model can inform how the model is assessed, but the substance of the model and its implications remain intact even if the intuition behind the model is rejected.

2 Market equilibrium: Walras assumes that no market is in equilibrium unless general equilibrium obtains. Marshall assumes that partial equilibrium is reached separately in each market. General equilibrium is achieved when all markets are in partial equilibrium. The Walrasian approach is top-down, the Marshallian bottom-up.

3 Realism: Marshall is more realistic than Walras in depicting individual markets in which transactors themselves engage in the price-setting process, assessing market conditions, and gaining information about supply-and-demand conditions; Walras assumes that all agents are passive price takers merely calculating their optimal, but provisional, plans to buy and sell at any price vector announced by the auctioneer who then processes those plans to determine whether the plans are mutually consistent or whether a new price vector must be tried. But whatever the gain in realism, it comes at a cost, because, except in obvious cases of complementarity or close substitutability between products or services, the Marshallian paradigm ignores the less obvious, but not necessarily negligible, interactions between markets. Those interactions render the Marshallian ceteris-paribus proviso for partial-equilibrium analysis logically dubious, except under the most stringent assumptions.

The absence of an auctioneer from Marshall’s schema leads De Vroey to infer that market participants in that schema must be endowed with knowledge of market demand-and-supply conditions. I claim no expertise as a Marshallian scholar, but I find it hard to accept that, given his emphasis on realism, Marshall would have attributed perfect knowledge to market participants. The implausibility of the Walrasian assumptions is thus matched, in De Vroey’s view, by different, but scarcely less implausible, Marshallian assumptions.

De Vroey proceeds to argue that Keynes himself was squarely on the Marshallian, not the Walrasian, side of the divide. Here’s how, focusing on the IS-LM model, he puts it:

As far as the representation of the economy is concerned, the economy that the IS-LM model analyzes is composed of markets that function separately, each of them being an autonomous locus of equilibrium. Turning to trade technology, no auctioneer is supposedly present. As for the information assumption, it is true that economists using the IS-LM model scarcely evoke the possibility that it might rest on the assumption that agents are omniscient. But then nobody seems to have raised the issue of how equilibrium is reached in this model. Once raised, I see no other explanation than assuming agents’ ability to reconstruct the equilibrium values of the economy, that is, their being omniscient. On all these scores, the IS-LM model is Marshallian.

A History of Macroeconomics from Keynes to Lucas and Beyond, p. 350

De Vroey’s dichotomy between the Walrasian and Marshallian modeling approaches leads him to make needlessly sharp distinctions between them. The basic IS-LM model determines the quantity of money, consumption, saving and investment, income and the rate of interest. Presumably, by “autonomous locus of equilibrium,” De Vroey means that a variable determined in one of the IS-LM markets adjusts in response to disequilibrium in that market alone, but even so, the markets are not isolated from each other as they are in Marshallian partial-equilibrium analysis. The equilibrium values of the variables in the IS-LM model are simultaneously determined in all markets, so the autonomy of each market does not preclude simultaneous determination. Nor does the equilibrium of the model depend, as De Vroey seems to suggest, on the existence of an auctioneer; the role of the auctioneer is merely to provide a story (however implausible) about how the equilibrium is, or might be, reached.

Elsewhere De Vroey faults Keynes for characterizing cyclical unemployment as involuntary, because that characterization is incompatible with a Marshallian analysis of the labor market. Without endorsing Keynes’s reasoning, I cannot accept De Vroey’s argument against Keynes, because the argument is based explicitly on the assumption of perfect foresight. Describing the difference between a strict Marshallian approach and that taken by Keynes, De Vroey writes as follows:

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

Id. pp. 22-23

De Vroey seems to be saying that if firms anticipate an equilibrium outcome, the equilibrium outcome will be realized. This is not an argument; it is question-begging, question-begging which De Vroey justifies by warning that the alternative to question-begging is to “fall into a theoretical wilderness.” Thus, Keynes’s argument for involuntary unemployment is rejected on the ground that, in the only outcome foreseeable under the assumption of perfect information, unemployment cannot be involuntary.

Because neither the Walrasian nor the Marshallian modeling approach gives a plausible account of how an equilibrium is reached, De Vroey’s insistence that either implausible story is somehow essential to the corresponding modeling approach is misplaced, each approach committing the fallacy of misplaced concreteness in focusing on an equilibrium solution that cannot plausibly be realized. For De Vroey instead to argue that, because the Marshallian approach cannot otherwise explain how equilibrium is realized, the agents must be omniscient is akin to the advice of one Senator during the Vietnam War for President Nixon to declare victory and then withdraw all American troops.

I will have more to say about the Walras-Marshall divide and how to surmount the difficulties with both in a future post (or posts).

The Demise of Bretton Woods Fifty Years On

Today, Sunday, August 15, 2021, marks the 50th anniversary of the closing of the gold window at the US Treasury, at which a small set of privileged entities were at least legally entitled to demand redemption of dollar claims issued by the US government at the official gold price of $35 an ounce. (In 1971, as in 2021, August 15 fell on a Sunday.) When I started blogging in July 2011, I wrote one of my early posts about the 40th anniversary of that inauspicious event. My attention in that post was directed more at the horrific consequences of Nixon’s decision to combine a freeze on wages and prices with the closing of the gold window, the freeze being clearly far more damaging than the largely symbolic closing of the gold window itself. I am also re-upping my original post with some further comments, but in this post my attention is directed solely to the closing of the gold window.

The advent of cryptocurrencies and the continuing agitprop aiming to restore the gold standard apparently suggest to some people that the intrinsically trivial decision to do away with the last remnant of the short-lived international gold standard is somehow laden with cosmic significance. See, for example, the new book by Jeffrey Garten (Three Days at Camp David) marking the 50th anniversary.

About 10 years before the gold window was closed, Milton Friedman gave a lecture at the Mont Pelerin Society which he called “Real and Pseudo-Gold Standards”, which I previously wrote about here. Many if not most of the older members of the Mont Pelerin Society, notably L. v. Mises and Jacques Rueff, were die-hard supporters of the gold standard who regarded the Bretton Woods system as a deplorable counterfeit imitation of the real gold standard and longed for restoration of that old-time standard. In his lecture, Friedman bowed in their direction by faintly praising what he called a real gold standard, which he described as a state of affairs in which the quantity of money could be increased only by minting gold or by exchanging gold for banknotes representing an equivalent value of gold. Friedman argued that although a real gold standard was an admirable monetary system, the Bretton Woods system was nothing of the sort, calling it a pseudo-gold standard. Given that the then existing Bretton Woods system was not a real gold standard, but merely a system of artificially controlling the price of a particular commodity, Friedman argued that the next-best alternative would be to impose a quantitative limit on the increase in the quantity of fiat money, by enacting a law that would prohibit the quantity of money from growing by more than some prescribed amount or percentage (k percent per year) of the existing stock in any given time period.
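To fix ideas, here is a minimal sketch of the arithmetic of such a rule; the starting stock and the value of k below are purely illustrative, not values Friedman himself proposed:

```python
# A minimal sketch of a k-percent rule: the money stock may grow by at
# most k percent per year, whatever economic conditions happen to be.
# The starting stock and the value of k are purely illustrative.

def k_percent_path(m0: float, k: float, years: int) -> list[float]:
    """Money-stock ceilings implied by a k-percent annual growth rule."""
    path = [m0]
    for _ in range(years):
        path.append(path[-1] * (1 + k / 100))
    return path

# For example, a money stock of 1,000 growing under a 3-percent rule:
for year, m in enumerate(k_percent_path(1_000.0, k=3.0, years=5)):
    print(f"year {year}: ceiling = {m:,.1f}")
```

The appeal of the rule was precisely its mechanical character: no discretion, no judgment, just a ceiling that compounds.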

While failing to win over the die-hard supporters of the gold standard, Friedman’s gambit was remarkably successful, and for many years, it actually was the rule of choice among most like-minded libertarians and self-styled classical liberals and small-government conservatives. Eventually, the underlying theoretical and practical defects in Friedman’s k-percent rule became sufficiently obvious to cause even Friedman, however reluctantly, to abandon his single-minded quest for a supposedly automatic non-discretionary quantitative monetary rule.

Nevertheless, Friedman ultimately did succeed in undermining support, among right-wing, conservative, and libertarian economists, as well as many centrist or left-leaning economists and decision makers, for the Bretton Woods system of fixed, but adjustable, exchange rates anchored by a fixed dollar price of gold. And a major reason for his success was his argument that it was only by shifting to flexible exchange rates and abandoning a fixed gold price that the exchange controls and restrictions on capital movements in place for a quarter of a century after World War II could be lifted, a rationale congenial and persuasive to many who might otherwise have been unwilling to experiment with a system of flexible exchange rates among fiat currencies, a system that had never previously been implemented.

Indeed, the neoliberal economic and financial globalization that followed the closing of the gold window and freeing of exchange rates after the demise of the Bretton Woods system, whether one applauds or reviles it, can largely be attributed to Friedman’s influence both as an economic theorist and as a propagandist. As much as Friedman deplored the imposition of wage and price controls on August 15, 1971, he had reason to feel vindicated by the closing of the gold window, the freeing of exchange rates, and, eventually, the lifting of all capital controls and the legalization of gold ownership by private individuals, all of which followed from the Camp David meeting.

But the objective economic situation confronted by those at Camp David was such that the Bretton Woods system could not be salvaged. As I wrote in my 2011 post, the Bretton Woods system built on the foundation of a fixed gold price of $35 an ounce was not a true gold standard, because a free market in gold did not exist and could not be maintained at the official price. Trade in gold was sharply restricted, and only privileged central banks and governments were legally entitled to buy or sell gold at the official price. Even the formal right of the privileged foreign governments and central banks was subject to the informal, but unwelcome and potentially dangerous, disapproval of the United States.

The gold standard is predicated on the idea that gold has an ascertainable value, so that if money is made exchangeable for gold at a fixed rate, money and gold will have an identical value owing to arbitrage transactions. Such arbitrage transactions can occur only if, and so long as, no barriers prevent effective arbitrage. The unquestioned convertibility of a unit of currency into gold ensured that arbitrage would constrain the value of money to equal the value of gold. But under Bretton Woods the opportunities for arbitrage were so drastically limited that the value of the dollar was never clearly equal to the value of gold, which was governed by, pardon the expression, fiat rather than by free-market transactions.
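The logic of that arbitrage can be put in a stylized sketch; the prices and the cost allowance below are illustrative, and the point is only the direction in which unrestricted convertibility pushes the market price:

```python
# A stylized sketch of the arbitrage tying a convertible currency to gold.
# PARITY is the official price; the market prices and the cost allowance
# are illustrative.

PARITY = 35.0  # official dollars per ounce

def arbitrage_direction(market_price: float, cost: float = 0.1) -> str:
    """Direction of profitable arbitrage when gold trades away from parity."""
    if market_price > PARITY + cost:
        # Redeem dollars for gold at the official price and sell the gold
        # in the market; redemptions drain gold until the gap closes.
        return "redeem dollars for gold; sell gold at the market price"
    if market_price < PARITY - cost:
        # Buy gold cheaply in the market and convert it into dollars.
        return "buy gold at the market price; convert gold into dollars"
    return "no profitable arbitrage; market price pinned to parity"

print(arbitrage_direction(36.00))  # the gold window would be drained
print(arbitrage_direction(35.05))  # parity holds
```

Under Bretton Woods, everyone but a privileged handful of central banks was barred from the first trade, which is why the official price could drift away from the value of gold.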

The lack of a tight link between the value of gold and the value of the dollar was not a serious problem as long as the value of the dollar was kept essentially stable and there was a functioning (albeit not free) gold market. After its closure during World War II, the gold market did not function at all until 1954, so the wartime and postwar inflation and the brief Korean War inflation did not undermine the official gold price of $35 an ounce that had been set in 1934 and was maintained under Bretton Woods. Even after a functioning, but not entirely free, gold market was reopened in 1954, the official price was easily sustained until the late 1960s thanks to central-bank cooperation, whose formalization through the International Monetary Fund (IMF) was one of the positive achievements of Bretton Woods. The London gold price was hardly a free-market price, because of central-bank intervention and restrictions imposed on access to the market, but the gold holdings of the central banks were so large that it had always been in their power to control the market price if they were sufficiently determined to do so. But over the course of the 1960s, their cohesion gradually came undone. Why was that?

The first point to note is that the gold standard evolved over the course of the eighteenth and nineteenth centuries, first as a British institution and much later as an international institution, largely by accident, from a system of simultaneous gold and silver coinages that were closely but imperfectly linked by a relative price of between 15 and 16 ounces of silver per ounce of gold. Depending on the precise legal price ratio of silver coins to gold coins in any particular country, the legally undervalued metal would flow out of that country and the legally overvalued metal would flow into it, as the sketch below illustrates.
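A stylized rendering of that mechanism, with purely illustrative ratios (ounces of silver per ounce of gold):

```python
# A stylized sketch of bimetallic arbitrage (Gresham's law). Ratios are
# ounces of silver per ounce of gold; the numbers are illustrative.

def metal_flows(legal_ratio: float, market_ratio: float) -> str:
    if legal_ratio < market_ratio:
        # Gold buys more silver abroad than at the domestic mint: gold is
        # legally undervalued, so gold flows out and silver flows in.
        return "gold flows out; silver flows in"
    if legal_ratio > market_ratio:
        # The mirror case: silver is legally undervalued and flows out.
        return "silver flows out; gold flows in"
    return "no arbitrage; both metals circulate"

print(metal_flows(legal_ratio=15.0, market_ratio=15.5))  # gold leaves
print(metal_flows(legal_ratio=16.0, market_ratio=15.5))  # gold arrives
```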

When Britain overvalued gold at the turn of the 18th century, gold flowed into Britain, leading to the birth of the British gold standard. In most other countries, silver and gold coins circulated simultaneously at a ratio of 15.5 ounces of silver per ounce of gold. It was only when the US, after the Civil War, formally adopted a gold standard and the newly formed German Reich also shifted from a bimetallic to a gold standard that the increased demand for gold caused gold to appreciate relative to silver. To avoid the resulting inflation, countries with bimetallic systems based on a 15.5-to-1 silver/gold ratio suspended the free coinage of silver and shifted to the gold standard, further raising the silver/gold price ratio. Thus, the gold standard became an international, not just a British, system only in the 1870s, and it happened not by design or international consensus but by a series of piecemeal decisions by individual countries.

The important takeaway from this short digression into monetary history is that the relative values of the gold-standard currencies were largely inherited from the historical definitions of the currency units of each country, rather than chosen by deliberate policy decisions about what currency value to adopt in establishing the gold standard in any particular country. But when the gold standard collapsed in August 1914 at the start of World War I, it had to be recreated more or less from scratch after the war. The US, holding 40% of the world’s monetary gold reserves, was in a position to determine the value of gold, so it could easily restore convertibility at the prewar gold price of $20.67 an ounce. For other countries, the choice of the value at which to restore gold convertibility was really a decision about the dollar exchange rate at which to peg their currencies.

Before the war, the dollar-pound exchange rate was $4.86 per pound. The postwar dollar-pound exchange rate was just barely close enough to the prewar rate to make restoring the convertibility of the pound at the prewar rate seem doable. Many, including Keynes, argued that Britain would be better off with an exchange rate in the neighborhood of $4.40 or less, but Winston Churchill, then Chancellor of the Exchequer, was persuaded to restore convertibility at the prewar parity. That decision may or may not have been a good one, but I believe that its significance for the world economy at the time and subsequently has been overstated. After convertibility was restored at the prewar parity, chronically high postwar British unemployment increased only slightly in 1925-26 before declining modestly until the onset of the Great Deflation and Great Depression in late 1929. The British economy would have gotten a boost if the prewar dollar-pound parity had not been restored (or if the Fed had accommodated the prewar parity by domestic monetary expansion), but the drag on the British economy after 1925 was a negligible factor compared to the other factors, primarily gold accumulation by the US and France, that triggered the Great Deflation in late 1929.

The cause of that deflation was largely centered in France (with a major assist from the Federal Reserve). Before the war the French franc was worth about 20 cents, but disastrous French postwar economic policies caused the franc to fall to just 2 cents in 1926 when Raymond Poincaré was called upon to lead a national-unity government to stabilize the situation. His success was remarkable, the franc rising to over 4 cents within a few months. However, despite earlier solemn pledges to restore the franc to its prewar value of 20 cents, he was persuaded to stabilize the franc at just 3.92 cents when convertibility into gold was reestablished in June 1928, undervaluing the franc against both the dollar and the pound.

Not only was the franc undervalued, but the Bank of France, which, under previous governments, had been persuaded or compelled to supply francs to finance deficit spending, was prohibited by the new Monetary Law that restored convertibility at the fixed rate of 3.92 cents from increasing the quantity of francs except in exchange for gold or foreign exchange convertible into gold. While protecting the independence of the Bank of France from government fiscal demands, the law also prevented the French money stock from increasing to accommodate increases in the French demand for money except by way of a current-account surplus or a capital inflow.

Meanwhile, the Bank of France began converting foreign-exchange reserves into gold. The resulting increase in French gold holdings led to gold appreciation. Under the gold standard, gold appreciation is manifested in price deflation affecting all gold-standard countries. That deflation was the direct and primary cause of the Great Depression, which led, over a period of five brutal years, to the failure and demise of the newly restored international gold standard.

These painful lessons were not widely or properly understood at the time, or for a long time afterward, but the clear takeaway from that experience was that trying to restore the gold standard again would be a dangerous undertaking. Another lesson that was intuited, if not fully understood, is that if a country pegs its exchange rate to gold or to another currency, it is safer to err on the side of undervaluation than of overvaluation. So, when the task of recreating an international monetary system was undertaken at Bretton Woods in July 1944, the architects of the system tried to retain the formal trappings of the gold standard while eliminating the deflationary biases and incentives that had doomed the interwar gold standard. To prevent increasing demand for gold from causing deflation, the obligation to convert cash into gold was limited to the United States, and access to the US gold window was restricted to other central banks via the newly formed International Monetary Fund. Each country could, in consultation with the IMF, determine its exchange rate with the dollar.

Given the earlier experience, countries had an incentive to set exchange rates that undervalued their currencies relative to the dollar. Thus, for most of the 1950s and early 1960s, the US had to contend with a currency that was overvalued relative to the currencies of its principal trading partners: Germany and Italy (the two fastest-growing economies in Europe) and Japan (later joined by South Korea and Taiwan) in Asia. In one sense, the overvaluation was beneficial to the US, because access to low-cost and increasingly high-quality imports was a form of repayment to the US for its foreign-aid assistance and its ongoing defense protection against the threat of Communist expansionism, but the benefit came with a competitive disadvantage to US tradable-goods industries.

When West Germany took control of its economic policy from the US military in 1948, most price-and-wage controls were lifted, and the new deutschmark was devalued by a third relative to the official value of the old reichsmark. A further devaluation of almost 25% followed a year later. Great Britain, perhaps influenced by the success of the German devaluation, devalued the pound by 30% in 1949, from the old parity of $4.03 to $2.80. But unlike Germany, Britain, under the postwar Labour government, attempting to avoid postwar inflation, maintained wartime exchange controls and price controls. The underlying assumption at the time was that Britain’s balance-of-payments deficit reflected an overvalued currency, so that devaluation would avoid repeating the mistake made two decades earlier, when the dollar-pound parity had overvalued the pound.

That assumption, as Ralph Hawtrey had argued in lonely opposition to the devaluation, was misguided; the idea that the current account depends only, or even primarily, on the exchange rate abstracts from the monetary forces that affect the balance of payments and the current account. Worse, because British monetary policy was committed to the goal of maximizing short-term employment, the resulting excess supply of cash inevitably increased domestic spending, thereby attracting imports and diverting domestically produced products from export markets and preventing the devaluation from achieving the goal of improving the trade balance and promoting expansion of the tradable-goods sector.

Other countries, like Germany and Italy, combined currency undervaluation with monetary restraint, allowing only monetary expansion that was occasioned by current-account surpluses. This became the classic strategy, later called exchange-rate protection by Max Corden, of combining currency undervaluation with tight monetary policy. British attempts to use monetary policy to promote (over)full employment subject to the balance-of-payments constraint imposed by an exchange rate pegged to the dollar proved unsustainable, while Germany, Italy, and France (after de Gaulle came to power in 1958 and devalued the franc) found the combination of monetary restraint and currency undervaluation a successful economic strategy until the United States increased monetary expansion to counter the chronic overvaluation of the dollar.

Because the dollar was the key currency of the world monetary system, and because the US had committed itself to maintaining the $35 an ounce price of gold, the US, unlike other countries whose currencies were pegged to the dollar, could not adjust the dollar exchange rate to reduce or alleviate the overvaluation of the dollar relative to the currencies of its trading partners. Mindful of its duties as supplier of the world’s reserve currency, US monetary authorities kept US inflation close to zero after the 1953 Korean War armistice.

However, that restrained monetary policy led to three recessions during the Eisenhower administration (1953-54, 1957-58, and 1960-61). The latter two recessions led to disastrous Republican losses in the 1958 midterm elections and to Richard Nixon’s razor-thin loss in 1960 to John Kennedy, who had campaigned on a pledge to get the US economy moving again. The loss to Kennedy was a lesson that Nixon never forgot, and he was determined never to allow himself to lose another election merely because of scruples about US obligations as supplier of the world’s reserve currency.

Upon taking office, the Kennedy administration pressed for an easing of Fed policy to end the recession and to promote accelerated economic expansion. The result was a rapid recovery from the 1960-61 recession and the start of a nearly nine-year period of unbroken economic growth at perhaps the highest average growth rate in US history. While credit for the economic expansion is often given to the across-the-board tax cuts proposed by Kennedy in 1963 and enacted in 1964 under Lyndon Johnson, the expansion was already well under way by mid-1961, three years before the tax cuts became effective.

The international aim of monetary policy was to increase nominal domestic spending and to force US trading partners with undervalued currencies either to accept increased holdings of US liabilities or to revalue their currencies to diminish their undervaluation relative to the dollar. Easier US monetary policy led to increasing complaints from Europeans, especially the Germans, that the US was exporting inflation and to charges that the US was taking advantage of the exorbitant privilege of its position as supplier of the world’s reserve currency.

The aggressive response of the Kennedy administration to undervaluation of most other currencies led to predictable pushback from France under de Gaulle who, like many other conservative and right-wing French politicians, was fixated on the gold standard and deeply resented Anglo-American monetary pre-eminence after World War I and American dominance after World War II. Like France under Poincaré, France under de Gaulle sought to increase its gold holdings as it accumulated dollar-denominated foreign exchange. But under Bretton Woods, French gold accumulation had little immediate economic effect other than to enhance the French and Gaullist pretensions to grandiosity.

Already in 1961, Robert Triffin predicted that the Bretton Woods system could not endure permanently, because the growing world demand for liquidity could not be satisfied by the United States in a world with a relatively fixed gold stock and a stable or rising price level. The problem identified by Triffin was not unlike the one raised by Gustav Cassel in the 1920s, when he predicted that the world gold stock would likely not increase enough to prevent a worldwide deflation. The long-term gold shortage feared by Cassel was distinct from the problem that actually caused the Great Depression: the substantial increase in gold demand, associated with the restoration of the gold standard, that triggered the deflationary collapse of late 1929.

The problem Triffin identified was likewise a long-term one: the international gold stock would not grow enough to provide the increased gold reserves that the US needed to commit credibly to maintaining the convertibility of the dollar into gold without relying on deflation to raise the real value of its gold reserves.

Had it not been for the Vietnam War, Bretton Woods might have survived for several more years, but the rise of US inflation to over 4% in 1968-69, coupled with the 1969-70 recession brought on by an unsuccessful attempt to reduce inflation, followed by a weak recovery in 1971, made it clear that the US would not undertake a deflationary policy to make the official $35 gold price credible. Although de Gaulle’s unexpected retirement in 1969 removed the fiercest opponent of US monetary domination, waning confidence that the US could maintain the official gold peg, when the London gold price was already 10% higher than the official price, caused other central banks to fear that they would be stuck with devalued dollar claims once the US raised the official gold price. Not only the French, but other central banks as well, were already demanding redemption in gold of the dollar claims that they were holding.

An eleventh-hour policy reversal by the administration to save the official gold price was not in the cards, and everyone knew it. So all the handwringing about the abandonment of Bretton Woods on August 15, 1971 is either simple foolishness or gaslighting. The system was already broken, and it couldn’t be fixed at any price worth pondering for even half an instant. Nixon and his accomplices tried to sugarcoat their scrapping of the Bretton Woods System by pretending that they were announcing a plan that was the first step toward its reform and rejuvenation. But that pretense led to a so-called agreement with a new gold-price peg of $38 an ounce, which lasted hardly a year before it died not with a bang but a whimper.

What can we learn from this story? For me the real lesson is that the original international gold standard was, to borrow (via Hayek) a phrase from Adam Ferguson, “the [accidental] result of human action, not human design.” The gold standard, as it existed for those 40 years, was not an intuitively obvious or well-understood mechanism working according to a clear blueprint; it was an improvised set of practices, partly legislated, partly customary, and partly nothing more than conventional, but not very profound, wisdom.

The original gold standard collapsed with the outbreak of World War I, and the attempt to recreate it after the war, based on an imperfect understanding of how it had actually functioned, ended catastrophically with the Great Depression, a second collapse, and another, even more catastrophic, world war. The monetary system created in its place, the Bretton Woods system, using a modified feature of the earlier gold standard as a kind of window dressing, was certainly not a real gold standard and, perhaps, not even a pseudo-gold standard; those who profess to mourn its demise are either fooling themselves or trying to fool the rest of us.

We are now stuck with a fiat system that has evolved and been tinkered with over centuries. We have learned how to manage it, at least so far, to avoid catastrophe. With hard work and good luck, perhaps we will continue to learn how to manage it better than we have so far. But to seek to recreate a system that functioned fairly successfully for at most 40 years, under conditions not even remotely likely ever again to be approximated, is hardly likely to lead to an outcome that will enhance human well-being. Even worse, if that system were recreated, the resulting outcome might be far worse than anything we have experienced in the last half century.

August 15, 1971: Unhappy Anniversary (Update)

[Update 8/15/2021: I’m about to post a new post on the decision to close the gold window rather than the effects of the decision to freeze wages and prices. The new post is longer than this one and covers a different set of issues, but the two are complementary, and readers may find both of interest.]

[Update 8/15/2019: It seems appropriate to republish this post originally published about 40 days after I started blogging. I have made a few small changes and inserted a few comments to reflect my improved understanding of certain concepts like “sterilization” that I was uncritically accepting. I actually have learned a thing or two in the eight plus years that I’ve been blogging. I am grateful to all my readers — both those who agreed and those who disagreed — for challenging me and inspiring me to keep thinking critically. It wasn’t easy, but we did survive August 15, 1971. Let’s hope we survive August 15, 2019.]

August 15, 1971 may not exactly be a day that will live in infamy, but it is hardly a day to celebrate 40 years later.  It was the day on which one of the most cynical Presidents in American history committed one of his most cynical acts:  violating solemn promises undertaken many times previously, both before and after his election as President, Richard Nixon declared a 90-day freeze on wages and prices.  Nixon also announced the closing of the gold window at the US Treasury, severing the last shred of a link between gold and the dollar.  Interestingly, the current (August 13th, 2011) Economist (Buttonwood column) and Forbes  (Charles Kadlec op-ed) and today’s Wall Street Journal (Lewis Lehrman op-ed) mark the anniversary with critical commentaries on Nixon’s action ruefully focusing on the baleful consequences of breaking the link to gold, while barely mentioning the 90-day freeze that became the prelude to  the comprehensive wage and price controls imposed after the freeze expired.

Of the two events, the wage and price freeze and subsequent controls had by far the more adverse consequences, the closing of the gold window merely ratifying the demise of a gold standard that had long since ceased to function as it had for much of the 19th and early 20th centuries. In contrast to the final break with gold, no economic necessity, or even a coherent economic argument on the merits, lay behind the decision to impose a wage and price freeze, notwithstanding the ex-post rationalizations offered by Nixon’s economic advisers, including such estimable figures as Herbert Stein, Paul McCracken, and George Shultz, who surely knew better, but somehow were persuaded to fall into line behind a policy of massive, breathtaking intervention into private market transactions.

The argument for closing the gold window was that the official gold peg of $35 an ounce was probably at least 10-20% below any realistic estimate of the true market value of gold at the time, making it impossible to reestablish the old parity as an economically meaningful price without imposing an intolerable deflation on the world economy.  An alternative response might have been to officially devalue the dollar to something like the market value of gold $40-42 an ounce.  But to have done so would merely have demonstrated that the official price of gold was a policy instrument subject to the whims of the US monetary authorities, undermining faith in the viability of a gold standard.  In the event, an attempt to patch together the Bretton Woods System (the Smithsonian Agreement of December 1971) based on an official $38 an ounce peg was made, but it quickly became obvious that a new monetary system based on any form of gold convertibility could no longer survive.

How did the $35 an ounce price become unsustainable barely 25 years after the Bretton Woods system was created? The problem that emerged within a few years of its inception was that the main trading partners of the US systematically kept their own currencies undervalued in terms of the dollar, promoting their exports while sterilizing the consequent dollar inflow, allowing neither sufficient domestic inflation nor sufficient exchange-rate appreciation to eliminate the overvaluation of their currencies against the dollar. [DG 8/15/19: “sterilization” is a misleading term, because it implies that persistent gold or dollar inflows just happen randomly; the persistent inflows occur only because they are induced by a persistently increased demand for reserves or an insufficient creation of cash.] After a burst of inflation during the Korean War, the Fed’s tight monetary policy and a persistently overvalued exchange rate kept US inflation low at the cost of sluggish growth and three recessions between 1953 and 1960. It was not until the Kennedy administration came into office on a pledge to get the country moving again that the Fed was pressured to loosen monetary policy, initiating the long boom of the 1960s some three years before the Kennedy tax cuts were posthumously enacted in 1964.

Monetary expansion by the Fed reduced the relative overvaluation of the dollar in terms of other currencies, but the increasing export of dollars left the $35 an ounce peg increasingly dependent on the willingness of foreign governments to hold dollars. However, President Charles de Gaulle of France, having overcome domestic opposition to his rule, felt secure enough to assert [his conception of] French interests against the US, resuming the traditional French policy of accumulating physical gold reserves rather than mere claims on gold physically held elsewhere. By 1967 the London gold pool, a central bank cartel acting to control the price of gold in the London gold market, was collapsing, as France withdrew from the cartel, demanding that gold be shipped to Paris from New York. In 1968, unable to hold down the market price of gold any longer, the US and other central banks let the gold price rise above the official price, but agreed to conduct official transactions among themselves at the official price of $35 an ounce. As market prices for gold, driven by US monetary expansion, inched steadily higher, the incentives for central banks to demand gold from the US at the official price became too strong to contain, so that the system was on the verge of collapse when Nixon acknowledged the inevitable and closed the gold window rather than allow depletion of US gold holdings.

Assertions that the Bretton Woods system could somehow have been saved simply ignore the economic reality that by 1971 the Bretton Woods System was broken beyond repair, or at least beyond any repair that could have been effected at a tolerable cost.

But Nixon clearly had another motivation in his August 15 announcement, less than 15 months before the next Presidential election. It was in effect the opening shot of his reelection campaign. Remembering all too well that he had lost the 1960 election to John Kennedy because the Fed had not provided enough monetary stimulus to cut short the 1960-61 recession, Nixon had appointed his long-time economic adviser, Arthur Burns, to replace William McChesney Martin as chairman of the Fed in 1970. A mild tightening of monetary policy in 1969, as inflation was rising above a 5% annual rate, had produced a recession in late 1969 and early 1970 without providing much relief from inflation. Burns eased policy enough to allow a mild recovery, but the economy seemed to be suffering the worst of both worlds — inflation still near 4 percent and unemployment at what then seemed an unacceptably high level of almost 6 percent. [For more on Burns and his deplorable role in all of this see this post.]

With an election looming ever closer on the horizon, Nixon in the summer of 1971 became consumed by the political imperative of speeding up the recovery.  Meanwhile a Democratic Congress, assuming that Nixon really did mean his promises never to impose wage and price controls to stop inflation, began clamoring for controls as the way to stop inflation without the pain of a recession, even authorizing the President to impose controls, a dare they never dreamed he would accept.  Arthur Burns, himself, perhaps unwittingly [I was being too kind], provided support for such a step by voicing frustration that inflation persisted in the face of a recession and high unemployment, suggesting that the old rules of economics were no longer operating as they once had.  He even offered vague support for what was then called an incomes policy, generally understood as an informal attempt to bring down inflation by announcing a target  for wage increases corresponding to productivity gains, thereby eliminating the need for businesses to raise prices to compensate for increased labor costs.  What such proposals usually ignored was the necessity for a monetary policy that would limit the growth of total spending sufficiently to limit the growth of wage incomes to the desired target. [On incomes policies and how they might work if they were properly understood see this post.]
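The arithmetic behind such proposals is straightforward: to a first approximation, labor cost per unit of output grows by the difference between wage growth and productivity growth. A back-of-the-envelope sketch, with purely illustrative growth rates:

```python
# A back-of-the-envelope sketch of the incomes-policy arithmetic: if wages
# grow no faster than productivity, labor cost per unit of output is flat,
# removing the cost-side pressure on prices. Growth rates (percent per
# year) are illustrative, and the formula is a first-order approximation.

def unit_labor_cost_growth(wage_growth: float, productivity_growth: float) -> float:
    return wage_growth - productivity_growth

print(unit_labor_cost_growth(wage_growth=3.0, productivity_growth=3.0))  # 0.0
print(unit_labor_cost_growth(wage_growth=7.0, productivity_growth=3.0))  # 4.0
```

As noted above, the sketch leaves out the essential requirement that the incomes-policy proposals themselves ignored: a monetary policy limiting the growth of total spending so that the announced wage target could actually hold.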

Having been persuaded that there was no acceptable alternative to closing the gold window — from Nixon’s perspective and from that of most conventional politicians, a painfully unpleasant admission of US weakness in the face of its enemies (all this was occurring at the height of the Vietnam War and the antiwar protests) – Nixon decided that he could now combine that decision, sugar-coated with an aggressive attack on international currency speculators and a protectionist 10% duty on imports into the United States, with the even more radical measure of a wage-price freeze to be followed by a longer-lasting program to control price increases, thereby snatching the most powerful and popular economic proposal of the Democrats right from under their noses.  Meanwhile, with the inflation threat neutralized, Arthur Burns could be pressured mercilessly to increase the rate of monetary expansion, ensuring that Nixon could stand for reelection in the middle of an economic boom.

But just as Nixon’s electoral triumph fell apart because of his Watergate fiasco, his economic success fell apart when an inflationary monetary policy combined with wage-and-price controls to produce increasing dislocations, shortages and inefficiencies, gradually sapping the strength of an economic recovery fueled by excess demand rather than increasing productivity. Because broad-based, as opposed to narrowly targeted, price controls tend to be more popular before they are imposed than after (as too many expectations about favorable regulatory treatment are disappointed), the vast majority of controls were allowed to lapse when the original grant of Congressional authority to control prices expired in April 1974.

Already by the summer of 1973, shortages of gasoline and other petroleum products were becoming commonplace, and shortages of heating oil and natural gas had been widely predicted for the winter of 1973-74.  But in October 1973 in the wake of the Yom Kippur War and the imposition of an Arab Oil Embargo against the United States and other Western countries sympathetic to Israel, the shortages turned into the first “Energy Crisis.”  A Democratic Congress and the Nixon Administration sprang into action, enacting special legislation to allow controls to be kept on petroleum products of all sorts together with emergency authority to authorize the government to allocate products in short supply.

It still amazes me that almost all the dislocations manifested after the embargo and the associated energy crisis were attributed to excessive consumption of oil and petroleum products in general or to excessive dependence on imports, as if any of the shortages and dislocations would have occurred in the absence of price controls.  And hardly anyone realizes that price controls tend to drive the prices of whatever portion of the supply is exempt from control even higher than they would have risen in the absence of any controls.
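The mechanism can be seen in a stylized sketch in which a ceiling applies to one portion of supply while the rest is exempt; the linear forms and all the numbers are purely illustrative, not a model of the actual oil regulations:

```python
# A stylized sketch of a two-tier price control: a ceiling on "old" supply
# shrinks the quantity its producers offer, so marginal demand spills onto
# the exempt "new" supply, whose price rises above the no-control level.
# Functional forms and coefficients are purely illustrative.

def demand(p): return 100 - 2 * p          # quantity demanded at price p
def old_supply(p): return 10 + p           # controlled producers' supply
def new_supply(p): return 5 + p            # exempt producers' supply

def clearing_price(total_supply) -> float:
    """Bisect for the price at which demand equals total supply."""
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if demand(mid) > total_supply(mid) else (lo, mid)
    return (lo + hi) / 2

p_star = clearing_price(lambda p: old_supply(p) + new_supply(p))   # no controls
CEILING = 15.0  # below p_star, applied to "old" supply only
p_exempt = clearing_price(lambda p: old_supply(CEILING) + new_supply(p))

print(f"uniform price, no controls: {p_star:.2f}")    # 21.25
print(f"exempt-tier price, ceiling: {p_exempt:.2f}")  # 23.33 > 21.25
```

In the sketch, the ceiling reduces the controlled producers’ output, and the demand they no longer serve bids up the price of the exempt portion above what the uniform price would have been without any controls.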

About ten years after the first energy crisis, I published a book in which I tried to explain how all the dislocations that emerged from the Arab oil embargo and the 1978-79 crisis following the Iranian Revolution were attributable to the price controls first imposed by Richard Nixon on August 15, 1971.  But the connection between the energy crisis in all its ramifications and the Nixonian price controls unfortunately remains largely overlooked and ignored to this day.  If there is reason to reflect on what happened forty years ago on this date, it surely is for that reason and not because Nixon pulled the plug on a gold standard that had not been functioning for years.

General Equilibrium, Partial Equilibrium and Costs

Neoclassical economics is now bifurcated between Marshallian partial-equilibrium and Walrasian general-equilibrium analyses. With the apparent inability of neoclassical theory to explain the coordination failure of the Great Depression, J. M. Keynes proposed an alternative paradigm to explain the involuntary unemployment of the 1930s. But within two decades, Keynes’s contribution was subsumed under what became known as the neoclassical synthesis of the Keynesian and Walrasian theories (about which I have written frequently, e.g., here and here). Because the Keynesian side of the synthesis lacked microfoundations that could be reconciled with the assumptions of Walrasian general-equilibrium theory, the neoclassical synthesis eventually collapsed.

But Walrasian general-equilibrium theory provides no plausible, much less axiomatic, account of how general equilibrium is, or could be, achieved. Even the imaginary tatonnement process lacks an algorithm that guarantees that a general-equilibrium solution, if it exists, would be found. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian partial-equilibrium analysis, with market prices adjusting to equilibrate supply and demand.
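To see what the tatonnement story does and does not deliver, here is a minimal sketch of the process in a deliberately well-behaved two-good exchange economy; the preferences, endowments, and adjustment step are purely illustrative, and nothing in the procedure guarantees convergence in general economies (Scarf’s counterexamples are well known):

```python
# A minimal sketch of tatonnement in a two-good exchange economy with two
# Cobb-Douglas traders. The auctioneer announces a relative price, computes
# excess demand, and revises the price in the direction of the excess
# demand; no trades occur until excess demand vanishes. All parameters are
# illustrative.

def excess_demand_good1(p1: float, p2: float) -> float:
    # Trader A owns 1 unit of good 1 and spends 60% of wealth on good 1;
    # trader B owns 1 unit of good 2 and spends 30% of wealth on good 1.
    demand = 0.6 * (1 * p1) / p1 + 0.3 * (1 * p2) / p1
    return demand - 1.0  # one unit of good 1 exists in total

p1, p2 = 1.0, 1.0  # good 2 serves as numeraire
for _ in range(100):
    z = excess_demand_good1(p1, p2)
    if abs(z) < 1e-6:
        break
    p1 += 0.5 * z  # the auctioneer's provisional price revision; no trading yet

print(f"relative price of good 1 after tatonnement: {p1:.4f}")  # ~0.75
```

In this gross-substitutes example the iteration happens to converge; in Scarf-type economies the same procedure cycles forever, which is precisely the point of the objection.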

Yet modern macroeconomics, despite its explicit Walrasian assumptions, implicitly relies on the Marshallian intuition that the fundamentals of general equilibrium (prices and costs) are known to agents who, except for random disturbances, continuously form rational expectations of market-clearing equilibrium prices in all markets.

I’ve written many earlier posts (e.g., here and here) contesting, in one way or another, the notion that all macroeconomic theories must be founded on first principles (i.e., microeconomic axioms about optimizing individuals). Any macroeconomic theory not appropriately founded on the axioms of individual optimization by consumers and producers is now dismissed as scientifically defective and unworthy of attention by serious scientific practitioners of macroeconomics.

When contesting the presumed necessity for macroeconomics to be microeconomically founded, I’ve often used Marshall’s partial-equilibrium method as a point of reference. Though derived from underlying preference functions that are independent of prices, the demand curves of partial-equilibrium analysis presume that all product prices, except the price of the product under analysis, are held constant. Similarly, the supply curves are derived from individual firm marginal-cost curves whose geometric position or algebraic description depends critically on the prices of raw materials and factors of production used in the production process. But neither the prices of alternative products to be purchased by consumers nor the prices of raw materials and factors of production are given independently of the general-equilibrium solution of the whole system.

Thus, partial-equilibrium analysis, to be analytically defensible, requires a ceteris-paribus proviso. But to be analytically tenable, that proviso must posit an initial position of general equilibrium. Unless the analysis starts from a state of general equilibrium, the assumption that all prices but one remain constant can’t be maintained, the constancy of disequilibrium prices being a nonsensical assumption.

The ceteris-paribus proviso also entails an assumption about the market under analysis: either the market itself, or the disturbance to which it’s subject, must be so small that any change in the equilibrium price of the product in question has de minimis repercussions on the prices of every other product and of every input and factor of production used in producing that product. Thus, the validity of partial-equilibrium analysis depends on the presumption that the unique and locally stable general equilibrium is approximately undisturbed by whatever changes result from the posited change in the single market being analyzed. But that presumption is not so self-evidently plausible that our reliance on it to make empirical predictions is always, or even usually, justified.

Perhaps the best argument for taking partial-equilibrium analysis seriously is that the analysis identifies certain deep structural tendencies that, at least under “normal” conditions of moderate macroeconomic stability (i.e., moderate unemployment and reasonable price stability), will usually be observable despite the disturbing influences that are subsumed under the ceteris-paribus proviso. That assumption — an assumption of relative ignorance about the nature of the disturbances that are assumed to be constant — posits that those disturbances are more or less random, and as likely to cause errors in one direction as another. Consequently, the predictions of partial-equilibrium analysis can be assumed to be statistically, though not invariably, correct.

Of course, the more interconnected a given market is with other markets in the economy, and the greater its size relative to the total economy, the less confidence we can have that the implications of partial-equilibrium analysis will be corroborated by empirical investigation.

Despite its frequent unsuitability, economists and commentators are often willing to deploy partial-equilibrium analysis in offering policy advice even when the necessary ceteris-paribus proviso of partial-equilibrium analysis cannot be plausibly upheld. For example, two of the leading theories of the determination of the rate of interest are the loanable-funds doctrine and the Keynesian liquidity-preference theory. Both these theories of the rate of interest suppose that the rate of interest is determined in a single market — either for loanable funds or for cash balances — and that the rate of interest adjusts to equilibrate one or the other of those two markets. But the rate of interest is an economy-wide price whose determination is an intertemporal-general-equilibrium phenomenon that cannot be reduced, as the loanable-funds and liquidity preference theories try to do, to the analysis of a single market.

Similarly, partial-equilibrium analysis of the supply of, and the demand for, labor has been used of late to predict changes in wages from immigration and to advocate for changes in immigration policy, while, in an earlier era, it was used to recommend wage reductions as a remedy for persistently high aggregate unemployment. In the General Theory, Keynes criticized those using a naïve version of the partial-equilibrium method to recommend curing high unemployment by cutting wage rates, correctly observing that full employment required the satisfaction of certain macroeconomic equilibrium conditions that would not necessarily be satisfied by cutting wages.

However, in the very same volume, Keynes argued that the rate of interest is determined exclusively by the relationship between the quantity of money and the demand to hold money, ignoring that the rate of interest is an intertemporal relationship between current and expected future prices, an insight explained earlier by Irving Fisher, and one that Keynes himself had expertly deployed in his Tract on Monetary Reform and even in chapter 17 of the General Theory itself.

Evidently, the allure of supply-demand analysis can sometimes be too powerful for well-trained economists to resist, even when they actually know that it ought to be resisted.

A further point also requires attention: the conditions necessary for partial-equilibrium analysis to be valid are never really satisfied; firms don’t know the costs that determine the optimal rate of production when they actually must settle on a plan of how much to produce, how much raw material to buy, and how much labor and other factors of production to employ. Marshall, the originator of partial-equilibrium analysis, analogized supply and demand to the two blades of a pair of scissors, acting jointly to achieve an intended result.

But Marshall erred in thinking that supply (i.e., cost) is an independent determinant of price, because the equality of costs and prices is a characteristic of general equilibrium. It can be applied to partial-equilibrium analysis only under the ceteris-paribus proviso that situates partial-equilibrium analysis in a pre-existing general equilibrium of the entire economy. It is only in a general-equilibrium state that the cost incurred by a firm in producing its output represents the value of the foregone output that could have been produced had the firm’s output been reduced. Only if the analyzed market is so small that changes in the output of the firms in that market do not affect the prices of the inputs used to produce that output can definite marginal-cost curves be drawn or algebraically specified.

Unless general equilibrium obtains, prices need not equal costs, as measured by the quantities and prices of the inputs used by firms to produce any product. Partial-equilibrium analysis is possible only if carried out in the context of general equilibrium. Cost cannot be an independent determinant of prices, because cost is itself determined simultaneously along with all other prices.

But even aside from the reasons why partial-equilibrium analysis presumes that all prices, but the price in the single market being analyzed, are general-equilibrium prices, there’s another, even more problematic, assumption underlying partial-equilibrium analysis: that producers actually know the prices that they will pay for the inputs and resources to be used in producing their outputs. The cost curves of the standard economic analysis of the firm, from which the supply curves of partial-equilibrium analysis are derived, presume that the prices of all inputs and factors of production correspond to those that are consistent with general equilibrium. But general-equilibrium prices are never known by anyone except the hypothetical agents in a general-equilibrium model with complete markets, or by agents endowed with perfect foresight (aka rational expectations in the strict sense of that misunderstood term).

At bottom, Marshallian partial-equilibrium analysis is comparative statics: a comparison of two alternative (hypothetical) equilibria distinguished by some difference in the parameters characterizing the two equilibria. By comparing the equilibria corresponding to the different parameter values, the analyst can infer the effect (at least directionally) of a parameter change.
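In a single linear market the exercise is trivial to carry out, as in this minimal sketch (all coefficients are illustrative):

```python
# A minimal sketch of comparative statics: solve a linear market for its
# equilibrium under two values of the demand intercept and compare the two
# hypothetical equilibria. All coefficients are illustrative.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Demand q = a - b*p against supply q = c + d*p."""
    p = (a - c) / (b + d)
    return p, a - b * p

p0, q0 = equilibrium(a=10, b=1, c=2, d=1)  # baseline parameters
p1, q1 = equilibrium(a=12, b=1, c=2, d=1)  # demand intercept raised

# The comparison yields the direction (and, within the model, the size) of
# the change, but says nothing about the real-time path between the states.
print(f"price {p0:.2f} -> {p1:.2f}; quantity {q0:.2f} -> {q1:.2f}")
```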

But comparative-statics analysis is subject to a serious limitation: comparing two alternative hypothetical equilibria is very different from making empirical predictions about the effects of an actual parameter change in real time.

Comparing two alternative equilibria corresponding to different values of a parameter may be suggestive of what could happen after a policy decision to change that parameter, but there are many reasons why the change implied by the comparative-statics exercise might not match or even approximate the actual change.

First, the initial state was almost certainly not an equilibrium state, so systemic changes already under way will be difficult, if not impossible, to disentangle from the effects of the parameter change implied by the comparative-statics exercise.

Second, even if the initial state was an equilibrium, the transition to a new equilibrium is never instantaneous. The transitional period therefore leads to changes that in turn induce further systemic changes that cause the new equilibrium toward which the system gravitates to differ from the final equilibrium of the comparative-statics exercise.

Third, each successive change in the final equilibrium toward which the system is gravitating leads to further changes that in turn keep changing the final equilibrium. There is no reason why these successive changes must converge on any final equilibrium end state. Nor is there any theoretical proof that the adjustment path leading from one equilibrium to another ever reaches an equilibrium end state. The gap between the comparative-statics exercise and the theory of adjustment in real time remains unbridged and may, even in principle, be unbridgeable.

Finally, without a complete system of forward and state-contingent markets, equilibrium requires not just that current prices converge to equilibrium prices; it requires that the expectations of all agents about future prices converge to equilibrium expectations of future prices. Unless agents’ expectations of future prices converge to their equilibrium values, an equilibrium may not even exist, let alone be approached or attained.

So the Marshallian assumption that producers know their costs of production and make production and pricing decisions based on that knowledge is both factually wrong and logically untenable. Nor do producers know what the demand curves for their products really look like, except in the extreme case in which suppliers take market prices to be parametrically determined. But even then, they make decisions based not on known prices, but on expected prices. Their expectations are constantly being tested against market information about actual prices, causing decision makers to affirm or revise their expectations in light of the constant flow of new information about prices and market conditions.
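A minimal sketch of that expectational feedback, with an adaptive updating rule standing in, purely for illustration, for however expectations are actually revised:

```python
# A minimal sketch of the expectational feedback described above: a
# price-taking producer plans on the basis of an expected price and
# revises that expectation as realized market prices come in. The
# updating rule and its weight are illustrative stand-ins.

def revise(expected: float, observed: float, weight: float = 0.3) -> float:
    """Move the expectation part of the way toward the latest observation."""
    return expected + weight * (observed - expected)

expected_price = 10.0
for observed_price in [12.0, 11.0, 13.0, 12.5]:  # realized market prices
    expected_price = revise(expected_price, observed_price)
    print(f"observed {observed_price:5.1f} -> expectation now {expected_price:5.2f}")
```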

I don’t reject partial-equilibrium analysis, but I do call attention to its limitations, and to its unsuitability as a supposedly essential foundation for macroeconomic analysis, especially inasmuch as microeconomic analysis, aka partial-equilibrium analysis, is utterly dependent on the uneasy macrofoundation of general-equilibrium theory. The intuition of Marshallian partial equilibrium cannot fill the gap, long ago noted by Kenneth Arrow, in the neoclassical theory of equilibrium price adjustment.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
