Archive for the 'Keynes' Category

General Equilibrium, Partial Equilibrium and Costs

Neoclassical economics is now bifurcated between Marshallian partial-equilibrium and Walrasian general-equilibrium analyses. With the apparent inability of neoclassical theory to explain the coordination failure of the Great Depression, J. M. Keynes proposed an alternative paradigm to explain the involuntary unemployment of the 1930s. But within two decades, Keynes’s contribution was subsumed under what became known as the neoclassical synthesis of the Keynesian and Walrasian theories (about which I have written frequently, e.g., here and here). Because Keynesian theory lacked microfoundations that could be reconciled with the assumptions of Walrasian general-equilibrium theory, the neoclassical synthesis eventually collapsed under the charge that Keynesian theory rested on inadequate microfoundations.

But Walrasian general-equilibrium theory provides no plausible, much less axiomatic, account of how general equilibrium is, or could be, achieved. Even the imaginary tatonnement process lacks an algorithm that guarantees that a general-equilibrium solution, if it exists, would be found. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian partial-equilibrium analysis, with market prices adjusting to equilibrate supply and demand.
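The tatonnement rule can be made concrete with a minimal sketch, all names and functional forms being my illustrative assumptions rather than anything in Walras: each price is adjusted in proportion to excess demand, yet nothing in the rule itself guarantees that a general-equilibrium price vector, even if one exists, will be found.

```python
# Illustrative sketch (hypothetical economy, not from the post): the
# tatonnement rule raises a price when demand exceeds supply and lowers
# it otherwise. Convergence here is a feature of this well-behaved
# example, not of the rule in general.

def tatonnement(excess_demand, p, step=0.1, tol=1e-8, max_iters=10_000):
    """Iterate p_i <- p_i + step * z_i(p), with good 1 as numeraire."""
    for _ in range(max_iters):
        z = excess_demand(p)
        if max(abs(zi) for zi in z) < tol:
            return p, True           # markets clear (within tolerance)
        p = [max(pi + step * zi, 1e-12) for pi, zi in zip(p, z)]
        p = [pi / p[0] for pi in p]  # renormalize on the numeraire
    return p, False                  # no convergence within the budget

# A hypothetical two-good exchange economy: trader A owns one unit of
# good 1, trader B one unit of good 2, and each spends half of income
# (the value of the trader's endowment) on each good.
def z(p):
    p1, p2 = p
    demand1 = 0.5 * p1 / p1 + 0.5 * p2 / p1   # total demand for good 1
    demand2 = 0.5 * p1 / p2 + 0.5 * p2 / p2   # total demand for good 2
    return [demand1 - 1.0, demand2 - 1.0]     # excess over unit endowments

prices, converged = tatonnement(z, [1.0, 2.0])
```

In this Cobb-Douglas toy economy the iteration happens to settle at relative prices (1, 1); Scarf-type examples show that the same rule need not converge at all, which is the point of the paragraph above.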

Yet modern macroeconomics, despite its explicit Walrasian assumptions, implicitly relies on the Marshallian intuition that the fundamentals of general equilibrium, prices and costs, are known to agents who, except for random disturbances, continuously form rational expectations of market-clearing equilibrium prices in all markets.

I’ve written many earlier posts (e.g., here and here) contesting, in one way or another, the notion that all macroeconomic theories must be founded on first principles (i.e., microeconomic axioms about optimizing individuals). Any macroeconomic theory not appropriately founded on the axioms of individual optimization by consumers and producers is now dismissed as scientifically defective and unworthy of attention by serious scientific practitioners of macroeconomics.

When contesting the presumed necessity for macroeconomics to be microeconomically founded, I’ve often used Marshall’s partial-equilibrium method as a point of reference. Though derived from underlying preference functions that are independent of prices, the demand curves of partial-equilibrium analysis presume that all product prices, except the price of the product under analysis, are held constant. Similarly, the supply curves are derived from individual firm marginal-cost curves whose geometric position or algebraic description depends critically on the prices of raw materials and factors of production used in the production process. But neither the prices of alternative products to be purchased by consumers nor the prices of raw materials and factors of production are given independently of the general-equilibrium solution of the whole system.

Thus, to be analytically defensible, partial-equilibrium analysis requires a ceteris-paribus proviso, and to be tenable, that proviso must posit an initial position of general equilibrium. Unless the analysis starts from a state of general equilibrium, the assumption that all prices but one remain constant can’t be maintained, the constancy of disequilibrium prices being a nonsensical assumption.

The ceteris-paribus proviso also entails an assumption about the market under analysis; either the market itself, or the disturbance to which it’s subject, must be so small that any change in the equilibrium price of the product in question has de minimis repercussions on the prices of every other product and of every input and factor of production used in producing that product. Thus, the validity of partial-equilibrium analysis depends on the presumption that the unique and locally stable general equilibrium is approximately undisturbed by whatever changes result from the posited change in the single market being analyzed. But that presumption is not so self-evidently plausible that our reliance on it to make empirical predictions is always, or even usually, justified.

Perhaps the best argument for taking partial-equilibrium analysis seriously is that the analysis identifies certain deep structural tendencies that, at least under “normal” conditions of moderate macroeconomic stability (i.e., moderate unemployment and reasonable price stability), will usually be observable despite the disturbing influences that are subsumed under the ceteris-paribus proviso. That assumption — an assumption of relative ignorance about the nature of the disturbances that are assumed to be constant — posits that those disturbances are more or less random, and as likely to cause errors in one direction as another. Consequently, the predictions of partial-equilibrium analysis can be assumed to be statistically, though not invariably, correct.

Of course, the more interconnected a given market is with other markets in the economy, and the greater its size relative to the total economy, the less confidence we can have that the implications of partial-equilibrium analysis will be corroborated by empirical investigation.

Despite its frequent unsuitability, partial-equilibrium analysis is often deployed by economists and commentators offering policy advice even when its necessary ceteris-paribus proviso cannot be plausibly upheld. For example, two of the leading theories of the determination of the rate of interest are the loanable-funds doctrine and the Keynesian liquidity-preference theory. Both theories suppose that the rate of interest is determined in a single market, either for loanable funds or for cash balances, and that the rate of interest adjusts to equilibrate that market. But the rate of interest is an economy-wide price whose determination is an intertemporal-general-equilibrium phenomenon that cannot be reduced, as the loanable-funds and liquidity-preference theories try to do, to the analysis of a single market.

Similarly, partial-equilibrium analysis of the supply of, and the demand for, labor has been used of late to predict changes in wages from immigration and to advocate for changes in immigration policy, while, in an earlier era, it was used to recommend wage reductions as a remedy for persistently high aggregate unemployment. In the General Theory, Keynes criticized those using a naïve version of the partial-equilibrium method to recommend curing high unemployment by cutting wage rates, correctly observing that full employment required the satisfaction of certain macroeconomic equilibrium conditions that would not necessarily be satisfied by cutting wages.

However, in the very same volume, Keynes argued that the rate of interest is determined exclusively by the relationship between the quantity of money and the demand to hold money, ignoring that the rate of interest is an intertemporal relationship between current and expected future prices, an insight earlier explained by Irving Fisher that Keynes himself had expertly deployed in his Tract on Monetary Reform and in chapter 17 of the General Theory itself.

Evidently, the allure of supply-demand analysis can sometimes be too powerful for even well-trained economists to resist, even when they themselves know that it ought to be resisted.

A further point also requires attention: the conditions necessary for partial-equilibrium analysis to be valid are never really satisfied; firms don’t know the costs that determine the optimal rate of production when they actually must settle on a plan of how much to produce, how much raw material to buy, and how much labor and other factors of production to employ. Marshall, the originator of partial-equilibrium analysis, analogized supply and demand to the two blades of a scissors acting jointly to achieve an intended result.

But Marshall erred in thinking that supply (i.e., cost) is an independent determinant of price, because the equality of costs and prices is a characteristic of general equilibrium. It can be applied to partial-equilibrium analysis only under the ceteris-paribus proviso that situates partial-equilibrium analysis in a pre-existing general equilibrium of the entire economy. It is only in a general-equilibrium state that the cost incurred by a firm in producing its output represents the value of the foregone output that could have been produced had the firm’s output been reduced. Only if the analyzed market is so small that changes in how much the firms in that market produce do not affect the prices of the inputs used to produce that output can definite marginal-cost curves be drawn or algebraically specified.

Unless general equilibrium obtains, prices need not equal costs, as measured by the quantities and prices of inputs used by firms to produce any product. Partial-equilibrium analysis is possible only if carried out in the context of general equilibrium. Cost cannot be an independent determinant of prices, because cost is itself determined simultaneously along with all other prices.

But even aside from the reasons why partial-equilibrium analysis presumes that all prices but the price in the single market being analyzed are general-equilibrium prices, there’s another, even more problematic, assumption underlying partial-equilibrium analysis: that producers actually know the prices that they will pay for the inputs and resources to be used in producing their outputs. The cost curves of the standard economic analysis of the firm, from which the supply curves of partial-equilibrium analysis are derived, presume that the prices of all inputs and factors of production correspond to those that are consistent with general equilibrium. But general-equilibrium prices are never known by anyone except the hypothetical agents in a general-equilibrium model with complete markets, or by agents endowed with perfect foresight (aka rational expectations in the strict sense of that misunderstood term).

At bottom, Marshallian partial-equilibrium analysis is comparative statics: a comparison of two alternative (hypothetical) equilibria distinguished by some difference in the parameters characterizing the two equilibria. By comparing the equilibria corresponding to the different parameter values, the analyst can infer the effect (at least directionally) of a parameter change.
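The comparative-statics exercise can be illustrated with a toy linear market, all numbers being hypothetical: solve for the equilibrium under two parameter values and compare the two hypothetical equilibria.

```python
# Hypothetical linear market: demand q = a - b*p, supply q = c + d*p.
# Comparative statics compares the equilibria implied by two values of
# the demand intercept a; it says nothing about the adjustment path
# between the two equilibria.

def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)    # price at which demand equals supply
    return p, a - b * p      # equilibrium price and quantity

p0, q0 = equilibrium(a=10.0, b=1.0, c=2.0, d=1.0)  # initial parameters
p1, q1 = equilibrium(a=12.0, b=1.0, c=2.0, d=1.0)  # demand intercept shifts up

# The exercise yields the directional inference: an outward demand
# shift raises both the equilibrium price and the equilibrium quantity.
assert p1 > p0 and q1 > q0
```

All the exercise delivers is the comparison of the two hypothetical end states; whether and how an actual economy would travel from one to the other is exactly what it leaves out.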

But comparative-statics analysis is subject to a serious limitation: comparing two alternative hypothetical equilibria is very different from making empirical predictions about the effects of an actual parameter change in real time.

Comparing two alternative equilibria corresponding to different values of a parameter may be suggestive of what could happen after a policy decision to change that parameter, but there are many reasons why the change implied by the comparative-statics exercise might not match or even approximate the actual change.

First, the initial state was almost certainly not an equilibrium state, so systemic changes will be difficult, if not impossible, to disentangle from the effect of the parameter change implied by the comparative-statics exercise.

Second, even if the initial state was an equilibrium, the transition to a new equilibrium is never instantaneous. The transitional period itself produces changes that in turn induce further systemic changes, causing the new equilibrium toward which the system gravitates to differ from the final equilibrium of the comparative-statics exercise.

Third, each successive change in the final equilibrium toward which the system is gravitating leads to further changes that in turn keep changing the final equilibrium. There is no reason why these successive changes should converge on any final equilibrium end state, nor is there any theoretical proof that the adjustment path leading from one equilibrium to another ever reaches an equilibrium end state. The gap between the comparative-statics exercise and the theory of adjustment in real time remains unbridged and may, even in principle, be unbridgeable.

Finally, without a complete system of forward and state-contingent markets, equilibrium requires not just that current prices converge to equilibrium prices; it requires that the expectations of all agents about future prices converge to equilibrium expectations of future prices. Unless agents’ expectations of future prices converge to their equilibrium values, an equilibrium may not even exist, let alone be approached or attained.

So the Marshallian assumption that producers know their costs of production and make production and pricing decisions based on that knowledge is both factually wrong and logically untenable. Nor do producers know what the demand curves for their products really look like, except in the extreme case in which suppliers take market prices to be parametrically determined. But even then, they make decisions based not on known prices, but on expected prices. Their expectations are constantly being tested against market information about actual prices, information that causes decision makers to affirm or revise their expectations in light of the constant flow of new information about prices and market conditions.

I don’t reject partial-equilibrium analysis, but I do call attention to its limitations, and to its unsuitability as a supposedly essential foundation for macroeconomic analysis, especially inasmuch as microeconomic analysis, aka partial-equilibrium analysis, is utterly dependent on the uneasy macrofoundation of general-equilibrium theory. The intuition of Marshallian partial equilibrium cannot fill the gap, long ago noted by Kenneth Arrow, in the neoclassical theory of equilibrium price adjustment.

Krugman on Mr. Keynes and the Moderns

UPDATE: Re-upping this slightly revised post from July 11, 2011

Paul Krugman recently gave a lecture, “Mr. Keynes and the Moderns” (a play on the title of the most influential article ever written about The General Theory, “Mr. Keynes and the Classics,” by another Nobel laureate, J. R. Hicks), at a conference in Cambridge, England, commemorating the publication of Keynes’s General Theory 75 years ago. Scott Sumner and Nick Rowe, among others, have already commented on his lecture. Coincidentally, in my previous posting, I discussed the views of Sumner and Krugman on the zero-interest lower bound, a topic that figures heavily in Krugman’s discussion of Keynes and his relevance for our current difficulties. (I note in passing that Krugman credits Brad Delong for applying the term “Little Depression” to those difficulties, a term that I thought I had invented, but, oh well, I am happy to share the credit with Brad.)

In my earlier posting, I mentioned that Keynes’s slightly older colleague A. C. Pigou responded to the zero-interest lower bound in his review of The General Theory. In a way, the response enhanced Pigou’s reputation, attaching his name to one of the most famous “effects” in the history of economics, but it made no dent in the Keynesian Revolution. I also referred to “the layers upon layers of interesting personal and historical dynamics lying beneath the surface of Pigou’s review of Keynes.” One large element of those dynamics was that Keynes chose as the main target of his immense rhetorical powers and polemical invective not Hayek or Robbins, not French devotees of the gold standard, not American laissez-faire ideologues, but Pigou, a left-of-center social reformer who in the early 1930s had co-authored with Keynes a famous letter advocating increased public-works spending to combat unemployment.  The first paragraph of Pigou’s review reveals just how deeply Keynes’s onslaught had wounded Pigou.

When in 1919, he wrote The Economic Consequences of the Peace, Mr. Keynes did a good day’s work for the world, in helping it back towards sanity. But he did a bad day’s work for himself as an economist. For he discovered then, and his sub-conscious mind has not been able to forget since, that the best way to win attention for one’s own ideas is to present them in a matrix of sarcastic comment upon other people. This method has long been a routine one among political pamphleteers. It is less appropriate, and fortunately less common, in scientific discussion.  Einstein actually did for Physics what Mr. Keynes believes himself to have done for Economics. He developed a far-reaching generalization, under which Newton’s results can be subsumed as a special case. But he did not, in announcing his discovery, insinuate, through carefully barbed sentences, that Newton and those who had hitherto followed his lead were a gang of incompetent bunglers. The example is illustrious: but Mr. Keynes has not followed it. The general tone de haut en bas and the patronage extended to his old master Marshall are particularly to be regretted. It is not by this manner of writing that his desire to convince his fellow economists is best promoted.

Krugman acknowledges Keynes’s shady scholarship (“I know that there’s dispute about whether Keynes was fair in characterizing the classical economists in this way”), only to absolve him of blame. He then uses Keynes’s example to attack “modern economists” who deny that a failure of aggregate demand can cause mass unemployment, offering up John Cochrane and Niall Ferguson as examples, even though Ferguson is a historian, not an economist.

Krugman also addresses Robert Barro’s assertion that Keynes’s explanation for high unemployment was that wages and prices were stuck at levels too high to allow full employment, a problem easily solvable, in Barro’s view, by monetary expansion. Although plainly annoyed by Barro’s attempt to trivialize Keynes’s contribution, Krugman never addresses the point squarely, preferring instead to justify Keynes’s frustration with those (conveniently nameless) “classical economists.”

Keynes’s critique of the classical economists was that they had failed to grasp how everything changes when you allow for the fact that output may be demand-constrained.

Not so, as I pointed out in my first post. Frederick Lavington, an even more orthodox disciple than Pigou of Marshall, had no trouble understanding that “the inactivity of all is the cause of the inactivity of each.” It was Keynes who failed to see that the failure of demand was equally a failure of supply.

They mistook accounting identities for causal relationships, believing in particular that because spending must equal income, supply creates its own demand and desired savings are automatically invested.

Supply does create its own demand when economic agents succeed in executing their plans to supply; it is when, owing to their incorrect and inconsistent expectations about future prices, economic agents fail to execute their plans to supply, that both supply and demand start to contract. Lavington understood that; Pigou understood that. Keynes understood it, too, but believing that his new way of understanding how contractions are caused was superior to that of his predecessors, he felt justified in misrepresenting their views, and attributing to them a caricature of Say’s Law that they would never have taken seriously.

And to praise Keynes for understanding the difference between accounting identities and causal relationships that befuddled his predecessors is almost perverse, as Keynes’s notorious confusion about whether the equality of savings and investment is an equilibrium condition or an accounting identity was pointed out by Dennis Robertson, Ralph Hawtrey and Gottfried Haberler within a year after The General Theory was published. To quote Robertson:

(Mr. Keynes’s critics) have merely maintained that he has so framed his definition that Amount Saved and Amount Invested are identical; that it therefore makes no sense even to inquire what the force is which “ensures equality” between them; and that since the identity holds whether money income is constant or changing, and, if it is changing, whether real income is changing proportionately, or not at all, this way of putting things does not seem to be a very suitable instrument for the analysis of economic change.

It just so happens that in 1925, Keynes, in one of his greatest pieces of sustained, and almost crushing sarcasm, The Economic Consequences of Mr. Churchill, offered an explanation of high unemployment exactly the same as that attributed to Keynes by Barro. Churchill’s decision to restore the convertibility of sterling to gold at the prewar parity meant that a further deflation of at least 10 percent in wages and prices would be necessary to restore equilibrium.  Keynes felt that the human cost of that deflation would be intolerable, and held Churchill responsible for it.

Of course Keynes in 1925 was not yet the Keynes of The General Theory. But what historical facts of the 10 years following Britain’s restoration of the gold standard in 1925 at the prewar parity cannot be explained with the theoretical resources available in 1925? The deflation that began in England in 1925 had been predicted by Keynes. The even worse deflation that began in 1929 had been predicted by Ralph Hawtrey and Gustav Cassel soon after World War I ended, should a way not be found to limit the demand for gold by countries rejoining the gold standard in the aftermath of the war. The United States, holding 40 percent of the world’s monetary gold reserves, might have accommodated that demand by allowing some of its reserves to be exported. But obsession with breaking a supposed stock-market bubble in 1928-29 led the Fed to tighten its policy even as the international demand for gold was increasing rapidly, as Germany, France and many other countries went back on the gold standard, producing the international credit crisis and deflation of 1929-31. Recovery came not from Keynesian policies, but from abandoning the gold standard, thereby eliminating the deflationary pressure implicit in a rapidly rising demand for gold with a more or less fixed total supply.

Keynesian stories about liquidity traps and Monetarist stories about bank failures are epiphenomena obscuring rather than illuminating the true picture of what was happening.  The story of the Little Depression is similar in many ways, except the source of monetary tightness was not the gold standard, but a monetary regime that focused attention on rising price inflation in 2008 when the appropriate indicator, wage inflation, had already started to decline.

Krugman and Sumner on the Zero-Interest Lower Bound: Some History of Thought

UPDATE: Re-upping my post from July 8, 2011

I indicated in my first posting on Tuesday that I was going to comment on some recent comparisons between the current anemic recovery and earlier more robust recoveries since World War II. The comparison that I want to perform involves some simple econometrics, and it is taking longer than anticipated to iron out the little kinks that I keep finding. So I will have to put off that discussion a while longer. As a diversion, I will follow up on a point that Scott Sumner made in discussing Paul Krugman’s reasoning for having favored fiscal policy over monetary policy to lead us out of the recession.

Scott’s focus is on the factual question whether it is really true, as Krugman and Michael Woodford have claimed, that a monetary authority, like, say, the Bank of Japan, may simply be unable to create the inflation expectations necessary to achieve equilibrium, given the zero-interest-rate lower bound, when the equilibrium real interest rate is less than zero. Scott counters that a more plausible explanation for the inability of the Bank of Japan to escape from a liquidity trap is that its aversion to inflation is so well-known that it becomes rational for the public to expect that the Bank of Japan would not permit the inflation necessary for equilibrium.

It seems that a lot of people have trouble understanding the idea that there can be conditions in which inflation — or, to be more precise, expected inflation — is necessary for a recovery from a depression. We have become so used to thinking of inflation as a costly and disruptive aspect of economic life that the notion that inflation may be an integral element of an economic equilibrium goes very deeply against the grain of our intuition.

The theoretical background of this point actually goes back to A. C. Pigou (another famous Cambridge economist, Alfred Marshall’s successor) who, in his 1936 review of Keynes’s General Theory, referred to what he called Mr. Keynes’s vision of the day of judgment, namely, a situation in which, because of depressed entrepreneurial profit expectations or a high propensity to save, macro-equilibrium (the equality of savings and investment) would correspond to a level of income and output below the level consistent with full employment.

The “classical” or “orthodox” remedy to such a situation was to reduce the rate of interest, or, as the British say “Bank Rate” (as in “Magna Carta” with no definite article) at which the Bank of England lends to its customers (mainly banks).  But if entrepreneurs are so pessimistic, or households so determined to save rather than consume, an equilibrium corresponding to a level of income and output consistent with full employment could, in Keynes’s ghastly vision, only come about with a negative interest rate. Now a zero interest rate in economics is a little bit like the speed of light in physics; all kinds of crazy things start to happen if you posit a negative interest rate and it seems inconsistent with the assumptions of rational behavior to assume that people would lend for a negative interest when they could simply hold the money already in their pockets. That’s why Pigou’s metaphor was so powerful. There are layers upon layers of interesting personal and historical dynamics lying beneath the surface of Pigou’s review of Keynes, but I won’t pursue that tangent here, tempting though it would be to go in that direction.

The conclusion that Keynes drew from his model is the one that we all were taught in our first course in macro and that Paul Krugman holds close to his heart: the government can come to the rescue by increasing its spending on whatever, thereby increasing aggregate demand, raising income and output up to the level consistent with full employment. But Pigou, whose own policy recommendations were not much different from those of Keynes, felt that Keynes had left an important element out of the model in his discussion. As a matter of logic, which to Pigou was as important as, or more important than, policy, an economy confronting Keynes’s day of judgment would not forever be stuck in “underemployment equilibrium” just because the rate of interest could not fall to the (negative) level required for full employment.

Rather, Pigou insisted, at least in theory, though not necessarily in practice, deflation, resulting from unemployed workers bidding down wages to gain employment, would raise the real value of the money supply (fixed in nominal terms in Keynes’s model) thereby generating a windfall to holders of money, inducing them to increase consumption, raising aggregate demand and eventually restoring full employment.  Discussion of the theoretical validity and policy relevance of what came to be known as the Pigou effect (or, occasionally, as the Pigou-Haberler Effect, or even the Pigou-Haberler-Scitovsky effect) became a really big deal in macroeconomics in the 1940s and 1950s and was still being taught in the 1960s and 1970s.
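The mechanism Pigou described can be put in stylized form, with all numbers hypothetical: the nominal money stock is fixed, so a falling price level raises the real value of money balances, the windfall that Pigou argued would stimulate consumption.

```python
# Stylized Pigou effect (hypothetical numbers): with the nominal money
# stock M fixed, as in Keynes's model, deflation raises real balances
# M / P, a wealth gain to money holders.

M = 100.0                    # nominal money stock, fixed in the model

def real_balances(P):
    return M / P

# Deflation from P = 1.0 to P = 0.5 doubles the real value of money
# holdings; that windfall is the wealth gain behind the Pigou effect.
assert real_balances(0.5) == 2 * real_balances(1.0)
```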

What seems remarkable to me now about that whole episode is that the analysis simply left out the possibility that the zero-interest-rate lower bound becomes irrelevant if the expected rate of inflation exceeds the putative negative equilibrium real interest rate that would hypothetically generate a macro-equilibrium at a level of income and output consistent with full employment.
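The point can be restated with the Fisher relation, i = r + expected inflation, where the nominal rate i cannot fall below zero. The sketch below uses hypothetical rates of my own choosing, expressed in percent: when the equilibrium real rate is negative, the zero bound binds only if expected inflation is too low.

```python
# Zero-lower-bound arithmetic via the Fisher relation i = r + pi_e,
# with the nominal rate i floored at zero. All rates are hypothetical
# and in percent.

def nominal_rate(r_star, pi_e):
    """Nominal rate consistent with the equilibrium real rate, floored at zero."""
    return max(r_star + pi_e, 0.0)

def realized_real_rate(r_star, pi_e):
    """Real rate actually attainable given the floor on the nominal rate."""
    return nominal_rate(r_star, pi_e) - pi_e

r_star = -2.0   # hypothetical equilibrium real rate of -2 percent

# With 1 percent expected inflation the bound binds: the attainable
# real rate (-1 percent) stays above its equilibrium level, so the
# full-employment equilibrium is out of reach.
assert realized_real_rate(r_star, pi_e=1.0) > r_star

# With 3 percent expected inflation the nominal rate is 1 percent, the
# bound is irrelevant, and the equilibrium real rate is attainable.
assert realized_real_rate(r_star, pi_e=3.0) == r_star
```

In other words, once expected inflation exceeds the absolute value of the negative equilibrium real rate, the zero-interest-rate lower bound simply stops mattering, which is the possibility the Pigou-effect literature left out.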

If only Pigou had corrected the logic of Keynes’s model by positing an expected rate of inflation greater than the negative real interest rate rather than positing a process of deflation to increase the real value of the money stock, the course of history and the development of macroeconomics and monetary theory might have been very different.

One economist who did think about the expected rate of inflation as an equilibrating variable in a macroeconomic model was one of my teachers, the late, great Earl Thompson, who introduced the idea of an equilibrium rate of inflation in his remarkable unpublished paper, “A Reformulation of Macroeconomic Theory.” If inflation is an equilibrating variable, then it cannot make sense for monetary authorities to commit themselves to a single unvarying target for the rate of inflation. Under certain circumstances, macroeconomic equilibrium may be incompatible with a rate of inflation below some minimum level. Has it occurred to the inflation hawks on the FOMC and their supporters that the minimum rate of inflation consistent with equilibrium is above the 2 percent rate that the Fed has now set as its policy goal?

One final point, which I am still trying to work out more coherently, is that it really may not be appropriate to think of the real rate of interest and the expected rate of inflation as being determined independently of each other. They clearly interact. As I point out in my paper “The Fisher Effect Under Deflationary Expectations,” increasing the expected rate of inflation when the real rate of interest is very low or negative tends to increase not just the nominal rate, but the real rate as well, by generating the positive feedback effects on income and employment that result when a depressed economy starts to expand.

Welcome to Uneasy Money, aka the Hawtreyblog

UPDATE: I’m re-upping my introductory blog post, which I posted ten years ago today. It’s been a great run for me, and I hope for many of you, whose interest and responses have motivated me to keep it going. So thanks to all of you who have read and responded to my posts. I’m adding a few retrospective comments and making some slight revisions along the way. In addition to new posts, I will be re-upping some of my old posts that still seem to have relevance to the current state of our world.

What the world needs now, with apologies to the great Burt Bacharach and Hal David, is, well, another blog.  But inspired by the great Ralph Hawtrey and the near great Scott Sumner, I decided — just in time for Scott’s return to active blogging — to raise another voice on behalf of a monetary policy actively seeking to promote recovery from what I call the Little Depression, instead of the monetary policy we have now:  waiting for recovery to arrive on its own.  Just like the Great Depression, our Little Depression was caused mainly by overly tight money in an environment of over-indebtedness and financial fragility, and was then allowed to deepen and become entrenched by monetary authorities unwilling to commit themselves to a monetary expansion aimed at raising prices enough to make business expansion profitable.

That was the lesson of the Great Depression.  Unfortunately that lesson, for reasons too complicated to go into now, was never properly understood, because neither Keynesians nor Monetarists had a fully coherent understanding of what happened in the Great Depression.  Although Ralph Hawtrey — called by none other than Keynes “his grandparent in the paths of errancy,” and an early, but unacknowledged, progenitor of Chicago School Monetarism — had such an understanding,  Hawtrey’s contributions were overshadowed and largely ignored, because of often irrelevant and misguided polemics between Keynesians and Monetarists and Austrians.  One of my goals for this blog is to bring to light the many insights of this perhaps most underrated — though competition for that title is pretty stiff — economist of the twentieth century.  I have discussed Hawtrey’s contributions in my book on free banking and in a paper published years ago in Encounter and available here.  Patrick Deutscher has written a biography of Hawtrey.

What deters businesses from expanding output and employment in a depression is lack of demand; they fear that if they do expand, they won’t be able to sell the added output at prices high enough to cover their costs, winding up with redundant workers and having to engage in costly layoffs.  Thus, an expectation of low demand tends to be self-fulfilling.  But so is an expectation of rising prices, because the additional output and employment induced by expectations of rising prices will generate the demand that will validate the initial increase in output and employment, creating a virtuous cycle of rising income, expenditure, output, and employment.

The insight that “the inactivity of all is the cause of the inactivity of each” is hardly new.  It was not the discovery of Keynes or Keynesian economics; it is the 1922 formulation of Frederick Lavington, another great, but underrated, pre-Keynesian economist in the Cambridge tradition, who, in his modesty and self-effacement, would have been shocked and embarrassed to be credited with the slightest originality for that statement.  Indeed, Lavington’s dictum might even be understood as a restatement of Say’s Law, the bugbear of Keynes and object of his most withering scorn.  Keynesian economics skillfully repackaged the well-known and long-accepted idea that when an economy is operating with idle capacity and high unemployment, any increase in output tends to be self-reinforcing and cumulative, just as, on the way down, each reduction in output is self-reinforcing and cumulative.

But at least Keynesians get the point that, in a depression or deep recession, individual incentives may not be enough to induce a healthy expansion of output and employment. Aggregate demand can be too low for an expansion to get started on its own. Even though aggregate demand is nothing but the flip side of aggregate supply (as Say’s Law teaches), if resources are idle for whatever reason, perceived effective demand is deficient, diluting incentives to increase production so much that the potential output expansion does not materialize, because expected prices are too low for businesses to want to expand. But if businesses can be induced to expand output, more than likely, they will sell it, because (as Say’s Law teaches) supply usually does create its own demand.

[Comment after 10 years: In a comment, Rowe asked why I wrote that Say’s Law teaches that supply “usually” creates its own demand. At that time, I responded that I was just using “usually” as a weasel word. But I subsequently realized (and showed in a post last year) that the standard proofs of both Walras’s Law and Say’s Law are defective for economies with incomplete forward and state-contingent markets. We actually know less than we once thought we did!] 

Keynesians mistakenly denied that, by creating price-level expectations consistent with full employment, monetary policy could induce an expansion of output even in a depression. But at least they understood that the private economy can reach an impasse with price-level expectations too low to sustain full employment. Fiscal policy may play a role in remedying a mismatch between expectations and full employment, but fiscal policy can only be as effective as monetary policy allows it to be. Unfortunately, since the downturn of December 2007, monetary policy, except possibly during QE1 and QE2, has consistently erred on the side of uneasiness.

With some unfortunate exceptions, however, few Keynesians have actually argued against monetary easing. Rather, with some honorable exceptions, it has been conservatives who, by condemning a monetary policy designed to provide incentives conducive to business expansion, have helped to hobble the private-sector-led recovery that they profess to want. It is not my habit to attribute ill motives or bad faith to people with whom I disagree. One of the finest compliments ever paid to F. A. Hayek was by Joseph Schumpeter, who, in his review of The Road to Serfdom, chided Hayek for “politeness to a fault in hardly ever attributing to his opponents anything but intellectual error.” But it is a challenge to come up with a plausible explanation for right-wing opposition to monetary easing.

[Comment after 10 years: By 2011 when this post was written, right-wing bad faith had already become too obvious to ignore, but who could then have imagined where the willingness to resort to bad faith arguments without the slightest trace of compunction would lead them and lead us.] 

In condemning monetary easing, right-wing opponents claim to be following the good old conservative tradition of supporting sound money and resisting the inflationary proclivities of Democrats and liberals. But how can claims of principled opposition to inflation be taken seriously when inflation, by every measure, is at its lowest ebb since the 1950s and early 1960s? With prices today barely higher than they were three years ago before the crash, scare talk about currency debasement and future hyperinflation reminds me of Ralph Hawtrey’s famous remark that warnings that leaving the gold standard during the Great Depression would cause runaway inflation were like crying “fire, fire” in Noah’s flood.

The groundlessness of right-wing opposition to monetary easing becomes even plainer when one recalls the attacks on Paul Volcker during the first Reagan administration. In that episode, President Reagan and Volcker, previously appointed by Jimmy Carter to replace the feckless G. William Miller as Fed Chairman, agreed to make bringing double-digit inflation under control their top priority, whatever the short-term economic and political costs. Reagan, indeed, courageously endured a sharp decline in popularity before the first signs of a recovery became visible late in the summer of 1982, too late to save Reagan and the Republicans from a drubbing in the mid-term elections, despite the drop in inflation to 3-4 percent. By early 1983, with the recovery in full swing, the Fed, having abandoned its earlier attempt to impose strict Monetarist controls on monetary expansion, allowed the monetary aggregates to grow at unusually rapid rates.

However, in 1984 (a Presidential election year) after several consecutive quarters of GDP growth at annual rates above 7 percent, the Fed, fearing a resurgence of inflation, began limiting the rate of growth in the monetary aggregates. Reagan’s secretary of the Treasury, Donald Regan, as well as a variety of outside Administration supporters like Arthur Laffer, Larry Kudlow, and the editorial page of the Wall Street Journal, began to complain bitterly that the Fed, in its preoccupation with fighting inflation, was deliberately sabotaging the recovery. The argument against the Fed’s tightening of monetary policy in 1984 was not without merit. But regardless of the wisdom of the Fed tightening in 1984 (when inflation was significantly higher than it is now), holding up the 1983-84 Reagan recovery as the model for us to follow now, while excoriating Obama and Bernanke for driving inflation all the way up to 1 percent, supposedly leading to currency debauchment and hyperinflation, is just a bit rich. What, I wonder, would Hawtrey have said about that?

In my next posting I will look a little more closely at some recent comparisons between the current non-recovery and recoveries from previous recessions, especially that of 1983-84.

Involuntary Unemployment, the Mind-Body Problem, and Rubbernecking

The term involuntary unemployment was introduced by Keynes in the General Theory as the name he attached to the phenomenon of high cyclical unemployment during the downward phase of the business cycle. He didn’t necessarily restrict the term to unemployment at the trough of the business cycle, because he at least entertained the possibility of underemployment equilibrium, presumably to indicate that involuntary unemployment could be a long-lasting, even permanent, phenomenon, unless countered by deliberate policy measures.

Keynes provided an explicit definition of involuntary unemployment in the General Theory, a definition that is far from straightforward, but boils down to the following: if unemployment would not fall as a result of a cut in nominal wages, but would fall as a result of a cut in real wages brought about by an increase in the price level, then there is involuntary unemployment. Thus, Keynes explicitly excluded from his definition of involuntary unemployment the unemployment caused by minimum wages or labor-union monopoly power.

Keynes’s definition has always been controversial, because it implies that wage stickiness or rigidity is not the cause of unemployment. There have been at least two approaches to Keynes’s definition of involuntary unemployment that now characterize the views of mainstream macroeconomists.

The first is rationalization. Examples of such rationalization are search and matching theories of unemployment, implicit-contract theories, and efficiency-wage theories. The problem with such rationalizations is that they are rationalizations of why nominal wages are sticky or rigid. But Keynes’s definition of involuntary unemployment was based on the premise that reducing nominal wages does not reduce involuntary unemployment, so rationalizations of why nominal wages aren’t cut to reduce unemployment seem largely irrelevant to the concept of involuntary unemployment, or at least to Keynes’s understanding of it.

The second is denial. Perhaps the best example of such denial is provided by Robert Lucas. Here’s his take on involuntary unemployment.

The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesired capital loss. Nevertheless the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

R. E. Lucas, Studies in Business-Cycle Theory, p. 242

Because Lucas believes that it is impossible to determine the extent to which any observed unemployment reflects a voluntary choice by the unemployed worker, or is involuntarily imposed on the worker by a social process beyond the worker’s control, he rejects the distinction as artificial and lacking empirical content, the product of Keynes’s overactive imagination. As such, the concept requires no explanation by economists.

Involuntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope that it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Id., p. 243

Lucas’s point seems to be that the distinction between voluntary and involuntary unemployment is purely semantic and doesn’t correspond to any observable phenomena that are of scientific interest. He may be right, and if he chooses to explain observed fluctuations in unemployment without reference to the distinction between voluntary and involuntary unemployment, he is under no obligation to accommodate the preferences of those economists who believe that involuntary unemployment is a real phenomenon that does require an explanation.

There is a real conflict of paradigms here. Surely Lucas is entitled to reject the Keynesian involuntary unemployment paradigm, and he may be right that trying to explain involuntary unemployment is unlikely to result in a progressive scientific research program. But it is not obvious that he is right.

One might argue that Lucas’s argument against involuntary unemployment resembles the argument of physicalists who deny the reality of mind and of consciousness. According to physicalists, only the brain and brain states exist. The mind and consciousness are just metaphysical concepts lacking any empirical basis. I happen to think that denying the reality of mind and consciousness borders on the absurd, but I am even less of an expert on the mind-body problem than I am on the existence of involuntary unemployment, so I won’t push this particular analogy any further.

Instead, let me try another analogy. Within the legal speed limits, drivers choose different speeds at which they drive while on a turnpike. Does it make sense to distinguish between situations in which they drive less than the speed limit voluntarily and situations in which they drive less than the speed limit involuntarily? Sometimes, there are physical bottlenecks (e.g., lane closures or other obstructions of traffic flows) that prevent cars on the turnpike from going as fast as drivers would have chosen to but for those physical constraints.

Would Lucas deny that the distinction between driving at less than the speed limit voluntarily and driving at less than the speed limit involuntarily is meaningful and empirically relevant?

There are also situations in which drivers involuntarily drive at less than the speed limit, not because of any physical bottleneck on traffic flows, but because of the voluntary choices of some drivers to slow down to rubberneck at something at the side of the turnpike that doesn’t physically obstruct the flow of traffic. Does the interaction between the voluntary choices of different drivers on the turnpike result in some drivers making involuntary choices?
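For what it’s worth, the rubbernecking scenario can be sketched as a toy single-lane car-following simulation (all parameters hypothetical; an illustrative sketch, not a calibrated traffic-flow model). One driver voluntarily slows to rubberneck, and every follower, unable to pass, is forced below the speed it would otherwise have chosen:

```python
# Toy single-lane car-following sketch (hypothetical parameters, not a
# calibrated traffic model). Car 1 voluntarily slows to rubberneck; the
# followers, unable to pass, are forced below their desired speeds.

def simulate(desired, rubbernecker, slow_speed, steps=50, gap=10.0, dt=1.0):
    """Cars indexed front (0) to back; each stays at least `gap` behind its leader."""
    n = len(desired)
    pos = [-i * 2 * gap for i in range(n)]  # initial spacing, front car at 0
    actual = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            v = slow_speed if i == rubbernecker else desired[i]
            if i > 0:  # speed capped so the car never closes within `gap`
                v = min(v, max(0.0, (pos[i - 1] - pos[i] - gap) / dt))
            actual[i] = v
            pos[i] += v * dt
    return actual  # speeds in the final (steady-state) step

desired = [30.0, 30.0, 30.0, 30.0, 30.0]  # everyone wants the speed limit
speeds = simulate(desired, rubbernecker=1, slow_speed=10.0)
for i, (d, v) in enumerate(zip(desired, speeds)):
    tag = "voluntary" if i == 1 or v >= d else "involuntary"
    print(f"car {i}: desired {d:.0f}, actual {v:.1f} ({tag})")
```

In this sketch, the cars behind the rubbernecker end up traveling at his speed even though no physical obstruction blocks their lane; their below-limit speeds are involuntary in exactly the sense at issue.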

I think the distinction between voluntary and involuntary choices may be relevant and meaningful in this context, but I know almost nothing about traffic-flow theory or queuing theory. I would welcome hearing what readers think about the relevance of the voluntary-involuntary distinction in the context of traffic-flow theory and whether they see any implications for such a distinction in unemployment theory.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.
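Menger’s imputation principle lends itself to a toy calculation (all figures hypothetical): the value of a capital good’s services is the value of the marginal output imputed to it in its most valuable use, and the “cost” of that use is simply the value of the output foregone in the next-best alternative, not an independently given magnitude.

```python
# Toy illustration of Menger's imputation principle (all figures hypothetical).
# A machine can produce either of two goods; the value of its services is the
# market value of the marginal output imputed to it, and the cost of using it
# one way is the value of the output foregone in the next-best use.

price = {"widgets": 5.0, "gadgets": 4.0}          # market price per unit
marginal_output = {"widgets": 12, "gadgets": 14}  # extra units the machine adds

# Value imputed to the machine's services in each possible use:
imputed = {good: price[good] * marginal_output[good] for good in price}

best_use = max(imputed, key=imputed.get)
next_best = min(imputed, key=imputed.get)

machine_service_value = imputed[best_use]  # derived from output value, not from cost
opportunity_cost = imputed[next_best]      # value of the foregone alternative

print(f"use machine for {best_use}: imputed service value {machine_service_value}")
print(f"opportunity cost = foregone {next_best} value {opportunity_cost}")
```

Nothing in the calculation treats cost as a primitive: both the machine’s value and the cost of employing it are derived from the prices of outputs, which is the point at issue.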

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [Menger’s] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, [were brothers-in-law. Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927 and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899), who was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up either in the US or Britain after World War II, and Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he succeeded in training two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, but Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek’s interests, however, shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist supporters, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before Mises’s death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato, and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They fault mainstream models for their preoccupation with equilibrium states, a focus they attribute to extravagantly unrealistic and patently false assumptions about the knowledge possessed by economic agents, assumptions that make equilibrium the inevitable — and trivial — conclusion entailed by them. In their view, this focus on equilibrium reflects a preoccupation with mathematical formalism, in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites this mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to aim at keeping the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of his research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student, Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32; it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not proved unfortunate, and not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants — that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium that comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector, and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants are a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start.

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving a “meaningful theorem” from the 1st- and 2nd-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” were misnamed, a label reflecting mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”
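
The comparative-statics exercise in items 10-14 can be reduced to a few lines. This is a minimal sketch using a hypothetical linear demand and supply curve (my illustration, not Samuelson’s own algebra): solve for equilibrium twice, with and without a small parameter change, and note that nothing in the method describes the path between the two equilibria.

```python
# Comparative statics in a hypothetical linear market: demand q = a - b*p,
# supply q = c + d*p. The method compares two equilibria that differ only
# in a small parameter change; the adjustment path between them is untold.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the market-clearing price and quantity."""
    p = (a - c) / (b + d)
    return p, a - b * p

p0, q0 = equilibrium(a=10.0, b=1.0, c=2.0, d=1.0)   # baseline equilibrium
p1, q1 = equilibrium(a=10.5, b=1.0, c=2.0, d=1.0)   # small shift in demand

# Only the signed differences (dp > 0, dq > 0 for a demand increase) are
# reported -- the "meaningful theorem" -- not how the market gets there.
print(p1 - p0, q1 - q0)  # → 0.25 0.25
```

All the method yields is the pair of differences; the transition process is, as item 13 says, left undetermined.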

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.
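
The auctioneer’s procedure in items 16-17 can be sketched as a simple loop. The excess-demand function below comes from a hypothetical two-trader Cobb-Douglas exchange economy with good 2 as numéraire (my example, chosen because its excess demand is well behaved, not anything in Walras):

```python
# Tatonnement sketch: the auctioneer announces a price, computes excess
# demand, and adjusts the price in proportion to it. No trade is allowed
# until excess demand is (approximately) zero.

def excess_demand(p):
    # Hypothetical two-trader Cobb-Douglas exchange economy with good 2
    # as numeraire; excess demand for good 1 is zero only at p = 1.
    return 0.5 / p - 0.5

p, step = 0.3, 0.5
for _ in range(200):
    z = excess_demand(p)
    if abs(z) < 1e-12:       # markets clear: the auctioneer permits trade
        break
    p += step * z            # raise the price under excess demand, cut it under excess supply
print(round(p, 6))  # → 1.0
```

In this deliberately tame one-market example the loop does converge; the next items explain why nothing guarantees that outcome in general.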

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.
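
Even before the SMD results, Herbert Scarf’s 1960 example (“Some Examples of Global Instability of the Competitive Equilibrium”) showed that tatonnement can cycle forever. Below is my discretized sketch of that example: three goods, three consumers, each endowed with one unit of one good and wanting it together with the next good in fixed proportions. The unique normalized equilibrium is p = (1, 1, 1), yet the adjustment process orbits it instead of converging.

```python
# Scarf's three-good exchange economy: consumer i owns one unit of good i
# and demands goods i and i+1 in equal quantities. Walras's Law holds
# exactly, the unique normalized equilibrium is p = (1, 1, 1), yet the
# rule "adjust each price in proportion to its excess demand" circles the
# equilibrium rather than approaching it.

def excess_demand(p):
    z = []
    for j in range(3):
        nxt, prv = (j + 1) % 3, (j - 1) % 3
        z.append(p[j] / (p[j] + p[nxt]) + p[prv] / (p[prv] + p[j]) - 1.0)
    return z

p = [1.3, 1.0, 0.8]                      # start away from equilibrium
for _ in range(5000):
    z = excess_demand(p)
    p = [pi + 0.05 * zi for pi, zi in zip(p, z)]
    norm = (sum(pi * pi for pi in p) / 3.0) ** 0.5
    p = [pi / norm for pi in p]          # keep prices on the sphere |p|^2 = 3

# Distance from the equilibrium price vector stays bounded away from zero.
dist = sum((pi - 1.0) ** 2 for pi in p) ** 0.5
print(dist)
```

Running the loop leaves the price vector roughly as far from (1, 1, 1) as it started: a concrete instance of the convergence failure the SMD Theorem generalizes.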

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model in a 1959 paper.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.

37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.

38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.

39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices subject to unpredictable forecast errors that cause their expectations of future prices to change.

40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.

41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.

42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.

43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning, with each agent possessing knowledge that only an omniscient central planner could possess.

44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.

45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.

46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.

47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transaction prices toward that common expectation notwithstanding remaining expectational differences.

48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.

49. The perspective of specialists whose expectations govern the actual transaction prices in most markets is almost always a PE perspective, from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.

50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.

51. A strict equilibrium approach to business cycles, either real-business-cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.

52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business-cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.

53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

Graeber Against Economics

David Graeber’s vitriolic essay “Against Economics” in the New York Review of Books has generated responses from Noah Smith and Scott Sumner among others. I don’t disagree with much that Noah or Scott have to say, but I want to dig a little deeper than they did into some of Graeber’s arguments, because even though I think he is badly misinformed on many if not most of the subjects he writes about, I actually have some sympathy for his dissatisfaction with the current state of economics. Graeber wastes no time on pleasantries.

There is a growing feeling, among those who have the responsibility of managing large economies, that the discipline of economics is no longer fit for purpose. It is beginning to look like a science designed to solve problems that no longer exist.

A serious polemicist should avoid blatant mischaracterizations, exaggerations and cheap shots, and should be well-grounded in the object of his critique, thereby avoiding criticisms that undermine his own claims to expertise. I grant that Graeber has some valid criticisms to make, even agreeing with him, at least in part, on some of them. But his indiscriminate attacks on, and caricatures of, all neoclassical economics betray a superficial understanding of that discipline.

Graeber begins by attacking what he considers the misguided and obsessive focus on inflation by economists.

A good example is the obsession with inflation. Economists still teach their students that the primary economic role of government—many would insist, its only really proper economic role—is to guarantee price stability. We must be constantly vigilant over the dangers of inflation. For governments to simply print money is therefore inherently sinful.

Every currency unit, or banknote issued by a central bank, now in circulation, as Graeber must know, has been “printed.” So to say that economists consider it sinful for governments to print money is either a deliberate falsehood, or an emotional rhetorical outburst, as Graeber immediately, and apparently unwittingly, acknowledges!

If, however, inflation is kept at bay through the coordinated action of government and central bankers, the market should find its “natural rate of unemployment,” and investors, taking advantage of clear price signals, should be able to ensure healthy growth. These assumptions came with the monetarism of the 1980s, the idea that government should restrict itself to managing the money supply, and by the 1990s had come to be accepted as such elementary common sense that pretty much all political debate had to set out from a ritual acknowledgment of the perils of government spending. This continues to be the case, despite the fact that, since the 2008 recession, central banks have been printing money frantically [my emphasis] in an attempt to create inflation and compel the rich to do something useful with their money, and have been largely unsuccessful in both endeavors.

Graeber’s use of the ambiguous pronoun “this” beginning the last sentence of the paragraph betrays his own confusion about what he is saying. Central banks are printing money and attempting to “create” inflation while supposedly still believing that inflation is a menace and printing money is a sin. Go figure.

We now live in a different economic universe than we did before the crash. Falling unemployment no longer drives up wages. Printing money does not cause inflation. Yet the language of public debate, and the wisdom conveyed in economic textbooks, remain almost entirely unchanged.

Again showing an inadequate understanding of basic economic theory, Graeber suggests that, in theory if not practice, falling unemployment should cause wages to rise. The Phillips Curve, upon which Graeber’s suggestion relies, represents the empirically observed negative correlation between the rate of average wage increase and the rate of unemployment. But correlation does not imply causation, so there is no basis in economic theory to assert that falling unemployment causes the rate of increase in wages to accelerate. That the empirical correlation between unemployment and wage increases has not recently been in evidence provides no compelling reason for changing textbook theory.

From this largely unfounded attack on economic theory – a theory that I myself consider, in many respects, inadequate and unreliable – Graeber launches a bitter diatribe against the supposed hegemony of economists over policy-making.

Mainstream economists nowadays might not be particularly good at predicting financial crashes, facilitating general prosperity, or coming up with models for preventing climate change, but when it comes to establishing themselves in positions of intellectual authority, unaffected by such failings, their success is unparalleled. One would have to look at the history of religions to find anything like it.

The ability to predict financial crises would be desirable, but that cannot be the sole criterion for whether economics has advanced our understanding of how economic activity is organized or what effects policy changes have. (I note parenthetically that many economists defensively reject the notion that economic crises are predictable on the grounds that if economists could predict a future economic crisis, those predictions would be immediately self-fulfilling. This response, of course, effectively disproves the idea that economists could predict that an economic crisis would occur in the way that astronomers predict solar eclipses. But this response slays a strawman. The issue is not whether economists can predict future crises, but whether they can identify conditions indicating an increased likelihood of a crisis and suggest precautionary measures to reduce the likelihood that a potential crisis will occur. But Graeber seems uninterested in or incapable of engaging the question at even this moderate level of subtlety.)

In general, I doubt that economists can make more than a modest contribution to improved policy-making, and the best that one can hope for is probably that they steer us away from the worst potential decisions rather than identifying the best ones. But no one, as far as I know, has yet been burned at the stake by a tribunal of economists.

To this day, economics continues to be taught not as a story of arguments—not, like any other social science, as a welter of often warring theoretical perspectives—but rather as something more like physics, the gradual realization of universal, unimpeachable mathematical truths. “Heterodox” theories of economics do, of course, exist (institutionalist, Marxist, feminist, “Austrian,” post-Keynesian…), but their exponents have been almost completely locked out of what are considered “serious” departments, and even outright rebellions by economics students (from the post-autistic economics movement in France to post-crash economics in Britain) have largely failed to force them into the core curriculum.

I am now happy to register agreement with something that Graeber says. Economists in general have become overly attached to axiomatic and formalistic mathematical models that create a false and misleading impression of rigor and mathematical certainty. In saying this, I don’t dispute that mathematical modeling is an important part of much economic theorizing, but it should not exclude other approaches to economic analysis and discourse.

As a result, heterodox economists continue to be treated as just a step or two away from crackpots, despite the fact that they often have a much better record of predicting real-world economic events. What’s more, the basic psychological assumptions on which mainstream (neoclassical) economics is based—though they have long since been disproved by actual psychologists—have colonized the rest of the academy, and have had a profound impact on popular understandings of the world.

That heterodox economists have a better record of predicting economic events than mainstream economists is an assertion for which Graeber offers no evidence or examples. I would not be surprised if he could cite examples, but one would have to weigh the evidence surrounding those examples before concluding that predictions by heterodox economists were more accurate than those of their mainstream counterparts.

Graeber returns to the topic of monetary theory, which seems a particular bugaboo of his. Taking the extreme liberty of holding up Mrs. Theresa May as a spokesperson for orthodox economics, he focuses on her definitive 2017 statement that there is no magic money tree.

The truly extraordinary thing about May’s phrase is that it isn’t true. There are plenty of magic money trees in Britain, as there are in any developed economy. They are called “banks.” Since modern money is simply credit, banks can and do create money literally out of nothing, simply by making loans. Almost all of the money circulating in Britain at the moment is bank-created in this way.

What Graeber chooses to ignore is that banks do not operate magically; they make loans and create deposits in seeking to earn profits. Whether their decisions are good or bad is debatable, but the debate isn’t about a magical process; it’s a debate about theory and evidence. Graeber describes how he thinks economists think about how banks create money, correctly observing that there is a debate about how that process works, but without understanding those differences or their significance.

Economists, for obvious reasons, can’t be completely oblivious to the role of banks, but they have spent much of the twentieth century arguing about what actually happens when someone applies for a loan. One school insists that banks transfer existing funds from their reserves, another that they produce new money, but only on the basis of a multiplier effect. . . Only a minority—mostly heterodox economists, post-Keynesians, and modern money theorists—uphold what is called the “credit creation theory of banking”: that bankers simply wave a magic wand and make the money appear, secure in the confidence that even if they hand a client a credit for $1 million, ultimately the recipient will put it back in the bank again, so that, across the system as a whole, credits and debts will cancel out. Rather than loans being based in deposits, in this view, deposits themselves were the result of loans.

The one thing it never seemed to occur to anyone to do was to get a job at a bank, and find out what actually happens when someone asks to borrow money. In 2014 a German economist named Richard Werner did exactly that, and discovered that, in fact, loan officers do not check their existing funds, reserves, or anything else. They simply create money out of thin air, or, as he preferred to put it, “fairy dust.”

Graeber is right that economists differ in how they understand banking. But the simple transfer-of-funds view, a product of the eighteenth century, was gradually rejected over the course of the nineteenth century. The money-multiplier view that largely superseded it enjoyed a half-century or more of dominance as a theory of banking, and it remains a popular way for introductory textbooks to explain how banking works, though it would be better if it were decently buried and forgotten. But since James Tobin’s classic essay “Commercial banks as creators of money” was published in 1963, most economists who have thought carefully about banking have concluded that the amount of deposits created by banks corresponds to the quantity of deposits that the public, given its expectations about the future course of the economy and of prices, chooses to hold. The important point is that, while a bank can create a deposit at no more than the negligible cost of making a book-keeping, or an electronic, entry in a customer’s account, creating the deposit typically obliges the bank to hold either reserves in its account with the Fed or some amount of Treasury instruments convertible, on very short notice, into reserves at the Fed.
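The arithmetic behind the textbook multiplier story, whatever the theory’s defects, is easy to state: with a reserve ratio r, an initial reserve injection R supports at most R/r of deposits as each round of lending is redeposited. Here is a minimal sketch of that arithmetic; the 10% ratio and $1,000 injection are purely illustrative assumptions, not a claim about how any actual banking system behaves:

```python
# Illustrative arithmetic of the textbook money-multiplier story.
# The 10% reserve ratio and $1,000 injection are assumed numbers.

def deposits_by_round(reserves: float, reserve_ratio: float, rounds: int) -> float:
    """Cumulative deposits after a given number of lend-and-redeposit rounds."""
    total, injection = 0.0, reserves
    for _ in range(rounds):
        total += injection                # deposit created this round
        injection *= 1 - reserve_ratio    # share re-lent and redeposited
    return total

R, r = 1000.0, 0.10
print(deposits_by_round(R, r, 5))    # partial sum after five rounds
print(deposits_by_round(R, r, 200))  # approaches the textbook limit R / r
```

The geometric series converges to R/r, which is all the “multiplier” ever was; it says nothing about whether deposits are in fact constrained from the reserve side, which is precisely what the post-Tobin view denies.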

Graeber seems to think that something fundamental for the whole of macroeconomics is at stake in the question whether deposits create loans or loans create deposits. It is an important question, but not as significant as Graeber believes. Aside from that nuance, what’s remarkable is that Graeber actually acknowledges that the weight of professional opinion is on the side that says that loans create deposits. He thus triumphantly cites a report by Bank of England economists that correctly explained that banks create money, and do so in the normal course of business by making loans.

Before long, the Bank of England . . . rolled out an elaborate official report called “Money Creation in the Modern Economy,” replete with videos and animations, making the same point: existing economics textbooks, and particularly the reigning monetarist orthodoxy, are wrong. The heterodox economists are right. Private banks create money. Central banks like the Bank of England create money as well, but monetarists are entirely wrong to insist that their proper function is to control the money supply.

Graeber, I regret to say, is simply exposing the inadequacy of his knowledge of the history of economics. Adam Smith in The Wealth of Nations explained that banks create money and, in doing so, save the resources that would otherwise be wasted on producing additional gold and silver. Subsequent economists from David Ricardo through Henry Thornton, J. S. Mill and R. G. Hawtrey were perfectly aware that banks, in extending credit to borrowers, can supply money – either banknotes or deposits – at less than the cost of mining and minting new coins. So what is at issue, Graeber to the contrary notwithstanding, is not a dispute between orthodoxy and heterodoxy.

In fact, central banks do not in any sense control the money supply; their main function is to set the interest rate—to determine how much private banks can charge for the money they create.

Central banks set a rental price for reserves, thereby controlling the quantity of reserves, into which bank deposits are convertible, that is available to the economy. One way to think about that quantity is that the stock of reserves, together with the aggregate demand to hold reserves, determines the exchange value of reserves and hence the price level. Another is that the interest rate, or the implied policy stance of the central bank, helps determine the public’s expectations about the future course of the price level, and it is those expectations that determine – within some margin of error or range – what the future course of the price level will turn out to be.

Almost all public debate on these subjects is therefore based on false premises. For example, if what the Bank of England was saying were true, government borrowing didn’t divert funds from the private sector; it created entirely new money that had not existed before.

This is just silly. Funds may or may not be diverted from the private sector, but the total resources available to society are finite. If the central bank creates additional money, it creates additional claims to those resources, and the creation of additional claims to resources necessarily has an effect on the prices of inputs and of outputs.

One might have imagined that such an admission would create something of a splash, and in certain restricted circles, it did. Central banks in Norway, Switzerland, and Germany quickly put out similar papers. Back in the UK, the immediate media response was simply silence. The Bank of England report has never, to my knowledge, been so much as mentioned on the BBC or any other TV news outlet. Newspaper columnists continued to write as if monetarism was self-evidently correct. Politicians continued to be grilled about where they would find the cash for social programs. It was as if a kind of entente cordiale had been established, in which the technocrats would be allowed to live in one theoretical universe, while politicians and news commentators would continue to exist in an entirely different one.

Even if we stipulate that this characterization of what the BBC and newspaper columnists believe is correct, what we would have – at best – is a commentary on the ability of economists to communicate their understanding of how the economy works to the intelligentsia that communicates with ordinary citizens. It is not in and of itself a commentary on the state of economic knowledge, inasmuch as Graeber himself concedes that most economists don’t accept monetarism. And that has been the case, as Noah Smith pointed out in his Bloomberg column on Graeber, since the early 1980s, when the Monetarist experiment of conducting monetary policy by controlling the monetary aggregates proved entirely unworkable and had to be abandoned on the verge of precipitating a financial crisis.

Only after this long warmup decrying the sorry state of contemporary economic theory does Graeber begin discussing the book under review, Robert Skidelsky’s Money and Government.

What [Skidelsky] reveals is an endless war between two broad theoretical perspectives. . . The crux of the argument always seems to turn on the nature of money. Is money best conceived of as a physical commodity, a precious substance used to facilitate exchange, or is it better to see money primarily as a credit, a bookkeeping method or circulating IOU—in any case, a social arrangement? This is an argument that has been going on in some form for thousands of years. What we call “money” is always a mixture of both, and, as I myself noted in Debt (2011), the center of gravity between the two tends to shift back and forth over time. . . .One important theoretical innovation that these new bullion-based theories of money allowed was, as Skidelsky notes, what has come to be called the quantity theory of money (usually referred to in textbooks—since economists take endless delight in abbreviations—as QTM).

But these two perspectives are not mutually exclusive, and, depending on time, place, circumstances, and the particular problem that is the focus of attention, either of the two may be the appropriate paradigm for analysis.

The QTM argument was first put forward by a French lawyer named Jean Bodin, during a debate over the cause of the sharp, destabilizing price inflation that immediately followed the Iberian conquest of the Americas. Bodin argued that the inflation was a simple matter of supply and demand: the enormous influx of gold and silver from the Spanish colonies was cheapening the value of money in Europe. The basic principle would no doubt have seemed a matter of common sense to anyone with experience of commerce at the time, but it turns out to have been based on a series of false assumptions. For one thing, most of the gold and silver extracted from Mexico and Peru did not end up in Europe at all, and certainly wasn’t coined into money. Most of it was transported directly to China and India (to buy spices, silks, calicoes, and other “oriental luxuries”), and insofar as it had inflationary effects back home, it was on the basis of speculative bonds of one sort or another. This almost always turns out to be true when QTM is applied: it seems self-evident, but only if you leave most of the critical factors out.

In the case of the sixteenth-century price inflation, for instance, once one takes account of credit, hoarding, and speculation—not to mention increased rates of economic activity, investment in new technology, and wage levels (which, in turn, have a lot to do with the relative power of workers and employers, creditors and debtors)—it becomes impossible to say for certain which is the deciding factor: whether the money supply drives prices, or prices drive the money supply.

As a matter of logic, if the value of money depends on the precious metals (gold or silver) from which coins are minted, the value of money is necessarily affected by a change in the value of those metals. A large increase in the stock of gold and silver, as Graeber concedes, must reduce the value of those metals, so the subsequent inflation is attributable, at least in part, to the gold and silver discoveries, even if the newly mined gold and silver was shipped mainly to privately held Indian and Chinese hoards rather than minted into new coins. An exogenous increase in prices may well have caused the quantity of credit money to increase, but that is analytically distinct from the inflationary effect of a reduced value of gold or silver when, as was the case in the sixteenth century, money is legally defined as a specific weight of gold or silver.

Technically, this comes down to a choice between what are called exogenous and endogenous theories of money. Should money be treated as an outside factor, like all those Spanish doubloons supposedly sweeping into Antwerp, Dublin, and Genoa in the days of Philip II, or should it be imagined primarily as a product of economic activity itself, mined, minted, and put into circulation, or more often, created as credit instruments such as loans, in order to meet a demand—which would, of course, mean that the roots of inflation lie elsewhere?

There is no such choice, because any theory must posit certain initial conditions and definitions, which are given or exogenous to the analysis. How the theory is framed and which variables are treated as exogenous and which are treated as endogenous is a matter of judgment in light of the problem and the circumstances. Graeber is certainly correct that, in any realistic model, the quantity of money is endogenously, not exogenously, determined, but that doesn’t mean that the value of gold and silver may not usefully be treated as exogenous in a system in which money is defined as a weight of gold or silver.

To put it bluntly: QTM is obviously wrong. Doubling the amount of gold in a country will have no effect on the price of cheese if you give all the gold to rich people and they just bury it in their yards, or use it to make gold-plated submarines (this is, incidentally, why quantitative easing, the strategy of buying long-term government bonds to put money into circulation, did not work either). What actually matters is spending.

Graeber is talking in circles, failing to distinguish between the quantity theory of money – a theory about the value of a pure medium of exchange with no use except to be received in exchange – and a theory of the real value of gold and silver when money is defined as a weight of gold or silver. The value of gold (or silver) in monetary uses must be roughly equal to its value in non-monetary uses, which is determined by the total stock of gold and the demand to hold gold or to use it in coinage or for other purposes (e.g., jewelry and ornamentation). An increase in the stock of gold relative to demand must reduce its value; that relationship between price and quantity is not the same as QTM. The quantity of a metallic money will increase as its value in non-monetary uses declines. If there were literally an unlimited demand for newly mined gold to be sent immediately, unused, into hoards, Graeber’s argument would be correct. But the fact that much of the newly mined gold initially went into hoards does not mean that all of the newly mined gold went into hoards.
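The point about hoards can be put in two lines of arithmetic. In this toy calculation, unless all newly mined gold is hoarded, the non-hoarded remainder raises the effective stock and depresses the metal’s value; the unit-elastic demand curve, the 80 percent hoarding share, and every number here are assumptions made purely for illustration:

```python
# Toy calculation of the hoarding point: unless ALL newly mined gold is
# hoarded, the non-hoarded remainder raises the effective stock and
# depresses the metal's value. Unit-elastic demand and all numbers are
# illustrative assumptions.

def value_after_mining(initial_stock: float, new_gold: float,
                       hoarded_share: float) -> float:
    """Value per unit (baseline normalized to 1.0), counting only non-hoarded gold."""
    effective_stock = initial_stock + new_gold * (1 - hoarded_share)
    return initial_stock / effective_stock

print(value_after_mining(100.0, 50.0, 0.8))  # value falls even with 80% hoarded
print(value_after_mining(100.0, 50.0, 1.0))  # unchanged only if everything is hoarded
```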

In sum, Graeber conflates the quantity theory of money with a theory of a commodity used both as money and as a real commodity. The quantity theory of money, applied to a pure medium of exchange, posits that changes in the quantity of money cause proportionate changes in the price level. Changes in the quantity of a real commodity that is also used as money have nothing to do with the quantity theory of money.
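The proportionality claim for a pure medium of exchange is just the equation of exchange, MV = PQ, solved for the price level with velocity and real output held fixed. A sketch, with all magnitudes assumed for illustration only:

```python
# Equation-of-exchange arithmetic: M * V = P * Q.
# All magnitudes are illustrative assumptions, not data.

def price_level(money: float, velocity: float, real_output: float) -> float:
    """Price level implied by the equation of exchange, P = M * V / Q."""
    return money * velocity / real_output

M, V, Q = 500.0, 4.0, 1000.0
baseline = price_level(M, V, Q)
doubled = price_level(2 * M, V, Q)   # double M, hold V and Q constant
print(doubled / baseline)            # strict QTM proportionality: ratio of 2
```

The ceteris-paribus clause is the whole theory: proportionality holds only if V and Q are unaffected by the change in M, which is exactly what a commodity money with non-monetary uses does not guarantee.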

Relying on a dubious account of the history of monetary theory by Skidelsky, Graeber blames the obsession of economists with the quantity theory for repeated monetary disturbances, starting with the late-17th-century deflation in Britain, when silver appreciated relative to gold, causing prices measured in silver to fall. Graeber thus fails to see that, under a metallic money, real disturbances do have repercussions on the level of prices, repercussions having nothing to do with an exogenous prior change in the quantity of money.

According to Skidelsky, the pattern was to repeat itself again and again, in 1797, the 1840s, the 1890s, and, ultimately, the late 1970s and early 1980s, with Thatcher and Reagan’s (in each case brief) adoption of monetarism. Always we see the same sequence of events:

(1) The government adopts hard-money policies as a matter of principle.

(2) Disaster ensues.

(3) The government quietly abandons hard-money policies.

(4) The economy recovers.

(5) Hard-money philosophy nonetheless becomes, or is reinforced as, simple universal common sense.

There is so much indiscriminate generalization here that it is hard to know what to make of it. But the conduct of monetary policy has always been fraught, and learning has been slow and painful. We can and must learn to do better, but blanket condemnations of economics are unlikely to lead to better outcomes.

How was it possible to justify such a remarkable string of failures? Here a lot of the blame, according to Skidelsky, can be laid at the feet of the Scottish philosopher David Hume. An early advocate of QTM, Hume was also the first to introduce the notion that short-term shocks—such as Locke produced—would create long-term benefits if they had the effect of unleashing the self-regulating powers of the market:

Actually I agree that Hume, as great and insightful a philosopher as he was and as sophisticated an economic observer as he was, was an unreliable monetary theorist. And one of the reasons he was led astray was his unwarranted attachment to the quantity theory of money, an attachment that was not shared by his close friend Adam Smith.

Ever since Hume, economists have distinguished between the short-run and the long-run effects of economic change, including the effects of policy interventions. The distinction has served to protect the theory of equilibrium, by enabling it to be stated in a form which took some account of reality. In economics, the short-run now typically stands for the period during which a market (or an economy of markets) temporarily deviates from its long-term equilibrium position under the impact of some “shock,” like a pendulum temporarily dislodged from a position of rest. This way of thinking suggests that governments should leave it to markets to discover their natural equilibrium positions. Government interventions to “correct” deviations will only add extra layers of delusion to the original one.

I also agree that focusing on long-run equilibrium without regard to short-run fluctuations can lead to terrible macroeconomic outcomes, but that doesn’t mean that long-run effects are never of concern and may be safely disregarded. But just as current suffering must not be disregarded when pursuing vague and uncertain long-term benefits, ephemeral transitory benefits shouldn’t obscure serious long-term consequences. Weighing such alternatives isn’t easy, but nothing is gained by denying that the alternatives exist. Making those difficult choices is inherent in policy-making, whether macroeconomic or climate policy-making.

Although Graeber takes a valid point – that a supposed tendency toward an optimal long-run equilibrium does not justify disregarding an acute short-term problem – to an extreme, his criticism of the New Classical approach to policy-making that replaced the flawed mainstream Keynesian macroeconomics of the late 1970s is worth listening to. The New Classical approach self-consciously rejected any policy aimed at short-run considerations, owing to a time-inconsistency paradox. It was based almost entirely on the logic of general-equilibrium theory and on an illegitimate methodological argument rejecting as unscientific, and unworthy of serious consideration in the brave New Classical world of scientific macroeconomics, all macroeconomic theories not rigorously deduced from the unarguable axiom of optimizing behavior by rational agents (and therefore not, in the official jargon, microfounded).

It’s difficult for outsiders to see what was really at stake here, because the argument has come to be recounted as a technical dispute between the roles of micro- and macroeconomics. Keynesians insisted that the former is appropriate to studying the behavior of individual households or firms, trying to optimize their advantage in the marketplace, but that as soon as one begins to look at national economies, one is moving to an entirely different level of complexity, where different sorts of laws apply. Just as it is impossible to understand the mating habits of an aardvark by analyzing all the chemical reactions in its cells, so patterns of trade, investment, or the fluctuations of interest or employment rates were not simply the aggregate of all the microtransactions that seemed to make them up. The patterns had, as philosophers of science would put it, “emergent properties.” Obviously, it was necessary to understand the micro level (just as it was necessary to understand the chemicals that made up the aardvark) to have any chance of understanding the macro, but that was not, in itself, enough.

As an aside, it’s worth noting that the denial or disregard of the possibility of any emergent properties by New Classical economists (of which what came to be known as New Keynesian economics is really a mildly schismatic offshoot) is nicely illustrated by the un-self-conscious alacrity with which the representative-agent approach was adopted as a modeling strategy in the first few generations of New Classical models. New Classical theorists now insist, truly enough, that representative agency is not essential to New Classical modeling, but the methodologically reductive nature of New Classical macroeconomics – in which all macroeconomic theories must be derived from the axiom of individually maximizing behavior, except insofar as specific “frictions” are introduced by explicit assumption – is essential. (See here, here, and here.)

The counterrevolutionaries, starting with Keynes’s old rival Friedrich Hayek . . . took aim directly at this notion that national economies are anything more than the sum of their parts. Politically, Skidelsky notes, this was due to a hostility to the very idea of statecraft (and, in a broader sense, of any collective good). National economies could indeed be reduced to the aggregate effect of millions of individual decisions, and, therefore, every element of macroeconomics had to be systematically “micro-founded.”

Hayek’s role in the microfoundations movement is important, but his position was more sophisticated and less methodologically doctrinaire than that of the New Classical macroeconomists, if for no other reason than that Hayek didn’t believe that macroeconomics should, or could, be derived from general-equilibrium theory. His criticism of Keynesian macroeconomics for being insufficiently grounded in microeconomic principles, like that of economists such as Clower and Leijonhufvud, was aimed at finding microeconomic arguments that could explain, embellish, and modify the propositions of Keynesian macroeconomic theory. That is the sort of scientific – not methodological – reductivism that Hayek’s friend Karl Popper advocated: the theoretical and empirical challenge of reducing a higher-level theory to more fundamental foundations, as when physicists and chemists search for theoretical breakthroughs that allow the propositions of chemistry to be reduced to more fundamental propositions of physics. That attempt to reduce chemistry to underlying physical principles is very different from a methodological rejection of all chemistry that cannot be derived from underlying deep physical theories.

There is probably more than a grain of truth in Graeber’s belief that there was a political and ideological subtext in the demand for microfoundations by New Classical macroeconomists, but the success of the microfoundations program was also the result of philosophically unsophisticated methodological error. How to apportion the share of blame going to mistaken methodology, professional and academic opportunism, and a hidden political agenda is a question worthy of further investigation. The easy part is to identify the mistaken methodology, which Graeber does. As for the rest, Graeber simply asserts bad faith, but with little evidence.

In Graeber’s comprehensive condemnation of modern economics, the efficient market hypothesis, being closely related to the rational-expectations hypothesis so central to New Classical economics, is not spared either. Here again, though I share and sympathize with his disdain for EMH, Graeber can’t resist exaggeration.

In other words, we were obliged to pretend that markets could not, by definition, be wrong—if in the 1980s the land on which the Imperial compound in Tokyo was built, for example, was valued higher than that of all the land in New York City, then that would have to be because that was what it was actually worth. If there are deviations, they are purely random, “stochastic” and therefore unpredictable, temporary, and, ultimately, insignificant.

Of course, no one is obliged to pretend that markets could not be wrong — and certainly not by definition. The EMH simply asserts that the price of an asset reflects all publicly available information. But what the EMH asserts is certainly not true in many or even most cases, because people with non-public information (or with superior capacity to process public information) may affect asset prices, and such people may profit at the expense of those less knowledgeable or less competent in anticipating price changes. Moreover, those advantages may result from (largely wasted) resources devoted to acquiring and processing information, and it is such people who make fortunes betting on the future course of asset prices.
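The usual empirical gloss on the EMH – that if prices already embody public information, successive price changes should be serially uncorrelated and therefore unpredictable – can be illustrated with a toy simulation. The Gaussian returns, the seed, and every parameter are assumptions, not a model of any actual market:

```python
# Toy simulation of the EMH's usual empirical gloss: if prices already
# embody public information, successive returns should be serially
# uncorrelated. Gaussian returns and all parameters are assumptions.
import random

random.seed(42)
returns = [random.gauss(0.0, 0.01) for _ in range(10_000)]

def lag1_autocorr(xs: list) -> float:
    """Lag-1 autocorrelation of a series."""
    n = len(xs)
    mean = sum(xs) / n
    num = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    den = sum((x - mean) ** 2 for x in xs)
    return num / den

print(lag1_autocorr(returns))  # near zero for an "efficient" path
```

Finding near-zero autocorrelation in such a series is, of course, consistent with efficiency but does not establish it, which is one reason the hypothesis is so hard to test decisively.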

Graeber then quotes Skidelsky approvingly:

There is a paradox here. On the one hand, the theory says that there is no point in trying to profit from speculation, because shares are always correctly priced and their movements cannot be predicted. But on the other hand, if investors did not try to profit, the market would not be efficient because there would be no self-correcting mechanism. . .

Secondly, if shares are always correctly priced, bubbles and crises cannot be generated by the market….

This attitude leached into policy: “government officials, starting with [Fed Chairman] Alan Greenspan, were unwilling to burst the bubble precisely because they were unwilling to even judge that it was a bubble.” The EMH made the identification of bubbles impossible because it ruled them out a priori.

So the apparent paradox that concerns Skidelsky and Graeber dissolves upon (only a modest amount of) further reflection. Proper understanding and revision of the EMH makes it clear that bubbles can occur. But that doesn’t mean that bursting bubbles is a job that can be safely delegated to any agency, including the Fed.

Moreover, the housing bubble peaked in early 2006, two and a half years before the financial crisis of September 2008. The financial crisis was not unrelated to the housing bubble, which undoubtedly added to the fragility of the financial system and its vulnerability to macroeconomic shocks, but the main cause of the crisis was a Fed policy unnecessarily focused on a temporary blip in commodity prices, which persuaded the Fed not to loosen policy in 2008 during a worsening recession. That scenario was similar to the one in 1929, when concern about an apparent stock-market bubble caused the Fed to tighten money repeatedly, raising interest rates, thereby causing a downturn and a crash of asset prices that triggered the Great Depression.

Graeber and Skidelsky correctly identify some of the problems besetting macroeconomics, but their indiscriminate attack on all economic theory is unlikely to improve the situation. A pity, because a more focused and sophisticated critique of economics than the one they have served up has never been more urgently needed to enable economists to perform the modest service to mankind of which they might be capable.

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written a number of previous posts discussing the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers, by Jack Schwartz, appears in a volume edited by Ernst Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. Unavailable on the internet except behind a paywall, a used copy cost me $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators; von Neumann algebras; quantum field theory; time-sharing; parallel computing; programming language design and implementation; robotics; set-theoretic approaches in computational logic; proof and program verification systems; multimedia authoring tools; experimental studies of visual perception; and multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics in which physicists and other physical scientists express their theories is often inexact or approximate, because reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearings in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. This inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can’t be specified or even meaningfully restricted without very special simplifying assumptions; it is too special, because the simplifying assumptions about the agents, the technologies, the constraints and the price-setting mechanism are at best only approximations and, at worst, entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach, so foreign to the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within its own framework. A different kind of analysis is required. As a Marshallian, he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., the analysis of an overall “market” for labor, in which the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. Yet that recognition did not stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots, missing obvious implications of familiar ideas that we have already encountered and, at least partially, understood.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

What’s Wrong with DSGE Models Is Not Representative Agency

The basic DSGE macroeconomic model taught to students is based on a representative agent. Many critics of modern macroeconomics and DSGE models have therefore latched on to the representative agent as the key – and disqualifying – feature of DSGE models and, by extension, of modern macroeconomics. Criticism of representative-agent models is certainly appropriate, because, as Alan Kirman admirably explained some 25 years ago, the simplification inherent in a macroeconomic model based on a representative agent renders the model unsuitable for most of the problems that a macroeconomic model might be expected to address, like explaining why economies might suffer from aggregate fluctuations in output, employment and the price level.

While altogether fitting and proper, criticism of the representative-agent model in macroeconomics had an unfortunate unintended consequence, which was to focus attention on representative agency rather than on the deeper problems with DSGE models, problems that cannot be solved by just throwing the Representative Agent under the bus.

Before explaining why representative agency is not the root problem with DSGE models, let’s take a moment or two to talk about where the idea of representative agency comes from. The idea can be traced back to F. Y. Edgeworth who, in his exposition of the ideas of W. S. Jevons – one of the three marginal revolutionaries of the 1870s – introduced two “representative particulars” to illustrate how trade could maximize the utility of each particular subject to the benchmark utility of the counterparty. That analysis of two different representative particulars, reflected in what is now called the Edgeworth Box, remains one of the outstanding achievements and pedagogical tools of economics. (See a superb account of the historical development of the Box and the many contributions to economic theory that it facilitated by Thomas Humphrey). But Edgeworth’s analysis and its derivatives always focused on the incentives of two representative agents rather than a single isolated representative agent.

Only a few years later, Alfred Marshall in his Principles of Economics, offered an analysis of how the equilibrium price for the product of a competitive industry is determined by the demand for (derived from the marginal utility accruing to consumers from increments of the product) and the supply of that product (derived from the cost of production). The concepts of the marginal cost of an individual firm as a function of quantity produced and the supply of an individual firm as a function of price not yet having been formulated, Marshall, in a kind of hand-waving exercise, introduced a hypothetical representative firm as a stand-in for the entire industry.

The completely ad hoc and artificial concept of a representative firm was not well-received by Marshall’s contemporaries, and the young Lionel Robbins, starting his long career at the London School of Economics, subjected the idea to withering criticism in a 1928 article. Even without Robbins’s criticism, the development of the basic theory of a profit-maximizing firm quickly led to the disappearance of Marshall’s concept from subsequent economics textbooks. James Hartley wrote about the short and unhappy life of Marshall’s Representative Firm in the Journal of Economic Perspectives.

One might have thought that the inauspicious career of Marshall’s Representative Firm would have discouraged modern macroeconomists from resurrecting the Representative Firm in the barely disguised form of a Representative Agent in their DSGE models, but the convenience and relative simplicity of solving a DSGE model for a single agent was too enticing to be resisted.

Therein lies the difference between the theory of the firm and a macroeconomic theory. The gain in convenience from adopting the Representative Firm evaporated when Marshall’s Cambridge students and successors, dispensing with the representative firm, provided a more rigorous, more satisfying and more flexible exposition of the industry supply curve and of the corresponding partial-equilibrium analysis than Marshall had achieved with it. Providing no advantages of realism, logical coherence, analytical versatility or heuristic intuition, the Representative Firm was unceremoniously expelled from the polite company of economists.

However, as a heuristic device for portraying certain properties of an equilibrium state — whose existence is assumed, not derived — even a single representative individual or agent proved to be a serviceable device with which to display the defining first-order conditions: the simultaneous equality of marginal rates of substitution in consumption and marginal rates of transformation in production with relative market prices. Unlike the Edgeworth Box, populated by two representative agents whose different endowments or preference maps result in mutually beneficial trade, the representative agent, even if afforded the opportunity to trade, can find no gain from engaging in it.

An excellent example of this heuristic was provided by Jack Hirshleifer in his 1970 textbook Investment, Interest, and Capital, wherein he adapted the basic Fisherian model of intertemporal consumption, production and exchange opportunities, representing the canonical Fisherian exposition in a single basic diagram. But the representative agent necessarily represents a state of no trade, because, for a single isolated agent, production and consumption must coincide, and the equilibrium price vector must have the property that the representative agent chooses not to trade at that price vector. I reproduce Hirshleifer’s diagram (Figure 4-6) in the attached chart.

Here is how Hirshleifer explained what was going on.

Figure 4-6 illustrates a technique that will be used often from now on: the representative-individual device. If one makes the assumption that all individuals have identical tastes and are identically situated with respect to endowments and productive opportunities, it follows that the individual optimum must be a microcosm of the social equilibrium. In this model the productive and consumptive solutions coincide, as in the Robinson Crusoe case. Nevertheless, market opportunities exist, as indicated by the market line M’M’ through the tangency point P* = C*. But the price reflected in the slope of M’M’ is a sustaining price, such that each individual prefers to hold the combination attained by productive transformations rather than engage in market transactions. The representative-individual device is helpful in suggesting how the equilibrium will respond to changes in exogenous data—the proviso being that such changes do not modify the distribution of wealth among individuals.

While not spelling out the limitations of the representative-individual device, Hirshleifer makes it clear that the representative-agent device is being used as an expository technique to describe, not as an analytical tool to determine, intertemporal equilibrium. The existence of intertemporal equilibrium does not depend on the assumptions necessary to allow a representative individual to serve as a stand-in for all other agents. The representative-individual is portrayed only to provide the student with a special case serving as a visual aid with which to gain an intuitive grasp of the necessary conditions characterizing an intertemporal equilibrium in production and consumption.
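Hirshleifer's sustaining-price logic lends itself to a small numerical check. The sketch below (all functional forms and parameter values are hypothetical, chosen purely for illustration) computes the production optimum of a two-period Fisherian representative individual with log utility, derives the sustaining interest rate from the marginal product of invested endowment, and then verifies by grid search that, facing the market line through that point, the individual chooses not to trade:

```python
import math

# Hypothetical two-period Fisherian parameters (illustration only):
# utility = ln(c0) + beta*ln(c1), production c1 = A*(e0 - c0)**alpha
beta, alpha, A, e0 = 0.9, 0.5, 4.0, 10.0

# Production optimum: the FOC 1/c0 = beta*alpha/(e0 - c0)
# gives c0* = e0/(1 + beta*alpha)
c0_star = e0 / (1 + beta * alpha)
k_star = e0 - c0_star                      # endowment invested
c1_star = A * k_star ** alpha

# Sustaining gross interest rate: the slope of the market line M'M'
# equals the marginal product of investment at the optimum
R = alpha * A * k_star ** (alpha - 1)

def utility(c0):
    """Utility along the market line through (c0*, c1*) with slope -R."""
    c1 = c1_star + R * (c0_star - c0)
    return math.log(c0) + beta * math.log(c1)

# Grid search over possible trades: the best choice is the no-trade point
grid = [c0_star * (0.5 + i / 1000) for i in range(1001)]
best_c0 = max(grid, key=utility)
assert abs(best_c0 - c0_star) < 0.01 * c0_star
```

The grid search lands back on P* = C*: at the sustaining price, production and consumption coincide, and no market transaction improves on the productive solution, which is exactly the property Hirshleifer's Figure 4-6 depicts.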

But the role of the representative agent in the DSGE model is very different from that of the representative individual in Hirshleifer’s exposition of the canonical Fisherian theory. In Hirshleifer’s exposition, the representative individual is just a special case and a visual aid with no independent analytical importance. In contrast, the representative agent in the DSGE model is an assumption whereby an analytical solution to the DSGE model can be derived, allowing the modeler to generate quantitative results to be compared with existing time-series data, to generate forecasts of future economic conditions, and to evaluate the effects of alternative policy rules.

The prominent and dubious role of the representative agent in DSGE models provided a convenient target at which critics of DSGE models could direct their criticisms. In Congressional testimony, Robert Solow famously attacked DSGE models and used their reliance on a representative agent to make them seem, well, simply ridiculous.

Most economists are willing to believe that most individual “agents” – consumers, investors, borrowers, lenders, workers, employers – make their decisions so as to do the best that they can for themselves, given their possibilities and their information. Clearly they do not always behave in this rational way, and systematic deviations are well worth studying. But this is not a bad first approximation in many cases. The DSGE school populates its simplified economy – remember that all economics is about simplified economies just as biology is about simplified cells – with exactly one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever. One important consequence of this “representative agent” assumption is that there are no conflicts of interest, no incompatible expectations, no deceptions.

This all-purpose decision-maker essentially runs the economy according to its own preferences. Not directly, of course: the economy has to operate through generally well-behaved markets and prices. Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.

An obvious example is that the DSGE story has no real room for unemployment of the kind we see most of the time, and especially now: unemployment that is pure waste. There are competent workers, willing to work at the prevailing wage or even a bit less, but the potential job is stymied by a market failure. The economy is unable to organize a win-win situation that is apparently there for the taking. This sort of outcome is incompatible with the notion that the economy is in rational pursuit of an intelligible goal. The only way that DSGE and related models can cope with unemployment is to make it somehow voluntary, a choice of current leisure or a desire to retain some kind of flexibility for the future or something like that. But this is exactly the sort of explanation that does not pass the smell test.

While Solow’s criticism of the representative agent was correct, he left himself open to an effective rejoinder by defenders of DSGE models who could point out that the representative agent was adopted by DSGE modelers not because it was an essential feature of the DSGE model but because it enabled DSGE modelers to simplify the task of analytically solving for an equilibrium solution. With enough time and computing power, however, DSGE modelers were able to write down models with a few heterogeneous agents (themselves representative of particular kinds of agents in the model) and then crank out an equilibrium solution for those models.

Unfortunately for Solow, V. V. Chari also testified at the same hearing, and he responded directly to Solow, denying that DSGE models necessarily entail the assumption of a representative agent and identifying numerous examples even in 2010 of DSGE models with heterogeneous agents.

What progress have we made in modern macro? State of the art models in, say, 1982, had a representative agent, no role for unemployment, no role for financial factors, no sticky prices or sticky wages, no role for crises and no role for government. What do modern macroeconomic models look like? The models have all kinds of heterogeneity in behavior and decisions. This heterogeneity arises because people’s objectives differ, they differ by age, by information, by the history of their past experiences. Please look at the seminal work by Rao Aiyagari, Per Krusell and Tony Smith, Tim Kehoe and David Levine, Victor Rios Rull, Nobu Kiyotaki and John Moore. All of them . . . prominent macroeconomists at leading departments . . . much of their work is explicitly about models without representative agents. Any claim that modern macro is dominated by representative-agent models is wrong.

So on the narrow question of whether DSGE models are necessarily members of the representative-agent family, Solow was debunked by Chari. But debunking the claim that DSGE models must be representative-agent models doesn’t mean that DSGE models have the basic property that some of us at least seek in a macro-model: the capacity to explain how and why an economy may deviate from a potential full-employment time path.

Chari actually addressed the charge that DSGE models cannot explain lapses from full employment (to use Pigou’s rather anodyne terminology for depressions). Here is Chari’s response:

In terms of unemployment, the baseline model used in the analysis of labor markets in modern macroeconomics is the Mortensen-Pissarides model. The main point of this model is to focus on the dynamics of unemployment. It is specifically a model in which labor markets are beset with frictions.

Chari’s response was thus to treat lapses from full employment as “frictions.” To treat unemployment as the result of one or more frictions is to take a very narrow view of the potential causes of unemployment. The argument that Keynes made in the General Theory was that unemployment is a systemic failure of a market economy, which lacks an error-correction mechanism capable of returning the economy to a full-employment state, at least within a reasonable period of time.
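The flow logic by which models in the Mortensen-Pissarides tradition generate unemployment is worth making explicit, because it shows how thoroughly the frictions view is built in. In steady state, the flow of employed workers losing jobs equals the flow of unemployed workers finding them, so the unemployment rate is determined entirely by two frictional parameters (the rates in this sketch are purely illustrative, not calibrated to anything):

```python
def steady_state_unemployment(s, f):
    """Steady-state unemployment rate when the flow into unemployment,
    s*(1 - u), equals the flow out, f*u: solving gives u = s/(s + f).
    s: separation rate per period, f: job-finding rate per period."""
    return s / (s + f)

# Illustrative rates: 2% of jobs end each period, 30% of the
# unemployed find work each period
u = steady_state_unemployment(s=0.02, f=0.3)  # -> 0.0625
```

On this accounting, unemployment can only ever reflect the frictional parameters s and f; the possibility that mutually inconsistent plans could produce a systemic shortfall of employment has no place in the bookkeeping.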

The basic approach of DSGE is to treat the solution of the model as the optimal solution of a problem. In the representative-agent version of a DSGE model, the optimal solution is the optimal solution for a single agent, so optimality is already baked into the model. With heterogeneous agents, the solution of the model is a set of mutually consistent optimal plans, and optimality is baked into that heterogeneous-agent DSGE model as well. Sophisticated heterogeneous-agent models can incorporate various frictions and constraints that cause the solution to deviate from a hypothetical frictionless, unconstrained first-best optimum.
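A toy example may help show the sense in which optimality is baked in. In the one-period representative-agent economy sketched below (functional forms and parameter values are hypothetical, for illustration only), the "equilibrium" is nothing but the solution to a planner's problem, so the outcome is first-best by construction:

```python
import math

# Hypothetical one-period representative-agent economy:
# utility = ln(c) - chi*n, technology c = A*n
A, chi = 2.0, 2.0

def planner_objective(n):
    c = A * n                     # resource constraint
    return math.log(c) - chi * n  # utility of the (single) agent

# The model's equilibrium is the maximum of this single objective:
# FOC 1/n = chi gives n* = 1/chi = 0.5; verify by grid search
n_star = max((i / 10000 for i in range(1, 10001)), key=planner_objective)
assert abs(n_star - 1 / chi) < 1e-3
```

Because the solution is literally the maximum of one agent's objective, any outcome the model produces must be read as optimal given the constraints; "unemployment" can enter only as an added friction, never as a failure of coordination.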

The policy message emerging from this modeling approach is that unemployment is attributable to frictions and other distortions that prevent the economy from reaching the first-best optimum that would, in their absence, be achieved automatically. The possibility that the optimal plans of individuals might be mutually incompatible, resulting in a systemic breakdown — that there could be a failure to coordinate — does not even come up for discussion.

One needn’t accept Keynes’s own theoretical explanation of unemployment to find the attribution of cyclical unemployment to frictions deeply problematic. But, as I have asserted in many previous posts (e.g., here and here) a modeling approach that excludes a priori any systemic explanation of cyclical unemployment, attributing instead all cyclical unemployment to frictions or inefficient constraints on market pricing, cannot be regarded as anything but an exercise in question begging.

 


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
