White and Hogan on Hayek and Cassel on the Causes of the Great Depression

Lawrence White and Thomas Hogan have just published a new paper in the Journal of Economic Behavior and Organization (“Hayek, Cassel, and the origins of the great depression”). Since White is a leading Hayek scholar, who has written extensively on Hayek’s economic writings (e.g., his important 2008 article “Did Hayek and Robbins Deepen the Great Depression?”) and edited the new edition of Hayek’s notoriously difficult volume, The Pure Theory of Capital, when it was published as volume 11 of the Collected Works of F. A. Hayek, the conclusion reached by the new paper that Hayek had a better understanding than Cassel of what caused the Great Depression is not, in and of itself, surprising.

However, I admit to being taken aback by the abstract of the paper:

We revisit the origins of the Great Depression by contrasting the accounts of two contemporary economists, Friedrich A. Hayek and Gustav Cassel. Their distinct theories highlight important, but often unacknowledged, differences between the international depression and the Great Depression in the United States. Hayek’s business cycle theory offered a monetary overexpansion account for the 1920s investment boom, the collapse of which initiated the Great Depression in the United States. Cassel’s warnings about a scarcity of gold reserves related to the international character of the downturn, but the mechanisms he emphasized contributed little to the deflation or depression in the United States.

I wouldn’t deny that there are differences between the way the Great Depression played out in the United States and in the rest of the world, e.g., Britain and France, which, to be sure, suffered less severely than did the US or, say, Germany. It is both possible, and important, to explore and understand the differential effects of the Great Depression in various countries. I am sorry to say that White and Hogan do neither. Instead, taking at face value the dubious authority of Friedman and Schwartz’s treatment of the Great Depression in the Monetary History of the United States, they assert that the cause of the Great Depression in the US was fundamentally different from the cause of the Great Depression in many or all other countries.

Taking that insupportable premise from Friedman and Schwartz, they simply invoke various numerical facts from the Monetary History as if those facts, in and of themselves, demonstrate what requires to be demonstrated: that the causes of the Great Depression in the US were different from those of the Great Depression in the rest of the world. That assumption vitiated the entire treatment of the Great Depression in the Monetary History, and it vitiates the results that White and Hogan reach about the merits of the conflicting explanations of the Great Depression offered by Cassel and Hayek.

I’ve discussed the failings of Friedman’s treatment of the Great Depression and of other episodes he analyzed in the Monetary History in previous posts (e.g., here, here, here, here, and here). The common failing of all the episodes treated by Friedman in the Monetary History and elsewhere is that he misunderstood how the gold standard operated, because his model of the gold standard was a primitive version of the price-specie-flow mechanism, in which the monetary authority determines the quantity of money, which then determines the price level, which then determines the balance of payments, the balance of payments being a function of the relative price levels of the different countries on the gold standard. Countries with relatively high price levels experience trade deficits and outflows of gold, and countries with relatively low price levels experience trade surpluses and inflows of gold. Under the mythical “rules of the game” of the gold standard, countries with gold inflows were supposed to expand their money supplies, so that prices would rise, and countries with outflows were supposed to reduce their money supplies, so that prices would fall. If countries followed the rules, then an international monetary equilibrium would eventually be reached.
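A minimal numerical sketch may help fix ideas. The toy model below implements the price-specie-flow story exactly as just described, with each country’s price level proportional to its money stock and gold flowing from the high-price to the low-price country. The starting money stocks and adjustment speed are purely illustrative assumptions of mine, and, to be clear, this is a sketch of the model I am criticizing, not of how the gold standard actually worked.

```python
# Toy version of the price-specie-flow mechanism: each country's price
# level is proportional to its money stock (velocity and output
# normalized to one), and gold flows from the high-price country to the
# low-price country until price levels converge.  The starting money
# stocks and the adjustment speed k are illustrative assumptions.

def specie_flow(money_a, money_b, k=0.1, tol=1e-9, max_iters=100_000):
    for _ in range(max_iters):
        price_a, price_b = money_a, money_b  # P = M under the normalization
        if abs(price_a - price_b) < tol:
            break
        flow = k * (price_a - price_b)  # high-price country loses gold
        money_a -= flow
        money_b += flow
    return money_a, money_b

m_a, m_b = specie_flow(120.0, 80.0)
print(round(m_a, 6), round(m_b, 6))  # both converge to 100.0
```

If both countries follow the “rules,” the loop converges to a common price level, with each national money stock determined by the initial world stock of gold. That is Hume’s model, and it is precisely the model whose realism is at issue here.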

That is the model of the gold standard that Friedman used throughout his career. He was not alone; Hayek and Mises and many others also used that model, following Hume’s treatment in his essay on the balance of trade. But it’s the wrong model. The correct model is the one originating with Adam Smith, based on the law of one price, which says that prices of all commodities in terms of gold are equalized by arbitrage in all countries on the gold standard.

As a first approximation, under the Smithean model, there is only one price level adjusted for different currency parities for all countries on the gold standard. So if there is deflation in one country on the gold standard, there is deflation for all countries on the gold standard. If the rest of the world was suffering from deflation under the gold standard, the US was also suffering from a deflation of approximately the same magnitude as every other country on the gold standard was suffering.
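The first approximation can be stated in symbols (a sketch, with notation of my own choosing): let $e_i$ be country $i$’s fixed gold parity and $P^g$ the world price level of goods in terms of gold.

```latex
% Law of one price under the gold standard (notation illustrative):
% e_i = units of currency i per unit of gold (fixed parity)
% P^g = world price level of goods in terms of gold
P_i = e_i \, P^g \qquad \text{for every country } i \text{ on the gold standard}.
% A rise in the real value of gold (a fall in P^g) implies a common,
% proportional deflation in every P_i; each national money stock adjusts
% endogenously to the price level rather than determining it.
```

On this view, deflation is a property of the gold standard as a whole, and national money stocks are effects, not causes, of the common price level.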

The entire premise of the Friedman account of the Great Depression, adopted unquestioningly by White and Hogan, is that there was a different causal mechanism for the Great Depression in the United States from the mechanism operating in the rest of the world. That premise is flatly wrong. The causation assumed by Friedman in the Monetary History was the exact opposite of the actual causation. It wasn’t, as Friedman assumed, that the decline in the quantity of money in the US was causing deflation; it was the common deflation in all gold-standard countries that was causing the quantity of money in the US to decline.

To be sure, there was a banking collapse in the US that exacerbated the catastrophe, but the collapse was an effect of the underlying cause, deflation, not an independent cause. Absent the deflationary collapse, there is no reason to assume that the investment boom in the most advanced and most productive economy in the world after World War I was unsustainable, as the Hayekian overinvestment/malinvestment hypothesis posits, with no evidence of unsustainability other than the subsequent economic collapse.

So what did cause deflation under the gold standard? It was the rapid increase in the monetary demand for gold resulting from the insane policy of the Bank of France (disgracefully endorsed by Hayek as late as 1932) which Cassel, along with Ralph Hawtrey (whose writings, closely parallel to Cassel’s on the danger of postwar deflation, avoid all of the ancillary mistakes White and Hogan attribute to Cassel), was warning would lead to catastrophe.

It is true that Cassel also believed that over the long run not enough gold was being produced to avoid deflation. White and Hogan devote inordinate space and attention to that issue, even though that secular tendency toward deflation is entirely different from the catastrophic effects of the increase in gold demand in the late 1920s triggered by the insane policy of the Bank of France.

The US could have mitigated the effects if it had been willing to accommodate the Bank of France’s demand to increase its gold holdings. Of course, mitigating the effects of the insane policy of the Bank of France would have rewarded the French for their catastrophic policy, but, under the circumstances, some other means of addressing French misconduct would have spared the world incalculable suffering. But misled by an inordinate fear of stock market speculation, the Fed tightened policy in 1928-29 and began accumulating gold rather than accommodate the French demand.

And the Depression came.

A Primer on Say’s Law and Walras’s Law

Say’s Law, often paraphrased as “supply creates its own demand,” is one of the oldest “laws” in economics. It is also one of the least understood and most contentious propositions in economics. I am now revising the current draft of my paper “Say’s Law and the Classical Theory of Depressions,” which surveys and clarifies various interpretations, disputes and misunderstandings about Say’s Law. I thought that a brief update of my section discussing the relationship between Say’s Law and Walras’s Law might make for a useful blogpost. Not only does it discuss the meaning of Say’s Law and its relationship to Walras’s Law, it expands the narrow understanding of Say’s Law and corrects the mistaken view that Say’s Law does not hold in a monetary economy, because, given a demand to hold a pure medium of exchange, real goods may be supplied only to accumulate cash, not to obtain real goods and services. IOW, supply may be a demand for cash, not for goods. Under this interpretation, Say’s Law is valid only when the economy is in a macro or monetary equilibrium with no excess demand for money.

Here’s my discussion of that logically incorrect belief. (Let me add as a qualification that not only Say’s Law, but Walras’s Law, as I explained elsewhere in my paper, is not valid when there is not a complete set of forward and contingent markets. That’s because to prove Walras’s Law all agents must be optimizing on the same set of prices, whether actual observed prices or expected, but currently unobserved, prices. See also an earlier post about this paper in which I included the relevant excerpt from the paper.)

The argument that a demand to hold cash invalidates Say’s Law, because output may be produced for the purpose of accumulating cash rather than to buy other goods and services, is an argument that had been made by nineteenth-century critics of Say’s Law. The argument did not go without response, but the nature and import of the response was not well, or widely, understood, and the criticism was widely credited. Thus, in his early writings on business-cycle theory, F. A. Hayek, making no claim to originality, maintained, matter-of-factly, that money involves a disconnect between aggregate supply and aggregate demand, describing money as a “loose joint” in the theory of general equilibrium, creating the central theoretical problem to be addressed by business-cycle theory. So, even Hayek in 1927 did not accept the validity of Say’s Law.

Oskar Lange (“Say’s Law: A Restatement and Criticism”) subsequently formalized the problem, introducing his distinction between Say’s Law and Walras’s Law. Lange defined Walras’s Law as the proposition that the sum of excess demands, corresponding to any price vector announced by a Walrasian auctioneer, must identically equal zero.[1] In a barter model, individual optimization, subject to the budget constraint corresponding to a given price vector, implies that the value of the planned purchases and planned sales by each agent must be exactly equal; if the value of the excess demands of each individual agent is zero, the sum of the values of the excess demands of all individuals must also be zero. In a barter model, Walras’s Law and Say’s Law are equivalent: demand is always sufficient to absorb supply.
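Lange’s proposition can be written compactly (a sketch; the notation is mine): each agent’s budget constraint at the announced price vector forces the value of that agent’s excess demands to zero, and summing over agents yields the law.

```latex
% Budget constraint of agent i at announced price vector p:
p \cdot z_i(p) = 0 \quad \text{for each agent } i = 1, \dots, m.
% Summing over agents, with Z_j(p) the aggregate excess demand for good j:
\sum_{i=1}^{m} p \cdot z_i(p) = \sum_{j=1}^{n} p_j \, Z_j(p) = 0
\qquad \text{(Walras's Law)}.
% Excess supplies enter as negative excess demands, so the identity
% holds for any announced price vector, equilibrium or not.
```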

But in a model in which agents hold cash, which they use when transacting, they may supply real goods in order to add to their cash holdings. Because individual agents may seek to change their cash holdings, Lange argued that the equivalence between Walras’s Law and Say’s Law in a barter model does not carry over to a model in which agents hold money. Say’s Law cannot hold in such an economy unless excess demands in the markets for real goods sum to zero. But if agents all wish to add to their holdings of cash, their excess demand for cash will be offset by an excess supply of goods, which is precisely what Say’s Law denies.

It is only when an equilibrium price vector is found at which the excess demand in each market is zero that Say’s Law is satisfied. Say’s Law, according to Lange, is a property of a general equilibrium, not a necessary property of rational economic conduct, as Say and his contemporaries and followers had argued. When our model is extended from a barter to a monetary setting, Say’s Law must be restated in the generalized form of Walras’s Law. But, unlike Say’s Law, Walras’s Law does not exclude the possibility of an aggregate excess supply of all goods. Aggregate demand can be deficient, and it can result in involuntary unemployment.

At bottom, this critique of Say’s Law depends on the assumption that the quantity of money is exogenously fixed, so that individuals can increase or decrease their holdings of money only by spending either less or more than their incomes. However, as noted above, if there is a market mechanism that allows an increased demand for cash balances to elicit an increased quantity of cash balances, so that the public need not reduce expenditures to finance additions to their holdings of cash, Lange’s critique may not invalidate Say’s Law.

A competitive monetary system based on convertibility into gold or some other asset[2] has precisely this property. In particular, with money privately supplied by a set of traders (let’s call them banks), money is created when a bank accepts a money-backing asset (IOU) supplied by a customer in exchange for issuing its liability (a banknote or a deposit), which is widely acceptable as a medium of exchange. As first pointed out by Thompson (1974), Lange’s analytical oversight was to assume that in a Walrasian model with n real goods and money, there are only (n+1) goods or assets. In fact, there are really (n+2) goods or assets; there are n real goods and two monetary assets (i.e., the money issued by the bank and the money-backing asset accepted by the bank in exchange for the money that it issues). Thus, an excess demand for money need not, as Lange assumed, be associated with, or offset by, an excess supply of real commodities; it may be offset by a supply of money-backing assets supplied by those seeking to increase their cash holdings.
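Thompson’s point can be put in the same notation (again a sketch, with symbols of my own choosing): the aggregate budget identity has n+2 terms, not n+1.

```latex
% n real goods, bank-issued money (m), and the money-backing asset (b):
\sum_{j=1}^{n} p_j \, Z_j + Z_m + Z_b = 0.
% Lange implicitly imposed Z_b \equiv 0, so an excess demand for money
% (Z_m > 0) required an excess supply of real goods, \sum_j p_j Z_j < 0.
% With convertible bank money, Z_m > 0 can instead be offset by
% Z_b < 0: agents supply money-backing assets (IOUs) to banks in
% exchange for newly issued money, leaving the markets for real goods
% in balance.
```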

Properly specifying the monetary model relevant to macroeconomic analysis eliminates a misconception that afflicted monetary and macroeconomic theory for a very long time, and provides a limited rehabilitation of Say’s Law. But that rehabilitation doesn’t mean that all would be well if we got rid of central banks, abandoned all countercyclical policies and let private banks operate without restrictions. None of those difficult and complicated questions can be answered by invoking or rejecting Say’s Law.

[1] Excess supplies are recorded as negative excess demands.

[2] The classical economists generally regarded gold or silver as the appropriate underlying asset into which privately issued monies would be convertible, but the possibility of a fiat standard was not rejected on analytical principle.

Bitcoin Doesn’t Rule

Look out, bitcoin’s back. After the first bitcoin bubble burst almost exactly three years ago, on December 15, 2017, when bitcoin hit its previous all-time high of over $19,600, bitcoin lost more than 80% of its value, falling to less than $3200 on December 14, 2018. It gradually recovered, more than doubling in value (to over $7000) by December 13, 2019, and reached a new all-time high today of $20,872.

Bitcoin’s remarkable recovery is again sparking senseless talk by its more extreme promoters that it will soon transform the world monetary system, leading to the hyperinflationary collapse of worthless unbacked fiat currencies and a flight to bitcoin to escape that collapse. This is nonsense, as I have explained at length in a number of previous posts (e.g., here, here, and here) and at even greater length in a forthcoming paper, a draft of which is available here.

In this post, I offer a summary of my argument about why bitcoin, despite its success as a speculative asset, is intrinsically unsuited to be a widely used medium of exchange, let alone a dominant currency. Indeed, success as a speculative asset is precisely what disqualifies it as anything more than a niche medium of exchange.

By a “medium of exchange,” I mean a good or instrument readily accepted in exchange by agents, even if they don’t value the non-monetary services provided by that good or instrument, in the expectation that other agents will accept it in exchange at a reasonably predictable value close to its current value. After a good begins to function as a medium of exchange, its value may rise above the value it would have had if it were demanded only for the real services it provides, but arbitrage ensures that the value of a medium of exchange will be equalized in all uses, monetary and real, so the expected value of a medium of exchange will not vary as a result of the intended use for which it is acquired.

By a “pure medium of exchange” I mean a good or instrument providing no non-monetary services and therefore demanded solely on account of its expected future resale value. The analysis of the value of a pure medium of exchange seems to hinge on three different factors affecting its expected future resale value: (1) backward induction, (2) tax liability, and (3) network effects.

Backward induction refers to the influence of a predictable future state on the present. If agents all foresee a future event, the influence of that future event, however distant, must rebound backward toward the present. Thus, if it is certain that a pure medium of exchange must lose its value once no one is willing to accept it in exchange, that predictable loss of value must deprive it of value immediately, because no one will want to be the last person to accept it.
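The unraveling logic can be sketched in a few lines of code (the horizon and discount factor are illustrative assumptions of mine, not anything in the formal literature):

```python
# Backward induction on a pure medium of exchange: worthless for sure at
# some terminal date T, and worth today only its discounted expected
# resale value tomorrow.  T and the discount factor are illustrative.

def induced_values(T, discount=0.99):
    values = [0.0] * (T + 1)   # values[T] = 0: no one accepts it after T
    for t in range(T - 1, -1, -1):
        values[t] = discount * values[t + 1]
    return values

print(induced_values(10)[0])  # 0.0 -- the unraveling reaches the present
```

With any finite, foreseen terminal date, the value is zero at every earlier date as well; the argument loses its force only when the terminal date is uncertain or indefinitely remote.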

Backward induction is used routinely in formal exchange and game-theoretic models, but its relevance is often disputed when the certainty and the timing of the last period are unclear. In that environment, people seem more willing to assume that there will always be someone else around who will accept the medium of exchange at a positive value. Even so, backward induction at least suggests that the value of any pure medium of exchange is sensitive to expectational shocks, making a pure medium of exchange a potentially unstable pillar of an economic system.

Tax liability refers to the acceptability of a medium of exchange to discharge tax liabilities to the government. Acceptability to discharge a stream of future tax liabilities imposed by the government can maintain a positive value for a pure medium of exchange even if the backward induction argument is otherwise compelling. However, acceptability to discharge tax payments is routinely extended to government issued, or government sanctioned, fiat currencies but never to privately issued cryptocurrencies.

Network effects result from the property of some goods that their usefulness is contingent on, and enhanced by, the extent to which the good is used by other people. Think of the difference between a refrigerator and a telephone. Clearly, the desirability of, and the demand for, a medium of exchange increases with the number of other people using that medium of exchange. Additionally, as more people use a given medium of exchange, the cost to any individual user of switching away from the medium used by the rest of the network increases.

Network effects are important for many reasons, but for purposes of this discussion, network effects are particularly important because they provide an explanation other than the tax-liability argument for why backward induction need not drive the value of a pure medium of exchange to zero. If a new medium of exchange provides some exchange service superior to, or not provided at all by, the existing, more widely used, medium of exchange, and it attracts even a small network of users that take advantage of that service, a demand for the continued use of the new medium of exchange may be created. If the cost of switching to another medium of exchange, and thereby foregoing the unique service provided by the new medium, is sufficiently high, current users may persist in using the new medium of exchange despite its predictable future loss of value.
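The switching-cost condition just described amounts to a simple comparison, sketched below with purely illustrative magnitudes of my own:

```python
# A user keeps using a niche medium of exchange as long as the one-time
# cost of switching networks exceeds the expected loss from the medium's
# predictable future depreciation.  All magnitudes are illustrative.

def keeps_using(switching_cost, expected_depreciation_loss):
    return switching_cost > expected_depreciation_loss

print(keeps_using(50.0, 10.0))  # True: locked in despite expected decline
print(keeps_using(5.0, 10.0))   # False: cheap alternatives, user exits
```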

Thus, even though it provides no current real services, and even though its current value is drawn entirely from its expected future value, bitcoin may have succeeded in providing a niche medium of exchange service for transactions in which one or both parties have a strong desire or need for anonymity. The underlying blockchain technology is thought to provide such assurance to those transacting with bitcoins. At least for now, this niche service seems to serve a small network of users better than any available alternative, and the current costs of switching to an alternative medium of exchange providing similar assurance of anonymity may be prohibitive. That network effect, combined with high switching cost, may be sufficient to prevent backward induction from driving the value of bitcoins down to zero, as might otherwise seem likely.

While this argument suggests that bitcoin will not soon disappear, the hopes of its promoters and supporters that continued appreciation will result in the collapse of fiat currencies and their replacement by bitcoin and possibly other cryptocurrencies seem destined for disappointment. Expectations of rapid appreciation do not attract new users into the network of users of a medium of exchange. On the contrary, as implicitly recognized by the familiar proposition (now known as Gresham’s Law) that bad money drives out the good, the expectation of rapid appreciation deters traders from using the appreciating (good) money in exchange, encouraging instead the use of the alternative (bad) money whose value is expected to be comparatively stable. Centuries, if not millennia, of monetary experience have demonstrated the wisdom of this proposition over and over again.

The very success of bitcoin as a speculative asset turns out to be the kiss of death for its chances of ever displacing the dollar as the dominant currency in the world.

My Paper “Fiat Money, Cryptocurrencies and the Pure Theory of Money” is now available on SSRN

I have just posted a draft of a paper that will appear in a forthcoming volume, Edward Elgar Handbook of Blockchain and Cryptocurrencies. The paper draws on a number of my earlier posts on fiat currencies, bitcoins and cryptocurrencies, such as this, this, this and this.

Here is the abstract of my paper.

This paper attempts to account for the rising value of cryptocurrencies using basic concepts of monetary theory. A positive value of fiat money is itself problematic inasmuch as that value apparently depends entirely on its expected resale value. A current value entirely dependent on expected future resale value seems inconsistent with backward induction. While fiat money can avoid the backward-induction problem if it is made acceptable in payment of taxes, acceptability for tax payments is unavailable to cryptocurrencies. Is the rising value of bitcoin and other cryptocurrencies a bubble? The paper argues that network effects may be an alternative mechanism for avoiding the logic of backward induction. Because users of any good subject to substantial network effects incur costs by switching to an incompatible alternative to the good currently used, users of bitcoin for certain transactions may be locked into continued use of bitcoin despite an expectation that its future value will eventually go to zero. Thus, even if bitcoin and other cryptocurrencies are bubble phenomena, network effects may lock existing users of bitcoin into continued use of bitcoin for those transactions for which bitcoins provide superior transactional services to those provided by conventional currencies. Nevertheless, the prospects for bitcoin’s expansion beyond its current niche uses are dim, because its architecture implies that a significant expansion in the demand for its transactional services would lead to rapid appreciation that is incompatible with service as a medium of exchange.

My Paper (with Sean Sullivan) on Defining Relevant Antitrust Markets Now Available on SSRN

UPDATE: The paper was selected by Concurrences as the best academic antitrust economics paper of 2020, and is forthcoming in the Antitrust Law Journal volume 83, number 2.

Antitrust aficionados may want to have a look at this new paper (“The Logic of Market Definition”) that I have co-authored with Sean Sullivan of the University of Iowa School of Law about defining relevant antitrust markets. The paper is now posted on SSRN.

Here is the abstract:

Despite the voluminous commentary that the topic has attracted in recent years, much confusion still surrounds the proper definition of antitrust markets. This paper seeks to clarify market definition, partly by explaining what should not factor into the exercise. Specifically, we identify and describe three common errors in how courts and advocates approach market definition. The first error is what we call the natural market fallacy: the mistake of treating market boundaries as preexisting features of competition, rather than the purely conceptual abstractions of a particular analytical process. The second is the independent market fallacy: the failure to recognize that antitrust markets must always be defined to reflect a theory of harm, and do not exist independent of a theory of harm. The third is the single market fallacy: the tendency of courts and advocates to seek some single, best relevant market, when in reality there will typically be many relevant markets, all of which could be appropriately drawn to aid in competitive effects analysis. In the process of dispelling these common fallacies, this paper offers a clarifying framework for understanding the fundamental logic of market definition.

My Paper Schumpeterian Enigmas Is Now Available on SSRN

I have just posted a paper I started writing in 2007 after reading Thomas McCraw’s excellent biography of Joseph Schumpeter, Prophet of Innovation. The paper, almost entirely written in 2007, lay unfinished until a few months ago, when I finally figured out how to conclude the paper. I greatly benefited from the comments and encouragement of David Laidler, R. G. Lipsey and Geoff Harcourt in its final stages.

The paper can be accessed or downloaded here.

Here is the abstract:

Drawing on McCraw’s (2007) biography, this paper assesses the character of Joseph Schumpeter. After a biographical summary of Schumpeter’s life and career as an economist, the paper considers a thread of deliberate posturing and pretense in Schumpeter’s grandiose ambitions and claims about himself. It also takes account of his ambiguous political and moral stance in his personal, public and scholarly lives, in particular his tenure as finance minister in the short-lived socialist government of German-Austria after World War I and his famous prediction of the ultimate demise of capitalism in his celebrated Capitalism, Socialism and Democracy. Although he emigrated to the US in the 1930s, Schumpeter was suspected of harboring pro-German or even pro-Nazi sympathies during World War II, sympathies that are at least partially confirmed by the letters and papers discussed at length by McCraw. Moreover, despite Schumpeter’s support for his student Paul Samuelson when Samuelson, owing to anti-Semitic prejudice, was rejected for a permanent appointment at Harvard, Samuelson himself judged Schumpeter to have been antisemitic. Nevertheless, despite his character flaws, Schumpeter exhibited a generosity of spirit in his assessments of the work of other economists in his last and greatest work, The History of Economic Analysis, a work also exhibiting uncharacteristic self-effacement by its author. That self-effacement may be attributable to Schumpeter’s own tragic and largely unrealized ambition to achieve the technical analytical breakthroughs to which he accorded the highest honors in his assessments of the work of other economists, notably, Quesnay, Cournot and Walras.

Why The Wall Street Journal Editorial Page is a Disgrace

In view of today’s absurdly self-righteous statement by the Wall Street Journal editorial board, I thought it would be a good idea to update one of my first posts (almost nine years ago) on this blog. Plus ça change, plus c’est la même chose (the more things change, the more they stay the same); it just gets worse and worse, even with only occasional contributions by the estimable Mr. Stephen Moore.

Stephen Moore has the dubious honor of being a member of the editorial board of The Wall Street Journal.  He lives up (or down) to that honor by imparting his wisdom from time to time in signed columns appearing on the Journal’s editorial page. His contribution in today’s Journal (“Why Americans Hate Economics”) is noteworthy for typifying the sad decline of the Journal’s editorial page into a self-parody of obnoxious, philistine anti-intellectualism.

Mr. Moore begins by repeating a joke once told by Professor Christina Romer, formerly President Obama’s chief economist, now in the economics department at the University of California at Berkeley. The joke, not really that funny, is that there are two kinds of students: those who hate economics and those who really hate economics. Professor Romer apparently told the joke to explain that it’s not true. Mr. Moore repeats it to explain why he thinks it really is. Why does he? Let Mr. Moore speak for himself: “Because too often economic theories defy common sense.” That’s it in a nutshell for Mr. Moore: common sense — the ultimate standard of truth.

So what’s that you say, Galileo? The sun is stationary and the earth travels around it? You must be kidding! Why any child can tell you that the sun rises in the east and moves across the sky every day and then travels beneath the earth at night to reappear in the east the next morning. And you expect anyone in his right mind to believe otherwise. What? It’s the earth rotating on its axis? Are you possessed of demons? And you say that the earth is round? If the earth were round, how could anybody stand at the bottom of the earth and not fall off? Galileo, you are a raving lunatic. And you, Mr. Einstein, you say that there is something called a space-time continuum, so that time slows down as the speed one travels approaches the speed of light. My God, where could you have come up with such an idea?  By that reasoning, two people could not agree on which of two events happened first if one of them was stationary and the other traveling at half the speed of light.  Away with you, and don’t ever dare speak such nonsense again, or, by God, you shall be really, really sorry.

The point of course is not to disregard common sense–that would not be very intelligent–but to recognize that common sense isn’t enough. Sometimes things are not what they seem – the earth, Mr. Moore, is not flat – and our common sense has to be trained to correspond with a reality that can only be discerned by the intensive application of our reasoning powers, in other words, by thinking harder about what the world is really like than just accepting what common sense seems to be telling us. But once you recognize that common sense has its limitations, the snide populist sneers in which Mr. Moore likes to indulge, mocking economists with degrees from elite universities–the stock-in-trade of the Journal editorial page–are exposed for what they are: the puerile defensiveness of those unwilling to do the hard thinking required to push back the frontiers of their own ignorance.

In today’s column, Mr. Moore directs his ridicule at a number of Keynesian nostrums that I would not necessarily subscribe to, at least not without significant qualification. But Keynesian ideas are also rooted in certain common-sense notions, for example, the idea that income and expenditure are mutually interdependent, the income of one person being derived from the expenditure of another. So when Mr. Moore simply dismisses as “nonsensical” the idea of extending unemployment insurance to keep the unemployed from having to stop spending, he is in fact rejecting an idea that is no less grounded in common sense than the idea that paying people not to work discourages work. The problem is that our common sense cuts in both directions. Mr. Moore likes one and wants to ignore the other.

What we would like economists–even those unfortunate enough to have graduated from an elite university–to tell us is which effect is stronger or, perhaps, when is one effect stronger and when is the other stronger. But all that would be too complicated and messy for Mr. Moore’s–and the Journal‘s–cartoonish view of the world.

In that cartoonish view, the problem is that good old Adam Smith of “invisible hand” fame and his virtuous economic doctrines supporting free enterprise got tossed aside when the dastardly Keynes invented “macroeconomics” in the 1930s. And here is Mr. Moore’s understanding of macroeconomics.

Macroeconomics simply took basic laws of economics we know to be true for the firm or family –i.e., that demand curves are downward-sloping; that when you tax something, you get less of it; that debts have to be repaid—and turned them on their head as national policy.

Simple, isn’t it? The economics of Adam Smith (the microeconomics of firm and family) is good because it is based on common sense; the macroeconomics of Keynes is bad because it turns common sense on its head. Now I don’t know how much Mr. Moore knows about economics other than that demand curves are downward-sloping, but perhaps he has heard of, or even studied, the law of comparative advantage.

The law of comparative advantage says, in one of its formulations, that even if a country is less productive (because of, say, backward technology or a poor endowment of natural resources) than other countries in producing every single product that it produces, it would still have a lower opportunity cost of production in at least one of those products, and could profitably export that product (or those products) in international markets in sufficient amounts to pay for its imports of other products. If there is a less common-sensical notion than that in all of economics, indeed in any scientific discipline, I would like to hear about it. And trust me, as a former university teacher of economics: there is no proposition in economics that students hate more or find harder to reconcile with their notions of common sense than the law of comparative advantage. Indeed, even most students who can correctly answer an exam question about comparative advantage don’t believe a word of what they wrote. The only students who actually do believe it are the ones who become economists.
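The arithmetic behind the law can be laid out in a few lines. A minimal sketch, with purely hypothetical labor costs chosen only to illustrate the logic (nothing here comes from actual trade data):

```python
# Hypothetical hours of labor needed to make one unit of each good.
# Country B is absolutely less productive in BOTH goods, yet still
# has a comparative advantage in one of them.
hours = {
    "A": {"cloth": 1.0, "wine": 2.0},   # country A: better at everything
    "B": {"cloth": 4.0, "wine": 3.0},   # country B: worse at everything
}

def opportunity_cost(country, good, other):
    """Units of `other` forgone to produce one unit of `good`."""
    return hours[country][good] / hours[country][other]

for c in ("A", "B"):
    oc = opportunity_cost(c, "cloth", "wine")
    print(f"{c}: one unit of cloth costs {oc:.2f} units of wine")

# A: one cloth costs 0.50 wine; B: one cloth costs 1.33 wine.
# Conversely, one wine costs A 2.00 cloth but costs B only 0.75 cloth.
# So B, despite being absolutely worse at both goods, has the lower
# opportunity cost in wine, and can profitably export wine to pay
# for its imports of cloth.
```

The point students resist is exactly the one the comments make explicit: absolute productivity never enters the comparison, only the ratio of forgone output within each country.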

But the law of comparative advantage is logically unassailable; you might as well try to disprove “two plus two equals four.” So, no, Mr. Moore, you don’t know why Americans hate economics, not unless, by Americans you mean that (one hopes small) group of individuals who happen to think exactly the same way as does the editorial board of The Wall Street Journal.

What’s Right and not so Right with Modern Monetary Theory

I am finishing up a first draft of a paper on fiat money, bitcoins and cryptocurrencies that will be included in a forthcoming volume on bitcoins and cryptocurrencies. The paper is loosely based on a number of posts that have appeared on this blog since I started blogging almost nine years ago. My first post appeared on July 5, 2011. Here are some of my posts on fiat money, bitcoins and cryptocurrencies (this, this, this, and this). In writing the paper, it occurred to me that it might be worthwhile to include a comment on Modern Monetary Theory, inasmuch as Modern Monetary Theorists have adopted from the chartalist school of thought associated with the work of G. F. Knapp the proposition that the value of fiat money is derived from the acceptability of fiat money for discharging the tax liabilities imposed by the governments issuing those fiat moneys. But there were clearly other economists, both before and after Knapp, who have offered roughly the same explanation for the positive value of fiat money that offers no real non-monetary services to those holding such moneys. Here is the section from my draft about Modern Monetary Theory.

Although there’s a long line of prominent economic theorists who have recognized that the acceptability of a fiat money for discharging tax liabilities helps account for its value, the proposition is now generally associated with the chartalist views of G. F. Knapp, whose views have been explicitly cited in recent works by economists associated with what is known as Modern Monetary Theory (MMT). While the capacity of fiat money to discharge tax liabilities is surely an important aspect of MMT, not all propositions associated with MMT automatically follow from that premise. Recognizing the role of the capacity of fiat money to discharge tax liabilities, Knapp counterposed his “state theory of money” to the metallist theory. The latter holds that the institution of money evolved from barter trade, because certain valuable commodities, especially precious metals, became widely used as media of exchange, because, for whatever reason, they were readily accepted in exchange, thereby triggering the self-reinforcing network effects discussed above.[1]

However, the often bitter debates between chartalists and metallists notwithstanding, there is no necessary, or logical, inconsistency between the theories. Both theories about the origin of money could be simultaneously true, each under different historical conditions. Each theory posits an explanation for why a monetary instrument providing no direct service is readily accepted in exchange. That one explanation could be true does not entail the falsity of the other.

Taking chartalism as its theoretical foundation, MMT focuses on a set of accounting identities that are presumed to embody deep structural relationships. Because money is regarded as the creature of the state, the quantity of money is said to reflect the cumulative difference between government tax revenues and expenditures, the excess of expenditures over revenues being financed by issuing fiat money. The role of government bonds is to provide a buffer with which to smooth the short-term discrepancies between the inflow of taxes (recurrently peaking at particular times of the year when tax payments come due) and the outflow of government expenditures.

But the problem with MMT, shared with many other sorts of monetary theory, is that it focuses on a particular causal relationship, working through the implications of that relationship conditioned on a ceteris-paribus assumption that all other relationships are held constant and are unaffected by the changes on which the theory is focusing, regardless of whether the assumption can be maintained.

For example, MMT posits that increases in taxes are deflationary and reductions in taxes are inflationary, because an increase in taxes implies a net drain of purchasing power from the private sector to the government sector and a reduction in taxes implies an injection of purchasing power.[2] According to MMT, the price level reflects the relationship between total spending and total available productive resources. At given current prices, some level of total spending would just suffice to ensure that all available resources are fully employed. If total spending exceeds that amount, the excess spending must cause prices to rise to absorb the extra spending.

This naïve theory of inflation captures a basic intuition about the effect of increasing the rate of spending, but it is not a complete theory of inflation, because the level of spending depends not only on how much the government spends and how much tax revenue it collects; it also depends on, among other things, whether the public is trying to add to, or to reduce, the quantity of cash balances being held. Now it’s true that an efficiently operating banking system tends to adjust the quantity of cash to the demands of the public, but the banking system also has its own demand for the reserves that the government, via the central bank, makes available to be held, and the quantity of reserves made available may match, exceed, or fall short of the amount that banks at any moment wish to hold.
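The point can be put in quantity-theory bookkeeping. A minimal sketch, with purely hypothetical numbers, of why the fiscal injection alone does not pin down the price level:

```python
# Illustrative only: all numbers are hypothetical. Total spending is
# M*V (money stock times velocity), and the price level is spending
# divided by real output, so fiscal flows that change M don't settle
# the price level unless we also know what the public wants to hold.

def price_level(money_stock, velocity, real_output):
    """Quantity-theory bookkeeping: P = M*V / y."""
    return money_stock * velocity / real_output

real_output = 1000.0              # full-employment output, real terms
m0, v0 = 500.0, 2.0               # initial money stock and velocity
p0 = price_level(m0, v0, real_output)          # 1.0

# A deficit financed by issuing 10% more fiat money...
m1 = m0 * 1.10
p_naive = price_level(m1, v0, real_output)     # 1.10: naive inflation

# ...but if the public chooses to hold the extra balances idle
# (velocity falls in proportion), the injection is absorbed and the
# price level is unchanged.
v1 = v0 / 1.10
p_offset = price_level(m1, v1, real_output)    # back to (approximately) 1.0
```

The same accounting works in reverse: an unchanged deficit combined with a fall in the public's desired cash balances (a rise in velocity) is inflationary even though the fiscal stance hasn't moved.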

There is an interbank market for reserves, but if the amount of reserves that the government central bank creates differs systematically from the amount of reserves that banks wish to hold, the discrepancy will have repercussions on total spending. MMT theorists insist that the government central bank is obligated to provide whatever quantity of reserves is demanded, but that’s because banks’ demand to hold reserves is a function of the interest foregone by holding them. Given the cost of holding reserves implied by the interest-rate target established by the government central bank, the banking system will demand a corresponding quantity of reserves, and, at that interest rate, government central banks will supply all the reserves demanded. But that doesn’t mean that, in setting its target rate, the government central bank isn’t implicitly determining the quantity of reserves for the entire system, thereby exercising an independent influence on the price level or the rate of inflation that must be reconciled with the fiscal stance of the government.

A tendency toward oversimplification is hardly unique to MMT. It’s also characteristic of older schools of thought, like the metallist theory of money, the polar opposite of MMT and the chartalist theory. The metallist theory asserts that the value of a metallic money must equal the value of the amount of the metal represented by any particular monetary unit defined in terms of that metal. Under a gold standard, for example, all monetary units represent some particular quantity of gold, and the relative values of those units correspond to the ratios of the gold represented by those units. The value of a gold-standard currency therefore doesn’t deviate more than trivially from the value of the amount of gold represented by the currency.

But, here again, we confront a simplification; the value of gold, or of any commodity serving as a monetary standard, isn’t independent of its monetary-standard function. The value of any commodity depends on the total demand for any and all purposes for which it is, or may be, used. If gold serves as money, either as coins actually exchanged or as reserves sitting in bank vaults, that amount of gold is withdrawn from potential non-monetary uses, so that the value of gold relative to other commodities must rise to reflect the diversion of that portion of the total stock from non-monetary uses. If the demand to hold money rises, and the additional money that must be created to meet that demand requires additional gold to be converted into monetary form, either as coins or as reserves held by banks, the additional derived demand for gold tends to increase the value of gold, and, as a result, the value of money.

Moreover, insofar as governments accumulate reserves of gold that are otherwise held idle, the decision about how much gold reserves to continue holding in relation to the monetary claims on those reserves also affects the value of gold. It’s therefore not necessarily correct to say that, under a gold standard, the value of gold determines the value of money. The strictly correct proposition is that, under a gold standard, the value of gold and the value of money must be equal. But the value of money causally affects the value of gold no less than the value of gold causally affects the value of money.
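The two-way causation can be illustrated with a toy market-clearing condition. The numbers and the demand curve below are entirely made up, only to show the direction of the effect:

```python
# Illustrative sketch (hypothetical numbers): with a fixed total stock
# of gold, the equilibrium value of gold is set jointly by non-monetary
# demand and by how much gold is absorbed into monetary reserves.

TOTAL_STOCK = 100.0   # total gold in existence (arbitrary units)
A = 200.0             # scale of non-monetary demand: quantity = A / value

def gold_value(monetary_holdings):
    """Value of gold clearing the market: A/value + holdings = stock."""
    nonmonetary_stock = TOTAL_STOCK - monetary_holdings
    return A / nonmonetary_stock

v_low  = gold_value(20.0)   # modest reserve holdings -> value 2.5
v_high = gold_value(60.0)   # central banks hoard gold -> value 5.0

# Tripling the gold locked up in reserves (20 -> 60) doubles the value
# of gold, and hence, under a gold standard, the value of money: the
# monetary demand for gold determines the value of gold no less than
# the value of gold determines the value of money.
```

Nothing in the exercise depends on the particular demand curve; any downward-sloping non-monetary demand yields the same qualitative result.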

In the context of a fiat money, whose value necessarily reflects expectations of its future purchasing power, it is not only the current policies of the government and the monetary authority, but expectations about future economic conditions and about the future responses of policy-makers to those conditions, that determine the value of a fiat money. A useful theory of the value of money and of the effect of monetary policy on the value of money cannot be formulated without taking the expectations of individuals into account. Rational expectations may be a useful first step in formulating models that explicitly take expectations into account, but the underlying suppositions of most rational-expectations models are too far-fetched – especially the assumption that all expectations converge on the “correct” probability distributions of all future prices – to provide practical insight, much less useful policy guidance (Glasner 2020).

So, in the end, all simple theories of causation, like MMT, that suggest one particular variable determines the value of another are untenable in any complex system of mutually interrelated phenomena (Hayek 1967). There are few systems in nature as complex as a modern economy; only if it were possible to write out a complete system of equations describing all those interrelationships, could we trace out the effects of increasing the income tax rate or the level of government spending on the overall price level, as MMT claims to do. But for a complex interrelated system, no direct causal relationship between any two variables to the exclusion of all the others is likely to serve as a reliable guide to policy except in special situations when it can plausibly be assumed that a ceteris-paribus assumption is likely to be even approximately true.

[1] The classic exposition of this theory of money was provided by Carl Menger (1892).

 

[2] In an alternate version of the tax theory of inflation, an increase in taxes increases the value of money by increasing the demand for money at the moment when tax liabilities come due. The value of money is determined by its value at those peak periods, and it is the expected value of money at those peak periods that maintains its value during non-peak periods. The problem with this version is that it presumes that the value of money is solely a function of its value in discharging tax liabilities, but money is also demanded to serve as a medium of exchange, which implies an increase in value above the value it would have solely from the demand occasioned by its acceptability to discharge tax liabilities.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics) which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [Menger’s] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, were brothers-in-law. [Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927, and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899), who was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up either in the US or Britain after World War II, and Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he succeeded in training two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, but Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. However, Hayek’s interests shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist supporters who provided financial support to Hayek, enabling his appointment to the Committee on Social Thought at the University of Chicago, and Mises’s appointment at NYU, and other forms of research support to Hayek, Mises and other like-minded scholars, as well as funding the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not discredit or invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions that were most uncongenial to the Austrians: Walrasian general-equilibrium theory and Keynesian macroeconomics, the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to become the leader of a fantastical anarchistic revolutionary movement to replace all established governments under a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books filling 10 thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU, training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted by Cato, and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream’s misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, which effectively make equilibrium the inevitable — and trivial — conclusion entailed by those extreme assumptions. In their view, the focus of mainstream models on equilibrium states with unrealistic assumptions results from a preoccupation with mathematical formalism in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly held that a neutral (stable) monetary policy ought to keep the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation entailed a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of that program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student, Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not was unfortunate not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants — that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School which simply posits the existence of an equilibrium price vector, and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants are a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, but especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who shared an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled — i.e., rendered mutually consistent — in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
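To make the point concrete, here is a minimal sketch, in Python, of how specifying payoffs turns the Holmes-Moriarty standoff into a solvable 2x2 zero-sum game with optimal probabilistic (mixed) strategies. The payoff numbers are wholly hypothetical illustrative assumptions (Morgenstern specified none); the solution method is the standard indifference condition for 2x2 zero-sum games.

```python
import numpy as np

# Hypothetical payoffs to Holmes in a 2x2 zero-sum game.
# Rows: Holmes's two choices (e.g., get off at the intermediate stop, stay on);
# columns: Moriarty's two choices. Moriarty receives the negative of each entry.
A = np.array([[-1.0,  1.0],
              [ 1.0, -1.0]])

def mixed_equilibrium_2x2(A):
    """Solve the indifference conditions of a 2x2 zero-sum game.

    The row player's mixing probability p is chosen so that the column
    player is indifferent between his two columns, and symmetrically for q.
    (Assumes the game has no saddle point, so the denominator is nonzero.)
    """
    a, b = A[0, 0], A[0, 1]
    c, d = A[1, 0], A[1, 1]
    denom = a - b - c + d
    p = (d - c) / denom          # probability row player plays row 0
    q = (d - b) / denom          # probability column player plays column 0
    value = (a * d - b * c) / denom  # value of the game to the row player
    return p, q, value

p, q, v = mixed_equilibrium_2x2(A)
print(p, q, v)  # 0.5 0.5 0.0 for these symmetric, matching-pennies-style payoffs
```

With these symmetric payoffs the indifference conditions give each player a 50-50 mixture; other payoff numbers would shift the probabilities, but the logic is the same: each player’s mixture is pinned down by the requirement that the other be indifferent, which is precisely the sense in which the two agents’ choices are rendered mutually consistent rather than merely individually optimal.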

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin which attempted to derive or at least evaluate key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, offering a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market price adjustments as functions of market excess demands, would converge (in the Lyapunov sense) on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
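The flavor of such a tatonnement exercise can be conveyed with a toy simulation. The sketch below is my own illustration, not Samuelson’s apparatus: a two-trader exchange economy with Cobb-Douglas preferences (a class known to satisfy gross substitutability, for which tatonnement is known to converge), in which the auctioneer raises each price in proportion to the good’s excess demand, with no trading allowed until the process terminates.

```python
import numpy as np

# Illustrative two-trader, two-good exchange economy (assumed, not from Samuelson).
endowments = np.array([[1.0, 0.0],   # trader 1 owns one unit of good 1
                       [0.0, 1.0]])  # trader 2 owns one unit of good 2
alphas = np.array([0.6, 0.3])        # Cobb-Douglas weight each trader puts on good 1

def excess_demand(p):
    """Aggregate excess demand z(p) at price vector p."""
    z = np.zeros(2)
    for w, a in zip(endowments, alphas):
        wealth = p @ w
        demand = np.array([a * wealth / p[0], (1 - a) * wealth / p[1]])
        z += demand - w
    return z

# Discrete-time tatonnement: raise each price in proportion to excess demand,
# holding good 2 fixed as numeraire.
p = np.array([2.0, 1.0])
for _ in range(2000):
    p = p + 0.1 * excess_demand(p)
    p[1] = 1.0  # numeraire
print(p)  # converges to approximately [0.75, 1.0]
```

Under these assumptions the relative price of good 1 gropes its way to 0.75, where excess demands vanish. The point of the exercise, as the text emphasizes, is not realism: no actual economy clears markets through a centralized auctioneer adjusting quoted prices before any trade occurs. It simply exhibits the kind of fictional adjustment process on which the formal stability results rested.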

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped the determination of the intertemporal-equilibrium price vector into a preliminary period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization is a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straitjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge: the pretense of upholding equilibrium discipline conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of non-cooperative games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable, because the inherent theoretical weaknesses of both paradigms could not withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model and the absence of a market for real capital or for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
