Archive for the 'New Keynesians' Category

Axel Leijonhufvud and Modern Macroeconomics

For many baby boomers like me growing up in Los Angeles, UCLA was an almost inevitable choice for college. As an incoming freshman, I was undecided whether to major in political science or economics. PoliSci 1 didn’t impress me, but Econ 1 did. More than my Econ 1 professor, it was the assigned textbook, University Economics, 1st edition, by Alchian and Allen that impressed me. That’s how my career in economics started.

After taking introductory micro and macro as a freshman, I started the intermediate theory sequence as a sophomore: micro (utility and cost theory, Econ 101a; general-equilibrium theory, 101b) and macro theory (Econ 102). It was in the winter 1968 quarter that I encountered Axel Leijonhufvud. This was about a year before his famous book – his doctoral dissertation – On Keynesian Economics and the Economics of Keynes was published in the fall of 1968 to instant acclaim. Although it must have been known in the department that the book, which he’d been working on for several years, would soon appear, I doubt that its remarkable impact on the economics profession could have been anticipated, turning Axel almost overnight from an obscure untenured assistant professor into a tenured professor at one of the top economics departments in the world and a kind of academic rock star widely sought after to lecture and appear at conferences around the globe. I offer the following scattered recollections of him, drawn from memories at least a half-century old, to those interested in his writings, along with some reflections on his rise to the top of the profession, followed by a gradual loss of influence as theoretical macroeconomics fell under the sway of Robert Lucas and the rational-expectations movement in its various forms (New Classical, Real Business-Cycle, New-Keynesian).

Axel, then in his early to mid-thirties, was an imposing figure, very tall and gaunt with a short beard and a shock of wavy blondish hair, though his attire reflected the lowly position he then occupied in the academic hierarchy. He spoke perfect English with a distinct Swedish lilt, frequently leavening his lectures and responses to students’ questions with wry and witty comments and asides.

Axel’s presentation of general-equilibrium theory was, as was then still the norm, at least at UCLA, mostly graphical, supplemented occasionally by some algebra and elementary calculus. The Edgeworth box was his principal technique for analyzing both bilateral trade and production in the simple two-output, two-input case, and he used it to elucidate concepts like Pareto optimality, general-equilibrium prices, and the two welfare theorems, an exposition which I, at least, found deeply satisfying. The assigned readings were the classic paper by F. M. Bator, “The Simple Analytics of Welfare-Maximization,” which I relied on heavily to gain a working grasp of the basics of general-equilibrium theory, and, as a supplementary text, Peter Newman’s The Theory of Exchange, much of which was too advanced for me to comprehend more than superficially. Axel also introduced us to the concept of tâtonnement, highlighting its importance as an explanation of sorts of how the equilibrium price vector might, at least in theory, be found, an issue whose profound significance I then only vaguely comprehended, if at all. Another assigned text was Modern Capital Theory by Donald Dewey, providing an introduction to the role of capital, time, and the rate of interest in monetary and macroeconomic theory and a bridge to the intermediate macro course that he would teach the following quarter.
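To give a sense of what the tâtonnement Axel emphasized involves, here is a minimal sketch in modern notation (the notation is mine, not his). A fictional auctioneer announces a trial price vector p, collects the excess demands z_i(p) that agents report for each good i, and, with no trading permitted until the process has converged, adjusts each price in the direction of its excess demand:

\dot{p}_i = k_i \, z_i(p_1, \ldots, p_n), \qquad k_i > 0, \qquad i = 1, \ldots, n.

An equilibrium price vector p* is a rest point of this process, where z_i(p*) = 0 for every good (with the usual qualification for free goods). The difficulty, of course, is that no actual market institution performs this groping, and real-world trading does not wait for it to converge.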

A highlight of Axel’s general-equilibrium course was the guest lecture by Bob Clower, then visiting UCLA from Northwestern, with whom Axel became friendly only after leaving Northwestern, and two of whose papers (“A Reconsideration of the Microfoundations of Monetary Theory” and “The Keynesian Counterrevolution: A Theoretical Appraisal”) were discussed at length in his forthcoming book. (The collaboration between Clower and Leijonhufvud and their early Northwestern connection has led to the mistaken idea that Clower had been Axel’s thesis advisor. Axel’s dissertation was actually written under Meyer Burstein.) Clower himself came to UCLA economics a few years later, when I was already a third-year graduate student, and my contact with him was confined to seeing him at seminars and workshops. I still have a vivid memory of Bob in his lecture explaining, with the aid of chalk and a blackboard, how ballistic theory was developed into an orbital theory by way of a conceptual experiment in which the distance travelled by a projectile launched from a fixed position is progressively lengthened until the projectile’s trajectory transitions into an orbit around the earth.

Axel devoted the first part of his macro course to extending the Keynesian-cross diagram we had been taught in introductory macro into the Hicksian IS-LM model by making investment a negative function of the rate of interest and adding a money market with a fixed money stock and a demand for money that is a negative function of the interest rate. Depending on the assumptions about elasticities, IS-LM was an analytical vehicle that could accommodate either the extreme Keynesian-cross case, in which fiscal policy is all-powerful and monetary policy is ineffective, or the Monetarist (classical) case, in which fiscal policy is ineffective and monetary policy all-powerful, so that macroeconomics was often framed as a debate about the elasticity of the demand for money with respect to the interest rate. Friedman himself, in his not very successful attempt to articulate his own framework for monetary analysis, accepted that framing, one of the few rhetorical and polemical misfires of his career.
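In bare-bones notation (a standard textbook rendering, not necessarily the notation Axel used), the model consists of two equilibrium conditions:

IS (goods market): Y = C(Y) + I(r) + G, \quad \text{with } I'(r) < 0;
LM (money market): M/P = L(Y, r), \quad \text{with } \partial L/\partial Y > 0 \text{ and } \partial L/\partial r < 0.

The extreme Keynesian-cross case corresponds to a demand for money infinitely elastic with respect to r (a flat LM curve, so monetary expansion cannot lower the interest rate, while fiscal expansion is fully effective); the classical case corresponds to a demand for money completely insensitive to r (a vertical LM curve, so fiscal expansion merely raises r and crowds out an equal amount of investment, while changes in M translate directly into changes in nominal income).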

In his intermediate macro course, Axel presented the standard macro model, and I don’t remember his weighing in that much with his own criticism; he didn’t teach from a standard intermediate macro textbook, standard textbook versions of the dominant Keynesian model not being at all to his liking. Instead, he assigned early sources of what became Keynesian economics, like Hicks’s 1937 exposition of the IS-LM model and Alvin Hansen’s A Guide to Keynes (1953), with Friedman’s 1956 restatement of the quantity theory serving as a counterpoint, and further developments of Keynesian thought, like Patinkin’s 1948 paper on price flexibility and full employment, A. W. Phillips’s original derivation of the Phillips Curve, Harry Johnson on the General Theory after 25 years, and his own preview of his forthcoming book, “Keynes and the Keynesians: A Suggested Interpretation,” and probably others that I’m not now remembering. Presenting the material piecemeal from original sources allowed him to underscore the weaknesses and questionable assumptions latent in the standard Keynesian model.

Of course, for most of us, it was a challenge just to reproduce the standard model and apply it to some specific problems, but at least we got the sense that there was more going on under the hood of the model than we would have imagined had we learned its structure from a standard macro text. I have the melancholy feeling that the passage of years has dimmed my memory of his teaching too much to adequately describe how stimulating, amusing and enjoyable his lectures were to those of us just starting our journey into economic theory.

The following quarter, in fall 1968, when his book had just appeared in print, Axel created a new advanced course called macrodynamics. He talked a lot about Wicksell and Keynes, of course, but he was then also fascinated by the work of Norbert Wiener on cybernetics, assigning Wiener’s book Cybernetics as a primary text and a key to understanding what Keynes was really trying to do. He introduced us to concepts like positive and negative feedback, servo mechanisms, and stable and unstable dynamic systems, and related those concepts to economic ideas like the price mechanism, stable and unstable equilibria, and business cycles. Here’s how he put it in On Keynesian Economics and the Economics of Keynes:

Cybernetics as a formal theory, of course, began to develop only during the war, and it was only with the appearance of . . . Wiener’s book in 1948 that the first results of serious work on a general theory of dynamic systems – and the term itself – reached a wider public. Even then, research in this field seemed remote from economic problems, and it is thus not surprising that the first decade or more of the Keynesian debate did not go in this direction. But it is surprising that so few monetary economists have caught on to developments in this field in the last ten or twelve years, and that the work of those who have has not triggered a more dramatic chain reaction. This, I believe, is the Keynesian Revolution that did not come off.

In conveying the essential departure of cybernetics from traditional physics, Wiener once noted:

Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth-century physics, it seemed to cost nothing to get information.

In context, the reference was to Maxwell’s Demon. In its economic reincarnation as Walras’ auctioneer, the demon has not yet been exorcised. But this certainly must be what Keynes tried to do. If a single distinction is to be drawn between the Economics of Keynes and the economics of our grandfathers, this is it. It is only on this basis that Keynes’ claim to have essayed a more “general theory” can be maintained. If this distinction is not recognized as both valid and important, I believe we must conclude that Keynes’ contribution to pure theory is nil.

Axel’s hopes that cybernetics could provide an analytical tool with which to bring Keynes’s insights about informational scarcity to bear on macroeconomic analysis were never fulfilled. A glance at the index of Axel’s excellent collection of essays written between the late 1960s and the late 1970s, Information and Coordination, reveals not a single reference either to cybernetics or to Wiener. Instead, to his chagrin and disappointment, macroeconomics took a completely different direction, following the path blazed by Robert Lucas and his followers, who insisted on a nearly continuous state of rational-expectations equilibrium and implicitly denied that there is an intertemporal coordination problem for macroeconomics to analyze, much less to solve.

After getting my BA in economics at UCLA, I stayed put and began my graduate studies there in the next academic year, taking the graduate micro sequence given that year by Jack Hirshleifer, the graduate macro sequence with Axel and the graduate monetary theory sequence with Ben Klein, who started his career as a monetary economist before devoting himself a few years later entirely to IO and antitrust.

Not surprisingly, Axel’s macro course drew heavily on his book, which meant it drew heavily on the history of macroeconomics including, of course, Keynes himself, but also his Cambridge predecessors and collaborators, his friendly, and not so friendly, adversaries, and the Keynesians who followed him. His main point was that if you take Keynes seriously, you can’t argue, as the standard 1960s neoclassical synthesis did, that the main lesson taught by Keynes was that if the real wage in an economy is somehow stuck above the market-clearing level, an increase in aggregate demand is necessary to allow the labor market to clear at the prevailing nominal wage by raising the price level, thereby reducing the real wage to the market-clearing level.

This interpretation of Keynes, Axel argued, trivialized Keynes by implying that he didn’t say anything that had not been said previously by his predecessors who had also blamed high unemployment on wages being kept above market-clearing levels by minimum-wage legislation or the anticompetitive conduct of trade-union monopolies.

Axel sought to reinterpret Keynes as an early precursor of the search theories of unemployment subsequently developed by Armen Alchian and Edward Phelps, who would soon be followed by others, including Robert Lucas. Because negative shocks to aggregate demand are rarely anticipated, and because the immediate wage and price adjustments to a new post-shock equilibrium price vector that would maintain full employment could occur only under the imaginary tâtonnement system naively taken as the paradigm for price adjustment under competitive market conditions, Keynes believed that a deliberate countercyclical policy response was needed to avoid a potentially long-lasting or permanent decline in output and employment. The issue is not price flexibility per se, but finding the equilibrium price vector consistent with intertemporal coordination. Price flexibility that doesn’t arrive quickly (immediately?) at the equilibrium price vector achieves nothing. Trading at disequilibrium prices leads inevitably to a contraction of output and income. In an inspired turn of phrase, Axel called this cumulative process of aggregate-demand shrinkage Say’s Principle, which years later led me to write my paper “Say’s Law and the Classical Theory of Depressions,” included as Chapter 9 of my recent book Studies in the History of Monetary Theory.
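The cumulative character of the contraction can be conveyed with textbook multiplier arithmetic (my illustration, not Axel’s own formalization). Suppose a negative shock reduces autonomous spending by \Delta A, and suppose income-constrained households spend a fraction c (with 0 < c < 1) of each dollar of income. The initial drop in spending reduces incomes by \Delta A, which reduces spending by c \Delta A, which reduces incomes again, and so on:

\Delta Y = \Delta A \, (1 + c + c^2 + c^3 + \cdots) = \frac{\Delta A}{1 - c}.

With c = 0.75, a one-dollar shock ultimately shrinks income by four dollars. Each round of the contraction occurs because agents are trading at disequilibrium prices and adjusting quantities (their spending) instead of instantly finding the new equilibrium price vector.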

In Axel’s view, the great contribution of Keynes was his attention to the implications of the lack of an actual coordinating mechanism, a mechanism that neoclassical economic theory had simply assumed (either in the form of Walrasian tâtonnement or the implicit Marshallian ceteris paribus assumption). Axel deplored the neoclassical synthesis, because its rote acceptance of the neoclassical equilibrium paradigm trivialized Keynes’s contribution, treating unemployment as a phenomenon attributable to sticky or rigid wages without inquiring whether alternative informational assumptions could explain unemployment even with flexible wages.

The new literature on search theories of unemployment advanced by Alchian, Phelps, et al. and the success of his book gave Axel hope that a deepened version of neoclassical economic theory that paid attention to its underlying informational assumptions could lead to a meaningful reconciliation of the economics of Keynes with neoclassical theory and replace the superficial neoclassical synthesis of the 1960s. That quest for an alternative version of neoclassical economic theory was for a while subsumed under the trite heading of finding microfoundations for macroeconomics, by which was meant finding a way to explain Keynesian (involuntary) unemployment caused by deficient aggregate demand without invoking special ad hoc assumptions like rigid or sticky wages and prices. The objective was to analyze the optimizing behavior of individual agents given limitations in or imperfections of the information available to them and to identify and provide remedies for the disequilibrium conditions that characterize coordination failures.

For a short time, perhaps from the early 1970s until the early 1980s, a number of seemingly promising attempts to develop a disequilibrium theory of macroeconomics appeared, most notably by Robert Barro and Herschel Grossman in the US, and by J. P. Benassy, J. M. Grandmont, and Edmond Malinvaud in France. Axel and Clower were largely critical of these efforts, regarding them as defective and even misguided in many respects.

But at about the same time, another, very different, approach to microfoundations was emerging, inspired by the work of Robert Lucas and Thomas Sargent and their followers, who were introducing the concept of rational expectations into macroeconomics. Axel and Clower had focused their dissatisfaction with neoclassical economics on the rise of the Walrasian paradigm, which used the obviously fantastical invention of a tâtonnement process to account for the attainment of an equilibrium price vector perfectly coordinating all economic activity. They argued for an interpretation of Keynes’s contribution as an attempt to steer economics away from an untenable theoretical and analytical paradigm rather than, as the neoclassical synthesis had done, to make peace with it through the adoption of ad hoc assumptions about price and wage rigidity, thereby draining Keynes’s contribution of novelty and significance.

And then Lucas came along, dispensing with the auctioneer and eliminating tâtonnement while achieving the same result by way of a methodological stratagem in three parts: (a) all agents are treated as equilibrium optimizers; (b) all agents therefore form identical rational expectations of all future prices using the same common knowledge; so that (c) all agents correctly anticipate the equilibrium price vector that earlier economists had assumed could be found only through the intervention of an imaginary auctioneer conducting a fantastical tâtonnement process.

The methodological imperatives laid down by Lucas were enforced with a rigorous discipline more befitting a religious order than an academic research community. The discipline of equilibrium reasoning, it was decreed by methodological fiat, imposed a question-begging research strategy on researchers, in which correct knowledge of future prices became part of the endowment of all optimizing agents.

While microfoundations for Axel, Clower, Alchian, Phelps and their collaborators and followers had meant relaxing the informational assumptions of the standard neoclassical model, for Lucas and his followers microfoundations came to mean that each and every individual agent must be assumed to have all the knowledge that exists in the model. Otherwise the rational-expectations assumption required by the model could not be justified.

The early Lucasian models did assume a certain kind of informational imperfection: an ambiguity about whether observed price changes were relative changes or absolute changes, which would be resolved only after a one-period time lag. However, the observed serial correlation in aggregate time series could not be rationalized by an informational ambiguity resolved after just one period. This deficiency in the original Lucasian model led to the development of real-business-cycle models that attribute business cycles to real productivity shocks and dispense with Lucasian informational ambiguity in accounting for observed aggregate time-series fluctuations. So-called New Keynesian economists chimed in with ad hoc assumptions about wage and price stickiness to create a new neoclassical synthesis to replace the old synthesis, but with little claim to any actual analytical insight.
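The informational ambiguity in those early models can be stated compactly; here is a stylized rendering of the signal-extraction problem (standard in the literature, though the notation here is mine). An agent observes only the price of his own good, p_i = p + z_i, where p is the unobserved aggregate (absolute) component with variance \sigma_p^2, and z_i is the relative component with mean zero and variance \sigma_z^2. The agent’s best estimate of the relative component is

E[z_i \mid p_i] = \frac{\sigma_z^2}{\sigma_z^2 + \sigma_p^2}\,\bigl(p_i - E[p]\bigr),

so part of any purely monetary increase in prices is misread as a favorable relative price change, inducing a temporary output response. Once the aggregate price level is observed, after a one-period lag, the confusion disappears, which is why such a model cannot by itself generate the persistent, serially correlated fluctuations observed in aggregate time series.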

The success of the Lucasian paradigm was disheartening to Axel, and his research agenda gradually shifted from macroeconomic theory to applied policy, especially inflation control in developing countries. Although my own interest in macroeconomics was largely inspired by Axel, my approach to macroeconomics and monetary theory eventually diverged from his when, in my last couple of years of graduate work at UCLA, I became close to Earl Thompson, whose courses I had taken neither as an undergraduate nor as a graduate student. I had read some of Earl’s monetary theory papers when preparing for my preliminary exams; I found them interesting but quirky and difficult to understand. After I had already started writing my dissertation under Harold Demsetz on an IO topic, I decided — I think at the urging of my friend and eventual co-author, Ron Batchelder — to sit in on Earl’s graduate macro sequence, which he would sometimes offer as an alternative to Axel’s more popular graduate macro sequence. It was a relatively small group (probably not more than 25 or so attended) that met one evening a week for three hours. Each session (and sometimes more than one session) was devoted to discussing one of Earl’s published or unpublished macroeconomic or monetary theory papers. Hearing Earl explain his papers and respond to questions and criticisms brought them alive to me in a way that just reading them had never done, and I gradually realized that his arguments, which I had previously dismissed or misunderstood, were actually profoundly insightful and theoretically compelling.

For me at least, Earl provided a more systematic way of thinking about macroeconomics and a more systematic critique of standard macro than I could piece together from Axel’s writings and lectures. But one of the lessons that I had learned from Axel was the seminal importance of two Hayek essays: “The Use of Knowledge in Society,” and, especially “Economics and Knowledge.” The former essay is the easier to understand, and I got the gist of it on my first reading; the latter essay is more subtle and harder to follow, and it took years and a number of readings before I could really follow it. I’m not sure when I began to really understand it, but it might have been when I heard Earl expound on the importance of Hicks’s temporary-equilibrium method first introduced in Value and Capital.

In working out the temporary-equilibrium method, Hicks relied on the work of Myrdal, Lindahl and Hayek. As Earl explained it, the method rests on the assumption that markets for current delivery clear, but that the market-clearing prices may differ from the prices agents had expected when formulating their optimal intertemporal plans, causing agents to revise their plans and their expectations of future prices. That seemed to me the proper way to think about the intertemporal-coordination failures that Axel was so concerned about, but somehow he never made the connection between Hayek’s work, which he greatly admired, and the Hicksian temporary-equilibrium method, which I never heard him refer to, even though he also greatly admired Hicks.

It always seemed to me that a collaboration between Earl and Axel could have been really productive and might even have led to an alternative to the Lucasian reign over macroeconomics. But for some reason, no such collaboration ever took place, and macroeconomics was impoverished as a result. They are both gone, but we still benefit from having Duncan Foley with us, still active and still making important contributions to our understanding. And we should be grateful.

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.
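Menger’s imputation idea can be restated in modern marginal-productivity notation (notation that postdates Menger and is offered here only as a gloss). If a production process q = f(x_1, \ldots, x_n) yields output salable at price p, the value imputed to the services of factor j is its marginal value product,

w_j = p \cdot \frac{\partial f}{\partial x_j},

so the values of factors are derived from the (expected) value of the output they help produce. The cost of employing factor j in any one process is then the marginal value product the factor would have yielded in its next most highly valued use; cost is itself a derived valuation, determined by, not determining, equilibrium prices.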

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [its] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, [were brothers-in-law. Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple, von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927 and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were close contemporaries, from Hayek (born in 1899), who was in turn a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up in either the US or Britain after World War II. Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU; Rothbard, though a devoted disciple, did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek’s interests, however, shifted from pure economic theory to deep philosophical questions, and while Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not discredit or invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because the Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to become the leader of a fantastical anarchistic revolutionary movement to replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works issued by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted from Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream’s misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, which effectively make equilibrium the inevitable — and trivial — conclusion entailed by those extreme assumptions. In their view, the focus of mainstream models on equilibrium states reached under unrealistic assumptions results from a preoccupation with mathematical formalism, in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now-routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is common knowledge among the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, equally semi-omniscient, agents.

Although shreds of subjectivism — i.e., the recognition that agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites this mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice, his warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to keep the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of his research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and as the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32; it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not was unfortunate, and not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It is precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants — that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s more granular exposition enabled him to attend to, and articulate, the necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate asserting that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustment by market participants provides a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, the Austrian belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward the equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment, while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory rationalization (at least in terms of the preferred Walrasian general-equilibrium model) for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

A Tale of Two Syntheses

I recently finished reading a slender but weighty collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the idea of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now-regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, demonstrated the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and who tutored Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who was interested in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if the payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent – in equilibrium. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
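To see what “optimal probabilistic strategies” means here, consider a deliberately stylized version of the pursuit game with made-up payoffs (von Neumann and Morgenstern’s own numerical example differs). Holmes and Moriarty each choose between two stations, Dover and Canterbury. Moriarty wins (+1 to Moriarty, -1 to Holmes) if they choose the same station and loses (-1 to Moriarty, +1 to Holmes) if they choose different ones. No pair of pure choices is mutually consistent: whichever station one picks, the other would want to revise his own choice, which was Morgenstern’s original point against perfect foresight. But if Moriarty goes to Dover with probability q, Holmes’s expected payoffs from Dover and from Canterbury are

-q + (1 - q) = 1 - 2q \qquad \text{and} \qquad q - (1 - q) = 2q - 1,

which are equal only when q = 1/2; by symmetry, Holmes must also randomize 1/2-1/2. The mixed strategies are mutually consistent, each being a best response to the other, which is precisely the reconciliation of independent choices that an equilibrium requires.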

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of differential equations, describing market price adjustments as functions of market excess demands, would converge, in the Lyapunov sense, on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
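
To illustrate what such a price-adjustment system amounts to, here is a minimal numerical sketch, assuming a toy two-person, two-good exchange economy with Cobb-Douglas preferences (all parameter values invented for illustration):

```python
# A minimal sketch of Samuelson-style tatonnement dynamics in a toy
# two-person, two-good exchange economy with Cobb-Douglas preferences.
# All parameter values are invented for illustration.
import numpy as np

endow = np.array([[1.0, 0.0],    # consumer 1 is endowed with good 1
                  [0.0, 1.0]])   # consumer 2 is endowed with good 2
alpha = np.array([[0.3, 0.7],    # consumer 1's expenditure shares
                  [0.6, 0.4]])   # consumer 2's expenditure shares

def excess_demand(p):
    wealth = endow @ p                     # value of each endowment at p
    demand = alpha * wealth[:, None] / p   # Cobb-Douglas demands
    return demand.sum(axis=0) - endow.sum(axis=0)

p = np.array([1.0, 3.0])                   # arbitrary starting prices
for _ in range(2000):
    p = p + 0.01 * excess_demand(p)        # dp/dt = excess demand
    p = p / p[0]                           # good 1 as numeraire

print("approximate equilibrium prices:", p)                  # about [1.0, 1.167]
print("excess demands at those prices:", excess_demand(p))   # near zero
```

Because this toy economy satisfies gross substitutability, the process converges; Scarf later constructed examples in which the tatonnement never converges, one reason the stability results proved so disappointing.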

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from the initial to a new equilibrium. If the economy isn’t stable, a disturbance causing the economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics exercises routinely relied on by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. In extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had grappled with in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay, by Phil Mirowski, investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that, before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, but only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization is a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, is the correlative principle of market-clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later, after right-thinking had been enforced: woe unto him who dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non-sequitur. And from that non-sequitur, Lucas jumps to a further non-sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding the principle conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If agents interact with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined the conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that, because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable, because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model and the absence both of a market for real capital and of endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary-equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that, consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

 

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving “meaningful theorems” from the 1st- and 2nd-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” were a misnomer reflecting mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions b/w markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”
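
To make the point concrete, here is a minimal sketch, assuming an invented linear demand and supply for a single small market, of what such a comparative-statics computation delivers (and what it leaves out):

```python
# A minimal sketch of a comparative-statics exercise in a single (PE)
# market, with invented linear demand and supply and a per-unit tax t.
# Demand: Qd = a - b*p ; supply: Qs = c + d*(p - t).
def equilibrium(a, b, c, d, t):
    p = (a - c + d * t) / (b + d)   # solves a - b*p = c + d*(p - t)
    q = a - b * p
    return p, q

p0, q0 = equilibrium(a=10, b=1, c=2, d=1, t=0.0)  # equilibrium before
p1, q1 = equilibrium(a=10, b=1, c=2, d=1, t=0.5)  # equilibrium after

# The two equilibria are simply compared; nothing here describes how,
# or whether, the market actually travels from (p0, q0) to (p1, q1).
print(f"price rises by {p1 - p0:.2f}, quantity falls by {q0 - q1:.2f}")
```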

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.
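
In symbols, that standard assumption (not stated explicitly in the thread) is usually written:

$$\frac{dp_i}{dt} = f_i\big(z_i(p)\big), \qquad f_i' > 0, \quad f_i(0) = 0,$$

where $z_i(p)$ is the excess demand for good $i$ at price vector $p$: each price rises when excess demand for the good is positive and falls when it is negative.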

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model, writing in 1959:

Each individual participant in the economy is supposed to take prices as given and determine his choices as to purchases and sales accordingly; there is no one left over whose job it is to make a decision on price.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.

38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.

39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.

40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.

41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.

42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.

43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning with each agent possessing the knowledge only possessed by an omniscient central planner.

44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.

45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.

46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.

47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices to converge toward the common expectation notwithstanding the existence of expectational differences.

48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.

49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.

50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.

51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.

52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.

53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

Roger Farmer’s Prosperity for All

I have just read a review copy of Roger Farmer’s new book Prosperity for All, which distills many of Roger’s very interesting ideas into a form which, though readable, is still challenging — at least, it was for me. There is a lot that I like and agree with in Roger’s book, and the fact that he is a UCLA economist, though he came to UCLA after my departure, is certainly a point in his favor. So I will begin by mentioning some of the things that I really liked about Roger’s book.

What I like most is that he recognizes that beliefs are fundamental, which is almost exactly what I meant when I wrote this post (“Expectations Are Fundamental”) five years ago. The point I wanted to make is that the idea that there is some fundamental existential reality that economic agents try to perceive — and, if they are rational, will perceive — is a gross and misleading oversimplification, because expectations themselves are part of reality. In a world in which expectations are fundamental, the Keynesian beauty-contest theory of expectations and stock prices (described in chapter 12 of The General Theory) is not absurd, as it is widely considered to be by believers in the efficient market hypothesis. The almost universal unprofitability of simple trading rules or algorithms is not inconsistent with a market process in which the causality between prices and expectations goes in both directions, in which case anticipating expectations is no less rational than anticipating future cash flows.

One of the treats of reading this book is Farmer’s recollections of his time as a graduate student at Penn in the early 1980s, when David Cass, Karl Shell, and Costas Azariadis were developing their theory of sunspot equilibrium, in which expectations are self-fulfilling, an idea skillfully deployed by Roger to revise the basic New Keynesian model and re-orient it along a very different path from the standard New Keynesian one. I am sympathetic to that reorientation, and the main reason for it is that Roger rejects the idea that there is a unique equilibrium to which the economy automatically reverts on its own, albeit somewhat more slowly than if speeded along by the appropriate monetary policy. The notion that there is a unique equilibrium to which the economy automatically reverts is an assumption with no basis in theory or experience. The most that the natural-rate hypothesis can tell us is that if an economy is operating at its natural rate of unemployment, monetary expansion cannot permanently reduce the rate of unemployment below that natural rate. Eventually — once economic agents come to expect that the monetary expansion and the correspondingly higher rate of inflation will be maintained indefinitely — the unemployment rate must revert to the natural rate. But the natural-rate hypothesis does not tell us that monetary expansion cannot reduce unemployment when the actual unemployment rate exceeds the natural rate, although it is often misinterpreted as making that assertion.

In his book, Roger takes the anti-natural-rate argument a step further, asserting that the natural rate of unemployment is not unique. There is actually a range of unemployment rates at which the economy can permanently remain; which of those alternative natural rates the economy winds up at depends on the expectations held by the public about future nominal income. The higher the expected future income, the greater the consumption spending and, consequently, the greater the employment. Things are a bit more complicated than I have just described them, because Roger also believes that consumption depends not on current income but on wealth. However, in the very simplified model with which Roger operates, wealth depends on expectations about future income. The more optimistic people are about their income-earning opportunities, the higher asset values; the higher asset values, the wealthier the public, and the greater consumption spending. The relationship between current income and expected future income is what Roger calls the belief function.

Thus, Roger juxtaposes a simple New Keynesian model against his own monetary model. The New Keynesian model consists of 1) an investment-equals-saving equilibrium condition (IS curve) describing the optimal consumption/savings decision of the representative individual as a locus of combinations of expected real interest rates and real income, based on the assumed rate of time preference of the representative individual, expected future income, and expected future inflation; 2) a Taylor rule describing how the monetary authority sets its nominal interest rate as a function of inflation and the output gap and its target (natural) nominal interest rate; 3) a short-run Phillips Curve that expresses actual inflation as a function of expected future inflation and the output gap. The three basic equations allow the three endogenous variables – inflation, real income and the nominal rate of interest – to be determined. The IS curve represents equilibrium combinations of real income and real interest rates; the Taylor rule determines a nominal interest rate; given the nominal rate determined by the Taylor rule, the IS curve can be redrawn to represent equilibrium combinations of real income and inflation. The intersection of the redrawn IS curve with the Phillips curve determines the inflation rate and real income.
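
In compact notation (a standard textbook rendering of the three-equation model, not necessarily the notation Roger uses), the system can be written:

$$\begin{aligned}
x_t &= E_t x_{t+1} - \sigma\,(i_t - E_t \pi_{t+1} - \rho) \qquad &&\text{(IS curve)}\\
i_t &= \rho + \phi_\pi \pi_t + \phi_x x_t &&\text{(Taylor rule)}\\
\pi_t &= \beta\, E_t \pi_{t+1} + \kappa\, x_t &&\text{(Phillips curve)}
\end{aligned}$$

where $x_t$ is the output gap (standing in for real income), $\pi_t$ is inflation, $i_t$ is the nominal interest rate, $\rho$ is the representative individual’s rate of time preference, and $\sigma$, $\beta$, $\kappa$, $\phi_\pi$ and $\phi_x$ are parameters.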

Roger doesn’t like the New Keynesian model because he rejects the notion of a unique equilibrium with a unique natural rate of unemployment, a notion that I have argued is theoretically unfounded. Roger dismisses the natural-rate hypothesis on empirical grounds, the frequent observations of persistently high rates of unemployment being inconsistent with the idea that there are economic forces causing unemployment to revert back to the natural rate. Two responses to this empirical anomaly are possible: 1) the natural rate of unemployment is unstable, so that the observed persistence of high unemployment reflects increases in the underlying but unobservable natural rate of unemployment; 2) the adverse economic shocks that produce high unemployment are persistent, with unemployment returning to a natural level only after the adverse shocks have ceased. In the absence of independent empirical tests of the hypothesis that the natural rate of unemployment has changed, or of the hypothesis that adverse shocks causing unemployment to rise above the natural rate are persistent, neither of these responses is plausible, much less persuasive.

So Roger recasts the basic New Keynesian model in a very different form. While maintaining the Taylor Rule, he rewrites the IS curve so that it describes a relationship between the nominal interest rate and the expected growth of nominal income, given the assumed rate of time preference, and in place of the Phillips Curve, he substitutes his belief function, which says that the expected growth of nominal income in the next period equals the current rate of growth. The IS curve and the Taylor Rule provide two steady-state equations in three variables – nominal-income growth, the nominal interest rate and inflation – so that the rate of inflation is left undetermined. Once the belief function specifies the expected rate of growth of nominal income, the nominal interest rate consistent with expected nominal-income growth is determined. Since the belief function tells us only that expected nominal-income growth equals the current rate of nominal-income growth, any change in nominal-income growth persists into the next period.
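
Schematically, and in my own notation rather than Roger’s, the recast steady-state system might look something like this:

$$\begin{aligned}
i &= \rho + E[g] \qquad &&\text{(nominal-income IS curve)}\\
i &= \bar{\imath} + \phi_\pi\,(\pi - \pi^*) &&\text{(Taylor rule)}\\
E_t[g_{t+1}] &= g_t &&\text{(belief function)}
\end{aligned}$$

where $g$ is the growth rate of nominal income. The first two equations alone leave the division of nominal-income growth between inflation and real growth undetermined; the belief function closes the system by pinning expected nominal-income growth to its current value, which is why any change in nominal-income growth persists.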

At any rate, Roger’s policy proposal is not to change the interest-rate rule followed by the monetary authority, but to adopt a rule whereby the monetary authority influences the public’s expectations of nominal-income growth. The greater the expected nominal-income growth, the greater the wealth, and the greater the consumption expenditures. The greater the consumption expenditures, the greater the income and employment. Expectations are self-fulfilling. Roger therefore advocates a policy by which the government buys and sells a stock-market index fund in order to keep overall wealth at a level that will generate enough consumption expenditures to support maximum sustainable employment.

This is a quick summary of some of the main substantive arguments that Roger makes in his book, and I hope that I have not misrepresented them too badly. As I have already said, I very much sympathize with his criticism of the New Keynesian model, and I agree with nearly all of his criticisms. I also agree wholeheartedly with his emphasis on the importance of expectations and on the self-fulfilling character of expectations. Nevertheless, I have to admit that I have trouble taking seriously Roger’s own monetary model and his policy proposal for stabilizing a broad index of equity prices over time. And the reason I am so skeptical about Roger’s model and his policy recommendation is that his model, which does, after all, bear at least a family resemblance to the simple New Keynesian model, strikes me as being far too simplified to be credible as a representation of a real-world economy. His model, like the New Keynesian model, is an intertemporal model with neither money nor real capital, and the idea that there is an interest rate in such a model is, though theoretically defensible, not very plausible. There may be a sequence of periods in such a model in which some form of intertemporal exchange takes place, but without explicitly introducing at least one good that is carried over from period to period, the extent of intertemporal trading is limited and devoid of the arbitrage constraints inherent in a system in which real assets are held from one period to the next.

So I am very skeptical about any macroeconomic model lacking a market for real assets, in which the interest rate could interact with asset values and expected future prices in such a way that the existing stock of durable assets is willingly held over time. The simple New Keynesian model, in which there is no money and no durable assets, but simply bonds whose existence is difficult to rationalize in the absence of money or durable assets, does not strike me as a sound foundation for making macroeconomic policy. An interest rate may exist in such a model, but the model strikes me as woefully inadequate for macroeconomic policy analysis. And although Roger has certainly offered some interesting improvements on the simple New Keynesian model, I would not be willing to rely on Roger’s monetary model for the sweeping policy and institutional recommendations that he proposes, especially his proposal for stabilizing the long-run growth path of a broad index of stock prices.

This is an important point, so I will try to restate it within a wider context. Modern macroeconomics, of which Roger’s model is one of the more interesting examples, flatters itself by claiming to be grounded in the secure microfoundations of the Arrow-Debreu-McKenzie general equilibrium model. But the great achievement of the ADM model was to show the logical possibility of an equilibrium of the independently formulated, optimizing plans of an unlimited number of economic agents producing and trading an unlimited number of commodities over an unlimited number of time periods.

To prove the mutual consistency of such a decentralized decision-making process coordinated by a system of equilibrium prices was a remarkable intellectual achievement. Modern macroeconomics deceptively trades on the prestige of this achievement in claiming to be founded on the ADM general-equilibrium model; the claim is at best misleading, because modern macroeconomics collapses the multiplicity of goods, services, and assets into a single non-durable commodity, so that the only relevant plan the agents in the modern macromodel are called upon to make is a decision about how much to spend in the current period given a shared utility function and a shared production technology for the single output. In the process, all the hard work performed by the ADM general-equilibrium model in explaining how a system of competitive prices could achieve an equilibrium of the complex independent — but interdependent — intertemporal plans of a multitude of decision-makers is effectively discarded and disregarded.

This approach to macroeconomics is not microfounded, but its opposite. The approach relies on the assumption that all but a very small set of microeconomic issues are irrelevant to macroeconomics. Now it is legitimate for macroeconomics to disregard many microeconomic issues, but the assumption that there is continuous microeconomic coordination, apart from the handful of potential imperfections on which modern macroeconomics chooses to focus, is not legitimate. In particular, to collapse the entire economy into a single output implies that all the separate markets encompassed by an actual economy are in equilibrium and that the equilibrium is maintained over time. For that equilibrium to be maintained over time, agents must formulate correct expectations of all the individual relative prices that prevail in those markets over time. The ADM model sidestepped that expectational problem by assuming that a full set of current and forward markets exists in the initial period and that all the agents participating in the economy are present and endowed with wealth enabling them to trade in the initial period. Under those rather demanding assumptions, if an equilibrium price vector covering all current and future markets is arrived at, the optimizing agents will formulate a set of mutually consistent optimal plans conditional on that vector of equilibrium prices, so that all the optimal plans can and will be carried out as time happily unfolds for as long as the agents continue in their blissful existence.

However, without a complete set of current and forward markets, achieving the full equilibrium of the ADM model requires that agents formulate consistent expectations of the future prices that will be realized only over the course of time, not in the initial period. Roy Radner, who extended the ADM model to accommodate the case of incomplete markets, called such a sequential equilibrium an equilibrium of plans, prices and expectations. The sequential equilibrium described by Radner has the property that expectations are rational, but the assumption of rational expectations for all future prices over a sequence of future time periods is so unbelievably outlandish as an approximation to reality — sort of like the assumption that it could be 76 degrees Fahrenheit in Washington DC in February — that to build that assumption into a macroeconomic model is an absurdity of mind-boggling proportions. But that is precisely what modern macroeconomics, in both its Real Business Cycle and New Keynesian incarnations, has done.

If, instead of the sequential equilibrium of plans, prices and expectations, one tries to model an economy in which the price expectations of agents can be inconsistent, while prices adjust within any period to clear markets – the method of temporary equilibrium first described by Hicks in Value and Capital – one can begin to develop a richer conception of how a macroeconomic system can be subject to the financial disturbances and financial crises to which modern macroeconomies are occasionally, if not routinely, vulnerable. But that would require a reorientation, if not a repudiation, of the path on which macroeconomics has been resolutely marching for nigh on forty years. In his 1984 paper “Consistent Temporary Equilibrium,” published in a volume edited by J. P. Fitoussi, C. J. Bliss made a start on developing such a macroeconomic theory.

There are few economists better equipped than Roger Farmer to lead macroeconomics onto a new and more productive path. He has not done so in this book, but I am hoping that, in his next one, he will.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typical New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.
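
In standard notation, the indifference condition Jason describes is the familiar consumption Euler equation:

$$u'(c_t) = \beta\,(1 + r_t)\, E_t\big[u'(c_{t+1})\big],$$

where $\beta$ is the agent’s discount factor and $r_t$ the real interest rate: at an optimum, the agent cannot gain by shifting a unit of consumption between the present and the future.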

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity,
cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
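
The extension is easily stated. Adding an expected future price level lets the model distinguish the nominal interest rate from the real rate via the standard Fisher relation (a schematic statement of the decomposition, not the full two-period model):

1 + i = (1 + r)(1 + \pi^e), so that i \approx r + \pi^e,

where i is the nominal interest rate, r the real rate, and \pi^e the expected rate of inflation between the current and the future period.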

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and the New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question-begging, aka circular reasoning, or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
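
Schematically, the comparative-statics exercise runs as follows (a textbook formulation, not Samuelson’s exact notation). If equilibrium is defined by F(x, \theta) = 0, where x is the endogenous variable and \theta the parameter, then

\frac{dx^*}{d\theta} = -\frac{\partial F/\partial \theta}{\partial F/\partial x},

an expression that is well defined, and has a determinate sign, only where \partial F/\partial x \neq 0 and the equilibrium is locally stable – precisely the restriction just noted.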

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take intertemporal choice into account, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which they can only guess. Their budget constraints are thus not real but conjectural, based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem – in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are (a) likely to be very different from each other and thus (b) likely to be different from their ultimate realizations – is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
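
In symbols (my schematic notation, not anything in the literature under discussion), the constraint an individual consumer actually faces looks something like

\sum_{t=0}^{T} \frac{p^e_t c_t}{(1+r)^t} \le w_0 + \sum_{t=0}^{T} \frac{E_0[y_t]}{(1+r)^t},

where the expected prices p^e_t and expected incomes E_0[y_t] are subjective conjectures, differing from agent to agent and revised whenever expectations change, so that there is no objective constraint, common to all agents, for a modeler to impose.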

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real, empirically relevant physics. He also expands on, and puts into a broader meta-theoretical context, my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Making Sense of the Phillips Curve

In a comment on my previous post about the supposedly vertical long-run Phillips Curve, Richard Lipsey mentioned a paper he presented a couple of years ago at the History of Economics Society Meeting: “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” In a subsequent comment, Richard also posted the abstract to his paper. The paper provides a succinct yet fascinating overview of the evolution of macroeconomists’ interpretations of the Phillips curve since Phillips published his paper almost 60 years ago.

The two key points that I take away from Richard’s discussion are the following. 1) A key microeconomic assumption underlying the Keynesian model is that, over a broad range of outputs, most firms operate under conditions of constant short-run marginal cost, because in the short run firms keep the capital-labor ratio fixed, varying their usage of capital along with the amount of labor utilized. With a fixed capital-labor ratio, marginal cost is flat. In the usual textbook version, by contrast, short-run marginal cost is rising because of a declining capital-labor ratio, an increasing number of workers being required to wring successive equal increments of output from a fixed amount of capital. Given flat marginal cost, firms respond to changes in demand by varying output but not price until they hit a capacity bottleneck.
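
The cost arithmetic behind the first point is simple: marginal cost is the wage divided by the marginal product of labor,

MC = \frac{w}{MP_L},

so if the capital-labor ratio is held fixed as output varies, MP_L stays (roughly) constant and marginal cost is flat; only when a fixed capital stock forces the capital-labor ratio down, as in the textbook case, does MP_L fall and marginal cost rise.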

The second point, a straightforward implication of the first, is that there are multiple equilibria for such an economy, each equilibrium corresponding to a different level of total demand, with a price level more or less determined by costs, at any rate until total output approaches the limits of its capacity.

Thus, early on, the Phillips Curve was thought to be relatively flat, with little effect on inflation unless unemployment was forced down below some very low level. The key question was how far unemployment could be pushed down before significant inflationary pressure would begin to emerge. Doctrinaire Keynesians advocated driving unemployment down as low as possible, while skeptics argued that significant inflationary pressure would begin to emerge even at higher rates of unemployment, so that a prudent policy would be to operate at a level of unemployment sufficiently high to keep inflationary pressures in check.

Lipsey allows that, in the 1960s, the view that the Phillips Curve presented a menu of alternative combinations of unemployment and inflation from which policymakers could choose did take hold, acknowledging that he himself expressed such a view in a 1965 paper (“Structural and Deficient Demand Unemployment Reconsidered” in Employment Policy and the Labor Market edited by Arthur Ross), “inflationary points on the Phillips Curve represent[ing] disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion.” It was this version of the Phillips Curve that was effectively attacked by Friedman and Phelps, who replaced it with a version in which the equilibrium rate of unemployment is uniquely determined by real factors, the natural rate of unemployment, any deviation from the natural rate resulting in a series of adjustments in inflation and expected inflation that would restore the natural rate of unemployment.

Sometime in the 1960s the Phillips curve came to be thought of as providing a stable trade-off between inflation and unemployment. When Lipsey did adopt this trade-off version, as for example Lipsey (1965), inflationary points on the Phillips curve represented disequilibrium points that had to be maintained by monetary policy that perpetuated the disequilibrium by suitable increases in the rate of monetary expansion. In the new Classical interpretation that began with Edmund Phelps (1967), Milton Friedman (1968) and Lucas and Rapping (1969), each point was an equilibrium point because demands and supplies of agents were shifted from their full-information locations when they misinterpreted the price signals. There was, however, only one full-information equilibrium of income, Y*, and unemployment, U*.

The Friedman-Phelps argument was made as inflation rose significantly in the late 1960s, and the mild 1969-70 recession reduced inflation by only a smidgen, setting the stage for Nixon’s imposition of his disastrous wage and price controls in 1971, combined with a loosening of monetary policy by a compliant Arthur Burns as part of Nixon’s 1972 reelection strategy. When the hangover from the 1972 monetary binge was combined with a quadrupling of oil prices by OPEC in late 1973, the result was a simultaneous increase in inflation and unemployment – stagflation – a combination widely perceived as a decisive refutation of Keynesian theory. To cope with that theoretical conundrum, the Keynesian model was expanded to incorporate the determination of the price level by deriving aggregate supply and aggregate demand curves in price-level/output space.

Lipsey acknowledges a crucial misstep in constructing the Aggregate Demand/Aggregate Supply framework: assuming a unique macroeconomic equilibrium, an assumption that implied the existence of a unique natural rate of unemployment. Keynesians won the battle, providing a perfectly respectable theoretical explanation for stagflation, but, in doing so, they lost the war to Friedman, paving the way for the malign ascendancy of New Classical economics, with which New Keynesian economics became an effective collaborator. Whether the collaboration was willing or unwilling is unclear and unimportant; by assuming a unique equilibrium, New Keynesians gave up the game.

I was so intent in showing that this AD-AS construction provided a simple Keynesian explanation of stagflation, contrary to the accusation of the New Classical economists that stagflation provided a conclusive refutation of Keynesian economics that I paid too little attention to the enormous importance of the new assumption introduced into Keynesian models. The addition of an expectations-augmented Philips curve, negatively sloped in the short run but vertical in the long run, produced a unique macro equilibrium that would be reached whatever macroeconomic policy was adopted.

Lipsey does not want to go back to the old Keynesian paradigm; he prefers a third approach, which can be traced back to, among others, Joseph Schumpeter, in which the economy is viewed “as constantly evolving under the impact of endogenously generated technological change.” Such technological change can be vaguely foreseen, but it also gives rise to genuine surprises. The course of economic development is not predetermined, but path-dependent. History matters.

I suggest that the explanation of the current behaviour of inflation, output and unemployment in modern industrial economies is provided not by any EWD [equilibrium with deviations] theory but by evolutionary theories. These build on the obvious observation that technological change is continual in modern economies (decade by decade at least since 1760), but uneven (tending to come in spurts), and path dependent (because, among other reasons, knowledge is cumulative with one advance enabling another). These changes are generated endogenously by private-sector, profit-seeking agents competing in terms of new products, new processes and new forms of organisation, and by public sector activities in such places as universities and government research laboratories. They continually alter the structure of the economy, causing waves of serially correlated investment expenditure that are a major cause of cycles, as well as driving the long-term growth that continually transforms our economic, social and political structures. In their important book As Time Goes By, Freeman and Louça (2001) trace these processes as they have operated since the beginnings of the First Industrial Revolution.

A critical distinction in all such theories is between risk, which is easily handled in neoclassical economics, and uncertainty, which is largely ignored in it except to pay it lip service. In risky situations, agents with the same objective function and identical knowledge will chose the same alternative: the one that maximizes the expected value of their profits or utility. This gives rise to unique predictable behaviour of agents acting under specified conditions. In contrast in uncertain situations, two identically situated and motivated agents can, and observably do, choose different alternatives — as for example when different firms all looking for the same technological breakthrough chose different lines of R&D — and there is no way to tell in advance of knowing the results which is the better choice. Importantly, agents typically make R&D decisions under conditions of genuine uncertainty. No one knows if a direction of technological investigation will go up a blind alley or open onto a rich field of applications until funds are spend investigating the route. Sometimes trivial expenses produce results of great value while major expenses produce nothing of value. Since there is no way to decide in advance which of two alternative actions with respect to invention or innovation is the best one until the results are known, there is no unique line of behaviour that maximises agents’ expected profits. Thus agents are better understood as groping into an uncertain future in a purposeful, profit- or utility-seeking manner, rather than as maximizing their profits or utility.

This is certainly the right way to think about how economies evolve over time, but I would just add that, even if one stays within the more restricted framework of Walrasian general equilibrium, there is simply no persuasive theoretical reason to assume that there is a unique equilibrium or that an economy will necessarily arrive at that equilibrium no matter how long we wait. I have discussed this point several times before, most recently here. The assumption that there is a natural rate of unemployment “ground out,” as Milton Friedman put it so awkwardly, “by the Walrasian system of general equilibrium equations” simply lacks any theoretical foundation. Even in a static model in which knowledge and technology were not evolving, the natural rate of unemployment is a will-o’-the-wisp.

Because there is no unique static equilibrium in the evolutionary world in which history matters, no adjustment mechanism is required to maintain it. Instead, the constantly changing economy can exist over a wide range of income, employment and unemployment values, without behaving as it would if its inflation rate were determined by an expectations-augmented Phillips curve or any similar construct centred on unique general equilibrium values of Y and U. Thus there is no stable long-run vertical Phillips curve or aggregate supply curve.

Instead of the Phillips curve there is a band as shown in Figure 4 [See below]. Its midpoint is at the expected rate of inflation. If the central bank has a credible inflation target that it sticks to, the expected rate will be that target rate, shown as πe in the figure. The actual rate will vary around the expected rate depending on a number of influences such as changes in productivity, the price of oil and food, but not significantly on variations in U or Y. At either end of this band, there may be something closer to a conventional Phillips curve with prices and wages falling in the face of a major depression and rising in the face of a major boom financed by monetary expansion. Also, the whole band will be shifted by anything that changes the expected rate of inflation.
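
In my notation rather than Lipsey’s, the band amounts to replacing the Phillips curve with something like

\pi_t = \pi^e + \varepsilon_t \quad \text{for } U_L < U_t < U_H,

where \varepsilon_t captures supply-side influences (productivity, oil and food prices) rather than variations in U or Y, a more conventional Phillips relation re-emerging only beyond the band’s limits U_L and U_H.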

[Figure 4 from Lipsey’s paper: the Phillips band centered on the expected rate of inflation, πe]

Lipsey concludes as follows:

So we seem to have gone full circle from early Keynesian view in which there was no unique level of income to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of national income, and finally back to the early non-unique Keynesian view in which policy makers had an option as to the average pressure of aggregate demand at which the economy could be operated.

“Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It is merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.” (Lipsey, “The Phillips Curve,” In Famous Figures and Diagrams in Economics, edited by Mark Blaug and Peter Lloyd, p. 389)

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:

[Chart from Dornbusch and Fischer: simulated disinflation paths, with inflation in the top panel and unemployment in the bottom panel]

Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:

[Chart: actual inflation and unemployment during the Volcker disinflation]

A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics. Whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, growth rates over the target range being associated with falling stock and commodities prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets, whereupon the great stock market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting that he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And, by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.
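
For readers who want to see the mechanics, here is a minimal simulation sketch of the kind of adaptive-expectations disinflation experiment Dornbusch and Fischer describe; the functional forms and parameter values are my own illustrative choices, not theirs:

a, b, lam = 1.0, 0.3, 0.5      # Phillips slope, demand response, expectations adjustment
u_n, m = 6.0, 10.0             # natural unemployment rate; money growth (percent)
u, pi_e = u_n, 10.0            # start in a 10-percent-inflation steady state

for t in range(25):
    if t == 5:
        m = 4.0                # the disinflationary cut in money growth
    pi = pi_e - a * (u - u_n)  # expectations-augmented Phillips curve
    u += -b * (m - pi)         # unemployment rises while money growth < inflation
    pi_e += lam * (pi - pi_e)  # adaptive expectations: pi_e chases realized inflation
    print(f"t={t:2d}  inflation={pi:5.1f}  unemployment={u:5.1f}")

Run it and you get just the pattern in the Dornbusch-Fischer chart and in the Volcker episode: a temporary surge in unemployment, a permanent fall in inflation toward the new rate of money growth, and unemployment drifting back toward its original level as expectations adjust.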

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

John Cochrane, Meet Richard Lipsey and Kenneth Carlaw

Paul Krugman wrote an uncharacteristically positive post today about John Cochrane’s latest post, in which Cochrane dialed it down a bit after writing two rather heated posts (here and here) attacking Alan Blinder for a recent piece in the New York Review of Books in which Blinder dismissively quoted Cochrane’s remark about Keynesian economics being fairy tales that haven’t been taught to graduate students since the 1960s. I don’t want to get into that fracas, but I was amused to read the following paragraphs at the end of Cochrane’s second post in the current series.

Thus, if you read Krugman’s columns, you will see him occasionally crowing about how Keynesian economics won, and how the disciples of Stan Fischer at MIT have spread out to run the world. He’s right. Then you see him complaining about how nobody in academia understands Keynesian economics. He’s right again.

Perhaps academic research ran off the rails for 40 years producing nothing of value. Social sciences can do that. Perhaps our policy makers are stuck with simple stories they learned as undergraduates; and, as has happened countless times before, new ideas will percolate up when the generation trained in the 1980s makes their way to the top of policy circles.

I think we can agree on something. If one wants to write about “what’s wrong with economics,” such a huge divide between academic research ideas and the ideas running our policy establishment is not a good situation.

The right way to address this is with models — written down, objective models, not pundit prognostications — and data. What accounts, quantitatively, for our experience?  I see old-fashioned Keynesianism losing because, having dramatically failed that test once, its advocates are unwilling to do so again, preferring a campaign of personal attack in the popular press. Models confront data in the pages of the AER, the JPE, the QJE, and Econometrica. If old-time Keynesianism really does account for the data, write it down and let’s see.

So Cochrane wants to take this bickering out of the realm of punditry and put the conflicting models to an objective test of how well they perform against the data. Sounds good to me, but I can’t help but wonder whether Cochrane means to attribute the academic ascendancy of RBC/New Classical models to their having empirically outperformed competing models. If so, I am not aware that anyone else has made that claim, including Kartik Athreya, who wrote the book on the subject. (Here’s my take on the book.) Again, just wondering – I am not a macroeconometrician – but is there any study showing that RBC or DSGE models outperform old-fashioned Keynesian models in explaining macro time-series data?

But I am aware of, and have previously written about, a paper by Kenneth Carlaw and Richard Lipsey (“Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy”) in which they show that time-series data for six OECD countries provide no evidence of the stylized facts about inflation and unemployment implied by RBC and New Keynesian theory. Here is the abstract from the Carlaw-Lipsey paper.

The evolutionary vision in which history matters is of an evolving economy driven by bursts of technological change initiated by agents facing uncertainty and producing long term, path-dependent growth and shorter-term, non-random investment cycles. The alternative vision in which history does not matter is of a stationary, ergodic process driven by rational agents facing risk and producing stable trend growth and shorter term cycles caused by random disturbances. We use Carlaw and Lipsey’s simulation model of non-stationary, sustained growth driven by endogenous, path-dependent technological change under uncertainty to generate artificial macro data. We match these data to the New Classical stylized growth facts. The raw simulation data pass standard tests for trend and difference stationarity, exhibiting unit roots and cointegrating processes of order one. Thus, contrary to current belief, these tests do not establish that the real data are generated by a stationary process. Real data are then used to estimate time-varying NAIRU’s for six OECD countries. The estimates are shown to be highly sensitive to the time period over which they are made. They also fail to show any relation between the unemployment gap (actual unemployment minus estimated NAIRU) and the acceleration of inflation. Thus there is no tendency for inflation to behave as required by the New Keynesian and earlier New Classical theory. We conclude by rejecting the existence of a well-defined short-run, negatively sloped Phillips curve, a NAIRU, a unique general equilibrium, short- or long-run, a vertical long-run Phillips curve, and the long-run neutrality of money.

Cochrane, like other academic macroeconomists with an RBC/New Classical orientation, seems inordinately self-satisfied with the current state of modern macroeconomics, but curiously sensitive to, and defensive about, criticism from the unwashed masses. Rather than weigh in again with my own criticisms, let me close by quoting another abstract – this one from a paper (“Complexity Economics: A Different Framework for Economic Thought”) by Brian Arthur, certainly one of the smartest, and most technically capable, economists around.

This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements.

Complexity economics sees the economy as in motion, perpetually “computing” itself — perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.

HT: Mike Norman

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started with a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans derived from optimization conditions and rational expectations was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t, because the New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and, despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to its practitioners, that fact alone would not conclusively show that the research program in which they are engaged is not progressive. The progressiveness of a research program requires no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about the macroeconomy, or to explain known facts better than alternative models do, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but also the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts to provide microfoundations, notably in the 1970 Phelps volume, changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose it as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
