Archive for the 'IS-LM' Category

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.
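The water-and-diamonds contrast in the quoted passage can be restated as a toy calculation. The logarithmic utility function below is purely illustrative (Menger specified no functional form); it merely captures diminishing marginal utility:

```python
import math

# Illustrative assumption (not Menger's): total utility grows logarithmically,
# so each additional unit adds less utility than the one before.
def total_utility(quantity):
    return math.log(1 + quantity)

def marginal_utility(quantity):
    """Utility added by one more unit at the current holding."""
    return total_utility(quantity + 1) - total_utility(quantity)

water_held = 10_000   # abundant
diamonds_held = 2     # scarce

mu_water = marginal_utility(water_held)       # tiny: one more glass barely matters
mu_diamonds = marginal_utility(diamonds_held) # large: one more stone matters a lot
```

However useful water is in total, its abundance drives the value of one additional glass toward zero, which is the whole resolution of the paradox.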

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist – famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.
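Menger’s imputation logic lends itself to a small numerical sketch. The goods and figures below are hypothetical, chosen only to show the direction of the inference, which runs from the value of output back to the value of the factor:

```python
# Hypothetical values of the marginal product of a given machine in its
# alternative uses; imputation runs from these output values back to the machine.
value_of_marginal_product = {"chairs": 120.0, "tables": 100.0, "shelves": 80.0}

ranked = sorted(value_of_marginal_product.values(), reverse=True)
imputed_value = ranked[0]      # the machine is worth the value of its best use
opportunity_cost = ranked[1]   # cost of that use = value of the next-best use foregone

# Cost does not determine value; it is itself the value of the displaced alternative.
```

Devoting the machine to chairs "costs" exactly the 100.0 of table value foregone, which is the sense in which cost is determined by, rather than determining, value.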

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: his two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, were brothers-in-law. [Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927, and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a gap of roughly 16 years separated Mises and Schumpeter, who were near contemporaries, from Hayek (born in 1899), who was in turn a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up either in the US or Britain after World War II, and Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, while Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. However, Hayek’s interests shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians (Walrasian general-equilibrium theory and Keynesian macroeconomics), the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a reign of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling 10 thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted from Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point where extreme libertarians and white nationalists congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies to degenerate toward extinction, to blend in with the members of the larger species and thereby lose its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct subspecies. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They attribute the mainstream’s misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable — and trivial — conclusion entailed by them. In their view, the focus of mainstream models on equilibrium states reached under unrealistic assumptions reflects a preoccupation with mathematical formalism in which tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., the premise that agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate – mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time – much less communicate to a central planner.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites this mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to aim at keeping the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.
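The distinction between benign and malign deflation can be illustrated with the equation of exchange. In this minimal sketch, with hypothetical numbers, nominal spending (MV) is held constant, as Hayek's own neutrality norm prescribed, while real output y grows, so the price level P = MV / y drifts gently down:

```python
MV = 1000.0   # constant flow of nominal spending (hypothetical units)
y = 500.0     # initial real output (hypothetical units)

price_level = []
for year in range(4):
    price_level.append(MV / y)  # P = MV / y
    y *= 1.02                   # 2% productivity-driven output growth

# price_level falls year by year even though spending never contracts:
# "good deflation". The Depression deflation was instead a collapse of MV itself.
```

The error the passage describes was treating the latter kind of deflation, a contraction of nominal spending, as if it were the former.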

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of his research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not proved unfortunate, and not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It is precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants — that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby individual plans to produce and consume particular goods over time are reconciled. Hayek’s more granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.
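The tâtonnement story can be sketched in a few lines. The linear demand and supply schedules here are hypothetical, and the empirically untenable part is precisely the fiction the sketch embodies: an auctioneer calls out prices, and no trade occurs until the loop terminates at the market-clearing price:

```python
# Hypothetical linear schedules for a single market.
def demand(p): return 100 - 2 * p
def supply(p): return 10 + p

p = 1.0       # auctioneer's opening cry
step = 0.1    # speed of price adjustment
for _ in range(1000):
    excess = demand(p) - supply(p)
    if abs(excess) < 1e-9:
        break
    p += step * excess   # raise price under excess demand, cut it under excess supply

# p converges to 30.0, where demand(30) == supply(30) == 40.
```

Even in this toy setting convergence depends on the adjustment rule being a contraction; nothing in the sketch explains who adjusts prices, or how, in an actual decentralized market.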

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants provide a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one of many passages discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

Amid this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned: the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment, while the neoclassical synthesis rejected that proposition, so that the supposed synthesis amounted to no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typically charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.
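[For reference: the indifference condition Jason describes is the standard consumption Euler equation. Written out, with period utility u, discount factor β, and real interest rate r (notation supplied here; it does not appear in the quoted comment):]

```latex
% Consumption Euler equation, standard form
% (notation supplied for reference, not from the quoted comment)
u'(c_t) = \beta \, (1 + r) \, u'(c_{t+1})
```

[At an optimum, the utility sacrificed by forgoing a unit of consumption today, u'(c_t), just equals the discounted utility of consuming the (1 + r) units it becomes tomorrow.]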

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and the New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
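To spell out the circularity in generic notation (my own, not Samuelson’s): if the equilibrium of interest is defined by a condition $F(x,\theta)=0$, with $x$ the endogenous variable and $\theta$ the parameter, the comparative-statics derivative is

$$\frac{dx}{d\theta} = -\frac{\partial F/\partial\theta}{\partial F/\partial x},$$

which is well-defined only where $\partial F/\partial x \neq 0$, and whose sign carries economic meaning only if the equilibrium is locally stable, which, under a simple adjustment process $\dot{x} = F(x,\theta)$, is precisely the condition $\partial F/\partial x < 0$. The stability of equilibrium is thus not a conclusion of the comparative-statics method but a premise of it.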

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which to transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but conjectural, based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations, is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
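A two-period sketch (my notation) makes the point. The “constraint” that actually disciplines current choices is

$$p_1 c_1 + \frac{p_2^{e}\,c_2}{1+i} \;\le\; p_1 y_1 + \frac{p_2^{e}\,y_2}{1+i},$$

where $p_2^{e}$ is the expected future price level (and, realistically, $y_2$ is itself only an expected future endowment). The forward-looking terms are expectations, so the constraint binds plans rather than outcomes, and it is rewritten, along with the associated Euler equations, whenever expectations change.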

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance – that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had [said] before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Thompson’s Reformulation of Macroeconomic Theory, Part V: A Neoclassical Black Hole

It’s been over three years since I posted the fourth of my four previous installments in this series about Earl Thompson’s unpublished paper “A Reformulation of Macroeconomic Theory,” Thompson’s strictly neoclassical alternative to the standard Keynesian IS-LM model. Given the long hiatus, a short recapitulation seems in order.

The first installment was an introduction summarizing Thompson’s two main criticisms of the Keynesian model: 1) the disconnect between the standard neoclassical marginal productivity theory of production and factor pricing and the Keynesian assertion that labor receives a wage equal to its marginal product, thereby implying the existence of a second scarce factor of production (capital), but with the market for capital services replaced in the IS-LM model by the Keynesian expenditure functions, creating a potential inconsistency between the IS-LM model and a deep property of neoclassical theory; 2) the market for capital services having been excluded from the IS-LM model, the model lacks a variable that equilibrates the choice between holding money or real assets, so that the Keynesian investment function is incompletely specified, the Keynesian equilibrium condition for spending – equality between savings and investment – taking no account of the incentive for capital accumulation or the relationship, explicitly discussed by Keynes, between current investment and the (expected) future price level. Excluding the dependence of the equilibrium rate of spending on (expected) inflation from the IS-LM model renders the model logically incomplete.

The second installment was a discussion of the Hicksian temporary-equilibrium method used by Thompson to rationalize the existence of involuntary unemployment. For Thompson involuntary unemployment means unemployment caused by workers’ overly optimistic expectations of wage offers, leading them to mistakenly set their reservation wages too high. The key advantage of the temporary-equilibrium method is that it reconciles the convention of allowing a market-clearing price to equilibrate supply and demand with the phenomenon of substantial involuntary unemployment in business-cycle downturns. Because workers have an incentive to withhold their services in order to engage in further job search or job training or leisure, their actual short-run supply of labor services in a given time period is highly elastic at the expected wage. If wage offers are below expectations, workers (mistakenly = involuntarily) choose unemployment, but given those mistaken expectations, the labor market is cleared, with the observed wage equilibrating the demand for and the supply of labor services. There are clearly problems with this way of modeling the labor market, but it does provide an analytical technique that can account for cyclical fluctuations in unemployment within a standard microeconomic framework.
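One way to write down the short-run labor-supply schedule implied by this story (my stylization, not Thompson’s own notation) is

$$S(w) = \begin{cases} 0 & \text{if } w < w^{e}, \\ [0,\,\bar{L}] & \text{if } w = w^{e}, \\ \bar{L} & \text{if } w > w^{e}, \end{cases}$$

where $w^{e}$ is the expected (reservation) wage and $\bar{L}$ the total labor endowment. If the demand for labor at $w^{e}$ falls short of $\bar{L}$, the market clears at $w^{e}$, with the shortfall showing up as unemployment that is voluntary given expectations, but involuntary insofar as the expectations are mistaken.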

In the third installment, I showed how Thompson derived his FF curve, representing combinations of price levels and interest rates consistent with (temporary) equilibrium in both factor markets (labor services and capital services) and two versions of the LM curve, representing price levels and interest rates consistent with equilibrium in the money market. The two versions of the LM curve (analogous, but not identical, to the Keynesian LM curve) correspond to different monetary regimes. In what Thompson called the classical case, the price level is fixed by convertibility of output into cash at a fixed exchange rate, with money being supplied by a competitive banking system paying competitive interest on cash balances. The LM curve in this case is vertical at the fixed price level, with any nominal rate of interest being consistent with equilibrium in the money market, inasmuch as the amount of money demanded depends not on the nominal interest rate, but on the difference between the nominal interest rate and the competitively determined interest rate paid on cash. In the modern case, cash is non-interest bearing and supplied monopolistically by the monetary authority, so the LM curve is upward-sloping, with the cost of holding cash rising with the rate of interest, thereby reducing the amount of money demanded and increasing the price level for a given quantity of money supplied by the monetary authority. The solution of the model corresponds to the intersection of the FF and LM curves. For the classical case, the intersection is unique, but in the modern case since both curves are upward sloping, multiple intersections are possible.
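The two monetary regimes can be stylized (again, my notation rather than Thompson’s) by writing the demand for money as $M^{d} = P \cdot L(i - i_{c})$, where $i_{c}$ is the interest rate paid on cash balances. In the classical case, convertibility pegs the price level at $\bar{P}$, and competition among banks adjusts $i_{c}$ along with $i$, so that money-market equilibrium pins down only the spread $i - i_{c}$, leaving the nominal rate free: a vertical LM curve at $\bar{P}$. In the modern case, $i_{c} = 0$ and the monetary authority fixes the money stock at $\bar{M}$, so equilibrium requires

$$\bar{M} = P \cdot L(i), \qquad L' < 0,$$

and a higher $i$ reduces $L(i)$, implying a higher equilibrium $P$ for a given $\bar{M}$: an upward-sloping LM curve.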

The focus of the fourth installment was on setting up a model analogous to the Keynesian model by replacing the market for capital services excluded by Walras’s Law with something similar to the Keynesian expenditure functions (consumption, investment, government spending, etc.). The key point is that the FF and LM curves implicitly define a corresponding CC curve (shown in Figure 4 of the third installment) with the property that, at all points on the CC curve, the excess demand for (supply of) money exactly equals the excess supply of (demand for) labor. Thus, the CC curve represents a stock equilibrium in the market for commodities (i.e., a single consumption/capital good) rather than a flow rate of expenditure and income as represented by the conventional IS curve. But the inconsistency between the upward-sloping CC curve and the downward sloping IS curve reflects the underlying inconsistency between the neoclassical and the Keynesian paradigms.

In this installment, I am going to work through Thompson’s argument about the potential for an unstable equilibrium in the version of his model with an upward-sloping LM curve corresponding to the case in which non-interest bearing money is monopolistically supplied by a central bank. Thompson makes the argument using Figure 5, a phase diagram showing the potential equilibria for such an economy in terms of the FF curve (representing price levels and nominal interest rates consistent with equilibrium in the markets for labor and capital services) and the CC curve (representing price levels and nominal interest rates consistent with equilibrium in the output market).

[Thompson, Figure 5] A phase diagram shows the direction of price adjustment when the economy is not in equilibrium (one of the two points of intersection between the FF and the CC curves). A disequilibrium implies a price change in response to an excess supply or excess demand in some market. All points above and to the left of the FF curve correspond to an excess supply of capital services, implying a falling nominal interest rate; points below and to the right of the FF curve correspond to an excess demand for capital services, implying a rising interest rate. Points above and to the left of the CC curve correspond to an excess demand for output, implying a rising price level; points below and to the right of the CC curve correspond to an excess supply of output, implying a falling price level. Points in between the FF and CC curves correspond either to an excess demand for commodities and for capital services, implying a rising price level and a rising nominal interest rate (in the region between the two points of intersection – Eu and Es – between the CC and FF curves), or to an excess supply of both capital services and commodities, implying a falling interest rate and a falling price level (in the regions below the lower intersection Eu and above the upper intersection Es). The arrows in the diagram indicate the direction in which the price level and the nominal interest rate are changing at any point in the diagram.

Given the direction of price change corresponding to points off the CC and FF curves, the upper intersection is shown to be a stable equilibrium, while the lower intersection is unstable. Moreover, the instability corresponding to the lower intersection is very dangerous, because entering the region between the CC and FF curves below Eu means getting sucked into a vicious downward spiral of prices and interest rates that can only be prevented by a policy intervention to shift the CC curve to the right, either directly by way of increased government spending or tax cuts, or indirectly, through monetary policy aimed at raising the price level and expected inflation, shifting the LM curve, and thereby the CC curve, to the right. It’s like stepping off a cliff into a black hole.
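The qualitative story can be checked with a few lines of code. The sketch below uses invented functional forms for the FF and CC curves, chosen only so that both slope upward and intersect twice, at Eu and Es; it is not Thompson’s model, and it encodes only the direction of adjustment implied by the excess-demand story, not its speed or exact dynamics.

```python
# Stylized numerical sketch of Thompson's Figure 5. The functional forms
# below are invented for illustration; they are not Thompson's equations.

def FF(i):
    """Price level on the FF curve (factor-market equilibrium) at interest rate i."""
    return i

def CC(i):
    """Price level on the CC curve (output-market equilibrium) at interest rate i."""
    return 0.25 * i**2 + 0.75

# FF(i) = CC(i) at i = 1 (the lower intersection, Eu) and i = 3 (the upper, Es).

def direction(i, P):
    """Sign of the adjustment implied by the excess-demand story:
    below/right of FF -> excess demand for capital services -> i rising;
    above/left of CC  -> excess demand for output           -> P rising."""
    di = i - P          # > 0 below FF, < 0 above FF
    dP = P - CC(i)      # > 0 above CC, < 0 below CC
    return di, dP

# Between the curves, between Eu and Es: both rising (moving up toward Es).
print(direction(2.0, 1.9))    # both components positive

# Between the curves, below Eu: both falling -- the deflationary black hole.
print(direction(0.5, 0.65))   # both components negative
```

At a point in the channel between the curves above Eu, both the price level and the interest rate are rising toward Es; at a point in the channel below Eu, both are falling, with nothing in the adjustment story itself to arrest the decline.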

Although I have a lot of reservations about the practical relevance of this model as an analytical tool for understanding cyclical fluctuations and counter-cyclical policy, which I plan to discuss in a future post, the model does resonate with me, and it does so especially after my recent posts about the representative-agent modeling strategy in New Classical economics (here, here, and here). Representative-agent models, I argued, are inherently unable to serve as analytical tools in macroeconomics, because their reductionist approach implies that all relevant decision making can be reduced to the optimization of a single agent, insulating the analysis from any interactions between decision-makers. But it is precisely the interaction effects between decision-makers that create the analytical problems constituting the subject matter of the discipline or sub-discipline known as macroeconomics. That Robert Lucas has made it his life’s work to annihilate this field of study is a sad commentary on his contribution as an economic theorist, Nobel Prize or no Nobel Prize.

That is one reason why I regard Thompson’s model, despite its oversimplifications, as important: it is constructed on a highly aggregated, yet strictly neoclassical, foundation, including continuous market-clearing, arriving at the remarkable conclusion that not only is there an unstable equilibrium, but it is at least possible for an economy in the neighborhood of the unstable equilibrium to be caught in a vicious downward deflationary spiral in which falling prices do not restore equilibrium but, instead, suck the economy into a zero-output black hole. That result seems to me to be a major conceptual breakthrough, showing that the strict rationality assumptions of neoclassical theory can lead to an outcome that is totally at odds with the usual presumption that the standard neoclassical assumptions inevitably generate a unique stable equilibrium and render macroeconomics superfluous.

Can We All Export Our Way out of Depression?

Tyler Cowen has a post chastising Keynesians for scolding Germany, which has been advising its euro counterparts to adopt the virtuous German example of increasing their international competitiveness so that they can increase their exports, thereby increasing GDP and employment. The Keynesian response is that increasing exports is a zero-sum game, so that, far from being a recipe for recovery, the German advice is actually a recipe for continued stagnation.

Tyler doesn’t think much of the Keynesian response.

But that Keynesian counter is a mistake, perhaps brought on by the IS-LM model and its impoverished treatment of banking and credit.

Let’s say all nations could indeed increase their gross exports, although of course the sum of net exports could not go up.  The first effect is that small- and medium-sized enterprises would be more profitable in the currently troubled economies.  They would receive more credit and the broader monetary aggregates would go up in those countries, reflating their economies.  (Price level integration is not so tight in these cases, furthermore much of the reflation could operate through q’s rather than p’s.)  It sometimes feels like the IS-LM users have a mercantilist gold standard model, where the commodity base money can only be shuffled around in zero-sum fashion and not much more can happen in a positive direction.

The problem with Tyler’s rejoinder to the Keynesian response, which, I agree, provides an incomplete picture of what is going on, is that he assumes that which he wants to prove, thereby making his job just a bit too easy. That is, Tyler just assumes that “all nations could indeed increase their gross exports.” Obviously, if all nations increase their gross exports, they will very likely all increase their total output and employment. (It is, I suppose, theoretically possible that all the additional exports could be generated by shifting output from non-tradables to tradables, but that seems an extremely unlikely scenario.) The reaction of credit markets and monetary aggregates would be very much a second-order reaction. It’s the initial assumption – that all nations could increase gross exports simultaneously – that is doing all the heavy lifting.
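Tyler’s premise that gross exports can rise everywhere even though the sum of net exports cannot is just accounting, which a toy two-country example makes explicit (the numbers are invented):

```python
# Two-country illustration of the accounting point: gross exports can
# rise for both countries at once, while the sum of net exports is zero
# by construction.

def net_exports(a_to_b, b_to_a):
    """Net exports of country A and of country B, given the gross export
    flows between them."""
    nx_a = a_to_b - b_to_a
    nx_b = b_to_a - a_to_b
    return nx_a, nx_b

before = net_exports(a_to_b=100, b_to_a=80)   # (20, -20)
after  = net_exports(a_to_b=150, b_to_a=130)  # (20, -20)

# Gross exports rose for both countries; the sum of net exports is
# identically zero in both cases.
print(before, sum(before))   # (20, -20) 0
print(after, sum(after))     # (20, -20) 0
```

The identity says nothing about whether all countries actually can raise gross exports simultaneously, which is the substantive question at issue.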

Concerning Tyler’s characterization of the IS-LM model as a mercantilist gold-standard model, I agree that IS-LM has serious deficiencies, but that characterization strikes me as unfair. The simple IS-LM model is a closed-economy model with an exogenously determined price level. Such a model certainly has certain similarities to a mercantilist gold-standard model, but that doesn’t mean that the two models are essentially the same. There are many ways of augmenting the IS-LM model to turn it into an open-economy model, in which case it would not necessarily resemble a mercantilist gold-standard model.

Now I am guessing that Tyler would respond to my criticism by asking: “well, why wouldn’t all countries increase their gross exports if they all followed the German advice?”

My response to that question would be that the conclusion that everybody’s exports would increase if everybody became more efficient logically follows only in a comparative-statics framework. But, for purposes of this exercise, we are not starting from an equilibrium, and we have no assurance that, in a disequilibrium environment, the interaction of the overall macro disequilibrium with the posited increase of efficiency would produce, as the comparative-statics exercise would lead us to believe, a straightforward increase in everyone’s exports. Indeed, even the comparative-statics exercise is making an unsubstantiated assumption that the initial equilibrium is locally unique and stable.

Of course, this response might be dismissed as a mere theoretical possibility, though the likelihood that widespread adoption of export-increasing policies in the midst of an international depression, unaccompanied by monetary expansion, would lead to increased output does not seem all that high to me. So let’s think about what might happen if all countries simultaneously adopted export-increasing policies. The first point to consider is that not all countries are the same, and not all are in a position to increase their exports by as much or as quickly as others. Inevitably, some countries would increase their exports faster than others. As a result, it is also inevitable that some countries would lose export markets as other countries penetrated those markets before they did. In addition, some countries would experience declines in domestic output as domestic import-competing industries were forced by import competition to curtail output. In the absence of demand-increasing monetary policies, output and employment in some countries would very likely fall. This is the kernel of truth in the conventional IS-LM analysis that Tyler tries to dismiss. The IS-LM framework abstracts from the output-increasing tendency of export-led growth, but the comparative-statics approach abstracts from aggregate-demand effects that could easily overwhelm the comparative-statics effect.

Now, to be fair, I must acknowledge that Tyler reaches a pretty balanced conclusion:

This interpretation of the meaning of zero-sum net exports is one of the most common economic mistakes you will hear from serious economists in the blogosphere, and yet it is often presented dogmatically or dismissively in a single sentence, without much consideration of more complex or more realistic scenarios.

That is a reasonable conclusion, but I think it would be just as dogmatic, if not more so, to rely on the comparative-statics analysis that Tyler goes through in the first part of his post without consideration of more complex or more realistic scenarios.

Let me also offer a comment on Scott Sumner’s take on Tyler’s post. Scott tries to translate Tyler’s analysis into macroeconomic terms to support Tyler’s comparative-statics analysis. Scott considers three methods by which exports might be increased: 1) supply-side reforms, 2) monetary stimulus aimed at currency depreciation, and 3) increased government saving (fiscal austerity). The first two, Scott believes, lead to increased output and employment, while the third is a wash. I agree with Scott about monetary stimulus aimed at currency depreciation, but I disagree (at least in part) about the other two.

Supply-side reforms [to increase exports] boost output under either an inflation target, or a dual mandate.  If you want to use the Keynesian model, these reforms boost the Wicksellian equilibrium interest rate, which makes NGDP grow faster, even at the zero bound.

Scott makes a fair point, but I don’t think it is necessarily true for all inflation targets. Here is how I would put it. Because supply-side reforms to increase exports could cause aggregate demand to fall in countries adversely affected by increased export competition from other countries, and we have very little ability to predict by how much, it is at least possible that worldwide aggregate demand would fall if such policies were generally adopted. You can’t tell how the Wicksellian natural rate would be affected until you’ve accounted for all the indirect feedback effects on aggregate demand. If the Wicksellian natural rate fell, an inflation target, even if met, might not prevent a slowdown in NGDP growth and a net reduction in output and employment. To prevent a slowdown in NGDP growth would require increasing the inflation target. Of course, under a real dual mandate (as opposed to the sham dual mandate now in place at the Fed) or an NGDP target, monetary policy would have to be loosened sufficiently to prevent output and employment from falling.

As far as government saving (fiscal austerity), I’d say it’s a net wash, for monetary offset reasons.

I am not sure what Scott means about monetary offset in this context. As I have argued in several earlier posts (here, here, here and here), attempting to increase employment via currency depreciation and increased saving involves tightening monetary policy, not loosening it. So I don’t see how fiscal policy can be used to depreciate a currency at the same time that monetary policy is being loosened. At any rate, if monetary policy is being used to depreciate the currency, then I see no difference between options 2) and 3).

But my general comment is that, like Tyler, Scott seems to be exaggerating the difference between his bottom line and the one that comes out of the IS-LM model, though I am certainly not saying that IS-LM is the last word on the subject.

Hicks on IS-LM and Temporary Equilibrium

Jan, commenting on my recent post about Krugman, Minsky and IS-LM, quoted the penultimate paragraph of J. R. Hicks’s 1980 paper on IS-LM in the Journal of Post-Keynesian Economics, a brand of economics not particularly sympathetic to Hicks’s invention. Hicks explained that in the mid-1930s he had been thinking along lines similar to Keynes’s even before the General Theory was published, and had the basic idea of IS-LM in his mind even before he had read the General Theory, while also acknowledging that his enthusiasm for the IS-LM construct had waned considerably over the years.

Hicks discussed both the similarities and the differences between his model and IS-LM. But as the discussion proceeds, it becomes clear that what he is thinking of as his model is what became his model of temporary equilibrium in Value and Capital. So it really is important to understand what Hicks felt were the similarities as well as the key differences between the temporary-equilibrium model and the IS-LM model. Here is how Hicks put it:

I recognized immediately, as soon as I read The General Theory, that my model and Keynes’ had some things in common. Both of us fixed our attention on the behavior of an economy during a period—a period that had a past, which nothing that was done during the period could alter, and a future, which during the period was unknown. Expectations of the future would nevertheless affect what happened during the period. Neither of us made any assumption about “rational expectations”; expectations, in our models, were strictly exogenous. (Keynes made much more fuss over that than I did, but there is the same implication in my model also.) Subject to these data—the given equipment carried over from the past, the production possibilities within the period, the preference schedules, and the given expectations—the actual performance of the economy within the period was supposed to be determined, or determinable. It would be determined as an equilibrium performance, with respect to these data.

There was all this in common between my model and Keynes’; it was enough to make me recognize, as soon as I saw The General Theory, that his model was a relation of mine and, as such, one which I could warmly welcome. There were, however, two differences, on which (as we shall see) much depends. The more obvious difference was that mine was a flexprice model, a perfect competition model, in which all prices were flexible, while in Keynes’ the level of money wages (at least) was exogenously determined. So Keynes’ was a model that was consistent with unemployment, while mine, in his terms, was a full employment model. I shall have much to say about this difference, but I may as well note, at the start, that I do not think it matters much. I did not think, even in 1936, that it mattered much. IS-LM was in fact a translation of Keynes’ nonflexprice model into my terms. It seemed to me already that that could be done; but how it is done requires explanation.

The other difference is more fundamental; it concerns the length of the period. Keynes’ (he said) was a “short-period,” a term with connotations derived from Marshall; we shall not go far wrong if we think of it as a year. Mine was an “ultra-short-period”; I called it a week. Much more can happen in a year than in a week; Keynes has to allow for quite a lot of things to happen. I wanted to avoid so much happening, so that my (flexprice) markets could reflect propensities (and expectations) as they are at a moment. So it was that I made my markets open only on a Monday; what actually happened during the ensuing week was not to affect them. This was a very artificial device, not (I would think now) much to be recommended. But the point of it was to exclude the things which might happen, and must disturb the markets, during a period of finite length; and this, as we shall see, is a very real trouble in Keynes. (pp. 139-40)

Hicks then explained how the specific idea of the IS-LM model came to him as a result of working on a three-good Walrasian system in which the solution could be described in terms of equilibrium in two markets, the third market necessarily being in equilibrium if the other two were in equilibrium. That’s an interesting historical tidbit, but the point that I want to discuss is what I think is Hicks’s failure to fully understand the significance of his own model, whose importance, regrettably, he consistently underestimated in later work (e.g., in Capital and Growth and in this paper).
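The two-market trick Hicks describes is just Walras’s Law at work, and it is easy to see in miniature. Here is a toy three-good system of my own devising (the linear excess-demand functions are purely illustrative, not Hicks’s): once the budget constraint forces the excess demands to satisfy p·z(p) = 0, clearing any two of the three markets clears the third automatically.

```python
# Toy three-good "Walrasian" system (goods, bonds, money), with money as
# numeraire (p3 = 1). The linear excess demands for goods and bonds are
# arbitrary illustrative choices; the money excess demand is pinned down
# by Walras's Law, so clearing two markets clears the third for free.

def excess_demands(p1, p2):
    """Excess demands for goods (z1), bonds (z2), and money (z3)."""
    z1 = 4 - 2 * p1 + p2           # goods market (illustrative)
    z2 = 3 + p1 - 2 * p2           # bond market (illustrative)
    z3 = -(p1 * z1 + p2 * z2)      # money market, forced by Walras's Law
    return z1, z2, z3

# Solve z1 = 0 and z2 = 0 by hand (two linear equations):
#   2*p1 - p2 = 4  and  -p1 + 2*p2 = 3  =>  p1 = 11/3, p2 = 10/3
p1, p2 = 11 / 3, 10 / 3
z1, z2, z3 = excess_demands(p1, p2)
print(z1, z2, z3)  # all zero: the third market clears automatically
```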

The point that I want to focus on is in the second paragraph quoted above where Hicks says “mine [i.e. temporary equilibrium] was a flexprice model, a perfect competition model, in which all prices were flexible, while in Keynes’ the level of money wages (at least) was exogenously determined. So Keynes’ was a model that was consistent with unemployment, while mine, in his terms, was a full employment model.” This, it seems to me, is all wrong, because Hicks is taking a very naïve and misguided view of what perfect competition and flexible prices mean. Those terms are often mistakenly assumed to mean that if prices are simply allowed to adjust freely, all markets will clear and all resources will be utilized.

I think that is a total misconception, and the significance of the temporary-equilibrium construct is in helping us understand why an economy can operate sub-optimally with idle resources even when there is perfect competition and markets “clear.” What prevents optimality and allows resources to remain idle despite freely adjusting prices and perfect competition is that the expectations held by agents are not consistent. If expectations are not consistent, the plans based on those expectations are not consistent. If plans are not consistent, then how can one expect resources to be used optimally or even at all? Thus, for Hicks to assert, casually without explicit qualification, that his temporary-equilibrium model was a full-employment model, indicates to me that Hicks was unaware of the deeper significance of his own model.

If we take a full equilibrium as our benchmark, and look at how one of the markets in that full equilibrium clears, we can imagine the equilibrium as the intersection of a supply curve and a demand curve, whose positions in the standard price/quantity space depend on the price expectations of suppliers and of demanders. Different, i.e., inconsistent, price expectations would imply shifts in both the demand and supply curves from those corresponding to full intertemporal equilibrium. Overall, the price expectations consistent with a full intertemporal equilibrium will in some sense maximize total output and employment, so when price expectations are inconsistent with full intertemporal equilibrium, the shifts of the demand and supply curves will be such that they intersect at points corresponding to less output and less employment than would have been the case in full intertemporal equilibrium. In fact, it is possible to imagine that expectations on the supply side and the demand side are so inconsistent that the point of intersection between the demand and supply curves corresponds to an output (and hence employment) far less than it would have been in full intertemporal equilibrium. The problem is not that the price in the market doesn’t allow the market to clear. Rather, given the positions of the demand and supply curves, their point of intersection implies a low output, because inconsistent price expectations are such that potentially advantageous trading opportunities are not being recognized.
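To make the point concrete, here is a minimal numerical sketch with a parameterization of my own invention (nothing in it comes from Hicks): linear supply and demand curves whose positions depend on each side’s expected future price. The market clears in both runs, but when the two sides’ expectations are inconsistent, the clearing quantity is lower.

```python
# Illustrative linear curves: D(p) = 20 + 0.5*pe_demand - p and
# S(p) = -0.5*pe_supply + p. Demanders who expect a lower future price
# buy less today (demand shifts in); suppliers who expect a higher
# future price sell less today (supply shifts in). Parameters are my
# own toy choices, purely for illustration.

def clearing(pe_demand, pe_supply):
    """Solve D(p) = S(p) for the expectation-shifted linear curves."""
    p = (20 + 0.5 * pe_demand + 0.5 * pe_supply) / 2
    q = 20 + 0.5 * pe_demand - p
    return p, q

p0, q0 = clearing(pe_demand=10, pe_supply=10)  # consistent expectations
p1, q1 = clearing(pe_demand=4,  pe_supply=16)  # inconsistent expectations
print(q0, q1)  # the market "clears" both times, but q1 < q0
```

The second run clears the market just as surely as the first; the low quantity reflects nothing but the inconsistency of the two sides’ expectations.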

So for Hicks to assert that his flexprice temporary-equilibrium model was (in Keynes’s terms) a full-employment model without noting the possibility of a significant contraction of output (and employment) in a perfectly competitive flexprice temporary-equilibrium model when there are significant inconsistencies in expectations suggests strongly that Hicks somehow did not fully comprehend what his own creation was all about. His failure to comprehend his own model also explains why he felt the need to abandon the flexprice temporary-equilibrium model in his later work for a fixprice model.

There is, of course, a lot more to be said about all this, and Hicks’s comments concerning the choice of a length of the period are also of interest, but the clear (or so it seems to me) misunderstanding by Hicks of what is entailed by a flexprice temporary equilibrium is an important point to recognize in evaluating both Hicks’s work and his commentary on that work and its relation to Keynes.

Temporary Equilibrium One More Time

It’s always nice to be noticed, especially by Paul Krugman. So I am not upset, but in his response to my previous post, I don’t think that Krugman quite understood what I was trying to convey. I will try to be clearer this time. It will be easiest if I just quote from his post and insert my comments or explanations.

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of “temporary equilibrium”.

Actually, that’s not what I was trying to say. I wasn’t making any explicit connection between Hicks’s temporary-equilibrium concept from Value and Capital and the IS-LM model that he introduced two years earlier in his paper on Keynes and the Classics. Of course that doesn’t mean that the temporary equilibrium method isn’t connected to the IS-LM model; one would need to do a more in-depth study than I have done of Hicks’s intellectual development to determine how much IS-LM was influenced by Hicks’s interest in intertemporal equilibrium and in the method of temporary equilibrium as a way of analyzing intertemporal issues.

This involves using quasi-static methods to analyze a dynamic economy, not because you don’t realize that it’s dynamic, but simply as a tool. In particular, V&C discussed at some length a temporary equilibrium in a three-sector economy, with goods, bonds, and money; that’s essentially full-employment IS-LM, which becomes the 1937 version with some price stickiness. I wrote about that a long time ago.

Now I do think that it’s fair to say that the IS-LM model was very much in the spirit of Value and Capital, in which Hicks deployed an explicit general-equilibrium model to analyze an economy at a Keynesian level of aggregation: goods, bonds, and money. But the temporary-equilibrium aspect of Value and Capital went beyond the Keynesian analysis, because the temporary equilibrium analysis was explicitly intertemporal, all agents formulating plans based on explicit future price expectations, and the inconsistency between expected prices and actual prices was explicitly noted, while in the General Theory, and in IS-LM, price expectations were kept in the background, making an appearance only in the discussion of the marginal efficiency of capital.

So is IS-LM really Keynesian? I think yes — there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story — no more periods, forced saving, boom and bust, instead a focus on how economies can stay depressed. Anyway, does it matter? The real question is whether the method of temporary equilibrium is useful.

That is precisely where I think Krugman’s grasp on the concept of temporary equilibrium is slipping. Temporary equilibrium is indeed about periods, and it is explicitly dynamic. In my previous post I referred to Hicks’s discussion in Capital and Growth, about 25 years after writing Value and Capital, in which he wrote

The Temporary Equilibrium model of Value and Capital, also, is “quasi-static” [like the Keynes theory] – in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.

As I read this passage now — and it really bothered me when I read it as I was writing my previous post — I realize that what Hicks was saying was that his desire to conform to the Keynesian paradigm led him to compromise the integrity of the temporary equilibrium model, by forcing it to be “quasi-static” when it really was essentially dynamic. The challenge has been to convert a “quasi-static” IS-LM model into something closer to the temporary-equilibrium method that Hicks introduced, but did not fully execute in Value and Capital.

What are the alternatives? One — which took over much of macro — is to do intertemporal equilibrium all the way, with consumers making lifetime consumption plans, prices set with the future rationally expected, and so on. That’s DSGE — and I think Glasner and I agree that this hasn’t worked out too well. In fact, economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket.

Yes, I agree! Rational expectations, full-equilibrium models have turned out to be a regression, not an advance. But the way I would make the point is that the temporary-equilibrium method provides a sort of a middle way to do intertemporal dynamics without presuming that consumption plans and investment plans are always optimal.

What about disequilibrium dynamics all the way? Basically, I have never seen anyone pull this off. Like the forever-equilibrium types, constant-disequilibrium theorists have a remarkable tendency to make elementary conceptual mistakes.

Again, I agree. We can’t work without some sort of equilibrium conditions, but temporary equilibrium provides a way to keep the discipline of equilibrium without assuming (nearly) full optimality.

Still, Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised.

Perhaps I was unclear, but I thought I was saying just the opposite. It’s the “quasi-static” IS-LM model, not temporary equilibrium, that fails to take account of the dynamics produced by revised expectations.

I guess I’d say two things. First, I’m not sure that this is always true. Hicks did indeed assume static expectations — the future will be like the present; but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right.

Again, I agree. There may be self-fulfilling expectations of a low-income, low-employment equilibrium. But I don’t think that that is the only explanation for such a situation, and certainly not for the downturn that can lead to such an equilibrium.

Second, those of us who use temporary equilibrium often do think in terms of dynamics as expectations adjust. In fact, you could say that the textbook story of how the short-run aggregate supply curve adjusts over time, eventually restoring full employment, is just that kind of thing. It’s not a great story, but it is the kind of dynamics Glasner wants — and it’s Econ 101 stuff.

Again, I agree. It’s not a great story, but, like it or not, the story is not a Keynesian story.

So where does this leave us? I’m not sure, but my impression is that Krugman, in his admiration for the IS-LM model, is trying too hard to identify IS-LM with the temporary-equilibrium approach, which I think represented a major conceptual advance over both the Keynesian model and the IS-LM representation of the Keynesian model. Temporary equilibrium and IS-LM are not necessarily inconsistent, but I mainly wanted to point out that the two aren’t the same, and shouldn’t be conflated.

Krugman on Minsky, IS-LM and Temporary Equilibrium

Catching up on my blog reading, I found this one from Paul Krugman from almost two weeks ago defending the IS-LM model against Hyman Minsky’s criticism (channeled by his student Lars Syll) that IS-LM misrepresented the message of Keynes’s General Theory. That is an old debate, and it’s a debate that will never be resolved because IS-LM is a nice way of incorporating monetary effects into the pure income-expenditure model that was the basis of Keynes’s multiplier analysis and his policy prescriptions. On the other hand, the model leaves out much of what is most interesting and insightful in the General Theory — precisely the stuff that could not easily be distilled into a simple analytic model.

Here’s Krugman:

Lars Syll approvingly quotes Hyman Minsky denouncing IS-LM analysis as an “obfuscation” of Keynes; Brad DeLong disagrees. As you might guess, so do I.

There are really two questions here. The less important is whether something like IS-LM — a static, equilibrium analysis of output and employment that takes expectations and financial conditions as given — does violence to the spirit of Keynes. Why isn’t this all that important? Because Keynes was a smart guy, not a prophet. The General Theory is interesting and inspiring, but not holy writ.

It’s also a protean work that contains a lot of different ideas, not necessarily consistent with each other. Still, when I read Minsky putting into Keynes’s mouth the claim that

Only a theory that was explicitly cyclical and overtly financial was capable of being useful

I have to wonder whether he really read the book! As I read the General Theory — and I’ve read it carefully — one of Keynes’s central insights was precisely that you wanted to step back from thinking about the business cycle. Previous thinkers had focused all their energy on trying to explain booms and busts; Keynes argued that the real thing that needed explanation was the way the economy seemed to spend prolonged periods in a state of underemployment:

[I]t is an outstanding characteristic of the economic system in which we live that, whilst it is subject to severe fluctuations in respect of output and employment, it is not violently unstable. Indeed it seems capable of remaining in a chronic condition of subnormal activity for a considerable period without any marked tendency either towards recovery or towards complete collapse.

So Keynes started with a, yes, equilibrium model of a depressed economy. He then went on to offer thoughts about how changes in animal spirits could alter this equilibrium; but he waited until Chapter 22 (!) to sketch out a story about the business cycle, and made it clear that this was not the centerpiece of his theory. Yes, I know that he later wrote an article claiming that it was all about the instability of expectations, but the book is what changed economics, and that’s not what it says.

This all seems pretty sensible to me. Nevertheless, there is so much in the General Theory — both good and bad – that isn’t reflected in IS-LM, that to reduce the General Theory to IS-LM is a kind of misrepresentation. And to be fair, Hicks himself acknowledged that IS-LM was merely a way of representing one critical difference in the assumptions underlying the Keynesian and the “Classical” analyses of macroeconomic equilibrium.

But I would take issue with the following assertion by Krugman.

The point is that Keynes very much made use of the method of temporary equilibrium — interpreting the state of the economy in the short run as if it were a static equilibrium with a lot of stuff taken provisionally as given — as a way to clarify thought. And the larger point is that he was right to do this.

When people like me use something like IS-LM, we’re not imagining that the IS curve is fixed in position for ever after. It’s a ceteris paribus thing, just like supply and demand. Assuming short-run equilibrium in some things — in this case interest rates and output — doesn’t mean that you’ve forgotten that things change, it’s just a way to clarify your thought. And the truth is that people who try to think in terms of everything being dynamic all at once almost always end up either confused or engaging in a lot of implicit theorizing they don’t even realize they’re doing.

When I think of a temporary equilibrium, the most important – indeed the defining — characteristic of that temporary equilibrium is that expectations of at least some agents have been disappointed. The disappointment of expectations is likely to lead to, though it does not strictly require, a revision of the disappointed expectations and of the plans conditioned on those expectations. The revision of expectations and plans as a result of expectations being disappointed is what gives rise to a dynamic adjustment process. But that is precisely what is excluded from – or at least not explicitly taken into account by – the IS-LM model. There is nothing in the IS-LM model that provides any direct insight into the process by which expectations are revised as a result of being disappointed. That Keynes could so easily think in terms of a depressed economy being in equilibrium suggests to me that he was missing what I regard as the key insight of the temporary-equilibrium method.
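The sort of adjustment process I have in mind can be illustrated with a simple cobweb-style simulation (a toy example of my own, not anything in Hicks or Keynes): each period the market clears given expectations, expectations are disappointed, and agents revise them adaptively, producing exactly the sequence of temporary equilibria that IS-LM abstracts from.

```python
# Cobweb-style sequence of temporary equilibria. Each "week" supply is
# planned on the basis of an expected price, the realized price is read
# off inverse demand, expectations are disappointed, and agents revise
# them adaptively. All parameters are illustrative choices of my own.

def simulate(periods=30, alpha=0.5):
    """Return the path of realized prices under adaptive expectations."""
    expected = 5.0                     # initial expected price
    path = []
    for _ in range(periods):
        supply = 2 + 0.5 * expected    # plans made on the expectation
        actual = 20 - supply           # inverse demand: realized price
        path.append(actual)
        # disappointment: actual != expected, so expectations are revised
        expected += alpha * (actual - expected)
    return path

path = simulate()
print(path[0], path[-1])  # successive temporary equilibria converging to 12
```

Each entry in the path is a temporary equilibrium; the movement from one to the next is the dynamics that disappointed expectations set in motion.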

Of course, there are those who argue, perhaps most notably Roger Farmer, that economies have multiple equilibria, each with different levels of output and employment corresponding to different expectational parameters. That seems to me a more Keynesian approach, an approach recognizing that expectations can be self-fulfilling, than the temporary-equilibrium approach in which the focus is on mistaken and conflicting expectations, not their self-fulfillment.

Now to be fair, I have to admit that Hicks himself, who introduced the temporary-equilibrium approach in Value and Capital (1939), later (1965) suggested in Capital and Growth (p. 65) that both Keynes’s method in the General Theory and the temporary-equilibrium approach of Value and Capital were “quasi-static.” The analysis of the General Theory “is not the analysis of a process; no means has been provided by which we can pass from one Keynesian period to the next. . . . The Temporary Equilibrium model of Value and Capital, also, is quasi-static in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.”

Despite Hicks’s identification of the temporary-equilibrium method with Keynes’s method in the General Theory, I think that Hicks was overly modest in assessing his own contribution in Value and Capital, failing to appreciate the full significance of the method he had introduced. Which, I suppose, just goes to show that you can’t assume that the person who invents a concept or an idea is necessarily the one who has the best, or most comprehensive, understanding of what the concept means or what its significance is.

The Trouble with IS-LM (and its Successors)

Lately, I have been reading a paper by Roger Backhouse and David Laidler, “What Was Lost with IS-LM” (an earlier version is available here) which was part of a very interesting symposium of 11 papers on the IS-LM model published as a supplement to the 2004 volume of History of Political Economy. The main thesis of the paper is that the IS-LM model, like the General Theory of which it is a partial and imperfect distillation, aborted a number of promising developments in the rapidly developing, but still nascent, field of macroeconomics in the 1920s and 1930s, developments that just might, had they not been elbowed aside by the IS-LM model, have evolved into a more useful and relevant theory of macroeconomic fluctuations and policy than we now possess. Even though I have occasionally sparred with Scott Sumner about IS-LM – with me pushing back a bit at Scott’s attacks on IS-LM — I have a lot of sympathy for the Backhouse-Laidler thesis.

The Backhouse-Laidler paper is too long to summarize, but I will just note that there are four types of loss that they attribute to IS-LM, which are all, more or less, derivative of the static equilibrium character of Keynes’s analytic method in both the General Theory and the IS-LM construction.

1 The loss of dynamic analysis. IS-LM is a single-period model.

2 The loss of intertemporal choice and expectations. Intertemporal choice and expectations are excluded a priori in a single-period model.

3 The loss of policy regimes. In a single-period model, policy is a one-time affair. The problem of setting up a regime that leads to optimal results over time doesn’t arise.

4 The loss of intertemporal coordination failures. Another concept that is irrelevant in a one-period model.

There was one particular passage that I found especially impressive. Commenting on the lack of any systematic dynamic analysis in the GT, Backhouse and Laidler observe,

[A]lthough [Keynes] made many remarks that could be (and in some cases were later) turned into dynamic models, the emphasis of the General Theory was nevertheless on unemployment as an equilibrium phenomenon.

Dynamic accounts of how money wages might affect employment were only a little more integrated into Keynes’s formal analysis than they were later into IS-LM. Far more significant for the development in Keynes’s thought is how Keynes himself systematically neglected dynamic factors that had been discussed in previous explanations of unemployment. This was a feature of the General Theory remarked on by Bertil Ohlin (1937, 235-36):

Keynes’s theoretical system . . . is equally “old-fashioned” in the second respect which characterizes recent economic theory – namely, the attempt to break away from an explanation of economic events by means of orthodox equilibrium constructions. No other analysis of trade fluctuations in recent years – with the possible exception of the Mises-Hayek school – follows such conservative lines in this respect. In fact, Keynes is much more of an “equilibrium theorist” than such economists as Cassel and, I think, Marshall.

Backhouse and Laidler go on to cite the Stockholm School (of which Ohlin was a leading figure) as an example of explicitly dynamic analysis.

As Bjorn Hansson (1982) has shown, this group developed an explicit method, using the idea of a succession of “unit periods,” in which each period began with agents having plans based on newly formed expectations about the outcome of executing them, and ended with the economy in some new situation that was the outcome of market processes set in motion by the incompatibility of those plans, and in which expectations had been reformulated, too, in the light of experience. They applied this method to the construction of a wide variety of what they called “model sequences,” many of which involved downward spirals in economic activity at whose very heart lay rising unemployment. This is not the place to discuss the vexed question of the extent to which some of this work anticipated the Keynesian multiplier process, but it should be noted that, in IS-LM, it is the limit to which such processes move, rather than the time path they follow to get there, that is emphasized.

The Stockholm method seems to me exactly the right way to explain business-cycle downturns. In normal times, there is a rough – certainly not perfect, but good enough — correspondence of expectations among agents. That correspondence of expectations implies that the individual plans contingent on those expectations will be more or less compatible with one another. Surprises happen; here and there people are disappointed and regret past decisions, but, on the whole, they are able to adjust as needed to muddle through. There is usually enough flexibility in a system to allow most people to adjust their plans in response to unforeseen circumstances, so that the disappointment of some expectations doesn’t become contagious, causing a systemic crisis.

But when there is some sort of major shock – and it can only be a shock if it is unforeseen – the system may not be able to adjust. Instead, the disappointment of expectations becomes contagious. If my customers aren’t able to sell their products, I may not be able to sell mine. Expectations are like networks. If there is a breakdown at some point in the network, the whole network may collapse or malfunction. Because expectations and plans fit together in interlocking networks, it is possible that even a disturbance at one point in the network can cascade over an increasingly wide group of agents, leading to something like a system-wide breakdown, a financial crisis or a depression.
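The contagion mechanism can be caricatured in a few lines of code (a toy of my own construction, not anything in the literature discussed here): put agents in a chain in which each producer’s plan is viable only if its customer’s plan is viable, and watch a single disappointed expectation propagate upstream.

```python
# Toy expectation network: agent i sells to agent i+1. A shock that
# invalidates one agent's plan disappoints its supplier's expectations,
# which disappoints *its* supplier, and so on up the chain.

def cascade(n=10, shocked=0):
    """Propagate plan failures upstream through a chain of n agents."""
    viable = [True] * n
    viable[shocked] = False            # the unforeseen shock
    changed = True
    while changed:                     # iterate until no more plans fail
        changed = False
        for i in range(n - 1):
            # if my customer can't sell, I can't sell either
            if viable[i] and not viable[i + 1]:
                viable[i] = False
                changed = True
    return viable

print(cascade(shocked=9))  # shock the final buyer: every upstream plan fails
```

Where the shock hits matters: a shock to the last buyer wipes out every plan upstream, while a shock to the first producer, in this deliberately one-directional toy, leaves everyone downstream untouched. That asymmetry is exactly the network property that representative-agent models cannot capture.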

But the “problem” with the Stockholm method was that it was open-ended. It could offer only “a wide variety” of “model sequences,” without specifying a determinate solution. It was just this gap in the Stockholm approach that Keynes was able to fill. He provided a determinate equilibrium, “the limit to which the Stockholm model sequences would move, rather than the time path they follow to get there.” A messy, but insightful, approach to explaining the phenomenon of downward spirals in economic activity coupled with rising unemployment was cast aside in favor of the neater, simpler approach of Keynes. No wonder Ohlin sounds annoyed in his comment, quoted by Backhouse and Laidler, about Keynes. Tractability trumped insight.

Unfortunately, that is still the case today. Open-ended models of the sort that the Stockholm School tried to develop still cannot compete with the RBC and DSGE models that have displaced IS-LM and now dominate modern macroeconomics. The basic idea that modern economies form networks, and that networks have properties that are not reducible to just the nodes forming them has yet to penetrate the trained intuition of modern macroeconomists. Otherwise, how would it have been possible to imagine that a macroeconomic model could consist of a single representative agent? And just because modern macroeconomists have expanded their models to include more than a single representative agent doesn’t mean that the intellectual gap evidenced by the introduction of representative-agent models into macroeconomic discourse has been closed.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
