Archive for the 'general equilibrium' Category

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.
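The water-and-diamonds logic can be made concrete with a toy numerical sketch. The utility functions below are purely hypothetical (logarithmic curves chosen only because they exhibit diminishing marginal utility); the point is that an abundant good’s marginal unit can be worth almost nothing even when the good’s total usefulness dwarfs that of a scarce good.

```python
import math

def marginal_utility(total_utility, quantity, delta=1.0):
    """Approximate marginal utility: the extra utility from one more unit."""
    return total_utility(quantity + delta) - total_utility(quantity)

# Hypothetical total-utility curves with diminishing returns: water is far
# more useful overall (larger coefficient), but consumed in vast quantities.
water_utility = lambda q: 1000 * math.log(1 + q)
diamond_utility = lambda q: 50 * math.log(1 + q)

mu_water_abundant = marginal_utility(water_utility, 10_000)  # 10,000 units already consumed
mu_diamond_scarce = marginal_utility(diamond_utility, 1)     # only one unit held

print(mu_water_abundant)  # ~0.1: nearly worthless at the margin
print(mu_diamond_scarce)  # ~20.3: highly valued at the margin
```

Despite water’s total utility being twenty times that of diamonds in this stipulation, its marginal unit is worth a tiny fraction of a marginal diamond, which is the whole resolution of the paradox of value.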

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output attributable (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist, famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output foregone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian School.
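The imputation logic reduces to a simple rule that can be sketched in a few lines. The alternative uses and values below are entirely hypothetical; the sketch shows only that the “cost” of a chosen use is the value of the best alternative foregone, not an independently given magnitude.

```python
# Hypothetical alternative uses for one unit of a factor (say, an hour of
# machine time), each valued at the marginal value of the output it would produce.
alternative_values = {"chairs": 40.0, "tables": 55.0, "shelves": 30.0}

def opportunity_cost(chosen, alternatives):
    """Mengerian imputation: the cost of the chosen use is the value
    of the most highly valued alternative use foregone."""
    foregone = {use: v for use, v in alternatives.items() if use != chosen}
    return max(foregone.values())

print(opportunity_cost("chairs", alternative_values))  # 55.0: the foregone tables
print(opportunity_cost("tables", alternative_values))  # 40.0: the foregone chairs
```

Notice that the cost figure changes with the menu of alternatives and with the (subjective, conjectural) values attached to them, which is exactly why cost cannot serve as an independent determinant of price.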

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: [Menger’s] two leading disciples, [Eugen von] Böhm-Bawerk and Friedrich von Wieser, [were brothers-in-law, and Wieser was] a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple, von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927 and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, near contemporaries born in 1881 and 1883, from Hayek (born in 1899), who in turn was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian School wound up in either the US or Britain after World War II. Hayek, who had taken a position in London in 1931, moved to the US in 1950, joining the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts in the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU, while Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek’s interests, however, shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist benefactors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because the Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions most uncongenial to the Austrians, Walrasian general-equilibrium theory and Keynesian macroeconomics, the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a regime of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling ten thick volumes of collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably George Mason University.

Rothbard, one of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, was eventually ousted from Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute, at Auburn University, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves from the mainstream by analyzing what they call market processes and by emphasizing the limits on the knowledge and information possessed by actual decision-makers, in contrast to the mainstream preoccupation with equilibrium states. They attribute that misplaced focus on equilibrium to the extravagantly unrealistic and patently false assumptions of mainstream models about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable, and trivial, conclusion entailed by them. In the Kirznerian view, the mainstream focus on equilibrium states under unrealistic assumptions reflects a preoccupation with mathematical formalism, in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now-routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to solve a model is common knowledge among the agents in the model, so that an existing equilibrium cannot be disrupted unless new information arrives randomly and unpredictably. Each agent in the model having been endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, equally semi-omniscient, agents.

Although shreds of subjectivism (i.e., that agents make choices based on their own preference orderings) are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations among economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge on which people act is specific knowledge that individuals acquire in the course of their daily activities and life experience: knowledge that is often difficult to articulate, much less communicate to a central planner; mere intuition and guesswork, yet more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites this mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation which triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to keep the flow of total spending and income constant in nominal terms, while his advice to welcome deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of that model’s obvious implication: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of that program, which he himself abandoned after completing his final work in economic theory, led to a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what was not proved unfortunate not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It is precisely the Austrian attentiveness to the complexity of economic activity (the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants) that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for all those plans to fit together over time if equilibrium is to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even over that of Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby individual plans to produce and consume particular goods over time are reconciled. Hayek’s more granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, in place of an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate asserting that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.
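For readers unfamiliar with the tâtonnement construct, it can be sketched as a simple iterative price adjustment by a fictitious auctioneer. The linear demand and supply curves below are hypothetical; the sketch illustrates the textbook thought experiment, not a claim about how actual markets set prices, which is precisely the explanatory gap at issue.

```python
def tatonnement(demand, supply, p0, step=0.1, tol=1e-6, max_iter=10_000):
    """Walrasian tatonnement sketch: the auctioneer raises the price when
    there is excess demand and lowers it when there is excess supply,
    until the market (approximately) clears. No trades occur out of
    equilibrium, which is exactly what makes the construct untenable
    as a description of real markets."""
    p = p0
    for _ in range(max_iter):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:
            return p
        p += step * excess
    return p

# Hypothetical linear curves: D(p) = 10 - p, S(p) = 2p, so p* = 10/3.
p_star = tatonnement(lambda p: 10 - p, lambda p: 2 * p, p0=1.0)
print(round(p_star, 3))  # ≈ 3.333
```

Even in this best case (a single market, well-behaved curves, a step size chosen to converge), the procedure presupposes a coordinating auctioneer; nothing in it explains how decentralized traders would accomplish the same adjustment.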

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustments by market participants constitute a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, the Austrian belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward the equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. Of the many passages discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of the search for microfoundations, making clear that it long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, the most egregious, though not the only one, being the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, demonstrated the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another member of Menger’s mathematical colloquium, John von Neumann, who shared his interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled, i.e., rendered mutually consistent. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
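To see concretely what a mutually consistent solution looks like, consider the Holmes-Moriarty interaction as a two-person zero-sum game, as von Neumann and Morgenstern modeled it. The payoffs below follow the lines of their discussion, but the exact numbers should be read as illustrative; the point is that the indifference conditions pin down mixed strategies that are consistent with each other by construction. A minimal sketch in Python:

```python
from fractions import Fraction as F

# Illustrative zero-sum payoffs to Moriarty (rows) vs. Holmes (columns);
# each chooses to go on to Dover ('D') or get off at Canterbury ('C').
# (D,D) and (C,C): Moriarty catches Holmes; (C,D): Holmes escapes to the
# continent; (D,C): only a partial escape. Numbers are illustrative.
M = {('D', 'D'): F(100), ('D', 'C'): F(0),
     ('C', 'D'): F(-50), ('C', 'C'): F(100)}

denom = M['D', 'D'] - M['C', 'D'] - M['D', 'C'] + M['C', 'C']

# Moriarty plays D with probability p chosen to make Holmes indifferent
# between his two choices:
p = (M['C', 'C'] - M['C', 'D']) / denom
# Holmes plays D with probability q chosen to make Moriarty indifferent:
q = (M['C', 'C'] - M['D', 'C']) / denom

# Expected payoff (value of the game) to Moriarty at the equilibrium:
value = (p * q * M['D', 'D'] + p * (1 - q) * M['D', 'C']
         + (1 - p) * q * M['C', 'D'] + (1 - p) * (1 - q) * M['C', 'C'])

print(p, q, value)  # Moriarty: Dover with prob 3/5; Holmes: Dover with prob 2/5
```

Neither strategy is optimal in isolation; each probability is defined only relative to the other player’s strategy, which is exactly the interdependence that the representative-agent device assumes away.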

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations, describing market price adjustments as functions of market excess demands, would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
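Samuelson’s formal apparatus is easy to illustrate in the simplest case. For a single non-numeraire market with excess-demand function z(p), the price is assumed to adjust according to dp/dt = k·z(p), and stability is the question of whether the solution path converges on the market-clearing price. The sketch below uses an invented excess-demand function (with equilibrium at p = 1) and crude Euler integration; it is an illustration of the adjustment equation, not of any actual economy:

```python
def excess_demand(p):
    # Invented excess-demand function for illustration: positive below
    # the equilibrium price p* = 1, negative above it.
    return 1.0 / p - 1.0

def samuelson_adjustment(p0, k=0.1, steps=500, dt=1.0):
    """Euler integration of Samuelson's price-adjustment equation
    dp/dt = k * z(p), starting from an out-of-equilibrium price p0."""
    p = p0
    for _ in range(steps):
        p += dt * k * excess_demand(p)
    return p

p_final = samuelson_adjustment(p0=2.0)
print(p_final)  # converges toward the equilibrium price 1.0
```

With one well-behaved market the path converges, but nothing in the construction guarantees convergence once many interrelated markets adjust simultaneously, which is precisely where the disappointing results arose.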

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.
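The dependence of comparative statics on an implicit stability assumption is the content of Samuelson’s correspondence principle, and it can be stated compactly. For a single market with demand D(p, α) and supply S(p), totally differentiating the equilibrium condition gives (a standard textbook derivation, stated here for reference):

```latex
D(p^*,\alpha) = S(p^*)
\quad\Longrightarrow\quad
\frac{dp^*}{d\alpha}
  = -\,\frac{\partial D/\partial \alpha}
            {\partial D/\partial p - \partial S/\partial p}.
```

The comparative-statics derivative can be signed only if the sign of the denominator is known, and it is the stability of the price-adjustment process dp/dt = k[D(p, α) − S(p)], with k > 0, that requires ∂D/∂p − ∂S/∂p < 0. Without the stability assumption, the comparative-statics exercise is uninterpretable.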

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the problems of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had wrestled with in the 1930s. Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary-equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary-equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to their Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic, if not overtly hostile, to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel DeVroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. DeVroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” DeVroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, DeVroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, DeVroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlate principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. DeVroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became explicit later, once right-thinking was enforced, and woe unto him who dared deviate from the right way of thinking.

But, as DeVroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, DeVroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding the equilibrium discipline conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. In its current state, mainstream macroeconomists from a variety of backgrounds have reconciled themselves and adjusted to the methodological absolutism Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow, and, as I have just shown, strictly false even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigm was developing in a way that addressed its points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

 

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading the thread wound up being split into three sub-threads and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving “meaningful theorems” from the 1st- and 2nd-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s meaningful theorems were a misnomer reflecting mid-20th-century operationalism. They can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions b/w markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model requires all the agents who will ever populate the economy, in any time period, to meet before trading starts and, based on initial endowments and common knowledge, to make plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the explanatory gap in the ADM model, observing in 1959 that in a theory of perfect competition, in which every agent takes prices as given, there is no one whose task it is to change prices, so the theory provides no account of how equilibrium prices come to be established.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-markets assumption a backdoor reintroduction of perfect foresight; it also excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject only to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or by the idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning with each agent possessing the knowledge only possessed by an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, arbitrage tending to force transaction prices toward that common expectation notwithstanding the expectational differences of other traders.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.
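
The tatonnement story sketched in points 16–19 can be made concrete with a toy simulation. The code below is my own illustration, not a model from the post: a two-good exchange economy with two Cobb-Douglas consumers, in which the auctioneer adjusts the price of one good in proportion to excess demand. For this well-behaved (gross-substitutes) economy the rule happens to converge, but the SMD results are precisely the warning that nothing guarantees such convergence in general.

```python
# Sketch of Walrasian tatonnement in a two-good exchange economy.
# Illustrative assumptions: two Cobb-Douglas agents, good y as numeraire.

def excess_demand(p, agents):
    """Aggregate excess demand for good x at price p (price of y fixed at 1)."""
    z = 0.0
    for share, end_x, end_y in agents:   # (Cobb-Douglas share, endowments)
        wealth = p * end_x + end_y
        z += share * wealth / p - end_x  # demand for x minus endowment of x
    return z

def tatonnement(agents, p=2.0, step=0.1, tol=1e-10, max_iter=10_000):
    """Adjust price in proportion to excess demand; no trade until convergence."""
    for _ in range(max_iter):
        z = excess_demand(p, agents)
        if abs(z) < tol:
            break
        p = max(p + step * z, 1e-9)      # price rises with excess demand
    return p

agents = [(0.3, 10.0, 2.0), (0.7, 2.0, 8.0)]
p_star = tatonnement(agents)             # settles near 0.8158 for these numbers
```

In this special economy the adjustment rule finds the market-clearing price; the force of the SMD theorem is that, once the gross-substitutes (or similar) restriction is dropped, the same rule can cycle or diverge even when an equilibrium exists.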

The Equilibrium of Each Is the Result of the Equilibrium of All, or, the Rational Expectation of Each is the Result of the Rational Expectation of All

A few weeks ago, I wrote a post whose title (“The Idleness of Each Is the Result of the Idleness of All”) was taken from a marvelous remark in The Trade Cycle, a book by the great, but sadly forgotten, Cambridge economist Frederick Lavington. Lavington was born two years after Ralph Hawtrey and two years before John Maynard Keynes. The brilliant insight expressed so eloquently by Lavington is that the inability of some of those unemployed to find employment may be no more the result of a voluntary decision by an individual worker than the inability of a driver stuck in a traffic jam to drive at the speed he wants is a voluntary decision. The circumstances in which an unemployed worker finds himself may be such that he or she has no practical alternative other than to remain unemployed.

In this post I merely want to express the same idea from two different vantage points. In any economic model, the equilibrium decision of any agent in the model is conditional on a corresponding set of equilibrium decisions taken by all other agents in the model. Unless all other agents are making optimal choices, the equilibrium (optimal) choice of any individual agent is neither feasible nor optimal, because the optimality of any decision is conditional on the decisions taken by all other agents. Only if the optimal decisions of each are mutually consistent are they individually optimal. (Individual optimality does not necessarily result in overall optimality owing to interdependencies (aka externalities) among the individuals). My ability to buy as much as I want to, and to sell as much as I want to, at market-clearing prices is contingent on everyone else being able to buy and sell as much as I and they want to at those same prices.

Now let’s take the argument a step further. Suppose the equilibrium decisions involve making purchases and sales in both the present and the future, according to current expectations of what future conditions will be like. If you are running a business, how much input you buy today to turn into output to be sold tomorrow will depend on the price at which you expect to be able to sell the output produced tomorrow. If decisions to purchase and sell today depend not only on current prices but also on expected future prices, then your optimal decisions about how much to buy and sell now will depend on your expectations of buying and selling prices in the future. For an equilibrium in which everyone can execute his or her plans (as originally formulated) to exist, each person must have rational expectations about what future prices will be, and such rational expectations are possible only when those expectations are mutually consistent. In game-theoretical terms, a Nash equilibrium obtains only when all the individual expectations on which decisions are conditional converge.
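
The mutual-consistency requirement can be seen in miniature in a linear cobweb-style market. The sketch below uses invented parameters of my own, not anything from the post: producers commit supply on the basis of an expected price, and only one expectation reproduces itself as the realized market-clearing price; any other expectation is falsified by the very decisions it induces.

```python
# Sketch: supply depends on the price producers expected; demand on the
# actual price. The "rational" expectation is the fixed point at which
# the expected price reproduces itself. Parameters are illustrative.

def realized_price(expected_price, a=20.0, b=1.0, c=2.0, d=1.5):
    """Demand q = a - b*p; supply q = c + d*expected_price; market clears."""
    supply = c + d * expected_price
    return (a - supply) / b       # price at which demand absorbs the supply

def rational_expectation(a=20.0, b=1.0, c=2.0, d=1.5):
    """Solve realized_price(p*) == p* analytically: p* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

p_star = rational_expectation()   # the one self-fulfilling expectation
```

With these numbers the self-fulfilling expectation is p* = (20 − 2)/(1 + 1.5) = 7.2. Expecting, say, 10 leads suppliers to overproduce, so the realized price comes in below 10 and the expectation is refuted; only the convergent expectation is validated by the outcome it brings about.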

Here is how Tom Schelling explained the idea of rational – i.e., convergent – expectations in a classic discussion of cooperative games.

One may or may not agree with any particular hypothesis as to how a bargainer’s expectations are formed either in the bargaining process or before it and either by the bargaining itself or by other forces. But it does seem clear that the outcome of a bargaining process is to be described most immediately, most straightforwardly, and most empirically, in terms of some phenomenon of stable and convergent expectations. Whether one agrees explicitly to a bargain, or agrees tacitly, or accepts by default, he must if he has his wits about him, expect that he could do no better and recognize that the other party must reciprocate the feeling. Thus, the fact of an outcome, which is simply a coordinated choice, should be analytically characterized by the notion of convergent expectations.

The intuitive formulation, or even a careful formulation in psychological terms, of what it is that a rational player expects in relation to another rational player in the “pure” bargaining game, poses a problem in sheer scientific description. Both players, being rational, must recognize that the only kind of “rational” expectation they can have is a fully shared expectation of an outcome. It is not quite accurate – as a description of a psychological phenomenon – to say that one expects the second to concede something; the second’s readiness to concede or to accept is only an expression of what he expects the first to accept or to concede, which in turn is what he expects the first to expect the second to expect the first to expect, and so on. To avoid an “ad infinitum” in the description process, we have to say that both sense a shared expectation of an outcome; one’s expectation is a belief that both identify the outcome as being indicated by the situation, hence as virtually inevitable. Both players, in effect, accept a common authority – the power of the game to dictate its own solution through their intellectual capacity to perceive it – and what they “expect” is that they both perceive the same solution.

If expectations of everyone do not converge — individuals having conflicting expectations about what will happen — then the expectations of none of the individuals can be rational. Even if one individual correctly anticipates the outcome, from the point of view of the disequilibrium system as a whole, the correct expectations are not rational because those expectations are inconsistent with equilibrium of the entire system. A change in the expectations of any other individual would imply that future prices would change from what had been expected. Only equilibrium expectations can be considered rational, and equilibrium expectations are a set of individual expectations that are convergent.

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written a number of previous posts discussing the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers, by Jack Schwartz, appears in a volume edited by Ernst Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. Unavailable on the internet except behind a paywall, I bought a used copy for $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators; von Neumann algebras; quantum field theory; time-sharing; parallel computing; programming language design and implementation; robotics; set-theoretic approaches in computational logic; proof and program verification systems; multimedia authoring tools; experimental studies of visual perception; and multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its constituent equations can’t be specified or even meaningfully restricted except on the basis of very special simplifying assumptions; it is too special because the simplifying assumptions about the agents, the technologies, the constraints, and the price-setting mechanism are at best only approximations and, at worst, entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach that was alien to the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the necessary qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall “market” for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. Yet that recognition didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

Phillips Curve Musings: Second Addendum on Keynes and the Rate of Interest

In my two previous posts (here and here), I have argued that the partial-equilibrium analysis of a single market, like the labor market, is inappropriate and not particularly relevant, in situations in which the market under analysis is large relative to other markets, and likely to have repercussions on those markets, which, in turn, will have further repercussions on the market under analysis, violating the standard ceteris paribus condition applicable to partial-equilibrium analysis. When the standard ceteris paribus condition of partial equilibrium is violated, as it surely is in analyzing the overall labor market, the analysis is, at least, suspect, or, more likely, useless and misleading.

I suggested that Keynes in chapter 19 of the General Theory was aiming at something like this sort of argument, and I think he was largely right in his argument. But, in all modesty, I think that Keynes would have done better to have couched his argument in terms of the distinction between partial-equilibrium and general-equilibrium analysis. But his Marshallian training, which he simultaneously embraced and rejected, may have made it difficult for him to adopt the Walrasian general-equilibrium approach that Marshall and the Marshallians regarded as overly abstract and unrealistic.

In my next post, I suggested that the standard argument about the tendency of public-sector budget deficits to raise interest rates by competing with private-sector borrowers for loanable funds is fundamentally misguided, because it, too, inappropriately applies partial-equilibrium analysis to a narrow market for government securities, or even to a more broadly defined market for loanable funds in general.

That is a gross mistake, because the rate of interest is determined in a general-equilibrium system along with markets for all long-lived assets, embodying expected flows of income that must be discounted to the present to determine an estimated present value. Some assets are riskier than others, and that risk is reflected in those valuations. But the rate of interest is distilled from the combination of all of those valuations, not prior to, or apart from, those valuations. Interest rates of different duration and different risk are embedded in the entire structure of current and expected prices for all long-lived assets. To focus solely on a very narrow subset of markets for newly issued securities, whose combined value is only a small fraction of the total value of all existing long-lived assets, is to miss the forest for the trees.

What I want to point out in this post is that Keynes, whom I credit for having recognized that partial-equilibrium analysis is inappropriate and misleading when applied to an overall market for labor, committed exactly the same mistake that he condemned in the context of the labor market, by asserting that the rate of interest is determined in a single market: the market for money. According to Keynes, the market rate of interest is that rate which equates the stock of money in existence with the amount of money demanded by the public. The higher the rate of interest, Keynes argued, the less money the public wants to hold.

Keynes, applying the analysis of Marshall and his other Cambridge predecessors, provided a wonderful analysis of the factors influencing the amount of money that people want to hold (usually expressed in terms of a fraction of their income). However, as superb as his analysis of the demand for money was, it was a partial-equilibrium analysis, and there was no recognition on his part that other markets in the economy are influenced by, and exert influence upon, the rate of interest.
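
Keynes’s single-market determination of the interest rate — the very partial-equilibrium move criticized here — amounts to solving one equation in one unknown. A minimal sketch, with an invented exponential money-demand function and made-up parameters of my own, looks like this:

```python
# Sketch of Keynes's liquidity-preference story: the interest rate is
# whatever equates a (hypothetical) money-demand function with a fixed
# money stock. Functional form and parameters are purely illustrative.
import math

def money_demand(r, income, k0=0.5, c=8.0):
    """Desired money holdings as a fraction of income, declining in r."""
    return income * k0 * math.exp(-c * r)

def clearing_rate(money_stock, income, lo=0.0, hi=1.0, tol=1e-12):
    """Bisect for the rate at which money demanded equals the money stock."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if money_demand(mid, income) > money_stock:
            lo = mid          # demand exceeds the stock -> rate must rise
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

r = clearing_rate(money_stock=40.0, income=100.0)   # about 0.028 here
```

The one-equation solve is trivial; the post’s point is that treating it as determining the rate of interest ignores all the other asset markets in which the rate is simultaneously embedded.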

What makes Keynes’s partial-equilibrium analysis of the interest rate so difficult to understand is that, in chapter 17 of the General Theory, a magnificent tour de force of verbal general-equilibrium theorizing, he explained the relationships that must exist between the expected returns on alternative long-lived assets that are held in equilibrium. Yet, disregarding his own analysis of the equilibrium relationship between returns on alternative assets, Keynes insisted on explaining the rate of interest in a one-period model (a model roughly corresponding to IS-LM) with only two alternative assets: money and bonds, but no real capital asset.

A general-equilibrium analysis of the rate of interest ought to have at least two periods, and it ought to have a real capital good that may be held in the present for use or consumption in the future, a possibility entirely missing from the Keynesian model. I have discussed this major gap in the Keynesian model in a series of posts (here, here, here, here, and here) about Earl Thompson’s 1976 paper “A Reformulation of Macroeconomic Theory.”

Although Thompson’s model seems to me too simple to account for many macroeconomic phenomena, it would have been a far better starting point for the development of macroeconomics than any of the models from which modern macroeconomic theory has evolved.

Phillips Curve Musings: Addendum on Budget Deficits and Interest Rates

In my previous post, I discussed a whole bunch of stuff, but I spent a lot of time discussing the inappropriate use of partial-equilibrium supply-demand analysis to explain price and quantity movements when price and quantity movements in those markets are dominated by precisely those forces that are supposed to be held constant — the old ceteris paribus qualification — in doing partial equilibrium analysis. Thus, the idea that in a depression or deep recession, high unemployment can be cured by cutting nominal wages is a classic misapplication of partial equilibrium analysis in a situation in which the forces primarily affecting wages and employment are not confined to a supposed “labor market,” but reflect broader macro-economic conditions. As Keynes understood, but did not explain well to his economist readers, analyzing unemployment in terms of the wage rate is futile, because wage changes induce further macroeconomic effects that may counteract whatever effects resulted from the wage changes.

Well, driving home this afternoon, I was listening to Marketplace on NPR with Kai Ryssdal interviewing Neil Irwin. Ryssdal asked Irwin why there is so much nervousness about the economy when unemployment and inflation are both about as low as they have ever been — certainly at the same time — in the last 50 years. Irwin’s response was that it is unsettling to many people that, with budget deficits high and rising, we observe stable inflation and falling interest rates on long-term Treasuries. This, after we have been told for so long that budget deficits drive up the cost of borrowing money and are also a major cause of inflation. The cognitive dissonance of stable inflation, falling interest rates and rapidly rising budget deficits, Irwin suggested, accounts for a vague feeling of disorientation, and gives rise to fears that the current apparent stability can’t last very long and will lead to some sort of distress or crisis in the future.

I’m not going to try to reassure Ryssdal and Irwin that there will never be another crisis. I certainly wouldn’t venture to say that all is now well with the Republic, much less with the rest of the world. I will just stick to the narrow observation that the bad habit of predicting the future course of interest rates by the size of the current budget deficit has no basis in economic theory, and reflects a colossal misunderstanding of how interest rates are determined. And that misunderstanding is precisely the one I discussed in my previous post about the misuse of partial-equilibrium analysis when general-equilibrium analysis is required.

To infer anything about interest rates from the market for government debt is a category error. Government debt is a long-lived financial asset providing an income stream, and its price reflects the current value of the promised income stream. Based on the price of a particular instrument with a given duration, it is possible to calculate a corresponding interest rate. That calculation is just a fairly simple mathematical exercise.
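That "fairly simple mathematical exercise" can be sketched in a few lines. This is a minimal illustration with hypothetical numbers (annual coupons, no day-count or quoting conventions), not a description of how Treasury yields are actually computed in practice:

```python
def bond_price(face, coupon_rate, years, y):
    """Present value of an annual-coupon bond discounted at yield y."""
    coupon = face * coupon_rate
    pv = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv + face / (1 + y) ** years

def yield_from_price(price, face, coupon_rate, years, lo=-0.5, hi=1.0):
    """Back out the yield implied by an observed price, by bisection
    (bond_price is strictly decreasing in y)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if bond_price(face, coupon_rate, years, mid) > price:
            lo = mid  # model price too high => implied yield must be higher
        else:
            hi = mid
    return (lo + hi) / 2

# A bond priced at par yields its coupon rate: 2% here.
print(round(yield_from_price(100.0, 100.0, 0.02, 10), 4))  # → 0.02
```

The point of the exercise is only that the interest rate is *calculated from* the observed price of the income stream; nothing in the calculation says where that price comes from.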

But it is a mistake to think that the interest rate for that duration is determined in the market for government debt of that duration. Why? Because there are many other physical assets and financial instruments that could be held instead of government debt of any particular duration. And asset holders in a financially sophisticated economy can easily shift from one type of asset to another at will, at fairly minimal transactions costs. So it is very unlikely that any long-lived asset is so special that the expected yield from holding it varies independently of the expected yield from holding the alternative assets.

That’s not to say that there are no differences in the expected yields from different assets, just that at the margin, taking into account the different characteristics of different assets, their expected returns must be fairly closely connected, so that any large change in the conditions in the market for any single asset is unlikely to have a large effect on the price of that asset alone. Rather, any change in one market will cause shifts in asset-holdings across different markets that will tend to offset the immediate effect that would have been reflected in a single market viewed in isolation.

This holds true as long as each specific market is relatively small compared to the entire economy. That is certainly true for the US economy and the world economy into which the US economy is very closely integrated. The value of all assets — real and financial — dwarfs the total outstanding value of US Treasuries. Interest rates are a measure of the relationship between expected flows of income and the value of the underlying assets.

To assume that increased borrowing by the US government to fund a substantial increase in the US budget deficit will substantially affect the overall economy-wide relationship between current and expected future income flows on the one hand and asset values on the other is wildly implausible. So no one should be surprised to find that the recent sharp increase in the US budget deficit has had no perceptible effect on the interest rates that US government debt now yields.

A more likely cause of a change in interest rates would be an increase in expected inflation, but inflation expectations are not necessarily correlated with the budget deficit, and changing inflation expectations aren’t necessarily reflected in corresponding changes in nominal interest rates, contrary to what Monetarist economists have often maintained.

So it’s about time that we disabused ourselves of the simplistic notion that changes in the budget deficit have any substantial effect on interest rates.

Phillips Curve Musings

There’s a lot of talk about the Phillips Curve these days; people wonder why, with the unemployment rate reaching historically low levels, nominal and real wages have increased minimally with inflation remaining securely between 1.5 and 2%. The Phillips Curve, for those untutored in basic macroeconomics, depicts a relationship between inflation and unemployment. The original empirical Phillips Curve relationship showed that high rates of unemployment were associated with low or negative rates of wage inflation while low rates of unemployment were associated with high rates of wage inflation. This empirical relationship suggested a causal theory that the rate of wage increase tends to rise when unemployment is low and tends to fall when unemployment is high, a causal theory that seems to follow from a simple supply-demand model in which wages rise when there is an excess demand for labor (unemployment is low) and wages fall when there is an excess supply of labor (unemployment is high).

Viewed in this light, low unemployment, signifying a tight labor market, signals that inflation is likely to rise, providing a rationale for monetary policy to be tightened to prevent inflation from rising as it normally does when unemployment is low. Seeming to accept that rationale, the Fed has gradually raised interest rates for the past two years or so. But the increase in interest rates has now slowed the expansion of employment and the decline of unemployment to historic lows. Nor has the improving employment situation resulted in any increase in price inflation; it has produced at most a minimal increase in the rate of increase in wages.

In a couple of previous posts about sticky wages (here and here), I’ve questioned whether the simple supply-demand model of the labor market motivating the standard interpretation of the Phillips Curve is a useful way to think about wage adjustment and inflation-employment dynamics. I’ve offered a few reasons why the supply-demand model, though applicable in some situations, is not useful for understanding how wages adjust.

The particular reason that I want to focus on here is Keynes’s argument in chapter 19 of the General Theory (though I express it in terms different from his) that supply-demand analysis can’t explain how wages and employment are determined. The upshot of his argument I believe is that supply-demand analysis only works in a partial-equilibrium setting in which feedback effects from the price changes in the market under consideration don’t affect equilibrium prices in other markets, so that the position of the supply and demand curves in the market of interest can be assumed stable even as price and quantity in that market adjust from one equilibrium to another (the comparative-statics method).

Because the labor market, affecting almost every other market, is not a small part of the economy, partial-equilibrium analysis is unsuitable for understanding that market, the normal stability assumption being untenable if we attempt to trace the adjustment from one labor-market equilibrium to another after an exogenous disturbance. In the supply-demand paradigm, unemployment is a measure of the disequilibrium in the labor market, a disequilibrium that could – at least in principle — be eliminated by a wage reduction sufficient to equate the quantity of labor services supplied with the amount demanded. Viewed from this supply-demand perspective, the failure of the wage to fall to a supposed equilibrium level is attributable to some sort of endogenous stickiness or some external impediment (minimum wage legislation or union intransigence) in wage adjustment that prevents the normal equilibrating free-market adjustment mechanism. But the habitual resort to supply-demand analysis by economists, reinforced and rewarded by years of training and professionalization, is actually misleading when applied in an inappropriate context.

So Keynes was right to challenge this view of a potentially equilibrating market mechanism that is somehow stymied from behaving in the manner described in the textbook version of supply-demand analysis. Instead, Keynes argued that the level of employment is determined by the level of spending and income at an exogenously given wage level, an approach that seems to be deeply at odds with idea that price adjustments are an essential part of the process whereby a complex economic system arrives at, or at least tends to move toward, an equilibrium.

One of the main motivations for a search for microfoundations in the decades after the General Theory was published was to be able to articulate a convincing microeconomic rationale for persistent unemployment that was not eliminated by the usual tendency of market prices to adjust to eliminate excess supplies of any commodity or service. But Keynes was right to question whether there is any automatic market mechanism that adjusts nominal or real wages in a manner even remotely analogous to the adjustment of prices in organized commodity or stock exchanges – the sort of markets that serve as exemplars of automatic price adjustments in response to excess demands or supplies.

Keynes was also correct to argue that, even if there was a mechanism causing automatic wage adjustments in response to unemployment, the labor market, accounting for roughly 60 percent of total income, is so large that any change in wages necessarily affects all other markets, causing system-wide repercussions that might well offset any employment-increasing tendency of the prior wage adjustment.

But what I want to suggest in this post is that Keynes’s criticism of the supply-demand paradigm is relevant to any general-equilibrium system in the following sense: if a general-equilibrium system is considered from an initial non-equilibrium position, does the system have any tendency to move toward equilibrium? And to make the analysis relatively tractable, assume that the system is such that a unique equilibrium exists. Before proceeding, I also want to note that I am not arguing that traditional supply-demand analysis is necessarily flawed; I am just emphasizing that traditional supply-demand analysis is predicated on a macroeconomic foundation: that all markets but the one under consideration are in, or are in the neighborhood of, equilibrium. It is only because the system as a whole is in the neighborhood of equilibrium, that the microeconomic forces on which traditional supply-demand analysis relies appear to be so powerful and so stabilizing.

However, if our focus is a general-equilibrium system, microeconomic supply-demand analysis of a single market in isolation provides no basis on which to argue that the system as a whole has a self-correcting tendency toward equilibrium. To make such an argument is to commit a fallacy of composition. The tendency of any single market toward equilibrium is premised on an assumption that all markets but the one under analysis are already at, or in the neighborhood of, equilibrium. But when the system as a whole is in a disequilibrium state, the method of partial equilibrium analysis is misplaced; partial-equilibrium analysis provides no ground (no micro-foundation) for an argument that the adjustment of market prices in response to excess demands and excess supplies will ever (much less rapidly) guide the entire system back to an equilibrium state.

The lack of automatic market forces that return a system not in the neighborhood of equilibrium (for purposes of this discussion “neighborhood” is left undefined) back to equilibrium is implied by the Sonnenschein-Mantel-Debreu Theorem, which shows that, even if a unique general equilibrium exists, there may be no rule or algorithm for increasing (decreasing) prices in markets with excess demands (supplies) by which the general-equilibrium price vector would be discovered in a finite number of steps.

The theorem holds even under a Walrasian tatonnement mechanism in which no trading at disequilibrium prices is allowed. The reason is that the interactions between individual markets may be so complicated that a price-adjustment rule will not eliminate all excess demands, because even if a price adjustment reduces excess demand in one market, that price adjustment may cause offsetting disturbances in one or more other markets. So, unless the equilibrium price vector is somehow hit upon by accident, no rule or algorithm for price adjustment based on the excess demand in each market will necessarily lead to discovery of the equilibrium price vector.
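To make the tatonnement rule concrete, here is a deliberately well-behaved toy example (the economy, names, and numbers are my own illustration, not anything from the theorem): a two-good exchange economy with Cobb-Douglas consumers, where excess demands satisfy gross substitutability, so adjusting the price in proportion to excess demand happens to converge. The Sonnenschein-Mantel-Debreu result is precisely that nothing guarantees this behavior for arbitrary excess-demand functions:

```python
def excess_demand_good1(p1, consumers):
    """Aggregate excess demand for good 1 at relative price p1 (good 2 is the
    numeraire). Each consumer is (alpha, e1, e2), with Cobb-Douglas utility
    x1**alpha * x2**(1 - alpha) and endowment (e1, e2)."""
    z = 0.0
    for alpha, e1, e2 in consumers:
        wealth = p1 * e1 + e2
        z += alpha * wealth / p1 - e1  # Cobb-Douglas demand minus endowment
    return z

def tatonnement(consumers, p1=2.0, step=0.5, iters=1000):
    """Raise p1 when good 1 is in excess demand, lower it when in excess
    supply; no trading takes place at the provisional prices."""
    for _ in range(iters):
        p1 = max(1e-9, p1 + step * excess_demand_good1(p1, consumers))
    return p1

# Two symmetric consumers; the equilibrium relative price is 1.
consumers = [(0.5, 1.0, 0.0), (0.5, 0.0, 1.0)]
print(round(tatonnement(consumers), 6))  # → 1.0
```

In this gross-substitutes case the adjustment is a contraction near the equilibrium, which is exactly the property that general excess-demand functions need not possess.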

The Sonnenschein-Mantel-Debreu Theorem reinforces the insight of Kenneth Arrow in an important 1959 paper, “Toward a Theory of Price Adjustment,” which posed the question: how does the theory of perfect competition account for the determination of the price at which all agents can buy or sell as much as they want at the equilibrium (“market-clearing”) price? As Arrow observed, “there exists a logical gap in the usual formulations of the theory of perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities.”

Prices in perfect competition are taken as parameters by all agents in the model, and optimization by agents consists in choosing optimal quantities. The equilibrium solution allows the mutually consistent optimization by all agents at the equilibrium price vector. This is true for the general-equilibrium system as a whole, and for partial equilibrium in every market. Not only is there no positive theory of price adjustment within the competitive general-equilibrium model, as pointed out by Arrow, but the Sonnenschein-Mantel-Debreu Theorem shows that there’s no guarantee that even the notional tatonnement method of price adjustment can ensure that a unique equilibrium price vector will be discovered.

While acknowledging his inability to fill the gap, Arrow suggested that, because perfect competition and price taking are properties of general equilibrium, there are inevitably pockets of market power in non-equilibrium states, so that some transactors in non-equilibrium states are price searchers rather than price takers, and therefore choose both an optimal quantity and an optimal price. I have no problem with Arrow’s insight as far as it goes, but it still doesn’t really solve his problem, because he couldn’t explain, even intuitively, how a disequilibrium system with some agents possessing market power (either as sellers or buyers) transitions into an equilibrium system in which all agents are price-takers who can execute their planned optimal purchases and sales at the parametric prices.

One of the few helpful, but, as far as I can tell, totally overlooked, contributions of the rational-expectations revolution was to solve (in a very narrow sense) the problem that Arrow identified and puzzled over, although Hayek, Lindahl and Myrdal, in their original independent formulations of the concept of intertemporal equilibrium, had already provided the key to the solution. Hayek, Lindahl, and Myrdal showed that an intertemporal equilibrium is possible only insofar as agents form expectations of future prices that are so similar to each other that, if future prices turn out as expected, the agents would be able to execute their planned sales and purchases as expected.

But if agents have different expectations about the future price(s) of some commodity(ies), and if their plans for future purchases and sales are conditioned on those expectations, then when the expectations of at least some agents are inevitably disappointed, those agents will necessarily have to abandon or revise their previously formulated plans.

What led to Arrow’s confusion about how equilibrium prices are arrived at was the habit of thinking that market prices are determined by way of a Walrasian tatonnement process (supposedly mimicking the haggling over price by traders). So the notion that a mythical market auctioneer, who first calls out prices at random (prix criés au hasard), and then, based on the tallied market excess demands and supplies, adjusts those prices until all markets “clear,” is untenable, because continual trading at disequilibrium prices keeps changing the solution of the general-equilibrium system. An actual system with trading at non-equilibrium prices may therefore be moving away from, rather than converging on, an equilibrium state.

Here is where the rational-expectations hypothesis comes in. The rational-expectations assumption posits that revisions of previously formulated plans are never necessary, because all agents actually do correctly anticipate the equilibrium price vector in advance. That is indeed a remarkable assumption to make; it is an assumption that all agents in the model have the capacity to anticipate, insofar as their future plans to buy and sell require them to anticipate, the equilibrium prices that will prevail for the products and services that they plan to purchase or sell. Of course, in a general-equilibrium system, all prices being determined simultaneously, the future equilibrium prices of some products cannot generally be forecast in isolation from the equilibrium prices of all other products. So, in effect, the rational-expectations hypothesis supposes that each agent in the model is an omniscient central planner able to solve an entire general-equilibrium system for all future prices!

But let us not be overly nitpicky about details. So forget about false trading, and forget about the Sonnenschein-Mantel-Debreu theorem. Instead, just assume that, at time t, agents form rational expectations of the future equilibrium price vector in period (t+1). If agents at time t form rational expectations of the equilibrium price vector in period (t+1), then they may well assume that the equilibrium price vector in period t is equal to the expected price vector in period (t+1).

Now, the expected price vector in period (t+1) may or may not be an equilibrium price vector in period t. If it is an equilibrium price vector in period t as well as in period (t+1), then all is right with the world, and everyone will succeed in buying and selling as much of each commodity as he or she desires. If not, prices may or may not adjust in response to that disequilibrium, and expectations may or may not change accordingly.

Thus, instead of positing a mythical auctioneer in a contrived tatonnement process as the mechanism whereby prices are determined for currently executed transactions, the rational-expectations hypothesis posits expected future prices as the basis for the prices at which current transactions are executed, providing a straightforward solution to Arrow’s problem. The prices at which agents are willing to purchase or sell correspond to their expectations of prices in the future. If they find trading partners with similar expectations of future prices, they will reach agreement and execute transactions at those prices. If they don’t find traders with similar expectations, they will either be unable to transact, or will revise their price expectations, or they will assume that current market conditions are abnormal and then decide whether to transact at prices different from those they had expected.

When current prices are more favorable than expected, agents will want to buy or sell more than they would have if current prices were equal to their expectations for the future. If current prices are less favorable than they expect future prices to be, they will not transact at all or will seek to buy or sell less than they would have bought or sold if current prices had equaled expected future prices. The dichotomy between observed current prices, dictated by current demands and supplies, and expected future prices is unrealistic; all current transactions are made with an eye to expected future prices and to their opportunities to postpone current transactions until the future, or to advance future transactions into the present.

If current prices for similar commodities are not uniform in all current transactions, a circumstance that Arrow attributes to the existence of varying degrees of market power across imperfectly competitive suppliers, price dispersion may actually be caused, not by market power, but by dispersion in the expectations of future prices held by agents. Sellers expecting future prices to rise will be less willing to sell at relatively low prices now than are suppliers with pessimistic expectations about future prices. Equilibrium occurs when all transactors share the same expectations of future prices and expected future prices correspond to equilibrium prices in the current period.

Of course, that isn’t the only possible equilibrium situation. There may be situations in which a future event that will change a subset of prices can be anticipated. The anticipation of the future event affects not only expected future prices; it must necessarily affect current prices as well, insofar as current supplies can be carried from the present into the future, current purchases can be postponed until the future, or future consumption can be shifted into the present.
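The intertemporal link works through a simple no-arbitrage logic, which can be sketched as follows (a stylized, frictionless illustration with hypothetical numbers, not a model of any actual market):

```python
def advance_purchase(current_price, expected_future_price, carry_cost):
    """Shift a planned future purchase into the present iff buying now and
    storing the good is expected to be cheaper than waiting."""
    return current_price + carry_cost < expected_future_price

# If the good is expected to cost 110 next period and storage costs 2,
# any current price below 108 triggers buying now, and those purchases
# bid the current price up toward the expected future price net of
# carrying costs.
print(advance_purchase(105.0, 110.0, 2.0))  # → True
print(advance_purchase(109.0, 110.0, 2.0))  # → False
```

This is why an anticipated future event cannot change only expected future prices: for any storable good, arbitrage transmits the anticipation back into current prices.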

The practical upshot of these somewhat disjointed reflections is, I think, primarily to reinforce skepticism about the traditional Phillips Curve supposition that low and falling unemployment necessarily presages an increase in inflation. Wages are not primarily governed by the current state of the labor market, whatever the labor market might even mean in a macroeconomic context.

Expectations rule! And the rational-expectations revolution to the contrary notwithstanding, we have no good theory of how expectations are actually formed and there is certainly no reason to assume that, as a general matter, all agents share the same set of expectations.

The current fairly benign state of the economy reflects the absence of any serious disappointment of price expectations. If an economy is operating not very far from an equilibrium, although expectations are not the same, they likely are not very different. They will only be very different after the unexpected strikes. When that happens, borrowers and traders who had taken positions based on overly optimistic expectations find themselves unable to meet their obligations. It is only then that we will see whether the economy is really as strong and resilient as it now seems.

Expecting the unexpected is hard to do, but you can be sure that, sooner or later, the unexpected is going to happen.

More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

I made a few preliminary points about wage stickiness before getting to my point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So, the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place when workers have jobs while search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are offering and mistakenly turn down job offers that they later regret not having accepted, which is a very skewed way of interpreting what happens in recessions; it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And because the employer requires a workforce of many workers, it cannot generally avoid the morale-damaging effects of a wage cut by replacing its current workers with a different set of workers at a lower wage than the old workers were getting.

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, but whose importance has since come to appear increasingly significant to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions about whether to accept a wage cut, would it be rational for you or any of them to accept a wage cut? The answer might depend on what each worker expects the other workers to do. But given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer to accept a wage cut? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would a wage cut that no one else was offering to accept make the worker willing to work for less appear more desirable to the employer than the workers who wouldn’t accept a cut? One reaction by the employer might be: what’s this guy’s problem?
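To make the coordination problem concrete, here is a minimal sketch in Python. All of the numbers (the workforce size, the wage, and the saving the employer would need in order to forgo the output cut) are assumed purely for illustration; the point is only that a unilateral concession cannot move the aggregate wage bill enough to change the employer’s decision.

```python
def employer_keeps_output(wage_concessions, required_saving):
    """The employer maintains output only if aggregate wage concessions
    reach the saving required to keep current output profitable."""
    return sum(wage_concessions) >= required_saving

n_workers = 100
baseline_wage = 1000          # assumed weekly wage, in dollars
required_saving = 10_000      # assumed saving needed to forgo the output cut

# No one offers a cut: the employer cuts output and lays off workers.
print(employer_keeps_output([0] * n_workers, required_saving))               # False

# One worker offers to work for nothing: still nowhere near enough.
print(employer_keeps_output([baseline_wage] + [0] * 99, required_saving))    # False

# Every worker offers a 10% cut: the aggregate concession suffices.
print(employer_keeps_output([baseline_wage // 10] * n_workers, required_saving))  # True
```

The sketch makes the payoff structure visible: the individual worker’s concession is a rounding error relative to the aggregate saving the employer needs, which is why no individually rational offer gets made.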

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I am now inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because of an overhang of unsold labor flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire; but the situation will not change until wage expectations change. And when wage expectations do change, it is not because the excess demand for workers causes any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers decided to immediately offer to accept sufficiently large wage cuts, some employers might forego plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would be contingent on a level of coordination between workers and employers and a convergence of expectations about future outcomes that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities . . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate effective demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct, but I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly assumes (though too often without recognizing the limiting implications of the assumption) that prices and incomes in other markets are held constant. That assumption essentially means that the market (i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed) is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am assuming here that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have a sound instinct for when it is and is not OK to make the assumption.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand, cost, and supply curves stay where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that move the very demand, cost and supply curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, the markets can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space corresponding to intersections of stable demand curves with stable supply curves.
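A toy numerical sketch may make the contrast concrete. The linear demand and supply schedules below, and all of their coefficients, are invented for illustration: the partial-equilibrium calculation holds the other market’s price fixed (the ceteris paribus move), while the simultaneous calculation requires both clearing conditions to hold at once.

```python
# Market A: demand qA = 10 - pA + 0.5*pB, supply qA = pA  (coefficients assumed)
# Market B: demand qB = 10 - pB + 0.5*pA, supply qB = pB

# Partial equilibrium for market A, holding pB fixed at an arbitrary level:
pB_fixed = 4.0
pA_partial = (10 + 0.5 * pB_fixed) / 2        # clearing condition: 2*pA = 10 + 0.5*pB

# Simultaneous (general) equilibrium: both conditions must clear together:
#    2*pA - 0.5*pB = 10
#   -0.5*pA + 2*pB = 10
# Solved here by Cramer's rule for the 2x2 linear system.
det = 2.0 * 2.0 - (-0.5) * (-0.5)
pA_general = (10.0 * 2.0 - (-0.5) * 10.0) / det
pB_general = (2.0 * 10.0 - (-0.5) * 10.0) / det

print(pA_partial)               # 6.0
print(round(pA_general, 4))     # 6.6667: the partial answer misses the feedback
```

The partial-equilibrium answer is wrong precisely because the spillover from market B feeds back into market A; when the cross-market coefficient is small, the two answers nearly coincide, which is the sense in which smallness justifies partial-equilibrium analysis.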

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another is simply untenable, because not only is the assumption that other variables are being held constant self-contradictory, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment, at least if you have enough sense not to assume that the economy is always in a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all, but one, markets are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.

Hayek and Temporary Equilibrium

In my three previous posts (here, here, and here) about intertemporal equilibrium, I have been emphasizing that the defining characteristic of an intertemporal equilibrium is that agents all share the same expectations of future prices – or at least the same expectations of those future prices on which they are basing their optimizing plans – over their planning horizons. At a given moment at which agents share the same expectations of future prices, the optimizing plans of the agents are consistent, because none of the agents would have any reason to change his optimal plan as long as price expectations do not change, or are not disappointed as a result of prices turning out to be different from what they had been expected to be.

The failure of expected prices to be fulfilled would therefore signify that the information available to agents in forming their expectations and choosing optimal plans conditional on their expectations had been superseded by newly obtained information. The arrival of new information can thus be viewed as a cause of disequilibrium, as can any difference in information among agents. The relationship between information and equilibrium can be expressed as follows: differences in information, or differences in how agents interpret information, lead to disequilibrium, because those differences lead agents to form differing expectations of future prices.

Now the natural way to generalize the intertemporal equilibrium model is to allow agents to have different expectations of future prices, reflecting differences in how they acquire, or in how they process, information. But if agents have different information, so that their expectations of future prices are not the same, the subjectively optimal plans that agents construct on the basis of those expectations will be inconsistent and incapable of implementation without at least some revisions. But this generalization seems incompatible with the equilibrium of optimal plans, prices and price expectations described by Roy Radner, which I have identified as an updated version of Hayek’s concept of intertemporal equilibrium.

The question that I want to explore in this post is how to reconcile the absence of an equilibrium of optimal plans, prices, and price expectations with the intuitive notion of market clearing that we use to analyze asset markets and markets for current delivery. If markets for current delivery and for existing assets are in equilibrium in the sense that prices are adjusting in those markets to equate demand and supply, how can we understand the idea that the optimizing plans that agents are seeking to implement are mutually inconsistent?

The classic attempt to explain this intermediate situation, which partially is and partially is not an equilibrium, was made by J. R. Hicks in 1939 in Value and Capital, when he coined the term “temporary equilibrium” to describe a situation in which current prices are adjusting to equilibrate supply and demand in current markets even though agents are basing their choices of optimal plans to implement over time on different expectations of what prices will be in the future. The divergence of the price expectations on the basis of which agents choose their optimal plans makes it inevitable that some or all of those expectations won’t be realized, and that some, or all, of those agents won’t be able to implement the optimal plans that they have chosen, without at least some revisions.

In his early works on business-cycle theory, Hayek argued that business cycles must be analyzed as deviations by the economy from its equilibrium path. The problem that he acknowledged with this approach was that the tools of equilibrium analysis could be used to analyze the nature of the equilibrium path of an economy, but could not easily be deployed to analyze how an economy performs once it deviates from its equilibrium path. Moreover, cyclical deviations from an equilibrium path tend not to be immediately self-correcting, but rather seem to be cumulative. Hayek attributed the tendency toward cumulative deviations from equilibrium to the lagged effects of monetary expansion, which cause cumulative distortions in the capital structure of the economy that lead at first to an investment-driven expansion of output, income and employment and then later to cumulative contractions in output, income, and employment. But Hayek’s monetary analysis was never really integrated with the equilibrium analysis that he regarded as the essential foundation for a theory of business cycles, so the monetary analysis of the cycle remained largely distinct from, if not inconsistent with, the equilibrium analysis.

I would suggest that for Hayek the Hicksian temporary-equilibrium construct would have been the appropriate theoretical framework within which to formulate a monetary analysis consistent with equilibrium analysis. Although there are hints in the last part of The Pure Theory of Capital that Hayek was thinking along these lines, I don’t believe that he got very far, and he certainly gave no indication that he saw in the Hicksian method the analytical tool with which to weave the two threads of his analysis.

I will now try to explain how the temporary-equilibrium method makes it possible to understand the conditions for a cumulative monetary disequilibrium. I make no attempt to outline a specifically Austrian or Hayekian theory of monetary disequilibrium, but perhaps others will find it worthwhile to do so.

As I mentioned in my previous post, agents understand that their price expectations may not be realized, and that their plans may have to be revised. Agents also recognize that, given the uncertainty underlying all expectations and plans, not all debt instruments (IOUs) are equally reliable. The general understanding that debt (promises to make future payments) must be evaluated and assessed makes it profitable for some agents to specialize in debt assessment. Such specialists are known as financial intermediaries. And, as I also mentioned previously, the existence of financial intermediaries cannot be rationalized in the ADM model, because, all contracts being made in period zero, there can be no doubt that the equilibrium exchanges planned in period zero will be executed whenever and exactly as scheduled, so that everyone’s promise to pay, made in period zero, is equally good and reliable.

For our purposes, a particular kind of financial intermediary, the bank, is of primary interest. The role of a bank is to assess the quality of the IOUs offered by non-banks, and to select from the IOUs offered to it those that are sufficiently reliable to be accepted. Once a prospective borrower’s IOU is accepted, the bank exchanges its own IOU for the non-bank’s IOU. No non-bank would accept another non-bank’s IOU, at least not on terms as favorable as those on which the bank accepts it. In return for the non-bank IOU, the bank credits the borrower with a corresponding amount of its own IOUs, which, because the bank promises to redeem them for the numeraire commodity on demand, are generally accepted at face value.

Thus, bank debt functions as a medium of exchange even as it enables non-bank agents that can demonstrate to the bank a sufficient likelihood of repaying a loan on agreed-upon terms to make current expenditures they could not otherwise have made. Such borrowing and repayments are presumably similar to the borrowing and repayments that would occur in the ADM model unmediated by any financial intermediary. In assessing whether a prospective borrower will repay a loan, the bank makes two kinds of assessments. First, does the borrower have sufficient income-earning capacity to generate enough future income to make the promised repayments that the borrower would be committing himself to make? Second, should the borrower’s future income, for whatever reason, turn out to be insufficient to finance the promised repayments, does the borrower have collateral that would allow the bank to secure repayment from the collateral offered as security? In making both kinds of assessments, the bank has to form an expectation about the future: the future income of the borrower and the future value of the collateral.
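The two assessments might be sketched, very schematically, as a simple acceptance rule. Everything here (the collateral haircut, the numbers) is an assumed illustration, not a model of actual bank underwriting; the sketch shows only that both tests rest on expectations of future magnitudes, which is the point at issue.

```python
def bank_accepts(expected_income, promised_repayment,
                 expected_collateral_value, loan_amount,
                 collateral_haircut=0.8):
    """Accept an IOU if expected future income covers the promised
    repayments, or, failing that, if the (haircut) expected value of
    the collateral secures the loan. Both inputs are expectations."""
    income_test = expected_income >= promised_repayment
    # assumed haircut: expected losses from a forced sale of the collateral
    collateral_test = collateral_haircut * expected_collateral_value >= loan_amount
    return income_test or collateral_test

print(bank_accepts(120.0, 110.0, 0.0, 100.0))    # True: income suffices
print(bank_accepts(90.0, 110.0, 150.0, 100.0))   # True: collateral secures it
print(bank_accepts(90.0, 110.0, 100.0, 100.0))   # False: neither test passes
```

Because both arguments of the rule are expectations of future prices and incomes, the acceptance decision is only as good as the expectations on which it is conditioned, which is what links the bank to the temporary-equilibrium story that follows.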

In a temporary-equilibrium context, the expectations of future prices held by agents are not the same, so the expectations of at least some agents will not be accurate, and some agents won’t be able to execute their plans as intended. Agents that can’t execute their plans as intended are vulnerable if they have incurred future obligations, based on their expectations of future prices, that exceed their repayment capacity given the prices actually realized. If they have sufficient wealth (i.e., asset holdings of sufficient value) they may still be able to repay their obligations. However, in the process they may have to sell assets or reduce their own purchases, thereby reducing the income earned by other agents. Selling assets under the pressure of obligations coming due almost always means selling those assets at a significant loss, which is precisely why it is usually preferable to finance current expenditure by borrowing funds and making repayments on a fixed schedule than to finance the expenditure by the sale of assets.

Now, in adjusting their plans when they observe that their price expectations are disappointed, agents may respond in two different ways. One type of adjustment is to increase sales or decrease purchases of particular goods and services that they had previously been planning to sell or purchase; such marginal adjustments do not fundamentally alter what agents are doing and are unlikely to seriously affect other agents. But it is also possible that disappointed expectations will cause some agents to conclude that their previous plans are no longer sustainable under the conditions in which they unexpectedly find themselves, so that they must scrap their old plans, replacing them with completely new plans. In the latter case, the abandonment of plans that are no longer viable given disappointed expectations may cause other agents to conclude that the plans that they had expected to implement are no longer profitable and must be scrapped.

When agents whose price expectations have been disappointed respond with marginal adjustments in their existing plans rather than scrapping them and replacing them with new ones, a temporary equilibrium with disappointed expectations may still exist and that equilibrium may be reached through appropriate price adjustments in the markets for current delivery despite the divergent expectations of future prices held by agents. Operation of the price mechanism may still be able to achieve a reconciliation of revised but sub-optimal plans. The sub-optimal temporary equilibrium will be inferior to the allocation that would have resulted had agents all held correct expectations of future prices. Nevertheless, given a history of incorrect price expectations and misallocations of capital assets, labor, and other factors of production, a sub-optimal temporary equilibrium may be the best feasible outcome.

But here’s the problem. There is no guarantee that, when prices turn out to be very different from what they were expected to be, the excess demands of agents will adjust smoothly to changes in current prices. A plan that was optimal based on the expectation that the price of widgets would be $500 a unit may well be untenable at a price of $120 a unit. When realized prices are very different from what they had been expected to be, those price changes can lead to discontinuous adjustments, violating a basic assumption (the continuity of excess demand functions) necessary to prove the existence of an equilibrium. Once output prices reach some minimum threshold, the best response for some firms may be to shut down, the excess demand for the product produced by the firm becoming discontinuous at that threshold price. The firms shutting down operations may be unable to repay loans they had obligated themselves to repay based on their disappointed price expectations. If ownership shares in firms forced to cease production are held by households that have predicated their consumption plans on prior borrowing and current repayment obligations, the ability of those households to fulfill their obligations may be compromised once those firms stop paying out the expected profit streams. Banks holding debts that firms or households cannot service may find that their own net worth is reduced sufficiently to make the banks’ own debt unreliable, potentially causing a breakdown in the payment system. Such effects are entirely consistent with a temporary-equilibrium model if actual prices turn out to be very different from what agents had expected and upon which they had constructed their future consumption and production plans.
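A tiny numerical sketch (with assumed numbers unrelated to the widget figures above) shows how a shutdown threshold makes the excess-demand function discontinuous: supply jumps from full capacity to zero once the price falls below the point at which operating is worthwhile.

```python
def firm_supply(price, shutdown_price=150.0, capacity=100.0):
    """Supply jumps from full capacity to zero below the shutdown price
    (both threshold and capacity are assumed for illustration)."""
    return capacity if price >= shutdown_price else 0.0

def excess_demand(price):
    demand = 500.0 - price          # assumed linear demand schedule
    return demand - firm_supply(price)

# Just at and just below the shutdown threshold:
print(excess_demand(150.0))   # 250.0
print(excess_demand(149.0))   # 351.0: a $1 price drop produces a 101-unit jump
```

The jump at the threshold is exactly the kind of discontinuity that breaks the continuity assumption on which the fixed-point existence proofs mentioned below rely: no small price adjustment can bridge the gap the shutdown decision opens up.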

Sufficiently large differences between expected and actual prices in a given period may result in discontinuities in excess demand functions once prices reach critical thresholds, thereby violating the standard continuity assumptions on which the existence of general equilibrium depends under the fixed-point theorems that are the lynchpin of modern existence proofs. C. J. Bliss made such an argument in a 1983 paper (“Consistent Temporary Equilibrium” in the volume Modern Macroeconomic Theory edited by J. P. Fitoussi) in which he also suggested, as I did above, that the divergence of individual expectations implies that agents will not typically regard the debt issued by other agents as homogeneous. Bliss therefore posited the existence of a “Financier” who would subject the borrowing plans of prospective borrowers to an evaluation process to determine whether the plan underlying a prospective loan was likely to generate sufficient cash flow to enable the borrower to repay it. The role of the Financier is to ensure that the plans that firms choose are based on roughly similar expectations of future prices, so that firms will not wind up acting on price expectations that must inevitably be disappointed.

I am unsure how to understand the function that Bliss’s Financier is supposed to perform. Presumably the Financier is meant as a kind of idealized companion to the Walrasian auctioneer rather than as a representation of an actual institution, but the resemblance between what the Financier is supposed to do and what bankers actually do is close enough to make it unclear to me why Bliss chose an obviously fictitious character to weed out business plans based on implausible price expectations, rather than assigning the role to more realistic characters that do what their real-world counterparts are supposed to do. Perhaps Bliss’s implicit assumption is that real-world bankers do not constrain the expectations of prospective borrowers enough to make the existence of a temporary equilibrium likely, so that only an idealized central authority could impose sufficient consistency on price expectations.

But from the perspective of positive macroeconomic and business-cycle theory, explicitly introducing banks that simultaneously provide an economy with a medium of exchange – either based on convertibility into a real commodity or into a fiat base money issued by the monetary authority – while intermediating between ultimate borrowers and ultimate lenders seems to be a promising way of modeling a dynamic economy that sometimes may — and sometimes may not — function at or near a temporary equilibrium.

We observe economies operating in the real world that sometimes appear to be functioning, from a macroeconomic perspective, reasonably well with reasonably high employment, increasing per capita output and income, and reasonable price stability. At other times, these economies do not function well at all, with high unemployment and negative growth, sometimes with high rates of inflation or with deflation. Sometimes, these economies are beset with financial crises in which there is a general crisis of solvency, and even apparently solvent firms are unable to borrow. A macroeconomic model should be able to account in some way for the diversity of observed macroeconomic experience. The temporary equilibrium paradigm seems to offer a theoretical framework capable of accounting for this diversity of experience and for explaining at least in a very general way what accounts for the difference in outcomes: the degree of congruence between the price expectations of agents. When expectations are reasonably consistent, the economy is able to function at or near a temporary equilibrium which is likely to exist. When expectations are highly divergent, a temporary equilibrium may not exist, and even if it does, the economy may not be able to find its way toward the equilibrium. Price adjustments in current markets may be incapable of restoring equilibrium inasmuch as expectations of future prices must also adjust to equilibrate the economy, there being no market mechanism by which equilibrium price expectations can be adjusted or restored.

This, I think, is the insight underlying Axel Leijonhufvud’s idea of a corridor within which an economy tends to stay close to an equilibrium path. However, if the economy drifts or is shocked away from its equilibrium time path, the stabilizing forces that tend to keep an economy within the corridor cease to operate at all, or operate only weakly, so that the tendency for the economy to revert to its equilibrium time path is either absent or disappointingly weak.

The temporary-equilibrium method, it seems to me, might have been a path that Hayek could have successfully taken in pursuing the goal he had set for himself early in his career: to reconcile equilibrium-analysis with a theory of business cycles. Why he ultimately chose not to take this path is a question that, for now at least, I will leave to others to try to answer.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
