Archive for the 'Keynes' Category

An Austrian Tragedy

It was hardly predictable that the New York Review of Books would take notice of Marginal Revolutionaries by Janek Wasserman, marking the sesquicentennial of the publication of Carl Menger’s Grundsätze (Principles of Economics), which, along with Jevons’s Theory of Political Economy and Walras’s Elements of Pure Economics, ushered in the marginal revolution upon which all of modern economics, for better or for worse, is based. The differences among the three founding fathers of modern economic theory were not insubstantial, and the Jevonian version was largely superseded by the work of his younger contemporary Alfred Marshall, so that modern neoclassical economics is built on the work of only one of the original founders, Léon Walras, Jevons’s work having left little impression on the future course of economics.

Menger’s work, however, though largely, but not totally, eclipsed by that of Marshall and Walras, did leave a more enduring imprint and a more complicated legacy than Jevons’s — not only for economics, but for political theory and philosophy, more generally. Judging from Edward Chancellor’s largely favorable review of Wasserman’s volume, one might even hope that a start might be made in reassessing that legacy, a process that could provide an opportunity for mutually beneficial interaction between long-estranged schools of thought — one dominant and one marginal — that are struggling to overcome various conceptual, analytical and philosophical problems for which no obvious solutions seem available.

In view of the failure of modern economists to anticipate the Great Recession of 2008, the worst financial shock since the 1930s, it was perhaps inevitable that the Austrian School, a once favored branch of economics that had made a specialty of booms and busts, would enjoy a revival of public interest.

The theme of Austrians as outsiders runs through Janek Wasserman’s The Marginal Revolutionaries: How Austrian Economists Fought the War of Ideas, a general history of the Austrian School from its beginnings to the present day. The title refers both to the later marginalization of the Austrian economists and to the original insight of its founding father, Carl Menger, who introduced the notion of marginal utility—namely, that economic value does not derive from the cost of inputs such as raw material or labor, as David Ricardo and later Karl Marx suggested, but from the utility an individual derives from consuming an additional amount of any good or service. Water, for instance, may be indispensable to humans, but when it is abundant, the marginal value of an extra glass of the stuff is close to zero. Diamonds are less useful than water, but a great deal rarer, and hence command a high market price. If diamonds were as common as dewdrops, however, they would be worthless.

Menger was not the first economist to ponder . . . the “paradox of value” (why useless things are worth more than essentials)—the Italian Ferdinando Galiani had gotten there more than a century earlier. His central idea of marginal utility was simultaneously developed in England by W. S. Jevons and on the Continent by Léon Walras. Menger’s originality lay in applying his theory to the entire production process, showing how the value of capital goods like factory equipment derived from the marginal value of the goods they produced. As a result, Austrian economics developed a keen interest in the allocation of capital. Furthermore, Menger and his disciples emphasized that value was inherently subjective, since it depends on what consumers are willing to pay for something; this imbued the Austrian school from the outset with a fiercely individualistic and anti-statist aspect.

Menger’s unique contribution is indeed worthy of special emphasis. He was more explicit than Jevons or Walras, and certainly more than Marshall, in explaining that the value of factors of production is derived entirely from the value of the incremental output that could be attributed (or imputed) to their services. This insight implies that cost is not an independent determinant of value, as Marshall, despite accepting the principle of marginal utility, continued to insist, famously referring to demand and supply as the two blades of the analytical scissors that determine value. The cost of production therefore turns out to be nothing but the value of the output forgone when factors are used to produce one output instead of the next most highly valued alternative. Cost therefore does not determine, but is determined by, equilibrium price, which means that, in practice, costs are always subjective and conjectural. (I have made this point in an earlier post in a different context.) I will have more to say below about the importance of Menger’s specific contribution and its lasting imprint on the Austrian school.
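The two ideas at work here, diminishing marginal utility and the imputation of factor values, can be illustrated with a minimal numeric sketch. The logarithmic utility function, the quantities, and the marginal product below are all my own illustrative assumptions, not anything found in Menger:

```python
# A minimal numeric sketch of marginal utility and Mengerian imputation.
# The utility function u(q) = ln(1 + q), the quantities, and the marginal
# product are illustrative assumptions, not Menger's own numbers.

def marginal_utility(quantity):
    # With u(q) = ln(1 + q), marginal utility is u'(q) = 1 / (1 + q):
    # each additional unit is worth less than the one before it.
    return 1.0 / (1.0 + quantity)

# Water is abundant, diamonds are scarce, so the *marginal* unit of water
# is worth almost nothing even though water's total usefulness is enormous.
mu_water = marginal_utility(10_000.0)   # marginal value of yet another glass
mu_diamond = marginal_utility(1.0)      # marginal value of a second diamond

# Imputation: a factor's value derives from the marginal value of the
# output attributable to its services, not from its cost of production.
marginal_product = 3.0                  # extra diamonds a tool yields (assumed)
imputed_factor_value = marginal_product * mu_diamond

print(mu_diamond > mu_water)   # True: scarcity, not usefulness, drives value
print(imputed_factor_value)    # 1.5
```

The same arithmetic run backwards is the point about cost: the tool’s “cost” to any user is just the imputed value of the output it could have produced in its next-best use.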

Menger’s Principles of Economics, published in 1871, established the study of economics in Vienna—before then, no economic journals were published in Austria, and courses in economics were taught in law schools. . . .

The Austrian School was also bound together through family and social ties: his two leading disciples, Eugen von Böhm-Bawerk and Friedrich von Wieser, were brothers-in-law, and Wieser was a close friend of the statistician Franz von Juraschek, Friedrich Hayek’s maternal grandfather. Young Austrian economists bonded on Alpine excursions and met in Böhm-Bawerk’s famous seminars (also attended by the Bolshevik Nikolai Bukharin and the German Marxist Rudolf Hilferding). Ludwig von Mises continued this tradition, holding private seminars in Vienna in the 1920s and later in New York. As Wasserman notes, the Austrian School was “a social network first and last.”

After World War I, the Habsburg Empire was dismantled by the victorious Allies. The Austrian bureaucracy shrank, and university placements became scarce. Menger, the last surviving member of the first generation of Austrian economists, died in 1921. The economic school he founded, with its emphasis on individualism and free markets, might have disappeared under the socialism of “Red Vienna.” Instead, a new generation of brilliant young economists emerged: Schumpeter, Hayek, and Mises—all of whom published best-selling works in English and remain familiar names today—along with a number of less well known but influential economists, including Oskar Morgenstern, Fritz Machlup, Alexander Gerschenkron, and Gottfried Haberler.

Two factual corrections are in order. Menger outlived Böhm-Bawerk, but not his other chief disciple von Wieser, who died in 1926, not long after supervising Hayek’s doctoral dissertation, later published in 1927, and, in 1933, translated into English and published as Monetary Theory and the Trade Cycle. Moreover, a 16-year gap separated Mises and Schumpeter, who were exact contemporaries, from Hayek (born in 1899) who was a few years older than Gerschenkron, Haberler, Machlup and Morgenstern.

All the surviving members or associates of the Austrian school wound up either in the US or Britain after World War II, and Hayek, who had taken a position in London in 1931, moved to the US in 1950, taking a position in the Committee on Social Thought at the University of Chicago after having been refused a position in the economics department. Through the intervention of wealthy sponsors, Mises obtained an academic appointment of sorts at the NYU economics department, where he trained two noteworthy disciples, Murray Rothbard and Israel Kirzner. (Kirzner wrote his dissertation under Mises at NYU; Rothbard did his graduate work at Columbia.) Schumpeter, Haberler and Gerschenkron eventually took positions at Harvard, while Machlup (with some stops along the way) and Morgenstern made their way to Princeton. Hayek’s interests, however, shifted from pure economic theory to deep philosophical questions. While Machlup and Haberler continued to work on economic theory, the Austrian influence on their work after World War II was barely recognizable. Morgenstern and Schumpeter made major contributions to economics, but did not hide their alienation from the doctrines of the Austrian School.

So there was little reason to expect that the Austrian School would survive its dispersal when the Nazis marched unopposed into Vienna in 1938. That it did survive is in no small measure due to its ideological usefulness to anti-socialist sponsors, who financed Hayek’s appointment to the Committee on Social Thought at the University of Chicago and Mises’s appointment at NYU, provided other forms of research support to Hayek, Mises and other like-minded scholars, and funded the Mont Pelerin Society, an early venture in globalist networking, started by Hayek in 1947. That the survival of the Austrian School would probably not have been possible without the support of wealthy benefactors who anticipated that the Austrians would advance their political and economic interests does not discredit or invalidate the research thereby enabled. (In the interest of transparency, I acknowledge that I received support from such sources for two books that I wrote.)

Because Austrian School survivors other than Mises and Hayek either adapted themselves to mainstream thinking without renouncing their earlier beliefs (Haberler and Machlup) or took an entirely different direction (Morgenstern), and because the economic mainstream shifted in two directions that were most uncongenial to the Austrians: Walrasian general-equilibrium theory and Keynesian macroeconomics, the Austrian remnant, initially centered on Mises at NYU, adopted a sharply adversarial attitude toward mainstream economic doctrines.

Despite its minute numbers, the lonely remnant became a house divided against itself, Mises’s two outstanding NYU disciples, Murray Rothbard and Israel Kirzner, holding radically different conceptions of how to carry on the Austrian tradition. An extroverted radical activist, Rothbard was not content just to lead a school of economic thought; he aspired to lead a fantastical anarchistic revolutionary movement to replace all established governments with a regime of private-enterprise anarcho-capitalism. Rothbard’s political radicalism, which, despite his Jewish ancestry, even included dabbling in Holocaust denialism, so alienated his mentor that Mises terminated all contact with Rothbard for many years before his death. Kirzner, self-effacing, personally conservative, and with no political or personal agenda other than the advancement of his own and his students’ scholarship, published hundreds of articles and several books, filling ten thick volumes of his collected works published by the Liberty Fund, while establishing a robust Austrian program at NYU and training many excellent scholars who found positions in respected academic and research institutions. Similar Austrian programs, established under the guidance of Kirzner’s students, were started at other institutions, most notably at George Mason University.

One of the founders of the Cato Institute, which for nearly half a century has been the leading avowedly libertarian think tank in the US, Rothbard was eventually ousted from Cato and proceeded to set up a rival think tank, the Ludwig von Mises Institute in Auburn, Alabama, which has turned into a focal point for extreme libertarians and white nationalists to congregate, get acquainted, and strategize together.

Isolation and marginalization tend to cause a subspecies either to degenerate toward extinction, to somehow blend in with the members of the larger species, thereby losing its distinctive characteristics, or to accentuate its unique traits, enabling it to find some niche within which to survive as a distinct sub-species. Insofar as they have engaged in economic analysis rather than in various forms of political agitation and propaganda, the Rothbardian Austrians have focused on anarcho-capitalist theory and the uniquely perverse evils of fractional-reserve banking.

Rejecting the political extremism of the Rothbardians, Kirznerian Austrians differentiate themselves by analyzing what they call market processes and by emphasizing the limitations on the knowledge and information possessed by actual decision-makers. They fault mainstream models for their preoccupation with equilibrium states, attributing that focus to the extravagantly unrealistic and patently false assumptions those models make about the knowledge possessed by economic agents, assumptions that effectively make equilibrium the inevitable, and trivial, conclusion they entail. In their view, the focus of mainstream models on equilibrium states under unrealistic assumptions reflects a preoccupation with mathematical formalism, in which mathematical tractability rather than sound economics dictates the choice of modeling assumptions.

Skepticism of the extreme assumptions about the informational endowments of agents covers a range of now routine assumptions in mainstream models, e.g., the ability of agents to form precise mathematical estimates of the probability distributions of future states of the world, implying that agents never confront decisions about which they are genuinely uncertain. Austrians also object to the routine assumption that all the information needed to determine the solution of a model is the common knowledge of the agents in the model, so that an existing equilibrium cannot be disrupted unless new information randomly and unpredictably arrives. With each agent in the model endowed with the capacity of a semi-omniscient central planner, solving the model for its equilibrium state becomes a trivial exercise in which the optimal choices of a single agent are taken as representative of the choices made by all of the model’s other, semi-omniscient, agents.

Although shreds of subjectivism — i.e., that agents make choices based on their own preference orderings — are shared by all neoclassical economists, Austrian criticisms of mainstream neoclassical models are aimed at what Austrians consider to be their insufficient subjectivism. It is this fierce commitment to a robust conception of subjectivism, in which an equilibrium state of shared expectations by economic agents must be explained, not just assumed, that Chancellor properly identifies as a distinguishing feature of the Austrian School.

Menger’s original idea of marginal utility was posited on the subjective preferences of consumers. This subjectivist position was retained by subsequent generations of the school. It inspired a tradition of radical individualism, which in time made the Austrians the favorite economists of American libertarians. Subjectivism was at the heart of the Austrians’ polemical rejection of Marxism. Not only did they dismiss Marx’s labor theory of value, they argued that socialism couldn’t possibly work since it would lack the means to allocate resources efficiently.

The problem with central planning, according to Hayek, is that so much of the knowledge that people act upon is specific knowledge that individuals acquire in the course of their daily activities and life experience, knowledge that is often difficult to articulate, much less to communicate to a central planner. Such knowledge may be mere intuition and guesswork, yet it is more reliable than not when acted upon by people whose livelihoods depend on being able to do the right thing at the right time.

Chancellor attributes Austrian mistrust of statistical aggregates or indices, like GDP and price levels, to Austrian subjectivism, which regards such magnitudes as abstractions irrelevant to the decisions of private decision-makers, except perhaps in forming expectations about the actions of government policy makers. (Of course, this exception potentially provides full subjectivist license and legitimacy for macroeconomic theorizing despite Austrian misgivings.) Observed statistical correlations between aggregate variables identified by macroeconomists are dismissed as irrelevant unless grounded in, and implied by, the purposeful choices of economic agents.

But such scruples about the use of macroeconomic aggregates and inferring causal relationships from observed correlations are hardly unique to the Austrian school. One of the most important contributions of the 20th century to the methodology of economics was an article by T. C. Koopmans, “Measurement Without Theory,” which argued that measured correlations between macroeconomic variables provide a reliable basis for business-cycle research and policy advice only if the correlations can be explained in terms of deeper theoretical or structural relationships. The Nobel Prize Committee, in awarding the 1975 Prize to Koopmans, specifically mentioned this paper in describing Koopmans’s contributions. Austrians may be more fastidious than their mainstream counterparts in rejecting macroeconomic relationships not based on microeconomic principles, but they aren’t the only ones mistrustful of mere correlations.

Chancellor cites mistrust of statistical aggregates and price indices as a factor in Hayek’s disastrous policy advice warning against anti-deflationary or reflationary measures during the Great Depression.

Their distrust of price indexes brought Austrian economists into conflict with mainstream economic opinion during the 1920s. At the time, there was a general consensus among leading economists, ranging from Irving Fisher at Yale to Keynes at Cambridge, that monetary policy should aim at delivering a stable price level, and in particular seek to prevent any decline in prices (deflation). Hayek, who earlier in the decade had spent time at New York University studying monetary policy and in 1927 became the first director of the Austrian Institute for Business Cycle Research, argued that the policy of price stabilization was misguided. It was only natural, Hayek wrote, that improvements in productivity should lead to lower prices and that any resistance to this movement (sometimes described as “good deflation”) would have damaging economic consequences.

The argument that deflation stemming from economic expansion and increasing productivity is normal and desirable isn’t what led Hayek and the Austrians astray in the Great Depression; it was their failure to realize that the deflation that triggered the Great Depression was a monetary phenomenon caused by a malfunctioning international gold standard. Moreover, Hayek’s own business-cycle theory explicitly stated that a neutral (stable) monetary policy ought to aim at keeping the flow of total spending and income constant in nominal terms, while his policy advice of welcoming deflation meant a rapidly falling rate of total spending. Hayek’s policy advice was an inexcusable error of judgment, which, to his credit, he did acknowledge after the fact, though many, perhaps most, Austrians have refused to follow him even that far.
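The difference between the two kinds of deflation is easy to see in back-of-the-envelope arithmetic. Treating the price level as nominal spending divided by real output, Hayek’s own neutral-money norm of constant nominal spending implies gently falling prices as productivity raises output, whereas the Depression combined falling prices with collapsing spending. All the numbers below are assumed for illustration only:

```python
# Back-of-the-envelope contrast (assumed numbers) between "good" deflation
# under a constant-nominal-spending norm and the Depression's deflation,
# treating the price level as nominal spending divided by real output.

def price_level(nominal_spending, real_output):
    return nominal_spending / real_output

base = price_level(100.0, 100.0)        # price index = 1.0 in the base period

# Good deflation: spending held constant, productivity raises output 5%.
good = price_level(100.0, 105.0)        # prices drift down by about 4.8%

# Depression deflation: nominal spending collapses far faster than output.
bad = price_level(70.0, 90.0)           # prices fall ~22% as spending craters

print(round(good, 3), round(bad, 3))    # 0.952 0.778
```

The second case is the one Hayek’s own theory condemned: prices fall not because goods are cheaper to produce, but because the flow of total spending has contracted.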

Considered from the vantage point of almost a century, the collapse of the Austrian School seems to have been inevitable. Hayek’s long-shot bid to establish his business-cycle theory as the dominant explanation of the Great Depression was doomed from the start by the inadequacies of the very specific version of his basic model and by his disregard of the obvious implication of that model: prevent total spending from contracting. The promising young students and colleagues who had briefly gathered round him upon his arrival in England mostly attached themselves to other mentors, leaving Hayek with only one or two immediate disciples to carry on his research program. The collapse of that research program, which he himself abandoned after completing his final work in economic theory, marked a research hiatus of almost a quarter century, with the notable exception of publications by his student Ludwig Lachmann, who, having decamped to far-away South Africa, labored in relative obscurity for most of his career.

The early clash between Keynes and Hayek, so important in the eyes of Chancellor and others, is actually overrated. Chancellor, quoting Lachmann and Nicholas Wapshott, describes it as a clash of two irreconcilable views of the economic world, and the clash that defined modern economics. In later years, Lachmann actually sought to effect a kind of reconciliation between their views. It was not a conflict of visions that undid Hayek in 1931-32, it was his misapplication of a narrowly constructed model to a problem for which it was irrelevant.

Although the marginalization of the Austrian School, after its misguided policy advice in the Great Depression and its dispersal during and after World War II, is hardly surprising, the unwillingness of mainstream economists to sort out what was useful and relevant in the teachings of the Austrian School from what is not was unfortunate not only for the Austrians. Modern economics was itself impoverished by its disregard for the complexity and interconnectedness of economic phenomena. It’s precisely the Austrian attentiveness to the complexity of economic activity — the necessity for complementary goods and factors of production to be deployed over time to satisfy individual wants – that is missing from standard economic models.

That Austrian attentiveness, pioneered by Menger himself, to the complementarity of inputs applied over the course of time undoubtedly informed Hayek’s seminal contribution to economic thought: his articulation of the idea of intertemporal equilibrium, which comprehends the interdependence of the plans of independent agents and the need for them all to fit together over the course of time for equilibrium to obtain. Hayek’s articulation represented a conceptual advance over earlier versions of equilibrium analysis stemming from Walras and Pareto, and even from Irving Fisher, who did pay explicit attention to intertemporal equilibrium. But in Fisher’s articulation, intertemporal consistency was described in terms of aggregate production and income, leaving unexplained the mechanisms whereby the individual plans to produce and consume particular goods over time are reconciled. Hayek’s granular exposition enabled him to attend to, and articulate, necessary but previously unspecified relationships between current prices and expected future prices.

Moreover, neither mainstream nor Austrian economists have ever explained how prices adjust in non-equilibrium settings. The focus of mainstream analysis has always been the determination of equilibrium prices, with the implicit understanding that “market forces” move the price toward its equilibrium value. The explanatory gap has been filled by the mainstream New Classical School, which simply posits the existence of an equilibrium price vector and, to replace an empirically untenable tâtonnement process for determining prices, posits an equally untenable rational-expectations postulate to assert that market economies typically perform as if they are in, or near the neighborhood of, equilibrium, so that apparent fluctuations in real output are viewed as optimal adjustments to unexplained random productivity shocks.

Alternatively, in New Keynesian mainstream versions, constraints on price changes prevent immediate adjustments to rationally expected equilibrium prices, leading instead to persistent reductions in output and employment following demand or supply shocks. (I note parenthetically that the assumption of rational expectations is not, as often suggested, an assumption distinct from market-clearing, because the rational expectation of all agents of a market-clearing price vector necessarily implies that the markets clear unless one posits a constraint, e.g., a binding price floor or ceiling, that prevents all mutually beneficial trades from being executed.)

Similarly, the Austrian school offers no explanation of how unconstrained price adjustment by market participants provides a sufficient basis for a systemic tendency toward equilibrium. Without such an explanation, their belief that market economies have strong self-correcting properties is unfounded, because, as Hayek demonstrated in his 1937 paper, “Economics and Knowledge,” price adjustments in current markets don’t, by themselves, ensure a systemic tendency toward equilibrium values that coordinate the plans of independent economic agents unless agents’ expectations of future prices are sufficiently coincident. To take only one passage of many discussing the difficulty of explaining or accounting for a process that leads individuals toward a state of equilibrium, I offer the following as an example:

All that this condition amounts to, then, is that there must be some discernible regularity in the world which makes it possible to predict events correctly. But, while this is clearly not sufficient to prove that people will learn to foresee events correctly, the same is true to a hardly less degree even about constancy of data in an absolute sense. For any one individual, constancy of the data does in no way mean constancy of all the facts independent of himself, since, of course, only the tastes and not the actions of the other people can in this sense be assumed to be constant. As all those other people will change their decisions as they gain experience about the external facts and about other people’s actions, there is no reason why these processes of successive changes should ever come to an end. These difficulties are well known, and I mention them here only to remind you how little we actually know about the conditions under which an equilibrium will ever be reached.

In this theoretical muddle, Keynesian economics and the neoclassical synthesis were abandoned, because the key proposition of Keynesian economics was supposedly the tendency of a modern economy toward an equilibrium with involuntary unemployment while the neoclassical synthesis rejected that proposition, so that the supposed synthesis was no more than an agreement to disagree. That divided house could not stand. The inability of Keynesian economists such as Hicks, Modigliani, Samuelson and Patinkin to find a satisfactory (at least in terms of a preferred Walrasian general-equilibrium model) rationalization for Keynes’s conclusion that an economy would likely become stuck in an equilibrium with involuntary unemployment led to the breakdown of the neoclassical synthesis and the displacement of Keynesianism as the dominant macroeconomic paradigm.

But perhaps the way out of the muddle is to abandon the idea that a systemic tendency toward equilibrium is a property of an economic system, and, instead, to recognize that equilibrium is, as Hayek suggested, a contingent, not a necessary, property of a complex economy. Ludwig Lachmann, cited by Chancellor for his remark that the early theoretical clash between Hayek and Keynes was a conflict of visions, eventually realized that in an important sense both Hayek and Keynes shared a similar subjectivist conception of the crucial role of individual expectations of the future in explaining the stability or instability of market economies. And despite the efforts of New Classical economists to establish rational expectations as an axiomatic equilibrating property of market economies, that notion rests on nothing more than arbitrary methodological fiat.

Chancellor concludes by suggesting that Wasserman’s characterization of the Austrians as marginalized is not entirely accurate inasmuch as “the Austrians’ view of the economy as a complex, evolving system continues to inspire new research.” Indeed, if economics is ever to find a way out of its current state of confusion, following Lachmann in his quest for a synthesis of sorts between Keynes and Hayek might just be a good place to start from.

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek Hicks Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings.

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because no variables are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving a “meaningful theorem” from the first- and second-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It is focused on the effect of changing a single parameter in a single market small enough for the effects on other markets of the parameter change to be made negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems,” a misnomer reflecting mid-20th-century operationalism, can now be understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”
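The comparative-statics exercise just described can be sketched with a deliberately toy linear model. All functional forms and numbers below are my own illustrative assumptions: two equilibria differing only in a small demand parameter are solved and compared, with no pretense of describing the adjustment path between them.

```python
# Comparative statics in a minimal PE model (illustrative functional forms):
# demand q = a - b*p, supply q = c + d*p, so p* = (a - c) / (b + d).

def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)
    return p, a - b * p          # equilibrium price and quantity

# Two equilibria differing only in a small demand-shift parameter `a`;
# nothing here describes how the market moves from one to the other.
p0, q0 = equilibrium(a=10.0, b=1.0, c=2.0, d=1.0)   # p* = 4.0, q* = 6.0
p1, q1 = equilibrium(a=10.5, b=1.0, c=2.0, d=1.0)   # slightly higher demand

# The "meaningful theorem": a demand increase raises both price and quantity.
print(p1 > p0 and q1 > q0)   # True
```

The comparison licenses a causal inference about the parameter change, but, as point 13 says, it is silent about the process connecting the two equilibria.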

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.
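The tatonnement iteration just described can be sketched as follows, for a hypothetical two-agent, two-good Cobb-Douglas exchange economy (all names and numbers are illustrative, not drawn from the text). The auctioneer raises the price of a good when its excess demand is positive and lowers it when negative, and no trade takes place until excess demands are approximately zero.

```python
# A toy Walrasian tatonnement sketch (hypothetical Cobb-Douglas exchange
# economy; parameter values are illustrative assumptions).

alphas = [(0.3, 0.7), (0.6, 0.4)]   # each agent's expenditure shares on goods 1, 2
endow  = [(1.0, 0.0), (0.0, 1.0)]   # each agent's endowment of goods 1, 2

def excess_demand_2(p2):
    # good 1 is numeraire (p1 = 1); Cobb-Douglas demand: x_i = alpha_i * wealth / p_i
    z2 = 0.0
    for (a1, a2), (w1, w2) in zip(alphas, endow):
        wealth = 1.0 * w1 + p2 * w2
        z2 += a2 * wealth / p2 - w2
    return z2

p2 = 1.0                              # the auctioneer's initial cry
for _ in range(1000):
    z2 = excess_demand_2(p2)
    if abs(z2) < 1e-10:
        break                         # (approximate) equilibrium found; trade may begin
    p2 += 0.5 * z2                    # price rises with excess demand, falls with excess supply
print(p2)                             # converges to 7/6 ≈ 1.1667 in this example
```

This example is deliberately well-behaved, so the price-adjustment rule does converge; nothing guarantees such convergence in general, which is exactly the difficulty raised by the results discussed next.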

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all the agents who will ever populate the economy, in all time periods, meeting before trading starts and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.
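Contingent trading at an ADM equilibrium can be illustrated with a hypothetical two-agent, two-state example under expected-log-utility (all numbers are illustrative assumptions). At the announced state prices, each agent trades claims once, before the state of the world is revealed; at those prices excess demand for claims on each state is zero, and both agents end up fully insured.

```python
# A minimal contingent-claims sketch (hypothetical two-agent, two-state
# economy with log utility; names and numbers are illustrative).
from math import isclose

probs = (0.5, 0.5)                    # common beliefs about the two states
endow = [(2.0, 0.0), (0.0, 2.0)]      # each agent's endowment in states 1, 2

def demand(q, w):
    # expected-log-utility demand for state-s consumption: c_s = prob_s * wealth / q_s
    wealth = q[0] * w[0] + q[1] * w[1]
    return tuple(probs[s] * wealth / q[s] for s in range(2))

q = (1.0, 1.0)                        # candidate equilibrium state-contingent prices
plans = [demand(q, w) for w in endow]
for s in range(2):
    total = sum(plan[s] for plan in plans)
    supply = sum(w[s] for w in endow)
    assert isclose(total, supply)     # the claims market for state s clears
print(plans)                          # both agents consume (1.0, 1.0) in both states
```

All transactions are agreed upon at the outset; when a state is realized, the corresponding claims are simply executed, which is the sense in which time "unfolds" according to plans fixed in advance.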

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model, writing in 1959:

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices.

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning with each agent possessing the knowledge only possessed by an omniscient central planner.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices toward that expectation notwithstanding the existence of expectational differences.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.

Graeber Against Economics

David Graeber’s vitriolic essay “Against Economics” in the New York Review of Books has generated responses from Noah Smith and Scott Sumner among others. I don’t disagree with much that Noah or Scott have to say, but I want to dig a little deeper than they did into some of Graeber’s arguments, because even though I think he is badly misinformed on many if not most of the subjects he writes about, I actually have some sympathy for his dissatisfaction with the current state of economics. Graeber wastes no time on pleasantries.

There is a growing feeling, among those who have the responsibility of managing large economies, that the discipline of economics is no longer fit for purpose. It is beginning to look like a science designed to solve problems that no longer exist.

A serious polemicist should avoid blatant mischaracterizations, exaggerations and cheap shots, and should be well-grounded in the object of his critique, thereby avoiding criticisms that undermine his own claims to expertise. I grant that Graeber has some valid criticisms to make, even agreeing with him, at least in part, on some of them. But his indiscriminate attacks on, and caricatures of, all neoclassical economics betray a superficial understanding of that discipline.

Graeber begins by attacking what he considers the misguided and obsessive focus on inflation by economists.

A good example is the obsession with inflation. Economists still teach their students that the primary economic role of government—many would insist, its only really proper economic role—is to guarantee price stability. We must be constantly vigilant over the dangers of inflation. For governments to simply print money is therefore inherently sinful.

Every currency unit, or banknote issued by a central bank, now in circulation, as Graeber must know, has been “printed.” So to say that economists consider it sinful for governments to print money is either a deliberate falsehood, or an emotional rhetorical outburst, as Graeber immediately, and apparently unwittingly, acknowledges!

If, however, inflation is kept at bay through the coordinated action of government and central bankers, the market should find its “natural rate of unemployment,” and investors, taking advantage of clear price signals, should be able to ensure healthy growth. These assumptions came with the monetarism of the 1980s, the idea that government should restrict itself to managing the money supply, and by the 1990s had come to be accepted as such elementary common sense that pretty much all political debate had to set out from a ritual acknowledgment of the perils of government spending. This continues to be the case, despite the fact that, since the 2008 recession, central banks have been printing money frantically [my emphasis] in an attempt to create inflation and compel the rich to do something useful with their money, and have been largely unsuccessful in both endeavors.

Graeber’s use of the ambiguous pronoun “this” beginning the last sentence of the paragraph betrays his own confusion about what he is saying. Central banks are printing money and attempting to “create” inflation while supposedly still believing that inflation is a menace and printing money is a sin. Go figure.

We now live in a different economic universe than we did before the crash. Falling unemployment no longer drives up wages. Printing money does not cause inflation. Yet the language of public debate, and the wisdom conveyed in economic textbooks, remain almost entirely unchanged.

Again showing an inadequate understanding of basic economic theory, Graeber suggests that, in theory if not practice, falling unemployment should cause wages to rise. The Phillips Curve, upon which Graeber’s suggestion relies, represents the empirically observed negative correlation between the rate of average wage increase and the rate of unemployment. But correlation does not imply causation, so there is no basis in economic theory to assert that falling unemployment causes the rate of increase in wages to accelerate. That the empirical correlation between unemployment and wage increases has not recently been in evidence provides no compelling reason for changing textbook theory.

From this largely unfounded attack on economic theory – a theory which I myself consider, in many respects, inadequate and unreliable – Graeber launches a bitter diatribe against the supposed hegemony of economists over policy-making.

Mainstream economists nowadays might not be particularly good at predicting financial crashes, facilitating general prosperity, or coming up with models for preventing climate change, but when it comes to establishing themselves in positions of intellectual authority, unaffected by such failings, their success is unparalleled. One would have to look at the history of religions to find anything like it.

The ability to predict financial crises would be desirable, but that cannot be the sole criterion for whether economics has advanced our understanding of how economic activity is organized or what effects policy changes have. (I note parenthetically that many economists defensively reject the notion that economic crises are predictable on the grounds that if economists could predict a future economic crisis, those predictions would be immediately self-fulfilling. This response, of course, effectively disproves the idea that economists could predict that an economic crisis would occur in the way that astronomers predict solar eclipses. But this response slays a strawman. The issue is not whether economists can predict future crises, but whether they can identify conditions indicating an increased likelihood of a crisis and suggest precautionary measures to reduce the likelihood that a potential crisis will occur. But Graeber seems uninterested in or incapable of engaging the question at even this moderate level of subtlety.)

In general, I doubt that economists can make more than a modest contribution to improved policy-making, and the best that one can hope for is probably that they steer us away from the worst potential decisions rather than identifying the best ones. But no one, as far as I know, has yet been burned at the stake by a tribunal of economists.

To this day, economics continues to be taught not as a story of arguments—not, like any other social science, as a welter of often warring theoretical perspectives—but rather as something more like physics, the gradual realization of universal, unimpeachable mathematical truths. “Heterodox” theories of economics do, of course, exist (institutionalist, Marxist, feminist, “Austrian,” post-Keynesian…), but their exponents have been almost completely locked out of what are considered “serious” departments, and even outright rebellions by economics students (from the post-autistic economics movement in France to post-crash economics in Britain) have largely failed to force them into the core curriculum.

I am now happy to register agreement with something that Graeber says. Economists in general have become overly attached to axiomatic and formalistic mathematical models that create a false and misleading impression of rigor and mathematical certainty. In saying this, I don’t dispute that mathematical modeling is an important part of much economic theorizing, but it should not exclude other approaches to economic analysis and discourse.

As a result, heterodox economists continue to be treated as just a step or two away from crackpots, despite the fact that they often have a much better record of predicting real-world economic events. What’s more, the basic psychological assumptions on which mainstream (neoclassical) economics is based—though they have long since been disproved by actual psychologists—have colonized the rest of the academy, and have had a profound impact on popular understandings of the world.

That heterodox economists have a better record of predicting economic events than mainstream economists is an assertion for which Graeber offers no evidence or examples. I would not be surprised if he could cite examples, but one would have to weigh the evidence surrounding those examples before concluding that predictions by heterodox economists were more accurate than those of their mainstream counterparts.

Graeber returns to the topic of monetary theory, which seems a particular bugaboo of his. Taking the extreme liberty of holding up Mrs. Theresa May as a spokesperson for orthodox economics, he focuses on her definitive 2017 statement that there is no magic money tree.

The truly extraordinary thing about May’s phrase is that it isn’t true. There are plenty of magic money trees in Britain, as there are in any developed economy. They are called “banks.” Since modern money is simply credit, banks can and do create money literally out of nothing, simply by making loans. Almost all of the money circulating in Britain at the moment is bank-created in this way.

What Graeber chooses to ignore is that banks do not operate magically; they make loans and create deposits in seeking to earn profits. Whether they make good or bad decisions is debatable, but the debate isn’t about a magical process; it’s a debate about theory and evidence. Graeber describes how he thinks economists think about how banks create money, correctly observing that there is a debate about how that process works, but without understanding the differences at issue or their significance.

Economists, for obvious reasons, can’t be completely oblivious to the role of banks, but they have spent much of the twentieth century arguing about what actually happens when someone applies for a loan. One school insists that banks transfer existing funds from their reserves, another that they produce new money, but only on the basis of a multiplier effect. . . Only a minority—mostly heterodox economists, post-Keynesians, and modern money theorists—uphold what is called the “credit creation theory of banking”: that bankers simply wave a magic wand and make the money appear, secure in the confidence that even if they hand a client a credit for $1 million, ultimately the recipient will put it back in the bank again, so that, across the system as a whole, credits and debts will cancel out. Rather than loans being based in deposits, in this view, deposits themselves were the result of loans.

The one thing it never seemed to occur to anyone to do was to get a job at a bank, and find out what actually happens when someone asks to borrow money. In 2014 a German economist named Richard Werner did exactly that, and discovered that, in fact, loan officers do not check their existing funds, reserves, or anything else. They simply create money out of thin air, or, as he preferred to put it, “fairy dust.”

Graeber is right that economists differ in how they understand banking. But the simple transfer-of-funds view, a product of the eighteenth century, was gradually rejected over the course of the nineteenth century. The money-multiplier view largely superseded it, enjoying a half-century or more of dominance as a theory of banking, and it remains a popular way for introductory textbooks to explain how banking works, though it would be better if it were decently buried and forgotten. But since James Tobin’s classic essay “Commercial banks as creators of money” was published in 1963, most economists who have thought carefully about banking have concluded that the amount of deposits created by banks corresponds to the quantity of deposits that the public, given its expectations about the future course of the economy and the future course of prices, chooses to hold. The important point is that while a bank can create deposits without incurring more than the negligible cost of making a book-keeping, or an electronic, entry in a customer’s account, the creation of a deposit typically obliges the bank to hold either reserves in its account with the Fed or some amount of Treasury instruments convertible, on very short notice, into reserves at the Fed.
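The loans-create-deposits point can be made concrete with a stylized double-entry sketch (the numbers, including the 10% reserve target, are illustrative assumptions, not a claim about any actual regulatory regime). The loan and the deposit are created together by a book-keeping entry; no pre-existing funds are handed over, but the new deposit brings with it a demand by the bank for reserves or near-reserves.

```python
# A stylized bank balance sheet: making a loan creates a deposit
# (illustrative numbers; the 10% reserve target is an assumption).

bank = {"reserves": 100.0, "loans": 0.0, "deposits": 0.0}

def make_loan(bank, amount):
    # the loan (an asset) and the deposit (a liability) are booked together;
    # no existing funds are transferred
    bank["loans"] += amount
    bank["deposits"] += amount

make_loan(bank, 500.0)
# assets (600) equal deposits (500) plus the bank's initial 100 of equity
assert bank["reserves"] + bank["loans"] == bank["deposits"] + 100.0
# but the new deposit creates a demand for reserves (or short-dated Treasuries):
reserve_target = 0.10 * bank["deposits"]
print(bank, reserve_target)  # 500 of deposits created at negligible book-keeping cost
```

The deposit costs the bank almost nothing to create, yet it is not costless to maintain, which is why deposit creation is disciplined by profit-seeking rather than by magic.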

Graeber seems to think that there is something fundamental at stake for the whole of macroeconomics in the question whether deposits create loans or loans create deposits. I agree that it’s an important question, but not as significant as Graeber believes. But aside from that nuance, what’s remarkable is that Graeber actually acknowledges that the weight of professional opinion is on the side that says that loans create deposits. He thus triumphantly cites a report by Bank of England economists that correctly explained that banks create money and do so in the normal course of business by making loans.

Before long, the Bank of England . . . rolled out an elaborate official report called “Money Creation in the Modern Economy,” replete with videos and animations, making the same point: existing economics textbooks, and particularly the reigning monetarist orthodoxy, are wrong. The heterodox economists are right. Private banks create money. Central banks like the Bank of England create money as well, but monetarists are entirely wrong to insist that their proper function is to control the money supply.

Graeber, I regret to say, is simply exposing the inadequacy of his knowledge of the history of economics. Adam Smith in The Wealth of Nations explained that banks, in creating money, saved the resources that would otherwise have been wasted on creating additional gold and silver. Subsequent economists from David Ricardo through Henry Thornton, J. S. Mill and R. G. Hawtrey were perfectly aware that banks can supply money — either banknotes or deposits — at less than the cost of mining and minting new coins, as they extend their credit in making loans to borrowers. So what is at issue, Graeber to the contrary notwithstanding, is not a dispute between orthodoxy and heterodoxy.

In fact, central banks do not in any sense control the money supply; their main function is to set the interest rate—to determine how much private banks can charge for the money they create.

Central banks set a rental price for reserves, thereby controlling the quantity of reserves into which bank deposits are convertible that is available to the economy. One way to think about that quantity is that the quantity of reserves along with the aggregate demand to hold reserves determines the exchange value of reserves and hence the price level; another way to think about it is that the interest rate or the implied policy stance of the central bank helps to determine the expectations of the public about the future course of the price level which is what determines – within some margin of error or range – what the future course of the price level will turn out to be.

Almost all public debate on these subjects is therefore based on false premises. For example, if what the Bank of England was saying were true, government borrowing didn’t divert funds from the private sector; it created entirely new money that had not existed before.

This is just silly. Funds may or may not be diverted from the private sector, but the total resources available to society are finite. If the central bank creates additional money, it creates additional claims to those resources, and the creation of additional claims to resources necessarily has an effect on the prices of inputs and of outputs.

One might have imagined that such an admission would create something of a splash, and in certain restricted circles, it did. Central banks in Norway, Switzerland, and Germany quickly put out similar papers. Back in the UK, the immediate media response was simply silence. The Bank of England report has never, to my knowledge, been so much as mentioned on the BBC or any other TV news outlet. Newspaper columnists continued to write as if monetarism was self-evidently correct. Politicians continued to be grilled about where they would find the cash for social programs. It was as if a kind of entente cordiale had been established, in which the technocrats would be allowed to live in one theoretical universe, while politicians and news commentators would continue to exist in an entirely different one.

Even if we stipulate that this characterization of what the BBC and newspaper columnists believe is correct, what we would have — at best — is a commentary on the ability of economists to communicate their understanding of how the economy works to the intelligentsia that communicates to ordinary citizens. It is not in and of itself a commentary on the state of economic knowledge, inasmuch as Graeber himself concedes that most economists don’t accept monetarism. And that has been the case, as Noah Smith pointed out in his Bloomberg column on Graeber, since the early 1980s when the Monetarist experiment in trying to conduct monetary policy by controlling the monetary aggregates proved entirely unworkable and had to be abandoned as it was on the verge of precipitating a financial crisis.

Only after this long warmup decrying the sorry state of contemporary economic theory does Graeber begin discussing the book under review Money and Government by Robert Skidelsky.

What [Skidelsky] reveals is an endless war between two broad theoretical perspectives. . . The crux of the argument always seems to turn on the nature of money. Is money best conceived of as a physical commodity, a precious substance used to facilitate exchange, or is it better to see money primarily as a credit, a bookkeeping method or circulating IOU—in any case, a social arrangement? This is an argument that has been going on in some form for thousands of years. What we call “money” is always a mixture of both, and, as I myself noted in Debt (2011), the center of gravity between the two tends to shift back and forth over time. . . .One important theoretical innovation that these new bullion-based theories of money allowed was, as Skidelsky notes, what has come to be called the quantity theory of money (usually referred to in textbooks—since economists take endless delight in abbreviations—as QTM).

But these two perspectives are not mutually exclusive, and, depending on time, place, circumstances, and the particular problem that is the focus of attention, either of the two may be the appropriate paradigm for analysis.

The QTM argument was first put forward by a French lawyer named Jean Bodin, during a debate over the cause of the sharp, destabilizing price inflation that immediately followed the Iberian conquest of the Americas. Bodin argued that the inflation was a simple matter of supply and demand: the enormous influx of gold and silver from the Spanish colonies was cheapening the value of money in Europe. The basic principle would no doubt have seemed a matter of common sense to anyone with experience of commerce at the time, but it turns out to have been based on a series of false assumptions. For one thing, most of the gold and silver extracted from Mexico and Peru did not end up in Europe at all, and certainly wasn’t coined into money. Most of it was transported directly to China and India (to buy spices, silks, calicoes, and other “oriental luxuries”), and insofar as it had inflationary effects back home, it was on the basis of speculative bonds of one sort or another. This almost always turns out to be true when QTM is applied: it seems self-evident, but only if you leave most of the critical factors out.

In the case of the sixteenth-century price inflation, for instance, once one takes account of credit, hoarding, and speculation—not to mention increased rates of economic activity, investment in new technology, and wage levels (which, in turn, have a lot to do with the relative power of workers and employers, creditors and debtors)—it becomes impossible to say for certain which is the deciding factor: whether the money supply drives prices, or prices drive the money supply.

As a matter of logic, if the value of money depends on the precious metals (gold or silver) from which coins are minted, the value of money is necessarily affected by a change in the value of the metals used to coin money. A large increase in the stock of gold and silver, as Graeber concedes, must reduce the value of those metals, so the subsequent inflation is attributable, at least in part, to the gold and silver discoveries, even if the newly mined gold and silver was shipped mainly to privately held Indian and Chinese hoards rather than minted into new coins. An exogenous increase in prices may well have caused the quantity of credit money to increase, but that is analytically distinct from the inflationary effect of a reduced value of gold or silver when, as was the case in the sixteenth century, money is legally defined as a specific weight of gold or silver.

Technically, this comes down to a choice between what are called exogenous and endogenous theories of money. Should money be treated as an outside factor, like all those Spanish doubloons supposedly sweeping into Antwerp, Dublin, and Genoa in the days of Philip II, or should it be imagined primarily as a product of economic activity itself, mined, minted, and put into circulation, or more often, created as credit instruments such as loans, in order to meet a demand—which would, of course, mean that the roots of inflation lie elsewhere?

There is no such choice, because any theory must posit certain initial conditions and definitions, which are given or exogenous to the analysis. How the theory is framed and which variables are treated as exogenous and which are treated as endogenous is a matter of judgment in light of the problem and the circumstances. Graeber is certainly correct that, in any realistic model, the quantity of money is endogenously, not exogenously, determined, but that doesn’t mean that the value of gold and silver may not usefully be treated as exogenous in a system in which money is defined as a weight of gold or silver.

To put it bluntly: QTM is obviously wrong. Doubling the amount of gold in a country will have no effect on the price of cheese if you give all the gold to rich people and they just bury it in their yards, or use it to make gold-plated submarines (this is, incidentally, why quantitative easing, the strategy of buying long-term government bonds to put money into circulation, did not work either). What actually matters is spending.

Graeber is talking in circles, failing to distinguish between the quantity theory of money – a theory about the value of a pure medium of exchange with no use except to be received in exchange — and a theory of the real value of gold and silver when money is defined as a weight of gold or silver. The value of gold (or silver) in monetary uses must be roughly equal to its value in non-monetary uses, which is determined by the total stock of gold and the demand to hold gold or to use it in coinage or for other uses (e.g., jewelry and ornamentation). An increase in the stock of gold relative to demand must reduce its value. That relationship between price and quantity is not the same as QTM. The quantity of a metallic money will increase as its value in non-monetary uses declines. If there were literally an unlimited demand for newly mined gold to be immediately sent unused into hoards, Graeber’s argument would be correct. But the fact that much of the newly mined gold initially went into hoards does not mean that all of the newly mined gold went into hoards.

In sum, Graeber is confused between the quantity theory of money and a theory of a commodity money used both as money and as a real commodity. The quantity theory of money of a pure medium of exchange posits that changes in the quantity of money cause proportionate changes in the price level. Changes in the quantity of a real commodity also used as money have nothing to do with the quantity theory of money.
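The distinction can be made concrete with a toy calculation (my own illustration, with assumed numbers and functional forms, not a model drawn from Graeber or from the monetary literature): under a pure-fiat QTM, the price level moves in strict proportion to the money stock, while under a commodity money the response of gold-denominated prices to an increase in the gold stock depends on the elasticity of the total (monetary plus non-monetary) demand for gold.

```python
# Toy contrast (illustrative assumptions only):
# (1) pure-fiat QTM: the price level moves in proportion to the money stock;
# (2) commodity money: gold's real value is set by total stock vs. total
#     demand (monetary + non-monetary), so prices need not move proportionately.

def qtm_price_level(money_stock, velocity=5.0, real_output=1000.0):
    """Equation of exchange MV = Py, solved for P."""
    return money_stock * velocity / real_output

def gold_price_level(gold_stock, demand_scale=100.0, demand_elasticity=2.0):
    """Price level = 1 / (real value of gold), with an assumed
    constant-elasticity demand for gold across all its uses:
    value = scale * stock**(-1/elasticity)."""
    value_of_gold = demand_scale * gold_stock ** (-1.0 / demand_elasticity)
    return 1.0 / value_of_gold

# Doubling a fiat money stock doubles the QTM price level ...
p1, p2 = qtm_price_level(200.0), qtm_price_level(400.0)
assert abs(p2 / p1 - 2.0) < 1e-12

# ... but doubling the gold stock raises gold-denominated prices by only
# about 41% when the demand for gold is elastic (elasticity = 2).
g1, g2 = gold_price_level(100.0), gold_price_level(200.0)
print(round(g2 / g1, 3))  # 2**0.5, not 2.0
```

The point of the sketch is only that an increase in the stock of a commodity money changes its value, and hence the price level, through ordinary supply and demand for the commodity, without any proportionality of the QTM kind.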

Relying on a dubious account of the history of monetary theory by Skidelsky, Graeber blames the obsession of economists with the quantity theory for repeated monetary disturbances starting with the late 17th-century deflation in Britain, when silver appreciated relative to gold, causing prices measured in silver to fall. Graeber thus fails to see that under a metallic money, real disturbances do have repercussions on the level of prices, repercussions having nothing to do with an exogenous prior change in the quantity of money.

According to Skidelsky, the pattern was to repeat itself again and again, in 1797, the 1840s, the 1890s, and, ultimately, the late 1970s and early 1980s, with Thatcher and Reagan’s (in each case brief) adoption of monetarism. Always we see the same sequence of events:

(1) The government adopts hard-money policies as a matter of principle.

(2) Disaster ensues.

(3) The government quietly abandons hard-money policies.

(4) The economy recovers.

(5) Hard-money philosophy nonetheless becomes, or is reinforced as, simple universal common sense.

There is so much indiscriminate generalization here that it is hard to know what to make of it. But the conduct of monetary policy has always been fraught, and learning has been slow and painful. We can and must learn to do better, but blanket condemnations of economics are unlikely to lead to better outcomes.

How was it possible to justify such a remarkable string of failures? Here a lot of the blame, according to Skidelsky, can be laid at the feet of the Scottish philosopher David Hume. An early advocate of QTM, Hume was also the first to introduce the notion that short-term shocks—such as Locke produced—would create long-term benefits if they had the effect of unleashing the self-regulating powers of the market:

Actually I agree that Hume, as great and insightful a philosopher as he was and as sophisticated an economic observer as he was, was an unreliable monetary theorist. And one of the reasons he was led astray was his unwarranted attachment to the quantity theory of money, an attachment that was not shared by his close friend Adam Smith.

Ever since Hume, economists have distinguished between the short-run and the long-run effects of economic change, including the effects of policy interventions. The distinction has served to protect the theory of equilibrium, by enabling it to be stated in a form which took some account of reality. In economics, the short-run now typically stands for the period during which a market (or an economy of markets) temporarily deviates from its long-term equilibrium position under the impact of some “shock,” like a pendulum temporarily dislodged from a position of rest. This way of thinking suggests that governments should leave it to markets to discover their natural equilibrium positions. Government interventions to “correct” deviations will only add extra layers of delusion to the original one.

I also agree that focusing on long-run equilibrium without regard to short-run fluctuations can lead to terrible macroeconomic outcomes, but that doesn’t mean that long-run effects are never of concern and may be safely disregarded. But just as current suffering must not be disregarded when pursuing vague and uncertain long-term benefits, ephemeral transitory benefits shouldn’t obscure serious long-term consequences. Weighing such alternatives isn’t easy, but nothing is gained by denying that the alternatives exist. Making those difficult choices is inherent in policy-making, whether macroeconomic or climate policy-making.

Although Graeber takes a valid point – that a supposed tendency toward an optimal long-run equilibrium does not justify disregard of an acute short-term problem – to an extreme, his criticism of the New Classical approach to policy-making that replaced the flawed mainstream Keynesian macroeconomics of the late 1970s is worth listening to. The New Classical approach, which self-consciously rejected any policy aimed at short-run considerations owing to a time-inconsistency paradox, was based almost entirely on the logic of general-equilibrium theory and on an illegitimate methodological argument rejecting, as unscientific and unworthy of serious consideration in the brave New Classical world of scientific macroeconomics, all macroeconomic theories not rigorously deduced from the unarguable axiom of optimizing behavior by rational agents (and therefore not, in the official jargon, microfounded).

It’s difficult for outsiders to see what was really at stake here, because the argument has come to be recounted as a technical dispute between the roles of micro- and macroeconomics. Keynesians insisted that the former is appropriate to studying the behavior of individual households or firms, trying to optimize their advantage in the marketplace, but that as soon as one begins to look at national economies, one is moving to an entirely different level of complexity, where different sorts of laws apply. Just as it is impossible to understand the mating habits of an aardvark by analyzing all the chemical reactions in their cells, so patterns of trade, investment, or the fluctuations of interest or employment rates were not simply the aggregate of all the microtransactions that seemed to make them up. The patterns had, as philosophers of science would put it, “emergent properties.” Obviously, it was necessary to understand the micro level (just as it was necessary to understand the chemicals that made up the aardvark) to have any chance of understanding the macro, but that was not, in itself, enough.

As an aside, it’s worth noting that the denial or disregard of the possibility of any emergent properties by New Classical economists (of which what came to be known as New Keynesian economics is really a mildly schismatic offshoot) is nicely illustrated by the un-self-conscious alacrity with which the representative-agent approach was adopted as a modeling strategy in the first few generations of New Classical models. That New Classical theorists now insist that representative agency is not essential to New Classical modeling is true, but the methodologically reductive nature of New Classical macroeconomics, in which all macroeconomic theories must be derived under the axiom of individually maximizing behavior except insofar as specific “frictions” are introduced by explicit assumption, is essential. (See here, here, and here)

The counterrevolutionaries, starting with Keynes’s old rival Friedrich Hayek . . . took aim directly at this notion that national economies are anything more than the sum of their parts. Politically, Skidelsky notes, this was due to a hostility to the very idea of statecraft (and, in a broader sense, of any collective good). National economies could indeed be reduced to the aggregate effect of millions of individual decisions, and, therefore, every element of macroeconomics had to be systematically “micro-founded.”

Hayek’s role in the microfoundations movement is important, but his position was more sophisticated and less methodologically doctrinaire than that of the New Classical macroeconomists, if for no other reason than that Hayek didn’t believe that macroeconomics should, or could, be derived from general-equilibrium theory. His criticism, like that of economists like Clower and Leijonhufvud, of Keynesian macroeconomics for being insufficiently grounded in microeconomic principles, was aimed at finding microeconomic arguments that could explain and embellish and modify the propositions of Keynesian macroeconomic theory. That is the sort of scientific – not methodological — reductivism that Hayek’s friend Karl Popper advocated: a theoretical and empirical challenge of reducing a higher level theory to its more fundamental foundations, e.g., when physicists and chemists search for theoretical breakthroughs that allow the propositions of chemistry to be reduced to more fundamental propositions of physics. The attempt to reduce chemistry to underlying physical principles is very different from a methodological rejection of all chemistry that cannot be derived from underlying deep physical theories.

There is probably more than a grain of truth in Graeber’s belief that there was a political and ideological subtext in the demand for microfoundations by New Classical macroeconomists, but the success of the microfoundations program was also the result of philosophically unsophisticated methodological error. How to apportion the share of blame going to mistaken methodology, professional and academic opportunism, and a hidden political agenda is a question worthy of further investigation. The easy part is to identify the mistaken methodology, which Graeber does. As for the rest, Graeber simply asserts bad faith, but with little evidence.

In Graeber’s comprehensive condemnation of modern economics, the efficient market hypothesis, being closely related to the rational-expectations hypothesis so central to New Classical economics, is not spared either. Here again, though I share and sympathize with his disdain for EMH, Graeber can’t resist exaggeration.

In other words, we were obliged to pretend that markets could not, by definition, be wrong—if in the 1980s the land on which the Imperial compound in Tokyo was built, for example, was valued higher than that of all the land in New York City, then that would have to be because that was what it was actually worth. If there are deviations, they are purely random, “stochastic” and therefore unpredictable, temporary, and, ultimately, insignificant.

Of course, no one is obliged to pretend that markets could not be wrong — and certainly not by a definition. The EMH simply asserts that the price of an asset reflects all the publicly available information. But what EMH asserts is certainly not true in many or even most cases, because people with non-public information (or with superior capacity to process public information) may affect asset prices, and such people may profit at the expense of those less knowledgeable or less competent in anticipating price changes. Moreover, those advantages may result from (largely wasted) resources devoted to acquiring and processing information, and it is those people who make fortunes betting on the future course of asset prices.
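One testable implication of the EMH, in its standard form, is that if prices already reflect available public information, successive returns should be serially uncorrelated, leaving nothing to predict from past prices alone. The following toy simulation (my own construction, not drawn from the EMH literature) illustrates that property for artificially generated "efficient" returns:

```python
# Toy illustration: returns in an idealized efficient market behave like
# white noise, so their lag-1 autocorrelation is close to zero.
import random

random.seed(42)
n = 10_000
returns = [random.gauss(0.0, 0.01) for _ in range(n)]  # simulated "efficient" returns

# Lag-1 sample autocorrelation of the simulated returns.
mean = sum(returns) / n
num = sum((returns[t] - mean) * (returns[t - 1] - mean) for t in range(1, n))
den = sum((r - mean) ** 2 for r in returns)
autocorr = num / den

print(round(autocorr, 4))  # near zero: past returns don't predict future ones
```

The converse, of course, does not hold: near-zero autocorrelation in actual returns does not establish that prices are "correct," which is precisely the gap between statistical unpredictability and the stronger claims Graeber is attacking.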

Graeber then quotes Skidelsky approvingly:

There is a paradox here. On the one hand, the theory says that there is no point in trying to profit from speculation, because shares are always correctly priced and their movements cannot be predicted. But on the other hand, if investors did not try to profit, the market would not be efficient because there would be no self-correcting mechanism. . .

Secondly, if shares are always correctly priced, bubbles and crises cannot be generated by the market….

This attitude leached into policy: “government officials, starting with [Fed Chairman] Alan Greenspan, were unwilling to burst the bubble precisely because they were unwilling to even judge that it was a bubble.” The EMH made the identification of bubbles impossible because it ruled them out a priori.

So the apparent paradox that concerns Skidelsky and Graeber dissolves upon (only a modest amount of) further reflection. Proper understanding and revision of the EMH makes it clear that bubbles can occur. But that doesn’t mean that bursting bubbles is a job that can be safely delegated to any agency, including the Fed.

Moreover, the housing bubble peaked in early 2006, two and a half years before the financial crisis in September 2008. The financial crisis was not unrelated to the housing bubble, which undoubtedly added to the fragility of the financial system and its vulnerability to macroeconomic shocks, but the main cause of the crisis was Fed policy that focused unnecessarily on a temporary blip in commodity prices, which persuaded the Fed not to loosen policy in 2008 during a worsening recession. The scenario was similar to the one in 1929, when concern about an apparent stock-market bubble caused the Fed to repeatedly tighten money, raising interest rates, thereby causing a downturn and a crash of asset prices that triggered the Great Depression.

Graeber and Skidelsky correctly identify some of the problems besetting macroeconomics, but their indiscriminate attack on all economic theory is unlikely to improve the situation. A pity, because a more focused and sophisticated critique of economics than the one they have served up has never been more urgently needed to enable economists to perform the modest service to mankind of which they might be capable.

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written a number of previous posts discussing the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers was written by Jack Schwartz and appears in a volume edited by Ernest Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. Unavailable on the internet except behind a paywall, I bought a used copy for $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems; multimedia authoring tools; experimental studies of visual perception; multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can’t be specified, or even meaningfully restricted, without very special simplifying assumptions; it is too special, because the simplifying assumptions about the agents, the technologies, the constraints, and the price-setting mechanism are at best only approximations and, at worst, entirely divorced from reality.
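The dependence on very special functional forms is easy to demonstrate. Here is a minimal Walrasian exchange economy of my own construction (two hypothetical agents with Cobb-Douglas utility and assumed budget shares), chosen precisely because that special functional form makes the equilibrium solvable in closed form, illustrating Schwartz's point that tractability hinges on "rare good fortune" in the simplifying assumptions:

```python
# Two-agent, two-good Walrasian exchange economy (illustrative assumptions):
# Agent A owns 1 unit of good 1 (wealth = p1); agent B owns 1 unit of good 2
# (wealth = p2). With Cobb-Douglas utility, each agent spends a fixed budget
# share (alpha for A, beta for B) on good 1.

def excess_demand_good1(p2_over_p1, alpha=0.5, beta=0.25):
    """Excess demand for good 1 = demand - supply, as a function of the
    relative price p2/p1 (good 1 is the numeraire)."""
    demand_A = alpha               # alpha * p1 / p1
    demand_B = beta * p2_over_p1   # beta * p2 / p1
    return demand_A + demand_B - 1.0

# Market clearing for good 1 requires alpha + beta*(p2/p1) = 1, giving the
# closed-form equilibrium relative price (good 2 then clears by Walras's law):
alpha, beta = 0.5, 0.25
p2_over_p1 = (1.0 - alpha) / beta
print(p2_over_p1)  # 2.0
assert abs(excess_demand_good1(p2_over_p1)) < 1e-12  # market clears
```

Drop the Cobb-Douglas assumption, and nothing guarantees a closed-form solution, a unique equilibrium, or even usable comparative statics: the tractability resides in the assumed functional form, not in the general theory.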

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach that was characteristic of the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the necessary qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall “market” for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. But for some reason that didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

What’s Wrong with DSGE Models Is Not Representative Agency

The basic DSGE macroeconomic model taught to students is based on a representative agent. Many critics of modern macroeconomics and DSGE models have therefore latched on to the representative agent as the key – and disqualifying — feature in DSGE models and, by extension, in modern macroeconomics. Criticism of representative-agent models is certainly appropriate, because, as Alan Kirman admirably explained some 25 years ago, the simplification inherent in a macroeconomic model based on a representative agent renders the model entirely inappropriate and unsuitable for most of the problems that a macroeconomic model might be expected to address, like explaining why economies might suffer from aggregate fluctuations in output and employment and the price level.

While altogether fitting and proper, criticism of the representative-agent model in macroeconomics had an unfortunate unintended consequence, which was to focus attention on representative agency rather than on the deeper problems with DSGE models, problems that cannot be solved by just throwing the Representative Agent under the bus.

Before explaining why representative agency is not the root problem with DSGE models, let’s take a moment or two to talk about where the idea of representative agency comes from. The idea can be traced back to F. Y. Edgeworth who, in his exposition of the ideas of W. S. Jevons – one of the three marginal revolutionaries of the 1870s – introduced two “representative particulars” to illustrate how trade could maximize the utility of each particular subject to the benchmark utility of the counterparty. That analysis of two different representative particulars, reflected in what is now called the Edgeworth Box, remains one of the outstanding achievements and pedagogical tools of economics. (See a superb account of the historical development of the Box and the many contributions to economic theory that it facilitated by Thomas Humphrey). But Edgeworth’s analysis and its derivatives always focused on the incentives of two representative agents rather than a single isolated representative agent.

Only a few years later, Alfred Marshall in his Principles of Economics, offered an analysis of how the equilibrium price for the product of a competitive industry is determined by the demand for (derived from the marginal utility accruing to consumers from increments of the product) and the supply of that product (derived from the cost of production). The concepts of the marginal cost of an individual firm as a function of quantity produced and the supply of an individual firm as a function of price not yet having been formulated, Marshall, in a kind of hand-waving exercise, introduced a hypothetical representative firm as a stand-in for the entire industry.

The completely ad hoc and artificial concept of a representative firm was not well-received by Marshall’s contemporaries, and the young Lionel Robbins, starting his long career at the London School of Economics, subjected the idea to withering criticism in a 1928 article. Even without Robbins’s criticism, the development of the basic theory of a profit-maximizing firm quickly led to the disappearance of Marshall’s concept from subsequent economics textbooks. James Hartley wrote about the short and unhappy life of Marshall’s Representative Firm in the Journal of Economic Perspectives.

One might have thought that the inauspicious career of Marshall’s Representative Firm would have discouraged modern macroeconomists from resurrecting the Representative Firm in the barely disguised form of a Representative Agent in their DSGE models, but the convenience and relative simplicity of solving a DSGE model for a single agent was too enticing to be resisted.

Therein lies the difference between the theory of the firm and a macroeconomic theory. The gain in convenience from adopting the Representative Firm was radically reduced by Marshall’s Cambridge students and successors who, without the representative firm, provided a more rigorous, more satisfying and more flexible exposition of the industry supply curve and the corresponding partial-equilibrium analysis than Marshall had with it. Providing no advantages of realism, logical coherence, analytical versatility or heuristic intuition, the Representative Firm was unceremoniously expelled from the polite company of economists.

However, as a heuristic device for portraying certain properties of an equilibrium state — whose existence is assumed not derived — even a single representative individual or agent proved to be a serviceable device with which to display the defining first-order conditions, the simultaneous equality of marginal rates of substitution in consumption and production with the marginal rate of substitution at market prices. Unlike the Edgeworth Box populated by two representative agents whose different endowments or preference maps result in mutually beneficial trade, the representative agent, even if afforded the opportunity to trade, can find no gain from engaging in it.

An excellent example of this heuristic was provided by Jack Hirshleifer in his 1970 textbook Investment, Interest, and Capital, wherein he adapted the basic Fisherian model of intertemporal consumption, production and exchange opportunities, representing the canonical Fisherian exposition in a single basic diagram. But the representative agent necessarily represents a state of no trade, because, for a single isolated agent, production and consumption must coincide, and the equilibrium price vector must have the property that the representative agent chooses not to trade at that price vector. I reproduce Hirshleifer’s diagram (Figure 4-6) in the attached chart.

Here is how Hirshleifer explained what was going on.

Figure 4-6 illustrates a technique that will be used often from now on: the representative-individual device. If one makes the assumption that all individuals have identical tastes and are identically situated with respect to endowments and productive opportunities, it follows that the individual optimum must be a microcosm of the social equilibrium. In this model the productive and consumptive solutions coincide, as in the Robinson Crusoe case. Nevertheless, market opportunities exist, as indicated by the market line M’M’ through the tangency point P* = C*. But the price reflected in the slope of M’M’ is a sustaining price, such that each individual prefers to hold the combination attained by productive transformations rather than engage in market transactions. The representative-individual device is helpful in suggesting how the equilibrium will respond to changes in exogenous data—the proviso being that such changes do not modify the distribution of wealth among individuals.

While not spelling out the limitations of the representative-individual device, Hirshleifer makes it clear that the device is being used as an expository technique to describe, not as an analytical tool to determine, intertemporal equilibrium. The existence of intertemporal equilibrium does not depend on the assumptions necessary to allow a representative individual to serve as a stand-in for all other agents. The representative individual is portrayed only to provide the student with a special case serving as a visual aid with which to gain an intuitive grasp of the necessary conditions characterizing an intertemporal equilibrium in production and consumption.
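
The no-trade property of the sustaining price can be verified numerically in a two-period Fisherian setup. The functional forms and parameter values below are my own illustrative assumptions (log utility, Cobb-Douglas production), not Hirshleifer’s:

```python
# Two-period Fisherian economy with a representative agent.
# Assumed (illustrative) forms: u = ln(c1) + beta*ln(c2), period-1
# endowment e, and production c2 = A*k**alpha from investing k.
alpha, beta, A, e = 0.5, 0.95, 2.0, 10.0

# Planning optimum: MRS equals the marginal product of investment,
# which for these forms gives k* = alpha*beta*e/(1 + alpha*beta).
k_star = alpha * beta * e / (1 + alpha * beta)
c1_prod, c2_prod = e - k_star, A * k_star ** alpha

# The "sustaining" gross interest rate is the slope of the market line
# at the tangency: 1 + r = f'(k*).
R = alpha * A * k_star ** (alpha - 1)

# Offer the agent the chance to trade at R: with log utility, demands are
# c1 = W/(1+beta) and c2 = beta*R*W/(1+beta), where W is wealth at R.
W = c1_prod + c2_prod / R
c1_dem = W / (1 + beta)
c2_dem = beta * R * W / (1 + beta)

# At the sustaining price, chosen consumption coincides with production:
# the representative agent gains nothing from trading.
print(abs(c1_dem - c1_prod) < 1e-9, abs(c2_dem - c2_prod) < 1e-9)  # → True True
```

This mirrors Hirshleifer’s market line M’M’: at any other interest rate the agent would want to trade, but a single representative agent has no one to trade with, so only the sustaining rate is consistent with equilibrium.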

But the role of the representative agent in the DSGE model is very different from that of the representative individual in Hirshleifer’s exposition of the canonical Fisherian theory. In Hirshleifer’s exposition, the representative individual is just a special case and a visual aid with no independent analytical importance. In contrast to Hirshleifer’s deployment of the representative individual, the representative agent in the DSGE model is used as an assumption whereby an analytical solution to the DSGE model can be derived, allowing the modeler to generate quantitative results to be compared with existing time-series data, to generate forecasts of future economic conditions, and to evaluate the effects of alternative policy rules.
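
A minimal sketch of why the representative-agent assumption is so analytically convenient: in the textbook special case of a representative-agent growth model with log utility, Cobb-Douglas output and full depreciation, the planner’s problem has an exact closed-form policy function, k' = αβzk^α — a well-known result, with parameter values below chosen purely for illustration:

```python
# Representative-agent RBC special case with a known exact solution:
# log utility, output y = z*k**alpha, 100% depreciation. The optimal
# saving rule is k' = alpha*beta*z*k**alpha (standard textbook result);
# the parameters are illustrative, not calibrated.
alpha, beta = 0.36, 0.99

def next_capital(k, z=1.0):
    return alpha * beta * z * k ** alpha

# Iterating the policy function drives capital to the deterministic
# steady state k* = (alpha*beta)**(1/(1-alpha)).
k = 1.0
for _ in range(500):
    k = next_capital(k)

k_star = (alpha * beta) ** (1 / (1 - alpha))
print(abs(k - k_star) < 1e-12)  # → True
```

With heterogeneous agents no such closed form is generally available, which is why, as noted above, the single-agent shortcut proved so enticing.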

The prominent and dubious role of the representative agent in DSGE models provided a convenient target for critics of DSGE models. In Congressional testimony, Robert Solow famously attacked DSGE models, using their reliance on representative agents to make them seem, well, simply ridiculous.

Most economists are willing to believe that most individual “agents” – consumers, investors, borrowers, lenders, workers, employers – make their decisions so as to do the best that they can for themselves, given their possibilities and their information. Clearly they do not always behave in this rational way, and systematic deviations are well worth studying. But this is not a bad first approximation in many cases. The DSGE school populates its simplified economy – remember that all economics is about simplified economies just as biology is about simplified cells – with exactly one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever. One important consequence of this “representative agent” assumption is that there are no conflicts of interest, no incompatible expectations, no deceptions.

This all-purpose decision-maker essentially runs the economy according to its own preferences. Not directly, of course: the economy has to operate through generally well-behaved markets and prices. Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.

An obvious example is that the DSGE story has no real room for unemployment of the kind we see most of the time, and especially now: unemployment that is pure waste. There are competent workers, willing to work at the prevailing wage or even a bit less, but the potential job is stymied by a market failure. The economy is unable to organize a win-win situation that is apparently there for the taking. This sort of outcome is incompatible with the notion that the economy is in rational pursuit of an intelligible goal. The only way that DSGE and related models can cope with unemployment is to make it somehow voluntary, a choice of current leisure or a desire to retain some kind of flexibility for the future or something like that. But this is exactly the sort of explanation that does not pass the smell test.

While Solow’s criticism of the representative agent was correct, he left himself open to an effective rejoinder by defenders of DSGE models who could point out that the representative agent was adopted by DSGE modelers not because it was an essential feature of the DSGE model but because it enabled DSGE modelers to simplify the task of analytically solving for an equilibrium solution. With enough time and computing power, however, DSGE modelers were able to write down models with a few heterogeneous agents (themselves representative of particular kinds of agents in the model) and then crank out an equilibrium solution for those models.

Unfortunately for Solow, V. V. Chari also testified at the same hearing, and he responded directly to Solow, denying that DSGE models necessarily entail the assumption of a representative agent and identifying numerous examples even in 2010 of DSGE models with heterogeneous agents.

What progress have we made in modern macro? State of the art models in, say, 1982, had a representative agent, no role for unemployment, no role for financial factors, no sticky prices or sticky wages, no role for crises and no role for government. What do modern macroeconomic models look like? The models have all kinds of heterogeneity in behavior and decisions. This heterogeneity arises because people’s objectives differ; they differ by age, by information, by the history of their past experiences. Please look at the seminal work by Rao Aiyagari, Per Krusell and Tony Smith, Tim Kehoe and David Levine, Victor Rios Rull, Nobu Kiyotaki and John Moore. All of them . . . prominent macroeconomists at leading departments . . . much of their work is explicitly about models without representative agents. Any claim that modern macro is dominated by representative-agent models is wrong.

So on the narrow question of whether DSGE models are necessarily members of the representative-agent family, Solow was debunked by Chari. But debunking the claim that DSGE models must be representative-agent models doesn’t mean that DSGE models have the basic property that some of us at least seek in a macro-model: the capacity to explain how and why an economy may deviate from a potential full-employment time path.

Chari actually addressed the charge that DSGE models cannot explain lapses from full employment (to use Pigou’s rather anodyne terminology for depressions). Here is Chari’s response:

In terms of unemployment, the baseline model used in the analysis of labor markets in modern macroeconomics is the Mortensen-Pissarides model. The main point of this model is to focus on the dynamics of unemployment. It is specifically a model in which labor markets are beset with frictions.

Chari’s response was thus to treat lapses from full employment as “frictions.” To treat unemployment as the result of one or more frictions is to take a very narrow view of the potential causes of unemployment. The argument that Keynes made in the General Theory was that unemployment is a systemic failure of a market economy, which lacks an error-correction mechanism that is capable of returning the economy to a full-employment state, at least not within a reasonable period of time.

The basic approach of DSGE is to treat the solution of the model as the optimal solution of a problem. In the representative-agent version of a DSGE model, the optimal solution is the optimal solution for a single agent, so optimality is already baked into the model. With heterogeneous agents, the solution of the model is a set of mutually consistent optimal plans, and optimality is baked into that heterogeneous-agent DSGE model as well. Sophisticated heterogeneous-agent models can incorporate various frictions and constraints that cause the solution to deviate from a hypothetical frictionless, unconstrained first-best optimum.

The policy message emerging from this modeling approach is that unemployment is attributable to frictions and other distortions that prevent the economy from reaching a first-best optimum that would be achieved automatically in their absence. The possibility that the optimal plans of individuals might be incompatible, resulting in a systemic breakdown — that there could be a failure to coordinate — does not even come up for discussion.

One needn’t accept Keynes’s own theoretical explanation of unemployment to find the attribution of cyclical unemployment to frictions deeply problematic. But, as I have asserted in many previous posts (e.g., here and here) a modeling approach that excludes a priori any systemic explanation of cyclical unemployment, attributing instead all cyclical unemployment to frictions or inefficient constraints on market pricing, cannot be regarded as anything but an exercise in question begging.

 

My Paper “Hawtrey and Keynes” Is Now Available on SSRN

About five or six years ago, I was invited by Robert Dimand and Harald Hagemann to contribute an article on Hawtrey for The Elgar Companion to John Maynard Keynes, which they edited. I have now posted an early (2014) version of my article on SSRN.

Here is the abstract of my article on Hawtrey and Keynes:

R. G. Hawtrey, like his younger contemporary J. M. Keynes, was a Cambridge graduate in mathematics, an Apostle, deeply influenced by the Cambridge philosopher G. E. Moore, attached, if only peripherally, to the Bloomsbury group, and largely an autodidact in economics. Both entered the British Civil Service shortly after graduation, publishing their first books on economics in 1913. Though eventually overshadowed by Keynes, Hawtrey, after publishing Currency and Credit in 1919, was in the front rank of monetary economists in the world and a major figure at the 1922 Genoa International Monetary Conference planning for a restoration of the international gold standard. This essay explores their relationship during the 1920s and 1930s, focusing on their interactions concerning the plans for restoring an international gold standard immediately after World War I, the 1925 decision to restore the convertibility of sterling at the prewar dollar parity, Hawtrey’s articulation of what became known as the Treasury view, Hawtrey’s commentary on Keynes’s Treatise on Money, including his exposition of the multiplier, Keynes’s questioning of Hawtrey after his testimony before the Macmillan Committee, their differences over the relative importance of the short-term and long-term rates of interest as instruments of monetary policy, Hawtrey’s disagreement with Keynes about the causes of the Great Depression, and finally the correspondence between Keynes and Hawtrey while Keynes was writing the General Theory, a correspondence that failed to resolve theoretical differences culminating in Hawtrey’s critical review of the General Theory and their 1937 exchange in the Economic Journal.

Irving Fisher Demolishes the Loanable-Funds Theory of Interest

In some recent posts (here, here and here) I have discussed the inappropriate application of partial-equilibrium analysis (aka supply-demand analysis) in situations in which the ceteris paribus assumption underlying partial-equilibrium analysis is not satisfied. The two examples of inappropriate application of partial-equilibrium analysis I have mentioned were: 1) drawing a supply curve of labor and a demand curve for labor to explain aggregate unemployment in the economy, and 2) drawing a supply curve of loanable funds and a demand curve for loanable funds to explain the rate of interest. In neither case can one assume that a change in the wage of labor or in the rate of interest can occur without at the same time causing the demand curve and the supply curve to shift from their original positions to new ones. Because the feedback effects from a change in the wage or in the rate of interest inevitably cause the demand and supply curves to shift, the standard supply-and-demand analysis breaks down in the face of such feedback effects.

I pointed out that while Keynes correctly observed that demand-and-supply analysis of the labor market was inappropriate, it is puzzling that it did not occur to him that demand-and-supply analysis could not be used to explain the rate of interest.

Keynes explained the rate of interest as a measure of the liquidity premium commanded by holders of money for parting with liquidity when lending money to a borrower. That view is sometimes contrasted with Fisher’s explanation of the rate of interest as a measure of the productivity of capital in shifting output from the present to the future and of the time preference of individuals for consuming in the present rather than waiting to consume in the future. Sometimes the Fisherian theory of the rate of interest is juxtaposed with the Keynesian theory by contrasting the liquidity-preference theory with a loanable-funds theory. But that contrast between liquidity preference and loanable funds misrepresents Fisher’s view, because a loanable-funds theory is itself an inappropriate misapplication of partial-equilibrium analysis when general-equilibrium analysis is required.

I recently came upon a passage from Fisher’s classic 1907 treatise, The Rate of Interest: Its Nature, Determination and Relation to Economic Phenomena, which explicitly rejects supply-demand analysis of the market for loanable funds as a useful way of explaining the rate of interest. Here is how Fisher made that fundamental point.

If a modern business man is asked what determines the rate of interest, he may usually be expected to answer, “the supply and demand of loanable money.” But “supply and demand” is a phrase which has been too often pressed into service to cover up difficult problems. Even economists have been prone to employ it to describe economic causation which they could not unravel. It was once wittily remarked of the early writers on economic problems, “Catch a parrot and teach him to say ‘supply and demand,’ and you have an excellent economist.” Prices, wages, rent, interest, and profits were thought to be fully “explained” by this glib phrase. It is true that every ratio of exchange is due to the resultant of causes operating on the buyer and seller, and we may classify these as “demand” and “supply.” But this fact does not relieve us of the necessity of examining specifically the two sets of causes, including utility in its effect on demand, and cost in its effect on supply. Consequently, when we say that the rate of interest is due to the supply and demand of “capital” or of “money” or of “loans,” we are very far from having an adequate explanation. It is true that when merchants seek to discount bills at a bank in large numbers and for large amounts, the rate of interest will tend to be low. But we must inquire for what purposes and from what causes merchants thus apply to a bank for the discount of loans and others supply the bank with the funds to be loaned. The real problem is: What causes make the demand for loans and what causes make the supply? This question is not answered by the summary “supply and demand” theory. The explanation is not simply that those who have much capital supply loans and those who have little capital demand them. In fact, the contrary is often the case. The depositors in savings banks are the lenders, and they are usually poor, whereas those to whom the savings bank in turn lends the funds are relatively rich. (pp. 6-7)

Phillips Curve Musings: Second Addendum on Keynes and the Rate of Interest

In my two previous posts (here and here), I have argued that the partial-equilibrium analysis of a single market, like the labor market, is inappropriate and not particularly relevant, in situations in which the market under analysis is large relative to other markets, and likely to have repercussions on those markets, which, in turn, will have further repercussions on the market under analysis, violating the standard ceteris paribus condition applicable to partial-equilibrium analysis. When the standard ceteris paribus condition of partial equilibrium is violated, as it surely is in analyzing the overall labor market, the analysis is, at least, suspect, or, more likely, useless and misleading.

I suggested that Keynes in chapter 19 of the General Theory was aiming at something like this sort of argument, and I think he was largely right in his argument. But, in all modesty, I think that Keynes would have done better to have couched his argument in terms of the distinction between partial-equilibrium and general-equilibrium analysis. But his Marshallian training, which he simultaneously embraced and rejected, may have made it difficult for him to adopt the Walrasian general-equilibrium approach that Marshall and the Marshallians regarded as overly abstract and unrealistic.

In my next post, I suggested that the standard argument about the tendency of public-sector budget deficits to raise interest rates by competing with private-sector borrowers for loanable funds is fundamentally misguided, because it, too, inappropriately applies partial-equilibrium analysis to a narrow market for government securities, or even to a more broadly defined market for loanable funds in general.

That is a gross mistake, because the rate of interest is determined in a general-equilibrium system along with markets for all long-lived assets, embodying expected flows of income that must be discounted to the present to determine an estimated present value. Some assets are riskier than others, and that risk is reflected in those valuations. But the rate of interest is distilled from the combination of all of those valuations, not prior to, or apart from, those valuations. Interest rates of different duration and different risk are embedded in the entire structure of current and expected prices for all long-lived assets. To focus solely on a very narrow subset of markets for newly issued securities, whose combined value is only a small fraction of the total value of all existing long-lived assets, is to miss the forest for the trees.
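
The point that interest rates are distilled from asset valuations, rather than set in a separate loanable-funds market, can be illustrated by computing the yield implied by an asset’s price. This is a generic bisection sketch with invented numbers, not data from the post:

```python
# Solve price = sum of CF_t/(1+r)**t for r by bisection. The cash flows
# and price below are hypothetical: a 5-year bond-like asset paying a
# coupon of 5 each year with principal 100 at maturity, priced at 96.
def implied_yield(price, cashflows, lo=0.0, hi=1.0, tol=1e-10):
    def pv(r):
        return sum(cf / (1 + r) ** t
                   for t, cf in enumerate(cashflows, start=1))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if pv(mid) > price:   # present value falls as the yield rises
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

r = implied_yield(96.0, [5, 5, 5, 5, 105])
print(f"implied yield: {r:.4%}")
```

Here the rate is read off from the observed price of a long-lived asset; in a full general-equilibrium system the same logic runs across the entire structure of asset prices simultaneously.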

What I want to point out in this post is that Keynes, whom I credit for having recognized that partial-equilibrium analysis is inappropriate and misleading when applied to an overall market for labor, committed exactly the same mistake that he condemned in the context of the labor market, by asserting that the rate of interest is determined in a single market: the market for money. According to Keynes, the market rate of interest is that rate which equates the stock of money in existence with the amount of money demanded by the public. The higher the rate of interest, Keynes argued, the less money the public wants to hold.

Keynes, applying the analysis of Marshall and his other Cambridge predecessors, provided a wonderful analysis of the factors influencing the amount of money that people want to hold (usually expressed in terms of a fraction of their income). However, as superb as his analysis of the demand for money was, it was a partial-equilibrium analysis, and there was no recognition on his part that other markets in the economy are influenced by, and exert influence upon, the rate of interest.

What makes Keynes’s partial-equilibrium analysis of the interest rate so difficult to understand is that, in chapter 17 of the General Theory, a magnificent tour de force of verbal general-equilibrium theorizing, he explained the relationships that must exist between the expected returns on alternative long-lived assets that are held in equilibrium. Yet, disregarding his own analysis of the equilibrium relationship between returns on alternative assets, Keynes insisted on explaining the rate of interest in a one-period model (a model roughly corresponding to IS-LM) with only two alternative assets: money and bonds, but no real capital asset.

A general-equilibrium analysis of the rate of interest ought to have at least two periods, and it ought to have a real capital good that may be held in the present for use or consumption in the future, a possibility entirely missing from the Keynesian model. I have discussed this major gap in the Keynesian model in a series of posts (here, here, here, here, and here) about Earl Thompson’s 1976 paper “A Reformulation of Macroeconomic Theory.”

Although Thompson’s model seems to me too simple to account for many macroeconomic phenomena, it would have been a far better starting point for the development of macroeconomics than any of the models from which modern macroeconomic theory has evolved.

Phillips Curve Musings

There’s a lot of talk about the Phillips Curve these days; people wonder why, with the unemployment rate reaching historically low levels, nominal and real wages have increased minimally with inflation remaining securely between 1.5 and 2%. The Phillips Curve, for those untutored in basic macroeconomics, depicts a relationship between inflation and unemployment. The original empirical Phillips Curve relationship showed that high rates of unemployment were associated with low or negative rates of wage inflation while low rates of unemployment were associated with high rates of wage inflation. This empirical relationship suggested a causal theory that the rate of wage increase tends to rise when unemployment is low and tends to fall when unemployment is high, a causal theory that seems to follow from a simple supply-demand model in which wages rise when there is an excess demand for labor (unemployment is low) and wages fall when there is an excess supply of labor (unemployment is high).
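
The causal theory described above is often written as a simple linear relation between wage inflation and the unemployment gap. Here is a sketch in the expectations-augmented form, with purely illustrative parameters — the natural rate u*, the slope β, and expected inflation below are my assumptions, not estimates from any data:

```python
# Stylized expectations-augmented Phillips relation:
#   wage inflation = expected inflation - beta*(u - u_star)
# All parameter values are hypothetical, chosen only to illustrate the
# sign of the relationship.
def wage_inflation(u, u_star=0.05, beta=0.5, expected_inflation=0.02):
    return expected_inflation - beta * (u - u_star)

# Tight labor market -> wage growth above expected inflation;
# slack labor market -> wage growth below it.
print(round(wage_inflation(0.03), 4), round(wage_inflation(0.08), 4))  # → 0.03 0.005
```

The puzzle the post describes is precisely that, in recent data, low u has not produced the above-expected wage growth this stylized relation predicts.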

Viewed in this light, low unemployment, signifying a tight labor market, signals that inflation is likely to rise, providing a rationale for monetary policy to be tightened to prevent inflation from rising as it normally does when unemployment is low. Seeming to accept that rationale, the Fed has gradually raised interest rates for the past two years or so. But the increase in interest rates has now slowed the expansion of employment and the decline of unemployment to historic lows. Nor has the improving employment situation resulted in any increase in price inflation, producing at most a minimal increase in the rate of increase in wages.

In a couple of previous posts about sticky wages (here and here), I’ve questioned whether the simple supply-demand model of the labor market motivating the standard interpretation of the Phillips Curve is a useful way to think about wage adjustment and inflation-employment dynamics. I’ve offered a few reasons why the supply-demand model, though applicable in some situations, is not useful for understanding how wages adjust.

The particular reason that I want to focus on here is Keynes’s argument in chapter 19 of the General Theory (though I express it in terms different from his) that supply-demand analysis can’t explain how wages and employment are determined. The upshot of his argument, I believe, is that supply-demand analysis only works in a partial-equilibrium setting, in which feedback effects from the price changes in the market under consideration don’t affect equilibrium prices in other markets, so that the position of the supply and demand curves in the market of interest can be assumed stable even as price and quantity in that market adjust from one equilibrium to another (the comparative-statics method).

Because the labor market, affecting almost every other market, is not a small part of the economy, partial-equilibrium analysis is unsuitable for understanding that market, the normal stability assumption being untenable if we attempt to trace the adjustment from one labor-market equilibrium to another after an exogenous disturbance. In the supply-demand paradigm, unemployment is a measure of the disequilibrium in the labor market, a disequilibrium that could — at least in principle — be eliminated by a wage reduction sufficient to equate the quantity of labor services supplied with the amount demanded. Viewed from this supply-demand perspective, the failure of the wage to fall to a supposed equilibrium level is attributable to some sort of endogenous stickiness or some external impediment (minimum-wage legislation or union intransigence) that prevents the normal equilibrating free-market adjustment mechanism from operating. But the habitual resort to supply-demand analysis by economists, reinforced and rewarded by years of training and professionalization, is actually misleading when applied in an inappropriate context.

So Keynes was right to challenge this view of a potentially equilibrating market mechanism that is somehow stymied from behaving in the manner described in the textbook version of supply-demand analysis. Instead, Keynes argued that the level of employment is determined by the level of spending and income at an exogenously given wage level, an approach that seems deeply at odds with the idea that price adjustments are an essential part of the process whereby a complex economic system arrives at, or at least tends to move toward, an equilibrium.

One of the main motivations for a search for microfoundations in the decades after the General Theory was published was to be able to articulate a convincing microeconomic rationale for persistent unemployment that was not eliminated by the usual tendency of market prices to adjust to eliminate excess supplies of any commodity or service. But Keynes was right to question whether there is any automatic market mechanism that adjusts nominal or real wages in a manner even remotely analogous to the adjustment of prices in organized commodity or stock exchanges – the sort of markets that serve as exemplars of automatic price adjustments in response to excess demands or supplies.

Keynes was also correct to argue that, even if there was a mechanism causing automatic wage adjustments in response to unemployment, the labor market, accounting for roughly 60 percent of total income, is so large that any change in wages necessarily affects all other markets, causing system-wide repercussions that might well offset any employment-increasing tendency of the prior wage adjustment.

But what I want to suggest in this post is that Keynes’s criticism of the supply-demand paradigm is relevant to any general-equilibrium system in the following sense: if a general-equilibrium system is considered from an initial non-equilibrium position, does the system have any tendency to move toward equilibrium? And to make the analysis relatively tractable, assume that the system is such that a unique equilibrium exists. Before proceeding, I also want to note that I am not arguing that traditional supply-demand analysis is necessarily flawed; I am just emphasizing that traditional supply-demand analysis is predicated on a macroeconomic foundation: that all markets but the one under consideration are in, or are in the neighborhood of, equilibrium. It is only because the system as a whole is in the neighborhood of equilibrium, that the microeconomic forces on which traditional supply-demand analysis relies appear to be so powerful and so stabilizing.

However, if our focus is a general-equilibrium system, microeconomic supply-demand analysis of a single market in isolation provides no basis on which to argue that the system as a whole has a self-correcting tendency toward equilibrium. To make such an argument is to commit a fallacy of composition. The tendency of any single market toward equilibrium is premised on an assumption that all markets but the one under analysis are already at, or in the neighborhood of, equilibrium. But when the system as a whole is in a disequilibrium state, the method of partial equilibrium analysis is misplaced; partial-equilibrium analysis provides no ground – no micro-foundation — for an argument that the adjustment of market prices in response to excess demands and excess supplies will ever – much less rapidly — guide the entire system back to an equilibrium state.

The lack of automatic market forces that return a system not in the neighborhood — for purposes of this discussion “neighborhood” is left undefined – of equilibrium back to equilibrium is implied by the Sonnenschein-Mantel-Debreu Theorem, which shows that, even if a unique general equilibrium exists, there may be no rule or algorithm for increasing (decreasing) prices in markets with excess demands (supplies) by which the general-equilibrium price vector would be discovered in a finite number of steps.

The theorem holds even under a Walrasian tatonnement mechanism in which no trading at disequilibrium prices is allowed. The reason is that the interactions between individual markets may be so complicated that a price-adjustment rule will not eliminate all excess demands, because even if a price adjustment reduces excess demand in one market, that price adjustment may cause offsetting disturbances in one or more other markets. So, unless the equilibrium price vector is somehow hit upon by accident, no rule or algorithm for price adjustment based on the excess demand in each market will necessarily lead to discovery of the equilibrium price vector.
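
To see what such a price-adjustment rule looks like, here is a sketch for a deliberately well-behaved case: a two-good exchange economy with Cobb-Douglas traders, which satisfies gross substitutability, so the tatonnement rule does converge. The Sonnenschein-Mantel-Debreu result is precisely that this convergence cannot be counted on for general preferences and endowments; all parameters below are illustrative:

```python
# Tatonnement in a 2-good, 2-trader Cobb-Douglas exchange economy.
# Trader i spends share a_i of wealth on good 1; endowments are (w1, w2).
# Good 1 is the numeraire (p1 = 1); the auctioneer adjusts p2 in
# proportion to the excess demand for good 2, with no trading allowed
# until markets clear.
agents = [(0.3, (10.0, 2.0)),   # (expenditure share on good 1, endowment)
          (0.7, (2.0, 10.0))]

p2 = 0.5                        # arbitrary starting price, "crié au hasard"
for _ in range(1000):
    excess2 = 0.0
    for a, (w1, w2) in agents:
        wealth = w1 + p2 * w2
        excess2 += (1 - a) * wealth / p2 - w2   # demand minus endowment
    if abs(excess2) < 1e-10:
        break
    p2 += 0.1 * excess2         # raise p2 when good 2 is in excess demand

print(round(p2, 6))  # → 1.0 (market-clearing relative price for these numbers)
```

With these symmetric endowments and shares the equilibrium relative price happens to be 1. Replace the Cobb-Douglas demands with less well-behaved excess-demand functions (e.g. Scarf’s famous example) and the same adjustment rule can cycle forever, which is the point of the theorem.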

The Sonnenschein-Mantel-Debreu Theorem reinforces the insight of Kenneth Arrow in an important 1959 paper, “Toward a Theory of Price Adjustment,” which posed the question: how does the theory of perfect competition account for the determination of the equilibrium price at which all agents can buy or sell as much as they want to at the equilibrium (“market-clearing”) price? As Arrow observed, “there exists a logical gap in the usual formulations of the theory of perfectly competitive economy, namely, that there is no place for a rational decision with respect to prices as there is with respect to quantities.”

Prices in perfect competition are taken as parameters by all agents in the model, and optimization by agents consists in choosing optimal quantities. The equilibrium solution allows the mutually consistent optimization by all agents at the equilibrium price vector. This is true for the general-equilibrium system as a whole, and for partial equilibrium in every market. Not only is there no positive theory of price adjustment within the competitive general-equilibrium model, as pointed out by Arrow, but the Sonnenschein-Mantel-Debreu Theorem shows that there’s no guarantee that even the notional tatonnement method of price adjustment can ensure that a unique equilibrium price vector will be discovered.

While acknowledging his inability to fill the gap, Arrow suggested that, because perfect competition and price taking are properties of general equilibrium, there are inevitably pockets of market power in non-equilibrium states, so that some transactors in those states are price searchers rather than price takers, choosing both an optimal quantity and an optimal price. I have no problem with Arrow’s insight as far as it goes, but it still doesn’t really solve his problem, because he couldn’t explain, even intuitively, how a disequilibrium system with some agents possessing market power (either as sellers or buyers) transitions into an equilibrium system in which all agents are price takers who can execute their planned optimal purchases and sales at the parametric prices.

One of the few helpful, but, as far as I can tell, totally overlooked, contributions of the rational-expectations revolution was to solve (in a very narrow sense) the problem that Arrow identified and puzzled over, although Hayek, Lindahl and Myrdal, in their original independent formulations of the concept of intertemporal equilibrium, had already provided the key to the solution. Hayek, Lindahl, and Myrdal showed that an intertemporal equilibrium is possible only insofar as agents form expectations of future prices that are so similar to each other that, if future prices turn out as expected, the agents would be able to execute their planned sales and purchases as expected.

But if agents have different expectations about the future price(s) of some commodity(ies), and if their plans for future purchases and sales are conditioned on those expectations, then, when the expectations of at least some agents are inevitably disappointed, those agents will necessarily have to abandon (or revise) their previously formulated plans.

What led to Arrow’s confusion about how equilibrium prices are arrived at was the habit of thinking that market prices are determined by way of a Walrasian tatonnement process (supposedly mimicking the haggling over price by traders). The notion of a mythical market auctioneer, who first calls out prices at random (prix criés au hasard) and then, based on the tallied market excess demands and supplies, adjusts those prices until all markets “clear,” is untenable, because continual trading at disequilibrium prices keeps changing the solution of the general-equilibrium system. An actual system with trading at non-equilibrium prices may therefore be moving away from, rather than converging on, an equilibrium state.

Here is where the rational-expectations hypothesis comes in. The rational-expectations assumption posits that revisions of previously formulated plans are never necessary, because all agents actually do correctly anticipate the equilibrium price vector in advance. That is indeed a remarkable assumption to make; it is an assumption that all agents in the model have the capacity to anticipate, insofar as their future plans to buy and sell require them to anticipate, the equilibrium prices that will prevail for the products and services that they plan to purchase or sell. Of course, in a general-equilibrium system, all prices being determined simultaneously, the future equilibrium prices for some products cannot generally be forecast in isolation from the equilibrium prices for all other products. So, in effect, the rational-expectations hypothesis supposes that each agent in the model is an omniscient central planner able to solve an entire general-equilibrium system for all future prices!

But let us not be overly nitpicky about details. So forget about false trading, and forget about the Sonnenschein-Mantel-Debreu theorem. Instead, just assume that, at time t, agents form rational expectations of the future equilibrium price vector in period (t+1). If agents at time t form rational expectations of the equilibrium price vector in period (t+1), then they may well assume that the equilibrium price vector in period t is equal to the expected price vector in period (t+1).
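The difference between expectations that must be revised and expectations that solve the model can be seen in the simplest possible setting. The sketch below is my own toy example, not from the post: a linear cobweb market in which supply depends on the price producers expect. Under naive expectations (expect last period's price), the price gropes toward equilibrium; under rational expectations, the expected price simply is the market-clearing fixed point, which agents can find only by, in effect, solving the model themselves.

```python
# Toy linear market (all numbers are my own illustrative assumptions):
#   Demand:  Qd = a - b * p
#   Supply:  Qs = c + d * p_expected   (producers commit before seeing p)
a, b, c, d = 10.0, 1.0, 1.0, 0.5

def market_price(p_expected):
    # The price that clears the market, given what producers expected.
    return (a - c - d * p_expected) / b

# Naive expectations: expect last period's price (the classic cobweb).
p = 2.0
for _ in range(50):
    p = market_price(p)

# Rational expectations: the expected price IS the market-clearing price,
# i.e., the fixed point p* = (a - c)/(b + d).
p_star = (a - c) / (b + d)
print(round(p, 6), round(p_star, 6))
```

With these numbers the cobweb happens to converge to the same fixed point (the supply slope is flatter than the demand slope); with d > b it would oscillate explosively, while the rational-expectations solution is reached in one step only because the agents are assumed to know the whole model.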

Now, the expected price vector in period (t+1) may or may not be an equilibrium price vector in period t. If it is an equilibrium price vector in period t as well as in period (t+1), then all is right with the world, and everyone will succeed in buying and selling as much of each commodity as he or she desires. If not, prices may or may not adjust in response to that disequilibrium, and expectations may or may not change accordingly.

Thus, instead of positing a mythical auctioneer in a contrived tatonnement process as the mechanism whereby prices are determined for currently executed transactions, the rational-expectations hypothesis posits expected future prices as the basis for the prices at which current transactions are executed, providing a straightforward solution to Arrow’s problem. The prices at which agents are willing to purchase or sell correspond to their expectations of prices in the future. If they find trading partners with similar expectations of future prices, they will reach agreement and execute transactions at those prices. If they don’t find traders with similar expectations, they will either be unable to transact, or will revise their price expectations, or they will assume that current market conditions are abnormal and then decide whether to transact at prices different from those they had expected.

When current prices are more favorable than expected, agents will want to buy or sell more than they would have if current prices were equal to their expectations for the future. If current prices are less favorable than they expect future prices to be, they will not transact at all or will seek to buy or sell less than they would have bought or sold if current prices had equaled expected future prices. The dichotomy between observed current prices, dictated by current demands and supplies, and expected future prices is unrealistic; all current transactions are made with an eye to expected future prices and to their opportunities to postpone current transactions until the future, or to advance future transactions into the present.

When current prices for similar commodities are not uniform across current transactions, a circumstance that Arrow attributed to varying degrees of market power among imperfectly competitive suppliers, the price dispersion may actually be caused, not by market power, but by dispersion in the expectations of future prices held by agents. Sellers expecting future prices to rise will be less willing to sell at relatively low prices now than are suppliers with pessimistic expectations about future prices. Equilibrium occurs when all transactors share the same expectations of future prices and expected future prices correspond to equilibrium prices in the current period.

Of course, that isn’t the only possible equilibrium situation. There may be situations in which a future event that will change a subset of prices can be anticipated. The anticipation of that future event affects not only expected future prices; it must also and necessarily affect current prices, insofar as current supplies can be carried from the present into the future, current purchases can be postponed until the future, or future consumption can be shifted into the present.

The practical upshot of these somewhat disjointed reflections is, I think, primarily to reinforce skepticism about the traditional Phillips Curve supposition that low and falling unemployment necessarily presages an increase in inflation. Wages are not primarily governed by the current state of the labor market, whatever “the labor market” might even mean in a macroeconomic context.

Expectations rule! And the rational-expectations revolution to the contrary notwithstanding, we have no good theory of how expectations are actually formed and there is certainly no reason to assume that, as a general matter, all agents share the same set of expectations.

The current fairly benign state of the economy reflects the absence of any serious disappointment of price expectations. If an economy is operating not very far from an equilibrium, then, although expectations are not the same, they likely are not very different. They will become very different only after the unexpected strikes. When that happens, borrowers and traders who had taken positions based on overly optimistic expectations find themselves unable to meet their obligations. It is only then that we will see whether the economy is really as strong and resilient as it now seems.

Expecting the unexpected is hard to do, but you can be sure that, sooner or later, the unexpected is going to happen.

More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

I made a few preliminary points about wage stickiness before getting to my point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So, the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place when workers have jobs, while search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are making and turn down job offers that they later regret not having accepted (a very skewed way of interpreting what happens in recessions); it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one unit in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And because it requires a workforce of many workers, the employer cannot generally avoid the morale-damaging effects of a wage cut on its workforce by replacing current workers with another set of workers at a lower wage than the old workers were getting.
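The arithmetic behind the type-one/type-two distinction can be sketched in a few lines. All the numbers below are my own hypothetical assumptions, chosen only to illustrate the asymmetry: a capacity-constrained firm gains from a price cut only if the cut raises its probability of landing its single order by enough to offset the thinner margin, whereas a firm that can expand output gains extra units, not just a better chance at one unit.

```python
# Hypothetical numbers for illustration (not from the post).
cost = 60.0  # unit cost of production

def expected_profit(price, prob_of_order):
    # A "type-one" firm can sell at most one unit, with some probability
    # of receiving the order at the quoted price.
    return prob_of_order * (price - cost)

# Type-one firm: a price cut raises the assumed order probability a bit,
# but the thinner margin can easily outweigh that gain.
keep = expected_profit(100.0, 0.50)   # hold price:  0.50 * 40
cut = expected_profit(90.0, 0.60)     # cut price:   0.60 * 30
print(keep, cut)                      # here the cut does NOT pay

# Type-two firm: undercutting captures rivals' orders, so the cut buys
# extra units sold, not merely a higher chance at one unit.
cut_type2 = 3 * (90.0 - cost)         # e.g., win three orders at the lower price
print(cut_type2)                      # expansion makes the same cut worthwhile
```

The point of the sketch is not the particular numbers but the structure: the payoff to price cutting scales with the quantity the cutter can add, and a worker, like the capacity-of-one firm, cannot add quantity.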

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, but whose importance has since come to appear increasingly significant to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions, would it be rational for you or any of them to accept one? The answer might depend on what each worker expects the other workers to do. But given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to do so? Would offering to accept a wage cut increase the likelihood of being one of the lucky ones chosen not to be laid off? Why would the worker offering to accept a wage cut that no one else was offering to accept appear more desirable to the employer than the workers who wouldn’t accept one? One reaction by the employer might be: what’s this guy’s problem?
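The interdependence just described is a coordination game, and a small payoff table makes the logic explicit. The sketch below uses my own hypothetical numbers, not anything in the post: the employer maintains output (and cancels layoffs) only if enough workers accept a cut, so no individual worker gains by cutting alone, even though universal cutting would avert all layoffs.

```python
# Hypothetical threshold coordination game (all numbers are assumptions).
N, K = 10, 8               # workforce size; cuts needed to preserve output
L = 4                      # layoffs if output is cut back
w_full, w_cut, w_unemp = 100, 85, 0

def expected_payoff(i_cuts, others_cutting):
    total = others_cutting + (1 if i_cuts else 0)
    wage = w_cut if i_cuts else w_full
    if total >= K:                    # employer maintains output: no layoffs
        return wage
    keep_prob = (N - L) / N           # layoffs assumed to strike at random
    return keep_prob * wage + (1 - keep_prob) * w_unemp

# If no one else is cutting, cutting is strictly worse; if K-1 others are
# cutting, joining them is strictly better. Both "no one cuts" and
# "everyone cuts" are equilibria, and nothing coordinates the jump.
print(expected_payoff(True, 0), expected_payoff(False, 0))
print(expected_payoff(True, K - 1), expected_payoff(False, K - 1))
```

The two equilibria are the point: the market supplies no signal that moves workers from the first to the second, which is the coordination failure described above.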

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I now am inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because of an overhang of unsold labor flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire; but the situation will not change until wage expectations change. And wage expectations do not change because an excess demand for workers creates any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers decided to immediately offer to accept sufficiently large wage cuts, some employers might forego plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would be contingent on a level of coordination between workers and employers and a convergence of expectations about future outcomes that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities. . . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct, but I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly (though too often without realizing its limiting implications) assumes that prices and incomes in other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am now assuming that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have a good instinctual sense for when it is, and when it is not, OK to make the assumption.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand, cost, and supply curves stay where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that shift the very curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, the markets can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space as intersections of stable demand curves with stable supply curves.
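The gap between closed-border and open-border analysis can be shown numerically. The sketch below is my own toy example, not from the post: two linear markets whose demands each depend on the other market's price. Partial-equilibrium analysis holds the other price fixed; the simultaneous solution lets both adjust, and the two answers differ.

```python
# Toy two-market system (all coefficients are my own illustrative numbers).
# Market i clears when:  a_i - b_i*p_i + g_i*p_other = c_i + d_i*p_i
import numpy as np

a1, b1, g1, c1, d1 = 20, 2.0, 0.5, 2, 1.0
a2, b2, g2, c2, d2 = 15, 1.5, 0.8, 1, 1.0

# Partial equilibrium in market 1: take p2 as a fixed parameter.
p2_assumed = 4.0
p1_partial = (a1 - c1 + g1 * p2_assumed) / (b1 + d1)

# "General equilibrium" for this little system: solve both clearing
# conditions simultaneously as a pair of linear equations.
A = np.array([[b1 + d1, -g1],
              [-g2, b2 + d2]])
rhs = np.array([a1 - c1, a2 - c2])
p1_ge, p2_ge = np.linalg.solve(A, rhs)
print(round(p1_partial, 3), round(p1_ge, 3))  # the two answers differ
```

With only two markets the spillover is already enough to move the answer; when the "market" in question is labor, which generates more than half of all income, the ceteris paribus assumption behind the partial solution fails completely, which is Keynes's point.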

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another point on the curve is simply untenable, because not only is the assumption that other variables are being held constant untenable and self-contradictory, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy is starting from, and always remains in, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all, but one, markets are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
