Archive for the 'partial equilibrium' Category

A New Version of my Paper “Between Walras and Marshall: Menger’s Third Way” Is Now Available on SSRN

Last week I reposted a revised version of a blogpost from last November, which was a revised section from my paper “Between Walras and Marshall: Menger’s Third Way.” That paper was presented at a conference in September 2021 marking the 100th anniversary of Menger’s death. I have now completed my revision of the entire paper, and the new version is now posted on SSRN.

Here is the link to the new version, and here is the abstract of the paper:

Neoclassical economics is bifurcated between Marshall’s partial-equilibrium and Walras’s general-equilibrium analyses. Neoclassical theory having failed to explain the Great Depression, Keynes proposed a theory of involuntary unemployment, later subsumed under the neoclassical synthesis of Keynesian and Walrasian theories. Lacking suitable microfoundations, that synthesis collapsed. But Walrasian theory provides no account of how equilibrium is achieved. Marshallian partial-equilibrium analysis offered a more plausible account of how general equilibrium is reached. But, by presuming that all markets but the one being analyzed are already in equilibrium, Marshallian partial equilibrium, like Walrasian general equilibrium, begs the question of how equilibrium is attained. A Mengerian approach to circumvent this conceptual impasse, relying in part on a critique of Franklin Fisher’s analysis of the stability of general equilibrium, is proposed.

Comments, criticisms and suggestions are welcomed and encouraged.

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture, whose principal theme was to attack the simple notion that the long-observed correlation between aggregate demand and employment is a reliable basis for conducting macroeconomic policy, “The Pretence of Knowledge.” Reiterating an argument that he had made more than 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas repackaged two years later in his famous critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between phenomena of “disorganized complexity” and phenomena of “organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as a mere pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on the methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie models of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those decisions. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that every decision made by every agent in such a model be individually rational.

In practice, implementing Lucasian methodological principles required that, in any macroeconomic model, all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, each individual decision being conditional on the decisions of other agents. The individual optimization problems can therefore be solved only simultaneously for all agents, not for each agent in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was simply assumed, based on the faulty reasoning of Walras, Fisher and Pareto, who merely counted equations and unknowns. A rigorous proof of existence was provided only by Abraham Wald in 1936 and subsequently, in more general form, by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.
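To see why counting equations and unknowns proves nothing, a pair of deliberately trivial examples (my own illustrations, not drawn from the literature) suffices:

\[
x^2 + 1 = 0 \qquad \text{(one equation, one unknown, no real solution)}
\]
\[
x + y = 1, \qquad x + y = 2 \qquad \text{(two equations, two unknowns, no solution at all)}
\]

And even when a solution exists, it may involve negative prices or quantities that an economic model must rule out; establishing the existence of an economically admissible solution was precisely what required the fixed-point arguments of Wald and of Arrow, Debreu and McKenzie.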

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on the systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but Marshallian partial-equilibrium analysis presumes that all markets and prices, except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation than the Walrasian approach of a process by which a set of equilibrium prices for an entire economy is, or could be, reached.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model, on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series generated by these models are derived by attributing all observed variations in national income to unexplained shocks in productivity, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of those variations.
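The circularity is easy to exhibit. The sketch below is my own illustration (the Cobb-Douglas form, the parameter values, and the synthetic data are all assumptions, not taken from any published model): a “productivity shock” series is backed out as the residual of a production function and is then used to “account for” the very output series from which it was computed, so the fit is perfect by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for logged output, capital and labor over 40 periods
T, alpha = 40, 0.36                          # alpha: assumed capital share
log_k = np.cumsum(rng.normal(0.01, 0.02, T))
log_n = rng.normal(0.0, 0.01, T)
log_y = 0.5 * log_k + rng.normal(0.0, 0.03, T)

# Step 1: define the "productivity shock" as whatever the production
# function cannot explain (the Solow residual)
log_z = log_y - alpha * log_k - (1 - alpha) * log_n

# Step 2: feed the residual back in to "account for" observed output
log_y_explained = log_z + alpha * log_k + (1 - alpha) * log_n

# The explanation reproduces output exactly, because the shocks were
# constructed from the very output series they are supposed to explain.
assert np.allclose(log_y, log_y_explained)
print("max discrepancy:", np.abs(log_y - log_y_explained).max())
```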

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1959 paper, “Toward a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility somehow guarantees that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the assumption of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical exercise. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications to both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying the theories. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives with which Lucas and his followers have justified their pretensions to scientific prestige and authority, and which they have used to compel compliance from other researchers, only belie those pretensions.

The Rises and Falls of Keynesianism and Monetarism

The following is extracted from a paper on the history of macroeconomics that I’m now writing. I don’t know yet where or when it will be published and there may or may not be further installments, but I would be interested in any comments or suggestions that readers might have. Regular readers, if there are any, will probably recognize some familiar themes that I’ve been writing about in a number of my posts over the past several months. So despite the diminished frequency of my posting, I haven’t been entirely idle.

Recognizing the cognitive dissonance between the vision of the optimal equilibrium of a competitive market economy described by Marshallian economic theory and the massive unemployment of the Great Depression, Keynes offered an alternative, and, in his view, more general, theory, the optimal neoclassical equilibrium being a special case.[1] The explanatory barrier that Keynes struggled, not quite successfully, to overcome in the dire circumstances of the 1930s, was why market-price adjustments do not have the equilibrating tendencies attributed to them by Marshallian theory. The power of Keynes’s analysis, enhanced by his rhetorical gifts, enabled him to persuade much of the economics profession, especially many of the most gifted younger economists at the time, that he was right. But his argument, failing to expose the key weakness in the neoclassical orthodoxy, was incomplete.

The full title of Keynes’s book, The General Theory of Employment, Interest and Money, identifies the key elements of his revision of neoclassical theory. First, contrary to a simplistic application of Marshallian theory, the mass unemployment of the Great Depression would not be substantially reduced by cutting wages to “clear” the labor market. The reason, according to Keynes, is that the levels of output and unemployment depend not on money wages, but on planned total spending (aggregate demand). Mass unemployment is the result of too little spending, not of excessive wages. Reducing wages would simply cause a corresponding decline in total spending, without increasing output or employment.
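The logic can be conveyed by a deliberately simple income-expenditure sketch (the numbers and the linear consumption function are purely illustrative assumptions of mine, not Keynes’s): once consumption is made to depend on income, equilibrium income is pinned down by planned spending, and a wage cut that leaves planned spending unchanged leaves income and employment unchanged as well.

```python
# A minimal income-expenditure (Keynesian-cross) sketch with illustrative numbers.
def equilibrium_income(autonomous_spending: float, mpc: float) -> float:
    """Income at which planned spending equals output: Y = A + mpc * Y."""
    return autonomous_spending / (1.0 - mpc)

A = 100.0    # autonomous spending (investment plus autonomous consumption), assumed
mpc = 0.8    # marginal propensity to consume out of income, assumed

Y = equilibrium_income(A, mpc)
print(f"Equilibrium income: {Y:.1f}")   # 500.0

# On this account output is governed by planned spending: a cut in money wages
# that does not raise A (because wage income, and with it consumption, falls
# alongside the wage) leaves equilibrium income and employment unchanged.
```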

If wage cuts do not increase output and employment, the ensuing high unemployment, Keynes argued, is involuntary, not the outcome of optimizing choices made by workers and employers. Ever since, the notion that unemployment can be involuntary has remained a contested issue between Keynesians and neoclassicists, a contest requiring resolution in favor of one or the other theory or some reconciliation of the two.

Besides rejecting the neoclassical theory of employment, Keynes also famously disputed the neoclassical theory of interest by arguing that the rate of interest is not, as in the neoclassical theory, a reward for saving, but a reward for sacrificing liquidity. In Keynes’s view, rather than equilibrate savings and investment, interest equilibrates the demand to hold money with the amount issued by the monetary authority. Under the neoclassical theory, it is the price level that adjusts to equilibrate the demand for money with the quantity issued.

Had Keynes been more attuned to the Walrasian paradigm, he might have recast his argument that cutting wages would not eliminate unemployment by noting the inapplicability of a Marshallian supply-demand analysis of the labor market (which accounts for over 50 percent of national income), because wage cuts would shift demand and supply curves in almost every other input and output market, grossly violating the ceteris-paribus assumption underlying the Marshallian supply-demand paradigm. When every change in the wage shifts supply and demand curves in all markets for goods and services, which in turn causes the labor-demand and labor-supply curves to shift, a supply-demand analysis of aggregate unemployment becomes a futile exercise.

Keynes’s work had two immediate effects on economics and economists. First, it immediately opened up a new field of research – macroeconomics – based on his theory that total output and employment are determined by aggregate demand. Representing only one element of Keynes’s argument, the simplified Keynesian model, on which macroeconomic theory was founded, seemed disconnected from either the Marshallian or Walrasian versions of neoclassical theory.

Second, the apparent disconnect between the simple Keynesian macro-model and neoclassical theory provoked an ongoing debate about the extent to which Keynesian theory could be deduced from, or at least reconciled with, the premises of neoclassical theory. Initial steps toward a reconciliation were taken when a model incorporating the quantity of money and the interest rate into the Keynesian analysis was introduced, soon becoming the canonical macroeconomic model of undergraduate and graduate textbooks.

Critics of Keynesian theory, usually those opposed to its support for deficit spending as a tool of aggregate demand management, its supposed inflationary bias, and its encouragement or toleration of government intervention in the free-market economy, tried to debunk Keynesianism by pointing out its inconsistencies with the neoclassical doctrine of a self-regulating market economy. But proponents of Keynesian precepts were also trying to reconcile Keynesian analysis with neoclassical theory. Future Nobel Prize winners like J. R. Hicks, J. E. Meade, Paul Samuelson, Franco Modigliani, James Tobin, and Lawrence Klein all derived various Keynesian propositions from neoclassical assumptions, usually by resorting to the un-Keynesian assumption of rigid or sticky prices and wages.

What both Keynesian and neoclassical economists failed to see is that, notwithstanding the optimality of an economy with equilibrium market prices, neither the Walrasian nor the Marshallian version of neoclassical theory can explain how that set of equilibrium prices is, or can be, found, or how it results automatically from the routine operation of free markets.

The assumption made implicitly by both Keynesians and neoclassicals was that, in an ideal perfectly competitive free-market economy, prices would adjust, if not instantaneously, at least eventually, to their equilibrium, market-clearing, levels so that the economy would achieve an equilibrium state. Not all Keynesians, of course, agreed that a perfectly competitive economy would reach that outcome, even in the long-run. But, according to neoclassical theory, equilibrium is the state toward which a competitive economy is drawn.

Keynesian policy could therefore be rationalized as an instrument for reversing departures from equilibrium and ensuring that such departures are relatively small and transitory. Notwithstanding Keynes’s explicit argument that wage cuts cannot eliminate involuntary unemployment, the sticky-prices-and-wages story was too convenient not to be adopted as a rationalization of Keynesian policy while also reconciling that policy with the neoclassical orthodoxy associated with the postwar ascendancy of the Walrasian paradigm.

The Walrasian ascendancy in neoclassical theory was the culmination of a silent revolution beginning in the late 1920s when the work of Walras and his successors was taken up by a younger generation of mathematically trained economists. The revolution proceeded along many fronts, of which the most important was proving the existence of a solution of the system of equations describing a general equilibrium for a competitive economy — a proof that Walras himself had not provided. The sophisticated mathematics used to describe the relevant general-equilibrium models and derive mathematically rigorous proofs encouraged the process of rapid development, adoption and application of mathematical techniques by subsequent generations of economists.

Despite the early success of the Walrasian paradigm, Kenneth Arrow, perhaps the most important Walrasian theorist of the second half of the twentieth century, drew attention to the explanatory gap within the paradigm: how the adjustment of disequilibrium prices is possible in a model of perfect competition in which every transactor takes market price as given. The Walrasian theory shows that a competitive equilibrium ensuring the consistency of agents’ plans to buy and sell results from an equilibrium set of prices for all goods and services. But the theory is silent about how those equilibrium prices are found and communicated to the agents of the model, the Walrasian tâtonnement process being an empirically empty heuristic artifact.

In fact, the explanatory gap identified by Arrow was even wider than he had suggested or realized, for another aspect of the Walrasian revolution of the late 1920s and 1930s was the extension of the equilibrium concept from a single-period equilibrium to an intertemporal equilibrium. Although earlier works by Irving Fisher and Frank Knight laid a foundation for this extension, the explicit articulation of intertemporal-equilibrium analysis was the nearly simultaneous contribution of three young economists, two Swedes (Myrdal and Lindahl) and an Austrian (Hayek), a contribution whose significance, despite its partial incorporation into the canonical Arrow-Debreu-McKenzie version of the Walrasian model, remains insufficiently recognized.

These three economists transformed the concept of equilibrium from an unchanging static economic system at rest to a dynamic system changing from period to period. While Walras and Marshall had conceived of a single-period equilibrium with no tendency to change barring an exogenous change in underlying conditions, Myrdal, Lindahl and Hayek conceived of an equilibrium unfolding through time, defined by the mutual consistency of the optimal plans of disparate agents to buy and sell in the present and in the future.

In formulating optimal plans that extend through time, agents consider both the current prices at which they can buy and sell, and the prices at which they will (or expect to) be able to buy and sell in the future. Although it may sometimes be possible to buy or sell forward at a currently quoted price for future delivery, agents planning to buy and sell goods or services rely, for the most part, on their expectations of future prices. Those expectations, of course, need not always turn out to have been accurate.

The dynamic equilibrium described by Myrdal, Lindahl and Hayek is a contingent event in which all agents have correctly anticipated the future prices on which they have based their plans. In the event that some, if not all, agents have incorrectly anticipated future prices, those agents whose plans were based on incorrect expectations may have to revise their plans or be unable to execute them. But unless all agents share the same expectations of future prices, their expectations cannot all be correct, and some of those plans may not be realized.

The impossibility of an intertemporal equilibrium of optimal plans if agents do not share the same expectations of future prices implies that the adjustment of perfectly flexible market prices is not sufficient for an optimal equilibrium to be achieved. I shall have more to say about this point below, but for now I want to note that the growing interest in the quiet Walrasian revolution in neoclassical theory, which occurred almost simultaneously with the Keynesian revolution, made it inevitable that Keynesian models would be recast in explicitly Walrasian terms.

What emerged from the Walrasian reformulation of Keynesian analysis was the neoclassical synthesis that became the textbook version of macroeconomics in the 1960s and 1970s. But the seemingly anomalous conjunction of both inflation and unemployment during the 1970s led to a reconsideration and widespread rejection of the Keynesian proposition that output and employment are directly related to aggregate demand.

Indeed, supporters of the Monetarist views of Milton Friedman argued that the high inflation and unemployment of the 1970s amounted to an empirical refutation of the Keynesian system. But Friedman’s political conservatism, free-market ideology, and his acerbic criticism of Keynesian policies obscured the extent to which his largely atheoretical monetary thinking was influenced by Keynesian and Marshallian concepts that rendered his version of Monetarism an unattractive alternative for younger monetary theorists, schooled in the Walrasian version of neoclassicism, who were seeking a clear theoretical contrast with the Keynesian macro model.

The brief Monetarist ascendancy following the 1970s inflation collapsed in the early 1980s, after Friedman’s Monetarist policy advice for controlling the quantity of money proved unworkable: central banks, foolishly trying to implement the advice, prolonged a needlessly deep recession while consistently overshooting their monetary targets, thereby provoking a long series of embarrassing warnings from Friedman about the imminent return of double-digit inflation.


[1] Hayek, both a friend and a foe of Keynes, would chide Keynes decades after Keynes’s death for calling his theory a general theory when, in Hayek’s view, it was a special theory relevant only in periods of substantially less than full employment, when increasing aggregate demand could increase total output. But in making this criticism, Hayek implicitly assumed away what he had himself acknowledged in his theory of intertemporal equilibrium: that there is no automatic equilibration mechanism ensuring that general equilibrium obtains.

My Paper “Between Walras and Marshall: Menger’s Third Way” Is Now Posted on SSRN

As regular readers of this blog will realize, several of my recent posts (here, here, here, here, and here) have been incorporated in my new paper, which I have been writing for the upcoming Carl Menger 2021 Conference next week in Nice, France. The paper is now available on SSRN.

Here is the abstract to the paper:

Neoclassical economics is bifurcated between Marshall’s partial-equilibrium and Walras’s general-equilibrium analyses. Given the failure of neoclassical theory to explain the Great Depression, Keynes proposed an explanation of involuntary unemployment. Keynes’s contribution was later subsumed under the neoclassical synthesis of the Keynesian and Walrasian theories. Lacking microfoundations consistent with Walrasian theory, the neoclassical synthesis collapsed. But Walrasian GE theory provides no plausible account of how GE is achieved. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian PE analysis, with prices equilibrating supply and demand. But Marshallian PE analysis presumes that all markets, but the small one being analyzed, are at equilibrium, so that price adjustments in the analyzed market neither affect nor are affected by other markets. The demand and cost (curves) of PE analysis are drawn on the assumption that all other prices reflect Walrasian GE values. While based on Walrasian assumptions, modern macroeconomics relies on the Marshallian intuition that agents know or anticipate the prices consistent with GE. Menger’s third way offers an alternative to this conceptual impasse by recognizing that nearly all economic activity is subjective and guided by expectations of the future. Current prices are set based on expectations of future prices, so equilibrium is possible only if agents share the same expectations of future prices. If current prices are set based on differing expectations, arbitrage opportunities are created, causing prices and expectations to change, leading to further arbitrage, expectational change, and so on, but not necessarily to equilibrium.

Here is a link to the paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3964127

The current draft is preliminary, and any comments, suggestions or criticisms from readers would be greatly appreciated.

The Explanatory Gap and Mengerian Subjectivism

My last several posts have focused on Marshall and Walras: the relationships and differences between the partial-equilibrium approach of Marshall and the general-equilibrium approach of Walras, and how the current state of neoclassical economics is divided between the more practical, applied approach of Marshallian partial-equilibrium analysis and the more theoretical general-equilibrium approach of Walras. The divide is particularly important for the history of macroeconomics, because many of the macroeconomic controversies in the decades since Keynes have also involved differences between Marshallians and Walrasians. I’m not happy with either the Marshallian or the Walrasian approach, and I have been trying to articulate my unhappiness with both branches of current neoclassical thinking by going back to the work of the forgotten marginal revolutionary, Carl Menger. I’ve been writing a paper for a conference later this month celebrating the 150th anniversary of Menger’s great work, a paper that draws on some of my recent musings, because I think it offers at least some hints about how to go about developing an improved neoclassical theory. Here’s a further sampling of my thinking, drawn from one of the sections of my work in progress.

Both the Marshallian and the Walrasian versions of equilibrium analysis have failed to bridge the explanatory gap between the equilibrium state, whose existence is crucial for such empirical content as can be claimed on behalf of those versions of neoclassical theory, and any account of how such an equilibrium state could ever be attained. The gap was identified by one of the chief architects of modern neoclassical theory, Kenneth Arrow, in his 1959 paper “Toward a Theory of Price Adjustment.”

The equilibrium is defined in terms of a set of prices. In the Marshallian version, the equilibrium prices are assumed to have already been determined in all but a single market (or perhaps a subset of closely related markets), so that the Marshallian analysis simply represents how, in a single small or isolated market, an equilibrium price is determined under suitable ceteris-paribus conditions, thereby leaving the equilibrium prices determined in other markets unaffected.

In the Walrasian version, all prices in all markets are determined simultaneously, but the method for determining those prices simultaneously was not spelled out by Walras other than by reference to the admittedly fictitious and purely heuristic tâtonnement process.
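In schematic terms (the notation is mine, not Marshall’s or Walras’s), the contrast can be put as follows. The Marshallian exercise solves a single market-clearing condition with all other prices frozen at their presumed equilibrium values,

\[
D_i(p_i;\ \bar{p}_{-i}) = S_i(p_i;\ \bar{p}_{-i}),
\]

where \(p_i\) is the price in the market under analysis and \(\bar{p}_{-i}\) is the vector of all other prices held fixed by the ceteris-paribus proviso, while the Walrasian problem requires the market-clearing conditions

\[
D_j(p_1, \ldots, p_n) = S_j(p_1, \ldots, p_n), \qquad j = 1, \ldots, n,
\]

to be satisfied simultaneously for every market.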

Both the Marshallian and Walrasian versions can show that equilibrium has optimal properties, but neither version can explain how the equilibrium is reached or how it can be discovered in practice. This is true even in the single-period context in which the Walrasian and Marshallian equilibrium analyses were originally carried out.

The single-period equilibrium has been extended, at least in a formal way, in the standard Arrow-Debreu-McKenzie (ADM) version of the Walrasian equilibrium, but this version is in important respects just an enhanced version of a single-period model inasmuch as all trades take place at time zero in a complete array of future state-contingent markets. So it is something of a stretch to consider the ADM model a truly intertemporal model in which the future can unfold in potentially surprising ways as opposed to just playing out a script already written in which agents go through the motions of executing a set of consistent plans to produce, purchase and sell in a sequence of predetermined actions.

Under less extreme assumptions than those of the ADM model, an intertemporal equilibrium involves both equilibrium current prices and equilibrium expected prices, and just as the equilibrium current prices are the same for all agents, equilibrium expected future prices must be equal for all agents. In his 1937 exposition of the concept of intertemporal equilibrium, Hayek explained the difference between what agents are assumed to know in a state of intertemporal equilibrium and what they are assumed to know in a single-period equilibrium.

If all agents share common knowledge, it may be plausible to assume that they will rationally arrive at similar expectations of the future prices. But if their stock of knowledge consists of both common knowledge and private knowledge, then it seems implausible to assume that the price expectations of different agents will always be in accord. Nevertheless, it is not necessarily inconceivable, though perhaps improbable, that agents will all arrive at the same expectations of future prices.

In the single-period equilibrium, all agents share common knowledge of the equilibrium prices of all commodities. But in intertemporal equilibrium, agents lack knowledge of the future and can only form expectations of future prices derived from their own, more or less accurate, stock of private knowledge. However, an equilibrium may still come about if, based on their private knowledge, they arrive at sufficiently similar expectations of future prices for their plans for current and future purchases and sales to be mutually compatible.

Some two decades after Arrow called attention to the explanatory gap in neoclassical theory by observing that there is no neoclassical theory of how competitive prices can change, Milgrom and Stokey turned Arrow’s argument on its head by arguing that, under rational expectations, no trading would ever occur at prices other than equilibrium prices, so that it would be impossible for a trader with private information to take advantage of that information. This argument seems to suffer from a widely shared misunderstanding of what rational expectations signify.

In the Mengerian view articulated by Hayek, then, intertemporal equilibrium, given the diversity of private knowledge and expectations, is an unlikely, though not inconceivable, state of affairs, a view that stands in sharp contrast to the argument of Paul Milgrom and Nancy Stokey (1982) that, in a rational-expectations equilibrium, there is no private knowledge, only common knowledge, so that no trader could profit from private knowledge, because no other trader with rational expectations would be willing to trade at any price other than the equilibrium price.

Rational expectations is not a property of individual agents making rational and efficient use of information, from whatever source it is acquired. As I have previously explained here (and in a revised version here), rational expectations is a property of intertemporal equilibrium; it is not an intrinsic property that agents have by virtue of being rational, just as the fact that the three angles of a triangle sum to 180 degrees is not a property of the angles qua angles, but a property of the triangle. When the expectations that agents hold about future prices are identical, their expectations are equilibrium expectations and they are rational. That agents hold rational expectations in equilibrium does not mean that they are possessed of the power to calculate equilibrium prices or even to know whether their expectations of future prices are equilibrium expectations. Equilibrium is the cause of rational expectations; rational expectations do not exist if the conditions for equilibrium aren’t satisfied. See Blume, Curry and Easley (2006).
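A deliberately simple one-market illustration of the point (my own construction, not drawn from Blume, Curry and Easley): in a linear market in which supply depends on the price producers expect, the “rational” expectation is nothing but the fixed point of the model itself, a property of the equilibrium rather than a reasoning power possessed by any individual agent.

```python
# Linear market in which supply depends on the producers' expected price.
# Illustrative coefficients (assumed): demand = a - b*p, supply = c + d*p_expected.
a, b, c, d = 10.0, 1.0, 1.0, 0.5

def realized_price(expected_price: float) -> float:
    """Price that clears the market given the producers' expectation."""
    return (a - c - d * expected_price) / b

# The "rational" expectation is the self-fulfilling one: p = realized_price(p).
p_star = (a - c) / (b + d)                  # = 6.0
assert abs(realized_price(p_star) - p_star) < 1e-12

# Any other expectation is falsified by the price it helps to bring about:
print(realized_price(8.0))     # 5.0, not 8.0 -> the expectation was not rational
print(realized_price(p_star))  # 6.0 -> the expectation is confirmed in equilibrium
```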

The assumption, now routinely regarded as axiomatic, that rational expectations is sufficient to ensure that equilibrium is automatically achieved, and that agents’ price expectations necessarily correspond to equilibrium price expectations, is a form of question-begging disguised as a methodological imperative requiring all macroeconomic models to be properly microfounded. The newly published volume edited by Arnon, Young and van der Beek, Expectations: Theory and Applications from Historical Perspectives, contains a wonderful essay by Duncan Foley that elucidates these issues.

In his centenary retrospective on Menger’s contribution, Hayek (1970), commenting on the inexactness of Menger’s account of economic theory, focused on Menger’s reluctance to embrace mathematics as an expository medium with which to articulate economic-theoretical concepts. While this may have been an aspect of Menger’s skepticism about mathematical reasoning, his recognition that expectations of the future are inherently inexact and conjectural and more akin to a range of potential outcomes of different probability may have been an even more significant factor in how Menger chose to articulate his theoretical vision.

But it is noteworthy that Hayek (1937) explicitly recognized that there is no theoretical explanation accounting for any tendency toward intertemporal equilibrium, and instead merely (and in 1937!) relied on an empirical tendency of economies to move in the direction of equilibrium as a justification for considering economic theory to have any practical relevance.

The Walras-Marshall Divide in Neoclassical Theory, Part II

In my previous post, which itself followed up an earlier post “General Equilibrium, Partial Equilibrium and Costs,” I laid out the serious difficulties with neoclassical theory in either its Walrasian or Marshallian versions: its exclusive focus on equilibrium states with no plausible explanation of any economic process that leads from disequilibrium to equilibrium.

The Walrasian approach treats general equilibrium as the primary equilibrium concept, because no equilibrium solution in a single market can be isolated from the equilibrium solutions for all other markets. Marshall understood that no single market could be in isolated equilibrium independent of all other markets, but the practical difficulty of framing an analysis of the simultaneous equilibration of all markets made focusing on general equilibrium unappealing to Marshall, who wanted economic analysis to be relevant to the concerns of the public, i.e., policy makers and men of affairs whom he regarded as his primary audience.

Nevertheless, in doing partial-equilibrium analysis, Marshall conceded that it had to be embedded within a general-equilibrium context, so he was careful to specify the ceteris-paribus conditions under which partial-equilibrium analysis could be undertaken. In particular, the market under analysis had to be sufficiently small, or the disturbance to which that market was subject had to be sufficiently small, for the repercussions of the disturbance in that market to have only a minimal effect on other markets, or, if the effects were substantial, they had to be concentrated on a specific market (e.g., the market for a substitute, or complementary, good).

By focusing on equilibrium in a single market, Marshall believed he was making the analysis of equilibrium more tractable than the Walrasian alternative of focusing on the analysis of simultaneous equilibrium in all markets. Walras chose to make his approach to general equilibrium, if not tractable, at least intuitive by appealing to the fiction of tatonnement conducted by an imaginary auctioneer adjusting prices in all markets in response to any inconsistencies in the plans of transactors preventing them from executing their plans at the announced prices.

But it eventually became clear, to Walras and to others, that tatonnement could not be considered a realistic representation of actual market behavior, because the tatonnement fiction disallows trading at disequilibrium prices by pausing all transactions while a complete set of equilibrium prices for all desired transactions is sought by a process of trial and error. Not only are all economic activity and the passage of time suspended during the tatonnement process, but there is not even a price-adjustment algorithm that can be relied on to find a complete set of equilibrium prices in a finite number of iterations.
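The mechanics, and the limitation, can be sketched in a few lines. The following is an illustrative discrete-time version of my own (not Walras’s formulation, and the toy excess-demand functions are arbitrary assumptions): prices are raised in proportion to excess demand and lowered in proportion to excess supply, with no trades executed unless and until every excess demand is driven to zero; nothing in the rule guarantees that the iteration ever terminates.

```python
import numpy as np

def tatonnement(excess_demand, p0, step=0.1, tol=1e-8, max_iter=10_000):
    """Adjust each price in proportion to its excess demand; in the fiction,
    no trades take place unless (and until) all excess demands are ~zero."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        z = excess_demand(p)
        if np.max(np.abs(z)) < tol:
            return p                           # an equilibrium price vector
        p = np.maximum(p + step * z, 1e-12)    # keep prices strictly positive
    return None                                # convergence is not guaranteed

# A toy two-good excess-demand system (purely illustrative, not derived
# from any particular preferences or endowments):
def z(p):
    return np.array([10.0 / p[0] - 5.0, 5.0 / p[1] - 10.0])

print(tatonnement(z, [1.0, 1.0]))   # converges here to roughly [2.0, 0.5]
```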

Despite its seeming realism, the Marshallian approach, piecemeal market-by-market equilibration of each distinct market, is no more tenable theoretically than tatonnement, the partial-equilibrium method being premised on a ceteris-paribus assumption in which all prices and all other endogenous variables determined in markets other than the one under analysis are held constant. That assumption can be maintained only on the condition that all markets are in equilibrium. So the implicit assumption of partial-equilibrium analysis is no less theoretically extreme than Walras’s tatonnement fiction.

In my previous post, I quoted Michel De Vroey’s dismissal of Keynes’s rationale for the existence of involuntary unemployment, a violation, in De Vroey’s estimation, of Marshallian partial-equilibrium premises. Let me quote De Vroey again.

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

A History of Macroeconomics from Keynes to Lucas and Beyond, pp. 22-23

My interpretation of De Vroey’s argument is that the strict Marshallian viewpoint requires that firms correctly anticipate the wages that they will have to pay in making their hiring and production decisions, while presumably also correctly anticipating the future demand for their products. I am unable to make sense of this argument unless it means that firms (and why should firm owners or managers be the only agents endowed with perfect or correct foresight?) correctly foresee the prices of the products that they sell and of the inputs that they purchase or hire. In other words, the strict Marshallian viewpoint invoked by De Vroey assumes that each transactor foresees, without the intervention of a timeless tatonnement process guided by a fictional auctioneer, the equilibrium price vector. When the strict Marshallian viewpoint is adopted, everything is indeed simple: every transactor is a Walrasian auctioneer.

My interpretation of Keynes – and perhaps I’m just reading my own criticism of partial-equilibrium analysis into Keynes – is that he understood that the aggregate labor market can’t be analyzed in a partial-equilibrium setting, because Marshall’s ceteris-paribus proviso can’t be maintained for a market that accounts for roughly half the earnings of the economy. When conditions change in the labor market, everything else also changes. So the equilibrium conditions of the labor market must be governed by aggregate equilibrium conditions that can’t be captured in, or accounted for by, a Marshallian partial-equilibrium framework. Because something other than supply and demand in the labor market determines the equilibrium, what happens in the labor market can’t, by itself, restore an equilibrium.

That, I think, was Keynes’s intuition. But while identifying a serious defect in the Marshallian viewpoint, that intuition did not provide an adequate theory of adjustment. But the inadequacy of Keynes’s critique doesn’t rehabilitate the Marshallian viewpoint, certainly not in the form in which De Vroey represents it.

But there’s a deeper problem with the Marshallian viewpoint than just the interdependence of all markets. Although Marshall accepted marginal-utility theory in principle and used it to explain consumer demand, he tried to limit its application to demand while retaining the classical theory of the cost of production as a coordinate factor explaining the relative prices of goods and services. Marginal utility determines demand while cost determines supply, so that supply and demand (cost and utility) jointly determine price, just as the two blades of a scissors jointly cut a piece of cloth or paper.

This view of the role of cost could be maintained only in the context of the typical Marshallian partial-equilibrium exercise in which all prices (including input prices), except the price of a single output, are held fixed at their general-equilibrium values. But the equilibrium prices of inputs are not determined independently of the values of the outputs they produce; their equilibrium market values are derived exclusively from the values of the outputs they are used to produce.

This was a point that Marshall, desiring to minimize the extent to which the Marginal Revolution overturned the classical theory of value, either failed to grasp or obscured: that prices and costs are simultaneously determined. By focusing on partial-equilibrium analysis, in which input prices are treated as exogenous variables rather than, as in general-equilibrium analysis, endogenously determined variables, Marshall was able to argue as if the classical theory, according to which the cost incurred to produce something determines its value or market price, had not been overturned.

The absolute dependence of input prices on the value of the outputs that they are being used to produce was grasped more clearly by Carl Menger than by Walras and certainly more clearly than by Marshall. What’s more, unlike either Walras or Marshall, Menger explicitly recognized the time lapse between the purchasing and hiring of inputs by a firm and the sale of the final output, inputs having been purchased or hired in expectation of the future sale of the output. But expected future sales are at prices anticipated, but not known, in advance, making the valuation of inputs equally conjectural and forcing producers to make commitments without knowing either their costs or their revenues before undertaking those commitments.
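The Mengerian point about imputation under uncertainty can be put in a stripped-down numerical sketch (my own illustration, with assumed numbers and discounting ignored): the most a producer can rationally offer for an input today is the value of the output the input is expected to yield, so the input’s valuation inherits whatever error infects the producer’s expectation of the output’s future price.

```python
# Imputing an input's value from an *expected* future output price (illustrative).
marginal_product = 2.0          # extra output per unit of input, assumed
expected_output_price = 10.0    # the producer's conjecture about the future price

# The most the producer can rationally bid for the input today:
imputed_input_value = marginal_product * expected_output_price    # 20.0

# The commitment is made now; revenue arrives later at a price that may
# differ from the one expected when the input was purchased or hired.
realized_output_price = 8.0
realized_revenue = marginal_product * realized_output_price       # 16.0
print("loss per unit of input:", imputed_input_value - realized_revenue)  # 4.0
```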

It is precisely this contingent relationship between the expectation of future sales at unknown, but anticipated, prices and the valuations that firms attach to the inputs they purchase or hire that provides an alternative to the problematic Marshallian and Walrasian accounts of how equilibrium market prices are actually reached.

The critical role of expected future prices in determining equilibrium prices was missing from both the Marshallian and the Walrasian theories of price determination. In the Walrasian theory, price determination was attributed to a fictional tatonnement process that Walras originally thought might serve as a kind of oversimplified and idealized version of actual market behavior. But Walras seems eventually to have recognized and acknowledged how far removed from reality his tatonnement invention actually was.

The seemingly more realistic Marshallian account of price determination avoided the unrealism of the Walrasian auctioneer, but only by attributing equally, if not more, unrealistic powers of foreknowledge to the transactors than Walras had attributed to his auctioneer. Only Menger, who realistically avoided attributing extraordinary knowledge either to transactors or to an imaginary auctioneer, instead attributing to transactors only an imperfect and fallible ability to anticipate future prices, provided a realistic account, or at least a conceptual approach toward a realistic account, of how prices are actually formed.

In a future post, I will try to spell out in greater detail my version of a Mengerian account of price formation and what this account might tell us about the process by which a set of equilibrium prices might be realized.

The Walras-Marshall Divide in Neoclassical Theory, Part I

This year, 2021, puts us squarely in the midst of the sesquicentennial period of the great marginal revolution in economics that began with the almost simultaneous appearance in 1871 of Menger’s Grundsätze der Volkswirtschaftslehre and Jevons’s Theory of Political Economy, followed in 1874 by Walras’s Elements d’Economie Politique Pure. Jevons left few students behind to continue his work, so his influence pales in comparison with that of his younger contemporary Alfred Marshall, who, working along similar lines, published his Principles of Economics in 1890. It was Marshall’s version of marginal-utility theory that defined for more than a generation what became known as neoclassical theory in the Anglophone world. Menger’s work, via his disciples Bohm-Bawerk and Wieser, was actually the most influential work on marginal-utility theory for at least 50 years, the work of Walras and his successor, Vilfredo Pareto, being too mathematical, even for professional economists, to become influential before the 1930s.

But after it was restated by J. R. Hicks, in his immensely influential treatise Value and Capital, in a form not only more accessible but also more coherent and more sophisticated, Walras’s work became the standard for rigorous formal economic analysis. Although the Walrasian paradigm became the standard for formal theoretical work, the Marshallian paradigm remained influential in applied microeconomic theory and empirical research, especially in fields like industrial organization, labor economics and international trade. Neoclassical economics, the corpus of mainstream economic theory that grew out of the marginal revolution, was therefore built almost entirely on the works of Marshall and Walras, the influence of Menger, like that of Jevons, having been largely, but not entirely, assimilated into the main body of neoclassical theory.

The subsequent development of monetary theory and macroeconomics, especially after the Keynesian Revolution swept the economics profession, was also influenced by both Marshall and Walras. And the question whether Keynes belonged to the Marshallian tradition in which he was trained, or became, either consciously or unconsciously, a Walrasian has been an ongoing dispute among historians of macroeconomics since the late 1940s.

The first attempt to merge Keynes into the Walrasian paradigm produced the neoclassical synthesis, which gained a brief ascendancy in the 1960s and early 1970s before being eclipsed by the New Classical rational-expectations macroeconomics of Lucas and Sargent, which transformed macroeconomics.

With that in mind, I’ve been reading Michel De Vroey’s excellent History of Macroeconomics from Keynes to Lucas and Beyond. An important feature of De Vroey’s book is its classification of macrotheories as either Marshallian or Walrasian in structure and orientation. I believe that the Walras vs. Marshall distinction is important, but I would frame that distinction differently from how De Vroey does. To be sure, De Vroey identifies some key differences between the Marshallian and Walrasian schemas, but I question whether he focuses on the differences between Marshall and Walras that really matter. And I also believe that he fails to address adequately the important problem that neither Marshall nor Walras solved: their inability to describe adequately a market mechanism that actually does, or even might, lead an economy toward an equilibrium position.

One reason for De Vroey’s misplaced emphasis is that he focuses on the different stories told by Walras and Marshall to explain how equilibrium, either for the entire system (Walras) or for a single market (Marshall), is achieved. The story that Walras famously told was the tatonnement stratagem, conceived to provide an account of how market forces, left undisturbed, would automatically bring an economy to a state of rest (general equilibrium). But Walras eventually realized that tatonnement could never be realistic for an economy with both exchange and production. The point of tatonnement is to prevent trading at disequilibrium prices, but assuming that production is suspended during tatonnement is untenable, because production cannot realistically be interrupted while the search for the equilibrium price vector is carried out.

Nevertheless, De Vroey treats tatonnement, despite its hopeless unrealism, as a sine qua non for any model to be classified as Walrasian. In chapter 19 (“The History of Macroeconomics through the lens of the Marshall-Walras Divide”), De Vroey provides a comprehensive list of differences between the Marshallian and Walrasian modeling approaches that makes tatonnement a key distinction between the two approaches. I will discuss the three that seem most important.

1 Price formation: Walras assumes all exchange occurs at equilibrium prices found through tatonnement conducted by a deus-ex-machina auctioneer. All agents are therefore price takers even in “markets” in which, absent the auctioneer, market power could be exercised. Marshall assumes that prices are determined in the course of interaction of suppliers and demanders in distinct markets, so that the mix of price-taking and price-setting agents depends on the characteristics of those distinct markets.

This dichotomy between the Walrasian and Marshallian accounts of how prices are determined sheds light on the motivations that led Marshall and Walras to adopt their differing modeling approaches, but there is an important distinction between a model and the intuition that motivates or rationalizes it. The model stands on its own, whatever the intuition motivating it. The motivation behind a model can inform how the model is assessed, but the substance of the model and its implications remain intact even if the intuition behind it is rejected.

2 Market equilibrium: Walras assumes that no market is in equilibrium unless general equilibrium obtains. Marshall assumes that partial equilibrium is reached separately in each market. General equilibrium is achieved when all markets are in partial equilibrium. The Walrasian approach is top-down, the Marshallian bottom-up.

3 Realism: Marshall is more realistic than Walras in depicting individual markets in which transactors themselves engage in the price-setting process, assessing market conditions, and gaining information about supply-and-demand conditions; Walras assumes that all agents are passive price takers merely calculating their optimal, but provisional, plans to buy and sell at any price vector announced by the auctioneer who then processes those plans to determine whether the plans are mutually consistent or whether a new price vector must be tried. But whatever the gain in realism, it comes at a cost, because, except in obvious cases of complementarity or close substitutability between products or services, the Marshallian paradigm ignores the less obvious, but not necessarily negligible, interactions between markets. Those interactions render the Marshallian ceteris-paribus proviso for partial-equilibrium analysis logically dubious, except under the most stringent assumptions.

The absence of an auctioneer from Marshall’s schema leads De Vroey to infer that market participants in that schema must be endowed with knowledge of market demand-and-supply conditions. I claim no expertise as a Marshallian scholar, but I find it hard to accept that, given his emphasis on realism, Marshall would have attributed perfect knowledge to market participants. The implausibility of the Walrasian assumptions is thus matched, in De Vroey’s view, by different, but scarcely less implausible, Marshallian assumptions.

De Vroey proceeds to argue that Keynes himself was squarely on the Marshallian, not the Walrasian, side of the divide. Here’s how, focusing on the IS-LM model, he puts it:

As far as the representation of the economy is concerned, the economy that the IS-LM model analyzes is composed of markets that function separately, each of them being an autonomous locus of equilibrium. Turning to trade technology, no auctioneer is supposedly present. As for the information assumption, it is true that economists using the IS-LM model scarcely evoke the possibility that it might rest on the assumption that agents are omniscient. But then nobody seems to have raised the issue of how equilibrium is reached in this model. Once raised, I see no other explanation than assuming agents’ ability to reconstruct the equilibrium values of the economy, that is, their being omniscient. On all these scores, the IS-LM model is Marshallian.

A History of Macroeconomics from Keynes to Lucas and Beyond, p. 350

De Vroey’s dichotomy between the Walrasian and Marshallian modeling approaches leads him to make needlessly sharp distinctions between them. The basic IS-LM model determines the quantity of money, consumption, saving and investment, income and the rate of interest. Presumably, by “autonomous locus of equilibrium,” De Vroey means that a variable determined in one of the IS-LM markets adjusts in response to disequilibrium in that market alone, but even so, the markets are not isolated from each other as they are in Marshallian partial-equilibrium analysis. The equilibrium values of the variables in the IS-LM model are simultaneously determined in all markets, so the autonomy of each market does not preclude simultaneous determination. Nor does the equilibrium of the model depend, as De Vroey seems to suggest, on the existence of an auctioneer; the role of the auctioneer is merely to provide a story (however implausible) about how the equilibrium is, or might be, reached.
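The simultaneity is easy to exhibit in a stripped-down linear IS-LM system (the coefficients below are illustrative assumptions of mine, not estimates from De Vroey or anyone else): income and the interest rate emerge from solving the goods-market and money-market conditions jointly, not from equilibrating either market on its own.

```python
import numpy as np

# Illustrative linear IS-LM system (assumed coefficients):
#   IS:  (1 - c1) * Y + i1 * r = c0 + i0 + G
#   LM:  k * Y - h * r = M / P
c0, c1, i0, i1, G = 50.0, 0.8, 100.0, 20.0, 100.0
k, h, M, P = 0.5, 40.0, 200.0, 1.0

A = np.array([[1 - c1,  i1],
              [k,      -h]])
b = np.array([c0 + i0 + G, M / P])

Y, r = np.linalg.solve(A, b)   # income and the interest rate determined jointly
print(f"Y = {Y:.1f}, r = {r:.2f}")   # roughly Y = 777.8, r = 4.72
```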

Elsewhere De Vroey faults Keynes for characterizing cyclical unemployment as involuntary, because that characterization is incompatible with a Marshallian analysis of the labor market. Without endorsing Keynes’s reasoning, I cannot accept De Vroey’s argument against Keynes, because the argument is based explicitly on the assumption of perfect foresight. Describing the difference between a strict Marshallian approach and that taken by Keynes, De Vroey writes as follows:

When the strict Marshallian viewpoint is adopted, everything is simple: it is assumed that the aggregate supply price function incorporates wages at their market-clearing magnitude. Instead, when taking Keynes’s line, it must be assumed that the wage rate that firms consider when constructing their supply price function is a “false” (i.e., non-market-clearing) wage. Now, if we want to keep firms’ perfect foresight assumption (and, let me repeat, we need to lest we fall into a theoretical wilderness), it must be concluded that firms’ incorporation of a false wage into their supply function follows from their correct expectation that this is indeed what will happen in the labor market. That is, firms’ managers are aware that in this market something impairs market clearing. No other explanation than the wage floor assumption is available as long as one remains in the canonical Marshallian framework. Therefore, all Keynes’s claims to the contrary notwithstanding, it is difficult to escape the conclusion that his effective demand reasoning is based on the fixed-wage hypothesis. The reason for unemployment lies in the labor market, and no fuss should be made about effective demand being [the reason rather] than the other way around.

Id. pp. 22-23

De Vroey seems to be saying that if firms anticipate an equilibrium outcome, the equilibrium outcome will be realized. This is not an argument; it is question-begging, which De Vroey justifies by warning that the alternative to question-begging is to “fall into a theoretical wilderness.” Thus, Keynes’s argument for involuntary unemployment is rejected on the ground that, in the only outcome foreseeable under the assumption of perfect information, unemployment cannot be involuntary.

Because neither the Walrasian nor the Marshallian modeling approach gives a plausible account of how an equilibrium is reached, De Vroey’s insistence that either implausible story is somehow essential to the corresponding modeling approach is misplaced; each approach commits the fallacy of misplaced concreteness by focusing on an equilibrium solution that cannot plausibly be realized. For De Vroey to argue that, because the Marshallian approach cannot otherwise explain how equilibrium is realized, the agents must be omniscient is akin to the advice of one Senator during the Vietnam War that President Nixon declare victory and then withdraw all American troops.

I will have more to say about the Walras-Marshall divide and how to surmount the difficulties with both in a future post (or posts).

General Equilibrium, Partial Equilibrium and Costs

Neoclassical economics is now bifurcated between Marshallian partial-equilibrium and Walrasian general-equilibrium analyses. With the apparent inability of neoclassical theory to explain the coordination failure of the Great Depression, J. M. Keynes proposed an alternative paradigm to explain the involuntary unemployment of the 1930s. But within two decades, Keynes’s contribution was subsumed under what became known as the neoclassical synthesis of the Keynesian and Walrasian theories (about which I have written frequently, e.g., here and here). Lacking microfoundations that could be reconciled with the assumptions of Walrasian general-equilibrium theory, the neoclassical synthesis eventually collapsed, its Keynesian component dismissed for its supposedly inadequate microfoundations.

But Walrasian general-equilibrium theory provides no plausible, much less axiomatic, account of how general equilibrium is, or could be, achieved. Even the imaginary tatonnement process lacks an algorithm that guarantees that a general-equilibrium solution, if it exists, would be found. Whatever plausibility is attributed to the assumption that price flexibility leads to equilibrium derives from Marshallian partial-equilibrium analysis, with market prices adjusting to equilibrate supply and demand.
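
To make the missing algorithm concrete, here is a minimal sketch, in Python, of the textbook excess-demand price-adjustment rule applied to a hypothetical two-person, two-good exchange economy; the economy, the parameter values, and the function names are my own illustrative assumptions, not anything taken from Walras. In this particular example the rule happens to converge, because Cobb-Douglas demands satisfy gross substitutability, but nothing in the rule itself guarantees convergence for arbitrary excess-demand functions, which is precisely the point.

```python
# Illustrative tatonnement sketch (hypothetical example, not a general algorithm):
# an "auctioneer" revises each price in proportion to the excess demand for that good.

def excess_demand(p1, p2):
    """Excess demands in a toy two-person, two-good exchange economy.

    Consumer A owns one unit of good 1 and spends a share a of wealth on good 1;
    consumer B owns one unit of good 2 and spends a share b of wealth on good 1.
    """
    a, b = 0.3, 0.7
    wealth_a, wealth_b = p1 * 1.0, p2 * 1.0
    d1 = a * wealth_a / p1 + b * wealth_b / p1              # total demand for good 1
    d2 = (1 - a) * wealth_a / p2 + (1 - b) * wealth_b / p2  # total demand for good 2
    return d1 - 1.0, d2 - 1.0                               # demand minus endowment

p1, p2 = 1.5, 1.0   # deliberately start away from the equilibrium relative price
step = 0.1          # auctioneer's adjustment speed
for _ in range(500):
    z1, z2 = excess_demand(p1, p2)
    if abs(z1) < 1e-9 and abs(z2) < 1e-9:
        break
    p1 = max(p1 + step * z1, 1e-9)   # raise the price where demand exceeds supply
    p2 = max(p2 + step * z2, 1e-9)

print(f"relative price p1/p2 after adjustment: {p1 / p2:.4f}")  # approaches 1 in this example
```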

Yet modern macroeconomics, despite its explicit Walrasian assumptions, implicitly relies on the Marshallian intuition that the fundamentals of general equilibrium (prices and costs) are known to agents who, except for random disturbances, continuously form rational expectations of market-clearing equilibrium prices in all markets.

I’ve written many earlier posts (e.g., here and here) contesting, in one way or another, the notion that all macroeconomic theories must be founded on first principles (i.e., microeconomic axioms about optimizing individuals). Any macroeconomic theory not appropriately founded on the axioms of individual optimization by consumers and producers is now dismissed as scientifically defective and unworthy of attention by serious scientific practitioners of macroeconomics.

When contesting the presumed necessity for macroeconomics to be microeconomically founded, I’ve often used Marshall’s partial-equilibrium method as a point of reference. Though derived from underlying preference functions that are independent of prices, the demand curves of partial-equilibrium analysis presume that all product prices, except the price of the product under analysis, are held constant. Similarly, the supply curves are derived from individual firm marginal-cost curves whose geometric position or algebraic description depends critically on the prices of raw materials and factors of production used in the production process. But neither the prices of alternative products to be purchased by consumers nor the prices of raw materials and factors of production are given independently of the general-equilibrium solution of the whole system.
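
In notation (mine, not Marshall’s), the partial-equilibrium demand and supply curves for good 1 are conditional functions of its own price alone,

$$q_1^{d} = D_1\left(p_1;\ \bar{p}_2,\dots,\bar{p}_n,\ \bar{m}\right), \qquad q_1^{s} = S_1\left(p_1;\ \bar{w}_1,\dots,\bar{w}_k\right),$$

with the other product prices $\bar{p}_j$, money income $\bar{m}$, and input prices $\bar{w}_i$ all held fixed at values that, strictly speaking, only a general-equilibrium solution of the whole system could supply.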

Thus, partial-equilibrium analysis, to be analytically defensible, requires a ceteris-paribus proviso. But to be analytically tenable, that proviso must posit an initial position of general equilibrium. Unless the analysis starts from a state of general equilibrium, the assumption that all prices but one remain constant can’t be maintained, the constancy of disequilibrium prices being a nonsensical assumption.

The ceteris-paribus proviso also entails an assumption about the market under analysis: either the market itself, or the disturbance to which it’s subject, must be so small that any change in the equilibrium price of the product in question has de minimis repercussions on the prices of every other product and of every input and factor of production used in producing that product. Thus, the validity of partial-equilibrium analysis depends on the presumption that the unique and locally stable general equilibrium is approximately undisturbed by whatever changes result from the posited change in the single market being analyzed. But that presumption is not so self-evidently plausible that our reliance on it to make empirical predictions is always, or even usually, justified.

Perhaps the best argument for taking partial-equilibrium analysis seriously is that the analysis identifies certain deep structural tendencies that, at least under “normal” conditions of moderate macroeconomic stability (i.e., moderate unemployment and reasonable price stability), will usually be observable despite the disturbing influences that are subsumed under the ceteris-paribus proviso. That assumption — an assumption of relative ignorance about the nature of the disturbances that are assumed to be constant — posits that those disturbances are more or less random, and as likely to cause errors in one direction as another. Consequently, the predictions of partial-equilibrium analysis can be assumed to be statistically, though not invariably, correct.

Of course, the more interconnected a given market is with other markets in the economy, and the greater its size relative to the total economy, the less confidence we can have that the implications of partial-equilibrium analysis will be corroborated by empirical investigation.

Despite its frequent unsuitability, economists and commentators are often willing to deploy partial-equilibrium analysis in offering policy advice even when the necessary ceteris-paribus proviso of partial-equilibrium analysis cannot be plausibly upheld. For example, two of the leading theories of the determination of the rate of interest are the loanable-funds doctrine and the Keynesian liquidity-preference theory. Both these theories of the rate of interest suppose that the rate of interest is determined in a single market — either for loanable funds or for cash balances — and that the rate of interest adjusts to equilibrate one or the other of those two markets. But the rate of interest is an economy-wide price whose determination is an intertemporal-general-equilibrium phenomenon that cannot be reduced, as the loanable-funds and liquidity preference theories try to do, to the analysis of a single market.

Similarly, partial-equilibrium analysis of the supply of, and the demand for, labor has been used of late to predict changes in wages from immigration and to advocate for changes in immigration policy, while, in an earlier era, it was used to recommend wage reductions as a remedy for persistently high aggregate unemployment. In the General Theory, Keynes criticized those who used a naïve version of the partial-equilibrium method to recommend curing high unemployment by cutting wage rates, correctly observing that full employment requires the satisfaction of certain macroeconomic equilibrium conditions that would not necessarily be satisfied by cutting wages.

However, in the very same volume, Keynes argued that the rate of interest is determined exclusively by the relationship between the quantity of money and the demand to hold money, ignoring that the rate of interest is an intertemporal relationship between current and expected future prices, an insight explained earlier by Irving Fisher that Keynes himself had expertly deployed in his Tract on Monetary Reform and elsewhere, including Chapter 17 of the General Theory itself.

Evidently, the allure of supply-demand analysis can sometimes be too powerful for well-trained economists to resist, even when they themselves know that it ought to be resisted.

A further point also requires attention: the conditions necessary for partial-equilibrium analysis to be valid are never really satisfied; firms don’t know the costs that determine the optimal rate of production when they actually must settle on a plan of how much to produce, how much raw material to buy, and how much labor and other factors of production to employ. Marshall, the originator of partial-equilibrium analysis, analogized supply and demand to the two blades of a pair of scissors acting jointly to achieve an intended result.

But Marshall erred in thinking that supply (i.e., cost) is an independent determinant of price, because the equality of costs and prices is a characteristic of general equilibrium. It can be applied to partial-equilibrium analysis only under the ceteris-paribus proviso that situates partial-equilibrium analysis in a pre-existing general equilibrium of the entire economy. It is only in a general-equilibrium state that the cost incurred by a firm in producing its output represents the value of the foregone output that could have been produced had the firm’s output been reduced. Only if the analyzed market is so small that changes in how much firms in that market produce do not affect the prices of the inputs used to produce that output can definite marginal-cost curves be drawn or algebraically specified.

Unless general equilibrium obtains, prices need not equal costs, as measured by the quantities and prices of inputs used by firms to produce any product. Partial equilibrium analysis is possible only if carried out in the context of general equilibrium. Cost cannot be an independent determinant of prices, because cost is itself determined simultaneously along with all other prices.

But even aside from the reasons why partial-equilibrium analysis presumes that all prices, but the price in the single market being analyzed, are general-equilibrium prices, there’s another, even more problematic, assumption underlying partial-equilibrium analysis: that producers actually know the prices that they will pay for the inputs and resources to be used in producing their outputs. The cost curves of the standard economic analysis of the firm, from which the supply curves of partial-equilibrium analysis are derived, presume that the prices of all inputs and factors of production correspond to those that are consistent with general equilibrium. But general-equilibrium prices are never known by anyone except the hypothetical agents in a general-equilibrium model with complete markets, or by agents endowed with perfect foresight (aka rational expectations in the strict sense of that misunderstood term).

At bottom, Marshallian partial-equilibrium analysis is comparative statics: a comparison of two alternative (hypothetical) equilibria distinguished by some difference in the parameters characterizing the two equilibria. By comparing the equilibria corresponding to the different parameter values, the analyst can infer the effect (at least directionally) of a parameter change.
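
To see what the directional inference involves, here is a schematic statement (in my notation, not anything quoted from Marshall or Samuelson). Suppose the market-clearing condition is

$$D(p,\alpha) - S(p) = 0,$$

which implicitly defines the equilibrium price $p^*(\alpha)$ as a function of a parameter $\alpha$. Differentiating,

$$\frac{dp^*}{d\alpha} = \frac{\partial D/\partial \alpha}{\partial S/\partial p - \partial D/\partial p},$$

so that, with a downward-sloping demand curve and an upward-sloping supply curve, the denominator is positive and the direction of the equilibrium price change is simply the direction in which the parameter shifts demand. The magnitude, by contrast, depends on slopes that the comparative-statics exercise takes as given rather than observes.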

But comparative-statics analysis is subject to a serious limitation: comparing two alternative hypothetical equilibria is very different from making empirical predictions about the effects of an actual parameter change in real time.

Comparing two alternative equilibria corresponding to different values of a parameter may be suggestive of what could happen after a policy decision to change that parameter, but there are many reasons why the change implied by the comparative-statics exercise might not match or even approximate the actual change.

First, the initial state was almost certainly not an equilibrium state, so ongoing systemic changes will be difficult, if not impossible, to disentangle from the effect of the parameter change implied by the comparative-statics exercise.

Second, even if the initial state was an equilibrium, the transition to a new equilibrium is never instantaneous. The transitional period therefore leads to changes that in turn induce further systemic changes that cause the new equilibrium toward which the system gravitates to differ from the final equilibrium of the comparative-statics exercise.

Third, each successive change in the final equilibrium toward which the system is gravitating leads to further changes that in turn keep changing the final equilibrium. There is no reason why the successive changes lead to convergence on any final equilibrium end state. Nor is there any theoretical proof that the adjustment path leading from one equilibrium to another ever reaches an equilibrium end state. The gap between the comparative-statics exercise and the theory of adjustment in real time remains unbridged and may, even in principle, be unbridgeable.

Finally, without a complete system of forward and state-contingent markets, equilibrium requires not just that current prices converge to equilibrium prices; it requires that the expectations of all agents about future prices converge to equilibrium expectations of future prices. Unless agents’ expectations of future prices converge to their equilibrium values, an equilibrium may not even exist, let alone be approached or attained.

So the Marshallian assumption that producers know their costs of production and make production and pricing decisions based on that knowledge is both factually wrong and logically untenable. Nor do producers know what the demand curves for their products really look like, except in the extreme case in which suppliers take market prices to be parametrically determined. But even then, they make decisions based not on known prices, but on expected prices. Their expectations are constantly being tested against market information about actual prices, information that causes decision makers to affirm or revise their expectations in light of the constant flow of new information about prices and market conditions.

I don’t reject partial-equilibrium analysis, but I do call attention to its limitations, and to its unsuitability as a supposedly essential foundation for macroeconomic analysis, especially inasmuch as microeconomic analysis, AKA partial-equilibrium analysis, is utterly dependent on the uneasy macrofoundation of general-equilibrium theory. The intuition of Marshallian partial equilibrium cannot fill the gap, long ago noted by Kenneth Arrow, in the neoclassical theory of equilibrium price adjustment.

What’s so Great about Supply-Demand Analysis?

Just about the first thing taught to economics students is that there are demand curves for goods and services and supply curves of goods and services. Demand curves show how much customers wish to buy of a particular good or service within a period of time at various prices that might be charged for that good or service. The supply curve shows how much suppliers of a good or service would offer to sell at those prices.

Economists assume, and given certain more basic assumptions can (almost) prove, that customers will seek to buy less of a good or service at higher prices than at lower prices. Similarly, they assume that suppliers of the good or service offer to sell more at higher prices than at lower prices. Reflecting those assumptions, demand curves are downward-sloping and supply curves are upward-sloping. An upward-sloping supply curve is likely to intersect a downward-sloping demand curve at a single point, which corresponds to an equilibrium that allows customers to buy as much as they want to and suppliers to sell as much as they want to in the relevant time period.
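
A minimal worked example, with hypothetical linear curves chosen purely for illustration: if demand is $D(p) = a - bp$ and supply is $S(p) = c + dp$, with $a > c$ and $b, d > 0$, then setting $D(p) = S(p)$ gives the unique intersection

$$p^{*} = \frac{a - c}{b + d}, \qquad q^{*} = \frac{ad + bc}{b + d},$$

at which every buyer and every seller transacts exactly the quantity planned at that price.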

This analysis is the bread and butter of economics. It leads to the conclusion that, when customers can’t buy as much as they would like, the price goes up, and, when suppliers can’t sell as much as they would like, the price goes down. So the natural tendency in any market is for the price to rise if it’s less than the equilibrium price, and to fall if it’s greater than the equilibrium price. This is the logic behind letting the market determine prices.

It can also be shown, if some further assumptions are made, that the intersection of the supply and demand curves represents an optimal allocation of resources in the sense that the total value of output is maximized. The necessary assumptions are, first, that the demand curve measures the marginal value placed on additional units of output and, second, that the supply curve measures the marginal cost of producing additional units of output. The intersection of the two curves then corresponds to the maximization of the total value of output, because the marginal cost represents the value of the output that could have been produced had the resources devoted to producing the good in question been employed in their best alternative uses. Where the supply curve rises above the demand curve, the resources needed to produce additional units would yield more value if devoted to producing something else than the additional output of the good in question is worth.
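
In symbols (my notation): if the height of the demand curve at quantity $q$ is the marginal value $MV(q)$ and the height of the supply curve is the marginal cost $MC(q)$, then the total net value

$$V(q) = \int_{0}^{q} \left[\, MV(x) - MC(x) \,\right] dx$$

is maximized where $MV(q^{*}) = MC(q^{*})$, which is precisely the intersection of the two curves; producing beyond $q^{*}$ subtracts more value elsewhere than it adds here.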

There is much to be said for the analysis, and it would be wrong to dismiss it. But it’s also important to understand its limitations, and, especially, the implicit assumptions on which it relies. In a sense, supply-demand analysis is foundational, the workhorse model that is the first resort of economists. But its role as a workhorse model does not automatically render analyses untethered to supply and demand illegitimate.

Supply-demand analysis has three key functions. First, it focuses attention on the idea of an equilibrium price at which all buyers can buy as much as they would like, and all sellers can sell as much as they would like. In a typical case, with an upward-sloping supply curve and a downward-sloping demand curve, there is one, and only one, price with that property.

Second, as explained above, there is a sense in which that equilibrium price, aside from enabling the mutual compatibility of buyers’ and sellers’ plans to buy or to sell, has optimal properties.

Third, it’s a tool for predicting how changes in market conditions, like imposing a sales or excise tax, affect customers and suppliers. It compares two equilibrium positions on the assumption that only one parameter changes and predicts the effect of the parameter change by comparing the new and old equilibria. It’s the prototype for the comparative-statics method.
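
As an illustration of that comparative-statics use of the apparatus, here is a small sketch, in Python, of the excise-tax exercise on hypothetical linear curves; the functional forms and the numbers are mine, chosen only to make the calculation concrete, and the predicted changes hold only under the ceteris-paribus proviso discussed below.

```python
# Hypothetical comparative-statics calculation: linear demand D(p) = a - b*p and
# supply S(p) = c + d*(p - t), where p is the price buyers pay and t is a per-unit
# excise tax collected from sellers. All numbers are illustrative only.

def equilibrium(a, b, c, d, t=0.0):
    # Solve a - b*p = c + d*(p - t) for the buyers' price p, then the quantity traded.
    p = (a - c + d * t) / (b + d)
    q = a - b * p
    return p, q

a, b, c, d = 100.0, 2.0, 10.0, 3.0
p0, q0 = equilibrium(a, b, c, d, t=0.0)   # old equilibrium (no tax)
p1, q1 = equilibrium(a, b, c, d, t=5.0)   # new equilibrium with a 5-unit tax

print(f"buyers' price rises by {p1 - p0:.2f}; quantity falls by {q0 - q1:.2f}")
# The comparison says nothing about the adjustment path from the old to the new equilibrium.
```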

The chief problem with supply-demand analysis is that it requires a strict ceteris-paribus assumption, so that everything but the price and the quantity of the good under analysis remains constant. For many reasons, that assumption can’t literally be true. If the price of the good rises (falls), the real income of consumers decreases (increases). And if the price rises (falls), suppliers likely pay more (less) for their inputs. Changes in the price of one good also affect the prices of other goods, which, in turn, may affect the demand for the good under analysis. Each of those consequences would cause the supply and demand curves to shift from their initial positions. How much the ceteris-paribus assumption matters depends on how much of their incomes consumers spend on the good under analysis. The more they spend, the less plausible the ceteris paribus assumption.

But another implicit assumption underlies supply-demand analysis: that the economic system starts from a state of general equilibrium. Why must this assumption be made? The answer is that it’s implied by the ceteris-paribus assumption that all other prices remain constant. Unless other markets are in equilibrium, it can’t be assumed that all other prices and incomes remain constant; if other markets aren’t in equilibrium, then prices for other goods, and for inputs used to produce the product under analysis, will change, violating the ceteris-paribus assumption. And unless the prices (and wages) of the inputs used to produce the good under analysis remain constant, the supply curve of the product can’t be assumed to remain unchanged.

On top of that, Walras’s Law implies that if one market is in disequilibrium, then at least one other market must also be in disequilibrium. So an internal contradiction lies at the heart of supply-demand analysis. The contradiction can be avoided, though not resolved, only by assuming that the market being analyzed is so minute relative to the rest of the economy, or so isolated from all other markets, that a disturbance changing its equilibrium position either would not disrupt the existing equilibrium in all other markets or would disturb those equilibria so slightly that the disturbances can safely be ignored.
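
Walras’s Law can be stated compactly in standard notation: if $z_i(p)$ is the aggregate excess demand for good $i$ at the price vector $p$, then the agents’ budget constraints imply

$$\sum_{i=1}^{n} p_i\, z_i(p) = 0,$$

so that if some market with a positive price has nonzero excess demand, at least one other market must have an excess demand of the opposite sign.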

But we’re not done yet. The underlying general equilibrium on which the partial-equilibrium (supply-demand) analysis is based exists only conceptually, not in reality. Although it’s possible to prove the existence of such an equilibrium under more or less plausible mathematical assumptions about convexity and the continuity of the relevant functions, it is less straightforward to prove that the equilibrium is unique, or at least locally stable. If it is not unique or locally stable, there is no guarantee that comparative statics is possible, because a displacement from an unstable equilibrium may cause an unpredictable adjustment that violates the ceteris-paribus assumption.

Finally, and perhaps most problematic, comparative statics is merely a comparison of two alternative equilibria, neither of which can be regarded as the outcome of a theoretically explicable, much less practical, process leading from initial conditions to the notional equilibrium state. Accordingly, neither is there any process whereby a disturbance to (a parameter change in) an initial equilibrium would lead from that initial equilibrium to a new equilibrium. That is what comparative statics means: the comparison of two alternative and disconnected equilibria. There is no transition from one to the other, merely a comparison of the difference between them attributable to the change in a particular parameter in the initial conditions underlying the equilibria.

Given all the assumptions that must be satisfied for the basic implications of conventional supply-demand analysis to be unambiguously valid, that analysis obviously cannot provide demonstrably true predictions. As just explained, the comparative-statics method in general and supply-demand analysis in particular provide no actual predictions; they are merely conjectural comparisons of alternative notional equilibria.

The ceteris paribus assumption is often dismissed as making any theory tautological and untestable. But if an ad hoc assumption, introduced when observations don’t match the predictions derived from a given theory, is independently testable, it adds to the empirical content of the theory, as demonstrated by the ad hoc assumption of an eighth planet (Neptune) in our solar system when predictions about the orbits of the seven known planets did not accord with their observed orbits.

Friedman’s famous methodological argument that only predictions, not assumptions, matter is clearly wrong. Economists have to be willing to modify assumptions and infer the implications that follow from modified or supplementary assumptions rather than take for granted that assumptions cannot meaningfully and productively affect the implications of a general analytical approach. It would be a travesty if physicists insisted on maintaining the no-friction assumption in every application merely because it is a convenient simplification that makes the analysis tractable. That approach is a prescription for scientific stagnation.

The art of economics is to identify the key assumptions that ought to be modified to make a general analytical approach relevant and fruitful. When they are empirically testable, ad hoc assumptions that modify the ceteris paribus restriction constitute scientific advance.

But it’s important to understand how tenuous the connection is between the formalism of supply-demand analysis and the comparative-statics method, on the one hand, and the predictive power of that analysis and that method, on the other. The formalism stops far short of being able to generate clear and unambiguous predictions. The relationship between the formalism and the real world is loose, and the apparent logical rigor of the formalism must be supplemented by notable and sometimes embarrassing doses of hand-waving or question-begging.

And it is also worth remembering the degree to which the supposed rigor of neoclassical microeconomic supply-demand formalism depends on the macroeconomic foundation of the existence (and at least approximate reality) of a unique or locally stable general equilibrium.

Filling the Arrow Explanatory Gap

The following (with some minor revisions) is a Twitter thread I posted yesterday. Unfortunately, because it was my first attempt at threading, the thread wound up being split into three sub-threads, and, rather than try to reconnect them all, I will just post the complete thread here as a blogpost.

1. Here’s an outline of an unwritten paper developing some ideas from my paper “Hayek Hicks Radner and Four Equilibrium Concepts” (see here for an earlier ungated version) and some from previous blog posts, in particular Phillips Curve Musings

2. Standard supply-demand analysis is a form of partial-equilibrium (PE) analysis, which means that it is contingent on a ceteris paribus (CP) assumption, an assumption largely incompatible with realistic dynamic macroeconomic analysis.

3. Macroeconomic analysis is necessarily situated in a general-equilibrium (GE) context that precludes any CP assumption, because there are no variables that are held constant in GE analysis.

4. In the General Theory, Keynes criticized the argument based on supply-demand analysis that cutting nominal wages would cure unemployment. Instead, despite his Marshallian training (upbringing) in PE analysis, Keynes argued that PE (AKA supply-demand) analysis is unsuited for understanding the problem of aggregate (involuntary) unemployment.

5. The comparative-statics method described by Samuelson in the Foundations of Economic Analysis formalized PE analysis under the maintained assumption that a unique GE obtains, deriving “meaningful theorems” from the first- and second-order conditions for a local optimum.

6. PE analysis, as formalized by Samuelson, is conditioned on the assumption that GE obtains. It focuses on the effect of changing a single parameter in a single market small enough for the effects of the parameter change on other markets to be negligible.

7. Thus, PE analysis, the essence of microeconomics, is predicated on the macrofoundation that all markets but one are in equilibrium.

8. Samuelson’s “meaningful theorems” was a misnomer reflecting mid-20th-century operationalism; they are better understood as empirically refutable propositions implied by theorems augmented with a CP assumption that interactions between markets are small enough to be neglected.

9. If a PE model is appropriately specified, and if the market under consideration is small or only minimally related to other markets, then differences between predictions and observations will be statistically insignificant.

10. So PE analysis uses comparative-statics to compare two alternative general equilibria that differ only in respect of a small parameter change.

11. The difference allows an inference about the causal effect of a small change in that parameter, but says nothing about how an economy would actually adjust to a parameter change.

12. PE analysis is conditioned on the CP assumption that the analyzed market and the parameter change are small enough to allow any interaction between the parameter change and markets other than the market under consideration to be disregarded.

13. However, the process whereby one equilibrium transitions to another is left undetermined; the difference between the two equilibria with and without the parameter change is computed but no account of an adjustment process leading from one equilibrium to the other is provided.

14. Hence, the term “comparative statics.”

15. The only suggestion of an adjustment process is an assumption that the price-adjustment in any market is an increasing function of excess demand in the market.

16. In his seminal account of GE, Walras posited the device of an auctioneer who announces prices–one for each market–computes desired purchases and sales at those prices, and sets, under an adjustment algorithm, new prices at which desired purchases and sales are recomputed.

17. The process continues until a set of equilibrium prices is found at which excess demands in all markets are zero. In Walras’s heuristic account of what he called the tatonnement process, trading is allowed only after the equilibrium price vector is found by the auctioneer.

18. Walras and his successors assumed, but did not prove, that, if an equilibrium price vector exists, the tatonnement process would eventually, through trial and error, converge on that price vector.

19. However, contributions by Sonnenschein, Mantel and Debreu (hereinafter referred to as the SMD Theorem) show that no price-adjustment rule necessarily converges on a unique equilibrium price vector even if one exists.

20. The possibility that there are multiple equilibria with distinct equilibrium price vectors may or may not be worth explicit attention, but for purposes of this discussion, I confine myself to the case in which a unique equilibrium exists.

21. The SMD Theorem underscores the lack of any explanatory account of a mechanism whereby changes in market prices, responding to excess demands or supplies, guide a decentralized system of competitive markets toward an equilibrium state, even if a unique equilibrium exists.

22. The Walrasian tatonnement process has been replaced by the Arrow-Debreu-McKenzie (ADM) model in an economy of infinite duration consisting of an infinite number of generations of agents with given resources and technology.

23. The equilibrium of the model involves all agents populating the economy over all time periods meeting before trading starts, and, based on initial endowments and common knowledge, making plans given an announced equilibrium price vector for all time in all markets.

24. Uncertainty is accommodated by the mechanism of contingent trading in alternative states of the world. Given assumptions about technology and preferences, the ADM equilibrium determines the set of prices for all contingent states of the world in all time periods.

25. Given equilibrium prices, all agents enter into optimal transactions in advance, conditioned on those prices. Time unfolds according to the equilibrium set of plans and associated transactions agreed upon at the outset and executed without fail over the course of time.

26. At the ADM equilibrium price vector all agents can execute their chosen optimal transactions at those prices in all markets (certain or contingent) in all time periods. In other words, at that price vector, excess demands in all markets with positive prices are zero.

27. The ADM model makes no pretense of identifying a process that discovers the equilibrium price vector. All that can be said about that price vector is that if it exists and trading occurs at equilibrium prices, then excess demands will be zero if prices are positive.

28. Arrow himself drew attention to the gap in the ADM model in a 1959 paper, observing that, because perfect competition makes every agent a price taker, the theory provides no account of who changes prices when markets are not in equilibrium.

29. In addition to the explanatory gap identified by Arrow, another shortcoming of the ADM model was discussed by Radner: the dependence of the ADM model on a complete set of forward and state-contingent markets at time zero when equilibrium prices are determined.

30. Not only is the complete-market assumption a backdoor reintroduction of perfect foresight, it excludes many features of the greatest interest in modern market economies: the existence of money, stock markets, and money-creating commercial banks.

31. Radner showed that for full equilibrium to obtain, not only must excess demands in current markets be zero, but whenever current markets and current prices for future delivery are missing, agents must correctly expect those future prices (a schematic statement of this condition is appended after the thread).

32. But there is no plausible account of an equilibrating mechanism whereby price expectations become consistent with GE. Although PE analysis suggests that price adjustments do clear markets, no analogous analysis explains how future price expectations are equilibrated.

33. But if both price expectations and actual prices must be equilibrated for GE to obtain, the notion that “market-clearing” price adjustments are sufficient to achieve macroeconomic “equilibrium” is untenable.

34. Nevertheless, the idea that individual price expectations are rational (correct), so that, except for random shocks, continuous equilibrium is maintained, became the bedrock for New Classical macroeconomics and its New Keynesian and real-business cycle offshoots.

35. Macroeconomic theory has become a theory of dynamic intertemporal optimization subject to stochastic disturbances and market frictions that prevent or delay optimal adjustment to the disturbances, potentially allowing scope for countercyclical monetary or fiscal policies.

36. Given incomplete markets, the assumption of nearly continuous intertemporal equilibrium implies that agents correctly foresee future prices except when random shocks occur, whereupon agents revise expectations in line with the new information communicated by the shocks.
37. Modern macroeconomics replaced the Walrasian auctioneer with agents able to forecast the time path of all prices indefinitely into the future, except for intermittent unforeseen shocks that require agents to optimally revise their previous forecasts.
38. When new information or random events, requiring revision of previous expectations, occur, the new information becomes common knowledge and is processed and interpreted in the same way by all agents. Agents with rational expectations always share the same expectations.
39. So in modern macro, Arrow’s explanatory gap is filled by assuming that all agents, given their common knowledge, correctly anticipate current and future equilibrium prices, subject to unpredictable forecast errors that cause their expectations of future prices to change.
40. Equilibrium prices aren’t determined by an economic process or idealized market interactions of Walrasian tatonnement. Equilibrium prices are anticipated by agents, except after random changes in common knowledge. Semi-omniscient agents replace the Walrasian auctioneer.
41. Modern macro assumes that agents’ common knowledge enables them to form expectations that, until superseded by new knowledge, will be validated. The assumption is wrong, and the mistake is deeper than just the unrealism of perfect competition singled out by Arrow.
42. Assuming perfect competition, like assuming zero friction in physics, may be a reasonable simplification for some problems in economics, because the simplification renders an otherwise intractable problem tractable.
43. But to assume that agents’ common knowledge enables them to forecast future prices correctly transforms a model of decentralized decision-making into a model of central planning, with each agent possessing the knowledge that only an omniscient central planner could possess.
44. The rational-expectations assumption fills Arrow’s explanatory gap, but in a deeply unsatisfactory way. A better approach to filling the gap would be to acknowledge that agents have private knowledge (and theories) that they rely on in forming their expectations.
45. Agents’ expectations are – at least potentially, if not inevitably – inconsistent. Because expectations differ, it’s the expectations of market specialists, who are better-informed than non-specialists, that determine the prices at which most transactions occur.
46. Because price expectations differ even among specialists, prices, even in competitive markets, need not be uniform, so that observed price differences reflect expectational differences among specialists.
47. When market specialists have similar expectations about future prices, current prices will converge on the common expectation, with arbitrage tending to force transactions prices to converge notwithstanding the existence of expectational differences.
48. However, the knowledge advantage of market specialists over non-specialists is largely limited to their knowledge of the workings of, at most, a small number of related markets.
49. The perspective of specialists whose expectations govern the actual transactions prices in most markets is almost always a PE perspective from which potentially relevant developments in other markets and in macroeconomic conditions are largely excluded.
50. The interrelationships between markets that, according to the SMD theorem, preclude any price-adjustment algorithm from converging on the equilibrium price vector may also preclude market specialists from converging, even roughly, on the equilibrium price vector.
51. A strict equilibrium approach to business cycles, either real-business cycle or New Keynesian, requires outlandish assumptions about agents’ common knowledge and their capacity to anticipate the future prices upon which optimal production and consumption plans are based.
52. It is hard to imagine how, without those outlandish assumptions, the theoretical superstructure of real-business cycle theory, New Keynesian theory, or any other version of New Classical economics founded on the rational-expectations postulate can be salvaged.
53. The dominance of an untenable macroeconomic paradigm has tragically led modern macroeconomics into a theoretical dead end.
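
To restate schematically (in my notation, not Radner’s own formalism) the requirement referred to in item 31: in a sequence economy with incomplete markets, an equilibrium of plans, prices, and price expectations requires both that spot markets clear at each date, given agents’ price expectations,

$$z_t\left(p_t,\ p^{e}_{t+1}\right) = 0,$$

and that those expectations turn out to be correct, $p^{e}_{t+1} = p_{t+1}$. Absent some mechanism that equilibrates expectations, the second condition is simply assumed rather than explained, which is the gap to which items 32 and 33 point.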

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan

Follow me on Twitter @david_glasner
