Archive for the 'Uncategorized' Category

An Updated Version of my Paper “Robert Lucas and the Pretense of Science” Has Been Posted on SSRN

I have just submitted the paper to the European Journal of the History of Economic Thought. The updated version is not substantively different from the previous version, but I have cut some marginally relevant material and made what I hope are editorial improvements. Here’s a link to the new version.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4260708

Any comments, questions, criticisms or suggestions would be greatly appreciated.

I hope to post a revised version of my paper “Between Walras and Marshall: Menger’s Third Way” on SSRN within the next week or two. In my previous post I copied a revision of the section on Franklin Fisher’s important book Disequilibrium Foundations of Equilibrium Economics.

Franklin Fisher on the Disequilibrium Foundations of Economics and the Stability of General Equilibrium

As I’ve pointed out many times on this blog, equilibrium is an extremely important, but very problematic, concept in economic theory. What economists even mean when they talk about equilibrium is often unclear, and how the concept relates to the real world, as opposed to an imagined abstract world, is even less clear. Nevertheless, almost all the propositions that economists use in analyzing the world, in making either conditional or unconditional predictions about it, and in interpreting current or historical events are deduced from the theoretical analysis of equilibrium states.

Last year I wrote a paper for a conference marking the hundredth anniversary of Carl Menger’s death in 1921 and the 150th anniversary of the seminal work with which he, along with Jevons and Walras, launched what eventually became neoclassical economic theory. Here is a link to that paper. Of late I have been revising the paper, and I have now substantially rewritten (and I hope improved) one of its sections, discussing Franklin Fisher’s important work on the stability of general equilibrium, which I have been puzzling over and writing about for several years, e.g., here and here, as well as in chapter 17 of my book, Studies in the History of Monetary Theory: Controversies and Clarifications.

I’ve recently been revising that paper — one of a number of distractions that have prevented me from posting recently — and have substantially rewritten a couple of sections, especially section 7, on Fisher’s treatment of the stability of general equilibrium. Because I’m not totally sure that I’ve properly characterized Fisher’s own proof of stability, which rests on a different set of assumptions than the standard treatments of stability, I’m posting my new version of the section in hopes of eliciting feedback from readers. Here’s the new version of section 7 (not yet included in the SSRN version).

Unsuccessful attempts to prove, under standard neoclassical assumptions, the stability of general equilibrium led Franklin Fisher (1983) to suggest an alternative approach to proving stability. Fisher based his approach on three assumptions: (1) trading occurs at disequilibrium prices (in contrast to the standard assumption that no trading takes place until a new equilibrium is found, prices being adjusted under a tatonnement process); (2) in any disequilibrated market, the unsatisfied transactors are all on the same side of the market, either all unsatisfied demanders or all unsatisfied suppliers; (3) the “no favorable surprises” (NFS) assumption previously advanced by Hahn (1978).

At the starting point of a disequilibrium process, some commodities would be in excess demand, some in excess supply, and, perhaps, some in equilibrium. Let Z_i denote the excess demand for commodity i, with i ranging from 1 to n; let commodities in excess demand be numbered from 1 to k, commodities initially in equilibrium from k+1 to m, and commodities in excess supply from m+1 to n. Thus, by assumption, no agent has an excess supply of commodities numbered from 1 to k, no agent has an excess demand for commodities numbered from m+1 to n, and no agent has either an excess demand for, or an excess supply of, commodities numbered from k+1 to m.
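In symbols (my notation, simply restating the assumptions just described), the initial partition of markets is:

\[
Z_i > 0 \;\; (1 \le i \le k), \qquad Z_i = 0 \;\; (k+1 \le i \le m), \qquad Z_i < 0 \;\; (m+1 \le i \le n),
\]

with assumption (2) requiring that each individual agent’s excess demand for commodity i either be zero or have the same sign as the market excess demand Z_i.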

Fisher argued that, with prices rising in markets with excess demand, falling in markets with excess supply, and not changing in markets with zero excess demand, the sequence of adjustments would converge on an equilibrium price vector. Prices would rise in markets with excess demand and fall in markets with excess supply, because unsatisfied demanders and suppliers would seek to complete their frustrated transactions by offering to pay more for commodities in excess demand, or to accept less for commodities in excess supply, than currently posted prices. And insofar as those attempts were successful, arbitrage would cause all prices of commodities in excess demand to increase and all prices of commodities in excess supply to decrease.

Fisher then defined a function in which the actual utility of agents after trading would be subtracted from their expected utility before trading. For agents who succeed in executing planned purchases at the expected prices, the value of the function would be zero, but for agents unable to execute planned purchases at the expected prices, the value of the function would be positive, their realized utility being less than their expected utility, as agents with excess demands had to pay higher prices than they had expected and agents with excess supplies had to accept lower prices than expected. As prices of goods in excess demand rose and prices of goods in excess supply fell, the value of the function would fall until equilibrium was reached, thereby satisfying the stability condition for a Lyapunov function and confirming the stability of the disequilibrium arbitrage process.
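To make the logic concrete, here is a minimal numerical sketch (my own illustration, not Fisher’s model). Prices adjust in proportion to excess demand, and a candidate Lyapunov function is tracked along the way; for simplicity I use the sum of squared excess demands, rather than Fisher’s expected-minus-realized-utility function, since it plays the same formal role of being non-negative and zero only in equilibrium.

```python
import numpy as np

# A minimal numerical sketch (my illustration, not Fisher's model):
# prices adjust in proportion to excess demand, and we track a candidate
# Lyapunov function along the way. Fisher's own Lyapunov function is the
# sum over agents of expected minus realized utility; here, purely for
# illustration, the sum of squared excess demands plays the same formal
# role: it is non-negative and equals zero only in equilibrium.

rng = np.random.default_rng(42)
n = 5                                    # number of commodities
p_star = rng.uniform(1.0, 2.0, size=n)   # assumed equilibrium price vector

def excess_demand(p):
    # Hypothetical linear excess-demand system, chosen so that the
    # adjustment process is stable -- a best case, not a theorem.
    return p_star - p

p = rng.uniform(0.5, 3.0, size=n)        # initial disequilibrium prices
V_path = []
for _ in range(500):
    Z = excess_demand(p)
    V_path.append(float(Z @ Z))          # Lyapunov candidate: V >= 0
    p += 0.05 * Z                        # prices rise where Z > 0, fall where Z < 0

# V is non-increasing along the whole path, and prices converge:
assert all(a >= b for a, b in zip(V_path, V_path[1:]))
print(np.allclose(p, p_star, atol=1e-6))  # True
```

The stability here is baked in by the linear, one-market-at-a-time specification; the second-best feedback effects discussed below are precisely what such a specification assumes away.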

It may well be true that an economy of rational agents who understand that there is disequilibrium and act on arbitrage opportunities is driven toward equilibrium, but not if these agents continually perceive new, previously unanticipated opportunities for further arbitrage. The appearance of such new and unexpected opportunities will generally disturb the system until they are absorbed.

Such opportunities can be of different kinds. The most obvious sort is the appearance of unforeseen technological developments – the unanticipated development of new products or processes. There are other sorts of new opportunities as well. An unanticipated change in tastes or the development of new uses for old products is one; the discovery of new sources of raw materials another. Further, efficiency improvements in firms are not restricted to technological developments. The discovery of a more efficient mode of internal organization or of a better way of marketing can also present a new opportunity.

Because a favorable surprise during the adjustment process following the displacement of a prior equilibrium would potentially violate the stability condition that a Lyapunov function be non-increasing, the NFS assumption is needed for a proof that arbitrage of price differences leads to convergence on a new equilibrium. It is not, of course, only favorable surprises that can cause instability; inasmuch as the Lyapunov function must be non-negative as well as non-increasing, a sufficiently large unfavorable surprise would violate the non-negativity condition.[1] While listing several possible causes of favorable surprises that might prevent convergence, Fisher considered the assumption plausible enough to justify accepting stability as a working hypothesis for applied microeconomics and macroeconomics.
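Stated compactly (my notation, with E denoting the set of equilibrium states), the two conditions on a Lyapunov function V invoked here are:

\[
V(x) \ge 0, \quad V(x) = 0 \iff x \in E, \qquad \frac{d}{dt}\,V(x(t)) \le 0 .
\]

A favorable surprise produces a discrete upward jump in V, violating the second condition; a sufficiently large unfavorable surprise, as just noted, threatens the first.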

However, the NFS assumption suffers from two problems deeper than Fisher acknowledged. First, it reckons only with equilibrating adjustments in current prices without considering that equilibrating adjustments are also required in agents’ expectations of future prices, on which their plans for current and future transactions depend. Unlike the market feedback on current prices in current markets conveyed by unsatisfied demanders and suppliers, inconsistencies in agents’ notional plans for future transactions convey no discernible feedback, in an economic setting of incomplete markets, on their expectations of future prices. Without such feedback on expectations, a plausible account of how expectations of future prices are equilibrated cannot — except under implausibly extreme assumptions — easily be articulated.[2] Nor can the existence of a temporary equilibrium of current prices in current markets, beset by agents’ inconsistent and conflicting expectations, be taken for granted under standard assumptions. And even if a temporary equilibrium exists, it cannot, under standard assumptions, be shown to be optimal (Arrow and Hahn, 1971, 136-51).

Second, in Fisher’s account, price changes occur when transactors cannot execute their desired transactions at current prices, those price changes then creating arbitrage opportunities that induce further price changes. Fisher’s stability argument hinges on defining a Lyapunov function in which actual prices of goods in excess demand gradually rise to eliminate those excess demands and actual prices of goods in excess supply gradually fall to eliminate those excess supplies. But the argument works only if a price adjustment in one market caused by a previous excess demand or excess supply does not simultaneously create excess demands or supplies in markets not previously in disequilibrium, cause markets previously in excess demand to become markets in excess supply, or cause excess demands or excess supplies to increase rather than decrease.

To understand why Fisher’s ad hoc assumptions do not guarantee that the Lyapunov function he defined will be continuously non-increasing, it will be helpful to refer to the famous Lipsey and Lancaster (1956) second-best theorem. According to their theorem, if one optimality condition in an economic model cannot be satisfied because a relevant variable is constrained, the second-best solution, rather than satisfying the other, unconstrained, optimum conditions, involves revising at least some of those conditions to take account of the constraint.

Contrast Fisher’s statement of the No Favorable Surprise assumption with how Lipsey and Lancaster (1956, 11) described the import of their theorem.

From this theorem there follows the important negative corollary that there is no a priori way to judge as between various situations in which some of the Paretian optimum conditions are fulfilled while others are not. Specifically, it is not true that a situation in which more, but not all, of the optimum conditions are fulfilled is necessarily, or is even likely to be, superior to a situation in which fewer are fulfilled. It follows, therefore, that in a situation in which there exist many constraints which prevent the fulfilment of the Paretian optimum conditions the removal of any one constraint may affect welfare or efficiency either by raising it, by lowering it, or by leaving it unchanged.

The general theorem of the second best states that if one of the Paretian optimum conditions cannot be fulfilled a second-best optimum situation is achieved only by departing from all other optimum conditions. It is important to note that in general, nothing can be said about the direction or the magnitude of the secondary departures from optimum conditions made necessary by the original non-fulfillment of one condition.

Although Lipsey and Lancaster were not referring to the adjustment process that follows a displacement from a prior equilibrium, their discussion implies that the stability of an adjustment process depends on the specific sequence of adjustments in that process, inasmuch as each successive price adjustment, aside from its immediate effect on the particular market in which the price adjusts, transmits feedback effects to related markets. A price adjustment in one market may increase, decrease, or leave unchanged the efficiency of other markets, and the equilibrating tendency of a price adjustment in one market may be offset by indirect disequilibrating tendencies in other markets. When a price adjustment in one market indirectly reduces efficiency in other markets, the price adjustments that follow may well trigger yet further indirect efficiency reductions.

Thus, in adjustment processes involving interrelated markets, price changes in one market can cause favorable surprises in other markets in which prices are not already at their general-equilibrium levels, by indirectly causing net increases in utility through feedback effects on related markets.

Consider a macroeconomic equilibrium satisfying all optimality conditions between marginal rates of substitution in production and consumption and relative prices. If that equilibrium is subjected to a macroeconomic disturbance affecting all, or most, individual markets, thereby changing all the optimality conditions corresponding to the prior equilibrium, the new equilibrium will likely entail a different set of optimality conditions. While systemic optimality requires price adjustments to satisfy all the optimality conditions, actual price adjustments occur sequentially, in piecemeal fashion, market by market or firm by firm, as agents perceive changes in demand or cost. Those changes need not always induce equilibrating adjustments, nor is the arbitraging of price differences necessarily equilibrating when, under suboptimal conditions, prices have generally deviated from their equilibrium values.

Smithian invisible-hand theorems are of little relevance in explaining the transition to a new equilibrium following a macroeconomic disturbance, because, in this context, the invisible-hand theorem begs the relevant question by assuming that the equilibrium price vector has been found. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So, unless all optimality conditions are satisfied simultaneously, there is no assurance that piecemeal adjustments will bring the system closer to an optimal, or even a second-best, state.

If my interpretation of the NFS assumption is correct, Fisher’s stability results may provide support for Leijonhufvud’s (1973) suggestion that there is a corridor of stability around an equilibrium time path within which, under normal circumstances, an economy will not be displaced too far from that path, so that an economy, unless displaced outside that corridor, will revert, more or less on its own, to its equilibrium path.[3]

Leijonhufvud attributed such resilience to the holding of buffer stocks: inventories of goods, holdings of cash, and the availability of credit lines enabling agents to operate normally despite disappointed expectations. If negative surprises persist, agents cannot add to, or draw from, inventories indefinitely, or finance normal expenditures by borrowing or drawing down liquid assets. Once buffer stocks are exhausted, the stabilizing properties of the economy are overwhelmed by the destabilizing tendencies: income-constrained agents cut expenditures, as implied by the Keynesian multiplier analysis, triggering a cumulative contraction and rendering a spontaneous recovery, without compensatory fiscal or monetary measures, impossible.


[1] It was therefore incorrect for Fisher (1983, 88) to assert: “we can hope to show that the continued presence of new opportunities is a necessary condition for instability — for continued change,” inasmuch as continued negative surprises can also cause continued — or at least prolonged — change.

[2] Fisher does recognize (pp. 88-89) that changes in expectations can be destabilizing. However, he considers only the possibility of exogenous events that cause expectations to change; he does not consider the possibility that expectations may change endogenously, in a destabilizing fashion, in the course of an adjustment process following a displacement from a prior equilibrium. See, however, his discussion (p. 91):

How is . . . an [“exogenous”] shock to be distinguished from the “endogenous” shock brought about by adjustment to the original shock? No Favorable Surprise may not be precisely what is wanted as an assumption in this area, but it is quite difficult to see exactly how to refine it.

A proof of stability under No Favorable Surprise, then, seems quite desirable for a number of related reasons. First, it is the strongest version of an assumption of No Favorable Exogenous Surprise (whatever that may mean precisely); hence, if stability does not hold under No Favorable Surprise it cannot be expected to hold under the more interesting weaker assumption.  

[3] Presumably because income and output are maximized along the equilibrium path, an economy is unlikely to overshoot that path unless entrepreneurial or policy errors cause such overshooting, presumably an unlikely occurrence, although Austrian business-cycle theory, and perhaps certain other monetary business-cycle theories, suggest that such overshooting has not always been an uncommon event.

Robert Lucas and Real Business-Cycle Theory

In 1978 Robert Lucas and Thomas Sargent launched a famous attack on Keynes and Keynesian economics, which they viewed as having been discredited by the confluence of high inflation and high unemployment in the 1970s. They also expressed optimism that an equilibrium approach to business-cycle modeling would succeed in replicating reasonably well the observed time series of output and employment. In particular, they posited that a model subjected to an unexpected monetary shock, causing an immediate downturn from an equilibrium time path followed by a gradual reversion to that path, would capture the main stylized facts of historical business cycles. Their optimism was disappointed, because the model that Lucas had developed, based on an informational imperfection preventing agents from distinguishing immediately between real and nominal price changes, could not account for the typical multi-period duration of business-cycle downturns.

It was this empirical anomaly in Lucas’s monetary business-cycle model that prompted Kydland and Prescott to construct their real-business-cycle model. Lucas warmly welcomed their contribution, the abandonment of the monetary-theoretical motivation that Lucas had inherited from his academic training at Chicago being a small price to pay for the advancement of the larger research agenda derived from his methodological imperatives.

The real-business-cycle variant of the Lucasian research program rested on two empirical pillars: (1) the identification of technology shocks with deviations, as measured by the Solow residual, from the trend rate of increase in total factor productivity, positive residuals corresponding to positive shocks and negative residuals to negative shocks; and (2) estimates of the elasticity of intertemporal labor substitution.
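For reference, the Solow residual underlying pillar (1) is computed from the standard growth-accounting decomposition (assuming constant returns and competitive factor pricing, with α denoting capital’s share of income):

\[
\Delta \ln A_t \;=\; \Delta \ln Y_t \;-\; \alpha\, \Delta \ln K_t \;-\; (1-\alpha)\, \Delta \ln L_t ,
\]

where Y is output, K the capital stock, and L labor input; deviations of the residual from trend are what the theory identifies with technology shocks.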

Positive productivity shocks induce wage increases, and negative shocks induce wage decreases. Responding to wage movements presumed to be temporary, workers supply more labor when wages rise faster than trend and less labor when wages rise more slowly than trend. The higher the elasticity of intertemporal labor substitution, the greater the supply response to a given deviation of actual wages from the expected trend rate of increase. Real-business-cycle theorists used calibration techniques to obtain estimates of labor-supply elasticities from microeconomic studies.
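Schematically, the mechanism just described can be written as a stylized log-linear supply rule (my illustration, not a formula RBC theorists use in exactly this form):

\[
\ln L_t \;=\; \mathrm{const} \;+\; \eta \left( \ln w_t - \ln \bar{w}_t \right),
\]

where w_t is the current real wage, \bar{w}_t its expected trend value, and η the elasticity of intertemporal labor substitution: the larger is η, the larger the labor-supply response to a wage deviation perceived as temporary.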

The real-business-cycle variant of the Lucasian research program embraced all the dubious methodological precepts of its parent while adding further dubious practices of its own. Most problematic, of course, is the methodological insistence that equilibrium is necessarily and continuously maintained, which is possible only if all agents correctly anticipate future prices and wages. If equilibrium is not continuously maintained, then Solow residuals may capture not productivity shocks, but, depending on their sign, either movements away from, or toward, equilibrium. In disequilibrium, labor and capital may be held idle by firms in anticipation of subsequent increases in output, so that measured productivity does not reflect the state of technology, but the inherent inefficiency of unemployment resulting from coordination failure, a contingency explicitly deemed by Lucasian methodology to be off limits.

Such ad hocery is generally frowned upon by scientists. Ad hoc assumptions are not always unscientific or unproductive, as famously exemplified by the discovery of Neptune. But in the latter case, the ad hoc assumption was subject to empirical testing; Neptune might not have been there waiting to be discovered. But no independent test of the presence or absence of a technology shock, aside from the Solow residual itself, is available. Even this situation might be tolerable, if Lucasian methodology permitted one to inquire whether the world or an economy might not be in an equilibrium state. But Lucasian methodology forbids such an inquiry.

The use of calibration to estimate intertemporal labor-supply elasticities from microeconomic studies is also extremely dubious, because microeconomic estimates of labor-supply elasticities are typically made under conditions approximating equilibrium, when workers have some flexibility in choosing whether to work more or less in the present or in the future. Those are not the conditions in which workers find themselves in periods of high aggregate unemployment, when they are not confident that they will retain their jobs in the present and near future, or, if they lose their jobs, that they will succeed in finding another job at an acceptable wage. The calibrated estimates of labor-supply elasticity are, for exactly the reasons identified in the Lucas Critique, unreliable for use in replicating time series.

An early real-business-cycle theorist, Charles Plosser (“Understanding Real Business Cycles”), responded to criticisms of RBC techniques as follows:

If the measured technological shocks are poor estimates (that is, they are confounded by other factors such as “demand” shocks, preference shocks or change in government policies, and so on) then feeding these values into our real business cycle model should result in poor predictions for the behavior of consumption, investment, hours worked, wages and output.

Plosser’s response ignores the question-begging nature of the RBC model; the supposed productivity shocks that cause cyclical fluctuations in the model are identified by the very time series that the model purports to explain. Nor does calibration provide clear and unambiguous estimates that the modeler can transfer without exercising discretion about which studies and which values to insert into an RBC model. Plosser’s defense of RBC is not so very different from the sort of defense made on behalf of the highly accurate epicyclical replications of observed planetary movements, replications that were based largely on the ingenuity and diligence of the epicyclist.

Eventually, the methodological prohibitions against heliocentrism were overcome. Perhaps, one day, the methodological prohibitions against non-reductionist macroeconomic theories will also be overcome.

Lucasian macroeconomics gained not only ascendance, but dominance, on the basis of  conceptual and methodological misunderstandings. The continued dominance of the offspring of the early Lucasian theories has been portrayed as a scientific advance by Lucas and his followers. In fact, the theories and the supposed methodological imperatives by which they have been justified are scientifically suspect because they rely on circular, question-begging arguments and reject alternative theories based on specious reductionist arguments.

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for Keynes’s abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and still does not, describe in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not-too-realistic stories about how markets might clear, of which the two best-known are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good, correctly anticipated by market participants and corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presumes that a general equilibrium of all other input and output prices has been reached, so the Marshallian story is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his or her plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert, rather loudly, that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of a process whereby it might, or could be, achieved, Lucas and Sargent tried to normalize their insistence that equilibrium is an obligatory postulate by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of a theoretical or methodological assumption were ipso facto justified by its routineness. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. If equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium supports the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents would not necessarily cause the aggregate excess-demand functions to behave in such a way that a unique or a stable equilibrium exists. But if we have no good argument to explain why a unique, or at least a stable, neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?

As Franklin Fisher (1989) showed, inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), comparative statics exercises, is conditional on the assumption that there is at least one stable general equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Kepler and later Galileo conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system and proposing that the orbits of the planets were elliptical, not circular. For a long time, however, geocentric predictions of the observed time series outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric ones, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement of formal intertemporal-optimization models as the only acceptable form of macroeconomic theorizing, based on a preposterous methodological imperative lacking scientific or philosophical justification, has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: about whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or of coordination failures that cause agents to be surprised, disappointed, and unable to carry out their plans in the manner in which they had hoped and expected. It’s long past time for this dispute about reality to be joined openly with the seriousness it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes himself never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly denied that his results depended on the assumption of sticky wages, and having argued the point at length, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Although RATEX Must Be a Contingent Property of any Economic Model, RATEX Is an Unlikely Property of Reality

Without searching through my old posts, I’m confident that I’ve already made this point many times in passing, but I just want to restate it up front — highlighted and underscored. Any economic model must satisfy the following rational-expectations condition:

If the agents in the model expect the equilibrium outcome of the model (or, if there are multiple equilibrium outcomes, they all expect the same one of those equilibrium outcomes), that expected equilibrium outcome will be realized.

When the agents in an economic model all expect the equilibrium outcome of the model, the agents may be said to have rational expectations, and those rational expectations are self-fulfilling. Any economic model that lacks this contingent RATEX property is incoherent. But unless an economic model provides a theory of expectation formation whereby the agents all form correct expectations of the equilibrium outcome of the model, RATEX is a merely contingent, not an essential, property of the model.
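In fixed-point terms (my notation), let f map the outcome agents expect into the outcome the model then generates. The contingent RATEX property asserts only that an equilibrium x* satisfies:

\[
f(x^{*}) = x^{*},
\]

that is, expecting x* is self-fulfilling. It does not assert that the mapping from past outcomes to expectations converges to x*, which is precisely the missing theory of expectation formation.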

Although an actual expectation-formation theory of rational expectations has never, to my knowledge, been derived from plausible assumptions, the RATEX assumption is disingenuously insisted upon as a property of rational decision-making implied by neoclassical theory. Such tyrannizing methodological intimidation is groundless and entails the reductio ad absurdum of the Milgrom and Stokey No-Trade Theorem.

Supply Shocks and the Summer of our Inflation Discontent

This post started out as a short Twitter thread discussing the role of supply shocks in our current burst of inflation. The thread was triggered by Skanda Amarnath’s tweet arguing that, within a traditional aggregate-demand/aggregate-supply framework, a negative supply shock would have an effect sufficiently inflationary to cause the rate of NGDP growth to rise, even with an unchanged monetary policy, if the aggregate-demand curve is highly inelastic.

Skanda received some pushback on his contention from those, e.g., George Selgin, who dismissed the assumption of an inelastic aggregate demand as an implausible explanation of recent experience.


Without weighing in on the plausibility of the inelastic-aggregate-demand assumption (I am not very enamored of the aggregate-demand/aggregate-supply paradigm, which strikes me as a mishmash of inconsistent partial-equilibrium and general-equilibrium reasoning based on a static model with inflationary expectations uneasily attached), I offered the following alternative account of our recent inflationary experience.

There were two supply shocks. The first was the pandemic, in 2020-21. That was followed, starting in late 2021, by the prelude to Putin’s war, which sent oil prices up from about $50/barrel in early 2021 to nearly $100/barrel by the eve of the invasion.

The first supply shock required income support for basic consumption during the pandemic, resulting in a buildup of purchasing power in the form of cash balances or other liquid assets for which there was no immediate outlet while the pandemic lasted.

The buildup of unused purchasing power implied that the end of the pandemic would involve a positive but transitory shock to aggregate demand when the economy (production and consumption patterns) returned to normal as the limitations imposed by the pandemic began to ease.

The alternative to allowing the positive but transitory shock to aggregate demand would have been to adopt a restrictive policy as the pandemic was easing, which made neither economic nor political sense. The optimal policy was to accept temporary inflation during the recovery, rather than impose a deflationary policy to suppress transitory inflation.

The transitory inflation was exacerbated by various supply bottlenecks and shortages of workers and other productive resources, which reflected the difficulties of ramping up production quickly after lengthy production shutdowns or curtailments during the height of the pandemic.

These transitory difficulties would likely have worked themselves out by the end of 2021 had it not been for the second supply shock, associated with the months-long buildup to Putin’s war, anticipated long before the war actually started in February 2022, which caused a second increase in inflation just when the first burst of inflation in the second half of 2021 would have tapered off.

No doubt, it would have been better for the Fed to have started tightening earlier, so as to keep NGDP from increasing so rapidly at the end of 2021 and the start of 2022, but the scare talk about unanchoring inflation expectations has been overdone.

Financial markets clearly reflect expectations that the Fed is going to rein in aggregate demand, so that the excess growth in NGDP in 2021 will have little long-term effect. Even with the continuing potential for Putin’s war to cause further supply disruptions with short-term inflationary effects, the current and likely future conditions seem far better than the result that would have been produced by the Volcker 2.0 policy for which Larry Summers et al. are still pining.

Summer 2008 Redux?

Nearly 14 years ago, in the summer of 2008, as a recession that started late in 2007 was rapidly deepening and unemployment rapidly rising, the Fed, mainly concerned about rising headline inflation fueled by record-breaking oil prices, kept its Fed Funds target at the 2% level set in April (slightly reduced from the 2.25% target set in March), lest inflation expectations become unanchored.

Let’s look at what happened after the Fed Funds target was reduced to 2.25% in March 2008. The price of crude oil (West Texas Intermediate) rose by nearly 50% between March and July, causing CPI inflation (year over year) between March and August to increase from 4% to 5.5%, even as unemployment rose from 5.1% in March to 5.8% in July. The PCE index, closely watched by the Fed as more indicative of underlying inflation than the CPI, showed inflation rising even faster than did the CPI.

Not only did the Fed refuse to counter rising unemployment and declining income and output by reducing its Fed Funds target, it made clear that reducing inflation was a more urgent goal than countering economic contraction and rising unemployment. An unchanged Fed Funds target while income and employment are falling, in effect, tightens monetary policy, a point underscored by the Fed as it emphasized its intent, despite the uptick in inflation caused by rising oil prices, to keep inflation expectations anchored.

The passive tightening of monetary policy associated with an unchanged Fed Funds target while income and employment were falling and the price of oil was rising led to a nearly 15% decline in the price of oil between mid-July and the end of August, and to a concurrent 10% increase in the dollar exchange rate against the euro, a deflationary trend also reflected in an increase in the unemployment rate to 6.1% in August.

Evidently pleased with the deflationary impact of its passive tightening of monetary policy, the Fed viewed the falling price of oil and the appreciation of the dollar as an implicit endorsement by the markets, notwithstanding a deepening recession in a financially fragile economy, of its hard line on inflation. With major financial institutions weakened by the aftereffects of bad and sometimes fraudulent investments made in the expectation of rising home prices that then began falling, many debtors (both households and businesses) had neither sufficient cash flow nor sufficient credit to meet their debt obligations. When the Lehman Brothers investment bank, heavily invested in subprime mortgages, was on the verge of collapse in the second week of September, the Fed, perhaps emboldened by the perceived market approval of its anti-inflation hard line, refused to provide, or arrange for, emergency financing to enable Lehman to meet obligations coming due, triggering a financial panic stoked by fears that other institutions were at risk and causing an almost immediate freeze-up of credit facilities in financial centers in the US and around the world. The rest is history.

Why bring up this history now? I do so, because I see troubling parallels between what happened in 2008 and what is happening now, parallels that make me concerned that a too narrow focus on preventing inflation expectations from being unanchored could lead to unpleasant and unnecessary consequences.

First, in 2008, the WTI price of oil rose by nearly 50% between March and July, while in 2021-22 the WTI oil price rose by over 75% between December 2021 and April 2022. Both episodes of rising oil prices clearly depressed real GDP growth. Second, in both 2008 and 2021-22, the rising oil price caused actual, and, very likely, expected rates of inflation to rise. Third, in 2008, the dollar appreciated from $1.59/euro on July 15 to $1.39/euro on September 12, while, in 2022, the dollar has appreciated from $1.14/euro on February 11 to $1.05/euro on April 29.

In 2008, an inflationary burst, fed in part by rapidly rising oil prices, led to a passive tightening of monetary policy, manifested in dollar appreciation in forex markets, plunging an economy, burdened with a fragile financial system carrying overvalued assets, and already in recession, into a financial crisis. This time, even steeper increases in oil prices, having fueled an initial burst of inflation during the recovery from a pandemic/supply-side recession, were later reinforced by further negative supply shocks stemming from Russia’s invasion of Ukraine. The complex effects of both negative supply-shocks and excess aggregate demand have caused monetary policy to shift from ease to restraint, once again manifested in dollar appreciation in foreign-exchange markets.

In September 2008, the Fed, focused narrowly on inflation, was oblivious to the looming financial crisis as deflationary forces, amplified by the passive monetary tightening of the preceding two months, were gathering. This time, although monetary tightening to rein in excess aggregate demand is undoubtedly appropriate, signs of ebbing inflationary pressure are multiplying, and many forecasters are predicting that inflation will subside to 4% or less by year’s end. Modest further tightening to reduce aggregate demand to a level consistent with a 2% inflation rate might be appropriate, but the watchword for policymakers now should be caution.

While there is little reason to think that the US economy and financial system are now in as precarious a state as they were in the summer of 2008, a decision to raise the target Fed Funds rate by more than 50 basis points as a demonstration of the Fed’s resolve to hold the line on inflation would certainly be ill-advised, and an increase of more than 25 basis points would now be imprudent.

The preliminary report on first-quarter 2022 GDP presented a mixed picture of the economy. A small drop in real GDP seems like an artefact of technical factors, and an upward revision seems likely, with no evidence yet of declining employment or slack in the labor market. While nominal GDP growth declined substantially in the first quarter from the double-digit growth rate in 2021, it remains above the rate consistent with the 2% inflation rate that is the Fed’s policy target. However, given the continuing risks of further negative supply-side shocks while the war in Ukraine continues, the Fed should not allow the nominal growth rate of GDP to fall below the 5% rate that ought to remain the short-term target under current conditions.

If the Fed is committed to a policy target of 2% average inflation over a suitably long time horizon, the rate of nominal GDP growth need not fall below 5% before normal peacetime economic conditions have been restored. Until a return to normalcy, avoiding the risk of reducing nominal GDP growth below a 5% rate should have priority over quickly reducing inflation to the targeted long-run average rate. To do otherwise would increase the risk that inadvertent policy mistakes in an uncertain economic environment might cause sufficient financial distress to tip the economy into recession and even another financial crisis. Better safe than sorry.

Why I’m not Apologizing for Calling Recent Inflation Transitory

I’ve written three recent blogposts explaining why the inflation that began accelerating in the second half of 2021 was likely to be transitory (High Inflation Anxiety, Sic Transit Inflatio del Mundi, and Wherein I Try to Calm Professor Blanchard’s Nerves). I didn’t deny that inflation was accelerating and likely required a policy adjustment, but I also didn’t accept that the inflation threat was (or is) as urgent as some, notably Larry Summers, were suggesting.

In my two posts in late 2021, I argued that Summers’s concerns were overblown, because the burst of inflation in the second half of 2021 was caused mainly by increased consumer spending as consumers began drawing down the cash and liquid assets accumulated when spending outlets had been unavailable, and was exacerbated by supply bottlenecks that kept output from accommodating increased consumer demand. Beyond that, despite rising expectations at the short end, I minimized concerns about the unanchoring of inflation expectations owing to the inflationary burst in the second half of 2021, in the absence of any signs of rising inflation expectations in longer-term (5 years or more) bond prices.

Aside from criticizing excessive concern with what I viewed as a transitory burst of inflation not entirely caused by expansive monetary policy, I cautioned against reacting to inflation caused by negative supply shocks. In contrast to Summers’s warnings about the lessons of the 1970s when high inflation became entrenched before finally being broken — at the cost of the worst recession since the Great Depression, by Volcker’s anti-inflation policy — I explained that much of 1970s inflation was caused by supply-side oil shocks, which triggered an unnecessarily severe monetary tightening in 1974-75 and a deep recession that only modestly reduced inflation. Most of the decline in inflation following the oil shock occurred during the 1976 expansion when inflation fell to 5%. But, rather than allow a strong recovery to proceed on its own, the incoming Carter Administration and a compliant Fed, attempting to accelerate the restoration of full employment, increased monetary expansion. (It’s noteworthy that much of the high unemployment at the time reflected the entry of baby-boomers and women into the labor force, one of the few occasions in which an increased natural rate of unemployment can be easily identified.)

The 1977-79 monetary expansion caused inflation to accelerate to the high single digits even before the oil shocks of 1979-80 led to double-digit inflation, setting the stage for Volcker’s brutal disinflationary campaign in 1981-82. But the mistake of tightening monetary policy to suppress inflation resulting from negative supply shocks (usually associated with rising oil prices) went unacknowledged, the only lesson learned, albeit mistakenly, being that high inflation can be reduced only by a monetary tightening sufficient to cause a deep recession.

Because of that mistaken lesson, the Fed, focused solely on the danger of unanchored inflation expectations, resisted pleas in the summer of 2008 to ease monetary policy as the economy was contracting and unemployment rising rapidly, until October, a month after the start of the financial crisis. That disastrous misjudgment made me doubt the arguments of Larry Summers et al. that tight money is required to counter inflation and prevent the unanchoring of inflation expectations, recent inflation being largely attributable, like the inflation blip in 2008, to negative supply shocks, with little evidence that inflation expectations had become, or were likely to become, unanchored.

My first two responses to inflation hawks were written before the release of the fourth-quarter 2021 GDP report. In the first three quarters, nominal GDP grew by 10.9%, 13.4% and 8.4%. My hope was that the Q4 rate of increase in nominal GDP would show a further decline from the Q3 rate, or at least no increase. The rising trend of inflation in the final months of 2021, with no evidence of a slowdown in economic activity, made it likely that nominal GDP growth in Q4 had accelerated. In the event, the acceleration of nominal GDP growth to 14.5% in Q4 showed that a tightening of monetary policy had become necessary.

Although a tightening of policy was clearly required to reduce the rate of nominal GDP growth, there was still reason for optimism that the negative supply-side shocks that had amplified inflationary pressure would recede, thereby allowing nominal GDP growth to slow down with no contraction in output and employment. Unfortunately, the economic environment deteriorated drastically in the latter part of 2021 as Russia began the buildup to its invasion of Ukraine, and deteriorated even more once the invasion started.

The price of Brent crude, just over $50/barrel in January 2021, rose to over $80/barrel in November of 2021. Tensions between Russia and Ukraine rose steadily during 2021, so it is not easy to determine the extent to which those increasing tensions were causing oil prices to rise and to what extent they rose because of increasing economic activity and inflationary pressure on oil prices. Brent crude fell to $70 in December before rising to $100/barrel in February on the eve of the invasion, briefly reaching $130/barrel shortly thereafter, before falling back to $100/barrel. Aside from the effect on energy prices, generalized uncertainty and potential effects on wheat prices and the federal budget from a drawn-out conflict in Ukraine have caused inflation expectations to increase.

Under these circumstances, it makes little sense to tighten policy suddenly. The appropriate policy strategy is to lean toward restraint and announce that the aim of policy is to reduce the rate of GDP growth gradually until a sustainable 4-5% rate of nominal GDP growth consistent with an inflation rate of about 2-3% a year is reached. The overnight rate of interest being the primary instrument whereby the Fed can either increase or decrease the rate of nominal GDP growth, it is unnecessary, and probably unwise, for the Fed to announce in advance a path of interest-rate increases. Instead, the Fed should communicate its target range for nominal GDP growth and condition the size and frequency of future rate increases on the deviations of the economy from that targeted growth path of nominal GDP.
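To illustrate the kind of conditional policy I have in mind, here is a minimal sketch of such a feedback rule. The 4-5% target range comes from the text; the 0.5 response coefficient and the function itself are illustrative assumptions of mine, not a proposal for specific parameter values:

```python
# A sketch of the conditional policy strategy described above: adjust the
# overnight rate in proportion to deviations of nominal GDP growth from the
# announced target range, instead of pre-announcing a path of rate increases.
# The target range matches the 4-5% figure in the text; the 0.5 response
# coefficient and the function itself are illustrative assumptions.

def policy_rate_adjustment(ngdp_growth, target_low=4.0, target_high=5.0,
                           response_coefficient=0.5):
    """Suggested change in the overnight rate, in percentage points."""
    if ngdp_growth > target_high:
        return response_coefficient * (ngdp_growth - target_high)  # tighten
    if ngdp_growth < target_low:
        return response_coefficient * (ngdp_growth - target_low)   # ease
    return 0.0  # within the target range: leave the rate unchanged

# Q4 2021 nominal GDP growth of 14.5% calls for tightening; growth below
# the 4% floor would call for easing.
print(policy_rate_adjustment(14.5))  # 4.75 -> raise the overnight rate
print(policy_rate_adjustment(3.0))   # -0.5 -> lower the overnight rate
```

The point of the sketch is that the instrument responds to realized deviations from the nominal GDP growth path, not to a pre-announced schedule of rate increases.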

For more than half a century, monetary-policy mistakes that caused either recessions or excessive inflation have resulted from using interest rates or some other policy instrument to control inflation or unemployment directly rather than to moderate deviations from a stable growth path of nominal GDP. Attempts to reduce inflation by maintaining or increasing already high interest rates until inflation actually fell have needlessly and perversely prolonged and deepened recessions. Monetary conditions ought to be eased as soon as nominal GDP growth falls below the target range for nominal GDP growth. Inflation automatically tends to fall in the early stages of recovery from a recession, and nothing is gained, and much harm is done, by maintaining a tight-money policy after nominal GDP growth has fallen below the target range. That's the great, and still unlearned, lesson of monetary policy.

On the Labor Supply Function

The bread and butter of economics is demand and supply. The basic idea of a demand function (or a demand curve) is to describe a relationship between the price at which a given product, commodity or service can be bought and the quantity that will be bought by some individual. The standard assumption is that the quantity demanded increases as the price falls, so that the demand curve is downward-sloping, but not much more can be said about the shape of a demand curve unless special assumptions are made about the individual's preferences.

Demand curves aren’t natural phenomena with concrete existence; they are hypothetical or notional constructs pertaining to individual preferences. To pass from individual demands to a market demand for a product, commodity or service requires another conceptual process summing the quantities demanded by each individual at any given price. The conceptual process is never actually performed, so the downward-sloping market demand curve is just presumed, not observed as a fact of nature.

The summation process required to pass from individual demands to a market demand implies that the quantity demanded at any price is the quantity demanded when each individual pays exactly the same price that every other demander pays. At a price of $10/widget, the widget demand curve tells us how many widgets would be purchased if every purchaser in the market can buy as much as desired at $10/widget. If some customers can buy at $10/widget while others have to pay $20/widget or some can’t buy any widgets at any price, then the quantity of widgets actually bought will not equal the quantity on the hypothetical widget demand curve corresponding to $10/widget.
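To make the summation concrete, here is a minimal sketch in which three arbitrary individual demand functions of my own invention stand in for unobservable preferences:

```python
# A minimal illustration of the conceptual summation described above: market
# demand at any price is the sum of the quantities each individual would buy
# at that same uniform price. The three individual demand functions are
# arbitrary inventions standing in for unobservable preferences.

individual_demands = [
    lambda p: max(0.0, 10 - 0.5 * p),   # demander 1
    lambda p: max(0.0, 6 - 0.2 * p),    # demander 2
    lambda p: max(0.0, 15 - 1.0 * p),   # demander 3
]

def market_demand(price):
    """Total quantity demanded when every buyer faces the same price."""
    return sum(d(price) for d in individual_demands)

print(market_demand(10))  # widgets demanded if everyone can buy at $10/widget
# If some buyers instead faced $20/widget, or were rationed, actual purchases
# would no longer lie on this hypothetical market demand curve.
```

The single `price` argument passed to every function is exactly the uniform-price assumption: the construct has no way to represent some buyers paying $10 and others $20.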

Similar reasoning underlies the supply function or supply curve for any product, commodity or service. The market supply curve is built up from the preferences and costs of individuals and firms and represents the amounts of a product, commodity or service that they would be willing to offer for sale at different prices. The market supply curve is the result of a conceptual summation process that adds up the amounts that would hypothetically be offered for sale by every agent at each price.

The point of this pedantry is to emphasize that the demand and supply curves we use are drawn on the assumption that a single uniform price prevails in every market, that all demanders and suppliers can trade without limit at those prices, and that their trading plans are fully executed. This is the equilibrium paradigm underlying the supply-demand analysis of econ 101.

Economists quite unself-consciously deploy supply-demand concepts to analyze labor markets in a variety of settings. Sometimes, if the labor market under analysis is limited to a particular trade or a particular skill or a particular geographic area, the supply-demand framework is reasonable and appropriate. But when applied to the aggregate labor market of the whole economy, the supply-demand framework is inappropriate, because the ceteris-paribus proviso (all prices other than the price of the product, commodity or service in question are held constant) attached to every supply-demand model is obviously violated.

Thoughtlessly applying a simple supply-demand model to the labor market of an entire economy leads to the conclusion that widespread unemployment, a situation in which some workers are unemployed but would have accepted employment at the wages that comparably skilled workers are actually receiving, implies that wages are above the market-clearing level consistent with full employment.

The attached diagram depicts the simplest version of this analysis. The market wage (W1) is higher than the equilibrium wage (We) at which all workers willing to accept that wage could be employed. The difference between the number of workers seeking employment at the market wage (LS) and the number of workers that employers seek to hire (LD) measures the amount of unemployment. According to this analysis, unemployment would be eliminated if the market wage fell from W1 to We.
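For readers who prefer numbers to a diagram, here is a small numeric sketch of the same analysis; the linear schedules and their coefficients are purely my own illustrative assumptions:

```python
# A numeric version of the diagram described above: linear labor supply and
# demand schedules (the coefficients are purely illustrative), an equilibrium
# wage We, and measured unemployment LS - LD when the market wage W1 stays
# above We.

def labor_supply(w):
    return 100 + 10 * w      # workers seeking employment at wage w

def labor_demand(w):
    return 220 - 14 * w      # workers employers seek to hire at wage w

w_e = 5.0   # equilibrium: 100 + 10w = 220 - 14w  =>  w = 5
w_1 = 6.0   # a market wage held above the equilibrium wage

print(f"At We = {w_e}: LS = {labor_supply(w_e)}, LD = {labor_demand(w_e)}")
print(f"At W1 = {w_1}: LS = {labor_supply(w_1)}, LD = {labor_demand(w_1)}, "
      f"unemployment = {labor_supply(w_1) - labor_demand(w_1)}")
```

At We the two schedules intersect (150 = 150); at W1 the gap of 24 workers is the measured unemployment the diagram attributes to the above-equilibrium wage.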

Applying supply-demand analysis to aggregate unemployment fails on two levels. First, workers clearly are unable to execute their plans to offer their labor services at the wage at which other workers are employed, so individual workers are off their supply curves. Second, it is impossible to assume, as supply-demand analysis requires, that all other prices and incomes remain constant, so that the demand and supply curves do not shift as wages and employment change. When multiple variables are mutually interdependent and simultaneously determined, the analysis of just two of them (wages and employment) cannot be isolated from the rest of the system. Focusing on the wage as the variable that needs to change to restore full employment is an example of tunnel vision.

Keynes rejected the idea that economy-wide unemployment could be eliminated by cutting wages. Although Keynes’s argument against wage cuts as a cure for unemployment was flawed, he did have at least an intuitive grasp of the basic weakness in the argument for wage cuts: that high aggregate unemployment is not usefully analyzed as a symptom of excessive wages. To explain why wage cuts aren’t the cure for high unemployment, Keynes introduced a distinction between voluntary and involuntary unemployment.

Forty years later, Robert Lucas began his effort — not the first such effort, but by far the most successful — to discredit the concept of involuntary unemployment. Here’s an early example:

Keynes [hypothesized] that measured unemployment can be decomposed into two distinct components: ‘voluntary’ (or frictional) and ‘involuntary’, with full employment then identified as the level prevailing when involuntary unemployment equals zero. It seems appropriate, then, to begin by reviewing Keynes’ reasons for introducing this distinction in the first place. . . .

Accepting the necessity of a distinction between explanations for normal and cyclical unemployment does not, however, compel one to identify the first as voluntary and the second as involuntary, as Keynes goes on to do. This terminology suggests that the key to the distinction lies in some difference in the way two different types of unemployment are perceived by workers. Now in the first place, the distinction we are after concerns sources of unemployment, not differentiated types. . . .[O]ne may classify motives for holding money without imagining that anyone can subdivide his own cash holdings into “transactions balances,” “precautionary balances”, and so forth. The recognition that one needs to distinguish among sources of unemployment does not in any way imply that one needs to distinguish among types.

Nor is there any evident reason why one would want to draw this distinction. Certainly the more one thinks about the decision problem facing individual workers and firms the less sense this distinction makes. The worker who loses a good job in prosperous times does not volunteer to be in this situation: he has suffered a capital loss. Similarly, the firm which loses an experienced employee in depressed times suffers an undesirable capital loss. Nevertheless, the unemployed worker at any time can always find some job at once, and a firm can always fill a vacancy instantaneously. That neither typically does so by choice is not difficult to understand given the quality of the jobs and the employees which are easiest to find. Thus there is an involuntary element in all unemployment, in the sense that no one chooses bad luck over good; there is also a voluntary element in all unemployment, in the sense that however miserable one’s current work options, one can always choose to accept them.

Lucas, Studies in Business Cycle Theory, pp. 241-43

Consider this revision of Lucas’s argument:

The expressway driver who is slowed down in a traffic jam does not volunteer to be in this situation; he has suffered a waste of his time. Nevertheless, the driver can get off the expressway at the next exit to find an alternate route. Thus, there is an involuntary element in every traffic jam, in the sense that no one chooses to waste time; there is also a voluntary element in all traffic jams, in the sense that however stuck one is in traffic, one can always take the next exit on the expressway.

What is lost on Lucas is that, for an individual worker, taking a wage cut to avoid being laid off accomplishes nothing, because the willingness of a single worker to accept a wage cut would not induce the employer to increase output and employment. Unless all workers agreed to take wage cuts, a wage cut accepted by one employee would not cause the employer to reconsider its plan to reduce output in the face of declining demand for its product. Only a collective offer by all workers to accept a wage cut would induce an output response by the employer and a decision not to lay off part of its work force.

But even a collective offer by all workers to accept a wage cut would be unlikely to avoid an output reduction and layoffs. Consider a simple case in which the demand for the employer’s output declines by a third. Suppose the employer’s marginal cost of output is half the selling price (implying a demand elasticity of -2). Assume that demand is linear. With no change in its marginal cost, the firm would reduce output by a third, presumably laying off up to a third of its employees. Could workers avoid the layoffs by accepting lower wages to enable the firm to reduce its price? Or asked in another way, how much would marginal cost have to fall for the firm not to reduce output after the demand reduction?

Working out the algebra, one finds that for the firm to keep producing as much after a one-third reduction in demand, the firm’s marginal cost would have to fall by at least two-thirds, a decline that could only be achieved by a radical reduction in labor costs. This is surely an oversimplified view of the alternatives available to workers and employers, but the point is that workers facing a layoff after the demand for the product they produce declines have almost no ability to remain employed even by collectively accepting a wage cut.
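Since the conclusion rests on “working out the algebra,” here is a sketch of that algebra under one reading of the assumptions, in which “demand declines by a third” means that quantity demanded at each price falls by a third (the reading that reproduces the one-third output reduction above) and the firm equates marginal revenue to marginal cost; the notation is mine:

```latex
% Inverse demand P = A - BQ; constant marginal cost c; the firm sets MR = MC.
\begin{align*}
  &\text{Initial optimum: } A - 2BQ = c,\quad c = \tfrac{1}{2}P
    \;\Longrightarrow\; c = \tfrac{A}{3},\; P_0 = \tfrac{2A}{3},\;
    Q_0 = \tfrac{A}{3B}\;\text{(elasticity $-2$).}\\[4pt]
  &\text{Demand falls by a third: } Q(P) = \tfrac{2}{3}\cdot\tfrac{A-P}{B}
    \;\Longrightarrow\; P = A - \tfrac{3B}{2}Q,\quad MR = A - 3BQ.\\[4pt]
  &\text{With $c$ unchanged: } A - 3BQ_1 = \tfrac{A}{3}
    \;\Longrightarrow\; Q_1 = \tfrac{2A}{9B} = \tfrac{2}{3}Q_0
    \;\text{(output falls by a third).}\\[4pt]
  &\text{To keep } Q = Q_0:\quad c' = A - 3B\cdot\tfrac{A}{3B} = 0.
\end{align*}
```

On this particular reading, the required fall in marginal cost is even larger than two-thirds: marginal revenue at the original output on the reduced demand curve is zero, so marginal cost would have to fall essentially to zero, which only strengthens the conclusion that wage cuts alone cannot forestall layoffs.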

That conclusion applies a fortiori when decisions whether to accept a wage cut are left to individual workers, because the willingness of workers individually to accept a wage cut is irrelevant to their chances of retaining their jobs. Being laid off because of a decline in the demand for the product a worker produces is a very different situation from being laid off because the worker’s employer is shifting to a new technology for which the worker lacks the requisite skills, so that the worker can remain employed only by accepting reassignment to a lower-paying job.

Let’s follow Lucas a bit further:

Keynes, in chapter 2, deals with the situation facing an individual unemployed worker by evasion and wordplay only. Sentences like “more labor would, as a rule, be forthcoming at the existing money wage if it were demanded” are used again and again as though, from the point of view of a jobless worker, it is unambiguous what is meant by “the existing money wage.” Unless we define an individual’s wage rate as the price someone else is willing to pay him for his labor (in which case Keynes’s assertion is defined to be false), what is it?

Lucas, Id.

I must admit that, reading this passage again perhaps 30 or more years after my first reading, I’m astonished that I could once have read it without astonishment. Lucas gives the game away by accusing Keynes of engaging in evasion and wordplay before embarking himself on sustained evasion and wordplay. The meaning of the “existing money wage” is hardly ambiguous: it is the money wage the unemployed worker was receiving before losing his job, and the wage that his fellow workers, who remain employed, continue to receive.

Is Lucas suggesting that the reason the worker lost his job, while his fellow workers did not lose theirs, is that the value of his marginal product fell while the value of his co-workers’ marginal product did not? Perhaps, but that would only add to my astonishment. At the current wage, the employer had to reduce the number of workers employed until the marginal product of those remaining was high enough for the employer to continue employing them. That was not necessarily, and certainly not primarily, because the retained workers were more capable than those who were laid off.

The fact is, I think, that Keynes wanted to get labor markets out of the way in chapter 2 so that he could get on to the demand theory which really interested him.

More wordplay. Is it fact or opinion? Well, he says that he thinks it’s a fact. In other words, it’s really an opinion.

This is surely understandable, but what is the excuse for letting his carelessly drawn distinction between voluntary and involuntary unemployment dominate aggregative thinking on labor markets for the forty years following?

Mr. Keynes, really, what is your excuse for being such an awful human being?

[I]nvoluntary unemployment is not a fact or a phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Keynes introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: large-scale fluctuations in measured, total unemployment. Is it the task of modern theoretical economics to ‘explain’ the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Lucas, Id.

Let’s rewrite this paragraph with a few strategic word substitutions:

Heliocentrism is not a fact or phenomenon which it is the task of theorists to explain. It is, on the contrary, a theoretical construct which Copernicus introduced in the hope it would be helpful in discovering a correct explanation for a genuine phenomenon: the observed movement of the planets in the heavens. Is it the task of modern theoretical physics to “explain” the theoretical constructs of our predecessors, whether or not they have proved fruitful? I hope not, for a surer route to sterility could scarcely be imagined.

Copernicus died in 1543, just as his work on heliocentrism was published. Galileo’s works on heliocentrism were not published until 1610, almost 70 years after Copernicus’s work appeared. So, under Lucas’s forty-year time limit, Galileo had no business trying to explain Copernican heliocentrism, which had still not proven fruitful. Moreover, even after Galileo had published his works, geocentric models were providing predictions of planetary motion as good as, if not better than, those of the heliocentric models, so decisive empirical evidence in favor of heliocentrism was still lacking. Not until Newton published his great work more than 75 years after Galileo, and some 140 years after Copernicus, was heliocentrism finally accepted as fact.

In summary, it does not appear possible, even in principle, to classify individual unemployed people as either voluntarily or involuntarily unemployed depending on the characteristics of the decision problem they face. One cannot, even conceptually, arrive at a usable definition of full employment.

Lucas, Id.

Belying his claim to be introducing scientific rigor into macroeconomics, Lucas resorts to an extended scholastic inquiry into whether an unemployed worker can ever really be unemployed involuntarily. Based on his scholastic inquiry into the nature of voluntariness, Lucas declares that Keynes was mistaken because he would not accept the discipline of optimization and equilibrium. But Lucas’s insistence on the discipline of optimization and equilibrium is misplaced unless he can provide an actual mechanism whereby the notional optimization of a single agent can be reconciled with the notional optimizations of other agents.

It was his inability to provide any explanation of the mechanism whereby the notional optimizations of individual agents are reconciled with one another that led Lucas to adopt rational expectations, thereby circumventing the need for such a mechanism. He successfully persuaded the economics profession that, in evading the need to explain such a reconciliation mechanism, it would not be shirking its explanatory duty, but would merely be fulfilling its methodological obligation to uphold the neoclassical axioms of rationality and optimization neatly subsumed under the heading of microfoundations.

Rational expectations and microfoundations provided the pretext that could justify, or at least excuse, the absence of any explanation of how an equilibrium is reached and maintained. The rational-expectations assumption was simply presumed to be an adequate substitute for the Walrasian auctioneer, so that each and every agent, using the common knowledge (and only the common knowledge) available to all agents, would reliably anticipate the equilibrium price vector prevailing throughout their infinite lives, thereby guaranteeing continuous equilibrium and the consistency of all optimal plans. That feat having been securely accomplished, it was but a small and convenient step to collapse the multitude of individual agents into a single representative agent, so that the virtue of submitting to the discipline of optimization could find its just and fitting reward.

Eight Recurring Ideas in My Studies in the History of Monetary Theory

In the introductory chapter of my book Studies in the History of Monetary Theory: Controversies and Clarifications, I list eight main ideas to which I often come back in the sixteen subsequent chapters. Here they are:

  1. The standard neoclassical models of economics textbooks typically assume full information and perfect competition. But these assumptions are, or ought to be, just the starting point, not the end, of analysis. Recognizing when and why these assumptions need to be relaxed and what empirical implications follow from relaxing those assumptions is how economists gain practical insight into, and understanding of, complex economic phenomena.
  2. Since the late eighteenth or early nineteenth century, many, if not most, of the financial instruments actually used as media of exchange (money) have been produced by private financial institutions (usually commercial banks); the amount of money that is privately produced is governed by the revenue generated and the costs incurred in creating money.
  3. The standard textbook model of international monetary adjustment under the gold standard (or any fixed-exchange-rate system), the price-specie-flow mechanism introduced by David Hume, mischaracterized the adjustment mechanism by overlooking that the prices of tradable goods in any country are constrained by the prices of those tradable goods in other countries. That arbitrage constraint prevents price levels in different currency areas from deviating from a common international level, regardless of local changes in the quantity of money.
  4. The Great Depression was caused by a rapid appreciation of gold resulting from the increasing monetary demand for gold occasioned by the restoration of the international gold standard in the 1920s after the demonetization of gold in World War I.
  5. If the expected rate of deflation exceeds the real rate of interest, real-asset prices crash and economies collapse (see the sketch following this list).
  6. The primary concern of macroeconomics as a field of economics is to explain systemic failures of coordination that lead to significant lapses from full employment.
  7. Lapses from full employment result from substantial and widespread disappointment of agents’ expectations of future prices.
  8. The only, or at least the best, systematic analytical approach to the study of such lapses is the temporary-equilibrium approach introduced by Hicks in Value and Capital.
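One way to see the logic of idea 5, referenced in the list above, is through the standard Fisher equation; this gloss is mine, not a formula quoted from the book:

```latex
% The Fisher equation: nominal rate i, real rate r, expected inflation pi^e.
\[ i = r + \pi^e \]
% If expected deflation exceeds the real rate ($\pi^e < 0$ and $|\pi^e| > r$),
% the market-clearing nominal rate would be negative:
\[ \pi^e < -r \;\Longrightarrow\; i = r + \pi^e < 0, \]
% which is unattainable while holding cash yields a zero nominal return, so
% asset markets cannot clear at any feasible nominal rate and real-asset
% prices collapse.
```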

Here is a list of the chapter titles:

1. Introduction

Part One: Classical Monetary Theory

2. A Reinterpretation of Classical Monetary Theory

3. On Some Classical Monetary Controversies

4. The Real Bills Doctrine in the Light of the Law of Reflux

5. Classical Monetary Theory and the Quantity Theory

6. Monetary Disequilibrium and the Demand for Money in Ricardo and Thornton

7. The Humean and Smithian Traditions in Monetary Theory

8. Rules versus Discretion in Monetary Policy Historically Contemplated

9. Say’s Law and the Classical Theory of Depressions

Part Two: Hawtrey, Keynes, and Hayek

10. Hawtrey’s Good and Bad Trade: A Centenary Retrospective

11. Hawtrey and Keynes

12. Where Keynes Went Wrong

13. Debt, Deflation, the Gold Standard and the Great Depression

14. Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel? (with Ronald Batchelder)

15. The Sraffa-Hayek Debate on the Natural Rate of Interest (with Paul Zimmerman)

16. Hayek, Deflation, Gold and Nihilism

17. Hayek, Hicks, Radner and Four Equilibrium Concepts: Intertemporal, Sequential, Temporary and Rational Expectations


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
