Archive for the 'Lucas Critique' Category

Lucas and Sargent on Optimization and Equilibrium in Macroeconomics

In a famous contribution to a conference sponsored by the Federal Reserve Bank of Boston, Robert Lucas and Thomas Sargent (1978) harshly attacked Keynes and Keynesian macroeconomics for shortcomings both theoretical and econometric. The econometric criticisms, drawing on the famous Lucas Critique (Lucas 1976), were focused on technical identification issues and on the dependence of estimated regression coefficients of econometric models on agents’ expectations conditional on the macroeconomic policies actually in effect, rendering those econometric models an unreliable basis for policymaking. But Lucas and Sargent reserved their harshest criticism for the Keynesian abandonment of what they called the classical postulates.

Economists prior to the 1930s did not recognize a need for a special branch of economics, with its own special postulates, designed to explain the business cycle. Keynes founded that subdiscipline, called macroeconomics, because he thought that it was impossible to explain the characteristics of business cycles within the discipline imposed by classical economic theory, a discipline imposed by its insistence on . . . two postulates (a) that markets . . . clear, and (b) that agents . . . act in their own self-interest [optimize]. The outstanding fact that seemed impossible to reconcile with these two postulates was the length and severity of business depressions and the large scale unemployment which they entailed. . . . After freeing himself of the straight-jacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear — which for the labor market seemed patently contradicted by the severity of business depressions — Keynes took as an unexamined postulate that money wages are “sticky,” meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze[1]. . . .

In recent years, the meaning of the term “equilibrium” has undergone such dramatic development that a theorist of the 1930s would not recognize it. It is now routine to describe an economy following a multivariate stochastic process as being “in equilibrium,” by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied. This development, which stemmed mainly from work by K. J. Arrow and G. Debreu, implies that simply to look at any economic time series and conclude that it is a “disequilibrium phenomenon” is a meaningless observation. Indeed, a more likely conjecture, on the basis of recent work by Hugo Sonnenschein, is that the general hypothesis that a collection of time series describes an economy in competitive equilibrium is without content. (pp. 58-59)

Lucas and Sargent maintain that “classical” (by which they obviously mean “neoclassical”) economics is based on the twin postulates of (a) market clearing and (b) optimization. But optimization is a postulate about individual conduct or decision making under ideal conditions in which individuals can choose costlessly among alternatives that they can rank. Market clearing is not a postulate about individuals; it is the outcome of a process that neoclassical theory did not, and has not, described in any detail.

Instead of describing the process by which markets clear, neoclassical economic theory provides a set of not-too-realistic stories about how markets might clear, of which the two best known are the Walrasian auctioneer/tâtonnement story, widely regarded as merely heuristic, if not fantastical, and the clearly heuristic and not-well-developed Marshallian partial-equilibrium story of a “long-run” equilibrium price for each good, correctly anticipated by market participants and corresponding to the long-run cost of production. However, the cost of production on which the Marshallian long-run equilibrium price depends itself presupposes that a general equilibrium of all other input and output prices has been reached, so the Marshallian story is not an alternative to, but must be subsumed under, the Walrasian general-equilibrium paradigm.

Thus, in invoking the neoclassical postulates of market clearing and optimization, Lucas and Sargent unwittingly, or perhaps wittingly, begged the question of how market clearing, which requires that the plans of individual optimizing agents to buy and sell be reconciled in such a way that each agent can carry out his/her/their plan as intended, comes about. Rather than explain how market clearing is achieved, they simply assert – and rather loudly – that we must postulate that market clearing is achieved, and thereby submit to the virtuous discipline of equilibrium.

Because they could provide neither empirical evidence that equilibrium is continuously achieved nor a plausible explanation of the process whereby it might, or could be, achieved, Lucas and Sargent tried to normalize their insistence that equilibrium is an obligatory postulate that must be accepted by economists by calling it “routine to describe an economy following a multivariate stochastic process as being ‘in equilibrium,’ by which is meant nothing more than that at each point in time, postulates (a) and (b) above are satisfied,” as if the routine adoption of any theoretical or methodological assumption becomes ipso facto justified once adopted routinely. That justification was unacceptable to Lucas and Sargent when made on behalf of “sticky wages” or Keynesian “rules of thumb,” but somehow became compelling when invoked on behalf of perpetual “equilibrium” and neoclassical discipline.

Using the authority of Arrow and Debreu to support the normalcy of the assumption that equilibrium is a necessary and continuous property of reality, Lucas and Sargent maintained that it is “meaningless” to conclude that any economic time series is a disequilibrium phenomenon. A proposition is meaningless if and only if neither the proposition nor its negation is true. So, in effect, Lucas and Sargent are asserting that it is nonsensical to say that an economic time series either reflects or does not reflect an equilibrium, but that it is, nevertheless, methodologically obligatory for any economic model to make that nonsensical assumption.

It is curious that, in making such an outlandish claim, Lucas and Sargent would seek to invoke the authority of Arrow and Debreu. Leave aside the fact that Arrow (1959) himself identified the lack of a theory of disequilibrium pricing as an explanatory gap in neoclassical general-equilibrium theory. But if equilibrium is a necessary and continuous property of reality, why did Arrow and Debreu, not to mention Wald and McKenzie, devote so much time and prodigious intellectual effort to proving that an equilibrium solution to a system of equations exists? If, as Lucas and Sargent assert (nonsensically), it makes no sense to entertain the possibility that an economy is, or could be, in a disequilibrium state, why did Wald, Arrow, Debreu and McKenzie bother to prove that the only possible state of the world actually exists?

Having invoked the authority of Arrow and Debreu, Lucas and Sargent next invoke the seminal contribution of Sonnenschein (1973), though without mentioning the similar and almost simultaneous contributions of Mantel (1974) and Debreu (1974), to argue that the hypothesis that any collection of economic time series is either in equilibrium or out of equilibrium is empirically empty. This property has subsequently been described as an “Anything Goes Theorem” (Mas-Colell, Whinston, and Green, 1995).

Presumably, Lucas and Sargent believe that the empirical emptiness of the hypothesis that a collection of economic time series is, or alternatively is not, in equilibrium is an argument supporting the methodological imperative of maintaining the assumption that the economy absolutely and necessarily is in a continuous state of equilibrium. But what Sonnenschein (and Mantel and Debreu) showed was that even if the excess demands of all individual agents are continuous and homogeneous of degree zero, and even if Walras’s Law is satisfied, aggregating the excess demands of all agents does not guarantee that the aggregate excess-demand function will behave in such a way that a unique, or even a stable, equilibrium exists. But if we have no good argument to explain why a unique or at least a stable neoclassical general-economic equilibrium exists, on what methodological ground is it possible to insist that no deviation from the admittedly empirically empty and meaningless postulate of necessary and continuous equilibrium may be tolerated by conscientious economic theorists? Or that the gatekeepers of reputable neoclassical economics must enforce appropriate standards of professional practice?
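Stated compactly (the notation here is mine, summarizing the theorem rather than quoting it), the Sonnenschein-Mantel-Debreu result is that the standard individual-level restrictions carry almost no structure over to the aggregate:

\[
\text{If } Z:\Delta_{\varepsilon}\to\mathbb{R}^{n} \text{ is continuous, homogeneous of degree zero, and satisfies Walras's Law } \left(p\cdot Z(p)=0\right),
\]
\[
\text{then there exists an exchange economy with } n \text{ consumers, each with continuous, strictly convex, monotone preferences, whose aggregate excess demand equals } Z(p) \text{ for all prices bounded away from zero.}
\]

Nothing in those restrictions rules out multiple equilibria or unstable tâtonnement dynamics, which is precisely what deprives the equilibrium postulate of empirical content.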

As Franklin Fisher (1989) showed, the inability to prove that there is a stable equilibrium leaves neoclassical economics unmoored, because the bread and butter of neoclassical price theory (microeconomics), comparative-statics exercises, is conditional on the assumption that there is at least one stable general-equilibrium solution for a competitive economy.

But it’s not correct to say that general equilibrium theory in its Arrow-Debreu-McKenzie version is empirically empty. Indeed, it has some very strong implications. There is no money, no banks, no stock market, and no missing markets; there is no advertising, no unsold inventories, no search, no private information, and no price discrimination. There are no surprises and there are no regrets, no mistakes and no learning. I could go on, but you get the idea. As a theory of reality, the ADM general-equilibrium model is simply preposterous. And, yet, this is the model of economic reality on the basis of which Lucas and Sargent proposed to build a useful and relevant theory of macroeconomic fluctuations. OMG!

Lucas, in various writings, has actually disclaimed any interest in providing an explanation of reality, insisting that his only aim is to devise mathematical models capable of accounting for the observed values of the relevant time series of macroeconomic variables. In Lucas’s conception of science, the only criterion for scientific knowledge is the capacity of a theory – an algorithm for generating numerical values to be measured against observed time series – to generate predicted values approximating the observed values of the time series. The only constraint on the algorithm is Lucas’s methodological preference that the algorithm be derived from what he conceives to be an acceptable microfounded version of neoclassical theory: a set of predictions corresponding to the solution of a dynamic optimization problem for a “representative agent.”

In advancing his conception of the role of science, Lucas has reverted to the approach of ancient astronomers who, for methodological reasons of their own, believed that the celestial bodies revolved around the earth in circular orbits. To ensure that their predictions matched the time series of the observed celestial positions of the planets, ancient astronomers, following Ptolemy, relied on epicycles or second-order circular movements of planets while traversing their circular orbits around the earth to account for their observed motions.

Copernicus, followed by Kepler and Galileo, conceived of the solar system in a radically different way from the ancients, placing the sun, not the earth, at the fixed center of the solar system, and Kepler proposed that the orbits of the planets were elliptical, not circular. For a long time, however, the geocentric predictions of the observed time series of planetary positions outperformed the new heliocentric predictions. But even before the heliocentric predictions started to outperform the geocentric predictions, the greater simplicity and greater realism of the heliocentric theory attracted an increasing number of followers, forcing methodological supporters of the geocentric theory to take active measures to suppress the heliocentric theory.

I hold no particular attachment to the pre-Lucasian versions of macroeconomic theory, whether Keynesian, Monetarist, or heterodox. Macroeconomic theory required a grounding in an explicit intertemporal setting that had been lacking in most earlier theories. But the ruthless enforcement, based on a preposterous methodological imperative lacking scientific or philosophical justification, of formal intertemporal optimization models as the only acceptable form of macroeconomic theorizing has sidetracked macroeconomics from a more relevant inquiry into the nature and causes of intertemporal coordination failures that Keynes, along with some of his predecessors and contemporaries, had initiated.

Just as the dispute about whether planetary motion is geocentric or heliocentric was a dispute about what the world is like, not just about the capacity of models to generate accurate predictions of time-series variables, current macroeconomic disputes are real disputes about what the world is like: about whether aggregate economic fluctuations are the result of optimizing equilibrium choices by economic agents or the result of coordination failures that cause economic agents to be surprised and disappointed and rendered unable to carry out their plans in the manner in which they had hoped and expected to be able to do. It’s long past time for this dispute about reality to be joined openly with the seriousness that it deserves, instead of being suppressed by a spurious pseudo-scientific methodology.

HT: Arash Molavi Vasséi, Brian Albrecht, and Chris Edmonds


[1] Lucas and Sargent are guilty of at least two misrepresentations in this paragraph. First, Keynes did not “found” macroeconomics, though he certainly influenced its development decisively. Keynes never used the term “macroeconomics,” and his work, though crucial, explicitly drew upon earlier work by Marshall, Wicksell, Fisher, Pigou, Hawtrey, and Robertson, among others. See Laidler (1999). Second, having explicitly argued at length that his results did not depend on the assumption of sticky wages, Keynes certainly never introduced the assumption of sticky wages himself. See Leijonhufvud (1968).

Hayek and the Lucas Critique

In March I wrote a blog post, “Robert Lucas and the Pretense of Science,” which was a draft proposal for a paper for a conference on Coordination Issues in Historical Perspectives to be held in September. My proposal having been accepted, I’m going to post sections of the paper on the blog in hopes of getting some feedback as I write the paper. What follows is the first of several anticipated draft sections.

Just 31 years old, F. A. Hayek rose rapidly to stardom after giving four lectures at the London School of Economics at the invitation of his almost exact contemporary, and soon to be best friend, Lionel Robbins. Hayek had already published several important works, of which the most important was Hayek ([1928] 1984), laying out the basic conceptualization of an intertemporal equilibrium almost simultaneously with the similar conceptualizations of two young Swedish economists, Gunnar Myrdal (1927) and Erik Lindahl ([1929] 1939).

Hayek’s (1931a) LSE lectures aimed to provide a policy-relevant version of a specific theoretical model of the business cycle that drew upon, but was just a particular instantiation of, the general conceptualization developed in his 1928 contribution. Delivered less than two years after the start of the Great Depression, Hayek’s lectures gave a historical overview of the monetary theory of business cycles, an account of how monetary disturbances cause real effects, and a skeptical discussion of how monetary policy might, or more likely might not, counteract or mitigate the downturn then underway. It was Hayek’s skepticism about countercyclical policy that helped make those lectures so compelling but also elicited such a hostile reaction during the unfolding crisis.

The extraordinary success of his lectures established Hayek’s reputation as a preeminent monetary theorist alongside established figures like Irving Fisher, A. C. Pigou, D. H. Robertson, R. G. Hawtrey, and of course J. M. Keynes. Hayek’s (1931b) critical review of Keynes’s just-published Treatise on Money (1930), appearing soon after his LSE lectures and provoking a heated exchange with Keynes himself, showed him to be a skilled debater and a powerful polemicist.

Hayek’s meteoric rise was, however, followed by a rapid fall from the briefly held pinnacle of his early career. Aside from the imperfections and weaknesses of his own theoretical framework (Glasner and Zimmerman 2021), his diagnosis of the causes of the Great Depression (Glasner and Batchelder [1994] 2021a, 2021b) and his policy advice (Glasner 2021) were theoretically misguided and inappropriate to the deflationary conditions underlying the Great Depression.

Nevertheless, Hayek’s conceptualization of intertemporal equilibrium provided insight into the role not only of prices, but also of price expectations, in accounting for cyclical fluctuations. In Hayek’s 1931 version of his cycle theory, the upturn results from bank-financed investment spending enabled by monetary expansion that fuels an economic boom characterized by increased total spending, output and employment. However, owing to resource constraints, misalignments between demand and supply, and drains of bank reserves, the optimistic expectations engendered by the boom are doomed to eventual disappointment, whereupon a downturn begins.

I need not engage here with the substance of Hayek’s cycle theory, which I have criticized elsewhere (see references above). But I would like to consider his 1934 explanation, responding to Hansen and Tout (1933), of why a permanent monetary expansion would be impossible. Hansen and Tout disputed Hayek’s contention that monetary expansion would inevitably lead to a recession, arguing that an unconstrained monetary authority would not be forced by a reserve drain to halt a monetary expansion, so that a boom could continue indefinitely, permanently maintaining an excess of investment over saving.

Hayek (1934) responded as follows:

[A] constant rate of forced saving (i.e., investment in excess of voluntary saving) [requires] a rate of credit expansion which will enable the producers of intermediate products, during each successive unit of time, to compete successfully with the producers of consumers’ goods for constant additional quantities of the original factors of production. But as the competing demand from the producers of consumers’ goods rises (in terms of money) in consequence of, and in proportion to, the preceding increase of expenditure on the factors of production (income), an increase of credit which is to enable the producers of intermediate products to attract additional original factors, will have to be, not only absolutely but even relatively, greater than the last increase which is now reflected in the increased demand for consumers’ goods. Even in order to attract only as great a proportion of the original factors, i.e., in order merely to maintain the already existing capital, every new increase would have to be proportional to the last increase, i.e., credit would have to expand progressively at a constant rate. But in order to bring about constant additions to capital, it would have to do more: it would have to increase at a constantly increasing rate. The rate at which this rate of increase must increase would be dependent upon the time lag between the first expenditure of the additional money on the factors of production and the re-expenditure of the income so created on consumers’ goods. . . .

But I think it can be shown . . . that . . . such a policy would . . . inevitably lead to a rapid and progressive rise in prices which, in addition to its other undesirable effects, would set up movements which would soon counteract, and finally more than offset, the “forced saving.” That it is impossible, either for a simple progressive increase of credit which only helps to maintain, and does not add to, the already existing “forced saving,” or for an increase in credit at an increasing rate, to continue for a considerable time without causing a rise in prices, results from the fact that in neither case have we reason to assume that the increase in the supply of consumers’ goods will keep pace with the increase in the flow of money coming on to the market for consumers’ goods. Insofar as, in the second case, the credit expansion leads to an ultimate increase in the output of consumers’ goods, this increase will lag considerably and increasingly (as the period of production increases) behind the increase in the demand for them. But whether the prices of consumers’ goods will rise faster or slower, all other prices, and particularly the prices of the original factors of production, will rise even faster. It is only a question of time when this general and progressive rise of prices becomes very rapid. My argument is not that such a development is inevitable once a policy of credit expansion is embarked upon, but that it has to be carried to that point if a certain result—a constant rate of forced saving, or maintenance without the help of voluntary saving of capital accumulated by forced saving—is to be achieved.
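The arithmetic of the first claim can be illustrated with a deliberately stylized sketch (mine, not Hayek’s; it assumes a one-period lag with which factor incomes are re-spent on consumers’ goods and a fixed real output):

# Stylized illustration of why a constant credit-financed share of factor
# demand ("forced saving") requires credit to grow at a constant percentage
# rate. Assumptions: one-period spending lag, all factor income re-spent on
# consumers' goods next period, real output fixed.

s = 0.10          # share of factor demand financed by newly created credit
income = 100.0    # initial money income (factor payments)
credit = []

for t in range(10):
    consumer_demand = income                  # last period's income, re-spent with a lag
    total_demand = consumer_demand / (1 - s)  # chosen so that new credit is s * total_demand
    credit.append(s * total_demand)
    income = total_demand                     # this period's factor payments become income

growth = [credit[t + 1] / credit[t] - 1 for t in range(len(credit) - 1)]
print(growth)     # a constant rate, s / (1 - s): credit must expand geometrically

With real output fixed by assumption, money income grows at the same constant rate as the credit injections, which is the progressive rise in prices that Hayek emphasizes in the second paragraph of the quotation.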

Friedman’s (1968) argument why monetary expansion could not permanently reduce unemployment below its “natural rate” closely mirrors (though he almost certainly never read) Hayek’s argument that monetary expansion could not permanently maintain a rate of investment spending above the rate of voluntary saving. Generalizing Friedman’s logic, Lucas (1976) transformed it into a critique of using econometric estimates of relationships like the Phillips Curve, the specific target of Friedman’s argument, as a basis for predicting the effects of policy changes, such estimates being conditional on implicit expectational assumptions which aren’t invariant to the policy changes derived from those estimates.

Stated differently, such econometric estimates are reduced forms that, without identifying restrictions, do not allow the estimated regression coefficients to be used to predict the effects of a policy change.

Only by specifying, and estimating, the deep structural relationships governing the response to a policy change could the effect of a potential policy change be predicted with some confidence that the prediction would not prove erroneous because of changes in the econometrically estimated relationships once agents altered their behavior in response to the policy change.
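To make the point concrete, here is a minimal simulation (a toy expectations-augmented Phillips curve of my own devising, not Lucas’s model) in which the structural parameter linking unemployment to inflation surprises never changes, yet the reduced-form slope of unemployment on inflation shifts with the policy rule:

import numpy as np

# Structural relation: u_t = u_n - b*(pi_t - E[pi_t]) + eps_t, so only inflation
# *surprises* affect unemployment. Policy rule: pi_t = rho*pi_{t-1} + v_t, and
# rational agents expect E[pi_t] = rho*pi_{t-1}, leaving v_t as the surprise.
rng = np.random.default_rng(0)
b, u_n, T = 2.0, 5.0, 100_000   # structural slope, natural rate, sample size

def reduced_form_slope(rho):
    v = rng.normal(0.0, 1.0, T)      # policy surprises
    eps = rng.normal(0.0, 0.5, T)    # other shocks to unemployment
    pi = np.zeros(T)
    for t in range(1, T):
        pi[t] = rho * pi[t - 1] + v[t]
    u = u_n - b * v + eps
    return np.cov(u, pi)[0, 1] / np.var(pi)   # OLS slope of u on pi

for rho in (0.0, 0.5, 0.9):
    print(rho, round(reduced_form_slope(rho), 2))
# The estimated slope is roughly -b*(1 - rho**2): it "drifts" when the policy
# rule changes, even though the structural parameter b is fixed throughout.

The regression describes the sample it was estimated on perfectly well, but its coefficient is a mixture of the structural parameter and the policy rule, and it shifts as soon as the rule does.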

In his 1974 Nobel Lecture, Hayek offered a similar explanation of why an observed correlation between aggregate demand and employment provides no basis for predicting the effect of policies aimed at increasing aggregate demand and reducing unemployment if the likely changes in structural relationships caused by those policies are not taken into account.

[T]he very measures which the dominant “macro-economic” theory has recommended as a remedy for unemployment, namely the increase of aggregate demand, have become a cause of a very extensive misallocation of resources which is likely to make later large-scale unemployment inevitable. The continuous injection . . . money at points of the economic system where it creates a temporary demand which must cease when the increase of the quantity of money stops or slows down, together with the expectation of a continuing rise of prices, draws labour . . . into employments which can last only so long as the increase of the quantity of money continues at the same rate – or perhaps even only so long as it continues to accelerate at a given rate. What this policy has produced is not so much a level of employment that could not have been brought about in other ways, as a distribution of employment which cannot be indefinitely maintained . . . The fact is that by a mistaken theoretical view we have been led into a precarious position in which we cannot prevent substantial unemployment from re-appearing; not because . . . this unemployment is deliberately brought about as a means to combat inflation, but because it is now bound to occur as a deeply regrettable but inescapable consequence of the mistaken policies of the past as soon as inflation ceases to accelerate.

Hayek’s point that an observed correlation between the rate of inflation (a proxy for aggregate demand) and unemployment cannot be relied on in making economic policy was articulated succinctly and abstractly by Lucas as follows:

In short, one can imagine situations in which empirical Phillips curves exhibit long lags and situations in which there are no lagged effects. In either case, the “long-run” output inflation relationship as calculated or simulated in the conventional way has no bearing on the actual consequences of pursuing a policy of inflation.

[T]he ability . . . to forecast consequences of a change in policy rests crucially on the assumption that the parameters describing the new policy . . . are known by agents. Over periods for which this assumption is not approximately valid . . . empirical Phillips curves will appear subject to “parameter drift,” describable over the sample period, but unpredictable for all but the very near future.

The lesson inferred by both Hayek and Lucas was that Keynesian macroeconomic models of aggregate demand, inflation and employment can’t reliably guide economic policy and should be discarded in favor of models more securely grounded in the microeconomic theories of supply and demand that emerged from the Marginal Revolution of the 1870s and eventually became the neoclassical economic theory that describes the characteristics of an efficient, decentralized and self-regulating economic system. This was the microeconomic basis on which Hayek and Lucas believed macroeconomic theory ought to be built instead of the Keynesian system that they were criticizing. But that superficial similarity obscures the profound methodological and substantive differences between them.

Those differences will be considered in future posts.

References

Friedman, M. 1968. “The Role of Monetary Policy.” American Economic Review 58(1):1-17.

Glasner, D. 2021. “Hayek, Deflation, Gold and Nihilism.” Ch. 16 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. [1994] 2021a. “Debt, Deflation, the Gold Standard and the Great Depression.” Ch. 13 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Batchelder, R. W. 2021b. “Pre-Keynesian Monetary Theories of the Great Depression: Whatever Happened to Hawtrey and Cassel?” Ch. 14 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Glasner, D. and Zimmerman, P. 2021.  “The Sraffa-Hayek Debate on the Natural Rate of Interest.” Ch. 15 in D. Glasner Studies in the History of Monetary Theory: Controversies and Clarifications. London: Palgrave Macmillan.

Hansen, A. and Tout, H. 1933. “Annual Survey of Business Cycle Theory: Investment and Saving in Business Cycle Theory,” Econometrica 1(2): 119-47.

Hayek, F. A. [1928] 1984. “Intertemporal Price Equilibrium and Movements in the Value of Money.” In R. McCloughry (Ed.), Money, Capital and Fluctuations: Early Essays (pp. 171–215). Routledge.

Hayek, F. A. 1931a. Prices and Production. London: Macmillan.

Hayek, F. A. 1931b. “Reflections on the Pure Theory of Money of Mr. Keynes.” Economica 33:270-95.

Hayek, F. A. 1934. “Capital and Industrial Fluctuations.” Econometrica 2(2): 152-67.

Keynes, J. M. 1930. A Treatise on Money. 2 vols. London: Macmillan.

Lindahl, E. [1929] 1939. “The Place of Capital in the Theory of Price.” In E. Lindahl, Studies in the Theory of Money and Capital. London: George Allen & Unwin.

Lucas, R. E. [1976] 1985. “Econometric Policy Evaluation: A Critique.” In R. E. Lucas, Studies in Business-Cycle Theory. Cambridge: MIT Press.

Myrdal, G. 1927. Prisbildningsproblemet och Foranderligheten (Price Formation and the Change Factor). Almqvist & Wicksell.

Robert Lucas and the Pretense of Science

F. A. Hayek entitled his 1974 Nobel Lecture, whose principal theme was an attack on the simple notion that the long-observed correlation between aggregate demand and employment was a reliable basis for conducting macroeconomic policy, “The Pretence of Knowledge.” Reiterating an argument that he had made over 40 years earlier about the transitory stimulus provided to profits and production by monetary expansion, Hayek was informally anticipating the argument that Robert Lucas famously repackaged two years later in his critique of econometric policy evaluation. Hayek’s argument hinged on a distinction between “phenomena of disorganized complexity” and “phenomena of organized complexity.” Statistical relationships or correlations between phenomena of disorganized complexity may be relied upon to persist, but observed statistical correlations displayed by phenomena of organized complexity cannot be relied upon without detailed knowledge of the individual elements that constitute the system. It was the facile assumption that observed statistical correlations in systems of organized complexity can be uncritically relied upon in making policy decisions that Hayek dismissed as merely the pretense of knowledge.

Adopting many of Hayek’s complaints about macroeconomic theory, Lucas founded his New Classical approach to macroeconomics on a methodological principle that all macroeconomic models be grounded in the axioms of neoclassical economic theory as articulated in the canonical Arrow-Debreu-McKenzie models of general equilibrium. Without such grounding in neoclassical axioms and explicit formal derivations of theorems from those axioms, Lucas maintained that macroeconomics could not be considered truly scientific. Forty years of Keynesian macroeconomics were, in Lucas’s view, largely pre-scientific or pseudo-scientific, because they lacked satisfactory microfoundations.

Lucas’s methodological program for macroeconomics was thus based on two basic principles: reductionism and formalism. First, all macroeconomic models not only had to be consistent with rational individual decisions, they had to be reduced to those choices. Second, all the propositions of macroeconomic models had to be explicitly derived from the formal definitions and axioms of neoclassical theory. Lucas demanded nothing less than that individual rationality be explicitly assumed in every macroeconomic model and that all decisions by agents in a macroeconomic model be individually rational.

In practice, implementing Lucasian methodological principles required that in any macroeconomic model all agents’ decisions be derived within an explicit optimization problem. However, as Hayek had himself shown in his early studies of business cycles and intertemporal equilibrium, individual optimization in the standard Walrasian framework, within which Lucas wished to embed macroeconomic theory, is possible only if all agents are optimizing simultaneously, all individual decisions being conditional on the decisions of other agents. The individual optimization problems can be solved only simultaneously for all agents, not individually in isolation.

The difficulty of solving a macroeconomic equilibrium model for the simultaneous optimal decisions of all the agents in the model led Lucas and his associates and followers to a strategic simplification: reducing the entire model to a representative agent. The optimal choices of a single agent would then embody the consumption and production decisions of all agents in the model.
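A canonical instance of the kind of problem to which such models are reduced (the notation is mine, and this is the generic stochastic-growth planner’s problem rather than any particular model of Lucas’s) is:

\[
\max_{\{c_t,\,k_{t+1}\}}\;\mathbb{E}_0\sum_{t=0}^{\infty}\beta^{t}u(c_t)
\quad\text{subject to}\quad
c_t+k_{t+1}=z_t f(k_t)+(1-\delta)k_t,
\]

where the single agent’s consumption c_t and capital choice k_{t+1} stand in for the consumption and investment of the entire economy, and z_t is an exogenous stochastic productivity process.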

The staggering simplification involved in reducing a purported macroeconomic model to a representative agent is obvious on its face, but the sleight of hand being performed deserves explicit attention. The existence of an equilibrium solution to the neoclassical system of equations was simply assumed, based on the faulty reasoning of Walras, Fisher and Pareto, who merely counted equations and unknowns. A rigorous proof of existence was provided only by Abraham Wald in 1936, and subsequently in more general form by Arrow, Debreu and McKenzie, working independently, in the 1950s. But proving the existence of a solution to the system of equations does not establish that an actual neoclassical economy would, in fact, converge on such an equilibrium.

Neoclassical theory was and remains silent about the process whereby equilibrium is, or could be, reached. The Marshallian branch of neoclassical theory, focusing on equilibrium in individual markets rather than on the systemic equilibrium, is often thought to provide an account of how equilibrium is arrived at, but the Marshallian partial-equilibrium analysis presumes that all markets and prices, except the price in the single market under analysis, are in a state of equilibrium. So the Marshallian approach provides no more explanation of a process by which a set of equilibrium prices for an entire economy is, or could be, reached than the Walrasian approach does.

Lucasian methodology has thus led to substituting a single-agent model for an actual macroeconomic model, on the premise that an economic system operates as if it were in a state of general equilibrium. The factual basis for this premise is apparently that it is possible, using versions of a suitable model with calibrated coefficients, to account for observed aggregate time series of consumption, investment, national income, and employment. But the time series derived from these models are generated by attributing all observed variations in national income to unexplained shocks in productivity, so that the explanation provided is in fact an ex-post rationalization of the observed variations, not an explanation of those variations.
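A minimal sketch of how those “productivity shocks” are obtained (my own illustration, assuming a Cobb-Douglas aggregate production function with a calibrated capital share; the numbers are placeholders, not data):

import numpy as np

# Back out the "technology shock" series as a residual (a Solow residual),
# assuming Y = Z * K**alpha * L**(1 - alpha).
alpha = 0.33                          # calibrated capital share (assumed)
Y = np.array([100.0, 102.0, 101.0])   # observed output (illustrative numbers)
K = np.array([300.0, 303.0, 305.0])   # observed capital stock
L = np.array([100.0, 101.0, 99.0])    # observed labor input

log_Z = np.log(Y) - alpha * np.log(K) - (1 - alpha) * np.log(L)
print(log_Z)   # whatever output movement the inputs don't account for is
               # attributed, by construction, to the unexplained shock Z

The “explanation” of fluctuations is thus built directly into the measurement of the shocks, which is the ex-post character of the exercise referred to above.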

Nor did Lucasian methodology have a theoretical basis in received neoclassical theory. In a famous 1959 paper, “Toward a Theory of Price Adjustment,” Kenneth Arrow identified the explanatory gap in neoclassical theory: the absence of a theory of price change in competitive markets in which every agent is a price taker. The existence of an equilibrium does not entail that the equilibrium will be, or is even likely to be, found. The notion that price flexibility is somehow a guarantee that market adjustments reliably lead to an equilibrium outcome is a presumption or a preconception, not the result of rigorous analysis.

However, Lucas used the concept of rational expectations, which originally meant no more than that agents try to use all available information to anticipate future prices, to make the concept of equilibrium, notwithstanding its inherent implausibility, a methodological necessity. A rational-expectations equilibrium was ruthlessly enforced on researchers, because it was presumed to be entailed by the neoclassical assumption of rationality. Lucasian methodology transformed rational expectations into the proposition that all agents form identical, and correct, expectations of future prices based on the same available information (common knowledge). Because all agents reach the same, correct expectations of future prices, general equilibrium is continuously achieved, except at intermittent moments when new information arrives and is used by agents to revise their expectations.

In his Nobel Lecture, Hayek decried a pretense of knowledge about correlations between macroeconomic time series that lack a foundation in the deeper structural relationships between those related time series. Without an understanding of the deeper structural relationships between those time series, observed correlations cannot be relied on when formulating economic policies. Lucas’s own famous critique echoed the message of Hayek’s lecture.

The search for microfoundations was always a natural and commendable endeavor. Scientists naturally try to reduce higher-level theories to deeper and more fundamental principles. But the endeavor ought to be conducted as a theoretical and empirical undertaking. If successful, the reduction of the higher-level theory to a deeper theory will provide insight and disclose new empirical implications for both the higher-level and the deeper theories. But reduction by methodological fiat accomplishes neither and discourages the research that might actually achieve a theoretical reduction of a higher-level theory to a deeper one. Similarly, formalism can provide important insights into the structure of theories and disclose gaps or mistakes in the reasoning underlying those theories. But most important theories, even in pure mathematics, start out as informal theories that only gradually become axiomatized as logical gaps and ambiguities in the theories are discovered and filled or refined.

The reductionist and formalist methodological imperatives by which Lucas and his followers have justified their pretensions to scientific prestige and authority, and by which they have compelled compliance, only belie those pretensions.

The Phillips Curve and the Lucas Critique

With unemployment at the lowest levels since the start of the millennium (initial unemployment claims in February were the lowest since 1973!), lots of people are starting to wonder if we might be headed for a pick-up in the rate of inflation, which has been averaging well under 2% a year since the financial crisis of September 2008 ushered in the Little Depression of 2008-09 and beyond. The Fed has already signaled its intention to continue raising interest rates even though inflation remains well anchored at rates below the Fed’s 2% target. And among Fed watchers and Fed cognoscenti, the only question being asked is not whether the Fed will raise its Fed Funds rate target, but how frequent those (presumably) quarter-point increments will be.

The prevailing view seems to be that the thought process of the Federal Open Market Committee (FOMC) in raising interest rates — even before there is any real evidence of an increase in an inflation rate that is still below the Fed’s 2% target — is that a preemptive strike is required to prevent inflation from accelerating and rising above what has become an inflation ceiling — not an inflation target — of 2%.

Why does the Fed believe that inflation is going to rise? That’s what the econoblogosphere has, of late, been trying to figure out. And the consensus seems to be that the FOMC has concluded that the risk that inflation will break the 2% ceiling it has implicitly adopted has become unacceptably high. That risk assessment is based on some sort of analysis in which it is inferred from the Phillips Curve that, with unemployment nearing historically low levels, rising inflation has become dangerously likely. And so the next question is: why is the FOMC fretting about the Phillips Curve?

In a blog post earlier this week, David Andolfatto of the St. Louis Federal Reserve Bank, tried to spell out in some detail the kind of reasoning that lay behind the FOMC decision to actively tighten the stance of monetary policy to avoid any increase in inflation. At the same time, Andolfatto expressed his own view, that the rate of inflation is not determined by the rate of unemployment, but by the stance of monetary policy.

Andolfatto’s avowal of monetarist faith in the purely monetary forces that govern the rate of inflation elicited a rejoinder from Paul Krugman expressing considerable annoyance at Andolfatto’s monetarism.

Here are three questions about inflation, unemployment, and Fed policy. Some people may imagine that they’re the same question, but they definitely aren’t:

  1. Does the Fed know how low the unemployment rate can go?
  2. Should the Fed be tightening now, even though inflation is still low?
  3. Is there any relationship between unemployment and inflation?

It seems obvious to me that the answer to (1) is no. We’re currently well above historical estimates of full employment, and inflation remains subdued. Could unemployment fall to 3.5% without accelerating inflation? Honestly, we don’t know.

Agreed.

I would also argue that the Fed is making a mistake by tightening now, for several reasons. One is that we really don’t know how low U can go, and won’t find out if we don’t give it a chance. Another is that the costs of getting it wrong are asymmetric: waiting too long to tighten might be awkward, but tightening too soon increases the risks of falling back into a liquidity trap. Finally, there are very good reasons to believe that the Fed’s 2 percent inflation target is too low; certainly the belief that it was high enough to make the zero lower bound irrelevant has been massively falsified by experience.

Agreed, but the better approach would be to target the price level, or, even better, nominal GDP, so that short-term undershooting of the inflation target would provide increased leeway to allow inflation to overshoot the inflation target without undermining the credibility of the commitment to price stability.

But should we drop the whole notion that unemployment has anything to do with inflation? Via FTAlphaville, I see that David Andolfatto is at it again, asserting that there’s something weird about asserting an unemployment-inflation link, and that inflation is driven by an imbalance between money supply and money demand.

But one can fully accept that inflation is driven by an excess supply of money without denying that there is a link between inflation and unemployment. In the normal course of events, an excess supply of money may lead to increased spending as people attempt to exchange their excess cash balances for real goods and services. The increased spending can induce additional output and additional employment along with rising prices. The reverse happens when there is an excess demand for cash balances and people attempt to build up their cash holdings by cutting back their spending, reducing output and employment. So the inflation-unemployment relationship results from the effects induced by a particular causal circumstance. That does not mean, however, that an imbalance between the supply of and demand for money is the only cause of inflation or of price-level changes.

Inflation can also result from nothing more than the anticipation of inflation. Expected inflation can also affect output and employment, so inflation and unemployment are related not only by both being affected by an excess supply of (or demand for) money, but also by both being affected by expected inflation.

Even if you think that inflation is fundamentally a monetary phenomenon (which you shouldn’t, as I’ll explain in a minute), wage- and price-setters don’t care about money demand; they care about their own ability or lack thereof to charge more, which has to – has to – involve the amount of slack in the economy. As Karl Smith pointed out a decade ago, the doctrine of immaculate inflation, in which money translates directly into inflation – a doctrine that was invoked to predict inflationary consequences from Fed easing despite a depressed economy – makes no sense.

There’s no reason for anyone to care about overall money demand in this scenario. Price setters respond to the perceived change in the rate of spending induced by an excess supply of money. (I note parenthetically that I am referring now to an excess supply of base money, not to an excess supply of bank-created money, which, unlike base money, is not a hot potato that cannot be withdrawn from circulation in response to market incentives.) Now some price setters may actually use macroeconomic information to forecast price movements, but recognizing that channel would take us into the realm of an expectations theory of inflation, not the strict monetary theory of inflation that Krugman is criticizing.

And the claim that there’s weak or no evidence of a link between unemployment and inflation is sustainable only if you insist on restricting yourself to recent U.S. data. Take a longer and broader view, and the evidence is obvious.

Consider, for example, the case of Spain. Inflation in Spain is definitely not driven by monetary factors, since Spain hasn’t even had its own money since it joined the euro. Nonetheless, there have been big moves in both Spanish inflation and Spanish unemployment:

That period of low unemployment, by Spanish standards, was the result of huge inflows of capital, fueling a real estate bubble. Then came the sudden stop after the Greek crisis, which sent unemployment soaring.

Meanwhile, the pre-crisis era was marked by relatively high inflation, well above the euro-area average; the post-crisis era by near-zero inflation, below the rest of the euro area, allowing Spain to achieve (at immense cost) an “internal devaluation” that has driven an export-led recovery.

So, do you really want to claim that the swings in inflation had nothing to do with the swings in unemployment? Really, really?

No one – at least no one who believes in a monetary theory of inflation – should claim that swings in inflation and unemployment are unrelated, but to acknowledge the relationship between inflation and unemployment does not entail acceptance of the proposition that unemployment is a causal determinant of inflation.

But if you concede that unemployment had a lot to do with Spanish inflation and disinflation, you’ve already conceded the basic logic of the Phillips curve. You may say, with considerable justification, that U.S. data are too noisy to have any confidence in particular estimates of that curve. But denying that it makes sense to talk about unemployment driving inflation is foolish.

No, it’s not foolish, because the relationship between inflation and unemployment is not a causal relationship; it’s a coincidental relationship. The level of employment depends on many things, and some of the things that employment depends on also affect inflation. That doesn’t mean that employment causally affects inflation.

When I read Krugman’s post and the Andolfatto post that provoked Krugman, it occurred to me that the way to summarize all of this is to say that unemployment and inflation are determined by a variety of deep structural (causal) relationships. The Phillips Curve, although it was once fashionable to refer to it as the missing equation in the Keynesian model, is not a structural relationship; it is a reduced form. The negative relationship between unemployment and inflation that is found by empirical studies does not tell us that high unemployment reduces inflation, any more than a positive empirical relationship between the price of a commodity and the quantity sold would tell you that the demand curve for that product is positively sloped.
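The demand-curve analogy can be made concrete with a small simulation (a toy supply-and-demand model of my own):

import numpy as np

# Demand: Q = a - b*P + demand_shock (slope -b); supply: Q = c + e*P + supply_shock.
# When demand shocks dominate, the observed price-quantity relationship traces
# out the supply curve and the empirical slope is positive.
rng = np.random.default_rng(1)
a, b, c, e, T = 100.0, 2.0, 10.0, 1.0, 50_000
d = rng.normal(0.0, 5.0, T)    # volatile demand shocks
sv = rng.normal(0.0, 0.5, T)   # small supply shocks

P = (a - c + d - sv) / (b + e)   # market-clearing price each period
Q = c + e * P + sv               # market-clearing quantity

print(round(np.cov(Q, P)[0, 1] / np.var(P), 2))   # about +0.97, not -2.0

The estimated slope is close to +1, the supply slope, even though demand slopes downward at -2; a reduced form reflects which shocks happened to dominate the sample, not the underlying causal structure, and the same caution applies to empirical Phillips curves.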

It may be interesting to know that there is a negative empirical relationship between inflation and unemployment, but we can’t rely on that relationship in making macroeconomic policy. I am not a big admirer of the Lucas Critique for reasons that I have discussed in other posts (e.g., here and here). But, the Lucas Critique, a rather trivial result that was widely understood even before Lucas took ownership of the idea, does at least warn us not to confuse a reduced form with a causal relationship.

The Standard Narrative on the History of Macroeconomics: An Exercise in Self-Serving Apologetics

During my recent hiatus from blogging, I have been pondering an important paper presented in June at the History of Economics Society meeting in Toronto, “The Standard Narrative on History of Macroeconomics: Central Banks and DSGE Models” by Francesco Sergi of the University of Bristol, which was selected by the History of Economics Society as the best conference paper by a young scholar in 2017.

Here is the abstract of Sergi’s paper:

How do macroeconomists write the history of their own discipline? This article provides a careful reconstruction of the history of macroeconomics told by the practitioners working today in the dynamic stochastic general equilibrium (DSGE) approach.

Such a tale is a “standard narrative”: a widespread and “standardizing” view of macroeconomics as a field evolving toward “scientific progress”. The standard narrative explains scientific progress as resulting from two factors: “consensus” about theory and “technical change” in econometric tools and computational power. This interpretation is a distinctive feature of central banks’ technical reports about their DSGE models.

Furthermore, such a view on “consensus” and “technical change” is a significantly different view with respect to similar tales told by macroeconomists in the past — which rather emphasized the role of “scientific revolutions” and struggles among competing “schools of thought”. Thus, this difference raises some new questions for historians of macroeconomics.

Sergi’s paper is too long and too rich in content to easily summarize in this post, so what I will do is reproduce and comment on some of the many quotations provided by Sergi, taken mostly from central-bank reports, but also from some leading macroeconomic textbooks and historical survey papers, about the “progress” of modern macroeconomics, and especially about the critical role played by “microfoundations” in achieving that progress. The general tenor of the standard narrative is captured well by the following quotation from V. V. Chari:

[A]ny interesting model must be a dynamic stochastic general equilibrium model. From this perspective, there is no other game in town. […] A useful aphorism in macroeconomics is: “If you have an interesting and coherent story to tell, you can tell it in a DSGE model.” (Chari 2010, 2)

I could elaborate on this quotation at length, but I will just leave it out there for readers to ponder with a link to an earlier post of mine about methodological arrogance. Instead I will focus on two other sections of Sergi’s paper: “the five steps of theoretical progress” and “microfoundations as theoretical progress.” Here is how Sergi explains the role of the five steps:

The standard narrative provides a detailed account of the progressive evolution toward the synthesis. Following a teleological perspective, each step of this evolution is an incremental, linear improvement of the theoretical tool box for model building. The standard narrative identifies five steps . . . .  Each step corresponds to the emergence of a school of thought. Therefore, in the standard narrative, there are not such things as competing schools of thought and revolutions. Firstly, because schools of thought are represented as a sequence; one school (one step) is always leading to another school (the following step), hence different schools are not coexisting for a long period of time. Secondly, there are no revolutions because, while emerging, new schools of thought [do] not overthrow the previous ones; instead, they suggest improvements and amendments, that are accepted as an improvement by pre-existing schools; therefore, accumulation of knowledge takes place thanks to consensus. (pp. 17-18)

The first step in the standard narrative is the family of Keynesian macroeconometric models of the 1950s and 1960s, the primitive ancestors of the modern DSGE models. The second step was the emergence of New Classical macroeconomics, which introduced the ideas of rational expectations and dynamic optimization into theoretical macroeconomic discourse in the 1970s. The third step was the development, inspired by New Classical ideas, of the Real-Business-Cycle models of the 1980s, and the fourth step was the introduction of New Keynesian models in the late 1980s and 1990s that tweaked the Real-Business-Cycle models in ways that rationalized the use of counter-cyclical macroeconomic policy within the theoretical framework of the Real-Business-Cycle approach. The final step, the DSGE model, emerged more or less naturally as a synthesis of the converging Real-Business-Cycle and New Keynesian approaches.

After detailing the five steps of theoretical progress, Sergi focuses attention on “the crucial improvement” that allowed the tool box of macroeconomic modelling to be extended in such a theoretically fruitful way: the insistence on providing explicit microfoundations for macroeconomic models. He writes:

Abiding [by] the Lucasian microfoundational program is put forward by DSGE modellers as the very fundamental essence of theoretical progress allowed by [the] consensus. As Sanjay K. Chugh (University of Pennsylvania) explains in the historical chapter of his textbook, microfoundations is all what modern macroeconomics is about (p. 20):

Modern macroeconomics begin by explicitly studying the microeconomic principles of utility maximization, profit maximization and market-clearing. [. . . ] This modern macroeconomics quickly captured the attention of the profession through the 1980s [because] it actually begins with microeconomic principles, which was a rather attractive idea. Rather than building a framework of economy-wide events from the top down [. . .] one could build this framework using microeconomic discipline from the bottom up. (Chugh 2015, 170)

Chugh’s rationale for microfoundations is a naïve expression of reductionist bias dressed up as simple homespun common sense. Everyone knows that you should build from the bottom up, not from the top down, right? But things are not always quite as simple as they seem. Here is an attempt, offered in a 2009 technical report by Cuche-Curti et al. for the Swiss National Bank, to present microfoundations as cutting-edge and sophisticated:

The key property of DSGE models is that they rely on explicit micro-foundations and a rational treatment of expectations in a general equilibrium context. They thus provide a coherent and compelling theoretical framework for macroeconomic analysis. (Cuche-Curti et al. 2009, 6)

A similar statement is made by Gomes et al. in a 2010 technical report for the European Central Bank:

The microfoundations of the model together with its rich structure allow [us] to conduct a quantitative analysis in a theoretically coherent and fully consistent model setup, clearly spelling out all the policy implications. (Gomes et al. 2010, 5)

These laudatory descriptions of the DSGE model stress its “coherence” as a primary virtue. What is meant by “coherence” is spelled out more explicitly in a 2006 technical report describing NEMO, a macromodel of the Norwegian economy, by Brubakk et al. for the Norges Bank.

Various agents’ behavior is modelled explicitly in NEMO, based on microeconomic theory. A consistent theoretical framework makes it easier to interpret relationships and mechanisms in the model in the light of economic theory. One advantage is that we can analyse the economic effects of changes of a more structural nature […] [making it] possible to provide a consistent and detailed economic rationale for Norges Bank’s projections for the Norwegian economy. This distinguishes NEMO from purely statistical models, which to a limited extent provide scope for economic interpretations. (Brubakk and Sveen 2009, 39)

By creating microfounded models, in which all agents are optimizers making choices consistent with the postulates of microeconomic theory, DSGE model-builders, in effect, create “laboratories” from which to predict the consequences of alternative monetary policies, enabling policy makers to make informed policy choices. I pause merely to draw attention to the tendentious and misleading misappropriation of the language of empirical science in these characteristically self-aggrandizing references to DSGE models as “laboratories,” as if what goes on in such models were determined by an actual physical process, as is routinely the case in the laboratories of physical and natural scientists, rather than by speculative exercises in high-level calculation derived from the manipulation of the models themselves.

As a result of recent advances in macroeconomic theory and computational techniques, it has become feasible to construct richly structured dynamic stochastic general equilibrium models and use them as laboratories for the study of business cycles and for the formulation and analysis of monetary policy. (Cuche-Curti et al. 2009, 39)

Policy makers, it is claimed, can be confident in the conditional predictions, corresponding to the policy alternatives under consideration, derived from their “laboratory” DSGE models, because those models, having been constructed on the basis of the postulates of economic theory, are microfounded, embodying deep structural parameters that are invariant to policy changes. Microfounded models are thus immune to the Lucas Critique of macroeconomic policy evaluation, under which the empirically estimated coefficients of traditional Keynesian macroeconometric models cannot be assumed to remain constant under policy changes, because those coefficient estimates are themselves conditional on policy choices.

Here is how the point is made in three different central bank technical reports: by Argov et al. in a 2012 technical report about MOISE, a DSGE model for the Israeli economy; by Cuche-Curti et al. in their report for the Swiss National Bank; and by Medina and Soto in a 2006 technical report about a new DSGE model of the Chilean economy for the Central Bank of Chile.

Being micro-founded, the model enables the central bank to assess the effect of its alternative policy choices on the future paths of the economy’s endogenous variables, in a way that is immune to the Lucas critique. (Argov et al. 2012, 5)

[The DSGE] approach has three distinct advantages in comparison to other modelling strategies. First and foremost, its microfoundations should allow it to escape the Lucas critique. (Cuche-Curti et al. 2009, 6)

The main advantage of this type of model, over more traditional reduced-form macro models, is that the structural interpretation of their parameters allows [it] to overcome the Lucas Critique. This is clearly an advantage for policy analysis. (Medina and Soto, 2006, 2)

These quotations show clearly that escaping, being immune to, or overcoming the Lucas Critique is viewed by DSGE modelers as the holy grail of macroeconomic model building and macroeconomic policy analysis. If the Lucas Critique cannot be neutralized, the coefficient estimates derived from reduced-form macroeconometric models cannot be treated as invariant to policy and therefore cannot provide a secure basis for predicting the effects of alternative policies. But DSGE models allow deep structural relationships, reflecting the axioms underlying microeconomic theory, to be estimated. And because those estimated deep parameters reflect the deep, and presumably stable, microeconomic structure of the economy, DSGE modelers claim that they provide policy makers with a reliable basis for conditional forecasting of the effects of macroeconomic policy.
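To make the Lucas-Critique logic concrete, here is a minimal simulation sketch (my own illustration, not drawn from any of the cited reports), assuming a toy economy in which output responds only to unanticipated money and agents know the policy rule; the names and parameter values are purely hypothetical.

import numpy as np
rng = np.random.default_rng(0)
def reduced_form_slope(rho, alpha=1.0, T=20000):
    # Policy rule: m[t] = rho*m[t-1] + u[t]; agents know rho, so only u[t] is a surprise.
    u = rng.normal(size=T)
    m = np.empty(T)
    m[0] = u[0]
    for t in range(1, T):
        m[t] = rho * m[t - 1] + u[t]
    # "Deep" structural relation: output responds only to the monetary surprise u.
    y = alpha * u + rng.normal(size=T)
    # What a traditional reduced-form regression of output on money estimates:
    return np.polyfit(m, y, 1)[0]
for rho in (0.0, 0.5, 0.9):
    print(f"policy rule rho = {rho}: estimated slope ~ {reduced_form_slope(rho):.2f}")
# The deep parameter alpha = 1 never changes, but the estimated slope is roughly
# alpha*(1 - rho**2), so it shifts whenever the policy rule shifts.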

Because of the consistently poor track record of DSGE models in actual forecasting (for evidence of that poor track record, see the paper by Carlaw and Lipsey comparing the predictive performance of DSGE models with that of more traditional macroeconometric models, and my post about their paper), the emphasis placed on the Lucas Critique by DSGE modelers has an apologetic character: DSGE modelers relentlessly invoke the Lucas Critique to account for, and explain away, the relatively poor predictive performance of DSGE models. But if DSGE models really are better than traditional macro models, why are their unconditional predictions not at least as good as those of traditional macroeconometric models? Obviously, estimates of the deep structural relationships provided by microfounded models are not as reliable as DSGE apologetics tries to suggest.

And the reason that the estimates of deep structural relationships derived from DSGE models are not reliable is that those models, no less than traditional macroeconometric models, are subject to the Lucas Critique: the deep microeconomic structural relationships embodied in DSGE models are conditional on the existence of a unique equilibrium solution that persists long enough for the structural relationships characterizing that equilibrium to be inferred from the data on which those models are estimated. (I have made this point previously here.) But if the data-generating mechanism does not conform to the unique general equilibrium on whose existence the presumed deep structural relationships of microeconomic theory embodied in DSGE models are conditioned, the econometric estimates derived from DSGE models cannot capture the desired deep structural relationships, and the resulting structural estimates are therefore incapable of providing a reliable basis for macroeconomic-policy analysis or for conditional forecasts of the effects of alternative policies, much less unconditional forecasts of endogenous macroeconomic variables.

Of course, the problem is even more intractable than the discussion above implies, because there is no reason why the deep structural relationships corresponding to a particular equilibrium should be invariant to changes in the equilibrium. So any change in economic policy that displaces a pre-existing equilibrium – let alone any unforeseen change in technology, tastes, or resource endowments that does so – will necessarily cause all the deep structural relationships to change correspondingly. So the deep structural parameters on whose invariance the supposedly unique capacity of DSGE models to provide reliable policy analysis depends simply don’t exist. Policy making based on DSGE models is as much an uncertain art, requiring the exercise of finely developed judgment and intuition, as policy making based on any other kind of economic modeling. DSGE models provide no uniquely reliable basis for making macroeconomic policy.

References

Argov, E., Barnea, E., Binyamini, A., Borenstein, E., Elkayam, D., and Rozenshtrom, I. (2012). MOISE: A DSGE Model for the Israeli Economy. Technical Report 2012.06, Bank of Israel.
Brubakk, L., Husebø, T. A., Maih, J., Olsen, K., and Østnor, M. (2006). Finding NEMO: Documentation of the Norwegian economy model. Technical Report 2006/6, Norges Bank, Staff Memo.
Carlaw, K. I., and Lipsey, R. G. (2012). “Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy.” Journal of Evolutionary Economics. 22(4):735-66.
Chari, V. V. (2010). Testimony before the committee on Science and Technology, Subcommittee on Investigations and Oversight, US House of Representatives. In Building a Science of Economics for the Real World.
Chugh, S. K. (2015). Modern Macroeconomics. MIT Press, Cambridge (MA).
Cuche-Curti, N. A., Dellas, H., and Natal, J.-M. (2009). DSGE-CH. A Dynamic Stochastic General Equilibrium Model for Switzerland. Technical Report 5, Swiss National Bank.
Gomes, S., Jacquinot, P., and Pisani, M. (2010). The EAGLE. A Model for Policy Analysis of Macroeconomic Interdependence in the Euro Area. Technical Report 1195, European Central Bank.
Medina, J. P. and Soto, C. (2006). Model for Analysis and Simulations (MAS): A New DSGE Model for the Chilean Economy. Technical report, Central Bank of Chile.

Richard Lipsey and the Phillips Curve Redux

Almost three and a half years ago, I published a post about Richard Lipsey’s paper “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” The paper, originally presented at the 2013 meeting of the History of Economics Society, has just been published in the Journal of the History of Economic Thought, with a slightly revised title, “The Phillips Curve and an Assumed Unique Macroeconomic Equilibrium in Historical Context.” The abstract of the revised published version of the paper is different from the earlier abstract included in my 2013 post. Here is the new abstract.

An early post-WWII debate concerned the most desirable demand and inflationary pressures at which to run the economy. Context was provided by Keynesian theory devoid of a full employment equilibrium and containing its mainly forgotten, but still relevant, microeconomic underpinnings. A major input came with the estimates provided by the original Phillips curve. The debate seemed to be rendered obsolete by the curve’s expectations-augmented version with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with stable inflation. The current behavior of economies with the successful inflation targeting is inconsistent with this natural-rate view, but is consistent with evolutionary theory in which economies have a wide range of GDP-compatible stable inflation. Now the early post-WWII debates are seen not to be as misguided as they appeared to be when economists came to accept the assumptions implicit in the expectations-augmented Phillips curve.

Publication of Lipsey’s article nicely coincides with Roger Farmer’s new book Prosperity for All, which I discussed in my previous post. A key point that Roger makes is that the assumption of a unique equilibrium, which underlies modern macroeconomics and the vertical long-run Phillips Curve, is neither theoretically compelling nor consistent with the empirical evidence. Lipsey’s article powerfully reinforces those arguments. Access to Lipsey’s article is gated on the JHET website, so in addition to the abstract, I will quote the introduction and a couple of paragraphs from the conclusion.

One important early post-WWII debate, which took place particularly in the UK, concerned the demand and inflationary pressures at which it was best to run the economy. The context for this debate was provided by early Keynesian theory with its absence of a unique full-employment equilibrium and its mainly forgotten, but still relevant, microeconomic underpinnings. The original Phillips Curve was highly relevant to this debate. All this changed, however, with the introduction of the expectations-augmented version of the curve with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with a stable inflation rate. This new view of the economy found easy acceptance partly because most economists seem to feel deeply in their guts — and their training predisposes them to do so — that the economy must have a unique equilibrium to which market forces inevitably propel it, even if the approach is sometimes, as some believe, painfully slow.

The current behavior of economies with successful inflation targeting is inconsistent with the existence of a unique non-accelerating-inflation rate of unemployment (NAIRU) but is consistent with evolutionary theory in which the economy is constantly evolving in the face of path-dependent, endogenously generated, technological change, and has a wide range of unemployment and GDP over which the inflation rate is stable. This view explains what otherwise seems mysterious in the recent experience of many economies and makes the early post-WWII debates not seem as silly as they appeared to be when economists came to accept the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment. One thing that stands in the way of accepting this view, however, is the tyranny of the generally accepted assumption of a unique, self-sustaining macroeconomic equilibrium.

This paper covers some of the key events in the theory concerning, and the experience of, the economy’s behavior with respect to inflation and unemployment over the post-WWII period. The stage is set by the pressure-of-demand debate in the 1950s and the place that the simple Phillips curve came to play in it. The action begins with the introduction of the expectations-augmented Phillips curve and the acceptance by most Keynesians of its implication of a unique, self-sustaining macro equilibrium. This view seemed not inconsistent with the facts of inflation and unemployment until the mid-1990s, when the successful adoption of inflation targeting made it inconsistent with the facts. An alternative view is proposed, one that is capable of explaining current macro behavior and reinstates the relevance of the early pressure-of-demand debate. (pp. 415-16).

In reviewing the evidence that stable inflation is consistent with a range of unemployment rates, Lipsey generalizes the concept of a unique NAIRU to a non-accelerating-inflation band of unemployment (NAIBU) within which multiple rates of unemployment are consistent with a basically stable expected rate of inflation. In an interesting footnote, Lipsey addresses a possible argument against the relevance of the empirical evidence for policy makers based on the Lucas critique.

Some might raise the Lucas critique here, arguing that one finds the NAIBU in the data because policymakers are credibly concerned only with inflation. As soon as policymakers made use of the NAIBU, the whole unemployment-inflation relation that has been seen since the mid-1990s might change or break. For example, unions, particularly in the European Union, where they are typically more powerful than in North America, might alter their behavior once they became aware that the central bank was actually targeting employment levels directly and appeared to have the power to do so. If so, the Bank would have to establish that its priorities were lexicographically ordered with control of inflation paramount so that any level-of-activity target would be quickly dropped whenever inflation threatened to go outside of the target bands. (pp. 426-27)

I would just mention in this context that in this 2013 post about the Lucas critique, I pointed out that in the paper in which Lucas articulated his critique, he assumed that the only possible source of disequilibrium was a mistake in expected inflation. If everything else is working well, causing inflation expectations to be incorrect will make things worse. But if there are other sources of disequilibrium, it is not clear that incorrect inflation expectations will make things worse; they could make things better. That is a point that Lipsey and Kelvin Lancaster taught the profession in a classic article “The General Theory of Second Best,” 20 years before Lucas published his critique of econometric policy evaluation.

I conclude by quoting Lipsey’s penultimate paragraph (the final paragraph being a quote from Lipsey’s paper on the Phillips Curve from the Blaug and Lloyd volume Famous Figures and Diagrams in Economics, which I quoted in full in my 2013 post).

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of GDP to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade-off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of GDP, and finally back to the early Keynesian view in which policymakers had an option as to the average pressure of aggregate demand at which economic activity could be sustained. However, the modern debate about whether to aim for [the high or low range of stable unemployment rates] is not a debate about inflation versus growth, as it was in the 1950s, but between those who would risk an occasional rise of inflation above the target band as the price of getting unemployment as low as possible and those who would risk letting unemployment fall below that indicated by the lower boundary of the NAIBU as the price of never risking an acceleration of inflation above the target rate. (p. 427)

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago I commented favorably about Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify Stigler as an inspiration and role model for Lucas’s transgressions. Now, just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism, and has its standard responses to criticism, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.
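For readers who want the question-begging spelled out, here is a rough sketch (my own illustration, with made-up numbers) of the standard growth-accounting calculation by which RBC-style “technology shocks” are typically backed out of the data as Solow residuals:

import numpy as np
# Made-up levels of output, capital, and labor, for illustration only.
Y = np.array([100.0, 103.0, 101.0, 106.0])
K = np.array([300.0, 306.0, 310.0, 315.0])
L = np.array([50.0, 50.5, 49.8, 51.0])
alpha = 0.33  # assumed capital share
# "Technology" is whatever output is left unexplained by measured inputs:
# log A = log Y - alpha*log K - (1 - alpha)*log L
log_A = np.log(Y) - alpha * np.log(K) - (1 - alpha) * np.log(L)
shocks = np.diff(log_A)  # period-to-period "technology shocks"
print(np.round(shocks, 4))
# The shock series is constructed from the very output fluctuations that the
# model is then credited with explaining.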

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics, which serves as a kind of default excuse and self-justification for the rampant empirical failures of modern macroeconomics (documented by Lipsey and Carlaw, as I showed in this post). When confronted by evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of the modern macroeconomists is: all models are false. By which the modern macroeconomists apparently mean something like: “And if they are all false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being microfounded, are not subject to the Lucas Critique, and since all models other than ours are not microfounded, and are therefore subject to the Lucas Critique, they are simply unworthy of consideration.” This is what I have called methodological arrogance. That response is simply not true, because the Lucas Critique applies even to microfounded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem more easily analyzed. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse of course is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case to use the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all theories are false” what I found was not Popper, but the British statistician G. E. P. Box, who wrote in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is the constant self-congratulatory references to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read and take to heart Box’s lecture.

What’s Wrong with Monetarism?

UPDATE: (05/06): In an email Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so that it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his following up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that only if the increased output generated during the recovery was of a particular composition would recovery be sustainable.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation — in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market dogmatism with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were attached so fervently was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering the gold standard retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued that a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternative periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, why not just eliminate the middleman and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish, even indirectly: a gradual steady increase in the quantity of money that would prevent monetary-induced booms and busts.

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone, by imposing on the monetary authority a rule prescribing the time path of a single targeted instrument over which the monetary authority has direct control: the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable; it is a reduced-form variable incorporating both supply-side and demand-side parameters. But Friedman and other Monetarists insisted on treating it as if it were a structural – indeed, a deep structural – supply-side variable, so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s, when Friedman and his acolytes were confidently predicting – over and over again – the return of double-digit inflation because the quantity of money was increasing for most of the 1980s at double-digit rates.
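A back-of-the-envelope illustration (my own sketch, not Friedman’s) using the textbook multiplier formula shows why the multiplier is a reduced form rather than a deep supply-side parameter:

def money_multiplier(c, r):
    # Textbook formula: M/B = (1 + c)/(r + c), where c is the public's
    # currency-deposit ratio (a demand-side choice) and r is the banks'
    # reserve-deposit ratio (itself partly a behavioral choice).
    return (1 + c) / (r + c)
base = 100.0  # monetary base, the variable the central bank actually controls
for c in (0.1, 0.3, 0.6):  # shifts in the public's demand for currency
    print(f"c = {c}: money stock = {money_multiplier(c, r=0.1) * base:.0f}")
# With the base fixed, the money stock goes from 550 to 325 to about 229 as c
# rises, so the "multiplier" moves with private behavior; it is a reduced form,
# not a policy-invariant supply-side parameter.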

So once the k-percent rule – the alternative that Friedman had persuasively, though fallaciously, argued was, on strictly libertarian grounds, preferable to the gold standard – collapsed under an avalanche of contradictory evidence, the gold standard once again became the default position of laissez-faire dogmatists. There was, to be sure, some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition – the elimination of legal tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal-tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of crypto currencies like the bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and for that matter with his many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized, and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

All New Classical Models Are Subject to the Lucas Critique

Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution, when he provided a very compelling example of how the predictions of the then standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations in light of the policy change, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it did no more than cause economists to be more cautious in offering policy advice based on their econometric models, and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas Critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
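To see how little machinery is involved, here is a deliberately minimal two-period sketch (my own toy example, not any particular New Classical model), in which the representative agent’s “deep” parameters and endowments fully determine the optimal plan that the modeling strategy then identifies with the economy’s equilibrium:

def representative_agent_plan(y1, y2, beta, r):
    # Two-period problem: max log(c1) + beta*log(c2)
    # subject to c1 + c2/(1 + r) = y1 + y2/(1 + r).
    # With log utility the solution has a closed form:
    wealth = y1 + y2 / (1 + r)
    c1 = wealth / (1 + beta)
    c2 = beta * (1 + r) * wealth / (1 + beta)
    return c1, c2
c1, c2 = representative_agent_plan(y1=100.0, y2=100.0, beta=0.96, r=0.04)
print(round(c1, 2), round(c2, 2))
# The whole quantitative "equilibrium" follows from the deep parameter beta, the
# endowments, and an interest rate assumed to clear the market; everything is
# conditioned on the agent's plan being optimal and actually carried out.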

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology and resource endowments. The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse, as a prelude to replacing those models by the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that equilibrium. If the equilibrium does not obtain – the optimal plans of the individual agents or of the fantastical representative agent becoming incapable of execution – empirical estimates of the model’s parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than the period in which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.

Barro and Krugman Yet Again on Regular Economics vs. Keynesian Economics

A lot of people have been getting all worked up about Paul Krugman’s acerbic takedown of Robert Barro for suggesting in a Wall Street Journal op-ed in 2011 that increased government spending would not stimulate the economy. Barro’s target was a claim by Agriculture Secretary Tom Vilsack that every additional dollar spent on food stamps would actually result in a net increase of $1.84 in total spending. This statement so annoyed Barro that, in a fit of pique, he wrote the following.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

Sorry to brag, but it was actually none other than moi who (via Mark Thoma) brought this little gem to Krugman’s attention. In what is still my third most visited blog post, I expressed incredulity that Barro could ask where the market failure was in a situation in which unemployment suddenly rises to more than double its pre-recession level. I also pointed out that Barro had himself previously acknowledged in a Wall Street Journal op-ed that monetary expansion could alleviate a cyclical increase in unemployment. If monetary policy (printing money on worthless pieces of paper) can miraculously reduce unemployment, why is it out of the question that government spending could also reduce unemployment, especially when it is possible to view government spending as a means of transferring cash from people with unlimited demand for money to those unwilling to increase their holdings of cash? So, given Barro’s own explicit statement that monetary policy could be stimulative, it seemed odd for him to suggest, without clarification, that it would be a miracle if fiscal policy were effective.

Apparently, Krugman felt compelled to revisit this argument of Barro’s because of the recent controversy about extending unemployment insurance, an issue to which Barro made only passing reference in his 2011 piece. Krugman again ridiculed the idea that just because regular economics says that a policy will have adverse effects under “normal” conditions, the policy must be wrongheaded even in a recession.

But if you follow right-wing talk — by which I mean not Rush Limbaugh but the Wall Street Journal and famous economists like Robert Barro — you see the notion that aid to the unemployed can create jobs dismissed as self-evidently absurd. You think that you can reduce unemployment by paying people not to work? Hahahaha!

Quite aside from the fact that this ridicule is dead wrong, and has had a malign effect on policy, think about what it represents: it amounts to casually trashing one of the most important discoveries economists have ever made, one of my profession’s main claims to be useful to humanity.

Krugman was subsequently accused of bad faith in making this argument because he, like other Keynesians, has acknowledged that unemployment insurance tends to increase the unemployment rate. Therefore, his critics argue, it was hypocritical of Krugman to criticize Barro and the Wall Street Journal for making precisely the same argument that he himself has made. Well, you can perhaps accuse Krugman of being a bit artful in his argument by not acknowledging explicitly that a full policy assessment might in fact legitimately place some limit on UI benefits, but Krugman’s main point is obviously not to assert that “regular economics” is necessarily wrong, just that Barro and the Wall Street Journal are refusing to acknowledge that countercyclical policy of some type could ever, under any circumstances, be effective. Or, to put it another way, Krugman could (and did) easily agree that increasing UI increases the natural rate of unemployment, but, in a recession, actual unemployment is above the natural rate, and UI can cause the actual rate to fall even as it causes the natural rate to rise.

Now Barro might respond that all he was really saying in his 2011 piece was that the existence of a government-spending multiplier significantly greater than zero is not supported by the empirical evidence. But there are two problems with that response. First, it would still not resolve the theoretical inconsistency between Barro’s acknowledgment that monetary policy does have magical properties in a recession and his position that fiscal policy has no such magical powers. Second, and perhaps less obviously, the empirical evidence on which Barro relies does not necessarily distinguish between periods of severe recession or depression and periods when the economy is close to full employment. If so, the empirical estimates of government-spending multipliers are subject to the Lucas critique: parameter estimates may not be stable over time, because those parameters may change depending on the cyclical phase of the economy. The multiplier at the trough of a deep business cycle may be much greater than the multiplier close to full employment. The empirical estimates for the multiplier cited by Barro make no real allowance for different cyclical phases.
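Here is a small illustrative simulation (my own, not based on the studies Barro cites), assuming for the sake of argument that the true multiplier differs across cyclical phases; a single full-sample regression then recovers an average that describes neither regime:

import numpy as np
rng = np.random.default_rng(1)
T = 10000
slump = rng.random(T) < 0.2                  # a fifth of the sample is deep slump
g = rng.normal(size=T)                       # government-spending shock
true_multiplier = np.where(slump, 1.5, 0.3)  # assumed state-dependent multipliers
y = true_multiplier * g + rng.normal(size=T)
pooled = np.polyfit(g, y, 1)[0]              # single full-sample estimate
slump_only = np.polyfit(g[slump], y[slump], 1)[0]
print(f"pooled estimate     ~ {pooled:.2f}")      # about 0.54: describes neither regime
print(f"slump-only estimate ~ {slump_only:.2f}")  # close to the assumed 1.5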

PS Scott Sumner also comes away from reading Barro’s 2011 piece perplexed by what Barro is really saying and why, and does an excellent job of trying in vain to find some coherent conceptual framework within which to understand Barro. The problem is that there is none. That’s why Barro deserves the rough treatment he got from Krugman.

