Archive for the 'DSGE models' Category

A Tale of Two Syntheses

I recently finished reading a slender, but weighty, collection of essays, Microfoundations Reconsidered: The Relationship of Micro and Macroeconomics in Historical Perspective, edited by Pedro Duarte and Gilberto Lima; it contains, in addition to a brief introductory essay by the editors, contributions by Kevin Hoover, Robert Leonard, Wade Hands, Phil Mirowski, Michel De Vroey, and Pedro Duarte. The volume is both informative and stimulating, helping me to crystallize ideas about which I have been ruminating and writing for a long time, especially in some of my more recent posts (e.g., here, here, and here) and my recent paper “Hayek, Hicks, Radner and Four Equilibrium Concepts.”

Hoover’s essay provides a historical account of microfoundations, making clear that the search for microfoundations long preceded the Lucasian microfoundations movement of the 1970s and 1980s that would revolutionize macroeconomics in the late 1980s and early 1990s. I have been writing about the differences between varieties of microfoundations for quite a while (here and here), and Hoover provides valuable detail about early discussions of microfoundations and about their relationship to the now regnant Lucasian microfoundations dogma. But for my purposes here, Hoover’s key contribution is his deconstruction of the concept of microfoundations, showing that the idea of microfoundations depends crucially on the notion that agents in a macroeconomic model be explicit optimizers, meaning that they maximize an explicit function subject to explicit constraints.

What Hoover clarifies is the vacuity of the Lucasian optimization dogma. Until Lucas, optimization by agents had been merely a necessary condition for a model to be microfounded. But there was also another condition: that the optimizing choices of agents be mutually consistent. Establishing that the optimizing choices of agents are mutually consistent is not necessarily easy or even possible, so often the consistency of optimizing plans can only be suggested by some sort of heuristic argument. But Lucas and his cohorts, followed by their acolytes, unable to explain, even informally or heuristically, how the optimizing choices of individual agents are rendered mutually consistent, instead resorted to question-begging and question-dodging techniques to avoid addressing the consistency issue, of which one — the most egregious, but not the only one — is the representative agent. In so doing, Lucas et al. transformed the optimization problem from the coordination of multiple independent choices into the optimal plan of a single decision maker. Heckuva job!

The second essay by Robert Leonard, though not directly addressing the question of microfoundations, helps clarify and underscore the misrepresentation perpetrated by the Lucasian microfoundational dogma in disregarding and evading the need to describe a mechanism whereby the optimal choices of individual agents are, or could be, reconciled. Leonard focuses on a particular economist, Oskar Morgenstern, who began his career in Vienna as a not untypical adherent of the Austrian school of economics, a member of the Mises seminar and successor of F. A. Hayek as director of the Austrian Institute for Business Cycle Research upon Hayek’s 1931 departure to take a position at the London School of Economics. However, Morgenstern soon began to question the economic orthodoxy of neoclassical economic theory and its emphasis on the tendency of economic forces to reach a state of equilibrium.

In his famous early critique of the foundations of equilibrium theory, Morgenstern tried to show that the concept of perfect foresight, upon which, he alleged, the concept of equilibrium rests, is incoherent. To do so, Morgenstern used the example of the Holmes-Moriarty interaction, in which Holmes and Moriarty are caught in a dilemma in which neither can predict whether the other will get off or stay on the train on which they are both passengers, because the optimal choice of each depends on the choice of the other. The unresolvable conflict between Holmes and Moriarty, in Morgenstern’s view, showed the incoherence of the idea of perfect foresight.

As his disillusionment with orthodox economic theory deepened, Morgenstern became increasingly interested in the potential of mathematics to serve as a tool of economic analysis. Through his acquaintance with the mathematician Karl Menger, the son of Carl Menger, founder of the Austrian School of economics, Morgenstern became close to Menger’s student, Abraham Wald, a pure mathematician of exceptional ability, who, to support himself, was working on statistical and mathematical problems for the Austrian Institute for Business Cycle Research, and tutoring Morgenstern in mathematics and its applications to economic theory. Wald himself went on to make seminal contributions to mathematical economics and statistical analysis.

Morgenstern also became acquainted with another student of Menger, John von Neumann, who had an interest in applying advanced mathematics to economic theory. Von Neumann and Morgenstern would later collaborate in writing The Theory of Games and Economic Behavior, as a result of which Morgenstern came to reconsider his early view of the Holmes-Moriarty paradox, inasmuch as it could be shown that an equilibrium solution of their interaction could be found if payoffs to their joint choices were specified, thereby enabling Holmes and Moriarty to choose optimal probabilistic strategies.

I don’t think that the game-theoretic solution to the Holmes-Moriarty game is as straightforward as Morgenstern eventually agreed, but the critical point in the microfoundations discussion is that the mathematical solution to the Holmes-Moriarty paradox acknowledges the necessity for the choices made by two or more agents in an economic or game-theoretic equilibrium to be reconciled – i.e., rendered mutually consistent. Under the Lucasian microfoundations dogma, the problem is either annihilated by positing an optimizing representative agent having no need to coordinate his decision with other agents (I leave the question of who, in the Holmes-Moriarty interaction, is the representative agent as an exercise for the reader) or it is assumed away by positing the existence of a magical equilibrium with no explanation of how the mutually consistent choices are arrived at.
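
To make the mixed-strategy resolution concrete, here is a minimal sketch of the Holmes-Moriarty interaction as a 2x2 zero-sum game, with payoffs to Moriarty. The payoff numbers are illustrative, in the spirit of those used in The Theory of Games and Economic Behavior but not quoted from the text, and the solution method is just the standard indifference condition for a 2x2 game without a saddle point.

```python
import numpy as np

# Payoffs to Moriarty (the game is zero-sum, so Holmes's payoffs are the negatives).
# Rows: Moriarty goes to Dover / gets off at the intermediate station.
# Columns: Holmes goes to Dover / gets off at the intermediate station.
# These numbers are illustrative, not taken from Morgenstern or von Neumann.
A = np.array([[100.0,   0.0],
              [-50.0, 100.0]])

def solve_2x2_zero_sum(A):
    """Mixed-strategy equilibrium of a 2x2 zero-sum game with no saddle point."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # probability the row player (Moriarty) plays row 0
    q = (d - b) / denom          # probability the column player (Holmes) plays column 0
    value = (a * d - b * c) / denom
    return p, q, value

p, q, v = solve_2x2_zero_sum(A)
print(f"Moriarty goes to Dover with probability {p:.2f}")
print(f"Holmes goes to Dover with probability {q:.2f}")
print(f"value of the game to Moriarty: {v:.1f}")
```

Each player's randomization makes the other indifferent between his two pure strategies; that indifference is exactly what reconciles the two plans, and it is the need for such reconciliation that the Lucasian approach simply assumes away.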

The third essay (“The Rise and Fall of Walrasian Economics: The Keynes Effect”) by Wade Hands considers the first of the two syntheses – the neoclassical synthesis — that are alluded to in the title of this post. Hands gives a learned account of the mutually reinforcing co-development of Walrasian general equilibrium theory and Keynesian economics in the 25 years or so following World War II. Although Hands agrees that there is no necessary connection between Walrasian GE theory and Keynesian theory, he argues that there was enough common ground between Keynesians and Walrasians, as famously explained by Hicks in summarizing Keynesian theory by way of his IS-LM model, to allow the two disparate research programs to nourish each other in a kind of symbiotic relationship as the two research programs came to dominate postwar economics.

The task for Keynesian macroeconomists following the lead of Samuelson, Solow and Modigliani at MIT, Alvin Hansen at Harvard and James Tobin at Yale was to elaborate the Hicksian IS-LM approach by embedding it in a more general Walrasian framework. In so doing, they helped to shape a research agenda for Walrasian general-equilibrium theorists working out the details of the newly developed Arrow-Debreu model, deriving conditions for the uniqueness and stability of the equilibrium of that model. The neoclassical synthesis followed from those efforts, achieving an uneasy reconciliation between Walrasian general equilibrium theory and Keynesian theory. It received its most complete articulation in the impressive treatise of Don Patinkin, which attempted to derive, or at least evaluate, key Keynesian propositions in the context of a full general equilibrium model. At an even higher level of theoretical sophistication, the 1971 summation of general equilibrium theory by Arrow and Hahn gave disproportionate attention to Keynesian ideas, which were presented and analyzed using the tools of state-of-the-art Walrasian analysis.

Hands sums up the coexistence of Walrasian and Keynesian ideas in the Arrow-Hahn volume as follows:

Arrow and Hahn’s General Competitive Analysis – the canonical summary of the literature – dedicated far more pages to stability than to any other topic. The book had fourteen chapters (and a number of mathematical appendices); there was one chapter on consumer choice, one chapter on production theory, and one chapter on existence [of equilibrium], but there were three chapters on stability analysis (two on the traditional tatonnement and one on alternative ways of modeling general equilibrium dynamics). Add to this the fact that there was an important chapter on “The Keynesian Model”; and it becomes clear how important stability analysis and its connection to Keynesian economics was for Walrasian microeconomics during this period. The purpose of this section has been to show that that would not have been the case if the Walrasian economics of the day had not been a product of co-evolution with Keynesian economic theory. (p. 108)

What seems most unfortunate about the neoclassical synthesis is that it elevated and reinforced the least relevant and least fruitful features of both the Walrasian and the Keynesian research programs. The Hicksian IS-LM setup abstracted from the dynamic and forward-looking aspects of Keynesian theory, reducing it to a static one-period model not easily deployed as a tool of dynamic analysis. Walrasian GE analysis, following the pathbreaking GE existence proofs of Arrow and Debreu, proceeded to a disappointing search for the conditions for a unique and stable general equilibrium.

It was Paul Samuelson who, building on Hicks’s pioneering foray into stability analysis, argued that the stability question could be answered by investigating whether a system of Lyapunov differential equations, describing market price adjustments as functions of market excess demands, would converge on an equilibrium price vector. But Samuelson’s approach to establishing stability required the mechanism of a fictional tatonnement process. Even with that unsatisfactory assumption, the stability results were disappointing.
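
For readers who want to see what such a price-adjustment process looks like, here is a minimal numerical sketch of a tatonnement in a two-good exchange economy with Cobb-Douglas consumers. The preference weights and endowments are made up for illustration; the point is only that the fictional auctioneer adjusts the price in proportion to excess demand, and no trade occurs until the process has converged.

```python
import numpy as np

# Samuelson-style tatonnement: dp/dt = k * z(p), where z(p) is aggregate excess demand.
# Two goods, two Cobb-Douglas consumers; good 2 is the numeraire (its price is 1).
# The preference weights and endowments below are illustrative, not drawn from any source.
alpha = np.array([0.3, 0.7])            # consumer i spends alpha[i] of wealth on good 1
endow = np.array([[4.0, 1.0],           # consumer 0's endowment of (good 1, good 2)
                  [1.0, 4.0]])          # consumer 1's endowment

def excess_demand_good1(p1):
    wealth = endow[:, 0] * p1 + endow[:, 1]      # value of each consumer's endowment
    demand = alpha * wealth / p1                 # Cobb-Douglas demand for good 1
    return demand.sum() - endow[:, 0].sum()

p1, k, dt = 3.0, 1.0, 0.05
for step in range(200):
    p1 += k * excess_demand_good1(p1) * dt       # discrete-time price adjustment
print(f"tatonnement price of good 1: {p1:.4f}")
print(f"excess demand at that price: {excess_demand_good1(p1):.6f}")
```

In this gross-substitutes example the process converges, but, as argued below, convergence in such a fictional process says little about how an actual economy adjusts in real time.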

Although for Walrasian theorists the results hardly repaid the effort expended, for those Keynesians who interpreted Keynes as an instability theorist, the weak Walrasian stability results might have been viewed as encouraging. But that was not an easy route to take either, because Keynes had also argued that a persistent unemployment equilibrium might be the norm.

It’s also hard to understand how the stability of equilibrium in an imaginary tatonnement process could ever have been considered relevant to the operation of an actual economy in real time – a leap of faith almost as extraordinary as imagining an economy represented by a single agent. Any conventional comparative-statics exercise – the bread and butter of microeconomic analysis – involves comparing two equilibria, corresponding to a specified parametric change in the conditions of the economy. The comparison presumes that, starting from an equilibrium position, the parametric change leads from an initial to a new equilibrium. If the economy isn’t stable, a disturbance causing an economy to depart from an initial equilibrium need not result in an adjustment to a new equilibrium comparable to the old one.

If conventional comparative statics hinges on an implicit stability assumption, it’s hard to see how a stability analysis of tatonnement has any bearing on the comparative-statics exercises routinely relied upon by economists. No actual economy ever adjusts to a parametric change by way of tatonnement. Whether a parametric change displacing an economy from its equilibrium time path would lead the economy toward another equilibrium time path is another interesting and relevant question, but it’s difficult to see what insight would be gained by proving the stability of equilibrium under a tatonnement process.

Moreover, there is a distinct question about the endogenous stability of an economy: are there endogenous tendencies within an economy that lead it away from its equilibrium time path? But questions of endogenous stability can only be posed in a dynamic, rather than a static, model. While extending the Walrasian model to include an infinity of time periods, Arrow and Debreu telescoped determination of the intertemporal-equilibrium price vector into a preliminary time period before time, production, exchange and consumption begin. So, even in the formally intertemporal Arrow-Debreu model, the equilibrium price vector, once determined, is fixed and not subject to revision. Standard stability analysis was concerned with the response over time to changing circumstances only insofar as changes are foreseen at time zero, before time begins, so that they can be and are taken fully into account when the equilibrium price vector is determined.

Though not entirely uninteresting, the intertemporal analysis had little relevance to the stability of an actual economy operating in real time. Thus, neither the standard Keynesian (IS-LM) model nor the standard Walrasian Arrow-Debreu model provided an intertemporal framework within which to address the questions of dynamic stability that Keynes (and contemporaries like Hayek, Myrdal, Lindahl and Hicks) had raised in the 1930s. In particular, Hicks’s analytical device of temporary equilibrium might have facilitated such an analysis. But, having introduced his IS-LM model two years before publishing his temporary equilibrium analysis in Value and Capital, Hicks concentrated his attention primarily on Keynesian analysis and did not return to the temporary equilibrium model until 1965 in Capital and Growth. And it was IS-LM that became, for a generation or two, the preferred analytical framework for macroeconomic analysis, while temporary equilibrium remained overlooked until the 1970s, just as the neoclassical synthesis started coming apart.

The fourth essay by Phil Mirowski investigates the role of the Cowles Commission, based at the University of Chicago from 1939 to 1955, in undermining Keynesian macroeconomics. While Hands argues that Walrasians and Keynesians came together in a non-hostile spirit of tacit cooperation, Mirowski believes that, owing to its Walrasian sympathies, the Cowles Commission had an implicit anti-Keynesian orientation and was therefore at best unsympathetic if not overtly hostile to Keynesian theorizing, which was incompatible with the Walrasian optimization paradigm endorsed by the Cowles economists. (Another layer of unexplored complexity is the tension between the Walrasianism of the Cowles economists and the Marshallianism of the Chicago School economists, especially Knight and Friedman, which made Chicago an inhospitable home for the Cowles Commission and led to its eventual departure to Yale.)

Whatever their differences, both the Mirowski and the Hands essays support the conclusion that the uneasy relationship between Walrasianism and Keynesianism was inherently problematic and ultimately unsustainable. But to me the tragedy is that before the fall, in the 1950s and 1960s, when the neoclassical synthesis bestrode economics like a colossus, the static orientation of both the Walrasian and the Keynesian research programs combined to distract economists from a more promising research program. Such a program, instead of treating expectations either as parametric constants or as merely adaptive, based on an assumed distributed-lag function, might have considered whether expectations could perform a potentially equilibrating role in a general equilibrium model.

The equilibrating role of expectations, though implicit in various contributions by Hayek, Myrdal, Lindahl, Irving Fisher, and even Keynes, is contingent, so that equilibrium is not inevitable, only a possibility. Instead, the introduction of expectations as an equilibrating variable did not occur until the mid-1970s, when Robert Lucas, Tom Sargent and Neil Wallace, borrowing from John Muth’s work in applied microeconomics, introduced the idea of rational expectations into macroeconomics. But in introducing rational expectations, Lucas et al. made rational expectations not the condition of a contingent equilibrium but an indisputable postulate guaranteeing the realization of equilibrium, without offering any theoretical account of a mechanism whereby the rationality of expectations is achieved.

The fifth essay by Michel De Vroey (“Microfoundations: a decisive dividing line between Keynesian and new classical macroeconomics?”) is a philosophically sophisticated analysis of Lucasian microfoundations methodological principles. De Vroey begins by crediting Lucas with the revolution in macroeconomics that displaced a Keynesian orthodoxy already discredited in the eyes of many economists after its failure to account for simultaneously rising inflation and unemployment.

The apparent theoretical disorder characterizing the Keynesian orthodoxy and its Monetarist opposition left a void for Lucas to fill by providing a seemingly rigorous, microfounded alternative to the confused state of macroeconomics. And microfoundations became the methodological weapon by which Lucas and his associates and followers imposed an iron discipline on the unruly community of macroeconomists. “In Lucas’s eyes,” De Vroey aptly writes, “the mere intention to produce a theory of involuntary unemployment constitutes an infringement of the equilibrium discipline.” Showing that his description of Lucas is hardly overstated, De Vroey quotes from the famous 1978 joint declaration of war issued by Lucas and Sargent against Keynesian macroeconomics:

After freeing himself of the straightjacket (or discipline) imposed by the classical postulates, Keynes described a model in which rules of thumb, such as the consumption function and liquidity preference schedule, took the place of decision functions that a classical economist would insist be derived from the theory of choice. And rather than require that wages and prices be determined by the postulate that markets clear – which for the labor market seemed patently contradicted by the severity of business depressions – Keynes took as an unexamined postulate that money wages are sticky, meaning that they are set at a level or by a process that could be taken as uninfluenced by the macroeconomic forces he proposed to analyze.

Echoing Keynes’s famous description of the sway of Ricardian doctrines over England in the nineteenth century, De Vroey remarks that the microfoundations requirement “conquered macroeconomics as quickly and thoroughly as the Holy Inquisition conquered Spain,” noting, even more tellingly, that the conquest was achieved without providing any justification. Ricardo had, at least, provided a substantive analysis that could be debated; Lucas offered only an indisputable methodological imperative about the sole acceptable mode of macroeconomic reasoning. Just as optimization was a necessary component of the equilibrium discipline that had to be ruthlessly imposed on pain of excommunication from the macroeconomic community, so, too, was the correlative principle of market clearing. To deviate from the market-clearing postulate was ipso facto evidence of an impure and heretical state of mind. De Vroey further quotes from the war declaration of Lucas and Sargent:

Cleared markets is simply a principle, not verifiable by direct observation, which may or may not be useful in constructing successful hypotheses about the behavior of these [time] series.

What was only implicit in the war declaration became evident later after right-thinking was enforced, and woe unto him that dared deviate from the right way of thinking.

But, as De Vroey skillfully shows, what is most remarkable is that, having declared market clearing an indisputable methodological principle, Lucas, contrary to his own demand for theoretical discipline, used the market-clearing postulate to free himself from the very equilibrium discipline he claimed to be imposing. How did the market-clearing postulate liberate Lucas from equilibrium discipline? To show how the sleight-of-hand was accomplished, De Vroey, in an argument parallel to that of Hoover in chapter one and that suggested by Leonard in chapter two, contrasts Lucas’s conception of microfoundations with a different microfoundations conception espoused by Hayek and Patinkin. Unlike Lucas, Hayek and Patinkin recognized that the optimization of individual economic agents is conditional on the optimization of other agents. Lucas assumes that if all agents optimize, then their individual optimization ensures that a social optimum is achieved, the whole being the sum of its parts. But that assumption ignores that the choices made by interacting agents are themselves interdependent.

To capture the distinction between independent and interdependent optimization, DeVroey distinguishes between optimal plans and optimal behavior. Behavior is optimal only if an optimal plan can be executed. All agents can optimize individually in making their plans, but the optimality of their behavior depends on their capacity to carry those plans out. And the capacity of each to carry out his plan is contingent on the optimal choices of all other agents.

Optimizing plans refers to agents’ intentions before the opening of trading, the solution to the choice-theoretical problem with which they are faced. Optimizing behavior refers to what is observable after trading has started. Thus optimal behavior implies that the optimal plan has been realized. . . . [O]ptimizing plans and optimizing behavior need to be logically separated – there is a difference between finding a solution to a choice problem and implementing the solution. In contrast, whenever optimizing behavior is the sole concept used, the possibility of there being a difference between them is discarded by definition. This is the standpoint taken by Lucas and Sargent. Once it is adopted, it becomes misleading to claim . . . that the microfoundations requirement is based on two criteria, optimizing behavior and market clearing. A single criterion is needed, and it is irrelevant whether this is called generalized optimizing behavior or market clearing. (De Vroey, p. 176)

Each agent is free to optimize his plan, but no agent can execute his optimal plan unless the plan coincides with the complementary plans of other agents. So, the execution of an optimal plan is not within the unilateral control of an agent formulating his own plan. One can readily assume that agents optimize their plans, but one cannot just assume that those plans can be executed as planned. The optimality of interdependent plans is not self-evident; it is a proposition that must be demonstrated. Assuming that agents optimize, Lucas simply asserts that, because agents optimize, markets must clear.

That is a remarkable non sequitur. And from that non sequitur, Lucas jumps to a further non sequitur: that an optimizing representative agent is all that’s required for a macroeconomic model. The logical straightjacket (or discipline) of demonstrating that interdependent optimal plans are consistent is thus discarded (or trampled upon). Lucas’s insistence on a market-clearing principle turns out to be a subterfuge by which the pretense of upholding the equilibrium discipline conceals its violation in practice.

My own view is that the assumption that agents formulate optimizing plans cannot be maintained without further analysis unless the agents are operating in isolation. If the agents are interacting with each other, the assumption that they optimize requires a theory of their interaction. If the focus is on equilibrium interactions, then one can have a theory of equilibrium, but then the possibility of non-equilibrium states must also be acknowledged.

That is what John Nash did in developing his equilibrium theory of positive-sum games. He defined conditions for the existence of equilibrium, but he offered no theory of how equilibrium is achieved. Lacking such a theory, he acknowledged that non-equilibrium solutions might occur, e.g., in some variant of the Holmes-Moriarty game. To simply assert that because interdependent agents try to optimize, they must, as a matter of principle, succeed in optimizing is to engage in question-begging on a truly grand scale. To insist, as a matter of methodological principle, that everyone else must also engage in question-begging on an equally grand scale is what I have previously called methodological arrogance, though an even harsher description might be appropriate.

In the sixth essay (“Not Going Away: Microfoundations in the making of a new consensus in macroeconomics”), Pedro Duarte considers the current state of apparent macroeconomic consensus in the wake of the sweeping triumph of the Lucasian microfoundations methodological imperative. Mainstream macroeconomists from a variety of backgrounds have now reconciled themselves and adjusted to the methodological absolutism that Lucas and his associates and followers have imposed on macroeconomic theorizing. Leading proponents of the current consensus are pleased to announce, in unseemly self-satisfaction, that macroeconomics is now – but presumably not previously – “firmly grounded in the principles of economic [presumably neoclassical] theory.” But the underlying conception of neoclassical economic theory motivating such a statement is almost laughably narrow and, as I have just shown, strictly false, even if, for argument’s sake, that narrow conception is accepted.

Duarte provides an informative historical account of the process whereby most mainstream Keynesians and former old-line Monetarists, who had, in fact, adopted much of the underlying Keynesian theoretical framework themselves, became reconciled to the non-negotiable methodological microfoundational demands upon which Lucas and his New Classical followers and Real-Business-Cycle fellow-travelers insisted. While Lucas was willing to tolerate differences of opinion about the importance of monetary factors in accounting for business-cycle fluctuations in real output and employment, and even willing to countenance a role for countercyclical monetary policy, such differences of opinion could be tolerated only if they could be derived from an acceptable microfounded model in which the agent(s) form rational expectations. If New Keynesians were able to produce results rationalizing countercyclical policies in such microfounded models with rational expectations, Lucas was satisfied. Presumably, Lucas felt the price of conceding the theoretical legitimacy of countercyclical policy was worth paying in order to achieve methodological hegemony over macroeconomic theory.

And no doubt, for Lucas, the price was worth paying, because it led to what Marvin Goodfriend and Robert King called the New Neoclassical Synthesis in their 1997 article ushering in the new era of good feelings, a synthesis based on “the systematic application of intertemporal optimization and rational expectations” while embodying “the insights of monetarists . . . regarding the theory and practice of monetary policy.”

While the first synthesis brought about a convergence of sorts between the disparate Walrasian and Keynesian theoretical frameworks, the convergence proved unstable because the inherent theoretical weaknesses of both paradigms were unable to withstand criticisms of the theoretical apparatus and of the policy recommendations emerging from that synthesis, particularly an inability to provide a straightforward analysis of inflation when it became a serious policy problem in the late 1960s and 1970s. But neither the Keynesian nor the Walrasian paradigms were developing in a way that addressed the points of most serious weakness.

On the Keynesian side, the defects included the static nature of the workhorse IS-LM model, the absence of a market for real capital and of a market for endogenous money. On the Walrasian side, the defects were the lack of any theory of actual price determination or of dynamic adjustment. The Hicksian temporary equilibrium paradigm might have provided a viable way forward, and for a very different kind of synthesis, but not even Hicks himself realized the potential of his own creation.

While the first synthesis was a product of convenience and misplaced optimism, the second synthesis is a product of methodological hubris and misplaced complacency, derived from an elementary misunderstanding of the distinction between optimization by a single agent and the simultaneous optimization of two or more independent, yet interdependent, agents. The equilibrium of each is the result of the equilibrium of all, and a theory of optimization involving two or more agents requires a theory of how two or more interdependent agents can optimize simultaneously. The New Neoclassical Synthesis rests on the demand for a macroeconomic theory of individual optimization that refuses even to ask, let alone provide an answer to, the question whether the optimization that it demands is actually achieved in practice or what happens if it is not. This is not a synthesis that will last, or that deserves to. And the sooner it collapses, the better off macroeconomics will be.

What the answer is I don’t know, but if I had to offer a suggestion, the one offered by my teacher Axel Leijonhufvud towards the end of his great book, written more than half a century ago, strikes me as not bad at all:

One cannot assume that what went wrong was simply that Keynes slipped up here and there in his adaptation of standard tools, and that consequently, if we go back and tinker a little more with the Marshallian toolbox his purposes will be realized. What is required, I believe, is a systematic investigation, from the standpoint of the information problems stressed in this study, of what elements of the static theory of resource allocation can without further ado be utilized in the analysis of dynamic and historical systems. This, of course, would be merely a first step: the gap yawns very wide between the systematic and rigorous modern analysis of the stability of “featureless,” pure exchange systems and Keynes’ inspired sketch of the income-constrained process in a monetary-exchange-cum-production system. But even for such a first step, the prescription cannot be to “go back to Keynes.” If one must retrace some steps of past developments in order to get on the right track—and that is probably advisable—my own preference is to go back to Hayek. Hayek’s Gestalt-conception of what happens during business cycles, it has been generally agreed, was much less sound than Keynes’. As an unhappy consequence, his far superior work on the fundamentals of the problem has not received the attention it deserves. (p. 401)

I agree with all that, but would also recommend Roy Radner’s development of an alternative to the Arrow-Debreu version of Walrasian general equilibrium theory that can accommodate Hicksian temporary equilibrium, and Hawtrey’s important contributions to our understanding of monetary theory and the role and potential instability of endogenous bank money. On top of that, Franklin Fisher in his important work, The Disequilibrium Foundations of Equilibrium Economics, has given us further valuable guidance in how to improve the current sorry state of macroeconomics.

 

What’s Wrong with DSGE Models Is Not Representative Agency

The basic DSGE macroeconomic model taught to students is based on a representative agent. Many critics of modern macroeconomics and DSGE models have therefore latched on to the representative agent as the key – and disqualifying — feature of DSGE models, and, by extension, of modern macroeconomics. Criticism of representative-agent models is certainly appropriate, because, as Alan Kirman admirably explained some 25 years ago, the simplification inherent in a macroeconomic model based on a representative agent renders the model entirely unsuitable for most of the problems that a macroeconomic model might be expected to address, like explaining why economies might suffer from aggregate fluctuations in output, employment and the price level.

While altogether fitting and proper, criticism of the representative-agent model in macroeconomics had an unfortunate unintended consequence, which was to focus attention on representative agency rather than on the deeper problems with DSGE models, problems that cannot be solved by just throwing the Representative Agent under the bus.

Before explaining why representative agency is not the root problem with DSGE models, let’s take a moment or two to talk about where the idea of representative agency comes from. The idea can be traced back to F. Y. Edgeworth who, in his exposition of the ideas of W. S. Jevons – one of the three marginal revolutionaries of the 1870s – introduced two “representative particulars” to illustrate how trade could maximize the utility of each particular subject to the benchmark utility of the counterparty. That analysis of two different representative particulars, reflected in what is now called the Edgeworth Box, remains one of the outstanding achievements and pedagogical tools of economics. (See Thomas Humphrey’s superb account of the historical development of the Box and the many contributions to economic theory that it facilitated.) But Edgeworth’s analysis and its derivatives always focused on the incentives of two representative agents rather than a single isolated representative agent.

Only a few years later, Alfred Marshall, in his Principles of Economics, offered an analysis of how the equilibrium price for the product of a competitive industry is determined by the demand for that product (derived from the marginal utility accruing to consumers from increments of the product) and the supply of that product (derived from the cost of production). The concepts of the marginal cost of an individual firm as a function of quantity produced and the supply of an individual firm as a function of price not yet having been formulated, Marshall, in a kind of hand-waving exercise, introduced a hypothetical representative firm as a stand-in for the entire industry.

The completely ad hoc and artificial concept of a representative firm was not well-received by Marshall’s contemporaries, and the young Lionel Robbins, starting his long career at the London School of Economics, subjected the idea to withering criticism in a 1928 article. Even without Robbins’s criticism, the development of the basic theory of a profit-maximizing firm quickly led to the disappearance of Marshall’s concept from subsequent economics textbooks. James Hartley wrote about the short and unhappy life of Marshall’s Representative Firm in the Journal of Economic Perspectives.

One might have thought that the inauspicious career of Marshall’s Representative Firm would have discouraged modern macroeconomists from resurrecting the Representative Firm in the barely disguised form of a Representative Agent in their DSGE models, but the convenience and relative simplicity of solving a DSGE model for a single agent was too enticing to be resisted.

Therein lies the difference between the theory of the firm and a macroeconomic theory. The gain in convenience from adopting the Representative Firm was radically reduced by Marshall’s Cambridge students and successors who, without the representative firm, provided a more rigorous, more satisfying and more flexible exposition of the industry supply curve and the corresponding partial-equilibrium analysis than Marshall had with it. Providing no advantages of realism, logical coherence, analytical versatility or heuristic intuition, the Representative Firm was unceremoniously expelled from the polite company of economists.

However, as a heuristic device for portraying certain properties of an equilibrium state — whose existence is assumed, not derived — even a single representative individual or agent proved to be a serviceable device with which to display the defining first-order conditions: the simultaneous equality of the marginal rates of substitution in consumption and in production with the marginal rate of substitution implied by market prices. Unlike the Edgeworth Box, populated by two representative agents whose different endowments or preference maps result in mutually beneficial trade, the representative agent, even if afforded the opportunity to trade, can find no gain from engaging in it.

An excellent example of this heuristic was provided by Jack Hirshleifer in his 1970 textbook Investment, Interest, and Capital, wherein he adapted the basic Fisherian model of intertemporal consumption, production and exchange opportunities, representing the canonical Fisherian exposition in a single basic diagram. But the representative agent necessarily represents a state of no trade, because, for a single isolated agent, production and consumption must coincide, and the equilibrium price vector must have the property that the representative agent chooses not to trade at that price vector. I reproduce Hirshleifer’s diagram (Figure 4-6) in the attached chart.

Here is how Hirshleifer explained what was going on.

Figure 4-6 illustrates a technique that will be used often from now on: the representative-individual device. If one makes the assumption that all individuals have identical tastes and are identically situated with respect to endowments and productive opportunities, it follows that the individual optimum must be a microcosm of the social equilibrium. In this model the productive and consumptive solutions coincide, as in the Robinson Crusoe case. Nevertheless, market opportunities exist, as indicated by the market line M’M’ through the tangency point P* = C*. But the price reflected in the slope of M’M’ is a sustaining price, such that each individual prefers to hold the combination attained by productive transformations rather than engage in market transactions. The representative-individual device is helpful in suggesting how the equilibrium will respond to changes in exogenous data—the proviso being that such changes do not modify the distribution of wealth among individuals.

While not spelling out the limitations of the representative-individual device, Hirshleifer makes it clear that the representative-agent device is being used as an expository technique to describe, not as an analytical tool to determine, intertemporal equilibrium. The existence of intertemporal equilibrium does not depend on the assumptions necessary to allow a representative individual to serve as a stand-in for all other agents. The representative-individual is portrayed only to provide the student with a special case serving as a visual aid with which to gain an intuitive grasp of the necessary conditions characterizing an intertemporal equilibrium in production and consumption.
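
To see the sustaining-price idea in the simplest possible setting, here is a minimal numerical sketch of a two-period Fisherian endowment economy (no production) with identical log-utility individuals. The discount factor and endowments are made up for illustration; the sustaining interest rate is the one at which the representative individual's optimal plan is simply to consume his endowment, so that no trade occurs.

```python
# Two-period Fisherian endowment economy with a representative individual,
# log utility and no production. All numbers are illustrative.
beta = 0.95                  # subjective discount factor
e0, e1 = 1.0, 1.2            # endowments of the consumption good in periods 0 and 1

# Euler equation with log utility: 1/c0 = beta * (1 + r) / c1.
# Imposing the no-trade condition c0 = e0, c1 = e1 gives the sustaining rate:
r = e1 / (beta * e0) - 1.0
print(f"sustaining real interest rate: {r:.4f}")

# Check: at that rate the individually optimal plan reproduces the endowment.
wealth = e0 + e1 / (1 + r)                 # present value of the endowment
c0 = wealth / (1 + beta)                   # log-utility demand for current consumption
c1 = (wealth - c0) * (1 + r)               # the remainder is saved and consumed next period
print(f"planned consumption: c0 = {c0:.4f}, c1 = {c1:.4f}  (endowment: {e0}, {e1})")
```

As in Hirshleifer's diagram, the market opportunity is there, but at the sustaining price the representative individual chooses not to use it.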

But the role of the representative agent in the DSGE model is very different from the role of the representative individual in Hirshleifer’s exposition of the canonical Fisherian theory. In Hirshleifer’s exposition, the representative individual is just a special case and a visual aid with no independent analytical importance. In contrast to Hirshleifer’s deployment of the representative individual, the representative agent in the DSGE model is used as an assumption whereby an analytical solution to the model can be derived, allowing the modeler to generate quantitative results to be compared with existing time-series data, to generate forecasts of future economic conditions, and to evaluate the effects of alternative policy rules.

The prominent and dubious role of the representative agent in DSGE models provided a convenient target at which critics of DSGE models could direct their criticisms. In Congressional testimony, Robert Solow famously attacked DSGE models and used their reliance on the representative agent to make them seem, well, simply ridiculous.

Most economists are willing to believe that most individual “agents” – consumers, investors, borrowers, lenders, workers, employers – make their decisions so as to do the best that they can for themselves, given their possibilities and their information. Clearly they do not always behave in this rational way, and systematic deviations are well worth studying. But this is not a bad first approximation in many cases. The DSGE school populates its simplified economy – remember that all economics is about simplified economies just as biology is about simplified cells – with exactly one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever. One important consequence of this “representative agent” assumption is that there are no conflicts of interest, no incompatible expectations, no deceptions.

This all-purpose decision-maker essentially runs the economy according to its own preferences. Not directly, of course: the economy has to operate through generally well-behaved markets and prices. Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.

An obvious example is that the DSGE story has no real room for unemployment of the kind we see most of the time, and especially now: unemployment that is pure waste. There are competent workers, willing to work at the prevailing wage or even a bit less, but the potential job is stymied by a market failure. The economy is unable to organize a win-win situation that is apparently there for the taking. This sort of outcome is incompatible with the notion that the economy is in rational pursuit of an intelligible goal. The only way that DSGE and related models can cope with unemployment is to make it somehow voluntary, a choice of current leisure or a desire to retain some kind of flexibility for the future or something like that. But this is exactly the sort of explanation that does not pass the smell test.

While Solow’s criticism of the representative agent was correct, he left himself open to an effective rejoinder by defenders of DSGE models who could point out that the representative agent was adopted by DSGE modelers not because it was an essential feature of the DSGE model but because it enabled DSGE modelers to simplify the task of analytically solving for an equilibrium solution. With enough time and computing power, however, DSGE modelers were able to write down models with a few heterogeneous agents (themselves representative of particular kinds of agents in the model) and then crank out an equilibrium solution for those models.

Unfortunately for Solow, V. V. Chari also testified at the same hearing, and he responded directly to Solow, denying that DSGE models necessarily entail the assumption of a representative agent and identifying numerous examples even in 2010 of DSGE models with heterogeneous agents.

What progress have we made in modern macro? State of the art models in, say, 1982, had a representative agent, no role for unemployment, no role for financial factors, no sticky prices or sticky wages, no role for crises and no role for government. What do modern macroeconomic models look like? The models have all kinds of heterogeneity in behavior and decisions. This heterogeneity arises because people’s objectives differ, they differ by age, by information, by the history of their past experiences. Please look at the seminal work by Rao Aiyagari, Per Krusell and Tony Smith, Tim Kehoe and David Levine, Victor Rios Rull, Nobu Kiyotaki and John Moore. All of them . . . prominent macroeconomists at leading departments . . . much of their work is explicitly about models without representative agents. Any claim that modern macro is dominated by representative-agent models is wrong.

So on the narrow question of whether DSGE models are necessarily members of the representative-agent family, Solow was debunked by Chari. But debunking the claim that DSGE models must be representative-agent models doesn’t mean that DSGE models have the basic property that some of us at least seek in a macro-model: the capacity to explain how and why an economy may deviate from a potential full-employment time path.

Chari actually addressed the charge that DSGE models cannot explain lapses from full employment (to use Pigou’s rather anodyne terminology for depressions). Here is Chari’s response:

In terms of unemployment, the baseline model used in the analysis of labor markets in modern macroeconomics is the Mortensen-Pissarides model. The main point of this model is to focus on the dynamics of unemployment. It is specifically a model in which labor markets are beset with frictions.

Chari’s response was thus to treat lapses from full employment as “frictions.” To treat unemployment as the result of one or more frictions is to take a very narrow view of the potential causes of unemployment. The argument that Keynes made in the General Theory was that unemployment is a systemic failure of a market economy, which lacks an error-correction mechanism that is capable of returning the economy to a full-employment state, at least not within a reasonable period of time.

The basic approach of DSGE is to treat the solution of the model as an optimal solution of a problem. In the representative-agent version of a DSGE model, the optimal solution is the optimal solution for a single agent, so optimality is already baked into the model. With heterogeneous agents, the solution of the model is a set of mutually consistent optimal plans, and optimality is baked into that heterogeneous-agent DSGE model as well. Sophisticated heterogeneous-agent models can incorporate various frictions and constraints that cause the solution to deviate from a hypothetical frictionless, unconstrained first-best optimum.
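
To make concrete what it means for optimality to be baked in, here is a minimal sketch of the simplest representative-agent model of this kind, the textbook Brock-Mirman growth model (log utility, Cobb-Douglas output, full depreciation, no shocks), whose solution is literally the optimal plan of a single agent. The parameter values are illustrative; with these special assumptions the optimal saving rule has the well-known closed form k' = αβk^α.

```python
# Brock-Mirman growth model: a representative-agent "DSGE" reduced to a single
# optimization problem. Log utility, output k**alpha, full depreciation, no shocks.
# Parameter values are illustrative.
alpha, beta = 0.36, 0.96

def next_capital(k):
    return alpha * beta * k ** alpha      # the planner's (and the decentralized) policy

k = 0.1
path = []
for t in range(30):
    y = k ** alpha
    k_next = next_capital(k)
    c = y - k_next                        # consumption is whatever the planner does not save
    path.append((t, k, c))
    k = k_next

# Euler-equation check: 1/c_t = beta * alpha * k_{t+1}**(alpha - 1) / c_{t+1}
t, k_t, c_t = path[-2]
_, k_t1, c_t1 = path[-1]
lhs = 1.0 / c_t
rhs = beta * alpha * k_t1 ** (alpha - 1) / c_t1
print(f"steady-state capital: {(alpha * beta) ** (1 / (1 - alpha)):.4f}")
print(f"Euler-equation residual near the end of the path: {lhs - rhs:.2e}")
```

Whatever path such a model generates is, by construction, the representative agent's optimum; the question whether independently formulated plans could fail to mesh simply cannot arise within it.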

The policy message emerging from this modeling approach is that unemployment is attributable to frictions and other distortions that prevent the attainment of a first-best optimum that would, in their absence, be reached automatically. The possibility that the optimal plans of individuals might be incompatible, resulting in a systemic breakdown — that there could be a failure to coordinate — does not even come up for discussion.

One needn’t accept Keynes’s own theoretical explanation of unemployment to find the attribution of cyclical unemployment to frictions deeply problematic. But, as I have asserted in many previous posts (e.g., here and here) a modeling approach that excludes a priori any systemic explanation of cyclical unemployment, attributing instead all cyclical unemployment to frictions or inefficient constraints on market pricing, cannot be regarded as anything but an exercise in question begging.

 

The Standard Narrative on the History of Macroeconomics: An Exercise in Self-Serving Apologetics

During my recent hiatus from blogging, I have been pondering an important paper presented in June at the History of Economics Society meeting in Toronto, “The Standard Narrative on History of Macroeconomics: Central Banks and DSGE Models” by Francesco Sergi of the University of Bristol, which was selected by the History of Economics Society as the best conference paper by a young scholar in 2017.

Here is the abstract of Sergi’s paper:

How do macroeconomists write the history of their own discipline? This article provides a careful reconstruction of the history of macroeconomics told by the practitioners working today in the dynamic stochastic general equilibrium (DSGE) approach.

Such a tale is a “standard narrative”: a widespread and “standardizing” view of macroeconomics as a field evolving toward “scientific progress”. The standard narrative explains scientific progress as resulting from two factors: “consensus” about theory and “technical change” in econometric tools and computational power. This interpretation is a distinctive feature of central banks’ technical reports about their DSGE models.

Furthermore, such a view on “consensus” and “technical change” is a significantly different view with respect to similar tales told by macroeconomists in the past — which rather emphasized the role of “scientific revolutions” and struggles among competing “schools of thought”. Thus, this difference raises some new questions for historians of macroeconomics.

Sergi’s paper is too long and too rich in content to summarize easily in this post, so what I will do is reproduce and comment on some of the many quotations provided by Sergi, taken mostly from central-bank reports, but also from some leading macroeconomic textbooks and historical survey papers, about the “progress” of modern macroeconomics, and especially about the critical role played by “microfoundations” in achieving that progress. The general tenor of the standard narrative is captured well by the following quotation from V. V. Chari:

[A]ny interesting model must be a dynamic stochastic general equilibrium model. From this perspective, there is no other game in town. […] A useful aphorism in macroeconomics is: “If you have an interesting and coherent story to tell, you can tell it in a DSGE model.” (Chari 2010, 2)

I could elaborate on this quotation at length, but I will just leave it out there for readers to ponder, with a link to an earlier post of mine about methodological arrogance. Instead I will focus on two other sections of Sergi’s paper: “the five steps of theoretical progress” and “microfoundations as theoretical progress.” Here is how Sergi explains the role of the five steps:

The standard narrative provides a detailed account of the progressive evolution toward the synthesis. Following a teleological perspective, each step of this evolution is an incremental, linear improvement of the theoretical tool box for model building. The standard narrative identifies five steps . . . . Each step corresponds to the emergence of a school of thought. Therefore, in the standard narrative, there are not such things as competing schools of thought and revolutions. Firstly, because schools of thought are represented as a sequence; one school (one step) is always leading to another school (the following step), hence different schools are not coexisting for a long period of time. Secondly, there are no revolutions because, while emerging, new schools of thought [do] not overthrow the previous ones; instead, they suggest improvements and amendments that are accepted as an improvement by pre-existing schools; therefore, accumulation of knowledge takes place thanks to consensus. (pp. 17-18)

The first step in the standard narrative is the family of Keynesian macroeconometric models of the 1950s and 1960s, the primitive ancestors of the modern DSGE models. The second step was the emergence of New Classical macroeconomics, which introduced the ideas of rational expectations and dynamic optimization into theoretical macroeconomic discourse in the 1970s. The third step was the development, inspired by New Classical ideas, of Real-Business-Cycle models in the 1980s, and the fourth step was the introduction of New Keynesian models in the late 1980s and 1990s that tweaked the Real-Business-Cycle models in ways that rationalized the use of counter-cyclical macroeconomic policy within the theoretical framework of the Real-Business-Cycle approach. The final step, the DSGE model, emerged more or less naturally as a synthesis of the converging Real-Business-Cycle and New Keynesian approaches.

After detailing the five steps of theoretical progress, Sergi focuses attention on “the crucial improvement” that allowed the tool box of macroeconomic modelling to be extended in such a theoretically fruitful way: the insistence on providing explicit microfoundations for macroeconomic models. He writes:

Abiding [by] the Lucasian microfoundational program is put forward by DSGE modellers as the very fundamental essence of theoretical progress allowed by [the] consensus. As Sanjay K. Chugh (University of Pennsylvania) explains in the historical chapter of his textbook, microfoundations is all what modern macroeconomics is about: (p. 20)

Modern macroeconomics begins by explicitly studying the microeconomic principles of utility maximization, profit maximization and market-clearing. [. . . ] This modern macroeconomics quickly captured the attention of the profession through the 1980s [because] it actually begins with microeconomic principles, which was a rather attractive idea. Rather than building a framework of economy-wide events from the top down [. . .] one could build this framework using microeconomic discipline from the bottom up. (Chugh 2015, 170)

Chugh’s rationale for microfoundations is a naïve expression of reductionist bias dressed up as simple homespun common sense. Everyone knows that you should build from the bottom up, not from the top down, right? But things are not always quite as simple as they seem. Here is an attempt, offered in a 2009 technical report written by Cuche-Curti et al. for the Swiss National Bank, to present microfoundations as cutting-edge and sophisticated.

The key property of DSGE models is that they rely on explicit micro-foundations and a rational treatment of expectations in a general equilibrium context. They thus provide a coherent and compelling theoretical framework for macroeconomic analysis. (Cuche-Curti et al. 2009, 6)

A similar statement is made by Gomes et al in a 2010 technical report for the European Central Bank:

The microfoundations of the model together with its rich structure allow [us] to conduct a quantitative analysis in a theoretically coherent and fully consistent model setup, clearly spelling out all the policy implications. (Gomes et al. 2010, 5)

These laudatory descriptions of the DSGE model stress its “coherence” as a primary virtue. What is meant by “coherence” is spelled out more explicitly in a 2006 technical report describing NEMO, a macromodel of the Norwegian economy, by Brubakk et al. for the Norges Bank.

Various agents’ behavior is modelled explicitly in NEMO, based on microeconomic theory. A consistent theoretical framework makes it easier to interpret relationships and mechanisms in the model in the light of economic theory. One advantage is that we can analyse the economic effects of changes of a more structural nature […] [making it] possible to provide a consistent and detailed economic rationale for Norges Bank’s projections for the Norwegian economy. This distinguishes NEMO from purely statistical models, which to a limited extent provide scope for economic interpretations. (Brubakk and Sveen 2009, 39)

By creating microfounded models, in which all agents are optimizers making choices consistent with the postulates of microeconomic theory, DSGE model-builders, in effect, create “laboratories” from which to predict the consequences of alternative monetary policies, enabling policy makers to make informed policy choices. I pause merely to draw attention to the tendentious and misleading misappropriation of the language of empirical science in these characteristically self-aggrandizing references to DSGE models as “laboratories,” as if what goes on in such models were determined by an actual physical process, as is routinely the case in the laboratories of physical and natural scientists, rather than by speculative exercises in high-level calculation derived from the manipulation of DSGE models.

As a result of recent advances in macroeconomic theory and computational techniques, it has become feasible to construct richly structured dynamic stochastic general equilibrium models and use them as laboratories for the study of business cycles and for the formulation and analysis of monetary policy. (Cuche-Curti et al. 2009, 39)

Policy makers can be confident in the conditional predictions, corresponding to the policy alternatives under consideration, derived from their “laboratory” DSGE models, because those models, having been constructed on the basis of the postulates of economic theory, are microfounded, embodying deep structural parameters that are invariant to policy changes. Microfounded models are thus immune to the Lucas Critique of macroeconomic policy evaluation, under which the empirically estimated coefficients of traditional Keynesian macroeconometric models cannot be assumed to remain constant under policy changes, because those coefficient estimates are themselves conditional on policy choices.
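
The logic of the Critique is easy to exhibit in a toy model. The sketch below is a standard textbook-style illustration, not anything drawn from the central-bank reports quoted here, and all the parameter values are made up: output depends on inflation surprises through a deep parameter theta, while the policy rule determines how agents forecast inflation, so the reduced-form coefficient linking output to lagged inflation shifts whenever the policy parameter rho changes, even though theta itself never does.

```python
import numpy as np

# Deep structure: a Lucas-type supply curve y_t = theta * (pi_t - E[pi_t]) + eps_t.
# Policy rule: pi_t = rho * pi_{t-1} + u_t, with rational agents forecasting
# E[pi_t] = rho * pi_{t-1}. The reduced-form regression of y_t on (pi_t, pi_{t-1})
# then has coefficients (theta, -theta*rho): the second shifts with the policy rule.
# All parameter values are illustrative.
rng = np.random.default_rng(0)
theta = 0.8                                        # deep, policy-invariant parameter

def simulate_and_estimate(rho, T=50_000):
    pi = np.zeros(T)
    for t in range(1, T):
        pi[t] = rho * pi[t - 1] + rng.normal(scale=1.0)
    expected_pi = np.concatenate(([0.0], rho * pi[:-1]))
    y = theta * (pi - expected_pi) + rng.normal(scale=0.1, size=T)
    X = np.column_stack([pi[1:], pi[:-1]])         # regressors: pi_t and pi_{t-1}
    coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coef

for rho in (0.3, 0.9):
    b_pi, b_lag = simulate_and_estimate(rho)
    print(f"rho = {rho}: estimated coefficients on pi_t, pi_(t-1) = {b_pi:.3f}, {b_lag:.3f}")
```

A reduced-form coefficient estimated under one policy rule is therefore an unreliable guide to outcomes under another rule, which is precisely the defect that microfounded models are claimed to escape.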

Here is how the point is made in three different central bank technical reports: by Argov et al. in a 2012 technical report about MOISE, a DSGE model for the Israeli economy; by Cuche-Curti et al. in their 2009 report for the Swiss National Bank; and by Medina and Soto in a 2006 technical report for the Central Bank of Chile about a new DSGE model for the Chilean economy.

Being micro-founded, the model enables the central bank to assess the effect of its alternative policy choices on the future paths of the economy’s endogenous variables, in a way that is immune to the Lucas critique. (Argov et al. 2012, 5)

[The DSGE] approach has three distinct advantages in comparison to other modelling strategies. First and foremost, its microfoundations should allow it to escape the Lucas critique. (Cuche-Curti et al. 2009, 6)

The main advantage of this type of model, over more traditional reduce-form macro models, is that the structural interpretation of their parameters allows [it] to overcome the Lucas Critique. This is clearly an advantage for policy analysis. (Medina and Soto, 2006, 2)

These quotations show clearly that escaping, immunizing, or overcoming the Lucas Critique is viewed by DSGE modelers as the holy grail of macroeconomic model building and macroeconomic policy analysis. If the Lucas Critique cannot be neutralized, the coefficient estimates derived from reduced-form macroeconometric models cannot be treated as invariant to policy and therefore cannot provide a secure basis for predicting the effects of alternative policies. But DSGE models allow deep structural relationships, reflecting the axioms underlying microeconomic theory, to be estimated. Because those estimates reflect the deep, and presumably stable, microeconomic structure of the economy, DSGE modelers claim that they provide policy makers with a reliable basis for conditional forecasting of the effects of macroeconomic policy.
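To make the distinction between reduced-form and deep parameters concrete, here is a stylized illustration of my own, in the spirit of Lucas’s surprise-supply examples, not drawn from any of the reports quoted above. Suppose output responds only to unanticipated money,

\[ y_t = \alpha\,(m_t - E_{t-1}m_t) + \epsilon_t, \]

and the central bank follows the rule

\[ m_t = \mu + \rho\, m_{t-1} + u_t. \]

With rational expectations, \(E_{t-1}m_t = \mu + \rho m_{t-1}\), so the reduced-form relationship between output and money is

\[ y_t = -\alpha\mu + \alpha\, m_t - \alpha\rho\, m_{t-1} + \epsilon_t. \]

The reduced-form coefficients are functions of the policy parameters \(\mu\) and \(\rho\); change the policy rule and those coefficients change with it, which is precisely the Lucas point. The claim made on behalf of DSGE models is that estimating \(\alpha\) directly, rather than the reduced-form coefficients, yields a parameter that survives the change in policy.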

Because of the consistently poor track record of DSGE models in actual forecasting (for evidence of that poor track record, see the paper by Carlaw and Lipsey comparing the predictive performance of DSGE models with that of more traditional macroeconometric models, and my post about their paper), the emphasis placed on the Lucas Critique by DSGE modelers has an apologetic character: the Lucas Critique is relentlessly invoked to account for, and explain away, the relatively poor predictive performance of DSGE models. But if DSGE models really are better than traditional macro models, why are their unconditional predictions not at least as good as those of traditional macroeconometric models? Evidently, the estimates of deep structural relationships provided by microfounded models are not as reliable as DSGE apologetics suggests.

And the reason that the estimates of deep structural relationships derived from DSGE models are not reliable is that those models, no less than traditional macroeconometric models, are subject to the Lucas Critique, the deep microeconomic structural relationships embodied in DSGE models being conditional on the existence of a unique equilibrium solution that persists long enough for the structural relationships characterizing that equilibrium to be inferred from the data-generating mechanism whereby those models are estimated. (I have made this point previously here.) But if the data-generating mechanism does not conform to the unique general equilibrium upon whose existence the presumed deep structural relationships of microeconomic theory embodied in DSGE models are conditioned, the econometric estimates derived from DSGE models cannot capture the desired deep structural relationships, and the resulting structural estimates are therefore incapable of providing a reliable basis for macroeconomic-policy analysis or for conditional forecasts of the effects of alternative policies, much less unconditional forecasts of endogenous macroeconomic variables.

Of course, the problem is even more intractable than the discussion above implies, because there is no reason why the deep structural relationships corresponding to a particular equilibrium should be invariant to changes in that equilibrium. Any change in economic policy that displaces a pre-existing equilibrium, let alone any unforeseen change in technology, tastes, or resource endowments that does so, will necessarily cause the deep structural relationships to change correspondingly. So the deep structural parameters upon whose invariance the supposedly unique capacity of DSGE models to provide reliable policy analysis depends simply don’t exist. Policy making based on DSGE models is as much an uncertain art, requiring the exercise of finely developed judgment and intuition, as policy making based on any other kind of economic modeling. DSGE models provide no uniquely reliable basis for making macroeconomic policy.
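To see why even the supposedly deep parameters may fail to be invariant, return to the stylized example above (again, my own illustration, not an argument made in any of the reports quoted). In a Lucas-style signal-extraction setting, the slope of the surprise-supply relation is not a primitive of tastes and technology but something like

\[ \alpha = \gamma\,\frac{\sigma_z^2}{\sigma_z^2 + \sigma_p^2}, \]

where \(\gamma\) reflects underlying supply elasticities, \(\sigma_z^2\) is the variance of idiosyncratic relative-price shocks, and \(\sigma_p^2\) is the variance of aggregate nominal shocks. A change in the monetary-policy rule that alters \(\sigma_p^2\) therefore changes \(\alpha\) itself: a parameter that looks deep in one specification of the environment turns out to be conditional on the policy regime and the equilibrium it supports.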

References

Argov, E., Barnea, E., Binyamini, A., Borenstein, E., Elkayam, D., and Rozenshtrom, I. (2012). MOISE: A DSGE Model for the Israeli Economy. Technical Report 2012.06, Bank of Israel.
Brubakk, L., Husebø, T. A., Maih, J., Olsen, K., and Østnor, M. (2006). Finding NEMO: Documentation of the Norwegian Economy Model. Technical Report 2006/6, Norges Bank, Staff Memo.
Carlaw, K. I., and Lipsey, R. G. (2012). “Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy.” Journal of Evolutionary Economics. 22(4):735-66.
Chari, V. V. (2010). Testimony before the Committee on Science and Technology, Subcommittee on Investigations and Oversight, US House of Representatives. In Building a Science of Economics for the Real World.
Chugh, S. K. (2015). Modern Macroeconomics. MIT Press, Cambridge (MA).
Cuche-Curti, N. A., Dellas, H., and Natal, J.-M. (2009). DSGE-CH. A Dynamic Stochastic General Equilibrium Model for Switzerland. Technical Report 5, Swiss National Bank.
Gomes, S., Jacquinot, P., and Pisani, M. (2010). The EAGLE. A Model for Policy Analysis of Macroeconomic Interdependence in the Euro Area. Technical Report 1195, European Central Bank.
Medina, J. P. and Soto, C. (2006). Model for Analysis and Simulations (MAS): A New DSGE Model for the Chilean Economy. Technical report, Central Bank of Chile.
