Archive for the 'rational expectations' Category

Explaining the Hegemony of New Classical Economics

Simon Wren-Lewis, Robert Waldmann, and Paul Krugman have all recently devoted additional space to explaining – ruefully, for the most part – how it came about that New Classical Economics took over mainstream macroeconomics just about half a century after the Keynesian Revolution. And Mark Thoma got them all started with a complaint about the sorry state of modern macroeconomics and its failure to prevent or to cure the Little Depression.

Wren-Lewis believes that the main problem with modern macro is too much of a good thing, the good thing being microfoundations. Those microfoundations, in Wren-Lewis’s rendering, filled certain gaps in the ad hoc Keynesian expenditure functions. Although the gaps were not as serious as the New Classical School believed, adding an explicit model of intertemporal expenditure plans, derived from optimization conditions and rational expectations, was, in Wren-Lewis’s estimation, an improvement on the old Keynesian theory. The improvements could have been easily assimilated into the old Keynesian theory, but weren’t, because the New Classicals wanted to junk, not improve, the received Keynesian theory.

Wren-Lewis believes that it is actually possible for the progeny of Keynes and the progeny of Fisher to coexist harmoniously, and despite his discomfort with the anti-Keynesian bias of modern macroeconomics, he views the current macroeconomic research program as progressive. By progressive, I interpret him to mean that macroeconomics is still generating new theoretical problems to investigate, and that attempts to solve those problems are producing a stream of interesting and useful publications – interesting and useful, that is, to other economists doing macroeconomic research. Whether the problems and their solutions are useful to anyone else is perhaps not quite so clear. But even if interest in modern macroeconomics is largely confined to practitioners of modern macroeconomics, that fact alone would not conclusively show that the research program in which they are engaged is not progressive. The progressiveness of the research program requires no more than a sufficient number of self-selecting econ grad students, and a willingness of university departments and sources of research funding to cater to the idiosyncratic tastes of modern macroeconomists.

Robert Waldmann, unsurprisingly, takes a rather less charitable view of modern macroeconomics, focusing on its failure to discover any new, previously unknown, empirical facts about macroeconomics, or to better explain known facts than do alternative models, e.g., by more accurately predicting observed macro time-series data. By that admittedly demanding criterion, Waldmann finds nothing progressive in the modern macroeconomics research program.

Paul Krugman weighed in by emphasizing not only the ideological agenda behind the New Classical Revolution, but the self-interest of those involved:

Well, while the explicit message of such manifestos is intellectual – this is the only valid way to do macroeconomics – there’s also an implicit message: from now on, only my students and disciples will get jobs at good schools and publish in major journals. And that, to an important extent, is exactly what happened; Ken Rogoff wrote about the “scars of not being able to publish sticky-price papers during the years of new classical repression.” As time went on and members of the clique made up an ever-growing share of senior faculty and journal editors, the clique’s dominance became self-perpetuating – and impervious to intellectual failure.

I don’t disagree that there has been intellectual repression, and that this has made professional advancement difficult for those who don’t subscribe to the reigning macroeconomic orthodoxy, but I think that the story is more complicated than Krugman suggests. The reason I say that is because I cannot believe that the top-ranking economics departments at schools like MIT, Harvard, UC Berkeley, Princeton, and Penn, and other supposed bastions of saltwater thinking have bought into the underlying New Classical ideology. Nevertheless, microfounded DSGE models have become de rigueur for any serious academic macroeconomic theorizing, not only in the Journal of Political Economy (Chicago), but in the Quarterly Journal of Economics (Harvard), the Review of Economics and Statistics (MIT), and the American Economic Review. New Keynesians, like Simon Wren-Lewis, have made their peace with the new order, and old Keynesians have been relegated to the periphery, unable to publish in the journals that matter without observing the generally accepted (even by those who don’t subscribe to New Classical ideology) conventions of proper macroeconomic discourse.

So I don’t think that Krugman’s ideology plus self-interest story fully explains how the New Classical hegemony was achieved. What I think is missing from his story is the spurious methodological requirement of microfoundations foisted on macroeconomists in the course of the 1970s. I have discussed microfoundations in a number of earlier posts (here, here, here, here, and here) so I will try, possibly in vain, not to repeat myself too much.

The importance and desirability of microfoundations were never questioned. What, after all, was the neoclassical synthesis, if not an attempt, partly successful and partly unsuccessful, to integrate monetary theory with value theory, or macroeconomics with microeconomics? But in the early 1970s the focus of attempts to provide microfoundations, notably in the 1970 Phelps volume, changed from embedding the Keynesian system in a general-equilibrium framework, as Patinkin had done, to providing an explicit microeconomic rationale for the Keynesian idea that the labor market could not be cleared via wage adjustments.

In chapter 19 of the General Theory, Keynes struggled to come up with a convincing general explanation for the failure of nominal-wage reductions to clear the labor market. Instead, he offered an assortment of seemingly ad hoc arguments about why nominal-wage adjustments would not succeed in reducing unemployment, enabling all workers willing to work at the prevailing wage to find employment at that wage. This forced Keynesians into the awkward position of relying on an argument — wages tend to be sticky, especially in the downward direction — that was not really different from one used by the “Classical Economists” excoriated by Keynes to explain high unemployment: that rigidities in the price system – often politically imposed rigidities – prevented wage and price adjustments from equilibrating demand with supply in the textbook fashion.

These early attempts at providing microfoundations were largely exercises in applied price theory, explaining why self-interested behavior by rational workers and employers lacking perfect information about all potential jobs and all potential workers would not result in immediate price adjustments that would enable all workers to find employment at a uniform market-clearing wage. Although these largely search-theoretic models led to a more sophisticated and nuanced understanding of labor-market dynamics than economists had previously had, the models ultimately did not provide a fully satisfactory account of cyclical unemployment. But the goal of microfoundations was to explain a certain set of phenomena in the labor market that had not been seriously investigated, in the hope that price and wage stickiness could be analyzed as an economic phenomenon rather than being arbitrarily introduced into models as an ad hoc, albeit seemingly plausible, assumption.

But instead of pursuing microfoundations as an explanatory strategy, the New Classicals chose to impose microfoundations as a methodological prerequisite. A macroeconomic model was inadmissible unless it could be explicitly and formally derived from the optimizing choices of fully rational agents. Instead of trying to enrich and potentially transform the Keynesian model with a deeper analysis and understanding of the incentives and constraints under which workers and employers make decisions, the New Classicals used microfoundations as a methodological tool by which to delegitimize Keynesian models, those models being insufficiently or improperly microfounded. Instead of using microfoundations as a method by which to make macroeconomic models conform more closely to the imperfect and limited informational resources available to actual employers deciding to hire or fire employees, and actual workers deciding to accept or reject employment opportunities, the New Classicals chose to use microfoundations as a methodological justification for the extreme unrealism of the rational-expectations assumption, portraying it as nothing more than the consistent application of the rationality postulate underlying standard neoclassical price theory.

For the New Classicals, microfoundations became a reductionist crusade. There is only one kind of economics, and it is not macroeconomics. Even the idea that there could be a conceptual distinction between micro and macroeconomics was unacceptable to Robert Lucas, just as the idea that there is, or could be, a mind not reducible to the brain is unacceptable to some deranged neuroscientists. No science, not even chemistry, has been reduced to physics. Were it ever to be accomplished, the reduction of chemistry to physics would be a great scientific achievement. Some parts of chemistry have been reduced to physics, which is a good thing, especially when doing so actually enhances our understanding of the chemical process and results in an improved, or more exact, restatement of the relevant chemical laws. But it would be absurd and preposterous simply to reject, on supposed methodological principle, those parts of chemistry that have not been reduced to physics. And how much more absurd would it be to reject higher-level sciences, like biology and ecology, for no other reason than that they have not been reduced to physics.

But reductionism is what modern macroeconomics, under the New Classical hegemony, insists on. No exceptions allowed; don’t even ask. Meekly and unreflectively, modern macroeconomics has succumbed to the absurd and arrogant methodological authoritarianism of the New Classical Revolution. What an embarrassment.

UPDATE (11:43 AM EDST): I made some minor editorial revisions to eliminate some grammatical errors and misplaced or superfluous words.

Temporary Equilibrium One More Time

It’s always nice to be noticed, especially by Paul Krugman. So I am not upset, but in his response to my previous post, I don’t think that Krugman quite understood what I was trying to convey. I will try to be clearer this time. It will be easiest if I just quote from his post and insert my comments or explanations.

Glasner is right to say that the Hicksian IS-LM analysis comes most directly not out of Keynes but out of Hicks’s own Value and Capital, which introduced the concept of “temporary equilibrium”.

Actually, that’s not what I was trying to say. I wasn’t making any explicit connection between Hicks’s temporary-equilibrium concept from Value and Capital and the IS-LM model that he introduced two years earlier in his paper on Keynes and the Classics. Of course that doesn’t mean that the temporary equilibrium method isn’t connected to the IS-LM model; one would need to do a more in-depth study than I have done of Hicks’s intellectual development to determine how much IS-LM was influenced by Hicks’s interest in intertemporal equilibrium and in the method of temporary equilibrium as a way of analyzing intertemporal issues.

This involves using quasi-static methods to analyze a dynamic economy, not because you don’t realize that it’s dynamic, but simply as a tool. In particular, V&C discussed at some length a temporary equilibrium in a three-sector economy, with goods, bonds, and money; that’s essentially full-employment IS-LM, which becomes the 1937 version with some price stickiness. I wrote about that a long time ago.

Now I do think that it’s fair to say that the IS-LM model was very much in the spirit of Value and Capital, in which Hicks deployed an explicit general-equilibrium model to analyze an economy at a Keynesian level of aggregation: goods, bonds, and money. But the temporary-equilibrium aspect of Value and Capital went beyond the Keynesian analysis, because the temporary-equilibrium analysis was explicitly intertemporal, with all agents formulating plans based on explicit future price expectations, and with the inconsistency between expected prices and actual prices explicitly noted, while in the General Theory, and in IS-LM, price expectations were kept in the background, making an appearance only in the discussion of the marginal efficiency of capital.

So is IS-LM really Keynesian? I think yes — there is a lot of temporary equilibrium in The General Theory, even if there’s other stuff too. As I wrote in the last post, one key thing that distinguished TGT from earlier business cycle theorizing was precisely that it stopped trying to tell a dynamic story — no more periods, forced saving, boom and bust, instead a focus on how economies can stay depressed. Anyway, does it matter? The real question is whether the method of temporary equilibrium is useful.

That is precisely where I think Krugman’s grasp on the concept of temporary equilibrium is slipping. Temporary equilibrium is indeed about periods, and it is explicitly dynamic. In my previous post I referred to Hicks’s discussion in Capital and Growth, written about 25 years after Value and Capital, in which he wrote

The Temporary Equilibrium model of Value and Capital, also, is “quasi-static” [like the Keynes theory] – in just the same sense. The reason why I was contented with such a model was because I had my eyes fixed on Keynes.

As I read this passage now — and it really bothered me when I read it as I was writing my previous post — I realize that what Hicks was saying was that his desire to conform to the Keynesian paradigm led him to compromise the integrity of the temporary equilibrium model, by forcing it to be “quasi-static” when it really was essentially dynamic. The challenge has been to convert a “quasi-static” IS-LM model into something closer to the temporary-equilibrium method that Hicks introduced, but did not fully execute in Value and Capital.

What are the alternatives? One — which took over much of macro — is to do intertemporal equilibrium all the way, with consumers making lifetime consumption plans, prices set with the future rationally expected, and so on. That’s DSGE — and I think Glasner and I agree that this hasn’t worked out too well. In fact, economists who never learned temporary-equilibrium-style modeling have had a strong tendency to reinvent pre-Keynesian fallacies (cough-Say’s Law-cough), because they don’t know how to think out of the forever-equilibrium straitjacket.

Yes, I agree! Rational-expectations, full-equilibrium models have turned out to be a regression, not an advance. But the way I would make the point is that the temporary-equilibrium method provides a sort of middle way to do intertemporal dynamics without presuming that consumption plans and investment plans are always optimal.

What about disequilibrium dynamics all the way? Basically, I have never seen anyone pull this off. Like the forever-equilibrium types, constant-disequilibrium theorists have a remarkable tendency to make elementary conceptual mistakes.

Again, I agree. We can’t work without some sort of equilibrium conditions, but temporary equilibrium provides a way to keep the discipline of equilibrium without assuming (nearly) full optimality.

Still, Glasner says that temporary equilibrium must involve disappointed expectations, and fails to take account of the dynamics that must result as expectations are revised.

Perhaps I was unclear, but I thought I was saying just the opposite. It’s the “quasi-static” IS-LM model, not temporary equilibrium, that fails to take account of the dynamics produced by revised expectations.

I guess I’d say two things. First, I’m not sure that this is always true. Hicks did indeed assume static expectations — the future will be like the present; but in Keynes’s vision of an economy stuck in sustained depression, such static expectations will be more or less right.

Again, I agree. There may be self-fulfilling expectations of a low-income, low-employment equilibrium. But I don’t think that that is the only explanation for such a situation, and certainly not for the downturn that can lead to such an equilibrium.

Second, those of us who use temporary equilibrium often do think in terms of dynamics as expectations adjust. In fact, you could say that the textbook story of how the short-run aggregate supply curve adjusts over time, eventually restoring full employment, is just that kind of thing. It’s not a great story, but it is the kind of dynamics Glasner wants — and it’s Econ 101 stuff.

Again, I agree. It’s not a great story, but, like it or not, the story is not a Keynesian story.

So where does this leave us? I’m not sure, but my impression is that Krugman, in his admiration for the IS-LM model, is trying too hard to identify IS-LM with the temporary-equilibrium approach, which I think represented a major conceptual advance over both the Keynesian model and the IS-LM representation of the Keynesian model. Temporary equilibrium and IS-LM are not necessarily inconsistent, but I mainly wanted to point out that the two aren’t the same, and shouldn’t be conflated.

Paul Krugman and Roger Farmer on Sticky Wages

I was pleasantly surprised last Friday to see that Paul Krugman took favorable notice of my post about sticky wages, while also registering some disagreement.

[Glasner] is partially right in suggesting that there has been a bit of a role reversal regarding the role of sticky wages in recessions: Keynes asserted that wage flexibility would not help, but Keynes’s self-proclaimed heirs ended up putting downward nominal wage rigidity at the core of their analysis. By the way, this didn’t start with the New Keynesians; way back in the 1940s Franco Modigliani had already taught us to think that everything depended on M/w, the ratio of the money supply to the wage rate.

That said, wage stickiness plays a bigger role in The General Theory — and in modern discussions that are consistent with what Keynes said — than Glasner indicates.

To document his assertion about Keynes, Krugman quotes a passage from the General Theory in which Keynes seems to suggest that in the nineteenth century inflexible wages were partially compensated for by price level movements. One might quibble with Krugman’s interpretation, but the payoff doesn’t seem worth the effort.

But I will quibble with the next paragraph in Krugman’s post.

But there’s another point: even if you don’t think wage flexibility would help in our current situation (and like Keynes, I think it wouldn’t), Keynesians still need a sticky-wage story to make the facts consistent with involuntary unemployment. For if wages were flexible, an excess supply of labor should be reflected in ever-falling wages. If you want to say that we have lots of willing workers unable to find jobs — as opposed to moochers not really seeking work because they’re cradled in Paul Ryan’s hammock — you have to have a story about why wages aren’t falling.

Not that I really disagree with Krugman that the behavior of wages since the 2008 downturn is consistent with some stickiness in wages. Nevertheless, it is still not necessarily the case that, if wages were flexible, an excess supply of labor would lead to ever-falling wages. In a search model of unemployment, if workers are expecting wages to rise every year at a 3% rate, and instead wages rise at only a 1% rate, the model predicts that unemployment will rise, and will continue to rise (or at least not return to the natural rate) as long as observed wages do not increase as fast as workers expect them to. Presumably, over time, wage expectations would adjust to the new lower rate of increase, but there is no guarantee that the transition would be speedy.
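To make that mechanism concrete, here is a bare-bones numerical sketch (my own illustration, with invented numbers and an assumed adaptive-expectations rule, not a calibrated search model): workers set a reservation wage that grows at the rate they expect wages to grow, while offered wages grow more slowly, so the share of rejected offers – a crude proxy for unemployment – rises.

```python
# Illustrative sketch only (invented numbers, not a calibrated search
# model): workers' reservation wages grow at the rate they *expect*
# wages to grow; offered wages grow at a slower *actual* rate. The
# share of offers rejected -- a crude proxy for unemployment -- rises
# as the gap opens up.

import random

random.seed(0)

expected_growth = 0.03   # workers expect wages to rise 3% a year
actual_growth = 0.01     # offered wages actually rise 1% a year
adaptation = 0.2         # assumed speed at which expectations adjust

offered_wage = 1.0
reservation_wage = 1.0

for year in range(1, 11):
    offered_wage *= 1 + actual_growth
    reservation_wage *= 1 + expected_growth
    # Offers are scattered around the mean offered wage; a worker
    # accepts only an offer at or above the reservation wage.
    offers = [offered_wage * random.uniform(0.9, 1.1) for _ in range(10000)]
    rejection_rate = sum(o < reservation_wage for o in offers) / len(offers)
    # Expectations adapt, slowly, toward the observed rate of wage growth.
    expected_growth += adaptation * (actual_growth - expected_growth)
    print(f"year {year}: share of offers rejected = {rejection_rate:.1%}")

# Even after growth expectations converge, the accumulated *level* gap
# between reservation and offered wages persists, which is one reason
# the return to the natural rate need not be speedy.
```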

Krugman concludes:

So sticky wages are an important part of both old and new Keynesian analysis, not because wage cuts would help us, but simply to make sense of what we see.

My own view is actually a bit more guarded. I think that “sticky wages” is simply a name that we apply to a problematic phenomenon for which we still haven’t found a really satisfactory explanation. Search models, for all their theoretical elegance, simply can’t explain the observed process by which unemployment rises during recessions, i.e., by layoffs and a lack of job openings rather than by an increase in quits and refused offers, as search models imply. The suggestion in my earlier post was intended to offer a possible basis for understanding what the phrase “sticky wages” is actually describing.

Roger Farmer, a long-time and renowned UCLA economist, also commented on my post on his new blog. Welcome to the blogosphere, Roger.

Roger has a different take on the sticky-wage phenomenon. Roger argues, as did some of the commenters to my post, that wages are not sticky. To document this assertion, Roger presents a diagram showing that the decline of nominal wages closely tracked that of prices for the first six years of the Great Depression. From this evidence Roger concludes that nominal wage rigidity is not the cause of rising unemployment during the Great Depression, and presumably, not the cause of rising unemployment in the Little Depression.

Instead, Roger argues, the rise in unemployment was caused by an outbreak of self-fulfilling pessimism. Roger believes that there are many alternative equilibria, and which equilibrium (actually, which equilibrium time path) we reach depends on what our expectations are. Roger also believes that our expectations are rational, so that we get what we expect; as he succinctly phrases it, “beliefs are fundamental.” I have a lot of sympathy for this way of looking at the economy. In fact, one of the early posts on this blog was entitled “Expectations Are Fundamental.” But as I have explained in other posts, I am not so sure that expectations are rational in any useful sense, because I think that individual expectations diverge. I don’t think that there is a single way of looking at reality. If there are many potential equilibria, why should everyone expect the same equilibrium? I can be an optimist, and you can be a pessimist. If we agree, we will be right, but if we disagree, we will both be wrong. What economic mechanism is there to reconcile our expectations? In a world in which expectations diverge — a world of temporary equilibrium — there can be cumulative output reductions that get propagated across the economy as each sector fails to produce its maximum potential output, thereby reducing the demand for the output of other sectors to which it is linked. That is what happens when there is trading at prices that don’t correspond to the full optimum equilibrium solution.
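Here is a toy sketch of that propagation mechanism (entirely my own construction; the linkage coefficients and all the numbers are invented for illustration): when one sector produces below its potential, the demand facing the sectors linked to it falls, and the economy settles at an output level below the full-coordination equilibrium.

```python
# Toy illustration (all numbers invented): three linked sectors, each
# facing some autonomous demand plus demand that depends on the other
# sectors' output. At full coordination every sector produces 100.
# If sector 0 turns pessimistic and produces 80, the shortfall
# propagates, and the other sectors settle below potential too.

potential = [100.0, 100.0, 100.0]   # full-coordination output levels
autonomous = [40.0, 40.0, 40.0]     # demand independent of other sectors
linkage = [[0.0, 0.3, 0.3],         # share of sector i's demand coming
           [0.3, 0.0, 0.3],         # from every other sector (assumed)
           [0.3, 0.3, 0.0]]

output = potential[:]
output[0] = 80.0                    # sector 0 turns pessimistic

for _ in range(100):                # iterate demand -> output to convergence
    demand = [autonomous[i] + sum(linkage[i][j] * output[j] for j in range(3))
              for i in range(3)]
    # No sector produces more than the demand facing it or its potential.
    output = [min(demand[i], potential[i]) for i in range(3)]
    output[0] = 80.0                # the pessimistic belief persists

print([round(q, 1) for q in output])   # [80.0, 91.4, 91.4]: all below 100
```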

So I agree with Roger in part, but I think that the coordination problem is (at least potentially) more serious than he imagines.

Big Ideas in Macroeconomics: A Review

Steve Williamson recently plugged a new book by Kartik Athreya (Big Ideas in Macroeconomics), an economist at the Federal Reserve Bank of Richmond, which tries to explain in relatively non-technical terms what modern macroeconomics is all about. I will acknowledge that my graduate training in macroeconomics predated the rise of modern macro, and I am not fluent in the language of modern macro, though I am trying to fill in the gaps. And this book is a good place to start. I found Athreya’s book a good overview of the field, explaining the fundamental ideas and how they fit together.

Big Ideas in Macroeconomics is a moderately big book, 415 pages, covering a very wide range of topics. It is noteworthy, I think, that despite its size, there is so little overlap between the topics covered in this book and those covered in more traditional, perhaps old-fashioned, books on macroeconomics. The index contains not a single entry on the price level, inflation, deflation, money, interest, total output, employment or unemployment. Which is not to say that none of those concepts are ever mentioned or discussed, just that they are not treated, as they are in traditional macroeconomics books, as the principal objects of macroeconomic inquiry. The conduct of monetary or fiscal policy to achieve some explicit macroeconomic objective is never discussed. In contrast, there are repeated references to Walrasian equilibrium, the Arrow-Debreu-McKenzie model, the Radner model, Nash equilibria, Pareto optimality, and the first and second welfare theorems. It’s a new world.

The first two chapters present a fairly detailed description of the idea of Walrasian general equilibrium and its modern incarnation in the canonical Arrow-Debreu-McKenzie (ADM) model. The ADM model describes an economy of utility-maximizing households and profit-maximizing firms engaged in the production and consumption of commodities through time and space. There are markets for commodities dated by time period, specified by location and classified by foreseeable contingent states of the world, so that the same physical commodity corresponds to many separate commodities, each corresponding to different time periods and locations and to contingent states of the world. Prices for such physically identical commodities are not necessarily uniform across times, locations or contingent states. The demand for road salt to de-ice roads depends on weather conditions, which depend on time and location and on states of the world. For each different possible weather contingency, there would be a distinct market for road salt for each location and time period.
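The indexing of commodities can be made concrete with a trivial sketch (the names and prices are invented): in the ADM framework, “road salt” is not one commodity but a family of commodities, one for each combination of date, location, and weather state, each with its own price.

```python
# Illustrative only (all names and prices invented): in the ADM model a
# physically identical good is a distinct commodity at each date,
# location, and contingent state of the world, with its own price.

from collections import namedtuple

Commodity = namedtuple("Commodity", ["good", "date", "location", "state"])

prices = {
    Commodity("road_salt", "2014-01", "Boston", "blizzard"): 9.00,
    Commodity("road_salt", "2014-01", "Boston", "mild"): 2.00,
    Commodity("road_salt", "2014-01", "Atlanta", "blizzard"): 12.00,
    Commodity("road_salt", "2014-07", "Boston", "mild"): 1.50,
}

# The "same" physical good trades at different prices across dates,
# locations, and weather states. A complete set of markets would
# require a price for *every* such combination.
for c, p in sorted(prices.items()):
    print(f"{c.good} @ {c.location}, {c.date}, state={c.state}: ${p:.2f}")
```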

The ADM model is solved once, for all time periods and all states of the world. Under appropriate conditions, there is at least one (and possibly more than one) intertemporal equilibrium, all trades being executed in advance, with all deliveries subsequently being carried out, as time and contingencies unfold, in accordance with the terms of the original contracts.

Given the existence of an equilibrium, i.e., a set of prices subject to which all agents are individually optimizing and all markets are clearing, there are two classical welfare theorems, stating that any such equilibrium involves a Pareto-optimal allocation and that any Pareto-optimal allocation could be supported by an equilibrium set of prices corresponding to a suitably chosen set of initial endowments. For these optimality results to obtain, it is necessary that markets be complete, in the sense that there is a market for each commodity in each time period and contingent state of the world. Without a complete set of markets in this sense, the Pareto-optimality of the Walrasian equilibrium cannot be proved.
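Stated compactly (a standard textbook formulation, not a quotation from Athreya):

```latex
% First welfare theorem: with complete markets and locally non-satiated
% preferences, any competitive equilibrium allocation is Pareto optimal.
\textbf{First welfare theorem.}\;
(x^{*}, y^{*}, p^{*}) \text{ a competitive equilibrium}
\;\Longrightarrow\;
(x^{*}, y^{*}) \text{ is Pareto optimal.}

% Second welfare theorem: under convexity, any Pareto-optimal allocation
% can be supported as an equilibrium after a suitable redistribution of
% initial endowments.
\textbf{Second welfare theorem.}\;
(x^{*}, y^{*}) \text{ Pareto optimal}
\;\Longrightarrow\;
\exists\, p^{*} \text{ and endowments } \omega \text{ such that }
(x^{*}, y^{*}, p^{*}) \text{ is a competitive equilibrium.}
```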

Readers may wonder about the process by which an equilibrium price vector would actually be found through some trading process. Athreya invokes the fiction of a Walrasian clearinghouse in which all agents (truthfully) register their notional demands and supplies at alternative price vectors. Based on these responses the clearinghouse is able to determine, by a process of trial and error, the equilibrium price vector. Since the Walrasian clearinghouse presumes that no trading occurs except at an equilibrium price vector, there can be no assurance that an equilibrium price vector would ever be arrived at under an actual trading process in which trading occurs at disequilibrium prices. Moreover, as Clower and Leijonhufvud showed over 40 years ago (“Say’s Principle: What it Means and What it Doesn’t Mean”), trading at disequilibrium prices may cause cumulative contractions of aggregate demand because the total volume of trade at a disequilibrium price will always be less than the volume of trade at an equilibrium price, the volume of trade being constrained by the lesser of quantity supplied and quantity demanded.
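For concreteness, here is a minimal sketch of the clearinghouse’s trial-and-error procedure – the classic tâtonnement – for a two-good exchange economy (the excess-demand function and the adjustment speed are invented for illustration):

```python
# Minimal tatonnement sketch (excess-demand function and adjustment
# speed invented): the clearinghouse quotes a relative price, collects
# notional excess demands, and moves the price in the direction of
# excess demand. Crucially, no trade is executed at any of the
# non-equilibrium prices quoted along the way -- which is precisely
# the fiction noted above.

def excess_demand(p):
    """Notional excess demand for good 2 at relative price p (assumed
    downward-sloping, with equilibrium at p = 2.0)."""
    return 10.0 * (2.0 - p)

p = 0.5                      # arbitrary starting quote
for step in range(100):
    z = excess_demand(p)
    if abs(z) < 1e-8:        # markets clear: trades can now be executed
        break
    p += 0.05 * z            # raise the price under excess demand,
                             # cut it under excess supply

print(f"equilibrium relative price ~ {p:.4f} after {step + 1} rounds")
```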

In the view of modern macroeconomics, then, Walrasian general equilibrium, as characterized by the ADM model, is the basic and overarching paradigm of macroeconomic analysis. To be sure, modern macroeconomics tries to go beyond the highly restrictive assumptions of the ADM model, but it is not clear whether the concessions made by modern macroeconomics to the real world go very far in enhancing the realism of the basic model.

Chapter 3 contains some interesting reflections on the importance of efficiency (Pareto-optimality) as a policy objective and on the trade-offs between efficiency and equity and between ex-ante and ex-post efficiency. But these topics are on the periphery of macroeconomics, so I will offer no comment here.

In chapter 4, Athreya turns to some common criticisms of modern macroeconomics: that it is too highly aggregated, too wedded to the rationality assumption, too focused on equilibrium steady states, and too highly mathematical. Athreya correctly points out that older macroeconomic models were also highly aggregated, so that if aggregation is a problem it is not unique to modern macroeconomics. That’s a fair point, but it skirts some thorny issues. As Athreya acknowledges in chapter 5, an important issue separating certain older macroeconomic traditions (Keynesian and Austrian, among others) from modern macroeconomics is the idea that macroeconomic dysfunction is a manifestation of coordination failure. It is a property – a remarkable property – of Walrasian general equilibrium that it achieves perfect (i.e., Pareto-optimal) coordination of disparate, self-interested, competitive individual agents, fully reconciling their plans in a way that might have been achieved by an omniscient and benevolent central planner. Walrasian general equilibrium fully solves the coordination problem. Insofar as important results of modern macroeconomics depend on the assumption that a real-life economy can be realistically characterized as a Walrasian equilibrium, modern macroeconomics is assuming that coordination failures are irrelevant to macroeconomics. It was only after coordination failures had been excluded from the purview of macroeconomics that it became legitimate (for the sake of mathematical tractability) to deploy representative-agent models in macroeconomics, a coordination failure being tantamount, in the context of a representative-agent model, to a form of irrationality on the part of the representative agent. Athreya characterizes choices about the level of aggregation as a trade-off between realism and tractability, but it seems to me that, rather than making a trade-off between realism and tractability, modern macroeconomics has simply made an a priori decision that coordination problems are not a relevant macroeconomic concern.

A similar argument applies to Athreya’s defense of rational expectations and the use of equilibrium in modern macroeconomic models. I would not deny that there are good reasons to adopt rational expectations and full equilibrium in some modeling situations, depending on the problem that the theorist is trying to address. The question is whether it can be appropriate to deviate from the assumption of a full rational-expectations equilibrium for the purposes of modeling fluctuations over the course of a business cycle, especially a deep cyclical downturn. In particular, the idea of a Hicksian temporary equilibrium in which agents hold divergent expectations about future prices, but markets clear period by period given those divergent expectations, seems to offer (as in, e.g., Thompson’s “Reformulation of Macroeconomic Theory“) more realism and richer empirical content than modern macromodels of rational expectations.

Athreya offers the following explanation and defense of rational expectations:

[Rational expectations] purports to explain the expectations people actually have about the relevant items in their own futures. It does so by asking that their expectations lead to economy-wide outcomes that do not contradict their views. By imposing the requirement that expectations not be systematically contradicted by outcomes, economists keep an unobservable object from becoming a source of “free parameters” through which we can cheaply claim to have “explained” some phenomenon. In other words, in rational-expectations models, expectations are part of what is solved for, and so they are not left to the discretion of the modeler to impose willy-nilly. In so doing, the assumption of rational expectations protects the public from economists.

This defense of rational expectations plainly betrays the methodological arrogance of modern macroeconomics. I am all in favor of solving a model for equilibrium expectations, but solving for equilibrium expectations is certainly not the same as insisting that the only interesting or relevant result of a model is the one generated by the assumption of full equilibrium under rational expectations. (Again see Thompson’s “Reformulation of Macroeconomic Theory” as well as the classic paper by Foley and Sidrauski, and this post by Rajiv Sethi on his blog.) It may be relevant and useful to look at a model and examine its properties in a state in which agents hold inconsistent expectations about future prices, so that the temporary equilibrium existing at a point in time does not correspond to a steady state. Why is such an equilibrium uninteresting and uninformative about what happens in a business cycle? But evidently modern macroeconomists such as Athreya consider it their duty to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts.

Chapter 5 is the most important chapter of the book. It is in this chapter that Athreya examines in more detail the kinds of adjustments that modern macroeconomists make in the Walrasian/ADM paradigm to accommodate the incompleteness of markets and the imperfections of expectation formation that limit the empirical relevance of the full ADM model as a macroeconomic paradigm. To do so, Athreya starts by explaining the Radner model, in which less than the full complement of Arrow-Debreu contingent-claims markets is available. In the Radner model, unlike the ADM model, trading takes place through time for those markets that actually exist, so that the full Walrasian equilibrium exists only if agents are able to form correct expectations about future prices. And even if the full Walrasian equilibrium exists, in the absence of a complete set of Arrow-Debreu markets, the classical welfare theorems may not obtain.

To Athreya, these limitations on the Radner version of the Walrasian model seem manageable. After all, if no one really knows how to improve on the equilibrium of the Radner model, the potential existence of Pareto improvements to the Radner equilibrium is not necessarily that big a deal. Athreya expands on the discussion of the Radner model by introducing the neoclassical growth model in both its deterministic and stochastic versions, all the elements of the dynamic stochastic general equilibrium (DSGE) model that characterizes modern macroeconomics now being in place. Athreya closes out the chapter with additional discussions of the role of further modifications to the basic Walrasian paradigm, particularly search models and overlapping-generations models.

I found the discussion in chapter 5 highly informative and useful, but it doesn’t seem to me that Athreya faces up to the limitations of the Radner model or to the implied disconnect between the Walrasian paradigm and macroeconomic analysis. A full Walrasian equilibrium exists in the Radner model only if all agents correctly anticipate future prices. If they don’t correctly anticipate future prices, then we are in the world of Hicksian temporary equilibrium. But in that world, the kind of coordination failures that Athreya so casually dismisses seem all too likely to occur. In a world of temporary equilibrium, there is no guarantee that intertemporal budget constraints will be effective, because those budget constraints reflect expected, not actual, future prices, and, in temporary equilibrium, expected prices are not the same for all transactors. Budget constraints are not binding in a world in which trading takes place through time based on possibly incorrect expectations of future prices. Not only does this mean that all the standard equilibrium and optimality conditions of Walrasian theory are violated, but it means that defaults on IOUs, and thus financial-market breakdowns, are entirely possible.
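A bare-bones numerical example of why intertemporal budget constraints lose their force in temporary equilibrium (my own invented numbers, not Athreya’s): a household that borrows against expected future income satisfies its budget constraint ex ante, yet defaults ex post when realized income differs from expected income.

```python
# Invented two-period example: a household borrows against *expected*
# period-2 income. The plan satisfies the budget constraint evaluated
# at expected prices, but if realized income is lower, the IOU cannot
# be honored -- the "budget constraint" never actually bound.

interest = 0.05
expected_income_2 = 110.0   # income the household *expects* next period
realized_income_2 = 90.0    # income that actually materializes

# Borrow so that the expected repayment exactly exhausts expected
# period-2 income: a plan that is feasible ex ante.
borrowing = expected_income_2 / (1 + interest)
repayment_due = borrowing * (1 + interest)

print(f"borrowed {borrowing:.2f}, repayment due {repayment_due:.2f}")
print(f"expected income {expected_income_2:.2f}: plan feasible ex ante")
shortfall = repayment_due - realized_income_2
print(f"realized income {realized_income_2:.2f}: default of {shortfall:.2f}")
```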

In a key passage in chapter 5, Athreya dismisses coordination-failure explanations, invidiously characterized as Keynesian, for inefficient declines in output and employment. While acknowledging that such fluctuations could, in theory, be caused by “self-fulfilling pessimism or fear,” Athreya invokes the benchmark Radner trading arrangement of the ADM model. “In the Radner economy,” Athreya writes, “households and firms have correct expectations for the spot market prices one period hence.” The justification for that expectational assumption, which seems indistinguishable from the assumption of a full, rational-expectations equilibrium, is left unstated. Athreya continues:

Granting that they indeed have such expectations, we can now ask about the extent to which, in a modern economy, we can have outcomes that are extremely sensitive to them. In particular, is it the case that under fairly plausible conditions, “optimism” and “pessimism” can be self-fulfilling in ways that make everyone (or nearly everyone) better off in the former than the latter?

Athreya argues that this is possible only if the aggregate production function of the economy is characterized by increasing returns to scale, so that productivity increases as output rises.

[W]hat I have in mind is that the structure of the economy must be such that when, for example, all households suddenly defer consumption spending (and save instead), interest rates do not adjust rapidly to forestall such a fall in spending by encouraging firms to invest.

Notice that Athreya makes no distinction between a reduction in consumption in which people shift into long-term real or financial assets and one in which people shift into holding cash. The two cases are hardly identical, but Athreya has nothing to say about the demand for money and its role in macroeconomics.

If they did, under what I will later describe as a “standard” production side for the economy, wages would, barring any countervailing forces, promptly rise (as the capital stock rises and makes workers more productive). In turn, output would not fall in response to pessimism.

What Athreya is saying is that if we assume that there is a reduction in the time preference of households, causing them to defer present consumption in order to increase their future consumption, the shift in time preference should be reflected in a rise in asset prices, causing an increase in the production of durable assets, and leading to an increase in wages insofar as the increase in the stock of fixed capital implies an increase in the marginal product of labor. Thus, if all the consequences of increased thrift are foreseen at the moment that current demand for output falls, there would be a smooth transition from the previous steady state corresponding to a high rate of time preference to the new steady state corresponding to a low rate of time preference.

Fine. If you assume that the economy always remains in full equilibrium, even in the transition from one steady state to another, because everyone has rational expectations, you will avoid a lot of unpleasantness. But what if entrepreneurial expectations do not change instantaneously, and the reduction in current demand for output corresponding to reduced spending on consumption causes entrepreneurs to reduce, not increase, their demand for capital equipment? If, after the shift in time preference, total spending actually falls, there may be a chain of disappointments in expectations, and a series of defaults on IOUs, culminating in a financial crisis. Pessimism may indeed be self-fulfilling. But Athreya has a just-so story to tell, and he seems satisfied that there is no other story to be told. Others may not be so easily satisfied, especially when his just-so story depends on a) the rational expectations assumption that many smart people have a hard time accepting as even remotely plausible, and b) the assumption that no trading takes place at disequilibrium prices. Athreya continues:

Thus, at least within the context of models in which households and firms are not routinely incorrect about the future, multiple self-fulfilling outcomes require particular features of the production side of the economy to prevail.

Actually what Athreya should have said is: “within the context of models in which households and firms always predict future prices correctly.”

In chapter 6, Athreya discusses how modern macroeconomics can and has contributed to the understanding of the financial crisis of 2007-08 and the subsequent downturn and anemic recovery. There is a lot of very useful information and discussion of various issues, especially in connection with banking and financial markets. But further comment at this point would be largely repetitive.

Anyway, despite my obvious and strong disagreements with much of what I read, I learned a lot from Athreya’s well-written and stimulating book, and I actually enjoyed reading it.

G. L. S. Shackle and the Indeterminacy of Economics

A post by Greg Hill, which inspired a recent post of my own, and Greg’s comment on that post, have reminded me of the importance of the undeservedly neglected English economist G. L. S. Shackle, many of whose works I read and profited from as a young economist, but which I have hardly looked at for many years. A student of Hayek’s at the London School of Economics in the 1930s, Shackle renounced his early Hayekian views and the doctoral dissertation on capital theory that he had already started writing under Hayek’s supervision, after hearing a lecture by Joan Robinson in 1935 about the new theory of income and employment that Keynes was then in the final stages of writing up, to be published the following year as The General Theory of Employment, Interest and Money. Shackle, with considerable embarrassment, had to face Hayek to inform him that he could not finish the dissertation that he had started, no longer believing in what he had written, having been converted to Keynes’s new theory. After hearing that Shackle was planning to find a new advisor under whom to write a new dissertation on another topic, Hayek, in a gesture of extraordinary magnanimity, responded that of course Shackle was free to write on whatever topic he desired, and that he would be happy to continue to serve as Shackle’s advisor regardless of the topic Shackle chose.

Although Shackle became a Keynesian, he retained and developed a number of characteristic Hayekian ideas (possibly extending them even further than Hayek would have), especially the notion that economic fluctuations result from the incompatibility between the plans that individuals are trying to implement, an incompatibility stemming from the imperfect and inconsistent expectations about the future that individuals hold, at least some plans therefore being doomed to failure. For Shackle the conception of a general equilibrium in which all individual plans are perfectly reconciled was a purely mental construct that might be useful in specifying the necessary conditions for the harmonization of individually formulated plans, but one lacking descriptive or empirical content. Not only is a general equilibrium never in fact achieved, the very conception of such a state is at odds with the nature of reality. For example, the phenomenon of surprise (and, I would add, regret) is, in Shackle’s view, a characteristic feature of economic life, but under the assumption of most economists (though not of Knight, Keynes or Hayek) that all events can at least be forecast in terms of their underlying probability distributions, the phenomenon of surprise cannot be understood. There are some observed events – black swans in Taleb’s terminology – that we can’t incorporate into the standard probability calculus, and that are completely inconsistent with the general-equilibrium paradigm.

A rational-expectations model allows for stochastic variables (e.g., will it be rainy or sunny two weeks from tomorrow?), but those variables are assumed to be drawn from distributions known by the agents, who can also correctly anticipate the future prices conditional on any realization (at a precisely known future moment in time) of a random variable. Thus, all outcomes correspond to expectations conditional on all future realizations of random variables; there are no surprises and no regrets. For a model to be correct and determinate in this sense, it must have accounted fully for all the non-random factors that could affect outcomes. If any important variables were left out, the predictions of the model could not be correct. In other words, unless the model is properly specified, all causal factors having been identified and accounted for, the model will not generate correct predictions for all future states and all possible realizations of random variables. And unless the agents in the model can predict prices as accurately as the fully determined model can predict them, the model will not unfold through time on an equilibrium time path. This capability of forecasting future prices contingent on the realization of all random variables affecting the actual course of the model through time is called rational expectations, which differs from perfect foresight only in being unable to predict in advance the realizations of the random variables. But all prices conditional on those realizations are correctly expected. Which is the more demanding assumption – rational expectations or perfect foresight – is actually not entirely clear to me.
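In conventional notation (a standard formalization, not Shackle’s), the difference between the two assumptions is small:

```latex
% Perfect foresight: the future price is known exactly.
p^{e}_{t+1} = p_{t+1}

% Rational expectations: agents know the true conditional distribution
% of prices, so forecast errors are unpredictable given the information
% set I_t; conditional on each realization of the random variables,
% prices are correctly anticipated.
p^{e}_{t+1} = E\left[\, p_{t+1} \mid I_t \,\right], \qquad
p_{t+1} - p^{e}_{t+1} = \varepsilon_{t+1}, \qquad
E\left[\, \varepsilon_{t+1} \mid I_t \,\right] = 0
```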

Now there are two ways to think about rational expectations — one benign and one terribly misleading. The benign way is that the assumption of rational expectations is a means of checking the internal consistency of a model. In other words, if we are trying to figure out whether a model is coherent, we can suppose that the model is the true model; if we then posit that the expectations of the agents correspond to the solution of the model – i.e., the agents expect the equilibrium outcome – the solution of the model will confirm the expectations that have been plugged into the minds of the agents of the model. This is sometimes called a fixed-point property. If the model doesn’t have this fixed-point property, then there is something wrong with the model. So the assumption of rational expectations does not necessarily involve any empirical assertion about the real world; it does not necessarily assert anything about how expectations are formed or whether they ever are rational in the sense that agents can predict the outcome of the relevant model. The assumption merely allows the model to be tested for latent inconsistencies. Equilibrium expectations being a property of equilibrium, it makes no sense for equilibrium expectations not to generate an equilibrium.
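The fixed-point property can be seen in the simplest possible toy model (my own illustration, with invented parameters): let output be y = a + b·yᵉ, where yᵉ is expected output. Rational expectations require yᵉ to coincide with the model’s own solution, i.e., the fixed point y* = a/(1 - b).

```python
# Toy illustration of the fixed-point property (model and parameters
# invented): output is y = a + b * y_expected. Rational expectations
# require y_expected = y, i.e., the fixed point y* = a / (1 - b).
# Iterating "adopt the model's own prediction" converges to y*.

a, b = 10.0, 0.5            # assumed parameters, with |b| < 1

def model(y_expected):
    """Output generated by the model, given agents' expectation."""
    return a + b * y_expected

y_expected = 0.0            # start from an arbitrary expectation
for _ in range(60):
    y_expected = model(y_expected)   # agents expect the model's prediction

analytic = a / (1 - b)
print(f"iterated expectation: {y_expected:.6f}")
print(f"analytic fixed point: {analytic:.6f}")
# If expectations equal the fixed point, the model confirms them; a
# model with no such fixed point would be internally inconsistent.
```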

But the other way of thinking about rational expectations is as an empirical assertion about what the expectations of people actually are or how those expectations are formed. If that is how we think about rational expectations, then we are saying that people always anticipate the solution of the model. And if the model is internally consistent, then the empirical assumption that agents really do have rational expectations means that we are making an empirical assumption that the economy is in fact always in equilibrium, i.e., that it is moving through time along an equilibrium path. If agents in the true model expect the equilibrium of the true model, the agents must be in equilibrium. To break out of that tight circle, either expectations have to be wrong (non-rational) or the model from which people derive their expectations must be wrong.

Of course, one way to finesse this problem is to say that the model is not actually true and expectations are not fully rational, but that the assumptions are close enough to being true for the model to be a decent approximation of reality. That is a defensible response, but one either has to take that assertion on faith, or there has to be strong evidence that the real world corresponds to the predictions of the model. Rational-expectations models do reasonably well in predicting the performance of economies near full employment, but not so well in periods like the Great Depression and the Little Depression. In other words, they work pretty well when we don’t need them, and not so well when we do need them.

The relevance of the rational-expectations assumption was discussed a year and a half ago by David Levine of Washington University. Levine was an undergraduate at UCLA after I had left, and went on to get his Ph.D. from MIT. He later returned to UCLA and held the Armen Alchian chair in economics from 1997 to 2006. Along with Michele Boldrin, Levine wrote a wonderful book, Against Intellectual Monopoly. More recently he has written a little book (Is Behavioral Economics Doomed?) defending the rationality assumption in all its various guises, a book certainly worth reading even (or especially) if one doesn’t agree with all of its conclusions. So, although I have a high regard for Levine’s capabilities as an economist, I am afraid that I have to criticize what he has to say about rational expectations. I should also add that despite my criticism of Levine’s defense of rational expectations, I think the broader point that he makes is valid: people do learn from experience, and public policies should not be premised on the assumption that people will not eventually figure out how those policies are working.

In particular, let’s look at a post that Levine contributed to the Huffington Post blog defending the economics profession against the accusation that the economics profession is useless as demonstrated by their failure to predict the financial crisis of 2008. To counter this charge, Levine compared economics to physics — not necessarily the strategy I would have recommended for casting economics in a favorable light, but that’s merely an aside. Just as there is an uncertainty principle in physics, which says that you cannot identify simultaneously both the location and the speed of an electron, there’s an analogous uncertainty principle in economics, which says that the forecast affects the outcome.

The uncertainty principle in economics arises from a simple fact: we are all actors in the economy and the models we use determine how we behave. If a model is discovered to be correct, then we will change our behavior to reflect our new understanding of reality — and when enough of us do so, the original model stops being correct. In this sense future human behavior must necessarily be uncertain.

Levine is certainly right that insofar as the discovery of a new model changes expectations, the model itself can change outcomes. If the model predicts a crisis, the model, if it is believed, may be what causes the crisis. Fair enough, but Levine believes that this uncertainty principle entails the rationality of expectations.

The uncertainty principle in economics leads directly to the theory of rational expectations. Just as the uncertainty principle in physics is consistent with the probabilistic predictions of quantum mechanics (there is a 20% chance this particle will appear in this location with this speed) so the uncertainty principle in economics is consistent with the probabilistic predictions of rational expectations (there is a 3% chance of a stock market crash on October 28).

This claim, if I understand it, is shocking. The equations of quantum mechanics may be able to predict the probability that a particle will appear at a given location with a given speed, but I am unaware of any economic model that can provide even an approximately accurate prediction of the probability that a financial crisis will occur within a given time period.

Note what rational expectations are not: they are often confused with perfect foresight — meaning we perfectly anticipate what will happen in the future. While perfect foresight is widely used by economists for studying phenomena such as long-term growth where the focus is not on uncertainty — it is not the theory used by economists for studying recessions, crises or the business cycle. The most widely used theory is called DSGE for Dynamic Stochastic General Equilibrium. Notice the word stochastic — it means random — and this theory reflects the necessary randomness brought about by the uncertainty principle.

I have already observed that the introduction of random variables into a general equilibrium is not a significant relaxation of the predictive capacities of agents — and perhaps not even a relaxation, but an enhancement of the predictive capacities of the agents. The problem with this distinction between perfect foresight and stochastic disturbances is that there is no relaxation of the requirement that all agents share the same expectations of all future prices in all possible future states of the world. The world described is a world without surprise and without regret. From the standpoint of the informational requirements imposed on agents, the distinction between perfect foresight and rational expectations is not worth discussing.

In simple language what rational expectations means is “if people believe this forecast it will be true.”

Well, I don’t know about that. If the forecast is derived from a consistent, but empirically false, model, the assumption of rational expectations will ensure that the forecast of the model coincides with what people expect. But the real world may not cooperate, producing an outcome different from what was forecast and what was rationally expected. The expectation of a correct forecast does not guarantee the truth of the forecast unless the model generating the forecast is true. Is Levine convinced that the models used by economists are sufficiently close to being true to generate valid forecasts with a frequency approaching that of the Newtonian model in forecasting, say, solar eclipses? More generally, Levine seems to be confusing the substantive content of a theory — what motivates the agents populating the theory and what constrains the choices of those agents in their interactions with other agents and with nature — with an assumption about how agents form expectations. This confusion becomes palpable in the next sentence.

By contrast if a theory is not one of rational expectations it means “if people believe this forecast it will not be true.”

I don’t know what it means to say “a theory is not one of rational expectations.” Almost every economic theory depends in some way on the expectations of the agents populating the theory. There are many possible assumptions to make about how expectations are formed. Most of those assumptions about how expectations are formed allow, though they do not require, expectations to correspond to the predictions of the model. In other words, expectations can be viewed as an equilibrating variable of a model. To make a stronger assertion than that is to make an empirical claim about how closely the real world corresponds to the equilibrium state of the model. Levine goes on to make just such an assertion. Referring to a non-rational-expectations theory, he continues:

Obviously such a theory has limited usefulness. Or put differently: if there is a correct theory, eventually most people will believe it, so it must necessarily be rational expectations. Any other theory has the property that people must forever disbelieve the theory regardless of overwhelming evidence — for as soon as the theory is believed it is wrong.

It is hard to interpret what Levine is saying. What theory or class of theories is being dismissed as having limited usefulness? Presumably, all theories that are not “of rational expectations.” OK, but why is their usefulness limited? Is it that they are internally inconsistent, i.e., they lack the fixed-point property whose absence signals internal inconsistency, or is there some other deficiency? Levine seems to be conflating the two very different ways of understanding rational expectations (a test for internal consistency v. a substantive empirical hypothesis). Perhaps that’s why Levine feels compelled to paraphrase. But the paraphrase makes it clear that he is not distinguishing between the substantive theory and the specific expectational hypothesis. I also can’t tell whether his premise (“if there is a correct theory”) is meant to be a factual statement or a hypothetical. If it is the former, it would be nice if the correct theory were identified. If the correct theory can’t even be identified, how are people supposed to know which theory they are supposed to believe, so that they can form their expectations accordingly? Rather than an explanation for why the correct rational-expectations theory will eventually be recognized, this sounds like an explanation for why the correct theory is unknowable. Unless, of course, we assume that rational expectations are a necessary feature of reality, in which case people have been forming expectations based on the one true model all along, and all economists are doing is trying to formalize a pre-existing process of expectations formation that already solves the problem. But the rest of his post (see part two here) makes it clear that Levine (properly) does not hold that extreme position about rational expectations.

So in the end, I find myself unable to make sense of rational expectations except as a test for the internal consistency of an economic model and, perhaps also, as a tool for policy analysis. Just as one does not want to work with a model that is internally inconsistent, one does not want to formulate a policy based on the assumption that people will fail to understand the effects of the policy being proposed. But as a tool for understanding how economies actually work and what can go wrong, the rational-expectations assumption abstracts from precisely the key problem: the inconsistencies between the expectations held by different agents, which are an inevitable, though certainly not the only, cause of the surprise and regret that are so characteristic of real life.
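
To make the consistency-test reading concrete, consider a toy model, my own illustration with made-up parameters rather than anything in Levine's post, in which the realized price depends on the price people expect: p = a - b * p_expected. Requiring that expectations be model-consistent, p_expected = p, pins down the unique fixed point p* = a/(1+b); any other expectation generates an outcome that refutes it.

```python
# Toy model (invented parameters): realized price depends on the price
# agents expect, p = a - b * p_expected.

a, b = 10.0, 0.5

def realized_price(p_expected):
    """Price the model generates when agents expect p_expected."""
    return a - b * p_expected

# Rational expectations require p_expected = realized price, i.e. the
# fixed point p* = a / (1 + b) of the expectations-to-outcome mapping.
p_star = a / (1 + b)

# Any other expectation is refuted by the outcome it produces; iterating
# the mapping here happens to converge to the fixed point because |b| < 1.
p_e = 2.0
for _ in range(25):
    p_e = realized_price(p_e)

print(f"fixed point p* = {p_star:.4f}, iterated expectation = {p_e:.4f}")
```

Note that the iteration is well-behaved here only because |b| < 1; the consistency requirement itself says nothing about whether actual expectations in the real world gravitate toward the fixed point.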

The Microfoundations Wars Continue

I see belatedly that the battle over microfoundations continues in the blogosphere, with Paul Krugman, Noah Smith, Adam Posen, and Nick Rowe all challenging the microfoundations position, while Tony Yates and Stephen Williamson defend it, and Simon Wren-Lewis tries to serve as a peacemaker of sorts. I agree with most of the criticisms, but what I found most striking was the defense of microfoundations offered by Tony Yates, who expresses the mentality of the microfoundations school so well that I thought some further commentary on his post would be worthwhile.

Yates’s post was prompted by a Twitter exchange between Yates and Adam Posen after Posen tweeted that microfoundations have no merit, an exaggeration no doubt, but not an unreasonable one. Noah Smith chimed in with a challenge to Yates to defend the proposition that microfoundations do have merit. Hence the title (“Why Microfoundations Have Merit”) of Yates’s post. What really caught my attention is that, in trying to defend the proposition that microfounded models do have merit, Yates offers the following methodological, or perhaps aesthetic, pronouncement.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

Paraphrasing, I would say that Yates defines merit as a useful insight into, or prediction about, the way the world works. Fair enough. He then defines microfounded models as those models that tell an explicit story about what the agents populating the model are trying to do and the resulting outcomes of their efforts. This strikes me as a definition that includes more than just microfounded models, but let that pass, at least for the moment. Then comes the key point. These models “are the ONLY models that have anything genuinely economic to say about anything.” A breathtaking claim.

In other words, Yates believes that unless an insight, a proposition, or a conjecture, can be logically deduced from microfoundations, it is not economics. So whatever the merits of microfounded models, a non-microfounded model is not, as a matter of principle, an economic model. Talk about methodological authoritarianism.

Having established, to his own satisfaction at any rate, that only microfounded models have a legitimate claim to be considered economic, Yates defends the claim that microfounded models have merit by citing the Lucas critique as an early example of a meritorious insight derived from the “microfoundations project.” Now there is something a bit odd about this claim, because Yates neglects to mention that the Lucas critique, as Lucas himself acknowledged, had been anticipated by earlier economists, including both Keynes and Tinbergen. So if the microfoundations project does indeed have merit, the example chosen to illustrate that merit does nothing to show that the merit is in any way peculiar to the microfoundations project. It also bears repeating (see my earlier post on the Lucas critique) that the Lucas critique only tells us about steady states, so it provides no useful information, insight, prediction or guidance about using monetary policy to speed up the recovery to a new steady state. So we should be careful not to attribute more merit to the Lucas critique than it actually possesses.
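
For readers who want to see the mechanics of the critique itself, here is a minimal simulation, entirely my own construction rather than Lucas's model or anything from Yates's post: output responds only to money surprises, so the reduced-form money-output slope an econometrician would estimate depends on the policy rule, and shifts when the rule changes.

```python
# Toy Lucas-critique simulation (my construction, not Lucas's own model).
# Structure: only money surprises move output, y_t = beta * (m_t - E[m_t]).
# Policy rule: m_t = rho * m_{t-1} + eps_t, with rho known to agents,
# so that E[m_t] = rho * m_{t-1}.
import numpy as np

rng = np.random.default_rng(0)
beta, T = 1.0, 50_000

def estimated_money_output_slope(rho):
    """OLS slope of a reduced-form regression of output on money under rule rho."""
    eps = rng.standard_normal(T)
    m = np.zeros(T)
    for t in range(1, T):
        m[t] = rho * m[t - 1] + eps[t]
    m_lag = np.concatenate(([0.0], m[:-1]))
    y = beta * (m - rho * m_lag)        # output responds to surprises only
    return np.cov(y, m)[0, 1] / np.var(m)

# The reduced-form slope is not a structural constant: analytically it
# equals beta * (1 - rho**2), so it shifts whenever the policy rule does.
for rho in (0.0, 0.5, 0.9):
    print(f"rho = {rho}: estimated slope ~ {estimated_money_output_slope(rho):.3f}")
```

A policymaker who treated the slope estimated under one rule as an exploitable tradeoff under another would be systematically misled, which is the critique's point. But note, consistent with the argument above, that the exercise compares steady states and says nothing about the speed of adjustment between them.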

To be sure, in his Twitter exchange with Adam Posen, Yates mentioned several other meritorious contributions from the microfoundations project, each of which Posen rejected because, in his view, the merit of those contributions lies in the intuition behind the one-line idea. To which Yates responded:

This statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, are not separate things, the origins of the models.  They are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the model.  One could make an argument for the separateness of ‘intuition’ at best, I think, as classifying it in some cases to be a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss it.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

And Yates’s response is highly perplexing to me. An economic model is usually the product of a thought process intended to construct a coherent model from mental raw materials (ideas) and resources (knowledge and techniques). The thought process is an attempt to embody some idea or ideas about a posited causal mechanism or about a posited mutual interdependency among variables of interest. The intuition is the idea or insight that some such causal mechanism or mutual interdependency exists. A model is one particular implementation (out of many possible implementations) of the idea in a way that allows further implications of the idea to be deduced, thereby achieving a deeper understanding of the original insight. The “microfoundations project” does not directly determine what kinds of ideas can be modeled, but it does require that models have certain properties to be considered acceptable implementations of any idea. In particular, the model must incorporate a dynamic stochastic general equilibrium system with rational expectations and a unique equilibrium. Ideas not tractable under those modeling constraints are excluded. Posen’s point, it seems to me, is not that no worthwhile, meritorious ideas have been modeled within the constraints imposed by the microfoundations project, but that the microfoundations project has done nothing to create or propagate those ideas; it has merely forced those ideas to be implemented within its template.

None of the characteristic properties of the microfoundations project are assumptions for which there is compelling empirical or theoretical justification. We know how to prove the existence of a general equilibrium for economic models populated by agents satisfying certain rationality assumptions (assumptions for which there is no compelling a priori argument and whose primary justifications are tractability and the accuracy of the empirical implications deduced from them), but the conditions for a unique general equilibrium are far more stringent than the standard convexity assumptions required to prove existence. Moreover, even given the existence of a unique general equilibrium, there is no proof that an economy not in general equilibrium will reach the general equilibrium under the standard rules of price adjustment. Nor is there any empirical evidence to suggest that actual economies are in any sense in a general equilibrium, though one might reasonably suppose that actual economies are from time to time in the neighborhood of a general equilibrium. The rationality of expectations is in one sense an entirely ad hoc assumption, though an inconsistency between the predictions of a model and the rational expectations of the agents in the model is surely a sign that there is a problem in the structure of the model. But just because rational expectations can be used to check for latent design flaws in a model, it does not follow that assuming rational expectations leads to empirical implications that are generally, or even occasionally, empirically valid.
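
The uniqueness and stability worries can be illustrated with an invented excess-demand function, a pure toy not derived from any underlying economy: with tatonnement adjustment dp/dt = z(p) and three market-clearing prices, the middle equilibrium is unstable, and which equilibrium the auctioneer finds depends entirely on where the price starts.

```python
# Toy illustration of non-uniqueness and tatonnement instability.
# z(p) is an invented excess-demand function with three market-clearing
# prices (z = 0 at p = 1, 2, 3); it is not derived from any real economy.

def excess_demand(p):
    return -(p - 1.0) * (p - 2.0) * (p - 3.0)

def tatonnement(p0, dt=0.01, steps=5000):
    """Euler integration of the price-adjustment rule dp/dt = z(p)."""
    p = p0
    for _ in range(steps):
        p += dt * excess_demand(p)
    return p

# Starting just below or just above the middle equilibrium sends the
# price to different equilibria: p = 2 is unstable, p = 1 and p = 3 stable.
for p0 in (1.9, 2.1):
    print(f"start at {p0} -> price converges to {tatonnement(p0):.4f}")
```

And the toy is less contrived than it looks: Scarf famously constructed exchange economies with well-behaved preferences in which tatonnement never converges at all.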

Thus, the key assumptions of microfounded models are not logically entailed by any deep axioms; they are imposed by methodological fiat, a philosophically and pragmatically unfounded insistence that certain modeling conventions be adhered to in order to count as “scientific.” Now it would be one thing if these modeling conventions were generating new, previously unknown, empirical relationships or more accurate predictions than those generated by non-microfounded models, but evidence that the predictions of microfounded models are better than the predictions of non-microfounded models is notably lacking. Indeed, Carlaw and Lipsey have shown that microfounded models generate predictions that are less accurate than those generated by non-microfounded models. If microfounded theories represent scientific progress, they ought to be producing an increase, not a decrease, in explanatory power.

The microfoundations project is predicated on a gigantic leap of faith that the existing economy has an underlying structure corresponding closely enough to the assumptions of the Arrow-Debreu model, suitably adjusted for stochastic elements and for whatever frictions (e.g., Calvo pricing) the modeler judges allowable, to warrant modeling the economy as such a system. This is classic question-begging with a vengeance: arriving at a conclusion by assuming what needs to be proved. Such question-begging is not necessarily illegitimate; every research program is based on some degree of faith or optimism that results not yet in hand will justify the effort required to generate those results. What is not legitimate is the claim that ONLY the models based on such question-begging assumptions are genuinely scientific.

This question-begging mentality masquerading as science is actually not unique to the microfoundations school. It is not uncommon among those with an exaggerated belief in the powers of science, a mentality that Hayek called scientism. It is akin to physicalism, the philosophical doctrine that all phenomena are physical. According to physicalism, there are no mental phenomena. What we perceive as mental phenomena, e.g., consciousness, is not real, but an illusion. Our mental states are really nothing but physical states. I do not say that physicalism is false, just that it is a philosophical position, not a proposition derived from science, and certainly not a fact that is, or can be, established by the currently available tools of science. It is a faith that some day — some day probably very, very far off into the future — science will demonstrate that our mental processes can be reduced to, and derived from, the laws of physics. Similarly, given the inability to account for observed fluctuations of output and employment in terms of microfoundations, the assertion that only microfounded models are scientific is simply an expression of faith in some, as yet unknown, future discovery, not a claim supported by any available scientific proof or evidence.

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than like a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than to the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.
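
A bare-bones illustration of expectation-dependent equilibria, invented for this purpose rather than taken from any of the posts discussed: suppose each agent adopts a network good if the adoption share he expects exceeds his personal threshold, so that an expected share e produces a realized share B(e). If B is S-shaped, the self-fulfilling outcomes, i.e., the fixed points B(e) = e, are multiple: mass adoption, mass abstention, and an unstable tipping point in between.

```python
# Toy network-adoption model with expectation-dependent equilibria.
# B(e): realized adoption share when everyone expects a share e to adopt.
# The S-shaped form and its steepness k are invented for illustration.
import math

def realized_share(e, k=12.0):
    """Share of agents whose adoption threshold lies below the expected share e."""
    return 1.0 / (1.0 + math.exp(-k * (e - 0.5)))

# Self-fulfilling expectations are fixed points B(e) = e: scan a grid
# for sign changes of B(e) - e.
fixed_points = []
prev = realized_share(0.0) - 0.0
for i in range(1, 1001):
    e = i / 1000
    cur = realized_share(e) - e
    if cur == 0.0:
        fixed_points.append(e)
    elif prev * cur < 0:
        fixed_points.append(round(e, 3))
    prev = cur

print("self-fulfilling adoption shares (approx.):", fixed_points)
# Three equilibria: near-zero adoption, a tipping point at 0.5 (unstable
# under naive expectation revision), and near-universal adoption.
```

Which equilibrium obtains depends on nothing but what people expect, which is precisely why expectations in such settings can be self-fulfilling without being equilibrating.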

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
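
The point is easiest to see in the simplest textbook setting, purely illustrative and not specific to any of the models discussed above. With linear demand and supply:

```latex
% Linear demand and supply (illustrative, with b, d > 0):
Q^{d} = a - bP, \qquad Q^{s} = c + dP.
% Market clearing, Q^{d} = Q^{s}, yields the equilibrium price:
P^{*} = \frac{a - c}{b + d}.
% Comparative statics of a demand shift:
\frac{\partial P^{*}}{\partial a} = \frac{1}{b + d} > 0.
```

The sign of the derivative is informative only on the maintained hypothesis that the market actually travels from the old equilibrium to the new one. If the equilibrium were not unique, or not stable, the comparison would be between points the economy might never visit, which is precisely the macrofoundation hidden inside the exercise.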

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
