In two recent blog posts (here and here), Simon Wren-Lewis wrote sensibly about microfoundations. Though triggered by Wren-Lewis’s posts, the following comments are not intended as criticisms of him, even though I think he gives microfoundations (as they are now understood) too much credit. Rather, my criticism is aimed at the way microfoundations have come to be used to restrict the kinds of macroeconomic explanations and models that working macroeconomists are willing to consider. I have written about microfoundations before on this blog (here and here), and some, if not most, of what I am going to say may be repetitive, but the misconceptions associated with what Wren-Lewis calls the “microfoundations project” are obviously not going to be dispelled by a couple of blog posts, so a little repetitiveness may not be such a bad thing. Jim Buchanan liked to quote the following passage from Herbert Spencer’s The Data of Ethics:
Hence an amount of repetition which to some will probably appear tedious. I do not, however, much regret this almost unavoidable result; for only by varied iteration can alien conceptions be forced on reluctant minds.
When the idea of providing microfoundations for macroeconomics started to catch on in the late 1960s – and probably nowhere did it catch on sooner or with more enthusiasm than at UCLA – the idea resonated, because macroeconomics, which then consisted mainly of various versions of the Keynesian model, seemed to embody presumptions about how markets work that contradicted those of microeconomics. In microeconomics, the primary mechanism for achieving equilibrium is the price (actually the relative price) of whatever good is being analyzed. A full (or general) microeconomic equilibrium involves a set of prices such that every market (whether for final outputs or for inputs into the productive process) is in equilibrium, equilibrium meaning that every agent is able to purchase or sell as much of any output or input as desired at the equilibrium price. The set of equilibrium prices not only achieves equilibrium; under some conditions, the equilibrium also has optimal properties, because each agent, in choosing how much to buy or sell of each output or input, is presumed to be acting optimally given the agent’s preferences and the social constraints under which the agent operates. Those optimal properties do not always follow from microeconomic presumptions, optimality being dependent on the particular assumptions (about preferences, production and exchange technology, and property rights) adopted by the analyst in modeling an individual market or an entire system of markets.
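To make the market-clearing idea concrete, here is a minimal sketch of a single market with hypothetical linear demand and supply schedules (the functional forms and numbers are purely illustrative, not drawn from any model discussed here). The equilibrium price is the one at which excess demand is zero, so every agent can trade as much as desired at that price.

```python
# Hypothetical linear demand and supply schedules (illustrative numbers only).
def demand(p, a=100.0, b=2.0):
    """Quantity demanded at price p: D(p) = a - b*p."""
    return a - b * p

def supply(p, c=10.0, d=1.0):
    """Quantity supplied at price p: S(p) = c + d*p."""
    return c + d * p

def clearing_price(a=100.0, b=2.0, c=10.0, d=1.0):
    """Price at which excess demand D(p) - S(p) is zero: p* = (a - c)/(b + d)."""
    return (a - c) / (b + d)

p_star = clearing_price()
# At p*, the market clears: quantity demanded equals quantity supplied,
# so every agent can buy or sell as much as desired at that price.
assert abs(demand(p_star) - supply(p_star)) < 1e-9
```

With these particular numbers the clearing price works out to 30, at which 40 units are demanded and supplied; the point is only that "equilibrium" here means zero excess demand at the going price.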
The problem with Keynesian macroeconomics was that it seemed to overlook, or ignore, or dismiss, or deny, the possibility that a price mechanism is operating — or could operate — to achieve equilibrium in the markets for goods and for labor services. In other words, the Keynesian model seemed to be saying that a macroeconomic equilibrium is compatible with the absence of market clearing, notwithstanding that the absence of market clearing had always been viewed as the defining characteristic of disequilibrium. Thus, from the perspective of microeconomic theory, if there is an excess supply of workers offering labor services, i.e., if there are unemployed workers who would be willing to work at the same wage that currently employed workers are receiving, there ought to be market forces that would reduce wages to a level at which all workers willing to work at that wage could gain employment. Keynes, of course, had attempted to explain why workers could reduce only their nominal wages, not their real wages, arguing that nominal wage cuts would simply induce equivalent price reductions, leaving real wages and employment unchanged. But the microeconomic reasoning behind that argument hinged on Keynes’s assumption that nominal wage cuts would trigger proportionate price cuts, and that assumption was not exactly convincing, if only because the percentage price cut would seem to depend not just on the percentage reduction in the nominal wage, but also on the labor intensity of the product. Keynes habitually, and inconsistently, argued as if labor were the only factor of production while at the same time invoking the principle of diminishing marginal productivity.
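The labor-intensity objection can be checked with simple arithmetic. If prices are set as a constant markup over unit cost, a nominal wage cut lowers price only in proportion to labor’s share of cost, so the real wage falls unless labor is the only input. The numbers below are purely illustrative, not taken from Keynes or anyone else:

```python
# Illustrative arithmetic (hypothetical numbers): under a constant markup
# over unit cost, a wage cut reduces price only by labor's cost share,
# so the price cut is proportionate only when labor_share == 1.
wage_cut = 0.10        # 10% cut in the nominal wage
labor_share = 0.6      # labor's share of unit cost (hypothetical)

price_cut = labor_share * wage_cut            # 6% price cut, not 10%
real_wage_change = (1 - wage_cut) / (1 - price_cut) - 1

# With labor_share < 1, the price falls by less than the wage,
# so the real wage falls rather than staying unchanged.
assert real_wage_change < 0
```

Only in the limiting case where labor is the sole factor of production (labor_share = 1) does the price cut match the wage cut and leave the real wage unchanged, which is the inconsistency noted above.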
At UCLA, the point of finding microfoundations was not to create a macroeconomics that would simply reflect the results and optimal properties of a full general-equilibrium model. Indeed, what made the UCLA approach to microeconomics distinctive was that it aimed at deriving testable implications from relaxing the usual informational and institutional assumptions (full information, zero transactions costs, fully defined and enforceable property rights) underlying conventional microeconomic theory. If the way forward in microeconomics was to move away from the extreme assumptions underlying the perfectly competitive model, then it seemed plausible that relaxing those assumptions would be fruitful in macroeconomics as well. That led Armen Alchian and others at UCLA to think of unemployment as largely a search phenomenon. For a while that approach seemed promising, and to some extent the promise was fulfilled, but many implications of a purely search-theoretic approach to unemployment don’t seem to be well supported empirically. For example, search models suggest that quits increase in recessions and that workers become more likely to refuse offers of employment in a downturn than before it; neither implication seems to be true. A search model would suggest that workers are unemployed because they are refusing offers below their reservation wage, but in fact most workers become unemployed because they are laid off, and in recessions workers seem willing to accept offers of employment at the same wage that other workers are getting. Now it is possible to reinterpret workers’ behavior in recessions in a way that corresponds to the search-theoretic model, but the reinterpretation seems a bit of a stretch.
Even though he was an early exponent of the search theory of unemployment, Alchian greatly admired and frequently cited a 1974 paper by Donald Gordon, “A Neoclassical Theory of Keynesian Unemployment,” which proposed an implicit-contract theory of the employer-employee relationship. The idea was that workers make long-term commitments to their employers and, realizing how vulnerable that commitment leaves them to exploitation by a unilateral wage cut imposed under threat of termination, expect some assurance from their employer that they will not be subjected to such a demand. Such implicit understandings make it very difficult for employers facing a reduction in demand to force workers to accept a wage cut, because doing so would make it hard for the employer to retain the workers it values most highly and to attract new workers.
Gordon’s theory of implicit wage contracts bears a certain similarity to Dennis Carlton’s explanation of why many suppliers don’t immediately raise prices to their steady customers. Like Gordon, Carlton posits the existence of implicit, and sometimes explicit, contracts in which customers commit to purchase minimum quantities, or to purchase their “requirements,” from a particular supplier. In return for the assurance of having a regular customer on whom the supplier can count, the supplier assures the customer that he will receive his customary supply at the agreed-upon price even if market conditions change. Rather than raise the price in the event of a shortage, the supplier may feel obligated to continue supplying his regular customers at the customary price, while raising the price to new or occasional customers to “market-clearing” levels. For certain kinds of supply relationships, in which customer and supplier expect to continue transacting regularly over a long period of time, price is not the sole method by which allocation decisions are made.
Klein, Crawford and Alchian discussed a similar idea in their 1978 article about vertical integration as a means of avoiding or mitigating the threat of holdup when a supplier and a customer must invest in some sunk asset, e.g., a pipeline connection, for the supply relationship to be possible. The sunk investment implies that either party, under the right circumstances, could hold up the other by threatening to withdraw from the relationship, leaving the other party stuck with a useless fixed asset. Vertical integration avoids the problem by aligning the incentives of the two parties, eliminating the potential for holdup. Price rigidity can thus be viewed as a milder form of vertical integration for cases in which transactors have a relatively long-term relationship and want to assure each other that neither will be taken advantage of after making a commitment (i.e., foregoing other trading opportunities) to the other party.
The search model is fairly easy to incorporate into a standard framework, because search can be treated as a form of self-employment that is an alternative to accepting employment. The shape and position of the individual’s supply curve reflect his expectations about the future wage offers he will receive if he chooses not to accept employment in the current period. The more optimistic the worker’s expectation of future wages, the higher the worker’s reservation wage in the current period; the more certain the worker feels about the expected future wage, the more elastic is his supply curve in the neighborhood of the expected wage. Thus, despite its empirical shortcomings, the search model could serve as a convenient heuristic device for modeling cyclical increases in unemployment attributable to the unwillingness of workers to accept nominal wage cuts. From a macroeconomic modeling perspective, the incorrect or incomplete representation of the reason for the unwillingness of workers to accept wage cuts may be less important than the overall implication of the model, which is that unanticipated aggregate demand shocks can have significant and persistent effects on real output and employment. For example, in his reformulation of macroeconomic theory, Earl Thompson, though certainly aware of Donald Gordon’s paper, relied exclusively on a search-theoretic rationale for Keynesian unemployment, and I don’t know (or can’t remember) whether he had a specific objection to Gordon’s model or simply preferred the search-theoretic approach for pragmatic modeling reasons.
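The reservation-wage logic described above can be sketched in a few lines. This is a stylized McCall-type search model with made-up numbers, not Thompson’s or Gordon’s actual model; it only illustrates that more optimistic expectations about future wage offers raise the reservation wage in the current period.

```python
# A minimal McCall-style search sketch (hypothetical numbers, not from
# the post). An unemployed worker draws wage offers and accepts any
# offer at or above a reservation wage w*, which solves
#     w* = b + (beta/(1-beta)) * E[max(w - w*, 0)],
# where b is income while searching and beta is the discount factor.
def reservation_wage(offers, b=1.0, beta=0.95, tol=1e-8):
    k = beta / (1.0 - beta)

    def excess(w):
        # w - b - k*E[max(offer - w, 0)]: increasing in w, zero at w*.
        surplus = sum(max(o - w, 0.0) for o in offers) / len(offers)
        return w - b - k * surplus

    lo, hi = b, max(offers)          # excess(lo) <= 0 <= excess(hi)
    while hi - lo > tol:             # bisect to find the root
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

offers = [10, 12, 14, 16, 18, 20]                 # hypothetical offer distribution
low = reservation_wage(offers)
high = reservation_wage([w + 4 for w in offers])  # more optimistic offers
assert high > low   # optimism about future wages raises the reservation wage
```

Shifting the whole offer distribution upward raises the worker’s reservation wage, which is the mechanism by which optimistic (or sticky) wage expectations translate into refused offers and measured unemployment in this class of models.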
At any rate, these comments about the role of search models in modeling unemployment decisions are meant to illustrate why microfoundations could be useful for macroeconomics: by adding to the empirical content of macromodels, providing insight into the decisions or circumstances that lead workers to accept or reject employment in the aftermath of aggregate demand shocks, or into why employers impose layoffs rather than offer employment at reduced wages. The spectrum of such microeconomic theories of the employer-employee relationship has provided us with a richer understanding of what the term “sticky wages” might actually be referring to, beyond the existence of minimum wage laws or collective bargaining contracts specifying nominal wages over a period of time for all covered employees.
In this context, microfoundations meant providing a more theoretically satisfying, more microeconomically grounded explanation for a phenomenon – “sticky wages” – that seemed somehow crucial for generating the results of the Keynesian model. I don’t think anyone would question that microfoundations in this narrow sense has been an important and useful area of research, and it is not microfoundations in this sense that is controversial. The sense in which microfoundations is controversial is the requirement that the aggregate quantities generated by a macroeconomic model be shown to be consistent with the optimizing choices of all agents in the model. In other words, the equilibrium solution of a macroeconomic model must be such that all agents are optimizing intertemporally, subject to whatever informational imperfections are specified by the model. If the model is not derived from, or consistent with, the solution to such an intertemporal optimization problem, the macromodel is now considered inadequate and unworthy of consideration. Here’s how Michael Woodford, a superb economist, but very much part of the stifling microfoundations consensus that has overtaken macroeconomics, put it in his paper “The Convergence in Macroeconomics: Elements of the New Synthesis”:
But it is now accepted that one should know how to render one’s growth model and one’s business-cycle model consistent with one another in principle, on those occasions when it is necessary to make such connections. Similarly, microeconomic and macroeconomic analysis are no longer considered to involve fundamentally different principles, so that it should be possible to reconcile one’s views about household or firm behavior, or one’s view of the functioning of individual markets, with one’s model of the aggregate economy, when one needs to do so.
In this respect, the methodological stance of the New Classical school and the real business cycle theorists has become the mainstream. But this does not mean that the Keynesian goal of structural modeling of short-run aggregate dynamics has been abandoned. Instead, it is now understood how one can construct and analyze dynamic general-equilibrium models that incorporate a variety of types of adjustment frictions, that allow these models to provide fairly realistic representations of both shorter-run and longer-run responses to economic disturbances. In important respects, such models remain direct descendants of the Keynesian macroeconometric models of the early postwar period, though an important part of their DNA comes from neoclassical growth models as well.
Woodford argues that by incorporating various imperfections into their general-equilibrium models – e.g., imperfectly competitive output and labor markets, lags in the adjustment of wages and prices to changes in market conditions, and search and matching frictions – it is possible to reconcile the existence of underutilized resources with intertemporal optimization by agents.
The insistence of monetarists, New Classicals, and early real business cycle theorists on the empirical relevance of models of perfect competitive equilibrium — a source of much controversy in past decades — is not what has now come to be generally accepted. Instead, what is important is having general-equilibrium models in the broad sense of requiring that all equations of the model be derived from mutually consistent foundations, and that the specified behavior of each economic unit make sense given the environment created by the behavior of the others. At one time, Walrasian competitive equilibrium models were the only kind of models with these features that were well understood; but this is no longer the case.
Woodford shows no recognition of the possibility of multiple equilibria, or of the possibility that the evolution of an economic system and its time-series data may be path-dependent, making the long-run neutrality propositions characterizing most DSGE models untenable. If the world – the data-generating mechanism – is not like the world assumed by modern macroeconomics, the estimates derived from econometric models reflecting the worldview of modern macroeconomics will be inferior to estimates derived from an econometric model reflecting another, more accurate, worldview. For example, if there are many possible equilibria, depending on changes in expectational parameters or on accidental deviations from an equilibrium time path, the idea of intertemporal optimization may not even be meaningful; rather than optimize, agents may simply follow simple rules of thumb. But, on methodological principle, modern macroeconomics treats the estimates generated by any alternative econometric model insufficiently grounded in the microeconomic principles of intertemporal optimization as illegitimate.
Even worse from the perspective of microfoundations are the implications of the Sonnenschein-Mantel-Debreu Theorem, which, as I imperfectly understand it, says something like the following: even granting the usual assumptions of the standard general-equilibrium model – continuous individual demand and supply functions, homogeneity of degree zero in prices, Walras’s Law, and suitable boundary conditions on demand and supply functions – there is no guarantee that such an economy has a unique, stable equilibrium. Thus, even apart from the dependence of equilibrium on expectations, there is no rationally expected equilibrium, because there is no unique equilibrium to serve as an attractor for expectations. Thus, as I have pointed out before, as much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.
Now let us compare the methodological demand for microfoundations for macroeconomics, which I would describe as a kind of macroeconomic methodological reductionism, with the reductionism of Newtonian physics. Newtonian physics reduced the Keplerian laws of planetary motion to more fundamental principles of gravitation governing the motion of all bodies, celestial and terrestrial, and in so doing achieved an astounding increase in explanatory power and empirical scope. What has the methodological reductionism of modern macroeconomics achieved? Reductionism was not the source, but the result, of scientific progress. But as Carlaw and Lipsey demonstrated recently in an important paper, methodological reductionism in macroeconomics has resulted in a clear retrogression of empirical and explanatory power. Thus, methodological reductionism in macroeconomics is an antiscientific exercise in methodological authoritarianism.