Posts Tagged 'microfoundations'

Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, “A Realistic View of Logic, Physics, and History,” published in his collection of essays, Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductivism in science and philosophy, a topic about which I’ve written in a number of previous posts on the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers was written by Jack Schwartz and appears in a volume edited by Ernst Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, “The Pernicious Influence of Mathematics on Science,” caught my eye, so I tried to track it down. The essay being unavailable on the internet except behind a paywall, I bought a used copy for $6 including postage. It was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems; multimedia authoring tools; experimental studies of visual perception; multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA’s Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine’s “intelligence” upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, inasmuch as reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. “Mathematics,” Schwartz writes, “adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased.” The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can’t be specified or even meaningfully restricted except on very special simplifying assumptions; it is too special because the simplifying assumptions about the agents, the technologies, the constraints, and the price-setting mechanism are at best only approximations and, at worst, are entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach, which was foreign to the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian, he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall “market” for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. For some reason, though, that didn’t stop Keynes from trying to explain the nominal rate of interest by positing a demand curve to hold money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

Hayek and Rational Expectations

In this, my final, installment on Hayek and intertemporal equilibrium, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. In his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the meaning normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs held by other agents when observed prices differ from what they had expected prices to be. Agents attribute the differences between observed and expected prices to information held by agents better informed than themselves, and revise their own expectations accordingly in light of the information that would have justified the observed prices.

In the early 1950s, one very rational agent, Armen Alchian, was able to figure out what chemicals were being used in making the newly developed hydrogen bomb by identifying companies whose stock prices had risen too rapidly to be explained otherwise. Alchian, who spent almost his entire career at UCLA while also moonlighting at the nearby Rand Corporation, wrote a paper for Rand in which he listed the chemicals used in making the hydrogen bomb. When people at the Defense Department heard about the paper – the Rand Corporation was started as a think tank largely funded by the Department of Defense to do research that the Defense Department was interested in – they went to Alchian and confiscated and destroyed the paper. Joseph Newhard recently wrote a paper about this episode in the Journal of Corporate Finance. Here’s the abstract:

At RAND in 1954, Armen A. Alchian conducted the world’s first event study to infer the fuel material used in the manufacturing of the newly-developed hydrogen bomb. Successfully identifying lithium as the fusion fuel using only publicly available financial data, the paper was seen as a threat to national security and was immediately confiscated and destroyed. The bomb’s construction being secret at the time but having since been partially declassified, the nuclear tests of the early 1950s provide an opportunity to observe market efficiency through the dissemination of private information as it becomes public. I replicate Alchian’s event study of capital market reactions to the Operation Castle series of nuclear detonations in the Marshall Islands, beginning with the Bravo shot on March 1, 1954 at Bikini Atoll which remains the largest nuclear detonation in US history, confirming Alchian’s results. The Operation Castle tests pioneered the use of lithium deuteride dry fuel which paved the way for the development of high yield nuclear weapons deliverable by aircraft. I find significant upward movement in the price of Lithium Corp. relative to the other corporations and to DJIA in March 1954; within three weeks of Castle Bravo the stock was up 48% before settling down to a monthly return of 28% despite secrecy, scientific uncertainty, and public confusion surrounding the test; the company saw a return of 461% for the year.

Radner also showed that the ability of some agents to infer the information that is leading other agents to bid prices away from the levels that had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of the future based on commonly shared knowledge.

So rather than pursue Radner’s conception of rational expectations, I will focus here on the conventional understanding of “rational expectations” in modern macroeconomics, which is that the price expectations formed by the agents in a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is a very important property that any model ought to have. It simply says that a model ought to have the property that if one assumes that the agents in a model expect the equilibrium predicted by the model, then, given those expectations, the solution of the model will turn out to be the equilibrium of the model. This property is a consistency and coherence property that any model, regardless of its substantive predictions, ought to have. If a model lacks this property, there is something wrong with the model.
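To make that consistency property concrete, here is a minimal Python sketch of a toy model of my own devising (the reduced form, the parameters, and the function name are all assumptions, not anything from the post): the price the model generates depends on the price agents expect, and the rational expectation is simply the fixed point at which the expected price and the model's own solution coincide.

```python
def realized_price(expected_price, a=10.0, b=0.5):
    """Toy reduced form (hypothetical): the price the model produces, given the expected price."""
    return a + b * expected_price

a, b = 10.0, 0.5
p_star = a / (1 - b)                                        # solve p* = a + b*p*
assert abs(realized_price(p_star, a, b) - p_star) < 1e-12   # the expectation is self-fulfilling

print(realized_price(30.0, a, b))   # 25.0 -- an expectation of 30 refutes itself
print(p_star)                       # 20.0 -- the model-consistent (rational) expectation
```

Any expectation other than the fixed point is contradicted by the outcome the model itself generates, which is all that the consistency sense of rational expectations requires.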

But there is a huge difference between saying that a model should have the property that correct expectations are self-fulfilling and saying that agents are in fact capable of predicting the equilibrium of the model. Assuming the former does not entail the latter. What kind of crazy model would have the property that correct expectations are not self-fulfilling? I mean, think about it: a model in which correct expectations are not self-fulfilling is a nonsense model.

But demanding that a model not spout gibberish is very different from insisting that the agents in the model necessarily have the capacity to predict what the equilibrium of the model will be. Rational expectations in the former sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a matter of methodological fiat. But methodological fiat is what rational expectations has become in macroeconomics.

In his 1937 paper on intertemporal equilibrium, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most accurate description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that over time expectations do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth in the early 1960s, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the idea of a cobweb cycle, in which producers simply assume that the current price will remain at whatever level currently prevails. If there is a time lag in production, as in agricultural markets between the initial application of inputs and the final yield of output, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period, and so on.
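Here is a rough Python simulation of the cobweb dynamics just described, using made-up linear demand and supply schedules of my own (the post does not specify any functional forms or parameters): producers plan next period's output on the assumption that today's price will persist, and the price then adjusts so that next period's output is sold.

```python
def cobweb(periods=6, a=100.0, b=1.0, c=10.0, d=1.5, p0=40.0):
    """Demand: q = a - b*p.  Supply planned at t and delivered at t+1: q = c + d*p_t."""
    prices = [p0]
    for _ in range(periods):
        q_next = c + d * prices[-1]   # output decided at the currently prevailing price
        p_next = (a - q_next) / b     # price at which next period's output is absorbed
        prices.append(p_next)
    return prices

# Prices alternate around the equilibrium (a - c)/(b + d) = 36; with d > b the
# swings widen each period, with d < b they damp back toward the equilibrium.
print([round(p, 2) for p in cobweb()])
```

Under these naive (static) expectations the boom-bust alternation is built into the model, which is exactly the feature Muth's rational-expectations assumption was meant to eliminate.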

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would base their price expectations on more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectation assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture of how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.

So while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – a point that Hayek understood better than perhaps anyone else – is that there is a huge difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all but one future relative price is known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists). Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are but one or at most two variables about which agents have to form their rational expectations.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is, and how difficult it is to explain any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

Microfoundations (aka Macroeconomic Reductionism) Redux

In two recent blog posts (here and here), Simon Wren-Lewis wrote sensibly about microfoundations. Though triggered by Wren-Lewis’s posts, the following comments are not intended as criticisms of him, though I think he does give microfoundations (as they are now understood) too much credit. Rather, my criticism is aimed at the way microfoundations have come to be used to restrict the kind of macroeconomic explanations and models that are up for consideration among working macroeconomists. I have written about microfoundations before on this blog (here and here)  and some, if not most, of what I am going to say may be repetitive, but obviously the misconceptions associated with what Wren-Lewis calls the “microfoundations project” are not going to be dispelled by a couple of blog posts, so a little repetitiveness may not be such a bad thing. Jim Buchanan liked to quote the following passage from Herbert Spencer’s Data of Ethics:

Hence an amount of repetition which to some will probably appear tedious. I do not, however, much regret this almost unavoidable result; for only by varied iteration can alien conceptions be forced on reluctant minds.

When the idea of providing microfoundations for macroeconomics started to catch on in the late 1960s – and probably nowhere did it catch on sooner or with more enthusiasm than at UCLA – the idea resonated, because macroeconomics, which then mainly consisted of various versions of the Keynesian model, seemed to embody certain presumptions about how markets work that contradicted the presumptions of microeconomics about how markets work. In microeconomics, the primary mechanism for achieving equilibrium is the price (actually the relative price) of whatever good is being analyzed. A full (or general) microeconomic equilibrium involves a set of prices such that each market (whether for final outputs or for inputs into the productive process) is in equilibrium, equilibrium meaning that every agent is able to purchase or sell as much of any output or input as desired at the equilibrium price. The set of equilibrium prices not only achieves equilibrium; under some conditions, the equilibrium also has optimal properties, because each agent, in choosing how much to buy or sell of each output or input, is presumed to be acting in a way that is optimal given the preferences of the agent and the social constraints under which the agent operates. Those optimal properties don’t always follow from microeconomic presumptions, optimality being dependent on the particular assumptions (about preferences, production and exchange technology, and property rights) adopted by the analyst in modeling an individual market or an entire system of markets.
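As a concrete illustration of what "a set of prices such that every market clears" means, here is a minimal Python sketch of a two-agent, two-good exchange economy of my own construction (the Cobb-Douglas preferences, endowments, and use of scipy are assumptions, not anything in the post): we search for the relative price at which aggregate excess demand for one good vanishes; by Walras's Law, the other market then clears as well.

```python
from scipy.optimize import brentq

# Each agent (hypothetical): (endowment of good 1, endowment of good 2, budget share spent on good 1)
agents = [(10.0, 2.0, 0.3), (2.0, 8.0, 0.7)]

def excess_demand_good1(p1, p2=1.0):
    """Aggregate demand for good 1 minus its aggregate endowment, with good 2 as numeraire."""
    total = 0.0
    for e1, e2, alpha in agents:
        wealth = p1 * e1 + p2 * e2
        total += alpha * wealth / p1 - e1   # Cobb-Douglas demand minus endowment
    return total

p1_star = brentq(excess_demand_good1, 0.01, 100.0)   # relative price at which the market clears
print(round(p1_star, 4), round(excess_demand_good1(p1_star), 10))
```

Each agent's demand is individually optimal given the price, and the equilibrium price is whatever value reconciles all those optimal choices with the available endowments.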

The problem with Keynesian macroeconomics was that it seemed to overlook, or ignore, or dismiss, or deny, the possibility that a price mechanism is operating — or could operate — to achieve equilibrium in the markets for goods and for labor services. In other words, the Keynesian model seemed to be saying that a macroeconomic equilibrium is compatible with the absence of market clearing, notwithstanding that the absence of market clearing had always been viewed as the defining characteristic of disequilibrium. Thus, from the perspective of microeconomic theory, if there is an excess supply of workers offering labor services, i.e., there are unemployed workers who would be willing to be employed at the same wage that currently employed workers are receiving, there ought to be market forces that would reduce wages to a level such that all workers willing to work at that wage could gain employment. Keynes, of course, had attempted to explain why workers could only reduce their nominal wages, not their real wages, and argued that nominal wage cuts would simply induce equivalent price reductions, leaving real wages and employment unchanged. The microeconomic reasoning on which that argument was based hinged on Keynes’s assumption that nominal wage cuts would trigger proportionate price cuts, but that assumption was not exactly convincing, if only because the percentage price cut would seem to depend not just on the percentage reduction in the nominal wage, but also on the labor intensity of the product. Keynes, habitually and inconsistently, argued as if labor were the only factor of production while at the same time invoking the principle of diminishing marginal productivity.

At UCLA, the point of finding microfoundations was not to create a macroeconomics that would simply reflect the results and optimal properties of a full general equilibrium model. Indeed, what made the UCLA approach to microeconomics distinctive was that it aimed at deriving testable implications from relaxing the usual informational and institutional assumptions (full information, zero transactions costs, fully defined and enforceable property rights) underlying conventional microeconomic theory. If the way forward in microeconomics was to move away from the extreme assumptions underlying the perfectly competitive model, then it seemed plausible that relaxing those assumptions would be fruitful in macroeconomics as well. That led Armen Alchian and others at UCLA to think of unemployment as largely a search phenomenon. For a while that approach seemed promising, and to some extent the promise was fulfilled, but many implications of a purely search-theoretic approach to unemployment don’t seem to be that well supported empirically. For example, search models suggest that in recessions, quits increase, and that workers become more likely to refuse offers of employment after the downturn than before. Neither of those implications seems to be true. A search model would suggest that workers are unemployed because they are refusing offers below their reservation wage, but in fact most workers are becoming unemployed because they are being laid off, and in recessions workers seem likely to accept offers of employment at the same wage that other workers are getting. Now it is possible to reinterpret workers’ behavior in recessions in a way that corresponds to the search-theoretic model, but the reinterpretation seems a bit of a stretch.

Even though he was an early exponent of the search theory of unemployment, Alchian greatly admired and frequently cited a 1974 paper by Donald Gordon, “A Neoclassical Theory of Keynesian Unemployment,” which proposed an implicit-contract theory of the employer-employee relationship. The idea was that workers make long-term commitments to their employers and, realizing that those commitments leave them vulnerable to exploitation by a unilateral wage cut imposed under threat of termination, expect some assurance from their employer that they will not be subjected to a unilateral demand to accept a wage cut. Such implicit understandings make it very difficult for employers, facing a reduction in demand, to force workers to accept a wage cut, because doing so would make it hard for the employer to retain the workers that are most highly valued and to attract new workers.

Gordon’s theory of implicit wage contracts has a certain similarity to Dennis Carlton’s explanation of why many suppliers don’t immediately raise prices to their steady customers. Like Gordon, Carlton posits the existence of implicit and sometimes explicit contracts in which customers commit to purchase minimum quantities or to purchase their “requirements” from a particular supplier. In return for the assurance of having a regular customer on whom the supplier can count, the supplier gives the customer assurance that he will receive his customary supply at the agreed upon price even if market conditions should change. Rather than raise the price in the event of a shortage, the supplier may feel that he is obligated to continue supplying his regular customers at the customary price, while raising the price to new or occasional customers to “market-clearing” levels. For certain kinds of supply relationships in which customer and supplier expect to continue transacting regularly over a long period of time, price is not the sole method by which allocation decisions are made.

Klein, Crawford and Alchian discussed a similar idea in their 1978 article about vertical integration as a means of avoiding or mitigating the threat of holdup when a supplier and a customer must invest in some sunk asset, e.g., a pipeline connection, for the supply relationship to be possible. The sunk investment implies that either party, under the right circumstances, could hold up the other party by threatening to withdraw from the relationship, leaving the other party stuck with a useless fixed asset. Vertical integration avoids the problem by aligning the incentives of the two parties, eliminating the potential for holdup. Price rigidity can thus be viewed as a milder form of vertical integration in cases where transactors have a relatively long-term relationship and want to assure each other that they will not be taken advantage of after making a commitment (i.e., foregoing other trading opportunities) to the other party.

The search model is fairly easy to incorporate into a standard framework because search can be treated as a form of self-employment that is an alternative to accepting employment. The shape and position of the individual’s supply curve reflect his expectations about the future wage offers that he will receive if he chooses not to accept employment in the current period. The more optimistic the worker’s expectation of future wages, the higher the worker’s reservation wage in the current period. The more certain the worker feels about the expected future wage, the more elastic is his supply curve in the neighborhood of the expected wage. Thus, despite its empirical shortcomings, the search model could serve as a convenient heuristic device for modeling cyclical increases in unemployment caused by the unwillingness of workers to accept nominal wage cuts. From a macroeconomic modeling perspective, the incorrect or incomplete representation of the reason for the unwillingness of workers to accept wage cuts may be less important than the overall implication of the model, which is that unanticipated aggregate-demand shocks can have significant and persistent effects on real output and employment. For example, in his reformulation of macroeconomic theory, Earl Thompson, though he was certainly aware of Donald Gordon’s paper, relied exclusively on a search-theoretic rationale for Keynesian unemployment, and I don’t know (or can’t remember) whether he had a specific objection to Gordon’s model or simply preferred the search-theoretic approach for pragmatic modeling reasons.
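To show how a reservation wage can be computed from a worker's beliefs about future offers, here is a rough Python sketch along the lines of a standard McCall-type search model; the particular model, the numbers, and the simplification that an accepted wage is kept forever are all my own assumptions, not anything specified in the post.

```python
import numpy as np

def reservation_wage(wages, probs, b=10.0, beta=0.95, tol=1e-10):
    """Iterate on the value of continued search until it converges, then back out the cutoff wage."""
    v_search = 0.0
    while True:
        # value of an offer: take the wage forever, or keep searching
        offer_values = np.maximum(wages / (1 - beta), v_search)
        v_new = b + beta * np.dot(probs, offer_values)
        if abs(v_new - v_search) < tol:
            return (1 - beta) * v_new   # wage at which the worker is just indifferent
        v_search = v_new

wages = np.array([20.0, 30.0, 40.0, 50.0, 60.0])
pessimistic = np.array([0.40, 0.30, 0.20, 0.08, 0.02])   # expects mostly low offers
optimistic  = np.array([0.02, 0.08, 0.20, 0.30, 0.40])   # expects mostly high offers
print(reservation_wage(wages, pessimistic))   # lower reservation wage
print(reservation_wage(wages, optimistic))    # higher reservation wage
```

The comparison of the two offer distributions illustrates the point in the text: the more optimistic the worker's expectations about future wage offers, the higher the wage he must be offered today to give up searching.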

At any rate, these comments about the role of search models in modeling unemployment decisions are meant to illustrate why microfoundations could be useful for macroeconomics: by adding to the empirical content of macromodels, they provide insight into the decisions or circumstances that lead workers to accept or reject employment in the aftermath of aggregate-demand shocks, or that lead employers to impose layoffs rather than offer employment at reduced wages. The spectrum of such microeconomic theories of the employer-employee relationship has provided us with a richer understanding of what the term “sticky wages” might actually be referring to, beyond the existence of minimum wage laws or collective bargaining contracts specifying nominal wages over a period of time for all covered employees.

In this context microfoundations meant providing a more theoretically satisfying, more microeconomically grounded explanation for a phenomenon – “sticky wages” – that seemed somehow crucial for generating the results of the Keynesian model. I don’t think that anyone would question that microfoundations in this narrow sense has been an important and useful area of research. And it is not microfoundations in this sense that is controversial. The sense in which microfoundations is controversial is whether the aggregate quantities generated by a macroeconomic model must be shown to be consistent with the optimizing choices of all agents in the model. In other words, the equilibrium solution of a macroeconomic model must be such that all agents are optimizing intertemporally, subject to whatever informational imperfections are specified by the model. If the model is not derived from, or consistent with, the solution to such an intertemporal optimization problem, the macromodel is now considered inadequate and unworthy of consideration. Here’s how Michael Woodford, a superb economist, but very much part of the stifling microfoundations consensus that has overtaken macroeconomics, put it in his paper “The Convergence in Macroeconomics: Elements of the New Synthesis.”

But it is now accepted that one should know how to render one’s growth model and one’s business-cycle model consistent with one another in principle, on those occasions when it is necessary to make such connections. Similarly, microeconomic and macroeconomic analysis are no longer considered to involve fundamentally different principles, so that it should be possible to reconcile one’s views about household or firm behavior, or one’s view of the functioning of individual markets, with one’s model of the aggregate economy, when one needs to do so.

In this respect, the methodological stance of the New Classical school and the real business cycle theorists has become the mainstream. But this does not mean that the Keynesian goal of structural modeling of short-run aggregate dynamics has been abandoned. Instead, it is now understood how one can construct and analyze dynamic general-equilibrium models that incorporate a variety of types of adjustment frictions, that allow these models to provide fairly realistic representations of both shorter-run and longer-run responses to economic disturbances. In important respects, such models remain direct descendants of the Keynesian macroeconometric models of the early postwar period, though an important part of their DNA comes from neoclassical growth models as well.

Woodford argues that by incorporating various imperfections into their general equilibrium models, e.g., imperfectly competitive output and labor markets, lags in the adjustment of wages and prices to changes in market conditions, and search and matching frictions, it is possible to reconcile the existence of underutilized resources with intertemporal optimization by agents.

The insistence of monetarists, New Classicals, and early real business cycle theorists on the empirical relevance of models of perfect competitive equilibrium — a source of much controversy in past decades — is not what has now come to be generally accepted. Instead, what is important is having general-equilibrium models in the broad sense of requiring that all equations of the model be derived from mutually consistent foundations, and that the specified behavior of each economic unit make sense given the environment created by the behavior of the others. At one time, Walrasian competitive equilibrium models were the only kind of models with these features that were well understood; but this is no longer the case.

Woodford shows no recognition of the possibility of multiple equilibria, or that the evolution of an economic system and time-series data may be path-dependent, making the long-run neutrality propositions characterizing most DSGE models untenable. If the world – the data generating mechanism – is not like the world assumed by modern macroeconomics, the estimates derived from econometric models reflecting the worldview of modern macroeconomics will be inferior to estimates derived from an econometric model reflecting another, more accurate, world view. For example, if there are many possible equilibria depending on changes in expectational parameters or on the accidental deviations from an equilibrium time path, the idea of intertemporal optimization may not even be meaningful. Rather than optimize, agents may simply follow certain simple rules of thumb. But, on methodological principle, modern macroeconomics treats the estimates generated by any alternative econometric model insufficiently grounded in the microeconomic principles of intertemporal optimization as illegitimate.

Even worse from the perspective of microfoundations are the implications of something called the Sonnenschein-Mantel-Debreu Theorem, which, as I imperfectly understand it, says something like the following. Even granting the usual assumptions of the standard general equilibrium model (continuous individual demand and supply functions, homogeneity of degree zero in prices, Walras’s Law, and suitable boundary conditions on demand and supply functions), there is no guarantee that there is a unique stable equilibrium for such an economy. Thus, even apart from the dependence of equilibrium on expectations, there is no rationally expected equilibrium because there is no unique equilibrium to serve as an attractor for expectations. Thus, as I have pointed out before, as much as macroeconomics may require microfoundations, microeconomics requires macrofoundations, perhaps even more so.

Now let us compare the methodological demand for microfoundations for macroeconomics, which I would describe as a kind of macroeconomic methodological reductionism, with the reductionism of Newtonian physics. Newtonian physics reduced the Keplerian laws of planetary motion to more fundamental principles of gravitation governing the motion of all bodies celestial and terrestrial. In so doing, Newtonian physics achieved an astounding increase in explanatory power and empirical scope. What has the methodological reductionism of modern macroeconomics achieved? Reductionism was not the source, but the result, of scientific progress. But as Carlaw and Lipsey demonstrated recently in an important paper, methodological reductionism in macroeconomics has resulted in a clear retrogression in empirical and explanatory power. Thus, methodological reductionism in macroeconomics is an antiscientific exercise in methodological authoritarianism.

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
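For what a comparative-statics proposition looks like in practice, here is a minimal symbolic sketch in Python using sympy, built on a generic textbook linear demand-and-supply example of my own (not anything from the post): solve for the equilibrium price and sign its derivative with respect to a demand-shift parameter, the ceteris-paribus condition being enforced by holding every other parameter fixed.

```python
import sympy as sp

p, a, b, c, d = sp.symbols('p a b c d', positive=True)

demand = a - b * p   # quantity demanded at price p
supply = c + d * p   # quantity supplied at price p

# Equilibrium price: the value of p at which the market clears.
p_star = sp.solve(sp.Eq(demand, supply), p)[0]   # (a - c)/(b + d)

# Comparative statics: the sign of the response of the equilibrium price to a
# demand shift (an increase in a), all other parameters held fixed.
dp_star_da = sp.simplify(sp.diff(p_star, a))     # 1/(b + d) > 0
print(p_star, dp_star_da)
```

The exercise compares one equilibrium with another; it says nothing about whether, or how, the market actually gets from the first equilibrium to the second, which is exactly the gap the paragraph above is pointing to.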

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
