Posts Tagged 'Stephen Williamson'

Methodological Arrogance

A few weeks ago, I posted a somewhat critical review of Kartik Athreya’s new book Big Ideas in Macroeconomics. In quoting a passage from chapter 4 in which Kartik defended the rational-expectations axiom on the grounds that it protects the public from economists who, if left unconstrained by the discipline of rational expectations, could use expectational assumptions to generate whatever results they wanted, I suggested that this sort of reasoning in defense of the rational-expectations axiom betrayed what I called the “methodological arrogance” of modern macroeconomics, which has, to a large extent, succeeded in imposing that axiom on all macroeconomic models. In his comment responding to my criticisms, Kartik made good-natured reference in passing to my charge of “methodological arrogance,” without substantively engaging with it. And in a post about the early reviews of Kartik’s book, Steve Williamson, while crediting me for at least reading the book before commenting on it, registered puzzlement at what I meant by “methodological arrogance.”

Actually, I realized when writing that post that I was not being entirely clear about what “methodological arrogance” meant, but I thought that my somewhat tongue-in-cheek reference to the duty of modern macroeconomists “to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts” was sufficiently suggestive not to require elaboration, especially after having devoted several earlier posts to criticisms of the methodology of modern macroeconomics (e.g., here, here, and here). That was a misjudgment.

So let me try to explain what I mean by methodological arrogance, which is not quite the same as, but is closely related to, methodological authoritarianism. I will do so by referring to the long introductory essay (“A Realist View of Logic, Physics, and History”) that Karl Popper contributed to The Self and Its Brain, a book he co-authored with the neuroscientist John Eccles. The chief aim of the essay was to argue that the universe is not fully determined, but evolves, producing new, emergent phenomena not originally extant in the universe, such as the higher elements, life, consciousness, language, science and all other products of human creativity, which in turn interact with the universe in fundamentally unpredictable ways. Popper regards consciousness as a real phenomenon that cannot be reduced to or explained by purely physical causes. Though he makes only brief passing reference to the social sciences, Popper’s criticisms of reductionism are directly applicable to the microfoundations program of modern macroeconomics, and so I think it will be useful to quote what he wrote at some length.

Against the acceptance of the view of emergent evolution there is a strong intuitive prejudice. It is the intuition that, if the universe consists of atoms or elementary particles, so that all things are structures of such particles, then every event in the universe ought to be explicable, and in principle predictable, in terms of particle structure and of particle interaction.

Notice how easy it would be to rephrase this statement as a statement about microfoundations:

Against the acceptance of the view that there are macroeconomic phenomena, there is a strong intuitive prejudice. It is the intuition that, if the macroeconomy consists of independent agents, so that all macroeconomic phenomena are the result of decisions made by independent agents, then every macroeconomic event ought to be explicable, and in principle predictable, in terms of the decisions of individual agents and their interactions.

Popper continues:

Thus we are led to what has been called the programme of reductionism [microfoundations]. In order to discuss it I shall make use of the following Table

(12) Level of ecosystems

(11) Level of populations of metazoa and plants

(10) Level of metazoa and multicellular plants

(9) Level of tissues and organs (and of sponges?)

(8) Level of populations of unicellular organisms

(7) Level of cells and of unicellular organisms

(6) Level of organelles (and perhaps of viruses)

(5) Liquids and solids (crystals)

(4) Molecules

(3) Atoms

(2) Elementary particles

(1) Sub-elementary particles

(0) Unknown sub-sub-elementary particles?

The reductionist idea behind this table is that the events or things on each level should be explained in terms of the lower levels. . . .

This reductionist idea is interesting and important; and whenever we can explain entities and events on a higher level by those of a lower level, we can speak of a great scientific success, and can say that we have added much to our understanding of the higher level. As a research programme, reductionism is not only important, but it is part of the programme of science whose aim is to explain and to understand.

So far so good. Reductionism certainly has its place. So do microfoundations. Whenever we can take an observation and explain it in terms of its constituent elements, we have accomplished something important. We have made scientific progress.

But Popper goes on to voice a cautionary note. There may be, and probably are, strict, perhaps insuperable, limits to how far higher-level phenomena can be reduced to (explained by) lower-level phenomena.

[E]ven the often referred to reduction of chemistry to physics, important as it is, is far from complete, and very possibly incompletable. . . . [W]e are far removed indeed from being able to claim that all, or most, properties of chemical compounds can be reduced to atomic theory. . . . In fact, the five lower levels of [our] Table . . . can be used to show that we have reason to regard this kind of intuitive reduction programme as clashing with some results of modern physics.

For what [our] Table suggests may be characterized as the principle of “upward causation.” This is the principle that causation can be traced in our Table . . . . from a lower level to a higher level, but not vice versa; that what happens on a higher level can be explained in terms of the next lower level, and ultimately in terms of elementary particles and the relevant physical laws. It appears at first that the higher levels cannot act on the lower ones.

But the idea of particle-to-particle or atom-to-atom interaction has been superseded by physics itself. A diffraction grating or a crystal (belonging to level (5) of our Table . . .) is a spatially very extended complex (and periodic) structure of billions of molecules; but it interacts as a whole extended periodic structure with the photons or the particles of a beam of photons or particles. Thus we have here an important example of “downward causation“. . . . That is to say, the whole, the macro structure, may, qua whole, act upon a photon or an elementary particle or an atom. . . .

Other physical examples of downward causation – of macroscopic structures on level (5) acting upon elementary particles or photons on level (1) – are lasers, masers, and holograms. And there are also many other macro structures which are examples of downward causation: every simple arrangement of negative feedback, such as a steam engine governor, is a macroscopic structure that regulates lower level events, such as the flow of the molecules that constitute the steam. Downward causation is of course important in all tools and machines which are designed for some purpose. When we use a wedge, for example, we do not arrange for the action of its elementary particles, but we use a structure, relying on it to guide the actions of its constituent elementary particles to act, in concert, so as to achieve the desired result.

Stars are undesigned, but one may look at them as undesigned “machines” for putting the atoms and elementary particles in their central region under terrific gravitational pressure, with the (undesigned) result that some atomic nuclei fuse and form the nuclei of heavier elements; an excellent example of downward causation, of the action of the whole structure upon its constituent particles.

(Stars, incidentally, are good examples of the general rule that things are processes. Also, they illustrate the mistake of distinguishing between “wholes” – which are “more than the sums of their parts” – and “mere heaps”: a star is, in a sense, a “mere” accumulation, a “mere heap” of its constituent atoms. Yet it is a process – a dynamic structure. Its stability depends upon the dynamic equilibrium between its gravitational pressure, due to its sheer bulk, and the repulsive forces between its closely packed elementary particles. If the latter are excessive, the star explodes. If they are smaller than the gravitational pressure, it collapses into a “black hole.”)

The most interesting examples of downward causation are to be found in organisms and in their ecological systems, and in societies of organisms [my emphasis]. A society may continue to function even though many of its members die; but a strike in an essential industry, such as the supply of electricity, may cause great suffering to many individual people. . . . I believe that these examples make the existence of downward causation obvious; and they make the complete success of any reductionist programme at least problematic.

I was very glad when I recently found this discussion of reductionism by Popper in a book that I had not opened for maybe 40 years, because it supports an argument that I have been making on this blog against the microfoundations program in macroeconomics: that as much as macroeconomics requires microfoundations, microeconomics also requires macrofoundations. Here is how I put it a little over a year ago:

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
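
To make concrete why the comparative-statics exercise presupposes a unique stable equilibrium, here is a minimal textbook sketch, using a single linear market with a demand-shift parameter α:

\[
Q^d = a + \alpha - bP, \qquad Q^s = c + dP, \qquad b, d > 0
\]
\[
P^* = \frac{a + \alpha - c}{b + d}, \qquad \frac{\partial P^*}{\partial \alpha} = \frac{1}{b + d} > 0
\]

The sign of this derivative is informative only on the maintained assumption that the equilibrium exists, is unique, and is an attractor under the posited adjustment process; otherwise comparing the equilibrium price before and after the change in α compares two states that the economy may never actually occupy.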

And more recently, I put it this way:

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities but a single price-quantity pair are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

So I think that my criticism of the microfoundations project exactly captures the gist of Popper’s criticism of reductionism. Popper extended his criticism of a certain form of reductionism, which he called “radical materialism or radical physicalism,” in a later passage in the same essay that is also worth quoting:

Radical materialism or radical physicalism is certainly a self-consistent position. For it is a view of the universe which, as far as we know, was adequate once; that is, before the emergence of life and consciousness. . . .

What speaks in favour of radical materialism or radical physicalism is, of course, that it offers us a simple vision of a simple universe, and this looks attractive just because, in science, we search for simple theories. However, I think that it is important that we note that there are two different ways by which we can search for simplicity. They may be called, briefly, philosophical reduction and scientific reduction. The latter is characterized by an attempt to provide bold and testable theories of high explanatory power. I believe that the latter is an extremely valuable and worthwhile method; while the former is of value only if we have good reasons to assume that it corresponds to the facts about the universe.

Indeed, the demand for simplicity in the sense of philosophical rather than scientific reduction may actually be damaging. For even in order to attempt scientific reduction, it is necessary for us to get a full grasp of the problem to be solved, and it is therefore vitally important that interesting problems are not “explained away” by philosophical analysis. If, say, more than one factor is responsible for some effect, it is important that we do not pre-empt the scientific judgment: there is always the danger that we might refuse to admit any ideas other than the ones we appear to have at hand: explaining away, or belittling the problem. The danger is increased if we try to settle the matter in advance by philosophical reduction. Philosophical reduction also makes us blind to the significance of scientific reduction.

Popper adds the following footnote about the difference between philosophic and scientific reduction.

Consider, for example, what a dogmatic philosophical reductionist of a mechanistic disposition (or even a quantum-mechanistic disposition) might have done in the face of the problem of the chemical bond. The actual reduction, so far as it goes, of the theory of the hydrogen bond to quantum mechanics is far more interesting than the philosophical assertion that such a reduction will one day be achieved.

What modern macroeconomics now offers is largely an array of models simplified sufficiently so that they are solvable using the techniques of dynamic optimization. Dynamic optimization by individual agents — the microfoundations of modern macro — makes sense only in the context of an intertemporal equilibrium. But it is just the possibility that intertemporal equilibrium may not obtain that, to some of us at least, makes macroeconomics interesting and relevant. As the great Cambridge economist, Frederick Lavington, anticipating Popper in grasping the possibility of downward causation, put it so well, “the inactivity of all is the cause of the inactivity of each.”

So what do I mean by methodological arrogance? I mean an attitude that invokes microfoundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory (and especially when the evidence strongly indicates the opposite) is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

Let me quote Popper again (this time from his book Objective Knowledge) about the difference between scientific and philosophical reduction, addressing the denial by physicalists that there is such a thing as consciousness, a denial based on their belief that all supposedly mental phenomena can and will ultimately be reduced to purely physical phenomena:

[P]hilosophical speculations of a materialistic or physicalistic character are very interesting, and may even be able to point the way to a successful scientific reduction. But they should be frankly tentative theories. . . . Some physicalists do not, however, consider their theories as tentative, but as proposals to express everything in physicalist language; and they think these proposals have much in their favour because they are undoubtedly convenient: inconvenient problems such as the body-mind problem do indeed, most conveniently, disappear. So these physicalists think that there can be no doubt that these problems should be eliminated as pseudo-problems. (p. 293)

One could easily substitute “methodological speculations about macroeconomics” for “philosophical speculations of a materialistic or physicalistic character” in the first sentence. And in the third sentence one could substitute “advocates of microfounding all macroeconomic theories” for “physicalists,” “microeconomic” for “physicalist,” and “Phillips Curve” or “involuntary unemployment” for “body-mind problem.”

So, yes, I think it is arrogant to think that you can settle an argument by forcing the other side to use only those terms that you approve of.

What Does “Keynesian” Mean?

Last week Simon Wren-Lewis wrote a really interesting post on his blog trying to find the right labels with which to identify macroeconomists. Simon, rather disarmingly, starts by admitting the ultimate futility of assigning people labels; reality is just too complicated to conform to the labels that we invent to help ourselves make sense of reality. A good label can provide us with a handle with which to gain a better grasp on a messy set of observations, but it is not the reality. And if you come up with one label, I may counter with a different one. Who’s to say which label is better?

At any rate, as I read through Simon’s post I found myself alternately nodding my head in agreement and shaking my head in disagreement. So staying in the spirit of fun in which Simon wrote his post, I will provide a commentary on his labels and other pronouncements. If the comments are weighted on the side of disagreement, well, that’s what makes blogging fun, n’est-ce pas?

Simon divides academic researchers into two groups (mainstream and heterodox) and macroeconomic policy into two approaches (Keynesian and anti-Keynesian). He then offers the following comment on the meaning of the label Keynesian.

Just think about the label Keynesian. Any sensible definition would involve the words sticky prices and aggregate demand. Yet there are still some economists (generally not academics) who think Keynesian means believing fiscal rather than monetary policy should be used to stabilise demand. Fifty years ago maybe, but no longer. Even worse are non-economists who think being a Keynesian means believing in market imperfections, government intervention in general and a mixed economy. (If you do not believe this happens, look at the definition in Wikipedia.)

Well, as I pointed out in a recent post, there is nothing peculiarly Keynesian about the assumption of sticky prices, especially not as a necessary condition for an output gap and involuntary unemployment. So Simon is going to have to work harder to justify his distinction between Keynesian and anti-Keynesian. In a comment on Simon’s blog, Nick Rowe pointed out just this problem, asking in particular why Simon could not substitute a Monetarist/anti-Monetarist dichotomy for the Keynesian/anti-Keynesian one.

The story gets more complicated in Simon’s next paragraph in which he describes his dichotomy of academic research into mainstream and heterodox.

Thanks to the microfoundations revolution in macro, mainstream macroeconomists speak the same language. I can go to a seminar that involves an RBC model with flexible prices and no involuntary unemployment and still contribute and possibly learn something. Equally an economist like John Cochrane can and does engage in meaningful discussions of New Keynesian theory (pdf).

In other words, the range of acceptable macroeconomic models has been drastically narrowed. Unless it is microfounded in a dynamic stochastic general equilibrium model, a model does not qualify as “mainstream.” This notion of microfoundation is certainly not what Edmund Phelps meant by “microeconomic foundations” when he edited his famous volume Microeconomic Foundations of Employment and Inflation Theory, which contained, among others, Alchian’s classic paper on search costs and unemployment and a paper by the then not so well-known Robert Lucas and his early collaborator Leonard Rapping. Nevertheless, in the current consensus, it is apparently the New Classicals that determine what kind of model is acceptable, while New Keynesians are allowed to make whatever adjustments, mainly sticky wages, they need to derive Keynesian policy recommendations. Anyone who doesn’t go along with this bargain is excluded from the mainstream. Simon may not be happy with this state of affairs, but he seems to have made peace with it without undue discomfort.

Now many mainstream macroeconomists, myself included, can be pretty critical of the limitations that this programme can place on economic thinking, particularly if it is taken too literally by microfoundations purists. But like it or not, that is how most macro research is done nowadays in the mainstream, and I see no sign of this changing anytime soon. (Paul Krugman discusses some reasons why here.) My own view is that I would like to see more tolerance and a greater variety of modelling approaches, but a pragmatic microfoundations macro will and should remain the major academic research paradigm.

Thus, within the mainstream, there is no basic difference in how to create a macroeconomic model. The difference is just in how to tweak the model in order to derive the desired policy implication.

When it comes to macroeconomic policy, and keeping to the different language idea, the only significant division I see is between the mainstream macro practiced by most economists, including those in most central banks, and anti-Keynesians. By anti-Keynesian I mean those who deny the potential for aggregate demand to influence output and unemployment in the short term.

So, even though New Keynesians have learned how to speak the language of the New Classicals, New Keynesians can console themselves with retaining the upper hand in policy discussions. Which is why, in policy terms, Simon chooses a label that is at least suggestive of a certain Keynesian primacy, the other side being defined in terms of its opposition to Keynesian policy. Half apologetically, Simon then asks: “Why do I use the term anti-Keynesian rather than, say, New Classical?” After all, it’s the New Classical model that’s being tweaked. Simon responds:

Partly because New Keynesian economics essentially just augments New Classical macroeconomics with sticky prices. But also because as far as I can see what holds anti-Keynesians together isn’t some coherent and realistic view of the world, but instead a dislike of what taking aggregate demand seriously implies.

This explanation really annoyed Steve Williamson who commented on Simon’s blog as follows:

Part of what defines a Keynesian (new or old), is that a Keynesian thinks that his or her views are “mainstream,” and that the rest of macroeconomic thought is defined relative to what Keynesians think – Keynesians reside at the center of the universe, and everything else revolves around them.

Simon goes on to explain what he means by the incoherence of the anti-Keynesian view of the world, pointing out that the Pigou Effect, which supposedly invalidated Keynes’s argument that perfect wage and price flexibility would not eventually restore full employment to an economy operating at less than full employment, has itself been shown not to be valid. And then Simon invokes that old standby Say’s Law.

Second, the evidence that prices are not flexible is so overwhelming that you need something else to drive you to ignore this evidence. Or to put it another way, you need something pretty strong for politicians or economists to make the ‘schoolboy error’ that is Says Law, which is why I think the basis of the anti-Keynesian view is essentially ideological.

Here, I think, Simon is missing something important. It was a mistake on Keynes’s part to focus on Say’s Law as the epitome of everything wrong with “classical economics.” Actually Say’s Law is a description of what happens in an economy when trading takes place at disequilibrium prices. At disequilibrium prices, potential gains from trade are left on the table. Not only are they left on the table, but the effects can be cumulative, because the failure to supply implies a further failure to demand. The Keynesian spending multiplier is the other side of the coin of the supply-side contraction envisioned by Say. Even infinite wage and price flexibility may not help an economy in which a lot of trade is occurring at disequilibrium prices.
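
The cumulative character of the process can be put in the simplest textbook algebra. If each agent spends a fraction c (with 0 < c < 1) of any income received, then a failed sale that reduces someone’s income by ΔA reduces that person’s spending by cΔA, which reduces the next person’s income and spending in turn, so that total income falls by

\[
\Delta Y = \Delta A \left(1 + c + c^2 + \cdots \right) = \frac{\Delta A}{1 - c}
\]

which is just the familiar spending multiplier, read here as the cumulative effect of trades that fail to take place at disequilibrium prices.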

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities but a single price-quantity pair are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

If there is widespread unemployment, it may indeed be that wages are too high, and that a reduction in wages would restore equilibrium. But there is no general presumption that unemployment will be cured by a reduction in wages. Unemployment may be the result of a more general dysfunction in which all prices are away from their equilibrium levels, in which case no adjustment of the wage would solve the problem, so that there is no presumption that the current wage exceeds the full-equilibrium wage. This, by the way, seems to me to be nothing more than a straightforward implication of the Lipsey-Lancaster theory of second best.

The Microfoundations Wars Continue

I see belatedly that the battle over microfoundations continues on the blogosphere, with Paul Krugman, Noah Smith, Adam Posen, and Nick Rowe all challenging the microfoundations position, while Tony Yates and Stephen Williamson defend it, with Simon Wren-Lewis trying to serve as a peacemaker of sorts. I agree with most of the criticisms, but what I found most striking was the defense of microfoundations offered by Tony Yates, who expresses the mentality of the microfoundations school so well that I thought that some further commentary on his post would be worthwhile.

Yates’s post was prompted by a Twitter exchange between Yates and Adam Posen after Posen tweeted that microfoundations have no merit, an exaggeration no doubt, but not an unreasonable one. Noah Smith chimed in with a challenge to Yates to defend the proposition that microfoundations do have merit. Hence, the title (“Why Microfoundations Have Merit”) of Yates’s post. What really caught my attention in Yates’s post is that, in trying to defend the proposition that microfounded models do have merit, Yates offers the following methodological, or perhaps aesthetic, pronouncement.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

Paraphrasing, I would say that Yates defines merit as a useful insight or prediction into the way the world works. Fair enough. He then defines microfounded models as those models that tell an explicit story about what the agents populating the model are trying to do and the resulting outcomes of their efforts. This strikes me as a definition that includes more than just microfounded models, but let that pass, at least for the moment. Then comes the key point. These models “are the ONLY models that have anything genuinely economic to say about anything.” A breathtaking claim.

In other words, Yates believes that unless an insight, a proposition, or a conjecture, can be logically deduced from microfoundations, it is not economics. So whatever the merits of microfounded models, a non-microfounded model is not, as a matter of principle, an economic model. Talk about methodological authoritarianism.

Having established, to his own satisfaction at any rate, that only microfounded models have a legitimate claim to be considered economic, Yates defends the claim that microfounded models have merit by citing the Lucas critique as an early example of a meritorious insight derived from the “microfoundations project.” Now there is something a bit odd about this claim, because Yates neglects to mention that the Lucas critique, as Lucas himself acknowledged, had been anticipated by earlier economists, including both Keynes and Tinbergen. So if the microfoundations project does indeed have merit, the example chosen to illustrate that merit does nothing to show that the merit is in any way peculiar to the microfoundations project. It also bears repeating (see my earlier post on the Lucas critique) that the Lucas critique only tells us about steady states, so it provides no useful information, insight, prediction or guidance about using monetary policy to speed up the recovery to a new steady state. So we should be careful not to attribute more merit to the Lucas critique than it actually possesses.

To be sure, in his Twitter exchange with Adam Posen, Yates mentioned several other meritorious contributions from the microfoundations project, each of which Posen rejected because the merit of those contributions lies in the intuition behind the one-line idea. To which Yates responded:

This statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, are not separate things, the origins of the models.  They are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the model.  One could make an argument for the separateness of ‘intuition’ at best, I think, as classifying it in some cases to be a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss it.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

And Yates’s response is highly perplexing to me. An economic model is usually the product of some thought process intended to construct a coherent model from some mental raw materials (ideas) and resources (knowledge and techniques). The thought process is an attempt to embody some idea or ideas about a posited causal mechanism or about a posited mutual interdependency among variables of interest. The intuition is the idea or insight that some such causal mechanism or mutual interdependency exists. A model is one particular implementation (out of many other possible implementations) of the idea in a way that allows further implications of the idea to be deduced, thereby achieving an enhanced and deeper understanding of the original insight. The “microfoundations project” does not directly determine what kinds of ideas can be modeled, but it does require that models have certain properties to be considered acceptable implementations of any idea. In particular the model must incorporate a dynamic stochastic general equilibrium system with rational expectations and a unique equilibrium. Ideas not tractable given those modeling constraints are excluded. Posen’s point, it seems to me, is not that no worthwhile, meritorious ideas have been modeled within the modeling constraints imposed by the microfoundations project, but that the microfoundations project has done nothing to create or propagate those ideas; it has just forced those ideas to be implemented within the template of the microfoundations project.

None of the characteristic properties of the microfoundations project are assumptions for which there is compelling empirical or theoretical justification. We know how to prove the existence of a general equilibrium for economic models populated by agents satisfying certain rationality assumptions (assumptions for which there is no compelling a priori argument and whose primary justifications are tractability and the accuracy of the empirical implications deduced from them), but the conditions for a unique general equilibrium are far more stringent than the standard convexity assumptions required to prove existence. Moreover, even given the existence of a unique general equilibrium, there is no proof that an economy not in general equilibrium will reach the general equilibrium under the standard rules of price adjustment. Nor is there any empirical evidence to suggest that actual economies are in any sense in a general equilibrium, though one might reasonably suppose that actual economies are from time to time in the neighborhood of a general equilibrium. The rationality of expectations is in one sense an entirely ad hoc assumption, though an inconsistency between the predictions of a model under the assumption of rational expectations and the expectations actually held by the agents in the model is surely a sign that there is a problem in the structure of the model. But just because rational expectations can be used to check for latent design flaws in a model, it does not follow that assuming rational expectations leads to empirical implications that are generally, or even occasionally, empirically valid.

Thus, the key assumptions of microfounded models are not logically entailed by any deep axioms; they are imposed by methodological fiat, a philosophically and pragmatically unfounded insistence that certain modeling conventions be adhered to in order to count as “scientific.” Now it would be one thing if these modeling conventions were generating new, previously unknown, empirical relationships or generating more accurate predictions than those generated by non-microfounded models, but evidence that the predictions of microfounded models are better than the predictions of non-microfounded models is notably lacking. Indeed, Carlaw and Lipsey have shown that microfounded models generate predictions that are less accurate than those generated by non-microfounded models. If microfounded theories represent scientific progress, they ought to be producing an increase, not a decrease, in explanatory power.

The microfoundations project is predicated on a gigantic leap of faith that the existing economy has an underlying structure that corresponds closely enough to the assumptions of the Arrow-Debreu model, suitably adjusted for stochastic elements and a variety of frictions (e.g., Calvo pricing) that may be introduced into the models depending on the modeler’s judgment about what constitutes an allowable friction. This is classic question-begging with a vengeance: arriving at a conclusion by assuming what needs to be proved. Such question begging is not necessarily illegitimate; every research program is based on some degree of faith or optimism that results not yet in hand will justify the effort required to generate those results. What is not legitimate is the claim that ONLY the models based on such question-begging assumptions are genuinely scientific.

This question-begging mentality masquerading as science is actually not unique to the microfoundations school. It is not uncommon among those with an exaggerated belief in the powers of science, a mentality that Hayek called scientism. It is akin to physicalism, the philosophical doctrine that all phenomena are physical. According to physicalism, there are no mental phenomena. What we perceive as mental phenomena, e.g., consciousness, is not real, but an illusion. Our mental states are really nothing but physical states. I do not say that physicalism is false, just that it is a philosophical position, not a proposition derived from science, and certainly not a fact that is, or can be, established by the currently available tools of science. It is a faith that some day — some day probably very, very far off into the future — science will demonstrate that our mental processes can be reduced to, and derived from, the laws of physics. Similarly, given the inability to account for observed fluctuations of output and employment in terms of microfoundations, the assertion that only microfounded models are scientific is simply an expression of faith in some, as yet unknown, future discovery, not a claim supported by any available scientific proof or evidence.

Never Mistake a Change in Quantity Demanded for a Change in Demand

We are all in Scott Sumner’s debt for teaching (or reminding) us never, ever to reason from a price change. The reason is simple. You can’t just posit a price change and then start making inferences from the price change, because price changes don’t just happen spontaneously. If there’s a price change, it’s because something else has caused price to change. Maybe demand has increased; maybe supply has decreased; maybe neither supply nor demand has changed, but the market was in disequilibrium before and is in equilibrium now at the new price; maybe neither supply nor demand has changed, but the market was in equilibrium before and is in disequilibrium now. There could be other scenarios as well, but unless you specify at least one of them, you can’t reason sensibly about the implications of the price change.

There’s another important piece of advice for anyone trying to do economics: never mistake a change in quantity demanded for a change in demand. A change in demand means that the willingness of people to pay for something has changed, so that, everything else held constant, the price has to change. If, for some reason, the price of something goes up, the willingness of people to pay for it not having changed, then the quantity of the thing that they demand will go down. But here’s the important point: their demand for that something – their willingness to pay for it – has not gone down; the change in the amount demanded is simply a response to the increased price of that something. In other words, a change in the price of something cannot be the cause of a change in the demand for that something; it can only cause a change in the quantity demanded. A change in demand can be caused only by a change in something other than price – maybe a change in wealth, or in fashion, or in taste, or in the season, or in the weather.
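
The distinction can be compressed into a line of notation. Writing the quantity demanded as q = D(p, w), where w stands for wealth, fashion, taste, season, and the like, the total change in the quantity demanded is

\[
dq \;=\; \underbrace{\frac{\partial D}{\partial p}\,dp}_{\text{movement along the curve}} \;+\; \underbrace{\frac{\partial D}{\partial w}\,dw}_{\text{shift of the curve}}
\]

Only the second term is a change in demand; the first is a change in the quantity demanded, and a price change by itself can generate only the first.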

Why am I engaging in this bit of pedantry? Well, in a recent post, Scott responded to the following question from Dustin in the comment section to one of his posts.

An elementary question on the topic of interest rates that I’ve been unable to resolve via google:

Regarding Fed actions, I understand that reduced interest rates are thought to be expansionary because the resulting decrease in cost of capital induces greater investment. But I also understand that reduced interest rates are thought to be contractionary because the resulting decrease in opportunity cost of holding money increases demand for money.

To which Scott responded as follows:

It’s not at all clear that lower interest rates boost investment (never reason from a price change.)  And even if they did boost investment it is not at all clear that they would boost GDP.

Scott is correct to question the relationship between interest rates and investment. The relationship in the Keynesian model is based on the idea that a reduced interest rate, by reducing the rate at which expected future cash flows are discounted, increases the value of durable assets, so that the optimal size of the capital stock increases, implying a speed-up in the rate of capital accumulation (investment). There are a couple of steps missing in the chain of reasoning that goes from a reduced rate of discount to a speed-up in the rate of accumulation, but, in the olden days at any rate, economists were usually willing to rely on their intuition that an increase in the size of the optimal capital stock would translate into an increased rate of capital accumulation.
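
The first link in that chain can be made explicit with the standard present-value formula. A durable asset yielding expected future cash flows R_t, discounted at rate r, is worth

\[
V(r) = \sum_{t=1}^{T} \frac{R_t}{(1+r)^t}, \qquad \frac{dV}{dr} = -\sum_{t=1}^{T} \frac{t\,R_t}{(1+r)^{t+1}} < 0
\]

so a lower discount rate raises the value of existing durable assets relative to the cost of producing new ones, thereby raising the optimal capital stock; the missing steps concern how quickly the actual stock is adjusted toward that larger optimum.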

Alternatively, in the Hawtreyan scheme of things, a reduced rate of interest would increase the optimal size of inventories held by traders and middlemen, causing an increase in orders to manufacturers, and a cycle of rising output and income generated by the attempt to increase inventories. Notice that in the Hawtreyan view, the reduced short-term interest rate is, in part, a positive supply shock (reducing the costs borne by middlemen and traders of holding inventories financed by short-term borrowing), as long as there are unused resources that can be employed if desired inventories increase in size.

That said, I’m not sure what Scott, in questioning whether a reduction in interest rates raises investment, meant by his parenthetical remark about reasoning from a price change. Scott was asked about the effect of a Fed policy to reduce interest rates. Why is that reasoning from a price change? And furthermore, if we do posit that investment rises, why is it unclear whether GDP would rise?

Scott continues:

However it’s surprisingly hard to explain why OMPs boost NGDP using the mechanism of interest rates. Dustin is right that lower interest rates increase the demand for money.  They also reduce velocity. Higher money demand and lower velocity will, ceteris paribus, reduce NGDP.  So why does everyone think that a cut in interest rates increases NGDP?  Is it possible that Steve Williamson is right after all?

Sorry, Scott. Lower interest rates don’t increase the demand for money; lower interest rates increase the amount of money demanded. What’s the difference? If an interest-rate reduction increased the demand for money, it would mean that the demand curve had shifted, and the size of that shift would be theoretically unspecified. If that were the case, we would be comparing an unknown increase in investment on the one hand to an unknown increase in money demand on the other hand, the net effect being indeterminate. That’s the argument that Scott seems to be making.

But that’s not, repeat not, what’s going on here. What we have is an interest-rate reduction that triggers an increase in investment and also in the amount of money demanded. But there is no shift in the demand curve for money, just a movement along an unchanging demand curve. That imposes a limit on the range of possibilities. What is the limit? It’s the extreme case of a demand curve for money that is perfectly elastic at the current rate of interest — in other words a liquidity trap — so that the slightest reduction in interest rates causes an unlimited increase in the amount of money demanded. But that means that the rate of interest can’t fall, so that investment can’t rise. If the demand for money is less than perfectly elastic, then the rate of interest can in fact be reduced, implying that investment, and therefore NGDP, will increase. The quantity of money demanded increases as well — velocity goes down — but not enough to prevent investment and NGDP from increasing.
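
To put the distinction in the most old-fashioned textbook notation: let the amount of money demanded be given by a fixed function

\[
M^d = L(i, Y), \qquad \frac{\partial L}{\partial i} < 0
\]

A policy-induced fall in i raises L(i, Y) along the unchanged function L (more money is demanded) without any shift in L itself. Only in the limiting case in which L is perfectly elastic at the going rate of interest (the liquidity trap) does the attempt to reduce i fail, cutting off the channel from a lower interest rate to higher investment and NGDP.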

So, there’s no ambiguity about the correct answer to Dustin’s question. If Steve Williamson is right, it’s because he has introduced some new analytical element not contained in the old-fashioned macroeconomic analysis. (Note that I use the term “old-fashioned” only as an identifier, not as an expression of preference in either direction.) A policy-induced reduction in the rate of interest must, under the standard assumptions of old-fashioned macroeconomics, increase nominal GDP, though the size of the increase depends on specific assumptions about empirical magnitudes. I don’t disagree with Scott’s analysis in terms of the monetary base; I just don’t see a substantive difference between that analysis and the one that I just went through in terms of the interest-rate policy instrument.

Just to offer a non-controversial example, it is possible to reason through the effect of a restriction on imports in terms of a per unit tariff on imports or in terms of a numerical quota on imports. For any per unit tariff, there is a corresponding quota on imports that gives you the same solution. MMT guys often fail to see the symmetry between setting the quantity and the price of bank reserves; in this instance Scott seems to have overlooked the symmetry between the quantity and price of base money.
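
The tariff-quota symmetry is easy to verify with made-up numbers. Suppose import demand is M(p) = 100 − 2p and the world price is 10:

\[
t = 5 \;\Rightarrow\; p = 10 + 5 = 15, \quad M(15) = 70; \qquad \bar{M} = 70 \;\Rightarrow\; 100 - 2p = 70, \quad p = 15
\]

A per unit tariff of 5 and a quota of 70 produce the same domestic price and the same volume of imports; setting the price and setting the quantity are two descriptions of one equilibrium, which is the same symmetry between the price and the quantity of base money at issue here.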

What Kind of Equilibrium Is This?

In my previous post, I suggested that Stephen Williamson’s views about the incapacity of monetary policy to reduce unemployment, and his fears that monetary expansion would simply lead to higher inflation and a repeat of the bad old days of the 1970s, when inflation and unemployment spun out of control, follow from a theoretical presumption that the US economy is now operating (as it almost always does) in the neighborhood of equilibrium. This does not seem right to me, but it is the sort of deep theoretical assumption (like, for example, the rationality of economic agents) that is not subject to direct empirical testing. It is part of what the philosopher Imre Lakatos called the hard core of a (in this case Williamson’s) scientific research program. Whatever happens, Williamson will process the observed facts in terms of a theoretical paradigm in which prices adjust and markets clear. No other way of viewing reality makes sense to Williamson, because he cannot make any sense of it in terms of the theoretical paradigm or world view to which he is committed. I actually have some sympathy with that way of looking at the world, but not because I think it’s really true; it’s just the best paradigm we have at the moment. But I don’t want to follow that line of thought any further now; who knows, maybe another time.

A good illustration of how Williamson understands his paradigm was provided by blogger J. P. Koning in his comment on my previous post, copying the following quotation from a post written by Williamson a couple of years ago on his blog.

In other cases, as in the link you mention, there are people concerned about disequilibrium phenomena. These approaches are or were popular in Europe – I looked up Benassy and he is still hard at work. However, most of the mainstream – and here I’m including New Keynesians – sticks to equilibrium economics. New Keynesian models may have some stuck prices and wages, but those models don’t have to depart much from standard competitive equilibrium (or, if you like, competitive equilibrium with monopolistic competition). In those models, you have to determine what a firm with a stuck price produces, and that is where the big leap is. However, in terms of determining everything mathematically, it’s not a big deal. Equilibrium economics is hard enough as it is, without having to deal with the lack of discipline associated with “disequilibrium.” In equilibrium economics, particularly monetary equilibrium economics, we have all the equilibria (and more) we can handle, thanks.

I actually agree that departing from the assumption of equilibrium can involve a lack of discipline. Market clearing is a very powerful analytical tool, and to give it up without replacing it with an equally powerful analytical tool leaves us theoretically impoverished. But Williamson seems to suggest (or at least leaves ambiguous) that there is only one kind of equilibrium that can be handled theoretically, namely a fully optimal general equilibrium with perfect foresight (i.e., rational expectations) or at least with a learning process leading toward rational expectations. But there are other equilibrium concepts that preserve market clearing, but without imposing, what seems to me, the unreasonable condition of rational expectations and (near) optimality.

In particular, there is the Hicksian concept of a temporary equilibrium (inspired by Hayek’s discussion of intertemporal equilibrium), which allows for inconsistent expectations by economic agents, but assumes market clearing based on supply and demand schedules reflecting those inconsistent expectations. Nearly 40 years ago, Earl Thompson was able to deploy that equilibrium concept to derive a sub-optimal temporary equilibrium with Keynesian unemployment and a role for countercyclical monetary policy in minimizing inefficient unemployment. I have summarized and discussed Thompson’s model in some previous posts (here, here, here, and here), and I hope to do a few more in the future. The model is hardly the last word, but it might at least serve as a starting point for thinking seriously about the possibility that not every state of the economy is an optimal equilibrium state, but without abandoning market clearing as an analytical tool.

Too Little, Too Late?

The FOMC, after over four years of overly tight monetary policy, seems to be feeling its way toward an easier policy stance. But will it do any good? Unfortunately, there is reason to doubt that it will. The FOMC statement pledges to continue purchasing $85 billion a month of Treasuries and mortgage-backed securities and to keep interest rates at current low levels until the unemployment rate falls below 6.5% or the inflation rate rises above 2.5%. In other words, the Fed is saying that it will tolerate an inflation rate only marginally higher than the current target for inflation before it begins applying the brakes to the expansion. Here is how the New York Times reported on the Fed announcement.

The Federal Reserve said Wednesday it planned to hold short-term interest rates near zero so long as the unemployment rate remains above 6.5 percent, reinforcing its commitment to improve labor market conditions.

The Fed also said that it would continue in the new year its monthly purchases of $85 billion in Treasury bonds and mortgage-backed securities, the second prong of its effort to accelerate economic growth by reducing borrowing costs.

But Fed officials still do not expect the unemployment rate to fall below the new target for at least three more years, according to forecasts also published Wednesday, and they chose not to expand the Fed’s stimulus campaign.

In fairness to the FOMC, the Fed, although technically independent, must operate within an implicit consensus on what kind of decisions it can take, its freedom of action thereby being circumscribed in the absence of a clear signal of support from the administration for a substantial departure from the terms of the implicit consensus. For the Fed to substantially raise its inflation target would risk a political backlash against it, and perhaps precipitate a deep internal split within the Fed’s leadership. At the depth of the financial crisis and in its immediate aftermath, perhaps Chairman Bernanke, if he had been so inclined, might have been able to effect a drastic change in monetary policy, but that window of opportunity closed quickly once the economy stopped contracting and began its painfully slow pseudo recovery.

As I have observed a number of times (here, here, and here), the paradigm for the kind of aggressive monetary easing that is now necessary is FDR’s unilateral decision to take the US off the gold standard in 1933. But FDR was a newly elected President with a massive electoral mandate, and he was making decisions in the midst of the worst economic crisis in modern times. Could an unelected technocrat (or a collection of unelected technocrats) take such actions on his (or their) own? From the get-go, the Obama administration showed no inclination to provide any significant input to the formulation of monetary policy, either out of an excess of scruples about Fed independence or out of a misguided belief that monetary policy was powerless to affect the economy when interest rates were close to zero.

Stephen Williamson, on his blog, consistently gives articulate expression to the doctrine of Fed powerlessness. In a post yesterday, correctly anticipating that the Fed would continue its program of buying mortgage backed securities and Treasuries, and would tie its policy to numerical triggers relating to unemployment, Williamson disdainfully voiced his skepticism that the Fed’s actions would have any positive effect on the real performance of the economy, while registering his doubts that the Fed would be any more successful in preventing inflation from getting out of hand while attempting to reduce unemployment than it was in the 1970s.

It seems to me that Williamson reaches this conclusion based on the following premises. The Fed has little or no control over interest rates or inflation, and the US economy is not far removed from its equilibrium growth path. But Williamson also believes that the Fed might be able to increase inflation, and that that would be a bad thing if the Fed were actually to do so.  The Fed can’t do any good, but it could do harm.

Williamson is fairly explicit in saying that he doubts the ability of positive QE to stimulate, and of negative QE (which, I guess, might be called QT) to dampen, real or nominal economic activity.

Short of a theory of QE – or more generally a serious theory of the term structure of interest rates – no one has a clue what the effects are, if any. Until someone suggests something better, the best guess is that QE is irrelevant. Any effects you think you are seeing are either coming from somewhere else, or have to do with what QE signals for the future policy rate. The good news is that, if it’s irrelevant, it doesn’t do any harm. But if the FOMC thinks it works when it doesn’t, that could be a problem, in that negative QE does not tighten, just as positive QE does not ease.

But Williamson seems a bit uncertain about the effects of “forward guidance,” i.e., the Fed’s commitment to keep interest rates low for an extended period of time, or until a trigger is pulled, e.g., when unemployment falls below a specified level. This is where Williamson sees a real potential for mischief.

(1) To be well-understood, the triggers need to be specified in a very simple form. As such it seems as likely that the Fed will make a policy error if it commits to a trigger as if it commits to a calendar date. The unemployment rate seems as good a variable as any to capture what is going on in the real economy, but as such it’s pretty bad. It’s hardly a sufficient statistic for everything the Fed should be concerned with.

(2) This is a bad precedent to set, for two reasons. First, the Fed should not be setting numerical targets for anything related to the real side of the dual mandate. As is well-known, the effect of monetary policy on real economic activity is transient, and the transmission process poorly understood. It would be foolish to pretend that we know what the level of aggregate economic activity should be, or that the Fed knows how to get there. Second, once you convince people that triggers are a good idea in this “unusual” circumstance, those same people will wonder what makes other circumstances “normal.” Why not just write down a Taylor rule for the Fed, and send the FOMC home? Again, our knowledge of how the economy works, and what future contingencies await us, is so bad that it seems optimal, at least to me, that the Fed make it up as it goes along.

I agree that a fixed trigger is a very blunt instrument, and it is hard to know what level to set it at. In principle, it would be preferable if the trigger were not pulled automatically, but only as a result of some exercise of discretionary judgment on the part of the monetary authority; except that the exercise of discretion may undermine the expectational effect of setting a trigger. Williamson’s second objection strikes me as less persuasive than the first. It is at least misleading, and perhaps flatly wrong, to say that the effect of monetary policy on real economic activity is transient. The standard argument for the ineffectiveness of monetary policy involves an exercise in which the economy starts off at equilibrium. If you take such an economy and apply a monetary stimulus to it, there is a plausible (but not necessarily unexceptionable) argument that the long-run effect of the stimulus will be nil, and any transitory gain in output and employment may be offset (or outweighed) by a subsequent transitory loss. But if the initial position is out of equilibrium, I am unaware of any plausible, let alone compelling, argument that monetary stimulus would not be effective in hastening the adjustment toward equilibrium. In a trivial sense, the effect of monetary policy is transient inasmuch as the economy would eventually reach an equilibrium even without monetary stimulus. However, unlike the case in which monetary stimulus is applied to an economy in equilibrium, applying monetary policy to an economy out of equilibrium can produce short-run gains that aren’t wiped out by subsequent losses. I am not sure how to interpret the rest of Williamson’s criticism. One might almost interpret him as saying that he would favor a policy of targeting nominal GDP (which bears a certain family resemblance to the Taylor rule), a policy that would also address some of the other concerns Williamson has about the Fed’s choice of triggers, except that Williamson is already on record in opposition to NGDP targeting.
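
For reference, the Taylor rule alluded to above is, in its standard 1993 parameterization,

\[
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - y^*_t)
\]

where i is the policy rate, r* the equilibrium real rate, π the inflation rate, π* the inflation target, and y − y* the output gap; an NGDP target bears a family resemblance to it in that a nominal-income gap combines the inflation and real-activity terms into a single target variable.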

In reply to a comment on this post, Williamson made the following illuminating observation:

Read James Tobin’s paper, “How Dead is Keynes?” referenced in my previous post. He was writing in June 1977. The unemployment rate is 7.2%, the cpi inflation rate is 6.7%, and he’s complaining because he thinks the unemployment rate is disastrously high. He wants more accommodation. Today, I think we understand the reasons that the unemployment rate was high at the time, and we certainly don’t think that monetary policy was too tight in mid-1977, particularly as inflation was about to take off into the double-digit range. Today, I don’t think the labor market conditions we are looking at are the result of sticky price/wage inefficiencies, or any other problem that monetary policy can correct.

The unemployment rate in 1977 was 7.2%, at least half a percentage point lower than the current rate, and the CPI inflation rate was 6.7%, nearly five percentage points higher than the current rate. Just because Tobin was overly disposed toward monetary expansion in 1977, when unemployment was lower and inflation higher than they are now, it does not follow that monetary expansion now would be as misguided as it was in 1977. Williamson is convinced that the labor market is now roughly in equilibrium, so that monetary expansion would lead us away from, not toward, equilibrium. Perhaps it would, but most informed observers simply don’t share Williamson’s intuition that the current state of the economy is not that far from equilibrium. Unless you buy that far-from-self-evident premise, the case for monetary expansion is hard to dispute. Nevertheless, despite his current unhappiness, I am not so sure that Williamson will be as upset with the actual policy that the Fed is going to implement as he seems to think he will be. The Fed is moving in the right direction, but is only taking baby steps.

PS I see that Williamson has now posted his reaction to the Fed’s statement.  Evidently, he is not pleased.  Perhaps I will have something more to say about that tomorrow.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
