Posts Tagged 'Frederick Lavington'

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he is, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely less fervor by the left than by the right. It’s only some eccentric economists – from diverse positions on the political spectrum – who have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.
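Stated a bit more formally, here is a bare-bones sketch of the Lipsey-Lancaster setup (the notation is mine and the technical conditions are suppressed): F is the objective (welfare) function, Φ the constraint, and subscripts denote partial derivatives.

```latex
% Minimal sketch of the Lipsey-Lancaster second-best setup (notation mine).
\begin{align*}
  &\max_{x_1,\dots,x_n} F(x_1,\dots,x_n)
     \quad\text{subject to}\quad \Phi(x_1,\dots,x_n)=0 ,\\[4pt]
  &\text{first-best (Paretian) conditions:}\quad
     \frac{F_i}{F_n}=\frac{\Phi_i}{\Phi_n}, \qquad i=1,\dots,n-1 ,\\[4pt]
  &\text{a single distortion imposed as an added constraint:}\quad
     \frac{F_1}{F_n}=k\,\frac{\Phi_1}{\Phi_n}, \qquad k\neq 1 ,\\[4pt]
  &\text{second-best optimum: in general}\quad
     \frac{F_i}{F_n}\neq\frac{\Phi_i}{\Phi_n}
     \quad\text{for the remaining } i=2,\dots,n-1 .
\end{align*}
```

In words: once the maximization is carried out subject to the distortion as well as to Φ, the first-order conditions for the other markets no longer coincide with the Paretian conditions, which is precisely the point about second-best optima made above.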

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to converge, as the Sonnenschein-Mantel-Debreu results imply.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, prices determined in individual markets need not ensure that all markets clear simultaneously or that the optimality conditions are satisfied.

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.
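To see the point in the starkest possible way, here is a toy numerical sketch (entirely illustrative: the two-market linear excess-demand system, its parameters, and the function names are my own invention, not a model of any actual economy). Each price follows the simple rule of responding only to its own market’s excess demand. Taken one market at a time the rule would be stabilizing, but with strong cross-market effects the joint process moves away from the market-clearing price vector instead of toward it.

```python
import numpy as np

# Toy two-market system with linear excess demands (illustrative numbers only).
# z(p) = b + A @ p; the off-diagonal entries of A are the cross-market effects.
A = np.array([[-0.5, -2.0],
              [-2.0, -0.5]])
b = np.array([2.5, 2.5])

def excess_demand(p):
    return b + A @ p

# Market-clearing prices: solve z(p*) = 0 (here p* = (1, 1)).
p_star = np.linalg.solve(A, -b)

# The "simple price adjustment rule": each price moves in proportion to its
# own market's excess demand, ignoring conditions in the other market.
def adjust(p0, k=0.5, steps=12):
    p = np.array(p0, dtype=float)
    distances = [np.linalg.norm(p - p_star)]
    for _ in range(steps):
        p = p + k * excess_demand(p)
        distances.append(np.linalg.norm(p - p_star))
    return distances

# Market by market the rule is stabilizing (own-price slope of -0.5), but the
# cross-market effects make the joint adjustment process explosive.
print("distance from the market-clearing price vector at each step:")
print(np.round(adjust(p_star + np.array([0.1, -0.1])), 3))
```

Nothing in this sketch is meant to describe an actual economy; it is only a reminder that the stability of the textbook one-market story does not carry over once conditions in every market depend on conditions in every other market.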

In the standard microeconomics we are taught in econ 1 and econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of second best is the statement by the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people who are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. The idea that a partial-equilibrium response, a fall in money wages, can by itself remedy a general-disequilibrium disorder is untenable. Keynes and the Keynesians were therefore completely wrong to accuse Say of committing a fallacy in diagnosing the cause of depressions. The only fallacy lay in the assumption that market adjustments would automatically ensure the restoration of something resembling full-employment equilibrium.

Methodological Arrogance

A few weeks ago, I posted a somewhat critical review of Kartik Athreya’s new book Big Ideas in Macroeconomics. In quoting a passage from chapter 4 in which Kartik defended the rational-expectations axiom on the grounds that it protects the public from economists who, if left unconstrained by the discipline of rational expectations, could use expectational assumptions to generate whatever results they wanted, I suggested that this sort of reasoning in defense of the rational-expectations axiom betrayed what I called the “methodological arrogance” of modern macroeconomics which has, to a large extent, succeeded in imposing that axiom on all macroeconomic models. In his comment responding to my criticisms, Kartik made good-natured reference in passing to my charge of “methodological arrogance,” without substantively engaging with the charge. And in a post about the early reviews of Kartik’s book, Steve Williamson, while crediting me for at least reading the book before commenting on it, registered puzzlement at what I meant by “methodological arrogance.”

Actually, I realized when writing that post that I was not being entirely clear about what “methodological arrogance” meant, but I thought that my somewhat tongue-in-cheek reference to the duty of modern macroeconomists “to ban such models from polite discourse — certainly from the leading economics journals — lest the public be tainted by economists who might otherwise dare to abuse their models by making illicit assumptions about expectations formation and equilibrium concepts” was sufficiently suggestive not to require elaboration, especially after having devoted several earlier posts to criticisms of the methodology of modern macroeconomics (e.g., here, here, and here). That was a misjudgment.

So let me try to explain what I mean by methodological arrogance, which is not quite the same as, but is closely related to, methodological authoritarianism. I will do so by referring to the long introductory essay (“A Realist View of Logic, Physics, and History”) that Karl Popper contributed to The Self and Its Brain, a book he co-authored with the neuroscientist John Eccles. The chief aim of the essay was to argue that the universe is not fully determined, but evolves, producing new, emergent phenomena not originally extant in the universe, such as the higher elements, life, consciousness, language, science and all other products of human creativity, which in turn interact with the universe in fundamentally unpredictable ways. Popper regards consciousness as a real phenomenon that cannot be reduced to or explained by purely physical causes. Though he makes only brief passing reference to the social sciences, Popper’s criticisms of reductionism are directly applicable to the microfoundations program of modern macroeconomics, and so I think it will be useful to quote what he wrote at some length.

Against the acceptance of the view of emergent evolution there is a strong intuitive prejudice. It is the intuition that, if the universe consists of atoms or elementary particles, so that all things are structures of such particles, then every event in the universe ought to be explicable, and in principle predictable, in terms of particle structure and of particle interaction.

Notice how easy it would be to rephrase this statement as a statement about microfoundations:

Against the acceptance of the view that there are macroeconomic phenomena, there is a strong intuitive prejudice. It is the intuition that, if the macroeconomy consists of independent agents, so that all macroeconomic phenomena are the result of decisions made by independent agents, then every macroeconomic event ought to be explicable, and in principle predictable, in terms of the decisions of individual agents and their interactions.

Popper continues:

Thus we are led to what has been called the programme of reductionism [microfoundations]. In order to discuss it I shall make use of the following Table

(12) Level of ecosystems

(11) Level of populations of metazoa and plants

(10) Level of metazoa and multicellular plants

(9) Level of tissues and organs (and of sponges?)

(8) Level of populations of unicellular organisms

(7) Level of cells and of unicellular organisms

(6) Level of organelles (and perhaps of viruses)

(5) Liquids and solids (crystals)

(4) Molecules

(3) Atoms

(2) Elementary particles

(1) Sub-elementary particles

(0) Unknown sub-sub-elementary particles?

The reductionist idea behind this table is that the events or things on each level should be explained in terms of the lower levels. . . .

This reductionist idea is interesting and important; and whenever we can explain entities and events on a higher level by those of a lower level, we can speak of a great scientific success, and can say that we have added much to our understanding of the higher level. As a research programme, reductionism is not only important, but it is part of the programme of science whose aim is to explain and to understand.

So far so good. Reductionism certainly has its place. So do microfoundations. Whenever we can take an observation and explain it in terms of its constituent elements, we have accomplished something important. We have made scientific progress.

But Popper goes on to voice a cautionary note. There may be, and probably are, strict, perhaps insuperable, limits to how far higher-level phenomena can be reduced to (explained by) lower-level phenomena.

[E]ven the often referred to reduction of chemistry to physics, important as it is, is far from complete, and very possibly incompletable. . . . [W]e are far removed indeed from being able to claim that all, or most, properties of chemical compounds can be reduced to atomic theory. . . . In fact, the five lower levels of [our] Table . . . can be used to show that we have reason to regard this kind of intuitive reduction programme as clashing with some results of modern physics.

For what [our] Table suggests may be characterized as the principle of “upward causation.” This is the principle that causation can be traced in our Table . . . . from a lower level to a higher level, but not vice versa; that what happens on a higher level can be explained in terms of the next lower level, and ultimately in terms of elementary particles and the relevant physical laws. It appears at first that the higher levels cannot act on the lower ones.

But the idea of particle-to-particle or atom-to-atom interaction has been superseded by physics itself. A diffraction grating or a crystal (belonging to level (5) of our Table . . .) is a spatially very extended complex (and periodic) structure of billions of molecules; but it interacts as a whole extended periodic structure with the photons or the particles of a beam of photons or particles. Thus we have here an important example of “downward causation“. . . . That is to say, the whole, the macro structure, may, qua whole, act upon a photon or an elementary particle or an atom. . . .

Other physical examples of downward causation – of macroscopic structures on level (5) acting upon elementary particles or photons on level (1) – are lasers, masers, and holograms. And there are also many other macro structures which are examples of downward causation: every simple arrangement of negative feedback, such as a steam engine governor, is a macroscopic structure that regulates lower level events, such as the flow of the molecules that constitute the steam. Downward causation is of course important in all tools and machines which are designed for some purpose. When we use a wedge, for example, we do not arrange for the action of its elementary particles, but we use a structure, relying on it to guide the actions of its constituent elementary particles to act, in concert, so as to achieve the desired result.

Stars are undesigned, but one may look at them as undesigned “machines” for putting the atoms and elementary particles in their central region under terrific gravitational pressure, with the (undesigned) result that some atomic nuclei fuse and form the nuclei of heavier elements; an excellent example of downward causation, of the action of the whole structure upon its constituent particles.

(Stars, incidentally, are good examples of the general rule that things are processes. Also, they illustrate the mistake of distinguishing between “wholes” – which are “more than the sums of their parts” – and “mere heaps”: a star is, in a sense, a “mere” accumulation, a “mere heap” of its constituent atoms. Yet it is a process – a dynamic structure. Its stability depends upon the dynamic equilibrium between its gravitational pressure, due to its sheer bulk, and the repulsive forces between its closely packed elementary particles. If the latter are excessive, the star explodes. If they are smaller than the gravitational pressure, it collapses into a “black hole.”)

The most interesting examples of downward causation are to be found in organisms and in their ecological systems, and in societies of organisms [my emphasis]. A society may continue to function even though many of its members die; but a strike in an essential industry, such as the supply of electricity, may cause great suffering to many individual people. . . . I believe that these examples make the existence of downward causation obvious; and they make the complete success of any reductionist programme at least problematic.

I was very glad when I recently found this discussion of reductionism by Popper in a book that I had not opened for maybe 40 years, because it supports an argument that I have been making on this blog against the microfoundations program in macroeconomics: that as much as macroeconomics requires microfoundations, microeconomics also requires macrofoundations. Here is how I put it a little over a year ago:

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
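To illustrate the point in the simplest possible terms (the notation and the linear example are mine, not drawn from the quoted post): a comparative-statics derivative is computed from an equilibrium condition, and its sign is meaningful only on the assumption that an equilibrium exists, is locally unique, and is the position to which the system actually returns after the parameter change.

```latex
% Illustrative comparative statics for a single market (notation mine):
% demand Q^d = a - bP and supply Q^s = c + dP, with b, d > 0.
\begin{align*}
  \text{Equilibrium condition:}\quad
    & G(P^{*};a) \equiv (a - bP^{*}) - (c + dP^{*}) = 0
      \;\Longrightarrow\; P^{*} = \frac{a-c}{b+d} ,\\[4pt]
  \text{Comparative-statics sign:}\quad
    & \frac{\partial P^{*}}{\partial a}
      = -\,\frac{\partial G/\partial a}{\partial G/\partial P}
      = \frac{1}{b+d} > 0 .
\end{align*}
```

The derivative compares two equilibrium positions and says nothing about the adjustment path between them; that is exactly the ceteris-paribus caveat, and the macrofoundation, on which the passage above insists.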

And more recently, I put it this way:

The microeconomic theory of price adjustment is a theory of price adjustment in a single market. It is a theory in which, implicitly, all prices and quantities but a single price-quantity pair are in equilibrium. Equilibrium in that single market is rapidly restored by price and quantity adjustment in that single market. That is why I have said that microeconomics rests on a macroeconomic foundation, and that is why it is illusory to imagine that macroeconomics can be logically derived from microfoundations. Microfoundations, insofar as they explain how prices adjust, are themselves founded on the existence of a macroeconomic equilibrium. Founding macroeconomics on microfoundations is just a form of bootstrapping.

So I think that my criticism of the microfoundations project exactly captures the gist of Popper’s criticism of reductionism. Popper extended his criticism to a certain form of reductionism, which he called “radical materialism or radical physicalism,” in a later passage in the same essay that is also worth quoting:

Radical materialism or radical physicalism is certainly a self-consistent position. For it is a view of the universe which, as far as we know, was adequate once; that is, before the emergence of life and consciousness. . . .

What speaks in favour of radical materialism or radical physicalism is, of course, that it offers us a simple vision of a simple universe, and this looks attractive just because, in science, we search for simple theories. However, I think that it is important that we note that there are two different ways by which we can search for simplicity. They may be called, briefly, philosophical reduction and scientific reduction. The latter is characterized by an attempt to provide bold and testable theories of high explanatory power. I believe that the latter is an extremely valuable and worthwhile method; while the former is of value only if we have good reasons to assume that it corresponds to the facts about the universe.

Indeed, the demand for simplicity in the sense of philosophical rather than scientific reduction may actually be damaging. For even in order to attempt scientific reduction, it is necessary for us to get a full grasp of the problem to be solved, and it is therefore vitally important that interesting problems are not “explained away” by philosophical analysis. If, say, more than one factor is responsible for some effect, it is important that we do not pre-empt the scientific judgment: there is always the danger that we might refuse to admit any ideas other than the ones we appear to have at hand: explaining away, or belittling the problem. The danger is increased if we try to settle the matter in advance by philosophical reduction. Philosophical reduction also makes us blind to the significance of scientific reduction.

Popper adds the following footnote about the difference between philosophic and scientific reduction.

Consider, for example, what a dogmatic philosophical reductionist of a mechanistic disposition (or even a quantum-mechanistic disposition) might have done in the face of the problem of the chemical bond. The actual reduction, so far as it goes, of the theory of the hydrogen bond to quantum mechanics is far more interesting than the philosophical assertion that such a reduction will one day be achieved.

What modern macroeconomics now offers is largely an array of models simplified sufficiently so that they are solvable using the techniques of dynamic optimization. Dynamic optimization by individual agents — the microfoundations of modern macro — makes sense only in the context of an intertemporal equilibrium. But it is just the possibility that intertemporal equilibrium may not obtain that, to some of us at least, makes macroeconomics interesting and relevant. As the great Cambridge economist, Frederick Lavington, anticipating Popper in grasping the possibility of downward causation, put it so well, “the inactivity of all is the cause of the inactivity of each.”
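To be concrete about what “dynamic optimization by individual agents” means here, consider the standard textbook consumption-saving problem (a generic illustration of mine, not a model drawn from any particular paper or from Athreya’s book):

```latex
% Generic textbook example of dynamic optimization by an individual agent
% (a standard consumption-saving problem, used here only for illustration):
\begin{align*}
  &\max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^{t} u(c_t)
     \quad\text{subject to}\quad a_{t+1} = (1+r_t)\,(a_t + y_t - c_t) ,\\[4pt]
  &\text{first-order (Euler) condition:}\quad
     u'(c_t) = \beta\,(1+r_t)\,u'(c_{t+1}) .
\end{align*}
```

The Euler condition links consumption today to consumption tomorrow only through the interest rate r_t, which the agent takes as given; unless there is an intertemporal equilibrium generating that price, the optimization exercise has nothing determinate to condition on, which is the point being made here.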

So what do I mean by methodological arrogance? I mean an attitude that invokes microfoundations as a methodological principle — philosophical reductionism in Popper’s terminology — while dismissing non-microfounded macromodels as unscientific. To be sure, the progress of science may enable us to reformulate (and perhaps improve) explanations of certain higher-level phenomena by expressing those relationships in terms of lower-level concepts. That is what Popper calls scientific reduction. But scientific reduction is very different from rejecting, on methodological principle, any explanation not expressed in terms of more basic concepts.

And whenever macrotheory seems inconsistent with microtheory, the inconsistency poses a problem to be solved. Solving the problem will advance our understanding. But simply to reject the macrotheory on methodological principle without evidence that the microfounded theory gives a better explanation of the observed phenomena than the non-microfounded macrotheory (and especially when the evidence strongly indicates the opposite) is arrogant. Microfoundations for macroeconomics should result from progress in economic theory, not from a dubious methodological precept.

Let me quote Popper again (this time from his book Objective Knowledge) about the difference between scientific and philosophical reduction, addressing the denial by physicalists that there is such a thing as consciousness, a denial based on their belief that all supposedly mental phenomena can and will ultimately be reduced to purely physical phenomena:

[P]hilosophical speculations of a materialistic or physicalistic character are very interesting, and may even be able to point the way to a successful scientific reduction. But they should be frankly tentative theories. . . . Some physicalists do not, however, consider their theories as tentative, but as proposals to express everything in physicalist language; and they think these proposals have much in their favour because they are undoubtedly convenient: inconvenient problems such as the body-mind problem do indeed, most conveniently, disappear. So these physicalists think that there can be no doubt that these problems should be eliminated as pseudo-problems. (p. 293)

One could easily substitute “methodological speculations about macroeconomics” for “philosophical speculations of a materialistic or physicalistic character” in the first sentence. And in the third sentence one could substitute “advocates of microfounding all macroeconomic theories” for “physicalists,” “microeconomic” for “physicalist,” and “Phillips Curve” or “involuntary unemployment” for “body-mind problem.”

So, yes, I think it is arrogant to think that you can settle an argument by forcing the other side to use only those terms that you approve of.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
