Archive for the 'Samuelson' Category

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” an attack on two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing the results by which they could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what Romer calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing (Lucas frequently makes use of ad hoc price-stickiness assumptions; I don’t know why Romer identifies market clearing as a Lucasian dogma) may be no more justified from a history-of-science perspective than would be criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations. But regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw from it the conclusion that Lucas drew:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. The rational-expectations assumption, for example, was adopted because it made finding solutions to certain intertemporal models tractable, while being justified as if it were entailed by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized, the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest, and Prices, which provided perhaps the definitive statement of the neoclassical synthesis that Samuelson espoused. In one lecture (“My Keynesian Education,” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of Friedman and Schwartz’s A Monetary History of the United States to show just how tightly Lucas holds on to it:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In the 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Sterilizing Gold Inflows: The Anatomy of a Misconception

In my previous post about Milton Friedman’s problematic distinction between real and pseudo-gold standards, I mentioned that one of the signs that Friedman pointed to in asserting that the Federal Reserve Board in the 1920s was managing a pseudo gold standard was the “sterilization” of gold inflows to the Fed. What Friedman meant by sterilization is that the incremental gold reserves flowing into the Fed did not lead to a commensurate increase in the stock of money held by the public, the failure of the stock of money to increase commensurately with an inflow of gold being the standard understanding of sterilization in the context of the gold standard.

Of course “commensurateness” is in the eye of the beholder. Because Friedman felt that, given the size of the gold inflow, the US money stock did not increase “enough,” he argued that the gold standard in the 1920s did not function as a “real” gold standard would have functioned. Now Friedman’s denial that a gold standard in which gold inflows are sterilized is a “real” gold standard may have been uniquely his own, but his understanding of sterilization was hardly unique; it was widely shared. In fact it was so widely shared that I myself have had to engage in a bit of an intellectual struggle to free myself from its implicit reversal of the causation between money creation and the holding of reserves. For direct evidence of my struggles, see some of my earlier posts on currency manipulation (here, here and here), in which I began by using the concept of sterilization as if it actually made sense in the context of international adjustment, and did not fully grasp that the concept leads only to confusion. In an earlier post about Hayek’s 1932 defense of the insane Bank of France, I did not explicitly refer to sterilization, and got the essential analysis right. Of course Hayek, in his 1932 defense of the Bank of France, was using — whether implicitly or explicitly I don’t recall — the idea of sterilization to defend the Bank of France against critics by showing that the Bank of France was not guilty of sterilization, but Hayek’s criterion for what qualifies as sterilization was stricter than Friedman’s. In any event, it would be fair to say that Friedman’s conception of how the gold standard works was broadly consistent with the general understanding at the time of how the gold standard operates, though, even under the orthodox understanding, he had no basis for asserting that the 1920s gold standard was fraudulent and bogus.

To sort out the multiple layers of confusion operating here, it helps to go back to the classic discussion of international monetary adjustment under a pure gold currency, which was the basis for later discussions of international monetary adjustment under a gold standard (i.e., a paper currency convertible into gold at a fixed exchange rate). I refer to David Hume’s essay “Of the Balance of Trade,” in which he argued that there is an equilibrium distribution of gold across different countries, working through a famous thought experiment in which four-fifths of the gold held in Great Britain was annihilated to show that an automatic adjustment process would redistribute the international stock of gold to restore Britain’s equilibrium share of the total world stock of gold.

The adjustment process, which came to be known as the price-specie flow mechanism (PSFM), is widely considered one of Hume’s greatest contributions to economics and to monetary theory. Applying the simple quantity theory of money, Hume argued that the loss of 80% of Britain’s gold stock would mean that prices and wages in Britain would fall by 80%. But with British prices 80% lower than prices elsewhere, Britain would stop importing goods that could now be obtained more cheaply at home than they could be obtained abroad, while foreigners would import all they could from Britain to take advantage of low British prices. British exports would rise and imports fall, causing an inflow of gold into Britain. But, as gold flowed into Britain, British prices would rise, thereby reducing the British competitive advantage, causing imports to increase and exports to decrease, and consequently reducing the inflow of gold. The adjustment process would continue until British prices and wages had risen to a level equal to that in other countries, thus eliminating the British balance-of-trade surplus and terminating the inflow of gold.
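For readers who like to see the mechanics spelled out, here is a toy simulation of Hume’s thought experiment under his own simple quantity-theoretic premises. The numbers (the initial gold stocks, the four-fifths shock, the responsiveness of the trade balance to the price gap) are illustrative assumptions of mine, not anything found in Hume; the point is only that, on Hume’s premises, gold keeps flowing until Britain’s original share of the world’s gold stock is restored.

```python
# A toy simulation of Hume's price-specie-flow thought experiment.
# All parameters (initial gold stocks, the 80% shock, the trade-response
# coefficient k) are illustrative assumptions, not Hume's numbers.

def simulate_psfm(g_britain=100.0, g_world=400.0, shock=0.8, k=0.25, periods=60):
    """Crude quantity-theory dynamics: each region's price level is proportional
    to its gold stock (real output held fixed), and Britain's trade surplus
    (hence its gold inflow) is proportional to the gap between foreign and
    British price levels."""
    y_britain, y_world = 1.0, 4.0          # fixed real outputs (arbitrary units)
    g_b = g_britain * (1 - shock)          # four-fifths of Britain's gold annihilated
    g_w = g_world
    path = []
    for t in range(periods):
        p_b = g_b / y_britain              # simple quantity theory: P = M / y
        p_w = g_w / y_world
        gold_inflow = k * (p_w - p_b)      # cheap British goods -> trade surplus -> gold inflow
        g_b += gold_inflow
        g_w -= gold_inflow
        path.append((t, g_b, p_b, p_w))
    return path

if __name__ == "__main__":
    for t, g_b, p_b, p_w in simulate_psfm()[::10]:
        print(f"period {t:2d}: British gold = {g_b:6.2f}, "
              f"British prices = {p_b:6.2f}, world prices = {p_w:6.2f}")
```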

This was a very nice argument, and Hume, a consummate literary stylist, expressed it beautifully. There is only one problem: Hume ignored the fact that the prices of tradable goods (those that can be imported or exported or those that compete with imports and exports) are determined not in isolated domestic markets, but in international markets, so the premise that all British prices, like the British stock of gold, would fall by 80% was clearly wrong. Nevertheless, the disconnect between the simple quantity theory and the idea that the prices of tradable goods are determined in international markets was widely ignored by subsequent writers. Adam Smith, David Ricardo, and J. S. Mill avoided the fallacy, though without explicitly criticizing Hume, while Henry Thornton, in his great work The Paper Credit of Great Britain, alternately embraced and rejected it; even so, by the end of the nineteenth century, if not earlier, the Humean analysis had become the established orthodoxy.

Towards the middle of the nineteenth century, there was a famous series of controversies over the Bank Charter Act of 1844, in which two groups of economists, the Currency School in support and the Banking School in opposition, argued about the key provisions of the Act: to centralize the issue of banknotes in Great Britain within the Bank of England and to prohibit the Bank of England from issuing additional banknotes, beyond the fixed quantity of “unbacked” notes (i.e., without gold cover) already in circulation, unless the additional banknotes were issued in exchange for a corresponding amount of gold coin or bullion. In other words, the Bank Charter Act imposed a 100% marginal reserve requirement on the issue of additional banknotes by the Bank of England, thereby codifying what was then known as the Currency Principle, the idea being that the fluctuation in the total quantity of banknotes ought to track exactly the Humean mechanism in which the quantity of money in circulation changes pound for pound with the import or export of gold.

The doctrinal history of the controversies about the Bank Charter Act is very confused, and I have written about them at length in several papers (this, this, and this) and in my book on free banking, so I don’t want to go over that ground again here. But until the advent of the monetary approach to the balance of payments in the late 1960s and early 1970s, the thinking of the economics profession about monetary adjustment under the gold standard was largely in a state of confusion, the underlying fallacy of PSFM having remained largely unrecognized. One of the few who avoided the confusion was R. G. Hawtrey, who had anticipated all the important elements of the monetary approach to the balance of payments, but whose work had been largely forgotten in the wake of the General Theory.

Two important papers changed the landscape. The first was a 1976 paper by Donald McCloskey and Richard Zecher, “How the Gold Standard Really Worked,” which explained that a whole slew of supposed anomalies in the empirical literature on the gold standard were easily explained if the Humean PSFM was disregarded. The second was Paul Samuelson’s 1980 paper, “A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade,” showing that the change in relative price levels — the mechanism whereby international monetary equilibrium is supposedly restored according to PSFM — is irrelevant to the adjustment process when arbitrage constraints on tradable goods are effective. The burden of the adjustment is carried by changes in spending patterns that restore desired asset holdings to their equilibrium levels, independently of relative-price-level effects. Samuelson further showed that even when, owing to the existence of non-tradable goods, there are relative-price-level effects, those effects are irrelevant to the adjustment process that restores equilibrium.

What was missing from Hume’s analysis was the concept of a demand to hold money (or gold). The difference between desired and actual holdings of cash implies corresponding changes in expenditure, and those changes in expenditure restore equilibrium in money (gold) holdings independent of any price effects. Lacking any theory of the demand to hold money (or gold), Hume had to rely on a price-level adjustment to explain how equilibrium is restored after a change in the quantity of gold in one country. Hume’s misstep set monetary economics off on a two-century detour, avoided by only a relative handful of economists, in explaining the process of international adjustment.
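By way of contrast, here is an equally stylized sketch of the adjustment process once a demand to hold money is taken seriously and the price level is pinned down by international arbitrage. Again, the parameters (the desired ratio of money holdings to income, the speed of adjustment) are illustrative assumptions, not estimates; the point is that the gap between desired and actual money holdings is worked off through reduced expenditure (a trade surplus settled in gold), with no relative-price-level change doing any work.

```python
# A companion sketch of the arbitrage-monetary-adjustment view: the price level
# is pinned down internationally, and the gold inflow simply works off the gap
# between desired and actual money (gold) holdings through reduced spending.
# The parameters below are illustrative assumptions, not estimates from any
# historical episode or from Samuelson's 1980 paper.

def simulate_monetary_adjustment(income=100.0, k_desired=0.25, m_actual=5.0,
                                 adjust=0.5, periods=12):
    """Desired money holdings are a fraction k_desired of nominal income, which
    is tied down by the world price level. Each period the country runs a trade
    surplus equal to a fraction adjust of its excess demand for money; the
    surplus is settled in gold, raising m_actual."""
    m_desired = k_desired * income
    path = []
    for t in range(periods):
        excess_demand_for_money = m_desired - m_actual
        trade_surplus = adjust * excess_demand_for_money   # spending cut = gold inflow
        m_actual += trade_surplus
        path.append((t, trade_surplus, m_actual))
    return path

if __name__ == "__main__":
    for t, surplus, m in simulate_monetary_adjustment():
        print(f"period {t:2d}: trade surplus = {surplus:7.3f}, money holdings = {m:7.3f}")
```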

So historically there have been two paradigms of international adjustment under the gold standard: 1) the better-known, but incorrect, Humean PSFM based on relative-price-level differences which induce self-correcting gold flows that, in turn, are supposed to eliminate the price-level differences, and 2) the not-so-well-known, but correct, arbitrage-monetary-adjustment theory. Under the PSFM, the adjustment can occur only if gold flows give rise to relative-price-level adjustments. But, under PSFM, for those relative-price-level adjustments to occur, gold flows have to change the domestic money stock, because it is the quantity of domestic money that governs the domestic price level.

That is why, if you believe in PSFM, as Milton Friedman did, sterilization is such a big deal. Under PSFM, relative domestic price levels are governed by relative domestic money stocks, so if a gold inflow into a country does not change its domestic money stock, the necessary increase in the relative price level of the country receiving the gold inflow cannot occur. The “automatic” adjustment mechanism under the gold standard has been blocked, implying that if there is sterilization, the gold standard is rendered fraudulent.

But we now know that that is not how the gold standard works. The point of gold flows was not to change relative price levels. International adjustment required changes in domestic money supplies to be sure, but, under the gold standard, changes in domestic money supplies are essentially unavoidable. Thus, in his 1932 defense of the insane Bank of France, Hayek pointed out that the domestic quantity of money had in fact increased in France along with French gold holdings. To Hayek, this meant that the Bank of France was not sterilizing the gold inflow. Friedman would have said that, given the gold inflow, the French money stock ought to have increased by a far larger amount than it actually did.

Neither Hayek nor Friedman understood what was happening. The French public wanted to increase their holdings of money. Because the French government imposed high gold reserve requirements (but less than 100%) on the creation of French banknotes and deposits, increasing holdings of money required the French to restrict their spending sufficiently to create a balance-of-trade surplus large enough to induce the inflow of gold needed to satisfy the reserve requirements on the desired increase in cash holdings. The direction of causation was exactly the opposite of what Friedman thought. It was the desired increase in the amount of francs that the French wanted to hold that (given the level of gold reserve requirements) induced the increase in French gold holdings.
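The arithmetic of that causal chain is worth setting out explicitly. Suppose, purely for illustration, that the marginal gold-cover requirement on notes and deposits is $r$ (say $r = 0.35$), and that the French public wishes to add $\Delta M$ to its holdings of francs. The Bank of France then needs additional gold of

\[
\Delta G = r\,\Delta M ,
\]

and, with no domestic source of monetary gold, that inflow can be generated only by a cumulative excess of French income over French spending, that is, a balance-of-trade surplus, equal to $\Delta G$. The causation runs from the desired increase in money holdings $\Delta M$ to the gold inflow $\Delta G$, not from the gold inflow to the money stock.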

But this doesn’t mean, as Hayek argued, that the insane Bank of France was not wreaking havoc on the international monetary system. By advocating a banking law that imposed very high gold reserve requirements and by insisting on redeeming almost all of its non-gold foreign exchange reserves into gold bullion, the insane Bank of France, along with the clueless Federal Reserve, generated a huge increase in the international monetary demand for gold, which was the proximate cause of the worldwide deflation that began in 1929 and continued till 1933. The problem was not a misalignment between relative price levels, which is what sterilization supposedly causes; the problem was a worldwide deflation that afflicted all countries on the gold standard, and was avoidable only by escaping from the gold standard.

At any rate, the concept of sterilization does nothing to enhance our understanding of that deflationary process. And whatever defects there were in the way that central banks were operating under the gold standard in the 1920s, the concept of sterilization diverts attention from the critical problem, which was the increasing demand of the world’s central banks, especially the Bank of France and the Federal Reserve, for gold reserves.

Macroeconomic Science and Meaningful Theorems

Greg Hill has a terrific post on his blog, providing the coup de grace to Stephen Williamson’s attempt to show that the way to increase inflation is for the Fed to raise its Federal Funds rate target. Williamson’s problem, Hill points out, is that he attempts to derive his results from relationships that exist in equilibrium. But equilibrium relationships in and of themselves are sterile. What we care about is how a system responds to some change that disturbs a pre-existing equilibrium.

Williamson acknowledged that “the stories about convergence to competitive equilibrium – the Walrasian auctioneer, learning – are indeed just stories . . . [they] come from outside the model” (here).  And, finally, this: “Telling stories outside of the model we have written down opens up the possibility for cheating. If everything is up front – written down in terms of explicit mathematics – then we have to be honest. We’re not doing critical theory here – we’re doing economics, and we want to be treated seriously by other scientists.”

This self-conscious scientism on Williamson’s part is not just annoyingly self-congratulatory. “Hey, look at me! I can write down mathematical models, so I’m a scientist, just like Richard Feynman.” It’s wildly inaccurate, because the mere statement of equilibrium conditions is theoretically vacuous. Back to Greg:

The most disconcerting thing about Professor Williamson’s justification of “scientific economics” isn’t its uncritical “scientism,” nor is it his defense of mathematical modeling. On the contrary, the most troubling thing is Williamson’s acknowledgement-cum-proclamation that his models, like many others, assume that markets are always in equilibrium.

Why is this assumption a problem?  Because, as Arrow, Debreu, and others demonstrated a half-century ago, the conditions required for general equilibrium are unimaginably stringent.  And no one who’s not already ensconced within Williamson’s camp is likely to characterize real-world economies as always being in equilibrium or quickly converging upon it.  Thus, when Williamson responds to a question about this point with, “Much of economics is competitive equilibrium, so if this is a problem for me, it’s a problem for most of the profession,” I’m inclined to reply, “Yes, Professor, that’s precisely the point!”

Greg proceeds to explain that the Walrasian general equilibrium model involves the critical assumption (implemented by the convenient fiction of an auctioneer who announces prices and computes supply and demand at those prices before allowing trade to take place) that no trading takes place except at the equilibrium price vector (the vector having one element for each good traded in the economy). Without an auctioneer, there is no way to ensure that the equilibrium price vector, even if it exists, will ever be found.

Franklin Fisher has shown that decisions made out of equilibrium will only converge to equilibrium under highly restrictive conditions (in particular, “no favorable surprises,” i.e., all “sudden changes in expectations are disappointing”).  And since Fisher has, in fact, written down “the explicit mathematics” leading to this conclusion, mustn’t we conclude that the economists who assume that markets are always in equilibrium are really the ones who are “cheating”?
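To see how much work the auctioneer is doing in that story, it may help to write the story down. The sketch below sets up a two-good exchange economy with two Cobb-Douglas consumers (the endowments, preference weights, and step size are illustrative assumptions of mine) and lets a fictional auctioneer grope toward the market-clearing price by raising the price of the good in excess demand, with no trade taking place along the way. In this deliberately well-behaved example the groping converges; nothing in the general theory guarantees that it will, whether the adjustment is the auctioneer’s notional groping sketched here or the out-of-equilibrium trading that Fisher analyzed.

```python
# A minimal tatonnement sketch for a two-good exchange economy with two
# Cobb-Douglas consumers. The fictional auctioneer announces a price, computes
# notional excess demand, and adjusts the price; no trade occurs until the
# process stops. Endowments, preference weights, and the step size are
# illustrative assumptions, and convergence is not guaranteed in general
# (Scarf-type examples with more goods cycle forever).

def excess_demand_good1(p1, p2=1.0):
    """Aggregate excess demand for good 1 at prices (p1, p2), good 2 the numeraire."""
    agents = [
        {"alpha": 0.3, "endow": (1.0, 0.0)},   # spends 30% of wealth on good 1
        {"alpha": 0.7, "endow": (0.0, 1.0)},   # spends 70% of wealth on good 1
    ]
    z1 = 0.0
    for a in agents:
        wealth = p1 * a["endow"][0] + p2 * a["endow"][1]
        demand1 = a["alpha"] * wealth / p1     # Cobb-Douglas demand for good 1
        z1 += demand1 - a["endow"][0]
    return z1

def tatonnement(p1=2.0, step=0.2, tol=1e-8, max_iter=10_000):
    """Raise p1 when good 1 is in excess demand, lower it when in excess supply."""
    for _ in range(max_iter):
        z1 = excess_demand_good1(p1)
        if abs(z1) < tol:
            return p1
        p1 = max(p1 + step * z1, 1e-6)         # keep the announced price positive
    raise RuntimeError("the auctioneer never found a market-clearing price")

if __name__ == "__main__":
    print(f"market-clearing relative price of good 1: {tatonnement():.4f}")
```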

An alternative general equilibrium story is that learning takes place allowing the economy to converge on a general equilibrium time path over time, but Greg easily disposes of that story as well.

[T]he learning narrative also harbors massive problems, which come out clearly when viewed against the background of the Arrow-Debreu idealized general equilibrium construction, which includes a complete set of intertemporal markets in contingent claims.  In the world of Arrow-Debreu, every price in every possible state of nature is known at the moment when everyone’s once-and-for-all commitments are made.  Nature then unfolds – her succession of states is revealed – and resources are exchanged in accordance with the (contractual) commitments undertaken “at the beginning.”

In real-world economies, these intertemporal markets are woefully incomplete, so there’s trading at every date, and a “sequence economy” takes the place of Arrow and Debreu’s timeless general equilibrium.  In a sequence economy, buyers and sellers must act on their expectations of future events and the prices that will prevail in light of these outcomes.  In the limiting case of rational expectations, all agents correctly forecast the equilibrium prices associated with every possible state of nature, and no one’s expectations are disappointed. 

Unfortunately, the notion that rational expectations about future prices can replace the complete menu of Arrow-Debreu prices is hard to swallow.  Frank Hahn, who co-authored “General Competitive Analysis” with Kenneth Arrow (1972), could not begin to swallow it, and, in his disgorgement, proceeded to describe in excruciating detail why the assumption of rational expectations isn’t up to the job (here).  And incomplete markets are, of course, but one departure from Arrow-Debreu.  In fact, there are so many more that Hahn came to ridicule the approach of sweeping them all aside, and “simply supposing the economy to be in equilibrium at every moment of time.”

Just to pile on, I would also point out that any general equilibrium model assumes that there is a given state of knowledge that is available to all traders collectively, but not necessarily to each trader. In this context, learning means that traders gradually learn what the pre-existing facts are. But in the real world, knowledge increases and evolves through time. As knowledge changes, capital — both human and physical — embodying that knowledge becomes obsolete and has to be replaced or upgraded, at unpredictable moments of time, because it is the nature of new knowledge that it cannot be predicted. The concept of learning incorporated in these sorts of general equilibrium constructs is a travesty of the kind of learning that characterizes the growth of knowledge in the real world. The implications for the existence of a general equilibrium in a world in which knowledge grows in an unpredictable way are devastating.

Greg aptly sums up the absurdity of using general equilibrium theory (the description of a decentralized economy in which the component parts are in a state of perfect coordination) as the microfoundation for macroeconomics (the study of decentralized economies that are less than perfectly coordinated) as follows:

What’s the use of “general competitive equilibrium” if it can’t furnish a sturdy, albeit “external,” foundation for the kind of modeling done by Professor Williamson, et al?  Well, there are lots of other uses, but in the context of this discussion, perhaps the most important insight to be gleaned is this: Every aspect of a real economy that Keynes thought important is missing from Arrow and Debreu’s marvelous construction.  Perhaps this is why Axel Leijonhufvud, in reviewing a state-of-the-art New Keynesian DSGE model here, wrote, “It makes me feel transported into a Wonderland of long ago – to a time before macroeconomics was invented.”

To which I would just add that nearly 70 years ago, Paul Samuelson published his magnificent Foundations of Economic Analysis, a work undoubtedly read and mastered by Williamson. But the central contribution of the Foundations was the distinction between equilibrium conditions and what Samuelson (owing to the influence of the still fashionable philosophical school called logical positivism) mislabeled meaningful theorems. A mere equilibrium condition is not the same as a meaningful theorem, but Samuelson showed how a meaningful theorem can be mathematically derived from an equilibrium condition. The link between equilibrium conditions and meaningful theorems was the foundation of economic analysis. Without a mathematical connection between equilibrium conditions and meaningful theorems analogous to the one provided by Samuelson in the Foundations, claims to have provided microfoundations for macroeconomics are, at best, premature.
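A schematic example (mine, not Samuelson’s) may make the distinction concrete. Suppose the equilibrium condition for a single market is that demand equals supply, $D(p, \alpha) = S(p)$, where $\alpha$ is a shift parameter, and suppose we also impose the qualitative restrictions $D_p < 0$, $S_p > 0$, and $D_\alpha > 0$. The equilibrium condition by itself asserts nothing refutable about observable behavior, but differentiating it implicitly with respect to $\alpha$ yields

\[
\frac{dp^*}{d\alpha} \;=\; -\,\frac{D_\alpha}{D_p - S_p} \;>\; 0 ,
\]

a refutable prediction (an increase in $\alpha$ raises the equilibrium price) of the kind Samuelson called a meaningful theorem. It is the passage from the equilibrium condition to a restriction like this, not the mere writing down of the condition, that the Foundations showed how to make systematically.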

The Road to Serfdom: Good Hayek or Bad Hayek?

A new book by Angus Burgin about the role of F. A. Hayek and Milton Friedman and the Mont Pelerin Society (an organization of free-market economists plus some scholars in other disciplines founded by Hayek and later headed by Friedman) in resuscitating free-market capitalism as a political ideal after its nineteenth-century version had been discredited by the twin catastrophes of the Great War and the Great Depression was the subject of an interesting and in many ways insightful review by Robert Solow in the latest New Republic. Despite some unfortunate memory lapses and apologetics concerning his own errors and those of his good friend and colleague Paul Samuelson in their assessments of the efficiency of central planning, thereby minimizing the analytical contributions of Hayek and Friedman, Solow does a good job of highlighting the complexity and nuances of Hayek’s thought — a complexity often ignored not only by Hayek’s critics but by many of his most vocal admirers — and of contrasting Hayek’s complexity and nuance with Friedman’s rhetorically and strategically compelling, but intellectually dubious, penchant for simplification.

First, let’s get the apologetics out of the way. Tyler Cowen pounced on this comment by Solow:

The MPS [Mont Pelerin Society] was no more influential inside the economics profession. There were no publications to be discussed. The American membership was apparently limited to economists of the Chicago School and its scattered university outposts, plus a few transplanted Europeans. “Some of my best friends” belonged. There was, of course, continuing research and debate among economists on the good and bad properties of competitive and noncompetitive markets, and the capacities and limitations of corrective regulation. But these would have gone on in the same way had the MPS not existed. It has to be remembered that academic economists were never optimistic about central planning. Even discussion about the economics of some conceivable socialism usually took the form of devising institutions and rules of behavior that would make a socialist economy function like a competitive market economy (perhaps more like one than any real-world market economy does). Maybe the main function of the MPS was to maintain the morale of the free-market fellowship.

And one of Tyler’s commenters unearthed this gem from Samuelson’s legendary textbook:

The Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive.

Tyler also dug up this nugget from the classic paper by Samuelson and Solow on the Phillips Curve (but see this paper by James Forder for some revisionist history about the Samuelson-Solow paper):

We have not here entered upon the important question of what feasible institutional reforms might be introduced to lessen the degree of disharmony between full employment and price stability. These could of course involve such wide-ranging issues as direct price and wage controls, antiunion and antitrust legislation, and a host of other measures hopefully designed to move the American Phillips’ curves downward and to the left.

But actually, Solow was undoubtedly right that the main function of the MPS was morale-building! Plus networking. Nothing to be sneered at, and nothing to apologize for. The real heavy lifting was done in the 51 weeks of the year when the MPS was not in session.

Anyway, enough score settling, because Solow does show a qualified, but respectful, appreciation for Hayek’s virtues as an economist, scholar, and social philosopher, suggesting that there was a Good Hayek, who struggled to reformulate a version of liberalism that transcended the inadequacies (practical and theoretical) that doomed the laissez-faire liberalism of the nineteenth century, and a Bad Hayek, who engaged in a black versus white polemical struggle with “socialists of all parties.” The trope strikes me as a bit unfair, but Hayek could sometimes be injudicious in his policy pronouncements, or in his off-the-cuff observations and remarks. Despite his natural reserve, Hayek sometimes indulged in polemical exaggeration. The appetite for rhetorical overkill was especially hard for Hayek to resist when the topic of discussion was J. M. Keynes, the object of both Hayek’s admiration and his disdain. Hayek seemingly could not help but caricature Keynes in a way calculated to make him seem both ridiculous and irresistible.  Have a look.

So I would not dispute that Hayek occasionally committed rhetorical excesses when wearing his policy-advocate hat. And there were some other egregious lapses on Hayek’s part like his unqualified support for General Pinochet, reflecting perhaps a Quixotic hope that somewhere there was a benevolent despot waiting to be persuaded to implement Hayek’s ideas for a new liberal political constitution in which the principle of the separation of powers would be extended to separate the law-making powers of the legislative body from the governing powers of the representative assembly.

But Solow exaggerates by characterizing the Road to Serfdom as an example of the Bad Hayek, despite acknowledging that the Road to Serfdom was very far from advocating a return to nineteenth-century laissez-faire. What Solow denies is that

the standard regulatory interventions in the economy have any inherent tendency to snowball into “serfdom.” The correlations often run the other way. Sixty-five years later, Hayek’s implicit prediction is a failure, rather like Marx’s forecast of the coming “immiserization of the working class.”

This is a common interpretation of Hayek’s thesis in the Road to Serfdom.   And it is true that Hayek did intimate that piecemeal social engineering (to borrow a phrase coined by Hayek’s friend Karl Popper) created tendencies, which, if not held in check by strict adherence to liberal principles, could lead to comprehensive central planning. But that argument is a different one from the main argument of the Road to Serfdom that comprehensive central planning could be carried out effectively only by a government exercising unlimited power over individuals. And there is no empirical evidence that refutes Hayek’s main thesis.

A few years ago, in perhaps his last published article, Paul Samuelson wrote a brief historical assessment of Hayek, including personal recollections of their mostly friendly interactions and of one not so pleasant exchange they had in Hayek’s old age, when Hayek wrote to Samuelson demanding that Samuelson retract the statement in his textbook (essentially the same as the one made by Solow) that the empirical evidence, showing little or no correlation between economic and political freedom, refutes the thesis of the Road to Serfdom that intervention leads to totalitarianism. Hayek complained that this charge misrepresented what he had argued in the Road to Serfdom. Observing that Hayek, with whom he had long been acquainted, never previously complained about the passage, Samuelson explained that he tried to placate Hayek with an empty promise to revise the passage, attributing Hayek’s belated objection to the irritability of old age and a bad heart. Whether Samuelson’s evasive response to Hayek was an appropriate one is left as an exercise for the reader.

Defenders of Hayek expressed varying degrees of outrage at the condescending tone taken by Samuelson in his assessment of Hayek. I think that they were overreacting. Samuelson, an academic enfant terrible if there ever was one, may have treated his elders and peers with condescension, but, speaking from experience, I can testify that he treated his inferiors with the utmost courtesy. Samuelson was not dismissing Hayek, he was just being who he was.

The question remains: what was Hayek trying to say in the Road to Serfdom, and in subsequent works? Well, believe it or not, he was trying to say many things, but the main thesis of the Road to Serfdom was clearly what he always said it was: comprehensive central planning is, and always will be, incompatible with individual and political liberty. Samuelson and Solow were not testing Hayek’s main thesis. None of the examples of interventionist governments that they cite, mostly European social democracies, adopted comprehensive central planning, so Hayek’s thesis was not refuted by those counterexamples. Samuelson once acknowledged “considerable validity . . . for the nonnovel part [my emphasis] of Hayek’s warning” in the Road to Serfdom: “controlled socialist societies are rarely efficient and virtually never freely democratic.” Presumably Samuelson assumed that Hayek must have been saying something more than what had previously been said by other liberal economists. After all, if Hayek were saying no more than that liberty and democracy are incompatible with comprehensive central planning, what claim to originality could Hayek have been making? None.

Yep, that’s exactly right; Hayek was not making any claim to originality in the Road to Serfdom. But sometimes old truths have to be restated in a new and more persuasive form than that in which they were originally stated. That was especially the case in the early 1940s when collectivism and planning were widely viewed as the wave of the future, and even so thoroughly conservative and so eminent an economic theorist as Joseph Schumpeter could argue without embarrassment that there was no practical or theoretical reason why socialist central planning could not be implemented. And besides, the argument that every intervention leads to another one until the market system becomes paralyzed was not invented by Hayek either, having been made by Ludwig von Mises some twenty years earlier, and quite possibly by other writers before that.  So even the argument that Samuelson tried to pin on Hayek was not really novel either.

To be sure, Hayek’s warning that central planning would inevitably lead to totalitarianism was not the only warning he made in the Road to Serfdom, but conceptually distinct arguments should not be conflated. Hayek clearly wanted to make the argument that an unprincipled policy of economic interventions was dangerous, because interventions introduce distortions that beget further interventions, producing a cumulative process of ever-more intrusive interventions, thereby smothering market forces and eventually sapping the productive capacity of the free enterprise system. That is an argument about how it is possible to stumble into central planning without really intending to do so.  Hayek clearly believed in that argument, often invoking it in tandem with, or as a supplement to, his main argument about the incompatibility of central planning with liberty and democracy. Despite the undeniable tendency for interventions to create pressure (for both political and economic reasons) to adopt additional interventions, Hayek clearly overestimated the power of that tendency, failing to understand, or at least to take sufficient account of, the countervailing political forces resisting further interventions. So although Hayek was right that no intellectual principle enables one to say “so much intervention and not a drop more,” there could still be a kind of (messy) democratic political equilibrium that effectively limits the extent to which new interventions can be piled on top of old ones. That surely was a significant gap in Hayek’s too narrow, and overly critical, view of how the democratic political process operates.

That said, I think that Solow came close to getting it right in this paragraph:

THE GOOD HAYEK was not happy with the reception of The Road to Serfdom. He had not meant to provide a manifesto for the far right. Careless readers ignored his rejection of unqualified laissez-faire, and the fact that he reserved a useful, limited economic role for government. He had not actually claimed that the descent into serfdom was inevitable. There is no reason to doubt Hayek’s sincerity in this (although the Bad Hayek occasionally made other appearances). Perhaps he would be appalled at the thought of a Congress full of Tea Party Hayekians. But it was his book, after all. The fact that natural allies such as Knight and moderates such as Viner thought that he had overreached suggests that the Bad Hayek really was there in the text.

But not exactly right. Hayek was not totally good. Who is? Hayek made mistakes. Let he who is without sin cast the first stone. Frank Knight didn’t like the Road to Serfdom. But as Solow himself observed earlier in his review, Knight was a curmudgeon, and had previously crossed swords with Hayek over arcane issues of capital theory.  So any inference from Knight’s reaction to the Road to Serfdom must be taken with a large grain of salt. And one might also want to consider what Schumpeter said about Hayek in his review of the Road to Serfdom, criticizing Hayek for “politeness to a fault,” because Hayek would “hardly ever attribute to opponents anything beyond intellectual error.”  Was the Bad Hayek really there in the text? Was it really “not a good book”? The verdict has to be: unproven.

PS In his review, Solow expressed a wish for a full list of the original attendees at the founding meeting of the Mont Pelerin Society. Hayek included the list as a footnote to his “Opening Address to a Conference at Mont Pelerin,” published in his Studies in Philosophy, Politics and Economics. There is a slightly different list of original members in Wikipedia.

Maurice Allais, Paris

Carlo Antoni, Rome

Hans Barth, Zurich

Karl Brandt, Stanford, Calif.

John Davenport, New York

Stanley R. Dennison, Cambridge

Walter Eucken, Freiburg i. B.

Erich Eyck, Oxford

Milton Friedman, Chicago

H. D. Gideonse, Brooklyn

F. D. Graham, Princeton

F. A. Harper, Irvington-on-Hudson, NY

Henry Hazlitt, New York

T. J. B. Hoff, Oslo

Albert Hunold, Zurich

Bertrand de Jouvenel, Chexbres, Vaud

Carl Iversen, Copenhagen

John Jewkes, Manchester

F. H. Knight, Chicago

Fritz Machlup, Buffalo

L. B. Miller, Detroit

Ludwig von Mises, New York

Felix Morley, Washington, DC

Michael Polanyi, Manchester

Karl R. Popper, London

William E. Rappard, Geneva

L. E. Read, Irvington-on-Hudson, NY

Lionel Robbins, London

Wilhelm Roepke, Geneva

George J. Stigler, Providence, RI

Herbert Tingsten, Stockholm

Francois Trevoux, Lyon

V. O. Watts, Irvington-on-Hudson, NY

C. V. Wedgwood, London

In addition, Hayek included the names of others who were invited but unable to attend and who joined the MPS as original members:

Costantino Bresciani-Turroni, Rome

William H. Chamberlin, New York

Rene Courtin, Paris

Max Eastman, New York

Luigi Einaudi, Rome

Howard Ellis, Berkeley, Calif.

A. G. B. Fisher, London

Eli Heckscher, Stockholm

Hans Kohn, Northampton, Mass

Walter Lippmann, New York

Friedrich Lutz, Princeton

Salvador de Madariaga, Oxford

Charles Morgan, London

W. A. Orton, Northampton, Mass.

Arnold Plant, London

Charles Rist, Paris

Michael Roberts, London

Jacques Rueff, Paris

Alexander Rustow, Istanbul

F. Schnabel, Heidelberg

W. J. H. Sprott, Nottingham

Roger Truptil, Paris

D. Villey, Poitiers

E. L. Woodward, Oxford

H. M. Wriston, Providence, RI

G. M. Young, London

About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
