Archive Page 2

The Trouble with IS-LM (and its Successors)

Lately, I have been reading a paper by Roger Backhouse and David Laidler, “What Was Lost with IS-LM” (an earlier version is available here), which was part of a very interesting symposium of 11 papers on the IS-LM model published as a supplement to the 2004 volume of History of Political Economy. The main thesis of the paper is that the IS-LM model, like the General Theory of which it is a partial and imperfect distillation, aborted a number of promising developments in the rapidly developing, but still nascent, field of macroeconomics in the 1920s and 1930s, developments that just might, had they not been elbowed aside by the IS-LM model, have evolved into a more useful and relevant theory of macroeconomic fluctuations and policy than we now possess. Even though I have occasionally sparred with Scott Sumner about IS-LM – with me pushing back a bit at Scott’s attacks on IS-LM — I have a lot of sympathy for the Backhouse-Laidler thesis.

The Backhouse-Laidler paper is too long to summarize, but I will just note that there are four types of loss that they attribute to IS-LM, which are all, more or less, derivative of the static equilibrium character of Keynes’s analytic method in both the General Theory and the IS-LM construction.

1 The loss of dynamic analysis. IS-LM is a single-period model.

2 The loss of intertemporal choice and expectations. Intertemporal choice and expectations are excluded a priori in a single-period model.

3 The loss of policy regimes. In a single-period model, policy is a one-time affair. The problem of setting up a regime that leads to optimal results over time doesn’t arise.

4 The loss of intertemporal coordination failures. Another concept that is irrelevant in a one-period model.

There was one particular passage that I found especially impressive. Commenting on the lack of any systematic dynamic analysis in the GT, Backhouse and Laidler observe,

[A]lthough [Keynes] made many remarks that could be (and in some cases were later) turned into dynamic models, the emphasis of the General Theory was nevertheless on unemployment as an equilibrium phenomenon.

Dynamic accounts of how money wages might affect employment were only a little more integrated into Keynes’s formal analysis than they were later into IS-LM. Far more significant for the development in Keynes’s thought is how Keynes himself systematically neglected dynamic factors that had been discussed in previous explanations of unemployment. This was a feature of the General Theory remarked on by Bertil Ohlin (1937, 235-36):

Keynes’s theoretical system . . . is equally “old-fashioned” in the second respect which characterizes recent economic theory – namely, the attempt to break away from an explanation of economic events by means of orthodox equilibrium constructions. No other analysis of trade fluctuations in recent years – with the possible exception of the Mises-Hayek school – follows such conservative lines in this respect. In fact, Keynes is much more of an “equilibrium theorist” than such economists as Cassel and, I think, Marshall.

Backhouse and Laidler go on to cite the Stockholm School (of which Ohlin was a leading figure) as an example of explicitly dynamic analysis.

As Bjorn Hansson (1982) has shown, this group developed an explicit method, using the idea of a succession of “unit periods,” in which each period began with agents having plans based on newly formed expectations about the outcome of executing them, and ended with the economy in some new situation that was the outcome of market processes set in motion by the incompatibility of those plans, and in which expectations had been reformulated, too, in the light of experience. They applied this method to the construction of a wide variety of what they called “model sequences,” many of which involved downward spirals in economic activity at whose very heart lay rising unemployment. This is not the place to discuss the vexed question of the extent to which some of this work anticipated the Keynesian multiplier process, but it should be noted that, in IS-LM, it is the limit to which such processes move, rather than the time path they follow to get there, that is emphasized.

The Stockholm method seems to me exactly the right way to explain business-cycle downturns. In normal times, there is a rough – certainly not perfect, but good enough — correspondence of expectations among agents. That correspondence of expectations implies that the individual plans contingent on those expectations will be more or less compatible with one another. Surprises happen; here and there people are disappointed and regret past decisions, but, on the whole, they are able to adjust as needed to muddle through. There is usually enough flexibility in a system to allow most people to adjust their plans in response to unforeseen circumstances, so that the disappointment of some expectations doesn’t become contagious, causing a systemic crisis.

But when there is some sort of major shock – and it can only be a shock if it is unforeseen – the system may not be able to adjust. Instead, the disappointment of expectations becomes contagious. If my customers aren’t able to sell their products, I may not be able to sell mine. Expectations are like networks. If there is a breakdown at some point in the network, the whole network may collapse or malfunction. Because expectations and plans fit together in interlocking networks, it is possible that even a disturbance at one point in the network can cascade over an increasingly wide group of agents, leading to something like a system-wide breakdown, a financial crisis or a depression.

But the “problem” with the Stockholm method was that it was open-ended. It could offer only “a wide variety” of “model sequences,” without specifying a determinate solution. It was just this gap in the Stockholm approach that Keynes was able to fill. He provided a determinate equilibrium, “the limit to which the Stockholm model sequences would move, rather than the time path they follow to get there.” A messy, but insightful, approach to explaining the phenomenon of downward spirals in economic activity coupled with rising unemployment was cast aside in favor of the neater, simpler approach of Keynes. No wonder Ohlin sounds annoyed in his comment, quoted by Backhouse and Laidler, about Keynes. Tractability trumped insight.

Unfortunately, that is still the case today. Open-ended models of the sort that the Stockholm School tried to develop still cannot compete with the RBC and DSGE models that have displaced IS-LM and now dominate modern macroeconomics. The basic idea that modern economies form networks, and that networks have properties that are not reducible to just the nodes forming them, has yet to penetrate the trained intuition of modern macroeconomists. Otherwise, how would it have been possible to imagine that a macroeconomic model could consist of a single representative agent? And just because modern macroeconomists have expanded their models to include more than a single representative agent doesn’t mean that the intellectual gap evidenced by the introduction of representative-agent models into macroeconomic discourse has been closed.

Misunderstanding (Totally!) Competitive Currency Devaluations

Before becoming Governor of the Reserve Bank of India, Raghuram Rajan was Professor of Finance at the University of Chicago Business School. Winner of the Fischer Black Prize in 2003, he is the author of numerous articles in leading academic journals in economics and finance, and co-author (with Luigi Zingales) of a well-regarded book, Saving Capitalism from the Capitalists, that had some valuable insights about financial-market dysfunction. He is obviously no slouch.

Unfortunately, based on recent reports, Governor Rajan is, despite his impressive resume and academic credentials, as Marcus Nunes pointed out on his blog, totally clueless about the role of monetary policy and the art of central banking in combating depressions. Here is the evidence provided by none other than the Wall Street Journal, a newspaper whose editorial page espouses roughly the same view as Rajan, summarizing Rajan’s remarks.

Reserve Bank of India Governor Raghuram Rajan warned Wednesday that the global economy bears an increasing resemblance to its condition in the 1930s, with advanced economies trying to pull out of the Great Recession at each other’s expense.

The difference: competitive monetary policy easing has now taken the place of competitive currency devaluations as the favored tool for playing a zero-sum game that is bound to end in disaster. Now, as then, “demand shifting” has taken the place of “demand creation,” the Indian policymaker said.

A clear symptom of the major imbalances crippling the world’s financial market is the over valuation of the euro, Mr. Rajan said.

The euro-zone economy faces problems similar to those faced by developing economies, with the European Central Bank’s “very, very accommodative stance” having a reduced impact due to the ultra-loose monetary policies being pursued by other central banks, including the Federal Reserve, the Bank of Japan and the Bank of England.

The notion that competitive currency devaluations in the Great Depression were a zero-sum game is a fallacy, an influential fallacy to be sure, but a fallacy nonetheless. And because it is – and was — so influential, it is a highly dangerous fallacy. There is indeed a similarity between the current situation and the 1930s, but the similarity is not that monetary ease is a zero-sum game that merely “shifts,” but does not “create,” demand; the similarity is that the fallacious notion that monetary ease does not create demand is still so prevalent.

The classic refutation of the fallacy that monetary ease only shifts, but does not create, demand was provided on numerous occasions by R. G. Hawtrey. Almost two and a half years ago, I quoted a particularly cogent passage from Hawtrey’s Trade Depression and the Way Out (2nd edition, 1933) in which he addressed the demand-shift fallacy. Hawtrey refuted the fallacy in responding to those who were arguing that Britain’s abandonment of the gold standard in September 1931 had failed to stimulate the British economy, and had damaged the world economy, because prices continued falling after Britain left the gold standard. Hawtrey first discussed the reasons for the drop in the world price level after Britain gave up the gold standard.

When Great Britain left the gold standard, deflationary measures were everywhere resorted to. Not only did the Bank of England raise its rate, but the tremendous withdrawals of gold from the United States involved an increase of rediscounts and a rise of rates there, and the gold that reached Europe was immobilized or hoarded. . . .

In other words, Britain’s departure from the gold standard led to speculation that the US would follow Britain off the gold standard, implying an increase in the demand to hoard gold before the anticipated increase in its dollar price. That is a typical reaction under the gold standard when the probability of a devaluation is perceived to have risen. By leaving gold, Britain increased the perceived probability that other countries, and especially the US, would also leave the gold standard.

The consequence was that the fall in the price level continued [because an increase in the demand to hold gold (for any reason including speculation on a future increase in the nominal price of gold) must raise the current value of gold relative to all other commodities which means that the price of other commodities in terms of gold must fall -- DG]. The British price level rose in the first few weeks after the suspension of the gold standard [because the value of the pound was falling relative to gold, implying that prices in terms of pounds rose immediately after Britain left gold -- DG], but then accompanied the gold price level in its downward trend [because after the initial fall in the value of the pound relative to gold, the pound stabilized while the real value of gold continued to rise -- DG]. This fall of prices calls for no other explanation than the deflationary measures which had been imposed [alarmed at the rapid fall in the value of gold, the Bank of England raised interest rates to prevent further depreciation in sterling -- DG]. Indeed what does demand explanation is the moderation of the fall, which was on the whole not so steep after September 1931 as before.

Yet when the commercial and financial world saw that gold prices were falling rather than sterling prices rising, they evolved the purely empirical conclusion that a depreciation of the pound had no effect in raising the price level, but that it caused the price level in terms of gold and of those currencies in relation to which the pound depreciated to fall.

Here Hawtrey identified precisely the demand-shift fallacy evidently now subscribed to by Governor Rajan. In other words, when Britain left the gold standard, Britain did nothing to raise the international level of prices, which, under the gold standard, is the level of prices measured in terms of gold. Britain may have achieved a slight increase in its own domestic price level, but only by imposing a corresponding reduction in the price level measured in terms of gold. Let’s see how Hawtrey demolishes the fallacy.

For any such conclusion there was no foundation. Whenever the gold price level tended to fall, the tendency would make itself felt in a fall in the pound concurrently with the fall in commodities. [Hawtrey is saying that if the gold price level fell, while the sterling price level remained constant, the value of sterling would also fall in terms of gold. -- DG] But it would be quite unwarrantable to infer that the fall in the pound was the cause of the fall in commodities.

On the other hand, there is no doubt that the depreciation of any currency, by reducing the cost of manufacture in the country concerned in terms of gold, tends to lower the gold prices of manufactured goods. . . . [In other words, the cost of production of manufactured goods, which include both raw materials – raw materials often being imported -- and capital equipment and labor, which generally are not mobile, is unlikely to rise as much in percentage terms as the percentage depreciation in the currency. -- DG]

But that is quite a different thing from lowering the price level. For the fall in manufacturing costs results in a greater demand for manufactured goods, and therefore the derivative demand for primary products is increased. [That is to say, if manufactured products become relatively cheaper as the currency depreciates, the real quantity of manufactured goods demanded will increase, and the real quantity of inputs used to produce the increased quantity of manufactured goods must also increase. -- DG] While the prices of finished goods fall, the prices of primary products rise. Whether the price level as a whole would rise or fall it is not possible to say a priori, but the tendency is toward correcting the disparity between the price levels of finished products and primary products. That is a step towards equilibrium. And there is on the whole an increase of productive activity. The competition of the country which depreciates its currency will result in some reduction of output from the manufacturing industry of other countries. But this reduction will be less than the increase in the country’s output, for if there were no net increase in the world’s output there would be no fall of prices. [Thus, even though there is some demand shifting toward the country that depreciates its currency because its products become relatively cheaper than the products of non-depreciating currencies, the cost reduction favoring the output of the country with a depreciating currency could not cause an overall reduction in prices elsewhere if total output had not increased. -- DG]

Hawtrey then articulates the policy implication of the demand-shift fallacy.

In consequence of the competitive advantage gained by a country’s manufacturers from a depreciation of its currency, any such depreciation is only too likely to meet with recriminations and even retaliation from its competitors. . . . Fears are even expressed that if one country starts depreciation, and others follow suit, there may result “a competitive depreciation” to which no end can be seen.

This competitive depreciation is an entirely imaginary danger. The benefit that a country derives from the depreciation of its currency is in the rise of its price level relative to its wage level, and does not depend on its competitive advantage. [There is a slight ambiguity here, because Hawtrey admitted above that there is a demand shift. But there is also an increase in demand, and it is the increase in demand, associated with a rise of its price level relative to its wage level, which does not depend on a competitive advantage associated with a demand shift. -- DG] If other countries depreciate their currencies, its competitive advantage is destroyed, but the advantage of the price level remains both to it and to them. They in turn may carry the depreciation further, and gain a competitive advantage. But this race in depreciation reaches a natural limit when the fall in wages and in the prices of manufactured goods in terms of gold has gone so far in all the countries concerned as to regain the normal relation with the prices of primary products. When that occurs, the depression is over, and industry is everywhere remunerative and fully employed. Any countries that lag behind in the race will suffer from unemployment in their manufacturing industry. But the remedy lies in their own hands; all they have to do is to depreciate their currencies to the extent necessary to make the price level remunerative to their industry. Their tardiness does not benefit their competitors, once these latter are employed up to capacity. Indeed, if the countries that hang back are an important part of the world’s economic system, the result must be to leave the disparity of price levels partly uncorrected, with undesirable consequences to everybody. . . .

The picture of an endless competition in currency depreciation is completely misleading. The race of depreciation is towards a definite goal; it is a competitive return to equilibrium. The situation is like that of a fishing fleet threatened with a storm; no harm is done if their return to a harbor of refuge is “competitive.” Let them race; the sooner they get there the better. (pp. 154-57)

Hawtrey’s analysis of competitive depreciation can be further elucidated by reconsidering it in the light of Max Corden’s classic analysis of exchange-rate protection, which I have discussed several times on this blog (here, here, and here). Corden provided a deep analysis of the conditions under which exchange-rate depreciation confers a competitive advantage. For internationally traded commodities, it is hard to see how any advantage can be derived from currency manipulation. A change in the exchange rate would be rapidly offset by corresponding changes in the prices of the relevant products. However, factors of production like labor, land, and capital equipment tend to be immobile, so their prices in the local currency don’t necessarily adjust immediately to changes in the exchange rate of the local currency. Hawtrey was clearly assuming that labor and capital are not tradable, so that international arbitrage does not induce immediate adjustments in their prices. Also, Hawtrey’s analysis begins from a state of disequilibrium, while Corden’s starts from equilibrium, a very important difference.

However, even if prices of non-tradable commodities and factors of production don’t immediately adjust to exchange-rate changes, there is another mechanism operating to eliminate any competitive advantage, which is that the inflow of foreign-exchange reserves into a country with an undervalued currency (and, thus, a competitive advantage) will normally induce monetary expansion in that country, thereby raising the prices of non-tradables and factors of production. What Corden showed was that a central bank willing to tolerate a sufficiently large expansion in its foreign-reserve holdings could keep its currency undervalued, thereby maintaining a competitive advantage for its country in world markets.

But the corollary to Corden’s analysis is that to gain competitive advantage from currency depreciation requires the central bank to maintain monetary stringency (a chronic excess demand for money thus requiring a corresponding export surplus) and a continuing accumulation of foreign exchange reserves. If central banks are pursuing competitive monetary easing, which Governor Rajan likens to competitive exchange-rate depreciation aimed at shifting, not expanding, total demand, he is obviously getting worked up over nothing, because Corden demonstrated 30 years ago that a necessary condition for competitive exchange-rate depreciation is monetary tightness, not monetary ease.

How to Think about Own Rates of Interest, Version 2.0

In my previous post, I tried to explain how to think about own rates of interest. Unfortunately, I made a careless error in calculating the own rate of interest in the simple example I constructed to capture the essence of Sraffa’s own-rate argument against Hayek’s notion of the natural rate of interest. But sometimes these little slip-ups can be educational, so I am going to try to turn my conceptual misstep to advantage in working through and amplifying the example I presented last time.

But before I reproduce the passage from Sraffa’s review that will serve as our basic text in this post as it did in the previous post, I want to clarify another point. The own rate of interest for a commodity may be calculated in terms of any standard of value. If I borrow wheat and promise to repay in wheat, the wheat own rate of interest may be calculated in terms of wheat or in terms of any other standard; all of those rates are own rates, but each is expressed in terms of a different standard.

Lend me 100 bushels of wheat today, and I will pay you back 102 bushels next year. The own rate of interest for wheat in terms of wheat would be 2%. Alternatively, I could borrow $100 of wheat today and promise to pay back $102 of wheat next year. The own rate of interest for wheat in terms of wheat and the own rate of interest for wheat in terms of dollars would be equal if and only if the forward dollar price of wheat is the same as the current dollar price of wheat. The commodity or asset in terms of which a price is quoted or in terms of which we measure the own rate is known as the numeraire. (If all that Sraffa was trying to say in criticizing Hayek was that there are many equivalent ways of expressing own interest rates, he was making a trivial point. Perhaps Hayek didn’t understand that trivial point, in which case the rough treatment he got from Sraffa was not undeserved. But it seems clear that Sraffa was trying — unsuccessfully — to make a more substantive point than that.)
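To make this concrete, here is a minimal sketch in Python of the conversion just described. The 2% wheat rate comes from the example above; the $5.00 spot and $5.25 forward dollar prices of wheat are made-up numbers of my own, used only for illustration.

```python
# Own rate of wheat measured in wheat vs. measured in dollars.
# The dollar prices below are illustrative assumptions, not data.

def own_rate_in_dollars(own_rate_in_wheat, spot_price, forward_price):
    """Convert a wheat-denominated own rate into a dollar-denominated rate.

    Lending 1 bushel today returns (1 + own_rate_in_wheat) bushels next year;
    valuing the bushels lent at the spot price and the bushels repaid at the
    forward price gives the dollar return on the same loan.
    """
    return (1 + own_rate_in_wheat) * forward_price / spot_price - 1

wheat_rate = 0.02   # borrow 100 bushels, repay 102

# Case 1: forward price equals spot price -> the two own rates coincide.
print(round(own_rate_in_dollars(wheat_rate, spot_price=5.00, forward_price=5.00), 4))  # 0.02

# Case 2: wheat expected to appreciate from $5.00 to $5.25 -> the dollar rate is higher.
print(round(own_rate_in_dollars(wheat_rate, spot_price=5.00, forward_price=5.25), 4))  # 0.071
```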

In principle, there is a separate own rate of interest for every commodity and for every numeraire. If there are n commodities, there are n potential numeraires, and n own rates can be expressed in terms of each numeraire. So there are n-squared own rates. Each own rate can be thought of as equilibrating the demand for loans made in terms of a given commodity and a given numeraire. But arbitrage constraints tightly link all these separate own rates together. If it were cheaper to borrow in terms of one commodity than another, or in terms of one numeraire than another, borrowers would switch to the commodity and numeraire with the lowest cost of borrowing, and if it were more profitable to lend in terms of one commodity, or in terms of one numeraire, than another, lenders would switch to lending in terms of the commodity or numeraire with the highest return.

Thus, competition tends to equalize own rates across all commodities and across all numeraires. Of course, perfect arbitrage requires the existence of forward markets in which to contract today for the purchase or sale of a commodity at a future date. When forward markets don’t exist, some traders may anticipate advantages to borrowing or lending in terms of particular commodities based on their expectations of future prices for those commodities. The arbitrage constraint on the variation of interest rates was discovered and explained by Irving Fisher in his great work Appreciation and Interest.

It is clear that if the unit of length were changed and its change were foreknown, contracts would be modified accordingly. Suppose a yard were defined (as once it probably was) to be the length of the king’s girdle, and suppose the king to be a child. Everybody would then know that the “yard” would increase with age and a merchant who should agree to deliver 1000 “yards” ten years hence, would make his terms correspond to his expectations. To alter the mode of measurement does not alter the actual quantities involved but merely the numbers by which they are represented. (p. 1)

We thus see that the farmer who contracts a mortgage in gold is, if the interest is properly adjusted, no worse and no better off than if his contract were in a “wheat” standard or a “multiple” standard. (p. 16)

I pause to make a subtle, but, I think, an important, point. Although the relationship between the spot and the forward price of any commodity tightly constrains the own rate for that commodity, the spot/forward relationship does not determine the own rate of interest for that commodity. There is always some “real” rate reflecting a rate of intertemporal exchange that is consistent with intertemporal equilibrium. Given such an intertemporal rate of exchange — a real rate of interest — the spot/forward relationship for a commodity in terms of a numeraire pins down the own rate for that commodity in terms of that numeraire.

OK with that introduction out of the way, let’s go back to my previous post in which I wrote the following:

Sraffa correctly noted that arbitrage would force the terms of such a loan (i.e., the own rate of interest) to equal the ratio of the current forward price of the commodity to its current spot price, buying spot and selling forward being essentially equivalent to borrowing and repaying.

That statement now seems quite wrong to me. Sraffa did not assert that arbitrage would force the own rate of interest to equal the ratio of the spot and forward prices. He merely noted that in a stationary equilibrium with equality between all spot and forward prices, all own interest rates would be equal. I criticized him for failing to note that in a stationary equilibrium all own rates would be zero. The conclusion that all own rates would be zero in a stationary equilibrium might in fact be valid, but if it is, it is not as obviously valid as I suggested, and my criticism of Sraffa and Ludwig von Mises for not drawing what seemed to me an obvious inference was not justified. To conclude that own rates are zero in a stationary equilibrium, you would, at a minimum, have to show that there is at least one commodity which could be carried from one period to the next at a non-negative profit. Sraffa may have come close to suggesting such an assumption in the passage in which he explains how borrowing to buy cotton spot and immediately selling cotton forward can be viewed as the equivalent of contracting a loan in terms of cotton, but he did not make that assumption explicitly. In any event, I mistakenly interpreted him to be saying that the ratio of the spot and forward prices is the same as the own interest rate, which is neither true nor what Sraffa meant.

And now let’s finally go back to the key quotation of Sraffa’s that I tried unsuccessfully to parse in my previous post.

Suppose there is a change in the distribution of demand between various commodities; immediately some will rise in price, and others will fall; the market will expect that, after a certain time, the supply of the former will increase, and the supply of the latter fall, and accordingly the forward price, for the date on which equilibrium is expected to be restored, will be below the spot price in the case of the former and above it in the case of the latter; in other words, the rate of interest on the former will be higher than on the latter. (“Dr. Hayek on Money and Capital,” p. 50)

In my previous post I tried to flesh out Sraffa’s example by supposing that, in the stationary equilibrium before the demand shift, tomatoes and cucumbers were both selling for a dollar each. In a stationary equilibrium, tomato and cucumber prices would remain, indefinitely into the future, at a dollar each. A shift in demand from tomatoes to cucumbers upsets the equilibrium, causing the price of tomatoes to fall to, say, $.90 and the price of cucumbers to rise to, say, $1.10. But Sraffa also argued that the prices of tomatoes and cucumbers would diverge only temporarily from their equilibrium values, implicitly assuming that the long-run supply curves of both tomatoes and cucumbers are horizontal at a price of $1 per unit.

I misunderstood Sraffa to be saying that the ratio of the future price and the spot price of tomatoes equals one plus the own rate on tomatoes. I therefore incorrectly calculated the own rate on tomatoes as 1/.9 minus one, or 11.1%. There were two mistakes. First, I incorrectly inferred that equality of all spot and forward prices implies that the real rate must be zero, and second, as Nick Edmunds pointed out in his comment, a forward price exceeding the spot price would actually be reflected in an own rate less than the zero real rate that I had posited. To calculate the own rate on tomatoes, I ought to have taken the ratio of the spot price to the forward price — (.9/1) — and subtracted one plus the real rate. If the real rate is zero, then the implied own rate is .9 minus 1, or -10%.

To see where this comes from, we can take the simple algebra from Fisher (pp. 8-9). Let i be the interest rate calculated in terms of one commodity and one numeraire, and j be the rate of interest calculated in terms of a different commodity in that numeraire. Further, let a be the rate at which the second commodity appreciates relative to the first commodity. We have the following relationship derived from the arbitrage condition.

(1 + i) = (1 + j)(1 + a)

Now in our case, we are trying to calculate the own rate on tomatoes given that tomatoes are expected (an expectation reflected in the forward price of tomatoes) to appreciate by 10% from $.90 to $1.00 over the term of the loan. To keep the analysis simple, assume that i is zero. Although I concede that a positive real rate may be consistent with the stationary equilibrium that I, following Sraffa, have assumed, a zero real rate is certainly not an implausible assumption, and no important conclusions of this discussion hinge on assuming that i is zero.

To apply Fisher’s framework to Sraffa’s example, we need only substitute the ratio of the forward price of tomatoes to the spot price — [p(fwd)/p(spot)] — for the appreciation factor (1 + a).

So, in place of the previous equation, I can now substitute the following equivalent equation:

(1 + i) = (1 + j) [p(fwd)/p(spot)].

Rearranging, we get:

[p(spot)/p(fwd)] (1 + i) = (1 + j).

If i = 0, the following equation results:

[p(spot)/p(fwd)] = (1 + j).

In other words:

j = [p(spot)/p(fwd)] – 1.

If the ratio of the spot to the forward price is .9, then the own rate on tomatoes, j, equals -10%.
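For readers who want to check the arithmetic, here is a minimal sketch in Python of the calculation just derived, using the prices assumed in the example ($0.90 spot and $1.00 forward for tomatoes, $1.10 spot and $1.00 forward for cucumbers) and the zero real rate assumed in the text; the cucumber line anticipates the next paragraph.

```python
# Own rate of a commodity from its spot and forward prices in the numeraire,
# given the real rate i. Rearranging (1 + i) = (1 + j) * [p(fwd)/p(spot)]
# gives j = (1 + i) * [p(spot)/p(fwd)] - 1.

def own_rate(i, spot, forward):
    return (1 + i) * spot / forward - 1

real_rate = 0.0  # assumed throughout the example

print(round(own_rate(real_rate, spot=0.90, forward=1.00), 4))  # -0.1: the -10% own rate on tomatoes
print(round(own_rate(real_rate, spot=1.10, forward=1.00), 4))  # 0.1: a +10% own rate on cucumbers
```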

My assertion in the previous post that the own rate on cucumbers would be negative by the amount of expected depreciation (from $1.10 to $1) in the next period was also backwards. The own rate on cucumbers would have to exceed the zero equilibrium real rate by as much as cucumbers would depreciate by the time of repayment. So, for cucumbers, j = [p(spot)/p(fwd)] – 1 = 1.1 – 1, or 10%.

Just to elaborate further, let’s assume that there is a third commodity, onions, and that, in the initial equilibrium, the unit prices of onions, tomatoes and cucumbers are equal. If the demand shift from tomatoes to cucumbers does not affect the demand for onions, then, even after the shift in demand, the price of onions will remain one dollar per onion.

The table below shows prices and own rates for tomatoes, cucumbers and onions for each possible choice of numeraire. If prices are quoted in tomatoes, the price of tomatoes is fixed at 1. Given a zero real rate, the own rate on tomatoes is zero in every period. What about the own rate on cucumbers? In period 0, with no change in prices expected, the own rate on cucumbers is also zero. However, in period 1, after the price of cucumbers has risen to 1.22 tomatoes, the own rate on cucumbers must reflect the expected reduction in the price of a cucumber in terms of tomatoes from 1.22 tomatoes in period 1 to 1 tomato in period 2; with the spot price exceeding the forward price by 22%, the implied cucumber own rate is 22% in terms of tomatoes. Similarly, the onion own rate in terms of tomatoes would be 11%, reflecting a spot price for onions in terms of tomatoes 11% above the forward price for onions in terms of tomatoes. If prices were quoted in terms of cucumbers, the cucumber own rate would be zero, and because the prices of tomatoes and onions would be expected to rise in terms of cucumbers, the tomato and onion own rates would be negative (-18.2% for tomatoes and -9.1% for onions). And if prices were quoted in terms of onions, the onion own rate would be zero, while the tomato own rate, given the expected appreciation of tomatoes in terms of onions, would be negative (-10%), and the cucumber own rate, given the expected depreciation of cucumbers in terms of onions, would be positive (10%).

[Table: prices and own rates of tomatoes, cucumbers, and onions under each choice of numeraire]

The next table, summarizing the first one, is a 3 by 3 matrix showing each of the nine possible combinations of numeraires and corresponding own rates.

[Table: 3 x 3 matrix of own rates for each choice of numeraire]
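Because the tables were images in the original post, here is a minimal Python sketch that reproduces the period-1 own-rate matrix from the assumptions of the example: period-1 dollar prices of $0.90, $1.10, and $1.00 for tomatoes, cucumbers, and onions, all expected to return to $1.00 in period 2, and a zero real rate.

```python
# Period-1 own rates for each commodity under each choice of numeraire.
# Spot prices are the period-1 dollar prices assumed in the example;
# forward (period-2) prices are all $1, and the real rate is zero.

spot = {"tomatoes": 0.90, "cucumbers": 1.10, "onions": 1.00}
forward = {"tomatoes": 1.00, "cucumbers": 1.00, "onions": 1.00}
real_rate = 0.0

def own_rate(commodity, numeraire):
    # Express the commodity's spot and forward prices in the chosen numeraire,
    # then solve (1 + i) = (1 + j) * [p(fwd)/p(spot)] for the own rate j.
    spot_in_num = spot[commodity] / spot[numeraire]
    fwd_in_num = forward[commodity] / forward[numeraire]
    return (1 + real_rate) * spot_in_num / fwd_in_num - 1

for numeraire in spot:
    rates = {c: round(own_rate(c, numeraire), 3) for c in spot}
    print(f"numeraire = {numeraire}: {rates}")

# numeraire = tomatoes: {'tomatoes': 0.0, 'cucumbers': 0.222, 'onions': 0.111}
# numeraire = cucumbers: {'tomatoes': -0.182, 'cucumbers': 0.0, 'onions': -0.091}
# numeraire = onions: {'tomatoes': -0.1, 'cucumbers': 0.1, 'onions': 0.0}
```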

Thus, although the own rates of the different commodities differ, and although the commodity own rates differ depending on the choice of numeraire, the cost of borrowing (and the return to lending) is equal regardless of which commodity and which numeraire is chosen. As I stated in my previous post, Sraffa believed that, by showing that own rates can diverge, he showed that Hayek’s concept of a natural rate of interest was a nonsense notion. However, the differences in own rates, as Fisher had already shown 36 years earlier, are purely nominal. The underlying real rate, under Sraffa’s own analysis, is independent of the own rates.

Moreover, as I pointed out in my previous post, though the point was made in the context of a confused exposition of own rates, whenever the own rate for a commodity is negative, there is an incentive to hold it now for sale in the next period at a higher price than it would fetch in the current period. It is therefore only possible to observe negative own rates on commodities that are costly to store. Only if the cost of holding a commodity is greater than its expected appreciation would it not be profitable to withhold the commodity from sale this period and to sell instead in the following period. The rate of appreciation of a commodity cannot exceed the cost of storing it (as a percentage of its price).
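Here is a minimal sketch in Python of that storage-cost bound. The spot and forward prices are the ones from the tomato example above; the storage-cost figures are arbitrary assumptions of my own, chosen only to show the two cases.

```python
# The storage-cost bound described above: a commodity's expected appreciation
# (its forward premium) cannot exceed the cost of storing it, since otherwise
# it would be withheld from sale this period and sold next period instead.
# Storage-cost figures are illustrative assumptions.

def appreciation_within_storage_bound(spot, forward, storage_cost_rate):
    """True if the expected appreciation, as a fraction of the current price,
    is no greater than the per-period storage cost (in the same units)."""
    expected_appreciation = forward / spot - 1
    return expected_appreciation <= storage_cost_rate

# Tomatoes at $0.90 spot and $1.00 forward imply roughly 11% expected appreciation
# (and hence, with a zero real rate, the -10% own rate in dollar terms).
print(appreciation_within_storage_bound(0.90, 1.00, storage_cost_rate=0.15))  # True:
# with storage costing 15% of value per period, the negative own rate can persist.
print(appreciation_within_storage_bound(0.90, 1.00, storage_cost_rate=0.02))  # False:
# with cheap storage, holding tomatoes back would be profitable, and the forward
# premium (and the negative own rate) would be arbitraged away.
```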

What do I conclude from all this? That neither Sraffa nor Hayek adequately understood Fisher. Sraffa seems to have argued that there would be multiple real own rates of interest in disequilibrium — or at least his discussion of own rates seems to suggest that that is what he thought — while Hayek failed to see that there could be multiple nominal own rates. Fisher provided a definitive exposition of the distinction between real and nominal rates that encompasses both own rates and money rates of interest.

A. C. Pigou, the great and devoted student of Alfred Marshall, and ultimately his successor at Cambridge, is supposed to have said “It’s all in Marshall.” Well, one could also say “it’s all in Fisher.” Keynes, despite going out of his way in Chapter 12 of the General Theory to criticize Fisher’s distinction between the real and nominal rates of interest, actually vindicated Fisher’s distinction in his exposition of own rates in Chapter 17 of the GT, providing a valuable extension of Fisher’s analysis, but apparently failing to see the connection between his discussion and Fisher’s, and instead crediting Sraffa for introducing the own-rate analysis, even as he undermined Sraffa’s ambiguous suggestion that real own rates could differ. Go figure.

How to Think about Own Rates of Interest

Phil Pilkington has responded to my post about the latest version of my paper (co-authored by Paul Zimmerman) on the Sraffa-Hayek debate about the natural rate of interest. For those of you who haven’t been following my posts on the subject, here’s a quick review. Almost three years ago I wrote a post refuting Sraffa’s argument that Hayek’s concept of the natural rate of interest is incoherent, there being a multiplicity of own rates of interest in a barter economy (Hayek’s benchmark for the rate of interest undisturbed by monetary influences), which makes it impossible to identify any particular own rate as the natural rate of interest.

Sraffa maintained that if there are many own rates of interest in a barter economy, none of them having a claim to priority over the others, then Hayek had no basis for singling out any particular one of them as the natural rate and holding it up as the benchmark rate to guide monetary policy. I pointed out that Ludwig Lachmann had answered Sraffa’s attack (about 20 years too late) by explaining that even though there could be many own rates for individual commodities, all own rates are related by the condition that the cost of borrowing in terms of all commodities would be equalized, differences in own rates reflecting merely differences in expected appreciation or depreciation of the different commodities. Different own rates are simply different nominal rates; there is a unique real own rate, a point demonstrated by Irving Fisher in 1896 in Appreciation and Interest.

Let me pause here for a moment to explain what is meant by an own rate of interest. It is simply the name for the rate of interest corresponding to a loan contracted in terms of a particular commodity, the borrower receiving the commodity now and repaying the lender with the same commodity when the term of the loan expires. Sraffa correctly noted that in equilibrium arbitrage would force the terms of such a loan (i.e., the own rate of interest) to equal the ratio of the current forward price of the commodity to its current spot price, buying spot and selling forward being essentially equivalent to borrowing and repaying.

Now what is tricky about Sraffa’s argument against Hayek is that he actually acknowledges at the beginning of his argument that in a stationary equilibrium, presumably meaning that prices remain at their current equilibrium levels over time, all own rates would be equal. In fact, if prices remain (and are expected to remain) constant period after period, the ratio of forward to spot prices would equal unity for all commodities, implying that the natural rate of interest would be zero. Sraffa did not make that point explicitly, but it seems to be a necessary implication of his analysis. (This implication seems to bear on an old controversy in the theory of capital and interest, which is whether the rate of interest would be positive in a stationary equilibrium with constant real income.) Schumpeter argued that the equilibrium rate of interest would be zero, and von Mises argued that it would be positive, because time preference, which implies that the rate of interest is necessarily always positive, is a kind of a priori praxeological law of nature, the sort of apodictic gibberish to which von Mises was regrettably predisposed. The own-rate analysis supports Schumpeter against Mises.

So to make the case against Hayek, Sraffa had to posit a change, a shift in demand from one product to another, that disrupts the pre-existing equilibrium. Here is the key passage from Sraffa:

Suppose there is a change in the distribution of demand between various commodities; immediately some will rise in price, and others will fall; the market will expect that, after a certain time, the supply of the former will increase, and the supply of the latter fall, and accordingly the forward price, for the date on which equilibrium is expected to be restored, will be below the spot price in the case of the former and above it in the case of the latter; in other words, the rate of interest on the former will be higher than on the latter. (p. 50)

This is a difficult passage, and in previous posts, and in my paper with Zimmerman, I did not try to parse this passage. But I am going to parse it now. Assume that demand shifts from tomatoes to cucumbers. In the original equilibrium, let the prices of both be $1 a pound. With a zero own rate of interest in terms of both tomatoes and cucumbers, you could borrow a pound of tomatoes today and discharge your debt by repaying the lender a pound of tomatoes at the expiration of the loan. However, after the demand shift, the price of tomatoes falls to, say, $0.90 a pound, and the price of cucumbers rises to, say, $1.10 a pound. Sraffa posits that the price changes are temporary, not because the demand shift is temporary, but because the supply curves of tomatoes and cucumbers are perfectly elastic at $1 a pound. However, supply does not adjust immediately, so Sraffa believes that there can be a temporary deviation from the long-run equilibrium prices of tomatoes and cucumbers.

The ratio of the forward prices to the spot prices tells you what the own rates are for tomatoes and cucumbers. For tomatoes, the ratio is 1/.9, implying an own rate of 11.1%. For cucumbers the ratio is 1/1.1, implying an own rate of -9.1%. Other prices have not changed, so all other own rates remain at 0. Having shown that own rates can diverge, Sraffa thinks that he has proven Hayek’s concept of a natural rate of interest to be a nonsense notion. He was mistaken.

There are at least two mistakes. First, the negative own rate on cucumbers simply means that no one will lend in terms of cucumbers for negative interest when other commodities allow lending at zero interest. It also means that no one will hold cucumbers in this period to sell at a lower price in the next period than the cucumbers would fetch in the current period. Cucumbers are a bad investment, promising a negative return; any lending and investing will be conducted in terms of some other commodity. The negative own rate on cucumbers signifies a kind of corner solution, reflecting the impossibility of transporting next period’s cucumbers into the present. If that were possible cucumber prices would be equal in the present and the future, and the cucumber own rate would be equal to all other own rates at zero. But the point is that if any lending takes place, it will be at a zero own rate.

Second, the positive own rate on tomatoes means that there is an incentive to lend in terms of tomatoes rather than lend in terms of other commodities. But as long as it is possible to borrow in terms of other commodities at a zero own rate, no one borrows in terms of tomatoes. Thus, if anyone wanted to lend in terms of tomatoes, he would have to reduce the rate on tomatoes to make borrowers indifferent between borrowing in terms of tomatoes and borrowing in terms of some other commodity. However, if tomatoes today can be held at zero cost to be sold at the higher price prevailing next period, currently produced tomatoes would be sold in the next period rather than sold today. So if there were no costs of holding tomatoes until the next period, the price of tomatoes in the next period would be no higher than the price in the current period. In other words, the forward price of tomatoes cannot exceed the current spot price by more than the cost of holding tomatoes until the next period. If the difference between the spot and the forward price reflects no more than the cost of holding tomatoes till the next period, then, as Keynes showed in chapter 17 of the General Theory, the own rates are indeed effectively equalized after appropriate adjustment for storage costs and expected appreciation.

Thus, it was Keynes who, having selected Sraffa to review Hayek’s Prices and Production in the Economic Journal, of which Keynes was then the editor, adapted Sraffa’s own-rate analysis in the General Theory, but did so in a fashion that, at least partially, rehabilitated the very natural-rate analysis that had been the object of Sraffa’s scorn in his review of Prices and Production. Keynes also rejected the natural-rate analysis, but he did so not because it is nonsensical, but because the natural rate is not independent of the level of employment. Keynes’s argument that the natural rate depends on the level of employment seems to me to be inconsistent with the idea that the IS curve is downward sloping. But I will have to think about that a bit and reread the relevant passage in the General Theory and perhaps revisit the point in a future post.

 UPDATE (07/28/14 13:02 EDT): Thanks to my commenters for pointing out that my own thinking about the own rate of interest was not quite right. I should have defined the own rate in terms of a real numeraire instead of $, which was a bit of awkwardness that I should have fixed before posting. I will try to publish a corrected version of this post later today or tomorrow. Sorry for posting without sufficient review and revision.

UPDATE (08/04/14 11:38 EDT): I hope to post the long-delayed sequel to this post later today. A number of personal issues took precedence over posting, but I also found it difficult to get clear on several minor points, which I hope that I have now resolved adequately. For example, I found that defining the own rate in terms of a real numeraire was not really the source of my problem with this post, though it was a useful exercise to work through. Anyway, stay tuned.

Who Is Grammatically Challenged? John Taylor or the Wall Street Journal Editorial Page?

Perhaps I will get around to commenting on John Taylor’s latest contribution to public discourse and economic enlightenment on the incomparable Wall Street Journal editorial page. And then again, perhaps not. We shall see.

In truth, there is really nothing much in the article that he has not already said about 500 times (or is it 500 thousand times?) before about “rule-based monetary policy.” But there was one notable feature about his piece, though I am not sure if it was put in there by him or by some staffer on the legendary editorial page at the Journal. And here it is, first the title followed by a teaser:

John Taylor’s Reply to Alan Blinder

The Fed’s ad hoc departures from rule-based monetary policy has hurt the economy.

Yes, believe it or not, that is exactly what it says: “The Fed’s ad hoc departures from rule-based monetary policy has [sic!] hurt the economy.”

Good grief. This is incompetence squared. The teaser was probably not written by Taylor, but one would think that he would at least read the final version before signing off on it.

UPDATE: David Henderson, an authoritative — and probably not overly biased — source, absolves John Taylor from grammatical malpractice, thereby shifting all blame to the Wall Street Journal editorial page.

Monetarism and the Great Depression

Last Friday, Scott Sumner posted a diatribe against the IS-LM model, triggered by a set of slides by Chris Foote of Harvard and the Boston Fed explaining how the effects of monetary policy can be analyzed using the IS-LM framework. What really annoys Scott is the following slide in which Foote compares the “spending (aka Keynesian) hypothesis” and the “money (aka Monetarist) hypothesis” as explanations for the Great Depression. I am also annoyed; whether more annoyed or less annoyed than Scott I can’t say, interpersonal comparisons of annoyance, like interpersonal comparisons of utility, being beyond the ken of economists. But our reasons for annoyance are a little different, so let me try to explore those reasons. But first, let’s look briefly at the source of our common annoyance.

[Slide: foote_81]

The “spending hypothesis” attributes the Great Depression to a sudden collapse of spending which, in turn, is attributed to a collapse of consumer confidence resulting from the 1929 stock-market crash and a collapse of investment spending occasioned by a collapse of business confidence. The cause of the collapse in consumer and business confidence is not really specified, but somehow it has to do with the unstable economic and financial situation that characterized the developed world in the wake of World War I. In addition there was, at least according to some accounts, a perverse fiscal response: cuts in government spending and increases in taxes to keep the budget in balance. The latter notion that fiscal policy was contractionary evokes a contemptuous response from Scott, more or less justified, because nominal government spending actually rose in 1930 and 1931 and spending in real terms continued to rise in 1932. But the key point is that government spending in those days was too meager to have made much difference; the spending hypothesis rises or falls on the notion that the trigger for the Great Depression was an autonomous collapse in private spending.

But what really gets Scott all bent out of shape is Foote’s commentary on the “money hypothesis.” In his first bullet point, Foote refers to the 25% decline in M1 between 1929 and 1933, suggesting that monetary policy was really, really tight, but in the next bullet point, Foote points out that if monetary policy was tight, implying a leftward shift in the LM curve, interest rates should have risen. Instead they fell. Moreover, Foote points out that, inasmuch as the price level fell by more than 25% between 1929 and 1933, the real value of the money supply actually increased, so it’s not even clear that there was a leftward shift in the LM curve. You can just feel Scott’s blood boiling:

What interests me is the suggestion that the “money hypothesis” is contradicted by various stylized facts. Interest rates fell.  The real quantity of money rose.  In fact, these two stylized facts are exactly what you’d expect from tight money.  The fact that they seem to contradict the tight money hypothesis does not reflect poorly on the tight money hypothesis, but rather the IS-LM model that says tight money leads to a smaller level of real cash balances and a higher level of interest rates.

To see the absurdity of IS-LM, just consider a monetary policy shock that no one could question—hyperinflation.  Wheelbarrows full of billion mark currency notes. Can we all agree that that would be “easy money?”  Good.  We also know that hyperinflation leads to extremely high interest rates and extremely low real cash balances, just the opposite of the prediction of the IS-LM model.  In contrast, Milton Friedman would tell you that really tight money leads to low interest rates and large real cash balances, exactly what we do see.

Scott is totally right, of course, to point out that the fall in interest rates and the increase in the real quantity of money do not contradict the “money hypothesis.” However, he is also being selective and unfair in making that criticism, because, in two slides following almost immediately after the one to which Scott takes such offense, Foote actually explains that the simple IS-LM analysis presented in the previous slide requires modification to take into account expected deflation, because the demand for money depends on the nominal rate of interest while the amount of investment spending depends on the real rate of interest, and shows how to do the modification. Here are the slides:

[Slide: foote_83]

[Slide: foote_84]

Thus, expected deflation raises the real rate of interest thereby shifting the IS curve to the left while leaving the LM curve where it was. Expected deflation therefore explains a fall in both nominal and real income as well as in the nominal rate of interest; it also explains an increase in the real rate of interest. Scott seems to be emotionally committed to the notion that the IS-LM model must lead to a misunderstanding of the effects of monetary policy, holding Foote up as an example of this confusion on the basis of the first of the slides, but Foote actually shows that IS-LM can be tweaked to accommodate a correct understanding of the dominant role of monetary policy in the Great Depression.
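To see the mechanics that Foote’s modified slides describe, here is a minimal sketch in Python of a linear IS-LM system in which investment depends on the real rate and money demand on the nominal rate. The parameter values are purely illustrative assumptions of my own, not Foote’s; the point is only the qualitative pattern: expected deflation lowers output and the nominal rate while raising the real rate.

```python
# Toy linear IS-LM with expected deflation.
# IS:  Y = A - b * r, with the real rate r = i - expected_inflation
# LM:  M/P = k * Y - h * i
# All parameter values are illustrative assumptions, not estimates.

def solve_is_lm(A, b, k, h, real_money, expected_inflation):
    # Substitute the LM relation i = (k*Y - M/P)/h into the IS relation
    # and solve for Y; then back out the nominal and real rates.
    Y = (A + b * real_money / h + b * expected_inflation) / (1 + b * k / h)
    i = (k * Y - real_money) / h
    r = i - expected_inflation
    return Y, i, r

params = dict(A=10, b=20, k=0.5, h=10, real_money=4)

print(solve_is_lm(**params, expected_inflation=0.0))    # (9.0, 0.05, 0.05)
print(solve_is_lm(**params, expected_inflation=-0.10))  # (8.0, 0.0, 0.1)
# With 10% expected deflation: output falls, the nominal rate falls to zero,
# and the real rate rises -- the pattern described in the paragraph above.
```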

The Great Depression was triggered by a deflationary scramble for gold associated with the uncoordinated restoration of the gold standard by the major European countries in the late 1920s, especially France and its insane central bank. On top of this, the Federal Reserve, succumbing to political pressure to stop “excessive” stock-market speculation, raised its discount rate to a near record 6.5% in early 1929, greatly amplifying the pressure on gold reserves, thereby driving up the value of gold, and causing expectations of the future price level to start dropping. It was thus a rise (both actual and expected) in the value of gold, not a reduction in the money supply, which was the source of the monetary shock that produced the Great Depression. The shock was administered without a reduction in the money supply, so there was no shift in the LM curve. IS-LM is not necessarily the best model with which to describe this monetary shock, but the basic story can be expressed in terms of the IS-LM model.

So, you ask, if I don’t think that Foote’s exposition of the IS-LM model seriously misrepresents what happened in the Great Depression, why did I say at the beginning of this post that Foote’s slides really annoy me? Well, the reason is simply that Foote seems to think that the only monetary explanation of the Great Depression is the Monetarist explanation of Milton Friedman: that the Great Depression was caused by an exogenous contraction in the US money supply. That explanation is wrong, theoretically and empirically.

What caused the Great Depression was an international disturbance to the value of gold, caused by the independent actions of a number of central banks, most notably the insane Bank of France, maniacally trying to convert all its foreign exchange reserves into gold, and the Federal Reserve, obsessed with suppressing a non-existent stock-market bubble on Wall Street. It only seems like a bubble with mistaken hindsight, because the collapse of prices was not the result of any inherent overvaluation of stock prices in October 1929, but of the combined policies of the insane Bank of France and the Fed, which wrecked the world economy. The decline in the nominal quantity of money in the US, the great bugaboo of Milton Friedman, was merely an epiphenomenon.

As Ron Batchelder and I have shown, Gustav Cassel and Ralph Hawtrey had diagnosed and explained the causes of the Great Depression fully a decade before it happened. Unfortunately, whenever people think of a monetary explanation of the Great Depression, they think of Milton Friedman, not Hawtrey and Cassel. Scott Sumner understands all this, he’s even written a book – a wonderful (but unfortunately still unpublished) book – about it. But he gets all worked up about IS-LM.

I, on the other hand, could not care less about IS-LM; it’s the idea that the monetary cause of the Great Depression was discovered by Milton Friedman that annoys the [redacted] out of me.

UPDATE: I posted this post prematurely before I finished editing it, so I apologize for any mistakes or omissions or confusing statements that appeared previously or that I haven’t found yet.

Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I have often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a pre-requisite for almost any theoretical paper to be taken seriously, and even considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz), and it led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, could be derived from any system of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6).

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, The Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.
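Stated very roughly, and in generic modern notation rather than Debreu’s own (the formulation in terms of an aggregate excess-demand function is my shorthand for the argument, not a quotation from the book), the result runs as follows:

Let z(p) be aggregate excess demand at the price vector p, restricted to the simplex Δ = {p ≥ 0 : Σ_i p_i = 1}. The neoclassical assumptions deliver a z(p) that is continuous, homogeneous of degree zero, and obeys Walras’s Law, p·z(p) = 0. A competitive equilibrium is a price vector p* in Δ with z(p*) ≤ 0 (and z_i(p*) = 0 for every good with p_i* > 0), and its existence follows from applying a fixed-point theorem (Brouwer or Kakutani) to a suitably constructed continuous mapping of Δ into itself.

The axiomatic apparatus earns its keep in guaranteeing the continuity and convexity properties on which the fixed-point argument depends.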

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Back again to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper, in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum, Popper provides an enlightening discussion of the historical development of the calculus despite its lack of a solid logical, axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
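For concreteness, the definition in question, the one Popper discusses in the passage quoted below, is the familiar limit of difference quotients, stated here in modern textbook form rather than quoted from Popper:

f′(x) = lim_{h→0} [f(x + h) - f(x)]/h

It is a recipe for computing the derivative as the limit of average rates of change over ever shorter intervals; on Popper’s account, it replaces, rather than explicates, the intuitive idea of an instantaneous rate of change that Newton and Leibniz had in mind.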

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their ‘dogmatic slumbers’ when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for this is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely, we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning”: they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya's discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotations from Popper, but I think it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other, far more authoritative, critics than I, like Mark Blaug and Richard Lipsey (also here), have complained about the insistence of modern macroeconomics on microfounded, axiomatized models, regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
