Archive for July, 2014

How to Think about Own Rates of Interest

Phil Pilkington has responded to my post about the latest version of my paper (co-authored by Paul Zimmerman) on the Sraffa-Hayek debate about the natural rate of interest. For those of you who haven’t been following my posts on the subject, here’s a quick review. Almost three years ago I wrote a post refuting Sraffa’s argument that Hayek’s concept of the natural rate of interest is incoherent, there being a multiplicity of own rates of interest in a barter economy (Hayek’s benchmark for the rate of interest undisturbed by monetary influences), which makes it impossible to identify any particular own rate as the natural rate of interest.

Sraffa maintained that if there are many own rates of interest in a barter economy, none of them having a claim to priority over the others, then Hayek had no basis for singling out any particular one of them as the natural rate and holding it up as the benchmark rate to guide monetary policy. I pointed out that Ludwig Lachmann had answered Sraffa’s attack (about 20 years too late) by explaining that even though there could be many own rates for individual commodities, all own rates are related by the condition that the cost of borrowing in terms of all commodities would be equalized, differences in own rates reflecting merely differences in expected appreciation or depreciation of the different commodities. Different own rates are simply different nominal rates; there is a unique real own rate, a point demonstrated by Irving Fisher in 1896 in Appreciation and Interest.

Let me pause here for a moment to explain what is meant by an own rate of interest. It is simply the name for the rate of interest corresponding to a loan contracted in terms of a particular commodity, the borrower receiving the commodity now and repaying the lender with the same commodity when the term of the loan expires. Sraffa correctly noted that in equilibrium arbitrage would force the terms of such a loan (i.e., the own rate of interest) to equal the ratio of the current forward price of the commodity to its current spot price, buying spot and selling forward being essentially equivalent to borrowing and repaying.
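
To make Lachmann's point concrete, here is a minimal numerical sketch (my own illustration, with made-up numbers; it is not taken from Lachmann or from the paper): commodities expected to appreciate carry correspondingly lower own rates, so that every own rate implies the same underlying real rate.

```python
# Hypothetical illustration of Lachmann's point: own rates differ only by
# expected appreciation or depreciation, so each commodity loan implies the
# same underlying real rate. All numbers are made up.

real_rate = 0.02  # the common real rate assumed in this example

# expected rate of price appreciation of each commodity over the loan period
expected_appreciation = {"wheat": 0.03, "copper": -0.01, "cloth": 0.00}

for commodity, a in expected_appreciation.items():
    # a commodity expected to appreciate carries a lower own rate ...
    own_rate = (1 + real_rate) / (1 + a) - 1
    # ... but lending in it yields the same real return once appreciation is added back
    implied_real = (1 + own_rate) * (1 + a) - 1
    print(f"{commodity}: own rate = {own_rate:+.2%}, implied real rate = {implied_real:+.2%}")
```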

Now what is tricky about Sraffa’s argument against Hayek is that he actually acknowledges at the beginning of his argument that in a stationary equilibrium, presumably meaning that prices remain at their current equilibrium levels over time, all own rates would be equal. In fact, if prices remain (and are expected to remain) constant period after period, the ratio of forward to spot prices would equal unity for all commodities, implying that the natural rate of interest would be zero. Sraffa did not make that point explicitly, but it seems to be a necessary implication of his analysis. (This implication seems to bear on an old controversy in the theory of capital and interest: whether the rate of interest would be positive in a stationary equilibrium with constant real income.) Schumpeter argued that the equilibrium rate of interest would be zero, and von Mises argued that it would be positive, because time preference implying that the rate of interest is necessarily always positive is a kind of a priori praxeological law of nature, the sort of apodictic gibberish to which von Mises was regrettably predisposed. The own-rate analysis supports Schumpeter against Mises.

So to make the case against Hayek, Sraffa had to posit a change, a shift in demand from one product to another, that disrupts the pre-existing equilibrium. Here is the key passage from Sraffa:

Suppose there is a change in the distribution of demand between various commodities; immediately some will rise in price, and others will fall; the market will expect that, after a certain time, the supply of the former will increase, and the supply of the latter fall, and accordingly the forward price, for the date on which equilibrium is expected to be restored, will be below the spot price in the case of the former and above it in the case of the latter; in other words, the rate of interest on the former will be higher than on the latter. (p. 50)

This is a difficult passage, and in previous posts, and in my paper with Zimmerman, I did not try to parse this passage. But I am going to parse it now. Assume that demand shifts from tomatoes to cucumbers. In the original equilibrium, let the prices of both be $1 a pound. With a zero own rate of interest in terms of both tomatoes and cucumbers, you could borrow a pound of tomatoes today and discharge your debt by repaying the lender a pound of tomatoes at the expiration of the loan. However, after the demand shift, the price of tomatoes falls to, say, $0.90 a pound, and the price of cucumbers rises to, say, $1.10 a pound. Sraffa posits that the price changes are temporary, not because the demand shift is temporary, but because the supply curves of tomatoes and cucumbers are perfectly elastic at $1 a pound. However, supply does not adjust immediately, so Sraffa believes that there can be a temporary deviation from the long-run equilibrium prices of tomatoes and cucumbers.

The ratio of the forward prices to the spot prices tells you what the own rates are for tomatoes and cucumbers. For tomatoes, the ratio is 1/.9, implying an own rate of 11.1%. For cucumbers the ratio is 1/1.1, implying an own rate of -9.1%. Other prices have not changed, so all other own rates remain at 0. Having shown that own rates can diverge, Sraffa thinks that he has proven Hayek’s concept of a natural rate of interest to be a nonsense notion. He was mistaken.
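
The arithmetic of the last two paragraphs can be laid out in a few lines; this sketch simply reproduces the ratios used in the text (the dollar-based framing that the update at the end of this post revisits):

```python
# Own-rate arithmetic from the tomato/cucumber example above. Spot prices
# reflect the demand shift; forward prices reflect the expected return to the
# $1 long-run equilibrium price.

spot = {"tomatoes": 0.90, "cucumbers": 1.10}
forward = {"tomatoes": 1.00, "cucumbers": 1.00}

for commodity in spot:
    ratio = forward[commodity] / spot[commodity]  # ratio used in the text
    own_rate = ratio - 1
    print(f"{commodity}: forward/spot = {ratio:.3f}, own rate = {own_rate:+.1%}")

# tomatoes: forward/spot = 1.111, own rate = +11.1%
# cucumbers: forward/spot = 0.909, own rate = -9.1%
# All other own rates remain at zero.
```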

There are at least two mistakes. First, the negative own rate on cucumbers simply means that no one will lend in terms of cucumbers for negative interest when other commodities allow lending at zero interest. It also means that no one will hold cucumbers in this period to sell at a lower price in the next period than the cucumbers would fetch in the current period. Cucumbers are a bad investment, promising a negative return; any lending and investing will be conducted in terms of some other commodity. The negative own rate on cucumbers signifies a kind of corner solution, reflecting the impossibility of transporting next period’s cucumbers into the present. If that were possible cucumber prices would be equal in the present and the future, and the cucumber own rate would be equal to all other own rates at zero. But the point is that if any lending takes place, it will be at a zero own rate.

Second, the positive own rate on tomatoes means that there is an incentive to lend in terms of tomatoes rather than lend in terms of other commodities. But as long as it is possible to borrow in terms of other commodities at a zero own rate, no one borrows in terms of tomatoes. Thus, if anyone wanted to lend in terms of tomatoes, he would have to reduce the rate on tomatoes to make borrowers indifferent between borrowing in terms of tomatoes and borrowing in terms of some other commodity. However, if tomatoes today can be held at zero cost to be sold at the higher price prevailing next period, currently produced tomatoes would be sold in the next period rather than sold today. So if there were no costs of holding tomatoes until the next period, the price of tomatoes in the next period would be no higher than the price in the current period. In other words, the forward price of tomatoes cannot exceed the current spot price by more than the cost of holding tomatoes until the next period. If the difference between the spot and the forward price reflects no more than the cost of holding tomatoes till the next period, then, as Keynes showed in chapter 17 of the General Theory, the own rates are indeed effectively equalized after appropriate adjustment for storage costs and expected appreciation.
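
As a rough illustration of the chapter 17 adjustment just described (stylized numbers of my own, not Keynes’s, and ignoring any liquidity premium), one can check that once expected appreciation and storage costs are netted out, the return to lending in each commodity comes out the same:

```python
# Stylized check of the adjustment described above: own rate plus expected
# appreciation minus storage cost should be equalized across commodities once
# arbitrage has done its work. All numbers are hypothetical.

commodities = {
    #             (own rate, expected appreciation, storage cost)
    "tomatoes":  (0.00,      0.05,                  0.05),
    "cucumbers": (0.00,      0.00,                  0.00),
    "wheat":     (0.03,     -0.01,                  0.02),
}

for name, (own_rate, appreciation, storage) in commodities.items():
    adjusted_return = own_rate + appreciation - storage
    print(f"{name}: adjusted return = {adjusted_return:+.2%}")
# Each adjusted return comes out to +0.00% in this example: the forward
# premium on tomatoes is capped by the cost of carrying them to the next period.
```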

Thus it was Keynes who, having selected Sraffa to review Hayek’s Prices and Production in the Economic Journal, of which Keynes was then the editor, adapted Sraffa’s own-rate analysis in the General Theory, but did so in a fashion that, at least partially, rehabilitated the very natural-rate analysis that had been the object of Sraffa’s scorn in his review of Prices and Production. Keynes also rejected the natural-rate analysis, but he did so not because it is nonsensical, but because the natural rate is not independent of the level of employment. Keynes’s argument that the natural rate depends on the level of employment seems to me to be inconsistent with the idea that the IS curve is downward sloping. But I will have to think about that a bit and reread the relevant passage in the General Theory and perhaps revisit the point in a future post.

 UPDATE (07/28/14 13:02 EDT): Thanks to my commenters for pointing out that my own thinking about the own rate of interest was not quite right. I should have defined the own rate in terms of a real numeraire instead of $, which was a bit of awkwardness that I should have fixed before posting. I will try to publish a corrected version of this post later today or tomorrow. Sorry for posting without sufficient review and revision.

UPDATE (08/04/14 11:38 EDT): I hope to post the long-delayed sequel to this post later today. A number of personal issues took precedence over posting, but I also found it difficult to get clear on several minor points, which I hope I have now resolved adequately. For example, I found that defining the own rate in terms of a real numeraire was not really the source of my problem with this post, though it was a useful exercise to work through. Anyway, stay tuned.

Who Is Grammatically Challenged? John Taylor or the Wall Street Journal Editorial Page?

Perhaps I will get around to commenting on John Taylor’s latest contribution to public discourse and economic enlightenment on the incomparable Wall Street Journal editorial page. And then again, perhaps not. We shall see.

In truth, there is really nothing much in the article that he has not already said about 500 times (or is it 500 thousand times?) before about “rule-based monetary policy.” But there was one notable feature about his piece, though I am not sure if it was put in there by him or by some staffer on the legendary editorial page at the Journal. And here it is, first the title, then the teaser:

John Taylor’s Reply to Alan Blinder

The Fed’s ad hoc departures from rule-based monetary policy has hurt the economy.

Yes, believe it or not, that is exactly what it says: “The Fed’s ad hoc departures from rule-based monetary policy has [sic!] hurt the economy.”

Good grief. This is incompetence squared. The teaser was probably not written by Taylor, but one would think that he would at least read the final version before signing off on it.

UPDATE: David Henderson, an authoritative — and probably not overly biased — source, absolves John Taylor from grammatical malpractice, thereby shifting all blame to the Wall Street Journal editorial page.

Monetarism and the Great Depression

Last Friday, Scott Sumner posted a diatribe against the IS-LM model, triggered by a set of slides by Chris Foote of Harvard and the Boston Fed explaining how the effects of monetary policy can be analyzed using the IS-LM framework. What really annoys Scott is the following slide, in which Foote compares the “spending (aka Keynesian) hypothesis” and the “money (aka Monetarist) hypothesis” as explanations for the Great Depression. I am also annoyed; whether more annoyed or less annoyed than Scott I can’t say, interpersonal comparisons of annoyance, like interpersonal comparisons of utility, being beyond the ken of economists. But our reasons for annoyance are a little different, so let me try to explore those reasons. But first, let’s look briefly at the source of our common annoyance.

[Foote’s slide comparing the two hypotheses]

The “spending hypothesis” attributes the Great Depression to a sudden collapse of spending which, in turn, is attributed to a collapse of consumer confidence resulting from the 1929 stock-market crash and a collapse of investment spending occasioned by a collapse of business confidence. The cause of the collapse in consumer and business confidence is not really specified, but somehow it has to do with the unstable economic and financial situation that characterized the developed world in the wake of World War I. In addition there was, at least according to some accounts, a perverse fiscal response: cuts in government spending and increases in taxes to keep the budget in balance. The latter notion that fiscal policy was contractionary evokes a contemptuous response from Scott, more or less justified, because nominal government spending actually rose in 1930 and 1931 and spending in real terms continued to rise in 1932. But the key point is that government spending in those days was too meager to have made much difference; the spending hypothesis rises or falls on the notion that the trigger for the Great Depression was an autonomous collapse in private spending.

But what really gets Scott all bent out of shape is Foote’s commentary on the “money hypothesis.” In his first bullet point, Foote refers to the 25% decline in M1 between 1929 and 1933, suggesting that monetary policy was really, really tight, but in the next bullet point, Foote points out that if monetary policy was tight, implying a leftward shift in the LM curve, interest rates should have risen. Instead they fell. Moreover, Foote points out that, inasmuch as the price level fell by more than 25% between 1929 and 1933, the real value of the money supply actually increased, so it’s not even clear that there was a leftward shift in the LM curve. You can just feel Scott’s blood boiling:

What interests me is the suggestion that the “money hypothesis” is contradicted by various stylized facts. Interest rates fell.  The real quantity of money rose.  In fact, these two stylized facts are exactly what you’d expect from tight money.  The fact that they seem to contradict the tight money hypothesis does not reflect poorly on the tight money hypothesis, but rather the IS-LM model that says tight money leads to a smaller level of real cash balances and a higher level of interest rates.

To see the absurdity of IS-LM, just consider a monetary policy shock that no one could question—hyperinflation.  Wheelbarrows full of billion mark currency notes. Can we all agree that that would be “easy money?”  Good.  We also know that hyperinflation leads to extremely high interest rates and extremely low real cash balances, just the opposite of the prediction of the IS-LM model.  In contrast, Milton Friedman would tell you that really tight money leads to low interest rates and large real cash balances, exactly what we do see.

Scott is totally right, of course, to point out that the fall in interest rates and the increase in the real quantity of money do not contradict the “money hypothesis.” However, he is also being selective and unfair in making that criticism: in two slides following almost immediately after the one to which Scott takes such offense, Foote explains that the simple IS-LM analysis presented in the previous slide requires modification to take into account expected deflation, because the demand for money depends on the nominal rate of interest while the amount of investment spending depends on the real rate of interest. He then shows how to make the modification. Here are the slides:

[Foote’s two slides modifying the IS-LM analysis for expected deflation]

Thus, expected deflation raises the real rate of interest thereby shifting the IS curve to the left while leaving the LM curve where it was. Expected deflation therefore explains a fall in both nominal and real income as well as in the nominal rate of interest; it also explains an increase in the real rate of interest. Scott seems to be emotionally committed to the notion that the IS-LM model must lead to a misunderstanding of the effects of monetary policy, holding Foote up as an example of this confusion on the basis of the first of the slides, but Foote actually shows that IS-LM can be tweaked to accommodate a correct understanding of the dominant role of monetary policy in the Great Depression.
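
The mechanics behind those two slides reduce to the Fisher relation between nominal and real rates. Here is a minimal sketch; the particular numbers are mine, not Foote’s:

```python
# Fisher relation behind the modified IS-LM analysis: the real rate relevant
# for investment is (approximately) the nominal rate minus expected inflation.
# With expected deflation, the real rate sits above the nominal rate, shifting
# the IS curve leftward. Illustrative numbers only.

nominal_rate = 0.02          # a low observed nominal rate (illustrative)
expected_inflation = -0.08   # expected deflation of 8% (illustrative)

real_rate = nominal_rate - expected_inflation
print(f"nominal rate = {nominal_rate:.1%}, real rate = {real_rate:.1%}")
# -> nominal rate = 2.0%, real rate = 10.0%: nominal rates can be falling
#    even while the real rate facing investors is rising.
```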

The Great Depression was triggered by a deflationary scramble for gold associated with the uncoordinated restoration of the gold standard by the major European countries in the late 1920s, especially France and its insane central bank. On top of this, the Federal Reserve, succumbing to political pressure to stop “excessive” stock-market speculation, raised its discount rate to a near-record 6.5% in early 1929, greatly amplifying the pressure on gold reserves, thereby driving up the value of gold and causing expectations of the future price level to start dropping. It was thus a rise (both actual and expected) in the value of gold, not a reduction in the money supply, that was the source of the monetary shock that produced the Great Depression. The shock was administered without a reduction in the money supply, so there was no shift in the LM curve. IS-LM is not necessarily the best model with which to describe this monetary shock, but the basic story can be expressed in terms of the IS-LM model.

So, you ask, if I don’t think that Foote’s exposition of the IS-LM model seriously misrepresents what happened in the Great Depression, why did I say at the beginning of this post that Foote’s slides really annoy me? Well, the reason is simply that Foote seems to think that the only monetary explanation of the Great Depression is the Monetarist explanation of Milton Friedman: that the Great Depression was caused by an exogenous contraction in the US money supply. That explanation is wrong, theoretically and empirically.

What caused the Great Depression was an international disturbance to the value of gold, caused by the independent actions of a number of central banks, most notably the insane Bank of France, maniacally trying to convert all its foreign exchange reserves into gold, and the Federal Reserve, obsessed with suppressing a non-existent stock-market bubble on Wall Street. It only seems like a bubble with mistaken hindsight: the collapse of prices came about not because of any inherent overvaluation in stock prices in October 1929, but because the combined policies of the insane Bank of France and the Fed wrecked the world economy. The decline in the nominal quantity of money in the US, the great bugaboo of Milton Friedman, was merely an epiphenomenon.

As Ron Batchelder and I have shown, Gustav Cassel and Ralph Hawtrey had diagnosed and explained the causes of the Great Depression fully a decade before it happened. Unfortunately, whenever people think of a monetary explanation of the Great Depression, they think of Milton Friedman, not Hawtrey and Cassel. Scott Sumner understands all this; he’s even written a book – a wonderful (but unfortunately still unpublished) book – about it. But he gets all worked up about IS-LM.

I, on the other hand, could not care less about IS-LM; it’s the idea that the monetary cause of the Great Depression was discovered by Milton Friedman that annoys the [redacted] out of me.

UPDATE: I posted this post prematurely before I finished editing it, so I apologize for any mistakes or omissions or confusing statements that appeared previously or that I haven’t found yet.

Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I’ve often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, and even to be considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz) and led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia paved the way for the desired reduction. But 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, could be derived from any system of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6):

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part; in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition), such an economy would have at least one, and possibly more than one, equilibrium.

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, or the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Again back to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum Popper provides an enlightening discussion of the historical development of calculus despite its lack of a solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
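
For reference, the modern definition Popper is about to discuss is the familiar limit of difference quotients:

$$
f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
$$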

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their “dogmatic slumbers” when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for his is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or more precisely we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning:” they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya’s discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey (also here) have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

A New Version of my Paper (with Paul Zimmerman) on the Hayek-Sraffa Debate Is Available on SSRN

One of the good things about having a blog (which I launched July 5, 2011) is that I get comments about what I am writing about from a lot of people that I don’t know. One of my most popular posts – it’s about the sixteenth most visited – was one I wrote, just a couple of months after starting the blog, about the Hayek-Sraffa debate on the natural rate of interest. Unlike many popular posts, which draw a lot of visitors initially from the very popular blogs that link to them but don’t keep drawing visitors afterwards, this post initially had only modest popularity, but it still keeps on drawing visitors.

That post also led to a collaboration between me and my FTC colleague Paul Zimmerman on a paper, “The Sraffa-Hayek Debate on the Natural Rate of Interest,” which I presented two years ago at the History of Economics Society conference. We have now finished our revisions of the version we wrote for the conference, and I have just posted the new version on SSRN and will be submitting it for publication later this week.

Here’s the abstract posted on the SSRN site:

Hayek’s Prices and Production, based on his hugely successful lectures at LSE in 1931, was the first English presentation of Austrian business-cycle theory, and established Hayek as a leading business-cycle theorist. Sraffa’s 1932 review of Prices and Production seems to have been instrumental in turning opinion against Hayek and the Austrian theory. A key element of Sraffa’s attack was that Hayek’s idea of a natural rate of interest, reflecting underlying real relationships, undisturbed by monetary factors, was, even from Hayek’s own perspective, incoherent, because, without money, there is a multiplicity of own rates, none of which can be uniquely identified as the natural rate of interest. Although Hayek’s response failed to counter Sraffa’s argument, Ludwig Lachmann later observed that Keynes’s treatment of own rates in Chapter 17 of the General Theory (itself a generalization of Fisher’s (1896) distinction between the real and nominal rates of interest) undercut Sraffa’s criticism. Own rates, Keynes showed, cannot deviate from each other by more than expected price appreciation plus the cost of storage and the commodity service flow, so that anticipated asset yields are equalized in intertemporal equilibrium. Thus, on Keynes’s analysis in the General Theory, the natural rate of interest is indeed well-defined. However, Keynes’s revision of Sraffa’s own-rate analysis provides only a partial rehabilitation of Hayek’s natural rate. There being no unique price level or rate of inflation in a barter system, no unique money natural rate of interest can be specified. Hayek implicitly was reasoning in terms of a constant nominal value of GDP, but barter relationships cannot identify any path for nominal GDP, let alone a constant one, as uniquely compatible with intertemporal equilibrium.

Aside from clarifying the conceptual basis of the natural-rate analysis and its relationship to Sraffa’s own-rate analysis, the paper also highlights the connection (usually overlooked but mentioned by Harald Hagemann in his 2008 article on the own rate of interest for the International Encyclopedia of the Social Sciences) between the own-rate analysis, in either its Sraffian or Keynesian versions, and Fisher’s early distinction between the real and nominal rates of interest. The conceptual identity between Fisher’s real and nominal distinction and Keynes’s own-rate analysis in the General Theory only magnifies the mystery associated with Keynes’s attack in chapter 13 of the General Theory on Fisher’s distinction between the real and the nominal rates of interest.

I also feel that the following discussion of Hayek’s role in developing the concept of intertemporal equilibrium, though tangential to the main topic of the paper, makes an important point about how to think about intertemporal equilibrium.

Perhaps the key analytical concept developed by Hayek in his early work on monetary theory and business cycles was the idea of an intertemporal equilibrium. Before Hayek, the idea of equilibrium had been reserved for a static, unchanging state in which economic agents continue doing what they have been doing. Equilibrium is the end state in which all adjustments to a set of initial conditions have been fully worked out. Hayek attempted to generalize this narrow equilibrium concept to make it applicable to the study of economic fluctuations – business cycles – in which he was engaged. Hayek chose to formulate a generalized equilibrium concept. He did not do so, as many have done, by simply adding a steady-state rate of growth to factor supplies and technology. Nor did Hayek define equilibrium in terms of any objective or measurable magnitudes. Rather, Hayek defined equilibrium as the mutual consistency of the independent plans of individual economic agents.

The potential consistency of such plans may be conceived of even if economic magnitudes do not remain constant or grow at a constant rate. Even if the magnitudes fluctuate, equilibrium is conceivable if the fluctuations are correctly foreseen. Correct foresight is not the same as perfect foresight. Perfect foresight is necessarily correct; correct foresight is only contingently correct. All that is necessary for equilibrium is that fluctuations (as reflected in future prices) be foreseen. It is not even necessary, as Hayek (1937) pointed out, that future price changes be foreseen correctly, provided that individual agents agree in their anticipations of future prices. If all agents agree in their expectations of future prices, then the individual plans formulated on the basis of those anticipations are, at least momentarily, equilibrium plans, conditional on the realization of those expectations, because the realization of those expectations would allow the plans formulated on the basis of those expectations to be executed without need for revision. What is required for intertemporal equilibrium is therefore a contingently correct anticipation by future agents of future prices, a contingent anticipation not the result of perfect foresight, but of contingently, even fortuitously, correct foresight. The seminal statement of this concept was given by Hayek in his classic 1937 paper, and the idea was restated by J. R. Hicks (1939), with no mention of Hayek, two years later in Value and Capital.

I made the following comment in a footnote to the penultimate sentence of the quotation:

By defining correct foresight as a contingent outcome rather than as an essential property of economic agents, Hayek elegantly avoided the problems that confounded Oskar Morgenstern ([1935] 1976) in his discussion of the meaning of equilibrium.

I look forward to reading your comments.

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
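
The arithmetic underlying this point is worth writing down explicitly (a sketch with hypothetical inflation rates; none of these numbers come from Cochrane’s article):

```python
# At the zero lower bound, the real rate is pinned at (minus) the expected
# inflation rate, so the inflation rate determines how negative the real rate
# can go. Hypothetical numbers for illustration.

nominal_rate = 0.0  # zero lower bound on the nominal rate

for expected_inflation in (0.02, 0.04):
    real_rate = nominal_rate - expected_inflation
    print(f"expected inflation = {expected_inflation:.0%} -> real rate = {real_rate:.0%}")
# With 2% expected inflation the real rate cannot fall below -2%; with 4% it
# can reach -4%. Nothing in the New Keynesian model itself fixes the 2%
# ceiling that Cochrane treats as given.
```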

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed they do, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka “secular stagnation”] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that the money will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders what the scientific basis is for not funding those projects.

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was, let us say, in 1983-85, when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?


