
Who Is Grammatically Challenged? John Taylor or the Wall Street Journal Editorial Page?

Perhaps I will get around to commenting on John Taylor’s latest contribution to public discourse and economic enlightenment on the incomparable Wall Street Journal editorial page. And then again, perhaps not. We shall see.

In truth, there is really nothing much in the article that he has not already said about 500 times (or is it 500 thousand times?) before about “rule-based monetary policy.” But there was one notable feature about his piece, though I am not sure if it was put in there by him or by some staffer on the legendary editorial page at the Journal. And here it is, first the title followed by a teaser:

John Taylor’s Reply to Alan Blinder

The Fed’s ad hoc departures from rule-based monetary policy has hurt the economy.

Yes, believe it or not, that is exactly what it says: “The Fed’s ad hoc departures from rule-based monetary policy has [sic!] hurt the economy.”

Good grief. This is incompetence squared. The teaser was probably not written by Taylor, but one would think that he would at least read the final version before signing off on it.

UPDATE: David Henderson, an authoritative — and probably not overly biased — source, absolves John Taylor from grammatical malpractice, thereby shifting all blame to the Wall Street Journal editorial page.
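For reference, the “rule-based monetary policy” at issue is Taylor’s own 1993 rule, which can be sketched in a few lines. The 2% equilibrium real rate and 2% inflation target are the illustrative values from Taylor’s original paper, not anything taken from the Journal piece.

```python
# A sketch of the classic Taylor (1993) rule. The 2% equilibrium real
# rate and 2% inflation target are Taylor's original illustrative values.

def taylor_rule(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Prescribed nominal federal-funds rate; all arguments in percent."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

print(taylor_rule(inflation=2.0, output_gap=0.0))   # 4.0 -- the benchmark case
print(taylor_rule(inflation=4.0, output_gap=-1.0))  # 6.5 -- tighten when inflation runs hot
```

The rule’s appeal to its advocates is precisely that it is mechanical: given inflation and the output gap, the prescribed rate follows with no discretion.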

Monetarism and the Great Depression

Last Friday, Scott Sumner posted a diatribe against the IS-LM model, triggered by a set of slides in which Chris Foote of Harvard and the Boston Fed explains how the effects of monetary policy can be analyzed using the IS-LM framework. What really annoys Scott is the following slide, in which Foote compares the “spending (aka Keynesian) hypothesis” and the “money (aka Monetarist) hypothesis” as explanations for the Great Depression. I am also annoyed; whether more annoyed or less annoyed than Scott I can’t say, interpersonal comparisons of annoyance, like interpersonal comparisons of utility, being beyond the ken of economists. But our reasons for annoyance are a little different, so let me try to explore them. But first, let’s look briefly at the source of our common annoyance.

[slide foote_81]

The “spending hypothesis” attributes the Great Depression to a sudden collapse of spending, which, in turn, is attributed to a collapse of consumer confidence resulting from the 1929 stock-market crash and a collapse of investment spending occasioned by a collapse of business confidence. The cause of the collapse in consumer and business confidence is not really specified, but somehow it has to do with the unstable economic and financial situation that characterized the developed world in the wake of World War I. In addition there was, at least according to some accounts, a perverse fiscal response: cuts in government spending and increases in taxes to keep the budget in balance. The latter notion, that fiscal policy was contractionary, evokes a more or less justified contemptuous response from Scott, because nominal government spending actually rose in 1930 and 1931, and spending in real terms continued to rise in 1932. But the key point is that government spending in those days was too meager to have made much difference; the spending hypothesis stands or falls on the notion that the trigger for the Great Depression was an autonomous collapse in private spending.

But what really gets Scott all bent out of shape is Foote’s commentary on the “money hypothesis.” In his first bullet point, Foote refers to the 25% decline in M1 between 1929 and 1933, suggesting that monetary policy was really, really tight, but in the next bullet point, Foote points out that if monetary policy was tight, implying a leftward shift in the LM curve, interest rates should have risen. Instead they fell. Moreover, Foote points out that, inasmuch as the price level fell by more than 25% between 1929 and 1933, the real value of the money supply actually increased, so it’s not even clear that there was a leftward shift in the LM curve. You can just feel Scott’s blood boiling:

What interests me is the suggestion that the “money hypothesis” is contradicted by various stylized facts. Interest rates fell.  The real quantity of money rose.  In fact, these two stylized facts are exactly what you’d expect from tight money.  The fact that they seem to contradict the tight money hypothesis does not reflect poorly on the tight money hypothesis, but rather the IS-LM model that says tight money leads to a smaller level of real cash balances and a higher level of interest rates.

To see the absurdity of IS-LM, just consider a monetary policy shock that no one could question—hyperinflation.  Wheelbarrows full of billion mark currency notes. Can we all agree that that would be “easy money?”  Good.  We also know that hyperinflation leads to extremely high interest rates and extremely low real cash balances, just the opposite of the prediction of the IS-LM model.  In contrast, Milton Friedman would tell you that really tight money leads to low interest rates and large real cash balances, exactly what we do see.

Scott is totally right, of course, to point out that the fall in interest rates and the increase in the real quantity of money do not contradict the “money hypothesis.” However, he is also being selective and unfair in making that criticism, because, in two slides following almost immediately after the one that gives Scott such offense, Foote explains that the simple IS-LM analysis presented in the previous slide must be modified to take expected deflation into account, because the demand for money depends on the nominal rate of interest while the amount of investment spending depends on the real rate of interest, and he shows how to make the modification. Here are the slides:

[slide foote_83]

[slide foote_84]

Thus, expected deflation raises the real rate of interest, thereby shifting the IS curve to the left while leaving the LM curve where it was. Expected deflation therefore explains a fall in both nominal and real income, as well as in the nominal rate of interest; it also explains an increase in the real rate of interest. Scott seems to be emotionally committed to the notion that the IS-LM model must lead to a misunderstanding of the effects of monetary policy, and he holds Foote up as an example of this confusion on the basis of the first of the slides, but Foote actually shows that IS-LM can be tweaked to accommodate a correct understanding of the dominant role of monetary policy in the Great Depression.
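The mechanism can be checked with a few lines of arithmetic. Below is a minimal numerical sketch, with all functional forms and parameter values assumed purely for illustration (this is not Foote’s model): a linear IS curve in which spending depends on the real rate and an LM curve in which money demand depends on the nominal rate. A drop in expected inflation then lowers income and the nominal rate while raising the real rate, just as described above.

```python
# Minimal linear IS-LM sketch with expected deflation. All functional
# forms and parameter values are illustrative assumptions, not anything
# taken from Foote's slides.

def is_lm_solve(pi_e, a=10.0, b=1.0, c=1.0, d=1.0, real_balances=5.0):
    """Solve IS: y = a - b*(i - pi_e) and LM: real_balances = c*y - d*i.

    Investment depends on the real rate r = i - pi_e, while money demand
    depends on the nominal rate i. Returns (income, nominal rate, real rate).
    """
    # Substitute i = pi_e + (a - y)/b from the IS curve into the LM curve.
    y = (real_balances + d * pi_e + d * a / b) / (c + d / b)
    i = pi_e + (a - y) / b
    return y, i, i - pi_e

y0, i0, r0 = is_lm_solve(pi_e=0.0)    # no expected deflation
y1, i1, r1 = is_lm_solve(pi_e=-2.0)   # 2% expected deflation

print(y0, i0, r0)  # 7.5 2.5 2.5
print(y1, i1, r1)  # 6.5 1.5 3.5 -- income and nominal rate fall, real rate rises
```

The LM curve never shifts in this exercise: the entire contraction runs through the leftward shift of the IS curve induced by the higher real rate.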

The Great Depression was triggered by a deflationary scramble for gold associated with the uncoordinated restoration of the gold standard by the major European countries in the late 1920s, especially France and its insane central bank. On top of this, the Federal Reserve, succumbing to political pressure to stop “excessive” stock-market speculation, raised its discount rate to a near-record 6.5% in early 1929, greatly amplifying the pressure on gold reserves, thereby driving up the value of gold, and causing expectations of the future price level to start dropping. It was thus a rise (both actual and expected) in the value of gold, not a reduction in the money supply, that was the source of the monetary shock that produced the Great Depression. The shock was administered without a reduction in the money supply, so there was no shift in the LM curve. IS-LM is not necessarily the best model with which to describe this monetary shock, but the basic story can be expressed in terms of the IS-LM model.

So, you ask, if I don’t think that Foote’s exposition of the IS-LM model seriously misrepresents what happened in the Great Depression, why did I say at the beginning of this post that Foote’s slides really annoy me? Well, the reason is simply that Foote seems to think that the only monetary explanation of the Great Depression is the Monetarist explanation of Milton Friedman: that the Great Depression was caused by an exogenous contraction in the US money supply. That explanation is wrong, theoretically and empirically.

What caused the Great Depression was an international disturbance to the value of gold, caused by the independent actions of a number of central banks, most notably the insane Bank of France, maniacally trying to convert all its foreign exchange reserves into gold, and the Federal Reserve, obsessed with suppressing a non-existent stock-market bubble on Wall Street. It only seems like a bubble in mistaken hindsight: the collapse of prices was not the result of any inherent overvaluation of stock prices in October 1929; it occurred because the combined policies of the insane Bank of France and the Fed wrecked the world economy. The decline in the nominal quantity of money in the US, the great bugaboo of Milton Friedman, was merely an epiphenomenon.

As Ron Batchelder and I have shown, Gustav Cassel and Ralph Hawtrey had diagnosed and explained the causes of the Great Depression fully a decade before it happened. Unfortunately, whenever people think of a monetary explanation of the Great Depression, they think of Milton Friedman, not Hawtrey and Cassel. Scott Sumner understands all this, he’s even written a book – a wonderful (but unfortunately still unpublished) book – about it. But he gets all worked up about IS-LM.

I, on the other hand, could not care less about IS-LM; it’s the idea that the monetary cause of the Great Depression was discovered by Milton Friedman that annoys the [redacted] out of me.

UPDATE: I posted this post prematurely before I finished editing it, so I apologize for any mistakes or omissions or confusing statements that appeared previously or that I haven’t found yet.

Another Complaint about Modern Macroeconomics

In discussing modern macroeconomics, I’ve often mentioned my discomfort with a narrow view of microfoundations, but I haven’t commented very much on another disturbing feature of modern macro: the requirement that theoretical models be spelled out fully in axiomatic form. The rhetoric of axiomatization has had sweeping success in economics, making axiomatization a prerequisite for almost any theoretical paper to be taken seriously, or even considered for publication in a reputable economics journal.

The idea that a good scientific theory must be derived from a formal axiomatic system has little if any foundation in the methodology or history of science. Nevertheless, it has become almost an article of faith in modern economics. I am not aware, but would be interested to know, whether, and if so how widely, this misunderstanding has been propagated in other (purportedly) empirical disciplines. The requirement of the axiomatic method in economics betrays a kind of snobbishness and (I use this word advisedly, see below) pedantry, resulting, it seems, from a misunderstanding of good scientific practice.

Before discussing the situation in economics, I would note that axiomatization did not become a major issue for mathematicians until late in the nineteenth century (though demands – luckily ignored for the most part – for logical precision followed immediately upon the invention of the calculus by Newton and Leibniz), and it led ultimately to the publication of the great work of Russell and Whitehead, Principia Mathematica, whose goal was to show that all of mathematics could be derived from the axioms of pure logic. This is yet another example of an unsuccessful reductionist attempt, though it seemed for a while that the Principia had paved the way for the desired reduction. But 20 years after the Principia was published, Kurt Gödel proved his famous incompleteness theorem, showing that, as a matter of pure logic, not even all the valid propositions of arithmetic, much less all of mathematics, can be derived from any system of axioms. This doesn’t mean that trying to achieve a reduction of a higher-level discipline to another, deeper discipline is not a worthy objective, but it certainly does mean that one cannot just dismiss, out of hand, a discipline simply because all of its propositions are not deducible from some set of fundamental propositions. Insisting on reduction as a prerequisite for scientific legitimacy is not a scientific attitude; it is merely a form of obscurantism.

As far as I know, which admittedly is not all that far, the only empirical science which has been axiomatized to any significant extent is theoretical physics. In his famous list of 23 unsolved mathematical problems, the great mathematician David Hilbert included the following (number 6):

Mathematical Treatment of the Axioms of Physics. The investigations on the foundations of geometry suggest the problem: To treat in the same manner, by means of axioms, those physical sciences in which already today mathematics plays an important part, in the first rank are the theory of probabilities and mechanics.

As to the axioms of the theory of probabilities, it seems to me desirable that their logical investigation should be accompanied by a rigorous and satisfactory development of the method of mean values in mathematical physics, and in particular in the kinetic theory of gases. . . . Boltzmann’s work on the principles of mechanics suggests the problem of developing mathematically the limiting processes, there merely indicated, which lead from the atomistic view to the laws of motion of continua.

The point that I want to underscore here is that axiomatization was supposed to ensure that there was an adequate logical underpinning for theories (i.e., probability and the kinetic theory of gases) that had already been largely worked out. Thus, Hilbert proposed axiomatization not as a method of scientific discovery, but as a method of checking for hidden errors and problems. Error checking is certainly important for science, but it is clearly subordinate to the creation and empirical testing of new and improved scientific theories.

The fetish for axiomatization in economics can largely be traced to Gerard Debreu’s great work, The Theory of Value: An Axiomatic Analysis of Economic Equilibrium, in which Debreu, building on his own work and that of Kenneth Arrow, presented a formal description of a decentralized competitive economy with both households and business firms, and proved that, under the standard assumptions of neoclassical theory (notably diminishing marginal rates of substitution in consumption and production and perfect competition) such an economy would have at least one, and possibly more than one, equilibrium.
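To make the existence result concrete, here is a toy sketch, in no way comparable to Debreu’s general axiomatic proof: a two-good, two-consumer Cobb-Douglas exchange economy in which the market-clearing price is found by bisection on excess demand. All preferences and endowments below are invented purely for illustration.

```python
# Toy two-good, two-consumer Cobb-Douglas exchange economy -- a vastly
# simplified illustration (not Debreu's proof) of the existence of a
# competitive equilibrium price at which excess demand is zero.
# All preference shares and endowments are arbitrary assumptions.

def excess_demand_x(p, consumers):
    """Aggregate excess demand for good x at price p (good y is numeraire)."""
    z = 0.0
    for share, end_x, end_y in consumers:
        wealth = p * end_x + end_y
        z += share * wealth / p - end_x   # Cobb-Douglas demand minus endowment
    return z

def find_equilibrium(consumers, lo=1e-6, hi=1e6, tol=1e-12):
    """Bisect on price; works because Cobb-Douglas excess demand is monotone."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess_demand_x(mid, consumers) > 0:
            lo = mid   # excess demand for x: its price must rise
        else:
            hi = mid
    return 0.5 * (lo + hi)

# (expenditure share on x, endowment of x, endowment of y)
consumers = [(0.6, 1.0, 0.0), (0.3, 0.0, 1.0)]
p_star = find_equilibrium(consumers)
print(round(p_star, 6))   # 0.75 -- at this price the market for x clears
```

Debreu’s achievement, of course, was to prove existence without any such convenient functional forms, using only convexity and continuity assumptions; the toy economy merely shows what the theorem asserts.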

A lot of effort subsequently went into gaining a better understanding of the necessary and sufficient conditions under which an equilibrium exists, and when that equilibrium would be unique and Pareto optimal. The subsequent work was then brilliantly summarized and extended in another great work, General Competitive Analysis by Arrow and Frank Hahn. Unfortunately, those two books, paragons of the axiomatic method, set a bad example for the future development of economic theory, which embarked on a needless and counterproductive quest for increasing logical rigor instead of empirical relevance.

A few months ago, I wrote a review of Kartik Athreya’s book Big Ideas in Macroeconomics. One of the arguments of Athreya’s book that I didn’t address was his defense of modern macroeconomics against the complaint that modern macroeconomics is too mathematical. Athreya is not responsible for the reductionist and axiomatic fetishes of modern macroeconomics, but he faithfully defends them against criticism. So I want to comment on a few paragraphs in which Athreya dismisses criticism of formalism and axiomatization.

Natural science has made significant progress by proceeding axiomatically and mathematically, and whether or not we [economists] will achieve this level of precision for any unit of observation in macroeconomics, it is likely to be the only rational alternative.

First, let me observe that axiomatization is not the same as using mathematics to solve problems. Many problems in economics cannot easily be solved without using mathematics, and sometimes it is useful to solve a problem in a few different ways, each way potentially providing some further insight into the problem not provided by the others. So I am not at all opposed to the use of mathematics in economics. However, the choice of tools to solve a problem should bear some reasonable relationship to the problem at hand. A good economist will understand what tools are appropriate to the solution of a particular problem. While mathematics has clearly been enormously useful to the natural sciences and to economics in solving problems, there are very few scientific advances that can be ascribed to axiomatization. Axiomatization was vital in proving the existence of equilibrium, but substantive refutable propositions about real economies, e.g., the Heckscher-Ohlin Theorem, or the Factor-Price Equalization Theorem, or the law of comparative advantage, were not discovered or empirically tested by way of axiomatization. Athreya talks about economics achieving the “level of precision” achieved by natural science, but the concept of precision is itself hopelessly imprecise, and to set precision up as an independent goal makes no sense. Athreya continues:

In addition to these benefits from the systematic [i.e. axiomatic] approach, there is the issue of clarity. Lowering mathematical content in economics represents a retreat from unambiguous language. Once mathematized, words in any given model cannot ever mean more than one thing. The unwillingness to couch things in such narrow terms (usually for fear of “losing something more intelligible”) has, in the past, led to a great deal of essentially useless discussion.

Athreya writes as if the only source of ambiguity is imprecise language. That just isn’t so. Is unemployment voluntary or involuntary? Athreya actually discusses the question intelligently on p. 283, in the context of search models of unemployment, but I don’t think that he could have provided any insight into that question with a purely formal, symbolic treatment. Again back to Athreya:

The plaintive expressions of “fear of losing something intangible” are concessions to the forces of muddled thinking. The way modern economics gets done, you cannot possibly not know exactly what the author is assuming – and to boot, you’ll have a foolproof way of checking whether their claims of what follows from these premises is actually true or not.

So let me juxtapose this brief passage from Athreya with a rather longer passage from Karl Popper in which he effectively punctures the fallacies underlying the specious claims made on behalf of formalism and against ordinary language. The extended quotations are from an addendum titled “Critical Remarks on Meaning Analysis” (pp. 261-77) to chapter IV of Realism and the Aim of Science (volume 1 of the Postscript to the Logic of Scientific Discovery). In this addendum, Popper begins by making the following three claims:

1 What-is? questions, such as What is Justice? . . . are always pointless – without philosophical or scientific interest; and so are all answers to what-is? questions, such as definitions. It must be admitted that some definitions may sometimes be of help in answering other questions: urgent questions which cannot be dismissed: genuine difficulties which may have arisen in science or in philosophy. But what-is? questions as such do not raise this kind of difficulty.

2 It makes no difference whether a what-is question is raised in order to inquire into the essence or into the nature of a thing, or whether it is raised in order to inquire into the essential meaning or into the proper use of an expression. These kinds of what-is questions are fundamentally the same. Again, it must be admitted that an answer to a what-is question – for example, an answer pointing out distinctions between two meanings of a word which have often been confused – may not be without point, provided the confusion led to serious difficulties. But in this case, it is not the what-is question which we are trying to solve; we hope rather to resolve certain contradictions that arise from our reliance upon somewhat naïve intuitive ideas. (The . . . example discussed below – that of the ideas of a derivative and of an integral – will furnish an illustration of this case.) The solution may well be the elimination (rather than the clarification) of the naïve idea. But an answer to . . . a what-is question is never fruitful. . . .

3 The problem, more especially, of replacing an “inexact” term by an “exact” one – for example, the problem of giving a definition in “exact” or “precise” terms – is a pseudo-problem. It depends essentially upon the inexact and imprecise terms “exact” and “precise.” These are most misleading, not only because they strongly suggest that there exists what does not exist – absolute exactness or precision – but also because they are emotionally highly charged: under the guise of scientific character and of scientific objectivity, they suggest that precision or exactness is something superior, a kind of ultimate value, and that it is wrong, or unscientific, or muddle-headed, to use inexact terms (as it is indeed wrong not to speak as lucidly and simply as possible). But there is no such thing as an “exact” term, or terms made “precise” by “precise definitions.” Also, a definition must always use undefined terms in its definiens (since otherwise we should get involved in an infinite regress or in a circle); and if we have to operate with a number of undefined terms, it hardly matters whether we use a few more. Of course, if a definition helps to solve a genuine problem, the situation is different; and some problems cannot be solved without an increase of precision. Indeed, this is the only way in which we can reasonably speak of precision: the demand for precision is empty, unless it is raised relative to some requirements that arise from our attempts to solve a definite problem. (pp. 261-63)

Later in his addendum Popper provides an enlightening discussion of the historical development of calculus despite its lack of solid logical axiomatic foundation. The meaning of an infinitesimal or a derivative was anything but precise. It was, to use Athreya’s aptly chosen term, a muddle. Mathematicians even came up with a symbol for the derivative. But they literally had no precise idea of what they were talking about. When mathematicians eventually came up with a definition for the derivative, the definition did not clarify what they were talking about; it just provided a particular method of calculating what the derivative would be. However, the absence of a rigorous and precise definition of the derivative did not prevent mathematicians from solving some enormously important practical problems, thereby helping to change the world and our understanding of it.
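The point that the limit definition supplies a method of calculation rather than a meaning can be seen in a few lines; the function and evaluation point below are arbitrary illustrative choices.

```python
# A minimal sketch of the difference quotient (f(x+h) - f(x)) / h, the
# calculating device behind the limit "definition" of the derivative.
# The function x**2 and the point x = 3 are arbitrary illustrative choices.

def difference_quotient(f, x, h):
    """Average rate of change of f over the interval [x, x + h]."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2          # f'(x) = 2x, so the true slope at x = 3 is 6
for h in (1.0, 0.1, 0.01, 0.001):
    print(h, round(difference_quotient(f, 3.0, h), 6))
# The quotients 7.0, 6.1, 6.01, 6.001 approach 6 but never equal it:
# the limit is not itself any of the calculated average slopes.
```

This is exactly Popper’s contrast below: each computed quotient is an average slope over an interval, while the derivative itself, the slope at the point, is only ever approached by the sequence.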

The modern history of the problem of the foundations of mathematics is largely, it has been asserted, the history of the “clarification” of the fundamental ideas of the differential and integral calculus. The concept of a derivative (the slope of a curve, or the rate of increase of a function) has been made “exact” or “precise” by defining it as the limit of the quotient of differences (given a differentiable function); and the concept of an integral (the area or “quadrature” of a region enclosed by a curve) has likewise been “exactly defined”. . . . Attempts to eliminate the contradictions in this field constitute not only one of the main motives of the development of mathematics during the last hundred or even two hundred years, but they have also motivated modern research into the “foundations” of the various sciences and, more particularly, the modern quest for precision or exactness. “Thus mathematicians,” Bertrand Russell says, writing about one of the most important phases of this development, “were only awakened from their “dogmatic slumbers” when Weierstrass and his followers showed that many of their most cherished propositions are in general false. Macaulay, contrasting the certainty of mathematics with the uncertainty of philosophy, asks who ever heard of a reaction against Taylor’s theorem. If he had lived now, he himself might have heard of such a reaction, for his is precisely one of the theorems which modern investigations have overthrown. Such rude shocks to mathematical faith have produced that love of formalism which appears, to those who are ignorant of its motive, to be mere outrageous pedantry.”

It would perhaps be too much to read into this passage of Russell’s his agreement with a view which I hold to be true: that without “such rude shocks” – that is to say, without the urgent need to remove contradictions – the love of formalism is indeed “mere outrageous pedantry.” But I think that Russell does convey his view that without an urgent need, an urgent problem to be solved, the mere demand for precision is indefensible.

But this is only a minor point. My main point is this. Most people, including mathematicians, look upon the definition of the derivative, in terms of limits of sequences, as if it were a definition in the sense that it analyses or makes precise, or “explicates,” the intuitive meaning of the definiendum – of the derivative. But this widespread belief is mistaken. . . .

Newton and Leibniz and their successors did not deny that a derivative, or an integral, could be calculated as a limit of certain sequences . . . . But they would not have regarded these limits as possible definitions, because they do not give the meaning, the idea, of a derivative or an integral.

For the derivative is a measure of a velocity, or a slope of a curve. Now the velocity of a body at a certain instant is something real – a concrete (relational) attribute of that body at that instant. By contrast the limit of a sequence of average velocities is something highly abstract – something that exists only in our thoughts. The average velocities themselves are unreal. Their unending sequence is even more so; and the limit of this unending sequence is a purely mathematical construction out of these unreal entities. Now it is intuitively quite obvious that this limit must numerically coincide with the velocity, and that, if the limit can be calculated, we can thereby calculate the velocity. But according to the views of Newton and his contemporaries, it would be putting the cart before the horse were we to define the velocity as being identical with this limit, rather than as a real state of the body at a certain instant, or at a certain point, of its track – to be calculated by any mathematical contrivance we may be able to think of.

The same holds of course for the slope of a curve in a given point. Its measure will be equal to the limit of a sequence of measures of certain other average slopes (rather than actual slopes) of this curve. But it is not, in its proper meaning or essence, a limit of a sequence: the slope is something we can sometimes actually draw on paper, and construct with compasses and rulers, while a limit is in essence something abstract, rarely actually reached or realized, but only approached, nearer and nearer, by a sequence of numbers. . . .

Or as Berkeley put it “. . . however expedient such analogies or such expressions may be found for facilitating the modern quadratures, yet we shall not find any light given us thereby into the original real nature of fluxions considered in themselves.” Thus mere means for facilitating our calculations cannot be considered as explications or definitions.

This was the view of all mathematicians of the period, including Newton and Leibniz. If we now look at the modern point of view, then we see that we have completely given up the idea of definition in the sense in which it was understood by the founders of the calculus, as well as by Berkeley. We have given up the idea of a definition which explains the meaning (for example of the derivative). This fact is veiled by our retaining the old symbol of “definition” for some equivalences which we use, not to explain the idea or the essence of a derivative, but to eliminate it. And it is veiled by our retention of the name “differential quotient” or “derivative,” and the old symbol dy/dx which once denoted an idea which we have now discarded. For the name, and the symbol, now have no function other than to serve as labels for the definiens – the limit of a sequence.

Thus we have given up “explication” as a bad job. The intuitive idea, we found, led to contradictions. But we can solve our problems without it, retaining the bulk of the technique of calculation which originally was based upon the intuitive idea. Or, more precisely, we retain only this technique, as far as it was sound, and eliminate the idea with its help. The derivative and the integral are both eliminated; they are replaced, in effect, by certain standard methods of calculating limits. (pp. 266-70)

Not only have the original ideas of the founders of calculus been eliminated, because they ultimately could not withstand logical scrutiny, but a premature insistence on logical precision would have had disastrous consequences for the ultimate development of calculus.

It is fascinating to consider that this whole admirable development might have been nipped in the bud (as in the days of Archimedes) had the mathematicians of the day been more sensitive to Berkeley’s demand – in itself quite reasonable – that we should strictly adhere to the rules of logic, and to the rule of always speaking sense.

We now know that Berkeley was right when, in The Analyst, he blamed Newton . . . for obtaining . . . mathematical results in the theory of fluxions or “in the calculus differentialis” by illegitimate reasoning. And he was completely right when he indicated that [his] symbols were without meaning. “Nothing is easier,” he wrote, “than to devise expressions and notations, for fluxions and infinitesimals of the first, second, third, fourth, and subsequent orders. . . . These expressions indeed are clear and distinct, and the mind finds no difficulty in conceiving them to be continued beyond any assignable bounds. But if . . . we look underneath, if, laying aside the expressions, we set ourselves attentively to consider the things themselves which are supposed to be expressed or marked thereby, we shall discover much emptiness, darkness, and confusion . . . , direct impossibilities, and contradictions.”

But the mathematicians of his day did not listen to Berkeley. They got their results, and they were not afraid of contradictions as long as they felt that they could dodge them with a little skill. For the attempt to “analyse the meaning” or to “explicate” their concepts would, as we know now, have led to nothing. Berkeley was right: all these concepts were meaningless, in his sense and in the traditional sense of the word “meaning:” they were empty, for they denoted nothing, they stood for nothing. Had this fact been realized at the time, the development of the calculus might have been stopped again, as it had been stopped before. It was the neglect of precision, the almost instinctive neglect of all meaning analysis or explication, which made the wonderful development of the calculus possible.

The problem underlying the whole development was, of course, to retain the powerful instrument of the calculus without the contradictions which had been found in it. There is no doubt that our present methods are more exact than the earlier ones. But this is not due to the fact that they use “exactly defined” terms. Nor does it mean that they are exact: the main point of the definition by way of limits is always an existential assertion, and the meaning of the little phrase “there exists a number” has become the centre of disturbance in contemporary mathematics. . . . This illustrates my point that the attribute of exactness is not absolute, and that it is inexact and highly misleading to use the terms “exact” and “precise” as if they had any exact or precise meaning. (pp. 270-71)

Popper sums up his discussion as follows:

My examples [I quoted only the first of the four examples as it seemed most relevant to Athreya's discussion] may help to emphasize a lesson taught by the whole history of science: that absolute exactness does not exist, not even in logic and mathematics (as illustrated by the example of the still unfinished history of the calculus); that we should never try to be more exact than is necessary for the solution of the problem in hand; and that the demand for “something more exact” cannot in itself constitute a genuine problem (except, of course, when improved exactness may improve the testability of some theory). (p. 277)

I apologize for stringing together this long series of quotes from Popper, but I think that it is important to understand that there is simply no scientific justification for the highly formalistic manner in which much modern economics is now carried out. Of course, other far more authoritative critics than I, like Mark Blaug and Richard Lipsey (also here) have complained about the insistence of modern macroeconomics on microfounded, axiomatized models regardless of whether those models generate better predictions than competing models. Their complaints have regrettably been ignored for the most part. I simply want to point out that a recent, and in many ways admirable, introduction to modern macroeconomics failed to provide a coherent justification for insisting on axiomatized models. It really wasn’t the author’s fault; a coherent justification doesn’t exist.

A New Version of my Paper (with Paul Zimmerman) on the Hayek-Sraffa Debate Is Available on SSRN

One of the good things about having a blog (which I launched on July 5, 2011) is that I get comments about what I am writing from a lot of people that I don’t know. One of my most popular posts – about the sixteenth most visited – was one I wrote, just a couple of months after starting the blog, about the Hayek-Sraffa debate on the natural rate of interest. Unlike many popular posts, which draw most of their visitors early on from links on widely read blogs and then stop attracting traffic, this post started with only modest popularity but keeps on drawing visitors.

That post also led to a collaboration between me and my FTC colleague Paul Zimmerman on a paper “The Sraffa-Hayek Debate on the Natural Rate of Interest” which I presented two years ago at the History of Economics Society conference. We have now finished our revisions of the version we wrote for the conference, and I have just posted the new version on SSRN and will be submitting it for publication later this week.

Here’s the abstract posted on the SSRN site:

Hayek’s Prices and Production, based on his hugely successful lectures at LSE in 1931, was the first English presentation of Austrian business-cycle theory, and established Hayek as a leading business-cycle theorist. Sraffa’s 1932 review of Prices and Production seems to have been instrumental in turning opinion against Hayek and the Austrian theory. A key element of Sraffa’s attack was that Hayek’s idea of a natural rate of interest, reflecting underlying real relationships, undisturbed by monetary factors, was, even from Hayek’s own perspective, incoherent, because, without money, there is a multiplicity of own rates, none of which can be uniquely identified as the natural rate of interest. Although Hayek’s response failed to counter Sraffa’s argument, Ludwig Lachmann later observed that Keynes’s treatment of own rates in Chapter 17 of the General Theory (itself a generalization of Fisher’s (1896) distinction between the real and nominal rates of interest) undercut Sraffa’s criticism. Own rates, Keynes showed, cannot deviate from each other by more than expected price appreciation plus the cost of storage and the commodity service flow, so that anticipated asset yields are equalized in intertemporal equilibrium. Thus, on Keynes’s analysis in the General Theory, the natural rate of interest is indeed well-defined. However, Keynes’s revision of Sraffa’s own-rate analysis provides only a partial rehabilitation of Hayek’s natural rate. There being no unique price level or rate of inflation in a barter system, no unique money natural rate of interest can be specified. Hayek implicitly was reasoning in terms of a constant nominal value of GDP, but barter relationships cannot identify any path for nominal GDP, let alone a constant one, as uniquely compatible with intertemporal equilibrium.

Aside from clarifying the conceptual basis of the natural-rate analysis and its relationship to Sraffa’s own-rate analysis, the paper also highlights the connection (usually overlooked but mentioned by Harald Hagemann in his 2008 article on the own rate of interest for the International Encyclopedia of the Social Sciences) between the own-rate analysis, in either its Sraffian or Keynesian versions, and Fisher’s early distinction between the real and nominal rates of interest. The conceptual identity between Fisher’s real and nominal distinction and Keynes’s own-rate analysis in the General Theory only magnifies the mystery associated with Keynes’s attack in chapter 13 of the General Theory on Fisher’s distinction between the real and the nominal rates of interest.
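To make the connection concrete, the own-rate relation described in the abstract can be written out in modern notation (my notation here, not the paper's), abstracting from storage costs and commodity service flows:

```latex
% Intertemporal arbitrage between money and commodity i.
%   r_m        : money rate of interest
%   r_i        : own rate of interest on commodity i
%   p_i, p_i^e : current and expected future money price of i
% Lending a unit of money yields (1 + r_m); buying commodity i,
% lending it at its own rate, and selling the proceeds yields
% (1 + r_i) p_i^e / p_i. Equalized anticipated asset yields require
(1 + r_m) = (1 + r_i)\,\frac{p_i^{e}}{p_i}
% which, to a first-order approximation, is Fisher's relation:
r_i \approx r_m - a_i, \qquad a_i \equiv \frac{p_i^{e} - p_i}{p_i}
```

When commodity i is goods in general, \(a_i\) is just the expected rate of inflation, and the relation collapses to Fisher's distinction between the real and nominal rates of interest, which is the conceptual identity the paper emphasizes.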

I also feel that the following discussion of Hayek’s role in developing the concept of intertemporal equilibrium, though tangential to the main topic of the paper, makes an important point about how to think about intertemporal equilibrium.

Perhaps the key analytical concept developed by Hayek in his early work on monetary theory and business cycles was the idea of an intertemporal equilibrium. Before Hayek, the idea of equilibrium had been reserved for a static, unchanging state in which economic agents continue doing what they have been doing. Equilibrium is the end state in which all adjustments to a set of initial conditions have been fully worked out. Hayek attempted to generalize this narrow equilibrium concept to make it applicable to the study of economic fluctuations – business cycles – in which he was engaged. Hayek chose to formulate a generalized equilibrium concept. He did not do so, as many have done, by simply adding a steady-state rate of growth to factor supplies and technology. Nor did Hayek define equilibrium in terms of any objective or measurable magnitudes. Rather, Hayek defined equilibrium as the mutual consistency of the independent plans of individual economic agents.

The potential consistency of such plans may be conceived of even if economic magnitudes do not remain constant or grow at a constant rate. Even if the magnitudes fluctuate, equilibrium is conceivable if the fluctuations are correctly foreseen. Correct foresight is not the same as perfect foresight. Perfect foresight is necessarily correct; correct foresight is only contingently correct. All that is necessary for equilibrium is that fluctuations (as reflected in future prices) be foreseen. It is not even necessary, as Hayek (1937) pointed out, that future price changes be foreseen correctly, provided that individual agents agree in their anticipations of future prices. If all agents agree in their expectations of future prices, then the individual plans formulated on the basis of those anticipations are, at least momentarily, equilibrium plans, conditional on the realization of those expectations, because the realization of those expectations would allow the plans formulated on the basis of those expectations to be executed without need for revision. What is required for intertemporal equilibrium is therefore a contingently correct anticipation by future agents of future prices, a contingent anticipation not the result of perfect foresight, but of contingently, even fortuitously, correct foresight. The seminal statement of this concept was given by Hayek in his classic 1937 paper, and the idea was restated by J. R. Hicks (1939), with no mention of Hayek, two years later in Value and Capital.

I made the following comment in a footnote to the penultimate sentence of the quotation:

By defining correct foresight as a contingent outcome rather than as an essential property of economic agents, Hayek elegantly avoided the problems that confounded Oskar Morgenstern ([1935] 1976) in his discussion of the meaning of equilibrium.

I look forward to reading your comments.

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
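The arithmetic behind that point is just the Fisher equation. At the zero lower bound, the only remaining margin for lowering the real rate is expected inflation, which is why the 2% ceiling is doing all the work in Cochrane's diagnosis:

```latex
% Fisher equation: real rate = nominal rate - expected inflation
r = i - \pi^{e}
% At the zero lower bound (i = 0) with expected inflation of 2%:
r = 0 - 0.02 = -0.02
% If equilibrium is said to require, say, r = -0.04, then with i
% stuck at zero the only way to get there is higher expected inflation:
\pi^{e} = i - r = 0 - (-0.04) = 0.04
```

(The illustrative -4% equilibrium rate is my own example, not a figure from Cochrane or the New Keynesian literature.)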

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed it does, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka "secular stagnation"] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so, and a lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that it will be well spent can never be met. As Larry Summers has pointed out, however, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders: what is the scientific basis for not funding those projects?

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was let us say in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?

The Enchanted James Grant Expounds Eloquently on the Esthetics of the Gold Standard

One of the leading financial journalists of our time, James Grant is obviously a very smart, very well read, commentator on contemporary business and finance. He also has published several highly regarded historical studies, and according to the biographical tag on his review of a new book on the monetary role of gold in the weekend Wall Street Journal, he will soon publish a new historical study of the dearly beloved 1920-21 depression, a study that will certainly be worth reading, if not entirely worth believing. Grant reviewed a new book, War and Gold, by Kwasi Kwarteng, which provides a historical account of the role of gold in monetary affairs and in wartime finance since the 16th century. Despite his admiration for Kwarteng’s work, Grant betrays more than a little annoyance and exasperation with Kwarteng’s failure to appreciate what a many-splendored thing gold really is, deploring the impartial attitude to gold taken by Kwarteng.

Exasperatingly, the author, a University of Cambridge Ph. D. in history and a British parliamentarian, refuses to render historical judgment. He doesn’t exactly decry the world’s descent into “too big to fail” banking, occult-style central banking and tiny, government-issued interest rates. Neither does he precisely support those offenses against wholesome finance. He is neither for the dematerialized, non-gold dollar nor against it. He is a monetary Hamlet.

He does, at least, ask: “Why gold?” I would answer: “Because it’s money, or used to be money, and will likely one day become money again.” The value of gold is inherent, not conferred by governments. Its supply tends to grow by 1% to 2% a year, in line with growth in world population. It is nice to look at and self-evidently valuable.

Evidently, Mr. Grant’s enchantment with gold has led him into incoherence. Is gold money or isn’t it? Obviously not — at least not if you believe that definitions ought to correspond to reality rather than to Platonic ideal forms. Sensing that his grip on reality may be questionable, he tries to have it both ways. If gold isn’t money now, it likely will become money again — “one day.” For sure, gold used to be money, but so did cowrie shells, cattle, and at least a dozen other substances. How does that create any presumption that gold is likely to become money again?

Then we read: “The value of gold is inherent.” OMG! And this from a self-proclaimed Austrian! Has he ever heard of the “subjective theory of value?” Mr. Grant, meet Ludwig von Mises.

Value is not intrinsic, it is not in things. It is within us. (Human Action p. 96)

If value “is not in things,” how can anything be “self-evidently valuable?”

Grant, in his emotional attachment to gold, feels obligated to defend the metal against any charge that it may have been responsible for human suffering.

Shelley wrote lines of poetry to protest the deflation that attended Britain’s return to the gold standard after the Napoleonic wars. Mr. Kwarteng quotes them: “Let the Ghost of Gold / Take from Toil a thousandfold / More than e’er its substance could / In the tyrannies of old.” The author seems to agree with the poet.

Grant responds to this unfair slur against gold:

I myself hold the gold standard blameless. The source of the postwar depression was rather the decision of the British government to return to the level of prices and wages that prevailed before the war, a decision it enforced through monetary means (that is, by reimposing the prewar exchange rate). It was an error that Britain repeated after World War I.

This is a remarkable and fanciful defense, suggesting that the British government actually had a specific target level of prices and wages in mind when it restored the pound to its prewar gold parity. In fact, the idea of a price level was not yet even understood by most economists, let alone by the British government. Restoring the pound to its prewar parity was considered a matter of financial rectitude and honor, not a matter of economic fine-tuning. Nor was the choice of the prewar parity the only reason for the ruinous deflation that followed the postwar resumption of gold payments. The replacement of paper pounds with gold pounds implied a significant increase in the demand for gold by the world’s leading economic power, and hence an increase in the total world demand for gold and in its value relative to other commodities, in other words, deflation. David Ricardo foresaw the deflationary consequences of the resumption of gold payments, and tried to mitigate those consequences with his Proposals for an Economical and Secure Currency, designed to limit the increase in the monetary demand for gold. The real error after World War I, as Hawtrey and Cassel both pointed out in 1919, was that the resumption of an international gold standard after gold had been effectively demonetized during World War I would lead to an enormous increase in the monetary demand for gold, causing a worldwide deflationary collapse. After the Napoleonic wars, the gold standard was still a peculiarly British institution, the rest of the world then operating on a silver standard.

Grant makes further extravagant and unsupported claims on behalf of the gold standard:

The classical gold standard, in service roughly from 1815 to 1914, was certainly imperfect. What it did deliver was long-term price stability. What the politics of the gold-standard era delivered was modest levels of government borrowing.

The choice of 1815 as the start of the gold-standard era is quite arbitrary, 1815 being the year that Britain defeated Napoleonic France, thereby setting the stage for the restoration of the golden pound at its prewar parity. But the very fact that 1815 marked the beginning of the restoration of sterling’s prewar gold parity shows that for Britain the gold standard began much earlier – in 1717, when Isaac Newton, then master of the mint, established the gold parity at a level that overvalued gold, thereby driving silver out of circulation. So, if the gold standard somehow ensures that government borrowing remains modest, one would expect British government borrowing to have been modest from 1717 to 1797, when the gold standard was suspended. But the chart below, showing British government debt as a percentage of GDP from 1692 to 2010, shows that British government debt rose rapidly over most of the 18th century.

uk_national_debt

Grant suggests that bad behavior by banks is mainly the result of abandonment of the gold standard.

Progress is the rule, the Whig theory of history teaches, but the old Whigs never met the new bankers. Ordinary people live longer and Olympians run faster than they did a century ago, but no such improvement is evident in our monetary and banking affairs. On the contrary, the dollar commands but 1/1,300th of an ounce of gold today, as compared with the 1/20th of an ounce on the eve of World War I. As for banking, the dismal record of 2007-09 would seem inexplicable to the financial leaders of the Model T era. One of these ancients, Comptroller of the Currency John Skelton Williams, predicted in 1920 that bank failures would soon be unimaginable. In 2008, it was solvency you almost couldn’t imagine.

Once again, the claims that Mr. Grant makes on behalf of the gold standard simply do not correspond to reality. The chart below shows the annual number of bank failures in every year since 1920.

bank_failures

Somehow, Mr. Grant seems to have overlooked what happened between 1929 and 1932. John Skelton Williams obviously didn’t know what was going to happen in the following decade. Certainly no shame in that. I am guessing that Mr. Grant does know what happened; he just seems too bedazzled by the beauty of the gold standard to care.

Further Thoughts on Capital and Inequality

In a recent post, I criticized, perhaps without adequate understanding, some of Thomas Piketty’s arguments about capital in his best-selling book. My main criticism is that Piketty’s argument that, under capitalism, there is an inherent tendency toward increasing inequality ignores the heterogeneity of capital and the tendency for new capital embodying new knowledge, new techniques, and new technologies to render older capital obsolete. Contrary to the simple model of accumulation on which Piketty relies, the accumulation of capital is not a smooth process; it is a very uneven process, generating very high returns to some owners of capital, but also imposing substantial losses on other owners of capital. The only way to avoid the risk of owning suddenly obsolescent capital is to own the market portfolio. But I conjecture that few, if any, great fortunes have been amassed by investing in the market portfolio, and (I further conjecture) great fortunes, once amassed, are usually not liquidated and reinvested in the market portfolio, but continue to be weighted heavily in fairly narrow portfolios of assets from which those great fortunes grew. Great fortunes, aside from being dissipated by deliberate capital consumption, also tend to be eroded by the loss of value through obsolescence, a process that can only be avoided by extreme diversification of holdings or by the exercise of entrepreneurial skill, a skill rarely bequeathed from generation to generation.

Applying this insight, Larry Summers pointed out in his review of Piketty’s book that the rate of turnover in the Forbes list of the 400 wealthiest individuals between 1982 and 2012 was much higher than the turnover predicted by Piketty’s simple accumulation model. Commenting on my post (in which I referred to Summers’s review), Kevin Donoghue objected that Piketty had criticized the Forbes 400 as a measure of wealth in his book, so that Piketty would not necessarily accept Summers’s criticism based on the Forbes 400. Well, as an alternative, let’s have a look at the S&P 500. I just found this study of the rate of turnover among the 500 firms making up the S&P 500, showing that turnover in the composition of the index has increased greatly over the past 50 years. See the chart below, copied from that study, showing that the average tenure of firms on the S&P 500 was over 60 years in 1958 but had fallen to less than 20 years by 2011. The pace of creative destruction seems to be accelerating.

S&P500_turnover

From the same study here’s another chart showing the companies that were deleted from the index between 2001 and 2011 and those that were added.

S&P500_churn

But I would also add a cautionary note that, because the population of individuals and publicly held business firms is growing, comparing the composition of a fixed number (400) of wealthiest individuals or (500) most successful corporations over time may overstate the increase over time in the rate of turnover, any group of fixed numerical size becoming a smaller percentage of the population over time. Even with that caveat, however, what this tells me is that there is a lot of variability in the value of capital assets. Wealth grows, but it grows unevenly. Capital is accumulated, but it is also lost.

Does the process of capital accumulation necessarily lead to increasing inequality of wealth and income? Perhaps, but I don’t think that the answer is necessarily determined by the relationship between the real rate of interest and the rate of growth in GDP.
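For reference, the relationship I am questioning is the textbook version of Piketty's mechanism (a sketch of his two "fundamental laws," not his full argument):

```latex
% Piketty's "second fundamental law": with saving rate s and growth
% rate g, the capital-output ratio tends toward a steady-state value
\beta = \frac{s}{g}
% "First fundamental law" (an accounting identity): with average
% return on capital r, capital's share of income is
\alpha = r\beta = \frac{rs}{g}
% So if r > g and capital income is largely reinvested, wealth
% compounds faster than income grows, which is the mechanism said
% to drive rising inequality.
```

The objection in the text is that the r in these formulas is an average return, whereas actual returns on heterogeneous capital are widely dispersed, with obsolescence imposing large losses on particular fortunes even when the average return exceeds g.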

Many people have suggested that an important cause of rising inequality has been the increasing importance of winner-take-all markets in which a few top performers seem to be compensated at very much higher rates than other, only slightly less gifted, performers. This sort of inequality is reflected in widening gaps between the highest and lowest paid participants in a given occupation. In some cases at least, the differences between the highest and lowest paid don’t seem to correspond to the differences in skill, though admittedly skill is often difficult to measure.

This concentration of rewards is especially characteristic of competitive sports, winners gaining much larger rewards than losers. However, because the winner’s return comes, at least in part, at the expense of the loser, the private gain to winning exceeds the social gain. That’s why all organized professional sports engage in some form of revenue sharing and impose limits on spending on players. Without such measures, competitive sports would not be viable, because the private return to improving quality exceeds the collective return from improved quality. There are, of course, times when a superstar like Babe Ruth or Michael Jordan can actually increase the return to losers, but that seems to be the exception.

To what extent other sorts of winner-take-all markets share this intrinsic inefficiency is not immediately clear to me, but it does not seem implausible to think that there is an incentive to overinvest in skills that increase the expected return to participants in winner-take-all markets. If so, the source of inequality may also be a source of inefficiency.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
