John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
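The arithmetic behind the New Keynesian diagnosis can be made explicit. Here is a minimal sketch (the function name is mine; the numbers match those in the passage): with the nominal rate stuck at its zero lower bound and inflation at 2%, the real rate is pinned at −2%, and the only lever left for making it more negative is higher inflation, which is exactly why the 2% ceiling matters.

```python
def real_rate(nominal, inflation):
    """Approximate Fisher relation: real rate = nominal rate - inflation."""
    return nominal - inflation

# At the zero lower bound with 2% inflation, the real rate is -2%.
at_zlb = real_rate(0.0, 0.02)

# If the model says the economy "requires" a more sharply negative real
# rate, the only remaining lever at the zero bound is higher inflation:
with_higher_inflation = real_rate(0.0, 0.04)
```

The sketch makes the point in the text concrete: nothing in the arithmetic fixes inflation at 2%; that ceiling is a policy choice, not a constraint of the model.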

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed they do, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka "secular stagnation"] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that the money will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders: what is the scientific basis for not funding those projects?

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was, let us say, in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?

The Enchanted James Grant Expounds Eloquently on the Esthetics of the Gold Standard

One of the leading financial journalists of our time, James Grant is obviously a very smart, very well read, commentator on contemporary business and finance. He also has published several highly regarded historical studies, and according to the biographical tag on his review of a new book on the monetary role of gold in the weekend Wall Street Journal, he will soon publish a new historical study of the dearly beloved 1920-21 depression, a study that will certainly be worth reading, if not entirely worth believing. Grant reviewed a new book, War and Gold, by Kwasi Kwarteng, which provides a historical account of the role of gold in monetary affairs and in wartime finance since the 16th century. Despite his admiration for Kwarteng’s work, Grant betrays more than a little annoyance and exasperation with Kwarteng’s failure to appreciate what a many-splendored thing gold really is, deploring the impartial attitude to gold taken by Kwarteng.

Exasperatingly, the author, a University of Cambridge Ph. D. in history and a British parliamentarian, refuses to render historical judgment. He doesn’t exactly decry the world’s descent into “too big to fail” banking, occult-style central banking and tiny, government-issued interest rates. Neither does he precisely support those offenses against wholesome finance. He is neither for the dematerialized, non-gold dollar nor against it. He is a monetary Hamlet.

He does, at least, ask: “Why gold?” I would answer: “Because it’s money, or used to be money, and will likely one day become money again.” The value of gold is inherent, not conferred by governments. Its supply tends to grow by 1% to 2% a year, in line with growth in world population. It is nice to look at and self-evidently valuable.

Evidently, Mr. Grant’s enchantment with gold has led him into incoherence. Is gold money or isn’t it? Obviously not — at least not if you believe that definitions ought to correspond to reality rather than to Platonic ideal forms. Sensing that his grip on reality may be questionable, he tries to have it both ways. If gold isn’t money now, it likely will become money again — “one day.” For sure, gold used to be money, but so did cowrie shells, cattle, and at least a dozen other substances. How does that create any presumption that gold is likely to become money again?

Then we read: “The value of gold is inherent.” OMG! And this from a self-proclaimed Austrian! Has he ever heard of the “subjective theory of value?” Mr. Grant, meet Ludwig von Mises.

Value is not intrinsic, it is not in things. It is within us. (Human Action p. 96)

If value “is not in things,” how can anything be “self-evidently valuable?”

Grant, in his emotional attachment to gold, feels obligated to defend the metal against any charge that it may have been responsible for human suffering.

Shelley wrote lines of poetry to protest the deflation that attended Britain’s return to the gold standard after the Napoleonic wars. Mr. Kwarteng quotes them: “Let the Ghost of Gold / Take from Toil a thousandfold / More than e’er its substance could / In the tyrannies of old.” The author seems to agree with the poet.

Grant responds to this unfair slur against gold:

I myself hold the gold standard blameless. The source of the postwar depression was rather the decision of the British government to return to the level of prices and wages that prevailed before the war, a decision it enforced through monetary means (that is, by reimposing the prewar exchange rate). It was an error that Britain repeated after World War I.

This is a remarkable and fanciful defense, suggesting that the British government actually had a specific target level of prices and wages in mind when it restored the pound to its prewar gold parity. In fact, the idea of a price level was not yet even understood by most economists, let alone by the British government. Restoring the pound to its prewar parity was considered a matter of financial rectitude and honor, not a matter of economic fine-tuning. Nor was the choice of the prewar parity the only reason for the ruinous deflation that followed the postwar resumption of gold payments. The replacement of paper pounds with gold pounds implied a significant increase in the demand for gold by the world’s leading economic power, and hence an increase in the total world demand for gold and in its value relative to other commodities — in other words, deflation. David Ricardo foresaw the deflationary consequences of the resumption of gold payments, and tried to mitigate those consequences with his Proposals for an Economical and Secure Currency, designed to limit the increase in the monetary demand for gold. The real error after World War I, as Hawtrey and Cassel both pointed out in 1919, was that the resumption of an international gold standard after gold had been effectively demonetized during World War I would lead to an enormous increase in the monetary demand for gold, causing a worldwide deflationary collapse. After the Napoleonic wars, the gold standard was still a peculiarly British institution, the rest of the world then operating on a silver standard.

Grant makes further extravagant and unsupported claims on behalf of the gold standard:

The classical gold standard, in service roughly from 1815 to 1914, was certainly imperfect. What it did deliver was long-term price stability. What the politics of the gold-standard era delivered was modest levels of government borrowing.

The choice of 1815 as the start of the gold standard era is quite arbitrary, 1815 being the year that Britain defeated Napoleonic France, thereby setting the stage for the restoration of the golden pound at its prewar parity. But the very fact that 1815 marked the beginning of the restoration of the prewar gold parity of sterling shows that for Britain the gold standard began much earlier — actually in 1717, when Isaac Newton, then master of the mint, established the gold parity at a level that overvalued gold, thereby driving silver out of circulation. So, if the gold standard somehow ensures that government borrowing levels are modest, one would think that borrowing by the British government would have been modest from 1717 to 1797, when the gold standard was suspended. But the chart below, which traces British government debt as a percentage of GDP from 1692 to 2010, shows that British government debt rose rapidly over most of the 18th century.

uk_national_debt

Grant suggests that bad behavior by banks is mainly the result of abandonment of the gold standard.

Progress is the rule, the Whig theory of history teaches, but the old Whigs never met the new bankers. Ordinary people live longer and Olympians run faster than they did a century ago, but no such improvement is evident in our monetary and banking affairs. On the contrary, the dollar commands but 1/1,300th of an ounce of gold today, as compared with the 1/20th of an ounce on the eve of World War I. As for banking, the dismal record of 2007-09 would seem inexplicable to the financial leaders of the Model T era. One of these ancients, Comptroller of the Currency John Skelton Williams, predicted in 1920 that bank failures would soon be unimaginable. In 2008, it was solvency you almost couldn’t imagine.

Once again, the claims that Mr. Grant makes on behalf of the gold standard simply do not correspond to reality. The chart below shows the annual number of bank failures in every year since 1920.

bank_failures

Somehow, Mr. Grant seems to have overlooked what happened between 1929 and 1932. John Skelton Williams obviously didn’t know what was going to happen in the following decade. Certainly no shame in that. I am guessing that Mr. Grant does know what happened; he just seems too bedazzled by the beauty of the gold standard to care.

Further Thoughts on Capital and Inequality

In a recent post, I criticized, perhaps without adequate understanding, some of Thomas Piketty’s arguments about capital in his best-selling book. My main criticism is that Piketty’s argument that, under capitalism, there is an inherent tendency toward increasing inequality ignores the heterogeneity of capital and the tendency for new capital embodying new knowledge, new techniques, and new technologies to render older capital obsolete. Contrary to the simple model of accumulation on which Piketty relies, the accumulation of capital is not a smooth process; it is a very uneven process, generating very high returns to some owners of capital, but also imposing substantial losses on other owners of capital. The only way to avoid the risk of owning suddenly obsolescent capital is to own the market portfolio. But I conjecture that few, if any, great fortunes have been amassed by investing in the market portfolio, and (I further conjecture) great fortunes, once amassed, are usually not liquidated and reinvested in the market portfolio, but continue to be weighted heavily in fairly narrow portfolios of assets from which those great fortunes grew. Great fortunes, aside from being dissipated by deliberate capital consumption, also tend to be eroded by the loss of value through obsolescence, a process that can only be avoided by extreme diversification of holdings or by the exercise of entrepreneurial skill, a skill rarely bequeathed from generation to generation.

Applying this insight, Larry Summers pointed out in his review of Piketty’s book that the rate of turnover in the Forbes list of the 400 wealthiest individuals between 1982 and 2012 was much higher than the turnover predicted by Piketty’s simple accumulation model. Commenting on my post (in which I referred to Summers’s review), Kevin Donoghue objected that Piketty had criticized the Forbes 400 as a measure of wealth in his book, so that Piketty would not necessarily accept Summers’s criticism based on the Forbes 400. Well, as an alternative, let’s have a look at the S&P 500. I just found this study of turnover among the 500 firms making up the S&P 500, which shows that the rate of turnover in the composition of the index has increased greatly over the past 50 years. See the chart below, copied from that study, showing that the average tenure of firms on the S&P 500 was over 60 years in 1958, but by 2011 had fallen to less than 20 years. The pace of creative destruction seems to be accelerating.

S&P500_turnover

From the same study here’s another chart showing the companies that were deleted from the index between 2001 and 2011 and those that were added.

S&P500_churn

But I would also add a cautionary note that, because the population of individuals and publicly held business firms is growing, comparing the composition of a fixed number (400) of wealthiest individuals or (500) most successful corporations over time may overstate the increase over time in the rate of turnover, any group of fixed numerical size becoming a smaller percentage of the population over time. Even with that caveat, however, what this tells me is that there is a lot of variability in the value of capital assets. Wealth grows, but it grows unevenly. Capital is accumulated, but it is also lost.

Does the process of capital accumulation necessarily lead to increasing inequality of wealth and income? Perhaps, but I don’t think that the answer is necessarily determined by the relationship between the real rate of interest and the rate of growth in GDP.
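The accumulation logic at issue can be put in a toy calculation (the function name, parameters, and numbers below are mine, for illustration only, not Piketty’s actual model): if wealth compounds at a uniform, lossless return r while income grows at g, the wealth-to-income ratio drifts upward without bound whenever r exceeds g. The point of the preceding paragraphs is precisely that actual returns are nothing like a uniform, lossless r.

```python
def wealth_income_ratio(r, g, years, initial_ratio=4.0):
    """Toy accumulation model: wealth compounds at rate r (all capital
    income reinvested, no losses or obsolescence), income grows at rate g.
    Returns the wealth-to-income ratio after `years`. Illustrative only."""
    wealth, income = initial_ratio, 1.0
    for _ in range(years):
        wealth *= 1.0 + r
        income *= 1.0 + g
    return wealth / income

# With r = 5% and g = 1.5%, the ratio keeps rising decade after decade;
# with r = g, it stays put. Obsolescence and capital losses, ignored
# here, are what make the smooth version of this story doubtful.
rising = wealth_income_ratio(0.05, 0.015, 30)
flat = wealth_income_ratio(0.05, 0.05, 30)
```

The sketch shows why the r versus g comparison drives the simple model, and also why the model’s conclusion depends on treating r as smooth and riskless rather than as the uneven, loss-prone process described above.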

Many people have suggested that an important cause of rising inequality has been the increasing importance of winner-take-all markets in which a few top performers seem to be compensated at very much higher rates than other, only slightly less gifted, performers. This sort of inequality is reflected in widening gaps between the highest and lowest paid participants in a given occupation. In some cases at least, the differences between the highest and lowest paid don’t seem to correspond to the differences in skill, though admittedly skill is often difficult to measure.

This concentration of rewards is especially characteristic of competitive sports, winners gaining much larger rewards than losers. However, because the winner’s return comes, at least in part, at the expense of the loser, the private gain to winning exceeds the social gain. That’s why all organized professional sports engage in some form of revenue sharing and impose limits on spending on players. Without such measures, competitive sports would not be viable, because the private return to improving quality exceeds the collective return from improved quality. There are, of course, times when a superstar like Babe Ruth or Michael Jordan can actually increase the return to losers, but that seems to be the exception.

To what extent other sorts of winner-take-all markets share this intrinsic inefficiency is not immediately clear to me, but it does not seem implausible to think that there is an incentive to overinvest in skills that increase the expected return to participants in winner-take-all markets. If so, the source of inequality may also be a source of inefficiency.

The Backing Theory of Money v. the Quantity Theory of Money

Mike Sproul and Scott Sumner were arguing last week about how to account for the value of fiat money and the rate of inflation. As I observed in a recent post, I am doubtful that monetary theory, in its current state, can handle those issues adequately, so I am glad to see that others are trying to think the problems through even if the result is only to make clear how much we don’t know. Both Mike and Scott are very smart guys, and I find some validity in the arguments of both even if I am not really satisfied with the arguments of either.

Mike got things rolling with a guest post on JP Koning’s blog in which he lodged two complaints against Scott:

First, “Scott thinks that the liabilities of governments and central banks are not really liabilities.”

I see two problems with Mike’s first complaint. First, Mike is not explicit about which liabilities he is referring to. However, from the context of his discussion, it seems clear that he is talking about those liabilities that we normally call currency, or in the case of the Federal Reserve, Federal Reserve Notes. Second, and more important, it is not clear what definition of “liability” Mike is using. In a technical sense, as Mike observes, Federal Reserve Notes are classified by the Fed itself as liabilities. But what does it mean for a Federal Reserve Note to be a liability of the Fed? A liability implies that an obligation has been undertaken by someone to be discharged under certain defined conditions. What is the obligation undertaken by the Fed upon issuing a Federal Reserve Note? Under the gold standard, the Fed was legally obligated to redeem its Notes for gold at a fixed predetermined conversion rate. After the gold standard was suspended, that obligation was nullified. What obligation did the Fed accept in place of the redemption obligation? Here’s Mike’s answer:

But there are at least three other ways that FRN’s can still be redeemed: (i) for the Fed’s bonds, (ii) for loans made by the Fed, (iii) for taxes owed to the federal government. The Fed closed one channel of redemption (the gold channel), while the other redemption channels (loan, tax, and bond) were left open.

Those are funny obligations inasmuch as there are no circumstances under which they require the Fed to take any action. The purchase of a Fed (Treasury?) bond at the going market price imposes no obligation on the Fed to do anything except what it is already doing anyway. For there to be an obligation resulting from the issue by the Fed of a note, the terms of the transaction following upon the original issue would have to have been stipulated in advance. But the terms on which the Fed engages in transactions with the public are determined by market forces, not by contractual obligation. The same point applies to loans made by the Fed. When the Fed makes a loan, it emits FRNs. The willingness of the Fed to accept FRNs previously emitted in the course of making loans as repayment of those loans doesn’t strike me as an obligation associated with its issue of FRNs. Finally, the fact that the federal government accepts (or requires) payment of tax obligations in FRNs is a decision of the federal government to which the Fed, as a matter of strict legality, is not a party. So it seems to me that the technical status of an FRN as a liability of the Fed is a semantic or accounting oddity rather than a substantive property of an FRN.

Having said that, I think that Mike actually does make a substantive point about FRNs, which is that FRNs are not necessarily hot potatoes in the strict quantity-theory sense. There are available channels through which the public can remit its unwanted FRNs back to the Fed. The economic question is whether those means of sending unwanted FRNs back to the Fed are as effective in pinning down the price level as an enforceable legal obligation undertaken by the Fed to redeem FRNs at a predetermined exchange rate in terms of gold. Mike suggests that the alternative mechanisms by which the public can dispose of unwanted FRNs are as effective as gold convertibility in pinning down the price level. I think that assertion is implausible, and it remains to be proved, though I am willing to keep an open mind on the subject.

Now let’s consider Mike’s second complaint: “Scott thinks that if the central bank issues more money, then the money will lose value even if the money is fully backed.”

My first reaction is to ask what it means for money to be “fully backed?” Since it is not clear in what sense the inconvertible note issue of a central bank represents a liability of the issuing bank, it is also not exactly clear why any backing is necessary, or what backing means, though I will try to suggest in a moment a reason why the assets of the central bank actually do matter. But again the point is that, when a liability does not impose a well-defined legal obligation on the central bank to redeem that liability at a predetermined rate in terms of an asset whose supply the central bank does not itself control, the notion of “backing” is as vague as the notion of a “liability.” The difference between a liability that imposes no effective constraint on a central bank and one that does impose an effective constraint on a central bank is the difference between what Nick Rowe calls an alpha bank, which does not make its notes convertible into another asset (real or monetary) not under its control, and what he calls a beta bank, which does make its liabilities convertible into another asset (real or monetary) not under its control.

Now one way to interpret “backing” is to look at all the assets on the balance sheet of the central bank and compare the value of those assets to the value of the outstanding notes issued by the central bank. Sometimes I think that this is really all that Mike means when he talks about “backing,” but I am not really sure. At any rate, if we think of backing in this vague sense, maybe what Mike wants to say is that the value of each note issued by the central bank is equal to the value of its assets divided by the number of notes that it has issued. But if this really is what Mike means, then it seems that the aggregate value of the outstanding notes of the central bank must always equal the value of the assets of the central bank. But there is a problem with that notion of “backing” as well, because the equality between the value of the assets of the central bank and the value of its liabilities can be achieved at any price level, and at any rate of inflation, because an increase in prices will scale up the nominal value of outstanding notes and the value of central-bank assets by the same amount. Without providing some nominal anchor, which, as far as I can tell, Mike has not done, the price level is indeterminate. Now to be sure, this is no reason for a quantity theorist like Scott to feel overly self-satisfied, because the quantity theory is subject to the same indeterminacy. And while Mike seems absolutely convinced that the backing theory is superior to the quantity theory, he himself admits that it is very difficult, if not impossible, to distinguish between the two theories in terms of their empirical implications.
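The indeterminacy point can be seen in a toy balance-sheet calculation (the function name and numbers below are mine, purely for illustration): if the per-note value is defined as assets divided by notes outstanding, the “backing” equality holds identically at every price level, so it cannot pin the price level down.

```python
def backing_gap(price_level, real_assets=100.0, notes=50.0):
    """Nominal value of central-bank assets minus the nominal value of
    its note issue, when each note is 'worth' assets/notes. Illustrative
    numbers; the point is that the gap is zero at ANY price level."""
    nominal_assets = real_assets * price_level
    value_per_note = nominal_assets / notes
    return nominal_assets - value_per_note * notes

# The equality supplies no nominal anchor: doubling or decupling the
# price level scales both sides of the balance sheet equally.
gaps = [backing_gap(p) for p in (1.0, 2.0, 10.0)]
```

Since the gap is zero by construction at every price level, the balance-sheet equality by itself leaves the price level indeterminate — which is exactly the difficulty raised in the text for this vague sense of “backing.”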

Let me now consider a slightly different way in which the value of the assets on the balance sheet of a central bank could affect the value of the money issued by the central bank. I would suggest, along the lines of an argument made by Ben Klein many years ago in some of his papers on competitive moneys (e.g. this one), that it is meaningful to talk about the quality of the money issued by a particular bank. In Klein’s terms, the quality of a money reflects the confidence with which people can predict the future value of a money. It’s plausible to assume that the demand (in real terms) to hold money increases with the quality of money. Certainly people will tend to switch from holding lower- to higher-quality moneys. I think that it’s also plausible to assume that the quality of a particular money issued by a central bank increases as the value of the assets held by the central bank increases, because the larger the asset portfolio of the issuer, the more likely it is that the issuer will control the value of the money that it has issued. (This goes to Mike’s point that a central bank has to hold enough assets to buy back its currency if the demand for it goes down. Actually it doesn’t, but people will be more willing to hold a money the larger the stock of assets held by the issuer with which it can buy back its money to prevent it from losing value.) I think that is ultimately the idea that Mike is trying to get at when he talks about “backing.” So I would interpret Mike as saying that the quality of a money is an increasing function of the total asset holdings of the central bank issuing the money, and the demand for a money is an increasing function of its quality. Such an adjustment in Mike’s backing theory just might help to bring the backing theory and the quantity theory into a closer correspondence than one might gather from reading the back and forth between Mike and Scott last week.

PS Mike was kind enough to quote my argument about the problem that backward induction poses for the standard explanation of the value of fiat money. Scott once again dismisses the problem by saying that the problem can be avoided by assuming that no one knows when the last period is. I agree that that is a possible answer, but it means that the value of fiat money is contingent on a violation of rational expectations and the efficient market hypothesis. I am sort of surprised that Scott, of all people, would be so nonchalant about accepting such a violation. But I’ve already said enough about that for now.

Thomas Piketty and Joseph Schumpeter (and Gerard Debreu)

Everybody else seems to have an opinion about Thomas Piketty, so why not me? As if the last two months of Piketty-mania (reminiscent, to those of a certain age, of an earlier invasion of American shores, exactly 50 years ago, by four European rock-stars) were not enough, there has been a renewed flurry of interest this week about Piketty’s blockbuster book triggered by Chris Giles’s recent criticism in the Financial Times of Piketty’s use of income data, which mainly goes to show that, love him or hate him, people cannot get enough of Professor Piketty. Now I will admit upfront that I have not read Piketty’s book, and from my superficial perusal of the recent criticisms, they seem less problematic than the missteps of Reinhart and Rogoff in claiming that, beyond a critical 90% ratio of national debt to national income, the burden of national debt begins to significantly depress economic growth. But in any event, my comments in this post are directed at Piketty’s conceptual approach, not at his use of the data in his empirical work. In fact, I think that Larry Summers in his superficially laudatory, but substantively critical, review has already made most of the essential points about Piketty’s book. But I think that Summers left out a couple of important issues — issues touched upon usefully by George Cooper in a recent blog post about Piketty — which bear further emphasis.

Just to set the stage for my comments, here is my understanding of the main conceptual point of Piketty’s book. Piketty believes that the essence of capitalism is that capital generates a return to the owners of capital that, on average over time, is equal to the rate of interest. Capital grows; it accumulates. And the rate of accumulation is equal to the rate of interest. However, the rate of interest is generally somewhat higher than the rate of growth of the economy. So if capital is accumulating at a rate of growth equal to, say, 5%, and the economy is growing at a rate of growth equal to only 3%, the share of income accruing to the owners of capital will grow over time. It is in this simple theoretical framework — the relationship between the rate of economic growth and the rate of interest — that Piketty believes he has found the explanation not only for the increase in inequality over the past few centuries of capitalist development, but for the especially rapid increase in inequality over the past 30 years.
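The arithmetic of that framework can be made concrete with a toy calculation (the numbers here are mine, chosen for illustration, not taken from Piketty's book): if capital income is fully reinvested, the capital stock compounds at the rate of return while national income grows at the slower rate, so capital's share of income rises without limit.

```python
# Toy illustration of r > g (my own numbers, not Piketty's): if all capital
# income is reinvested, capital K grows at the rate of return r while national
# income Y grows at g < r, so the capital-income share r*K/Y keeps rising.
r, g = 0.05, 0.03          # return on capital (5%) and income growth (3%)
K, Y = 3.0, 1.0            # initial capital stock and income, so K/Y = 3
for year in range(0, 101, 25):
    share = r * K / Y      # capital income (r*K) as a share of income Y
    print(f"year {year:3d}: K/Y = {K/Y:5.2f}, capital share = {share:.1%}")
    for _ in range(25):    # compound forward another 25 years
        K *= 1 + r
        Y *= 1 + g
```

Run long enough, the computed share eventually exceeds 100% of income, a reductio which already hints at why pure accumulation at the rate of interest cannot go on indefinitely.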

While praising Piketty’s scholarship, empirical research and rhetorical prowess, Summers does not handle Piketty’s main thesis gently. Summers points out that, as accumulation proceeds, the incentive to engage in further accumulation tends to weaken, so the iron law of increasing inequality posited by Piketty is not nearly as inflexible as Piketty suggests. Now one could respond that, once accumulation reaches a certain threshold, the capacity to consume weakens as well, if only, as Gary Becker liked to remind us, because of the constraint that time imposes on consumption.

Perhaps so, but the return to capital is not the only, or even the most important, source of inequality. I would interpret Summers’ point to be the following: pure accumulation is unlikely to generate enough growth in wealth to outstrip the capacity to increase consumption. To generate an increase in wealth so large that consumption can’t keep up, there must be not just a return to the ownership of capital; there must be profit in the Knightian or Schumpeterian sense, over and above the return on capital. Alternatively, there must be some extraordinary rent on a unique, irreproducible factor of production. Accumulation by itself, without the stimulus of entrepreneurial profit, reflecting the application of new knowledge in the broadest sense of the term, cannot go on for very long. It is entrepreneurial profits and rents to unique factors of production (or awards of government monopolies or other privileges), not plain-vanilla accumulation, that account for the accumulation of extraordinary amounts of wealth. Moreover, it seems that philanthropy (especially conspicuous philanthropy) provides an excellent outlet for the dissipation of accumulated wealth and can easily be combined with quasi-consumption activities, like art patronage or political activism, as more conventional consumption outlets become exhausted.

Summers backs up his conceptual criticism with a powerful factual argument. Comparing the Forbes list of the 400 richest individuals in 1982 with the Forbes list for 2012 Summers observes:

When Forbes compared its list of the wealthiest Americans in 1982 and 2012, it found that less than one tenth of the 1982 list was still on the list in 2012, despite the fact that a significant majority of members of the 1982 list would have qualified for the 2012 list if they had accumulated wealth at a real rate of even 4 percent a year. They did not, given pressures to spend, donate, or misinvest their wealth. In a similar vein, the data also indicate, contra Piketty, that the share of the Forbes 400 who inherited their wealth is in sharp decline.
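The compounding assumption behind that comparison is easy to verify (a quick calculation of my own, using the 4% real return Summers cites):

```python
# At a 4% real return, a fortune more than triples over the 30 years from
# 1982 to 2012, so merely retaining and reinvesting a 1982 fortune should
# have kept most of the 1982 Forbes 400 on the 2012 list.
growth_factor = 1.04 ** 30
print(f"A 1982 fortune grows by a factor of {growth_factor:.2f} by 2012")
```

That most of the 1982 list nevertheless fell off the 2012 list is what makes the Forbes comparison such a sharp piece of evidence against an iron law of accumulation.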

But something else is also going on here, a misunderstanding, derived from a fundamental ambiguity, about what capital actually means. Capital can refer either to a durable physical asset or to a sum of money. When economists refer to capital as a factor of production, they are thinking of capital as a physical asset. But in most models, economists try to simplify the analysis by collapsing the diversity of the entire stock of heterogeneous capital assets into a single homogeneous substance called “capital” and then measuring it not in terms of its physical units (which, given heterogeneity, is strictly impossible) but in terms of its value. This creates all kinds of problems, leading to some mighty arguments among economists ever since the latter part of the nineteenth century, when Carl Menger (the first Austrian economist) turned on his prize pupil Eugen von Bohm-Bawerk, who wrote three dense volumes discussing the theory of capital and interest, and pronounced Bohm-Bawerk’s theory of capital “the greatest blunder in the history of economics.” F. A. Hayek, trying to restate Bohm-Bawerk’s theory in a coherent form, wrote a volume about 75 years ago called The Pure Theory of Capital, which probably has been read from cover to cover by fewer than 100 living souls, and probably understood by fewer than 20 of those. I remember wanting to ask Hayek what he made of Menger’s remark, but, to my eternal sorrow, I forgot to ask him that question the last time that I saw him.

At any rate, treating capital as a homogeneous substance that can be measured in terms of its value rather than in terms of physical units involves serious, perhaps intractable, problems. For certain purposes, it may be worthwhile to ignore those problems and work with a simplified model (a single output which can be consumed or used as a factor of production), but the magnitude of the simplification is rarely acknowledged. In his discussion, Piketty seems, as best as I could determine using obvious search terms on Amazon, unaware of the conceptual problems involved in speaking about capital as a homogeneous substance measured in terms of its value.

In the real world, capital is anything but homogeneous. It consists of an array of very specialized, often unique, physical embodiments. Once installed, physical capital is usually sunk, and its value is highly uncertain. In contrast to the imaginary model of a homogeneous substance that just seems to grow at a fixed natural rate, the real physical capital that is deployed in the process of producing goods and services is complex and ever-changing in its physical and economic characteristics, and the economic valuations associated with its various individual components are in perpetual flux. While the total value of all capital may be growing at a fairly steady rate over time, the values of the individual assets that constitute the total stock of capital fluctuate wildly, and few owners of physical capital have any guarantee that the value of their assets will appreciate at a steady rate over time.

Now one would have thought that an eminent scholar like Professor Piketty would, in the course of a 700-page book about capital, have had occasion to comment on the enormous diversity and ever-changing composition of the stock of physical capital. These changes are driven by a competitive process in which entrepreneurs constantly introduce new products and new methods of producing products, a competitive process that enriches some owners of new capital, and, it turns out, impoverishes others — owners of old, suddenly obsolete, capital. It is a process that Joseph Schumpeter analyzed in his first great book, The Theory of Economic Development, and later memorably named “creative destruction” in Capitalism, Socialism and Democracy. But neither the term “creative destruction” nor the title of The Theory of Economic Development appears at all in Piketty’s book, and Schumpeter’s name appears only once, in connection not with the notion of creative destruction, but with his, possibly ironic, prediction in Capitalism, Socialism and Democracy that socialism would eventually replace capitalism.

Thus, Piketty’s version of capitalist accumulation seems much too abstract and too far removed from the way in which great fortunes are amassed to provide real insight into the sources of increasing inequality. Insofar as such fortunes are associated with accumulation of capital, they are likely to be the result of the creation of new forms of capital associated with new products, or new production processes. The creation of new capital simultaneously destroys old forms of capital. New fortunes are amassed, and old ones dissipated. The model of steady accumulation that is at the heart of Piketty’s account of inexorably increasing inequality misses this essential feature of capitalism.

I don’t say that Schumpeter’s account of creative destruction means that increasing inequality is a trend that should be welcomed. There may well be arguments that capitalist development and creative destruction are socially inefficient. I have explained in previous posts (e.g., here, here, and here) why I think that a lot of financial-market activity is likely to be socially wasteful. Similar arguments might be made about other kinds of activities in non-financial markets where the private gain exceeds the social gain. Winner-take-all markets, which seem to be characterized by this divergence between private and social benefits and costs, and which apparently account for a growing share of economic activity, are an obvious source of inequality. But what I find most disturbing about the growth in inequality over the past 30 years is that great wealth has gained increased social status. That seems to me to be a very unfortunate change in public attitudes. I have no problem with people getting rich, even filthy rich. But don’t expect me to admire them because they are rich.

Finally, you may be wondering what all of this has to do with Gerard Debreu. Well, nothing really, but I couldn’t help noticing that Piketty refers in an endnote (p. 654) to “the work of Adam Smith, Friedrich Hayek, and Kenneth Arrow and  Claude Debreu” apparently forgetting that the name of his famous countryman, winner of the Nobel Memorial Prize for Economics in 1983, is not Claude, but Gerard, Debreu. Perhaps Piketty confused Debreu with another eminent Frenchman Claude Debussy, but I hope that in the next printing of his book, Piketty will correct this unfortunate error.

UPDATE (5/29 at 9:46 EDT): Thanks to Kevin Donoghue for checking with Arthur Goldhammer, who translated Piketty’s book from the original French. Goldhammer took responsibility for getting Debreu’s first name wrong in the English edition. In the French edition, only Debreu’s last name was mentioned.

Never Reason from a Disequilibrium

One of Scott Sumner’s many contributions as a blogger has been to show over and over and over again how easy it is to lapse into fallacious economic reasoning by positing a price change and then trying to draw inferences about the results of the price change. The problem is that a price change doesn’t just happen; it is the result of some other change. There being two basic categories of changes (demand and supply) that can affect price, there are always at least two possible causes for a given price change. So, until you have specified the antecedent change responsible for the price change under consideration, you can’t work out the consequences of the price change.

In this post, I want to extend Scott’s insight in a slightly different direction, and explain how every economic analysis has to begin with a statement about the initial conditions from which the analysis starts. In particular, you need to be clear about the equilibrium position corresponding to the initial conditions from which you are starting. If you posit some change in the system, but your starting point isn’t an equilibrium, you have no way of separating out the adjustment to the change that you are imposing on the system from the change the system would be undergoing simply to reach the equilibrium toward which it is already moving, or, even worse, from the change the system would be undergoing if its movement is not toward equilibrium.

Every theoretical analysis in economics properly imposes a ceteris paribus condition. Unfortunately, the ubiquitous ceteris paribus condition comes dangerously close to rendering economic theory irrefutable, except perhaps in a statistical sense, because empirical refutations of the theory can always be attributed to changes that the theory abstracts from but that are present in the real world of our experience. An empirical model with a sufficient number of data points may be able to control for the changes in conditions that the theory holds constant, but the underlying theory is a comparison of equilibrium states (comparative statics), and it is quite a stretch to assume that the effects of perpetual disequilibrium can be treated as nothing but white noise. Austrians are right to be skeptical of econometric analysis; so was Keynes, for that matter. But skepticism need not imply nihilism.

Let me try to illustrate this principle by applying it to the Keynesian analysis of involuntary unemployment. In the General Theory Keynes argued that if aggregate demand is deficient, the likely result is an equilibrium with involuntary unemployment. The “classical” argument that Keynes disputed was that, in principle at least, involuntary unemployment could not persist, because unemployed workers, if only they would accept reduced money wages, would eventually find employment. Keynes denied this, arguing that even if workers did accept reduced money wages, the wage reductions would not get translated into reduced real wages. Instead, falling nominal wages would induce employers to cut prices by roughly the same percentage as the reduction in nominal wages, leaving real wages more or less unchanged, thereby nullifying the effectiveness of nominal-wage cuts, and, instead, fueling a vicious downward spiral of prices and wages.

In making this argument, Keynes didn’t dispute the neoclassical proposition that, with a given capital stock, the marginal product of labor declines as employment increases, implying that real wages have to fall for employment to be increased. His argument was about the nature of the labor-supply curve, labor supply, in Keynes’s view, being a function of both the real and the nominal wage, not, as in the neoclassical theory, only the real wage. Under Keynes’s “neoclassical” analysis, the problem with nominal-wage cuts is that they don’t do the job, because they lead to corresponding price cuts. The only way to reduce unemployment, Keynes insisted, is to raise the price level. With nominal wages constant, an increased price level would achieve the real-wage cut necessary for employment to be increased. And this is precisely how Keynes defined involuntary unemployment: the willingness of workers to increase the amount of labor actually supplied in response to a price level increase that reduces their real wage.
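The real-wage arithmetic underlying Keynes's definition can be sketched in a couple of lines (the wage and price figures are illustrative numbers of my own):

```python
# With the nominal wage W held fixed, a price-level increase accomplishes the
# real-wage cut that, on Keynes's argument, nominal-wage cuts cannot, because
# nominal-wage cuts are offset by roughly matching price cuts.
W = 20.0                   # nominal wage, held fixed
P = 1.00                   # initial price level
print(W / P)               # initial real wage: 20.0
P *= 1.05                  # a 5% increase in the price level
print(round(W / P, 2))     # real wage falls to 19.05
```

On Keynes's definition, workers are involuntarily unemployed precisely when more labor would be supplied at that lower real wage brought about by the higher price level.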

Interestingly, in trying to explain why nominal-wage cuts would fail to increase employment, Keynes suggested that the redistribution of income from workers to entrepreneurs associated with reduced nominal wages would tend to reduce consumption, thereby reducing, not increasing, employment. But if that is so, how is it that a reduced real wage, achieved via inflation, would increase employment? Why would the distributional effect of a reduced nominal, but unchanged real, wage be more adverse to employment than a reduced real wage, achieved, with a fixed nominal wage, by way of a price-level increase?

Keynes’s explanation for all this is confused. In chapter 19, where he makes the argument that money-wage cuts can’t eliminate involuntary unemployment, he presents a variety of reasons why nominal-wage cuts are ineffective, and it is usually not clear at what level of theoretical abstraction he is operating, and whether he is arguing that nominal-wage cuts would not work even in principle, or that, although nominal-wage cuts might succeed in theory, they would inevitably fail in practice. Even more puzzling, it is not clear whether he thinks that real wages have to fall to achieve full employment or that full employment could be restored by an increase in aggregate demand with no reduction in real wages. In particular, because Keynes doesn’t start his analysis from a full-employment equilibrium, and doesn’t specify the shock that moves the economy off its equilibrium position, we can only guess whether Keynes is talking about a shock that has reduced labor productivity or (more likely) a shock to entrepreneurial expectations (animal spirits) that has no direct effect on labor productivity.

There was a rhetorical payoff for Keynes in maintaining that ambiguity, because he wanted to present a “general theory” in which full employment is a special case. Keynes therefore emphasized that the labor market is not self-equilibrating by way of nominal-wage adjustments. That was a perfectly fine and useful insight: when the entire system is out of kilter, there is no guarantee that just letting the free market set prices will bring everything back into place. The theory of price adjustment is fundamentally a partial-equilibrium theory that isolates the disequilibrium of a single market, with all other markets in (approximate) equilibrium. There is no necessary connection between the adjustment process in a partial-equilibrium setting and the adjustment process in a full-equilibrium setting. The stability of a single market in disequilibrium does not imply the stability of the entire system of markets in disequilibrium. Keynes might have presented his “general theory” as a theory of disequilibrium, but he preferred (perhaps because he had no other tools to work with) to spell out his theory in terms of familiar equilibrium concepts: savings equaling investment and income equaling expenditure, leaving it ambiguous whether the failure to reach a full-employment equilibrium is caused by a real wage that is too high or an interest rate that is too high. Axel Leijonhufvud highlights the distinction between a disequilibrium in the real wage and a disequilibrium in the interest rate in an important essay “The Wicksell Connection” included in his book Information and Coordination.

Because Keynes did not commit himself on whether a reduction in the real wage is necessary for equilibrium to be restored, it is hard to assess his argument about whether, by accepting reduced money wages, workers could in fact reduce their real wages sufficiently to bring about full employment. Keynes’s argument that money-wage cuts accepted by workers would be undone by corresponding price cuts reflecting reduced production costs is hardly compelling. If the current level of money wages is too high for firms to produce profitably, it is not obvious why the reduced money wages paid by entrepreneurs would be entirely dissipated by price reductions, with none of the cost decline being reflected in increased profit margins. If wage cuts do increase profit margins, that would encourage entrepreneurs to increase output, potentially triggering an expansionary multiplier process. In other words, if the source of disequilibrium is that the real wage is too high, the real wage depending on both the nominal wage and price level, what is the basis for concluding that a reduction in the nominal wage would cause a change in the price level sufficient to keep the real wage at a disequilibrium level? Is it not more likely that the price level would fall no more than required to bring the real wage back to the equilibrium level consistent with full employment? The question is not meant as an expression of policy preference; it is a question about the logic of Keynes’s analysis.

Interestingly, present-day opponents of monetary stimulus (for whom “Keynesian” is a term of extreme derision) like to make a sort of Keynesian argument. Monetary stimulus, by raising the price level, reduces the real wage. That means that monetary stimulus is bad, because it is harmful to workers, whose interests, we all know, are the highest priority of many opponents of monetary stimulus – except perhaps the interests of rentiers living off the interest generated by their bond portfolios. Once again, the logic is less than compelling. Keynes believed that an increase in the price level could reduce the real wage, a reduction that, at least potentially, might be necessary for the restoration of full employment.

But here is my question: why would an increase in the price level reduce the real wage rather than raise money wages along with the price level? To answer that question, you need to have some idea of whether the current level of real wages is above or below the equilibrium level. If unemployment is high, there is at least some reason to think that the equilibrium real wage is less than the current level, which is why an increase in the price level would be expected to cause the real wage to fall, i.e., to move the actual real wage in the direction of equilibrium. But if the current real wage is about equal to, or even below, the equilibrium level, then why would one think that an increase in the price level would not also cause money wages to rise correspondingly? It seems more plausible, in the absence of a good reason to think otherwise, that inflation would cause real wages to fall only if real wages are above their equilibrium level.

Monetary Theory on the Neo-Fisherite Edge

The week before last, Noah Smith wrote a post “The Neo-Fisherite Rebellion” discussing, rather sympathetically I thought, the contrarian school of monetary thought emerging from the Great American Heartland, according to which, notwithstanding everything monetary economists since Henry Thornton have taught, high interest rates are inflationary and low interest rates deflationary. This view of the relationship between interest rates and inflation was advanced (but later retracted) by Narayana Kocherlakota, President of the Minneapolis Fed in a 2010 lecture, and was embraced and expounded with increased steadfastness by Stephen Williamson of Washington University in St. Louis and the St. Louis Fed in at least one working paper and in a series of posts over the past five or six months (e.g. here, here and here). And John Cochrane of the University of Chicago has picked up on the idea as well in two recent blog posts (here and here). Others seem to be joining the upstart school as well.

The new argument seems simple: given the Fisher equation, in which the nominal interest rate equals the real interest rate plus the (expected) rate of inflation, a central bank can meet its inflation target by setting a fixed nominal interest rate target consistent with its inflation target and keeping it there. Once the central bank sets its target, the long-run neutrality of money, implying that the real interest rate is independent of the nominal targets set by the central bank, ensures that inflation expectations must converge on rates consistent with the nominal interest rate target and the independently determined real interest rate (i.e., the real yield curve), so that the actual and expected rates of inflation adjust to ensure that the Fisher equation is satisfied. If the promise of the central bank to maintain a particular nominal rate over time is believed, the promise will induce a rate of inflation consistent with the nominal interest-rate target and the exogenous real rate.
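The mechanics of that argument reduce to one line of algebra. Treating the real rate as exogenous, a credibly pegged nominal rate determines expected inflation as a residual of the Fisher equation (the following is my own sketch of the logic, not anyone's formal model, and the numbers are illustrative):

```python
# Fisher equation: i = r + expected inflation. If the real rate r is pinned
# down independently of monetary policy, then a credible peg of the nominal
# rate i leaves expected inflation as the residual, pi = i - r.
def implied_inflation(nominal_rate, real_rate):
    """Inflation rate (in percent) implied by a credible nominal-rate peg."""
    return nominal_rate - real_rate

# Pegging the nominal rate at 4% with a 2% real rate implies 2% inflation;
# on the Neo-Fisherite view, raising the peg to 6% *raises* inflation to 4%.
print(implied_inflation(4.0, 2.0))  # 2.0
print(implied_inflation(6.0, 2.0))  # 4.0
```

Everything in the dispute turns on whether expected inflation really does adjust passively to the peg, or whether, as the conventional doctrine holds, the peg itself destabilizes inflation.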

The novelty of this way of thinking about monetary policy is that monetary theorists have generally assumed that the actual adjustment of the price level or inflation rate depends on whether the target interest rate is greater or less than the real rate plus the expected rate of inflation. When the target rate is greater than the real rate plus expected inflation, inflation goes down, and when it is less than the real rate plus expected inflation, inflation goes up. In the conventional treatment, the expected rate of inflation is momentarily fixed, and the (expected) real rate variable. In the Neo-Fisherite school, the (expected) real rate is fixed, and the expected inflation rate is variable. (Just as an aside, I would observe that the idea that expectations about the real rate of interest and expectations about the rate of inflation cannot both adjust in the short run is not derived from the limited cognitive capacity of economic agents; it can only be derived from the limited intellectual capacity of economic theorists.)

The heretical views expressed by Williamson and Cochrane and earlier by Kocherlakota have understandably elicited scorn and derision from conventional monetary theorists, whether Keynesian, New Keynesian, Monetarist or Market Monetarist. (Williamson having appropriated for himself the New Monetarist label, I regrettably could not preserve an appropriate symmetry in my list of labels for monetary theorists.) As a matter of fact, I wrote a post last December challenging Williamson’s reasoning in arguing that QE had caused a decline in inflation, though in his initial foray into uncharted territory, Williamson was actually making a narrower argument than the more general thesis that he has more recently expounded.

Although deep down, I have no great sympathy for Williamson’s argument, the counterarguments I have seen leave me feeling a bit, shall we say, underwhelmed. That’s not to say that I am becoming a convert to New Monetarism, but I am feeling that we have reached a point at which certain underlying gaps in monetary theory can’t be concealed any longer. To explain what I mean by that remark, let me start by reviewing the historical context in which the ruling doctrine governing central-bank operations via adjustments in the central-bank lending rate evolved. The primary (though historically not the first) source of the doctrine is Henry Thornton in his classic volume The Nature and Effects of the Paper Credit of Great Britain.

Even though Thornton focused on the policy of the Bank of England during the Napoleonic Wars, when Bank of England notes, not gold, were legal tender, his discussion was still in the context of a monetary system in which paper money was generally convertible into either gold or silver. Inconvertible banknotes – aka fiat money — were the exception not the rule. Gold and silver were what Nick Rowe would call alpha money. All other moneys were evaluated in terms of gold and silver, not in terms of a general price level (not yet a widely accepted concept). Even though Bank of England notes became an alternative alpha money during the restriction period of inconvertibility, that situation was generally viewed as temporary, the restoration of convertibility being expected after the war. The value of the paper pound was tracked by the sterling price of gold on the Hamburg exchange. Thus, Ricardo’s first published work was entitled The High Price of Bullion, in which he blamed the high sterling price of bullion at Hamburg on an overissue of banknotes by the Bank of England.

But to get back to Thornton, who was far more concerned with the mechanics of monetary policy than Ricardo, his great contribution was to show that the Bank of England could control the amount of lending (and money creation) by adjusting the interest rate charged to borrowers. If banknotes were depreciating relative to gold, the Bank of England could increase the value of their notes by raising the rate of interest charged on loans.

The point is that if you are a central banker and are trying to target the exchange rate of your currency with respect to an alpha currency, you can do so by adjusting the interest rate that you charge borrowers. Raising the interest rate will cause the exchange value of your currency to rise and reducing the interest rate will cause the exchange value to fall. And if you are operating under strict convertibility, so that you are committed to keep the exchange rate between your currency and an alpha currency at a specified par value, raising that interest rate will cause you to accumulate reserves payable in terms of the alpha currency, and reducing that interest rate will cause you to emit reserves payable in terms of the alpha currency.

So the idea that an increase in the central-bank interest rate tends to increase the exchange value of its currency, or, under a fixed-exchange rate regime, an increase in the foreign exchange reserves of the bank, has a history at least two centuries old, though the doctrine has not exactly been free of misunderstanding or confusion in the course of those two centuries. One of those misunderstandings was about the effect of a change in the central-bank interest rate, under a fixed-exchange rate regime. In fact, as long as the central bank is maintaining a fixed exchange rate between its currency and an alpha currency, changes in the central-bank interest rate don’t affect (at least as a first approximation) either the domestic money supply or the domestic price level; all that changes in the central-bank interest rate can accomplish is to change the bank’s holdings of alpha-currency reserves.

It seems to me that this long well-documented historical association between changes in the central-bank interest rates and the exchange value of currencies and the level of private spending is the basis for the widespread theoretical presumption that raising the central-bank interest rate target is deflationary and reducing it is inflationary. However, the old central-bank doctrine of the Bank Rate was conceived in a world in which gold and silver were the alpha moneys, and central banks – even central banks operating with inconvertible currencies – were beta banks, because the value of a central-bank currency was still reckoned, like the value of inconvertible Bank of England notes in the Napoleonic Wars, in terms of gold and silver.

In the Neo-Fisherite world, central banks rarely peg exchange rates against each other, and there is no longer any outside standard of value to which central banks even nominally commit themselves. In a world without the metallic standard of value in which the conventional theory of central banking developed, do the propositions about the effects of central-bank interest-rate setting still obtain? I am not so sure that they do, not with the analytical tools that we normally deploy when thinking about the effects of central-bank policies. Why not? Because, in a Neo-Fisherite world in which all central banks are alpha banks, I am not so sure that we really know what determines the value of this thing called fiat money. And if we don’t really know what determines the value of a fiat money, how can we really be sure that interest-rate policy works the same way in a Neo-Fisherite world that it used to work when the value of money was determined in relation to a metallic standard? (Just to avoid misunderstanding, I am not – repeat NOT — arguing for restoring the gold standard.)

Why do I say that we don’t know what determines the value of fiat money in a Neo-Fisherite world? Well, consider this. Almost three weeks ago I wrote a post in which I suggested that Bitcoins could be a massive bubble. My explanation for why Bitcoins could be a bubble is that they provide no real (i.e., non-monetary) service, so that their value is totally contingent on, and derived from (or so it seems to me, though I admit that my understanding of Bitcoins is partial and imperfect), the expectation of a positive future resale value. However, it seems certain that the resale value of Bitcoins must eventually fall to zero, so that backward induction implies that Bitcoins, inasmuch as they provide no real service, cannot retain a positive value in the present. On this reasoning, any observed value of a Bitcoin seems inexplicable except as an irrational bubble phenomenon.
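For concreteness, the backward-induction step can be put into a toy calculation (my own illustrative sketch, not anything from the original post): an asset that yields no real service is worth at each date only what it can be resold for at the next date, so a certain terminal value of zero propagates all the way back to the present.

```python
def resale_only_value(T, r=0.05, terminal_value=0.0):
    """Backward induction for an asset providing no real
    (non-monetary) service: its value at date t is just the
    discounted resale value at t+1, v_t = v_{t+1} / (1 + r).
    If the asset is certain to be worthless at date T, the zero
    unravels back to a zero value today."""
    v = terminal_value
    for _ in range(T):
        v = v / (1 + r)  # step back one period: v_t = v_{t+1} / (1 + r)
    return v

# With a zero terminal value, the present value is zero no matter
# how distant the end state is.
print(resale_only_value(T=1_000))  # 0.0
```

Note that the objections discussed below attack the premise of this little exercise, not its arithmetic: they question whether a known, finite terminal date exists at all.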

Most of the comments I received about that post challenged the relevance of the backward-induction argument. The challenges were mainly of two types: a) the end state, when everyone will certainly stop accepting a Bitcoin in exchange, is very, very far into the future and its date is unknown, and b) the backward-induction argument applies equally to every fiat currency, so my own reasoning, according to my critics, implies that the value of every fiat currency is just as much a bubble phenomenon as the value of a Bitcoin.

My response to the first objection is that even if the strict logic of the backward-induction argument is inconclusive, given the long and uncertain interval between now and the end state, the argument nevertheless suggests that the value of a Bitcoin is potentially very unsteady and vulnerable to sudden collapse. Those are not generally thought to be desirable attributes in a medium of exchange.

My response to the second objection is that fiat currencies are actually quite different from Bitcoins, because fiat currencies are accepted by governments in discharging the tax liabilities due to them. The discharge of a tax liability is a real (i.e. non-monetary) service, creating a distinct non-monetary demand for fiat currencies, thereby ensuring that fiat currencies retain value, even apart from being accepted as a medium of exchange.

That, at any rate, is my view, which I first heard from Earl Thompson (see his unpublished paper, “A Reformulation of Macroeconomic Theory,” pp. 23-25, for a derivation of the value of fiat money when the tax liability is a fixed proportion of income). Some other pretty good economists have held that view as well, like Abba Lerner, P. H. Wicksteed, and Adam Smith. Georg Friedrich Knapp also held that view, and, in his day, he was certainly well known, but I am unable to pass judgment on whether he was or wasn’t a good economist. I do know, however, that his views about money were famously misrepresented and caricatured by Ludwig von Mises. Yet there are other good economists (Hal Varian for one), apparently unaware of, or untroubled by, the backward-induction argument, who don’t think that acceptability in discharging tax liability is required to explain the value of fiat money.

Nor do I think that Thompson’s tax-acceptability theory of the value of money can stand entirely on its own, because it implies a kind of saw-tooth time profile of the price level: a fiat currency earning no liquidity premium would actually be appreciating between peak tax-collection dates and depreciating immediately following those dates. That pattern is not obviously consistent with observed price data, though I do recall that Thompson used to claim there is a lot of evidence that prices fall just before peak tax-collection dates. I don’t think that anyone has ever tried to combine the tax-acceptability theory with the empirical premise that currency (or base money) does in fact provide significant liquidity services. That, it seems to me, would be a worthwhile endeavor for an eager young researcher to undertake.
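To see why pure tax-backing implies a saw-tooth profile, here is a stylized simulation (my own construction, not Thompson’s derivation): if money earns no liquidity premium, holders will carry it between annual tax dates only if it appreciates at the market rate toward each tax-collection peak; once taxes are paid, the non-monetary demand lapses and the value drops back.

```python
def sawtooth_money_value(periods_per_year=12, years=3, r=0.06, v_peak=1.0):
    """Stylized value path of a pure tax-backed money with no
    liquidity premium. To be willingly held, the money must
    appreciate at the (per-period) market rate toward each annual
    tax-collection peak, where its value is pinned at v_peak by
    tax demand; right after the tax date the value falls back and
    the cycle repeats. Returns the per-period value series (the
    price level is its inverse, so it falls as value rises)."""
    per_period_r = r / periods_per_year
    path = []
    for _ in range(years):
        for m in range(periods_per_year):
            periods_to_peak = periods_per_year - 1 - m
            # Discount the pinned peak value back over the time remaining.
            path.append(v_peak / (1 + per_period_r) ** periods_to_peak)
    return path

path = sawtooth_money_value()
# Value rises within each year (appreciation toward the tax date)...
assert path[0] < path[11]
# ...then drops right after the tax date (depreciation), as Glasner notes.
assert path[12] < path[11]
```

The saw-tooth emerges purely from the timing of tax demand; adding a liquidity yield, as suggested above, would flatten the teeth, which is what makes the combined theory worth working out.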

What does all of this have to do with the Neo-Fisherite Rebellion? Well, if we don’t have a satisfactory theory of the value of fiat money at hand, which is what another very smart economist, Fischer Black (who, to my knowledge, never mentioned the tax-liability theory), thought, then the only explanation of the value of fiat money is that, like the value of a Bitcoin, it is whatever people expect it to be. And the rate of inflation is equally inexplicable, being just whatever it is expected to be. So in a Neo-Fisherite world, if the central bank announces that it is reducing its interest-rate target, the effect of the announcement depends entirely on what “the market” reads into the announcement. And that is exactly what Fischer Black believed. See his paper “Active and Passive Monetary Policy in a Neoclassical Model.”

I don’t say that Williamson and his Neo-Fisherite colleagues are correct. Nor have they, to my knowledge, related their arguments to Fischer Black’s work. What I do say (indeed this is a problem I raised almost three years ago in one of my first posts on this blog) is that existing monetary theories of the price level are unable to rule out Black’s result, because the behavior of the price level and inflation seems to depend, more than anything else, on expectations. And it is far from clear to me that there are any fundamentals in which these expectations can be grounded. If you impose the rational-expectations assumption, which is almost certainly wrong empirically, maybe you can argue that the central bank provides a focal point for expectations to converge on. The problem, of course, is that in the real world, expectations are all over the place, there being no fundamentals to force the convergence of expectations to a stable equilibrium value.

In other words, it’s just a mess, a bloody mess, and I do not like it, not one little bit.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
