Posts Tagged 'John Cochrane'

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:

[Dornbusch-Fischer chart: simulated paths of inflation (top panel) and unemployment (bottom panel) following a reduction in money growth]
Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:

[Chart: actual U.S. inflation and unemployment during the Volcker disinflation of the early 1980s]
A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics: whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, as people, fearful that cash would soon be unavailable, hoarded cash by liquidating assets before the tightening arrived. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, growth rates over the target range being associated with falling stock and commodity prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets, whereupon the great stock market rally of the 1980s took off and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.
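For readers who want to see the mechanics of that adaptive-expectations story, here is a minimal simulation in the spirit of the Dornbusch-Fischer exercise Krugman describes. The functional forms (partial-adjustment expectations, a constant-velocity demand link, an Okun’s-law relation) and all parameter values are my own illustrative assumptions, not taken from their textbook.

```python
# A toy adaptive-expectations disinflation: money growth is cut from 10% to 4%,
# expectations adapt gradually to realized inflation, and an Okun's-law link ties
# unemployment to the gap between real growth and its (zero) trend.  All numbers
# are illustrative assumptions, chosen only to reproduce the qualitative story:
# a temporary surge in unemployment buys a permanent reduction in inflation,
# with the somewhat messy oscillations visible in the Dornbusch-Fischer chart.

def disinflation(periods=40, cut_date=5, g_m_before=0.10, g_m_after=0.04,
                 u_star=0.06, lam=0.5, okun=0.4, slope=0.5):
    pi = pi_e = g_m_before          # start in a steady state: inflation = money growth
    u = u_star
    path = []
    for t in range(periods):
        g_m = g_m_before if t < cut_date else g_m_after
        pi_e += lam * (pi - pi_e)          # expectations adapt to last period's inflation
        u -= okun * (g_m - pi)             # real growth = money growth - inflation (constant velocity)
        pi = pi_e - slope * (u - u_star)   # expectations-augmented Phillips curve
        path.append((t, pi, u))
    return path

for t, pi, u in disinflation():
    print(f"t={t:2d}  inflation={pi:6.1%}  unemployment={u:6.1%}")
```

Run with these numbers, the cut in money growth pushes unemployment up by several points for a stretch of periods while inflation works its way down toward the new rate of money growth, after which unemployment drifts back toward its starting level.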

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.

John Cochrane, Meet Richard Lipsey and Kenneth Carlaw

Paul Krugman wrote an uncharacteristically positive post today about John Cochrane’s latest post, in which Cochrane dialed it down a bit after writing two rather heated posts (here and here) attacking Alan Blinder for a recent piece in the New York Review of Books, in which Blinder quoted Cochrane’s dismissive remark that Keynesian economics consists of fairy tales that haven’t been taught to graduate students since the 1960s. I don’t want to get into that fracas, but I was amused to read the following paragraphs at the end of Cochrane’s second post in the current series.

Thus, if you read Krugman’s columns, you will see him occasionally crowing about how Keynesian economics won, and how the disciples of Stan Fisher at MIT have spread out to run the world. He’s right. Then you see him complaining about how nobody in academia understands Keynesian economics. He’s right again.

Perhaps academic research ran off the rails for 40 years producing nothing of value. Social sciences can do that. Perhaps our policy makers are stuck with simple stories they learned as undergraduates; and, as has happened countless times before, new ideas will percolate up when the generation trained in the 1980s makes their way to the top of policy circles.

I think we can agree on something. If one wants to write about “what’s wrong with economics,” such a huge divide between academic research ideas and the ideas running our policy establishment is not a good situation.

The right way to address this is with models — written down, objective models, not pundit prognostications — and data. What accounts, quantitatively, for our experience?  I see old-fashioned Keynesianism losing because, having dramatically failed that test once, its advocates are unwilling to do so again, preferring a campaign of personal attack in the popular press. Models confront data in the pages of the AER, the JPE, the QJE, and Econometrica. If old-time Keynesianism really does account for the data, write it down and let’s see.

So Cochrane wants to take this bickering out of the realm of punditry and put the conflicting models to an objective test of how well they perform against the data. Sounds good to me, but I can’t help wondering whether Cochrane means to attribute the academic ascendancy of RBC/New Classical models to their having empirically outperformed competing models. If so, I am not aware that anyone else has made that claim, including Kartik Athreya, who wrote the book on the subject. (Here’s my take on the book.) Again, just wondering – I am not a macroeconometrician – but is there any study showing that RBC or DSGE models outperform old-fashioned Keynesian models in explaining macro time-series data?

But I am aware of, and have previously written about, a paper by Kenneth Carlaw and Richard Lipsey (“Does History Matter?: Empirical Analysis of Evolutionary versus Stationary Equilibrium Views of the Economy”) in which they show that time-series data for six OECD countries provide no evidence of the stylized facts about inflation and unemployment implied by RBC and New Keynesian theory. Here is the abstract from the Carlaw-Lipsey paper.

The evolutionary vision in which history matters is of an evolving economy driven by bursts of technological change initiated by agents facing uncertainty and producing long term, path-dependent growth and shorter-term, non-random investment cycles. The alternative vision in which history does not matter is of a stationary, ergodic process driven by rational agents facing risk and producing stable trend growth and shorter term cycles caused by random disturbances. We use Carlaw and Lipsey’s simulation model of non-stationary, sustained growth driven by endogenous, path-dependent technological change under uncertainty to generate artificial macro data. We match these data to the New Classical stylized growth facts. The raw simulation data pass standard tests for trend and difference stationarity, exhibiting unit roots and cointegrating processes of order one. Thus, contrary to current belief, these tests do not establish that the real data are generated by a stationary process. Real data are then used to estimate time-varying NAIRU’s for six OECD countries. The estimates are shown to be highly sensitive to the time period over which they are made. They also fail to show any relation between the unemployment gap, actual unemployment minus estimated NAIRU, and the acceleration of inflation. Thus there is no tendency for inflation to behave as required by the New Keynesian and earlier New Classical theory. We conclude by rejecting the existence of a well-defined short-run, negatively sloped Phillips curve, a NAIRU, a unique general equilibrium, short and long-run, a vertical long-run Phillips curve, and the long-run neutrality of money.

Cochrane, like other academic macroeconomists with an RBC/New Classical orientation, seems inordinately self-satisfied with the current state of modern macroeconomics, but curiously sensitive to, and defensive about, criticism from the unwashed masses. Rather than weigh in again with my own criticisms, let me close by quoting another abstract – this one from a paper (“Complexity Economics: A Different Framework for Economic Thought”) by Brian Arthur, certainly one of the smartest, and most technically capable, economists around.

This paper provides a logical framework for complexity economics. Complexity economics builds from the proposition that the economy is not necessarily in equilibrium: economic agents (firms, consumers, investors) constantly change their actions and strategies in response to the outcome they mutually create. This further changes the outcome, which requires them to adjust afresh. Agents thus live in a world where their beliefs and strategies are constantly being “tested” for survival within an outcome or “ecology” these beliefs and strategies together create. Economics has largely avoided this nonequilibrium view in the past, but if we allow it, we see patterns or phenomena not visible to equilibrium analysis. These emerge probabilistically, last for some time and dissipate, and they correspond to complex structures in other fields. We also see the economy not as something given and existing but forming from a constantly developing set of technological innovations, institutions, and arrangements that draw forth further innovations, institutions and arrangements.

Complexity economics sees the economy as in motion, perpetually “computing” itself — perpetually constructing itself anew. Where equilibrium economics emphasizes order, determinacy, deduction, and stasis, complexity economics emphasizes contingency, indeterminacy, sense-making, and openness to change. In this framework time, in the sense of real historical time, becomes important, and a solution is no longer necessarily a set of mathematical conditions but a pattern, a set of emergent phenomena, a set of changes that may induce further changes, a set of existing entities creating novel entities. Equilibrium economics is a special case of nonequilibrium and hence complexity economics, therefore complexity economics is economics done in a more general way. It shows us an economy perpetually inventing itself, creating novel structures and possibilities for exploitation, and perpetually open to response.

HT: Mike Norman

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.
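To spell out the arithmetic underlying that objection (the notation here is mine), the real rate is just the Fisher relation evaluated at the zero lower bound:

\[
r = i - \pi^{e} = 0\% - 2\% = -2\%,
\]

so the only ways to push the real rate lower are to push the nominal rate below zero, which the Fed cannot do, or to let expected inflation rise above 2%, which is precisely the possibility Cochrane does not discuss.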

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed it does, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka “secular stagnation”] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that the money will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders: what is the scientific basis for not funding those projects?

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was, let us say, in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?

Monetary Theory on the Neo-Fisherite Edge

The week before last, Noah Smith wrote a post, “The Neo-Fisherite Rebellion,” discussing, rather sympathetically I thought, the contrarian school of monetary thought emerging from the Great American Heartland, according to which, notwithstanding everything monetary economists since Henry Thornton have taught, high interest rates are inflationary and low interest rates deflationary. This view of the relationship between interest rates and inflation was advanced (but later retracted) by Narayana Kocherlakota, President of the Minneapolis Fed, in a 2010 lecture, and was embraced and expounded with increased steadfastness by Stephen Williamson of Washington University in St. Louis and the St. Louis Fed in at least one working paper and in a series of posts over the past five or six months (e.g., here, here and here). And John Cochrane of the University of Chicago has picked up on the idea as well in two recent blog posts (here and here). Others seem to be joining the upstart school as well.

The new argument seems simple: given the Fisher equation, in which the nominal interest rate equals the real interest rate plus the (expected) rate of inflation, a central bank can meet its inflation target by setting a fixed nominal interest rate target consistent with its inflation target and keeping it there. Once the central bank sets its target, the long-run neutrality of money, implying that the real interest rate is independent of the nominal targets set by the central bank, ensures that inflation expectations must converge on rates consistent with the nominal interest rate target and the independently determined real interest rate (i.e., the real yield curve), so that the actual and expected rates of inflation adjust to ensure that the Fisher equation is satisfied. If the promise of the central bank to maintain a particular nominal rate over time is believed, the promise will induce a rate of inflation consistent with the nominal interest-rate target and the exogenous real rate.
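Written out in symbols (my notation, not Williamson’s or Cochrane’s), the argument runs as follows. Start from the Fisher equation,

\[
i = r + \pi^{e}.
\]

If the central bank pegs the nominal rate at $i^{*}$, and long-run neutrality pins the real rate at some $\bar{r}$ independent of the peg, then the only variable left to adjust is expected inflation, which must converge on

\[
\pi^{e} \rightarrow i^{*} - \bar{r},
\]

so a permanently higher nominal-rate peg eventually means permanently higher inflation, and a permanently lower peg means lower inflation.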

The novelty of this way of thinking about monetary policy is that monetary theorists have generally assumed that the actual adjustment of the price level or inflation rate depends on whether the target interest rate is greater or less than the real rate plus the expected rate of inflation. When the target rate is greater than the real rate plus expected inflation, inflation goes down, and when it is less than the real rate plus expected inflation, inflation goes up. In the conventional treatment, the expected rate of inflation is momentarily fixed, and the (expected) real rate is variable. In the Neo-Fisherite school, the (expected) real rate is fixed, and the expected inflation rate is variable. (Just as an aside, I would observe that the idea that expectations about the real rate of interest and the inflation rate cannot both adjust in the short run is not derived from the limited cognitive capacity of economic agents; it can only be derived from the limited intellectual capacity of economic theorists.)
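The difference between the two closures can be made concrete in a toy calculation. The starting values, adjustment speeds, and the assumption that expectations adjust by simple partial adjustment are all illustrative, not drawn from Williamson’s or Cochrane’s models.

```python
# Toy contrast between the conventional and Neo-Fisherite closures of the Fisher
# equation i = r + pi_e when the central bank pegs the nominal rate at zero.
# Starting values and adjustment speeds are illustrative assumptions only.

def conventional(i_peg=0.00, r=0.01, pi_e=0.02, periods=8, speed=0.5):
    """Expected inflation is momentarily sticky; inflation falls when the peg
    exceeds r + pi_e and rises when it is below it (a cumulative process)."""
    path = []
    for _ in range(periods):
        pi_e -= speed * (i_peg - (r + pi_e))   # peg below the neutral rate -> inflation drifts up
        path.append(round(pi_e, 4))
    return path

def neo_fisherite(i_peg=0.00, r=0.01, pi_e=0.02, periods=8, speed=0.5):
    """The real rate is fixed; expected inflation converges on i_peg - r."""
    path = []
    for _ in range(periods):
        pi_e += speed * ((i_peg - r) - pi_e)   # expectations chase the Fisher-consistent rate
        path.append(round(pi_e, 4))
    return path

print("conventional closure: ", conventional())    # inflation moves cumulatively away from the peg
print("neo-fisherite closure:", neo_fisherite())   # inflation settles at i_peg - r = -1%
```

Under the conventional closure, holding the nominal rate fixed produces a cumulative movement of inflation away from the Fisher-consistent value; under the Neo-Fisherite closure, the same peg makes inflation converge on it.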

The heretical views expressed by Williamson and Cochrane and earlier by Kocherlakota have understandably elicited scorn and derision from conventional monetary theorists, whether Keynesian, New Keynesian, Monetarist or Market Monetarist. (Williamson having appropriated for himself the New Monetarist label, I regrettably could not preserve an appropriate symmetry in my list of labels for monetary theorists.) As a matter of fact, I wrote a post last December challenging Williamson’s reasoning in arguing that QE had caused a decline in inflation, though in his initial foray into uncharted territory, Williamson was actually making a narrower argument than the more general thesis that he has more recently expounded.

Although deep down, I have no great sympathy for Williamson’s argument, the counterarguments I have seen leave me feeling a bit, shall we say, underwhelmed. That’s not to say that I am becoming a convert to New Monetarism, but I am feeling that we have reached a point at which certain underlying gaps in monetary theory can’t be concealed any longer. To explain what I mean by that remark, let me start by reviewing the historical context in which the ruling doctrine governing central-bank operations via adjustments in the central-bank lending rate evolved. The primary (though historically not the first) source of the doctrine is Henry Thornton in his classic volume The Nature and Effects of the Paper Credit of Great Britain.

Even though Thornton focused on the policy of the Bank of England during the Napoleonic Wars, when Bank of England notes, not gold, were legal tender, his discussion was still in the context of a monetary system in which paper money was generally convertible into either gold or silver. Inconvertible banknotes – aka fiat money — were the exception not the rule. Gold and silver were what Nick Rowe would call alpha money. All other moneys were evaluated in terms of gold and silver, not in terms of a general price level (not yet a widely accepted concept). Even though Bank of England notes became an alternative alpha money during the restriction period of inconvertibility, that situation was generally viewed as temporary, the restoration of convertibility being expected after the war. The value of the paper pound was tracked by the sterling price of gold on the Hamburg exchange. Thus, Ricardo’s first published work was entitled The High Price of Bullion, in which he blamed the high sterling price of bullion at Hamburg on an overissue of banknotes by the Bank of England.

But to get back to Thornton, who was far more concerned with the mechanics of monetary policy than Ricardo, his great contribution was to show that the Bank of England could control the amount of lending (and money creation) by adjusting the interest rate charged to borrowers. If banknotes were depreciating relative to gold, the Bank of England could increase the value of its notes by raising the rate of interest charged on loans.

The point is that if you are a central banker and are trying to target the exchange rate of your currency with respect to an alpha currency, you can do so by adjusting the interest rate that you charge borrowers. Raising the interest rate will cause the exchange value of your currency to rise and reducing the interest rate will cause the exchange value to fall. And if you are operating under strict convertibility, so that you are committed to keep the exchange rate between your currency and an alpha currency at a specified par value, raising that interest rate will cause you to accumulate reserves payable in terms of the alpha currency, and reducing that interest rate will cause you to emit reserves payable in terms of the alpha currency.

So the idea that an increase in the central-bank interest rate tends to increase the exchange value of its currency, or, under a fixed-exchange rate regime, an increase in the foreign exchange reserves of the bank, has a history at least two centuries old, though the doctrine has not exactly been free of misunderstanding or confusion in the course of those two centuries. One of those misunderstandings was about the effect of a change in the central-bank interest rate, under a fixed-exchange rate regime. In fact, as long as the central bank is maintaining a fixed exchange rate between its currency and an alpha currency, changes in the central-bank interest rate don’t affect (at least as a first approximation) either the domestic money supply or the domestic price level; all that changes in the central-bank interest rate can accomplish is to change the bank’s holdings of alpha-currency reserves.

It seems to me that this long, well-documented historical association between changes in central-bank interest rates, on the one hand, and the exchange value of currencies and the level of private spending, on the other, is the basis for the widespread theoretical presumption that raising the central-bank interest-rate target is deflationary and reducing it is inflationary. However, the old central-bank doctrine of the Bank Rate was conceived in a world in which gold and silver were the alpha moneys, and central banks – even central banks operating with inconvertible currencies – were beta banks, because the value of a central-bank currency was still reckoned, like the value of inconvertible Bank of England notes in the Napoleonic Wars, in terms of gold and silver.

In the Neo-Fisherite world, central banks rarely peg exchange rates against each other, and there is no longer any outside standard of value to which central banks even nominally commit themselves. In a world without the metallic standard of value in which the conventional theory of central banking developed, do the propositions about the effects of central-bank interest-rate setting still obtain? I am not so sure that they do, not with the analytical tools that we normally deploy when thinking about the effects of central-bank policies. Why not? Because, in a Neo-Fisherite world in which all central banks are alpha banks, I am not so sure that we really know what determines the value of this thing called fiat money. And if we don’t really know what determines the value of a fiat money, how can we really be sure that interest-rate policy works the same way in a Neo-Fisherite world that it used to work when the value of money was determined in relation to a metallic standard? (Just to avoid misunderstanding, I am not – repeat NOT — arguing for restoring the gold standard.)

Why do I say that we don’t know what determines the value of fiat money in a Neo-Fisherite world? Well, consider this. Almost three weeks ago I wrote a post in which I suggested that Bitcoins could be a massive bubble. My explanation for why Bitcoins could be a bubble is that they provide no real (i.e., non-monetary) service, so that their value is totally contingent on, and derived from (or so it seems to me, though I admit that my understanding of Bitcoins is partial and imperfect), the expectation of a positive future resale value. However, it seems certain that the resale value of Bitcoins must eventually fall to zero, so that backward induction implies that Bitcoins, inasmuch as they provide no real service, cannot retain a positive value in the present. On this reasoning, any observed value of a Bitcoin seems inexplicable except as an irrational bubble phenomenon.

Most of the comments I received about that post challenged the relevance of the backward-induction argument. The challenges were mainly of two types: a) the end state, when everyone will certainly stop accepting a Bitcoin in exchange, is very, very far into the future and its date is unknown, and b) the backward-induction argument applies equally to every fiat currency, so my own reasoning, according to my critics, implies that the value of every fiat currency is just as much a bubble phenomenon as the value of a Bitcoin.

My response to the first objection is that even if the strict logic of the backward-induction argument is inconclusive, because of the long and uncertain time that will elapse between now and the end state, the argument nevertheless suggests that the value of a Bitcoin is potentially very unsteady and vulnerable to sudden collapse. Those are not generally thought to be desirable attributes in a medium of exchange.

My response to the second objection is that fiat currencies are actually quite different from Bitcoins, because fiat currencies are accepted by governments in discharging the tax liabilities due to them. The discharge of a tax liability is a real (i.e. non-monetary) service, creating a distinct non-monetary demand for fiat currencies, thereby ensuring that fiat currencies retain value, even apart from being accepted as a medium of exchange.

That, at any rate, is my view, which I first heard from Earl Thompson (see his unpublished paper, “A Reformulation of Macroeconomic Theory” pp. 23-25 for a derivation of the value of fiat money when tax liability is a fixed proportion of income). Some other pretty good economists have also held that view, like Abba Lerner, P. H. Wicksteed, and Adam Smith. Georg Friedrich Knapp also held that view, and, in his day, he was certainly well known, but I am unable to pass judgment on whether he was or wasn’t a good economist. But I do know that his views about money were famously misrepresented and caricatured by Ludwig von Mises. However, there are other good economists (Hal Varian for one), apparently unaware of, or untroubled by, the backward induction argument, who don’t think that acceptability in discharging tax liability is required to explain the value of fiat money.

Nor do I think that Thompson’s tax-acceptability theory of the value of money can stand entirely on its own, because it implies a kind of saw-tooth time profile of the price level: a fiat currency earning no liquidity premium would actually be appreciating between peak tax-collection dates and depreciating immediately after them. That pattern is not obviously consistent with observed price data, though I do recall that Thompson used to claim that there is a lot of evidence that prices fall just before peak tax-collection dates. I don’t think that anyone has ever tried to combine the tax-acceptability theory with the empirical premise that currency (or base money) does in fact provide significant liquidity services. That, it seems to me, would be a worthwhile endeavor for any eager young researcher to undertake.
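Purely to illustrate the saw-tooth profile just described, here is a toy calculation in which money is valued only for settling a tax bill that falls due every six periods; the cycle length, interest rate, and tax-date value are all invented for the illustration.

```python
# Toy saw-tooth price level under a pure tax-acceptability theory of fiat money.
# Money earns no liquidity premium and is valued only because it discharges a tax
# liability falling due every `tax_cycle` periods; all numbers are illustrative.

def price_level_path(periods=18, tax_cycle=6, interest=0.02, value_on_tax_date=1.0):
    path = []
    for t in range(periods):
        to_tax = (tax_cycle - t % tax_cycle) % tax_cycle   # 0 on a tax date
        # With no liquidity services, money must appreciate at the rate of interest
        # up to the tax date, so its current value is the discounted tax-date value.
        value_of_money = value_on_tax_date / (1 + interest) ** to_tax
        path.append(round(1 / value_of_money, 4))          # price level = 1 / value of money
    return path

print(price_level_path())
# Prices drift down as a tax date approaches and jump up just after it passes:
# the saw-tooth pattern, including the pre-tax-date price declines Thompson cited.
```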

What does all of this have to do with the Neo-Fisherite Rebellion? Well, if we don’t have a satisfactory theory of the value of fiat money at hand, which is what another very smart economist, Fischer Black (who, to my knowledge, never mentioned the tax-liability theory), thought, then the only explanation of the value of fiat money is that, like the value of a Bitcoin, it is whatever people expect it to be. And the rate of inflation is equally inexplicable, being just whatever it is expected to be. So in a Neo-Fisherite world, if the central bank announces that it is reducing its interest-rate target, the effect of the announcement depends entirely on what “the market” reads into the announcement. And that is exactly what Fischer Black believed. See his paper “Active and Passive Monetary Policy in a Neoclassical Model.”

I don’t say that Williamson and his Neo-Fisherite colleagues are correct. Nor have they, to my knowledge, related their arguments to Fischer Black’s work. What I do say (indeed this is a problem I raised almost three years ago in one of my first posts on this blog) is that existing monetary theories of the price level are unable to rule out his result, because the behavior of the price level and inflation seems to depend, more than anything else, on expectations. And it is far from clear to me that there are any fundamentals in which these expectations can be grounded. If you impose the rational expectations assumption, which is almost certainly wrong empirically, maybe you can argue that the central bank provides a focal point for expectations to converge on. The problem, of course, is that in the real world, expectations are all over the place, there being no fundamentals to force the convergence of expectations to a stable equilibrium value.

In other words, it’s just a mess, a bloody mess, and I do not like it, not one little bit.

On Multipliers, Ricardian Equivalence and Functioning Well

In my post yesterday, I explained why, if one believes, as do Robert Lucas and Robert Barro, that monetary policy can stimulate an economy in a downturn, it is easy to construct an argument that fiscal policy would do so as well. I hope that my post won’t cause anyone to conclude that real-business-cycle theory must be right that monetary policy is no more effective than fiscal policy. I suppose that there is that risk, but I can’t worry about every weird idea floating around in the blogosphere. Instead, I want to think out loud a bit about fiscal multipliers and Ricardian equivalence.

I am inspired to do so by something that John Cochrane wrote on his blog defending Robert Lucas from Paul Krugman’s charge that Lucas didn’t understand Ricardian equivalence. Here’s what Cochrane, explaining what Ricardian equivalence means, had to say:

So, according to Paul [Krugman], “Ricardian Equivalence,” which is the theorem that stimulus does not work in a well-functioning economy, fails, because it predicts that a family who takes out a mortgage to buy a $100,000 house would reduce consumption by $100,000 in that very year.

Cochrane was a little careless in defining Ricardian equivalence as a theorem about stimulus, when it’s really a theorem about the equivalence of the effects of present and future taxes on spending. But that’s just a minor slip. What I found striking about Cochrane’s statement was something else: that little qualifying phrase “in a well-functioning economy,” which Cochrane seems to have inserted as a kind of throat-clearing remark, the sort of aside that people are supposed to hear but not really pay much attention to, yet which can sometimes be quite revealing, usually unintentionally, in its own way.
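To be clear about what that equivalence claim actually says, here is the theorem in its barest two-period form (my notation, not Cochrane’s). A household facing incomes $y_1, y_2$, lump-sum taxes $\tau_1, \tau_2$, and a market interest rate $r$ chooses consumption subject to

\[
c_1 + \frac{c_2}{1+r} = (y_1 - \tau_1) + \frac{y_2 - \tau_2}{1+r}.
\]

If the government cuts $\tau_1$ by $\Delta$, borrows to cover the shortfall, and repays by raising $\tau_2$ by $(1+r)\Delta$, the right-hand side is unchanged:

\[
\Delta - \frac{(1+r)\Delta}{1+r} = 0.
\]

With the lifetime budget constraint unaffected, the household’s chosen $c_1$ and $c_2$ are unaffected too; the timing of (lump-sum) taxes does not matter, which is what the equivalence of present and future taxes means.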

What is so striking about those five little words, “in a well-functioning economy”? Well, just this. Why, in a well-functioning economy, would anyone care whether a stimulus works or not? A well-functioning economy doesn’t need any stimulus, so why would you even care whether it works or not, much less prove a theorem to show that it doesn’t? (I apologize for the implicit Philistinism of that rhetorical question; I’m just engaging in a little rhetorical excess to make my point a little more colorfully.)

So if a well-functioning economy doesn’t require any stimulus, and if a stimulus wouldn’t work in a well-functioning economy, what does that tell us about whether a stimulus works (or would work) in an economy that is not functioning well? Not a whole lot. Thus, the bread-and-butter models that economists use, models of how an economy functions when there are no frictions, expectations are rational, and markets clear, are guaranteed to imply that there are no multipliers and that Ricardian equivalence holds. This is the world of a single, unique, and stable equilibrium. If you exogenously change any variable in the system, the system will snap back to a new equilibrium in which all variables have optimally adjusted to whatever exogenous change you have subjected the system to. All conventional economic analysis, whether comparative statics or dynamic adjustment, is built on the assumption of a unique and stable equilibrium to which all economic variables inevitably return when subjected to any exogenous shock. This is the indispensable core of economic theory, but it is not the whole of economic theory.

Keynes had a vision of what could go wrong with an economy: entrepreneurial pessimism — a dampening of animal spirits — would cause investment to flag; the rate of interest would not (or could not) fall enough to revive investment; people would try to shift out of assets into cash, causing a cumulative contraction of income, expenditure and output. In such circumstances, spending by government could replace the investment spending no longer being undertaken by discouraged entrepreneurs, at least until entrepreneurial expectations recovered. This is a vision not of a well-functioning economy, but of a dysfunctional one, but Keynes was able to describe it in terms of a simplified model, essentially what has come down to us as the Keynesian cross. In this little model, you can easily calculate a multiplier as the reciprocal of the marginal propensity to save out of disposable income.
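The calculation alluded to at the end of that paragraph is just the standard textbook algebra (nothing here is specific to Keynes’s own presentation):

\[
Y = C + I + G, \qquad C = a + cY \quad (0 < c < 1),
\]

so that

\[
Y = \frac{a + I + G}{1 - c} \qquad \text{and} \qquad \frac{\Delta Y}{\Delta G} = \frac{1}{1-c} = \frac{1}{s},
\]

where $s = 1 - c$ is the marginal propensity to save out of (in this barest version, untaxed) disposable income.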

But packaging Keynes’s larger vision into the four corners of the Keynesian cross diagram, or even the slightly more realistic IS-LM diagram, misses the essence of Keynes’s vision — the volatility of entrepreneurial expectations and their susceptibility to unpredictable mood swings that overwhelm any conceivable equilibrating movements in interest rates. A numerical calculation of the multiplier in the simplified Keynesian models is not particularly relevant, because the real goal is not to reach an equilibrium within a system of depressed entrepreneurial expectations, but to create conditions in which entrepreneurial expectations bounce back from their depressed state. As I like to say, expectations are fundamental.

Unlike a well-functioning economy with a unique equilibrium, a not-so-well functioning economy may have multiple equilibria corresponding to different sets of expectations. The point of increased government spending is then not to increase the size of government, but to restore entrepreneurial confidence by providing assurance that if they increase production, they will have customers willing and able to buy the output at prices sufficient to cover their costs.

Ricardian equivalence assumes that expectations of future income are independent of tax and spending decisions in the present, because, in a well-functioning economy, there is but one equilibrium path for future output and income. But if the economy is not functioning well, expectations of future income, and therefore actual future income, may depend on current decisions about spending and taxation. No matter what Ricardian equivalence says, a stimulus may work by shifting the economy to a different, higher path of future output and income than the one it now happens to be on, in which case present taxes may not be equivalent to future taxes, after all.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
