
Justice Scalia and the Original Meaning of Originalism


(I almost regret writing this post because it took a lot longer to write than I expected and I am afraid that I have ventured too deeply into unfamiliar territory. But having expended so much time and effort on this post, I must admit to being curious about what people will think of it.)

I resist the temptation to comment on Justice Scalia’s character beyond one observation: a steady stream of irate outbursts may have secured his status as a right-wing icon and burnished his reputation as a minor literary stylist, but his eruptions brought no credit to him or to the honorable Court on which he served.

But I will comment at greater length on the judicial philosophy, originalism, which he espoused so tirelessly. The first point to make, in discussing originalism, is that there are at least two concepts of originalism that have been advanced. The first and older concept is that the provisions of the US Constitution should be understood and interpreted as the framers of the Constitution intended those provisions to be understood and interpreted. The task of the judge, in interpreting the Constitution, would then be to reconstruct the collective or shared state of mind of the framers and, having ascertained that state of mind, to interpret the provisions of the Constitution in accord with that collective or shared state of mind.

A favorite originalist example is the “cruel and unusual punishment” provision of the Eighth Amendment to the Constitution. Originalists dismiss all arguments that capital punishment is cruel and unusual, because the authors of the Eighth Amendment could not have believed capital punishment to be cruel and unusual. If they had believed that, why, having passed the Eighth Amendment, did the first Congress proceed to impose the death penalty for treason, counterfeiting, and other offenses in 1790? So it seems obvious that the authors of the Eighth Amendment did not intend to ban capital punishment. If so, originalists argue, the “cruel and unusual” provision of the Eighth Amendment can provide no ground for ruling that capital punishment violates the Eighth Amendment.

There are a lot of problems with the original-intent version of originalism, the most obvious being the impossibility of attributing an unambiguous intention to the 39 delegates to the Constitutional Convention who signed the final document. The Constitutional text that emerged from the Convention was a compromise among many competing views and interests, and it did not necessarily conform to the intentions of any of the delegates, much less all of them. True, James Madison was the acknowledged author of the Bill of Rights, so if we are parsing the Eighth Amendment, we might, in theory, focus exclusively on what he understood the Eighth Amendment to mean. But focusing on Madison alone would be problematic, because Madison actually opposed adding a Bill of Rights to the original Constitution; he introduced the Bill of Rights as amendments in the first Congress only because the Constitution would not have been ratified without an understanding that the Bill of Rights he had opposed would be adopted as amendments to the Constitution. The inherent ambiguity in the notion of intention, even in the case of a single individual acting out of mixed, if not conflicting, motives (an ambiguity compounded when action is undertaken collectively) causes the notion of original intent to dissolve into nothingness when one tries to apply it in practice.

Realizing that trying to determine the original intent of the authors of the Constitution (including the Amendments thereto) is a fool’s errand, many originalists, including Justice Scalia, tried to salvage the doctrine by shifting its focus from the inscrutable intent of the Framers to the objective meaning that a reasonable person would have attached to the provisions of the Constitution when it was ratified. Because the provisions of the Constitution are expressed in either ordinary words or legal terms, the meaning that would reasonably have been attached to those provisions can supposedly be ascertained by consulting the contemporary sources, whether dictionaries or legal treatises, in which those words and terms were defined. It is this original meaning that, according to Scalia, must remain forever inviolable, because to change the meaning of provisions of the Constitution would allow unelected judges to amend the Constitution covertly, evading the amendment process spelled out in Article V and thereby nullifying the principle of a written constitution that constrains the authority and powers of all branches of government. Instead of being limited by the Constitution, judges not bound by the original meaning arrogate to themselves an unchecked power to impose their own values on the rest of the country.

To return to the Eighth Amendment, Scalia would say that the meaning attached to the term “cruel and unusual” when the Eighth Amendment was passed was clearly not so broad that it prohibited capital punishment. Otherwise, how could Congress, having voted to adopt the Eighth Amendment, proceed to make counterfeiting and treason and several other federal offenses capital crimes? Of course that’s a weak argument, because Congress, like any other representative assembly, is under no obligation or constraint to act consistently. It’s well known that democratic decision-making need not be consistent, and just because a general principle is accepted doesn’t mean that the principle will not be violated in specific cases. A written Constitution is supposed to impose some discipline on democratic decision-making for just that reason. But there was no mechanism in place to prevent such inconsistency, judicial review of Congressional enactments not having become part of the Constitutional fabric until John Marshall’s 1803 opinion in Marbury v. Madison made judicial review, quite contrary to the intention of many of the Framers, an organic part of the American system of governance.

Indeed, in 1798, less than ten years after the Bill of Rights was adopted, Congress enacted the Alien and Sedition Acts, which, I am sure even Justice Scalia would have acknowledged, violated the First Amendment prohibition against abridging the freedom of speech and the press. To be sure, the Congress that passed the Alien and Sedition Acts was not the same Congress that passed the Bill of Rights, but one would hardly think that the original meaning of abridging freedom of speech and the press had been forgotten in the intervening decade. Nevertheless, to uphold his version of originalism, Justice Scalia would have had either to argue that the original meaning of the First Amendment had been forgotten or to acknowledge that one can’t simply infer from the actions of a contemporaneous or nearly contemporaneous Congress what the original meaning of a provision of the Constitution was, because it is clearly possible that the actions of Congress were contrary to some supposed original meaning of the provisions of the Constitution.

Be that as it may, for purposes of the following discussion, I will stipulate that we can ascertain an objective meaning that a reasonable person would have attached to the provisions of the Constitution at the time it was ratified. What I want to examine is Scalia’s idea that it is an abuse of judicial discretion for a judge to assign a meaning to any Constitutional term or provision that is different from that original meaning. To show what is wrong with Scalia’s doctrine, I must first explain that Scalia’s doctrine is based on the legal philosophy known as legal positivism. Whether Scalia realized that he was a legal positivist I don’t know, but it’s clear that Scalia was taking the view that the validity and legitimacy of a law or a legal provision or a legal decision (including a Constitutional provision or decision) derives from an authority empowered to make law, and that no one other than an authorized law-maker or sovereign is empowered to make law.

According to legal positivism, all law, including Constitutional law, is understood as an exercise of will – a command. What distinguishes a legal command from, say, a mugger’s command to a victim to turn over his wallet is that the mugger is not a sovereign. Not only does the sovereign get what he wants, the sovereign, by definition, gets it legally; we are not only forced — compelled — to obey, but, to add insult to injury, we are legally obligated to obey. And morality has nothing to do with law or legal obligation. That’s the philosophical basis of legal positivism to which Scalia, wittingly or unwittingly, subscribed.

Luckily for us, we Americans live in a country in which the people are sovereign, but the power of the people to exercise their will collectively was delimited and circumscribed by the Constitution ratified in 1788. Under positivist doctrine, the sovereign people in creating the government of the United States of America laid down a system of rules whereby the valid and authoritative expressions of the will of the people would be given the force of law and would be carried out accordingly. The rule by which the legally valid, authoritative, command of the sovereign can be distinguished from the command of a mere thug or bully is what the legal philosopher H. L. A. Hart called a rule of recognition. In the originalist view, the rule of recognition requires that any judicial judgment accord with the presumed original understanding of the provisions of the Constitution when the Constitution was ratified, thereby becoming the authoritative expression of the sovereign will of the people, unless that original understanding has subsequently been altered by way of the amendment process spelled out in Article V of the Constitution. What Scalia and other originalists are saying is that any interpretation of a provision of the Constitution that conflicts with the original meaning of that provision violates the rule of recognition and is therefore illegitimate. Hence, Scalia’s simmering anger at decisions of the court that he regarded as illegitimate departures from the original meaning of the Constitution.

But legal positivism is not the only theory of law. F. A. Hayek, who, despite his good manners, somehow became a conservative and libertarian icon a generation before Scalia, subjected legal positivism to withering criticism in volume one of Law, Legislation and Liberty. But the classic critique of legal positivism was written a little over a half century ago by Ronald Dworkin, in his essay “Is Law a System of Rules?” (aka “The Model of Rules”). Dworkin’s main argument was that no system of rules can be sufficiently explicit and detailed to cover all possible fact patterns that would have to be adjudicated by a judge. Legal positivists view the exercise of discretion by judges as an exercise of personal will authorized by the sovereign in cases in which no legal rule exactly fits the facts of a case. Dworkin argued that, rather than an imposition of judicial will authorized by the sovereign, the exercise of judicial discretion is an application of the deeper principles relevant to the case, allowing the judge to determine which, among the many possible rules that could be applied to the facts of the case, best fits with the totality of the circumstances, including prior judicial decisions, that the judge must take into account. According to Dworkin, law and the legal system as a whole are not an expression of sovereign will, but a continuing articulation of principles in terms of which specific rules of law must be understood, interpreted, and applied.

The meaning of a legal or Constitutional provision can’t be fixed at a single moment, because, like all social institutions, meaning evolves and develops organically. Not being an expression of the sovereign will, the meaning of a legal term or provision cannot be identified by a putative rule of recognition – e.g., the original-meaning doctrine – that freezes the meaning of the term at a particular moment in time. It is not true, as Scalia and originalists argue, that conceding that the meaning of Constitutional terms and provisions can change and evolve allows unelected judges to substitute their will for the sovereign will enshrined when the Constitution was ratified. When a judge acknowledges that the meaning of a term has changed, the judge does so because that new meaning has already been foreshadowed in earlier cases with which his decision in the case at hand must comport. There is always a danger that the reasoning of a judge is faulty, but faulty reasoning can also beset judges claiming to apply the original meaning of a term, as Chief Justice Taney did in his infamous Dred Scott opinion, in which Taney argued that the original meaning of the term “property” included property in human beings.

Here is an example of how a change in meaning may be required by a change in our understanding of a concept. It may not be the best example to shed light on the legal issues, but it is the one that occurs to me as I write this. About a hundred years ago, Bertrand Russell and Alfred North Whitehead were writing one of the great philosophical works of the twentieth century, Principia Mathematica. Their objective was to prove that all of mathematics could be reduced to pure logic. It was a grand and heroic effort that they undertook, and their work will remain a milestone in the history of philosophy. If Russell and Whitehead had succeeded in their effort to reduce mathematics to logic, it could properly be said that mathematics is really the same as logic, and the meaning of the word “mathematics” would be no different from the meaning of the word “logic.” But if the meaning of mathematics were indeed the same as that of logic, it would not be the result of Russell and Whitehead having willed “mathematics” and “logic” to mean the same thing, Russell and Whitehead being possessed of no sovereign power to determine the meaning of “mathematics.” Whether mathematics is really the same as logic depends on whether all of mathematics can be logically deduced from a set of axioms. No matter how much Russell and Whitehead wanted mathematics to be reducible to logic, the factual question of whether mathematics can be reduced to logic has an answer, and the answer is completely independent of what Russell and Whitehead wanted it to be.

Unfortunately for Russell and Whitehead, the Viennese mathematician Kurt Gödel came along some years after they completed the third and final volume of their masterpiece and proved an “incompleteness theorem” showing that mathematics could not be reduced to logic – mathematics is therefore not the same as logic – because in any consistent system of axioms rich enough to express arithmetic, some true propositions of arithmetic will be unprovable within the system. The meaning of mathematics is therefore demonstrably not the same as the meaning of logic. This difference in meaning had to be discovered; it could not be willed.

Actually, it was Humpty Dumpty who famously anticipated the originalist theory that meaning is conferred by an act of will.

“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

In Humpty Dumpty’s doctrine, meaning is determined by a sovereign master. In originalist doctrine, the sovereign master is the presumed will of the people when the Constitution and the subsequent Amendments were ratified.

So the question whether capital punishment is “cruel and unusual” can’t be answered, as Scalia insisted, simply by invoking a rule of recognition that freezes the meaning of “cruel and unusual” at the presumed meaning it had in 1790, because the point of a rule of recognition is to identify the sovereign will that is given the force of law, while the meaning of “cruel and unusual” does not depend on anyone’s will. If a judge reaches a decision based on a meaning of “cruel and unusual” different from the supposed original meaning, the judge is not abusing his discretion; the judge is engaged in judicial reasoning. The reasoning may be good or bad, right or wrong, but judicial reasoning is not rendered illegitimate just because it assigns a meaning to a term different from the supposed original meaning. The test of judicial reasoning is how well it accords with the totality of judicial opinions and relevant principles from which the judge can draw in supporting his reasoning. Invoking a supposed original meaning of what “cruel and unusual” meant to Americans in 1789 does not tell us how to understand the meaning of “cruel and unusual,” just as the question whether logic and mathematics are synonymous cannot be answered by insisting that Russell and Whitehead were right in thinking that mathematics and logic are the same thing. (I note for the record that I personally have no opinion about whether capital punishment violates the Eighth Amendment.)

One reason meanings change is because circumstances change. The meaning of freedom of the press and freedom of speech may have been perfectly clear in 1789, but our conception of what is protected by the First Amendment has certainly expanded since the First Amendment was ratified. As new media for conveying speech have been introduced, the courts have brought those media under the protection of the First Amendment. Scalia made a big deal of joining with the majority in Texas v. Johnson, a 1989 case in which the conviction of a flag burner was overturned. Scalia liked to cite that case as proof of his fidelity to the text of the Constitution; while pouring scorn on the flag burner, Scalia announced that despite his righteous desire to exact a terrible retribution from the bearded weirdo who burned the flag, he had no choice but to follow, heroically in his own estimation, the text of the Constitution.

But flag-burning is certainly a form of symbolic expression, and it is far from obvious that the original meaning of the First Amendment included symbolic expression. To be sure, some forms of symbolic speech were recognized as speech in the eighteenth century, but it could be argued that the original meaning of freedom of speech and the press in the First Amendment was understood narrowly. The compelling reason for affording flag-burning First Amendment protection is not that flag-burning was covered by the original meaning of the First Amendment, but that a line of cases has gradually expanded the notion of what activities are included under what the First Amendment calls “speech.” That is the normal process by which law changes and meanings change, incremental adjustments taking into account unforeseen circumstances, eventually leading judges to expand the meanings ascribed to old terms, because the expanded meanings comport better with an accumulation of precedents and the relevant principles on which judges have relied in earlier cases.

But perhaps the best example of how changes in meaning emerge organically from our efforts to cope with changing and unforeseen circumstances, rather than being the willful impositions of a higher authority, is provided by originalism itself, because “originalism” was originally about the original intention of the Framers of the Constitution. It was only when it became widely accepted that the original intention of the Framers was not something that could be ascertained that people like Antonin Scalia decided to change the meaning of “originalism,” so that it was no longer about the original intention of the Framers, but about the original meaning of the Constitution when it was ratified. So what we have here is a perfect example of how the meaning of a well-understood term came to be changed, because the original meaning of the term was found to be problematic. And who was responsible for this change in meaning? Why, the very same people who insist that it is forbidden to tamper with the original meaning of the terms and provisions of the Constitution. But they had no problem in changing the meaning of their own doctrine of Constitutional interpretation. Do I blame them for changing the meaning of the originalist doctrine? Not one bit. But if originalists were only marginally more introspective than they seem to be, they might have realized that changes in meaning are perfectly normal and legitimate, especially when trying to give concrete meaning to abstract terms in a way that best fits in with the entire tradition of judicial interpretation embodied in the totality of all previous judicial decisions. That is the true task of a judge, not a pointless quest for original meaning.

Paul Krugman Suffers a Memory Lapse

Paul Krugman, who is very upset with Republicans on both sides of the Trump divide, ridiculed Mitt Romney’s attack on Trump for being a protectionist. Romney warned that if Trump implemented his proposed protectionist policies, the result would likely be a trade war and a recession. Now I totally understand Krugman’s frustration with what’s happening inside the Republican Party; it’s not a pretty sight. But Krugman seems just a tad too eager to find fault with Romney, especially since the danger that a trade war could trigger a recession, while perhaps overblown, is hardly delusional, and, as Krugman ought to recall, is a danger that Democrats have also warned against. (I’ll come back to that point later.) Here’s the quote that got Krugman’s back up:

If Donald Trump’s plans were ever implemented, the country would sink into prolonged recession. A few examples. His proposed 35 percent tariff-like penalties would instigate a trade war and that would raise prices for consumers, kill our export jobs and lead entrepreneurs and businesses of all stripes to flee America.

Krugman responded:

After all, doesn’t everyone know that protectionism causes recessions? Actually, no. There are reasons to be against protectionism, but that’s not one of them.

Think about the arithmetic (which has a well-known liberal bias). Total final spending on domestically produced goods and services is

Total domestic spending + Exports – Imports = GDP

Now suppose we have a trade war. This will cut exports, which other things equal depresses the economy. But it will also cut imports, which other things equal is expansionary. For the world as a whole, the cuts in exports and imports will by definition be equal, so as far as world demand is concerned, trade wars are a wash.

Actually, Krugman knows better than to argue that the comparative-statics response to a parameter change (especially a large change) can be inferred from an accounting identity. The accounting identity always holds, but the equilibrium position does change, and you can’t just assume that the equilibrium rate of spending is unaffected by the parameter change or by the adjustment path that follows the parameter change. So Krugman’s assertion that a trade war cannot cause a recession depends on an implicit assumption that a trade war would be accompanied by a smooth reallocation of resources from producing tradable to producing non-tradable goods, and that the wealth losses from the depreciation of specific human and non-human capital invested in the tradable-goods sector would have small repercussions on aggregate demand. That might be true, but the bigger the trade war and the more rounds of reciprocal retaliation, the greater the danger of substantial wealth losses and other disruptions. The fall in oil prices over the past year or two was supposed to be a good thing for the world economy. I think that for a lot of reasons reduced oil prices are, on balance, a good thing, but we also have reason to believe that the decline had negative effects, especially on financial institutions holding a lot of assets sensitive to the price of oil. A trade war would have all the negatives of a steep decline in oil prices, but none of the positives.
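To make the comparative-statics point concrete, here is a minimal numerical sketch (all magnitudes invented for illustration, not drawn from any data) of how the identity can hold in every state of the world while equilibrium GDP nonetheless falls:

```python
# A toy illustration, with invented numbers, of why the identity
# GDP = domestic spending + exports - imports cannot settle the question.

def gdp(domestic_spending, exports, imports):
    # The accounting identity: true by construction in any state of the world.
    return domestic_spending + exports - imports

# Hypothetical pre-trade-war equilibrium.
before = gdp(domestic_spending=900.0, exports=150.0, imports=150.0)

# Trade war: exports and imports fall by the same amount, so the trade
# balance is unchanged -- a "wash" as far as the identity is concerned.
# But suppose the wealth losses on capital specific to the tradable-goods
# sector knock an assumed 3% off domestic spending as well.
wealth_loss = 0.03
after = gdp(domestic_spending=900.0 * (1 - wealth_loss),
            exports=100.0, imports=100.0)

print(f"GDP before: {before:.0f}, after: {after:.0f}")
# GDP before: 900, after: 873 -- the identity held throughout,
# yet the equilibrium rate of spending, and hence GDP, fell.
```

Nothing in the identity rules out the assumed 3% hit; whether it happens is a question about the new equilibrium and the adjustment path, not about arithmetic. Krugman continues: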

But didn’t the Smoot-Hawley tariff cause the Great Depression? No. There’s no evidence at all that it did. Yes, trade fell a lot between 1929 and 1933, but that was almost entirely a consequence of the Depression, not a cause. (Trade actually fell faster during the early stages of the 2008 Great Recession than it did after 1929.) And while trade barriers were higher in the 1930s than before, this was partly a response to the Depression, partly a consequence of deflation, which made specific tariffs (i.e., tariffs that are stated in dollars per unit, not as a percentage of value) loom larger.

I certainly would not claim to understand fully the effects of the Smoot-Hawley tariff, the question of effects being largely an empirical one that I haven’t studied, but I’m not sure that the profession has completely figured out those effects either. I know that Doug Irwin, who wrote the book on the Smoot-Hawley tariff and whose judgment I greatly respect, doesn’t think that the Smoot-Hawley tariff was a cause of the Great Depression, though he does think it made the Depression worse than it would otherwise have been. It certainly was not the chief cause, and I am not even saying that it was a leading cause, but there is certainly a respectable argument to be made that it played a bigger role in the Depression than even Irwin acknowledges.

In brief, the argument is that there was a lot of international debt – especially allied war loans, German war reparations, and German local-government borrowing during the 1920s. To be able to make their scheduled debt payments, Germany and other debtor nations had to run trade surpluses. Increased tariffs on imported goods meant that, under the restored gold standard of the late 1920s, to run the export surpluses necessary to meet their debt obligations, debtor nations had to reduce their domestic wage levels sufficiently to overcome the rising trade barriers. Germany, of course, was the country most severely affected, and the prospect of German default undoubtedly undermined the solvency of many financial institutions, in Europe and America, with German debt on their balance sheets. In other words, the Smoot-Hawley tariff intensified deflationary pressure and financial instability during the Great Depression, notwithstanding the tendency of tariffs to increase prices on protected goods.

Krugman takes a parting shot at Romney:

Protectionism was the only reason he gave for believing that Trump would cause a recession, which I think is kind of telling: the GOP’s supposedly well-informed, responsible adult, trying to save the party, can’t get basic economics right at the one place where economics is central to his argument.

I’m not sure what other reason there is to think that Trump would cause a recession. He is proposing to cut taxes by a lot, and to increase military spending by a lot without cutting entitlements. So given that his fiscal policy seems to be calculated to increase the federal deficit by a lot, what reason, besides starting a trade war, is there to think that Trump would cause a recession? And as I said, right or wrong, Romney is hardly alone in thinking that trade wars can cause recessions. Indeed, Romney didn’t even mention the Smoot-Hawley tariff, but Krugman evidently forgot the classic exchange between Al Gore and the previous incarnation of protectionist populist outrage in an anti-establishment billionaire candidate for President:

GORE I’ve heard Mr. Perot say in the past that, as the carpenters says, measure twice and cut once. We’ve measured twice on this. We have had a test of our theory and we’ve had a test of his theory. Over the last five years, Mexico’s tariffs have begun to come down because they’ve made a unilateral decision to bring them down some, and as a result there has been a surge of exports from the United States into Mexico, creating an additional 400,000 jobs, and we can create hundreds of thousands of more if we continue this trend. We know this works. If it doesn’t work, you know, we give six months notice and we’re out of it. But we’ve also had a test of his theory.

PEROT When?

GORE In 1930, when the proposal by Mr. Smoot and Mr. Hawley was to raise tariffs across the board to protect our workers. And I brought some pictures, too.

[Larry] KING You’re saying Ross is a protectionist?

GORE This is, this is a picture of Mr. Smoot and Mr. Hawley. They look like pretty good fellows. They sounded reasonable at the time; a lot of people believed them. The Congress passed the Smoot-Hawley Protection Bill. He wants to raise tariffs on Mexico. They raised tariffs, and it was one of the principal causes, many economists say the principal cause, of the Great Depression in this country and around the world. Now, I framed this so you can put it on your wall if you want to.

You can watch it here.

Currency Depreciation and Monetary Expansion Redux

Last week Frances Coppola and I exchanged posts about competitive devaluation. Frances chided me for favoring competitive devaluation, which, in her view, accomplishes nothing in a world of fiat currencies, because exchange rates don’t change. Say the US devalues the dollar by 10% against the pound, and Britain devalues the pound by 10% against the dollar; it’s as if nothing happened. In reply, I pointed out that if the competitive devaluation is achieved by monetary expansion (the US buying pounds with dollars to drive up the value of the pound and the UK buying dollars with pounds to drive up the value of the dollar), the result must be increased prices in both the US and the UK. Frances responded that our disagreement was just a semantic misunderstanding, because she was talking about competitive devaluation in the absence of monetary expansion; so it’s all good.
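For what it’s worth, here is a toy quantity-theory sketch (numbers invented, price levels crudely assumed proportional to money stocks) of why mutual devaluation achieved by monetary expansion is not “as if nothing happened”:

```python
# Both countries expand their money stocks by 10% in an attempt to
# depreciate. Under the crude assumption that price levels are
# proportional to money stocks, the exchange rate ends up unchanged,
# but prices and nominal spending rise in both countries.

m_us = m_uk = 100.0   # hypothetical money stocks
p_us = p_uk = 1.0     # price levels, assumed proportional to money

m_us, p_us = m_us * 1.10, p_us * 1.10   # US expansion
m_uk, p_uk = m_uk * 1.10, p_uk * 1.10   # UK expansion

print(f"dollar/pound rate (via PPP): {p_us / p_uk:.2f}")  # 1.00: unchanged
print(f"US and UK price levels: {p_us:.2f}, {p_uk:.2f}")  # both 10% higher
```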

I am, more or less, happy with that resolution of our disagreement, but I am not quite persuaded that the disagreement between us is merely semantic, as Frances seems conflicted about Hawtrey’s argument, carried out in the context of a gold standard, which served as my proof text for the proposition that competitive devaluation really is expansionary. On the one hand, she seems to distinguish between the expansionary effect of competitive devaluation relative to gold – Hawtrey’s case – and the beggar-my-neighbor effect of competitive devaluation of fiat currencies relative to each other; on the other hand, she also intimates that even Hawtrey got it wrong in arguing that competitive devaluation is expansionary. Now, much as I admire Hawtrey, I have no problem with criticizing him; it just seems that Frances hasn’t decided whether she does – or doesn’t – agree with him.

But what I want to do in this post is not to argue with Frances, though some disagreements may be impossible to cover up; I just want to explain the relationship between competitive devaluation and monetary expansion.

First some context. One of the reasons that I — almost exactly four years ago – wrote my post about Hawtrey and competitive devaluations (aka currency wars) is that critics of quantitative easing had started to make the argument that the real point of quantitative easing was to gain a competitive advantage over other countries by depreciating – or devaluing – their currencies. What I was trying to show was that if a currency is being depreciated by monetary expansion (aka quantitative easing), then, as Frances now seems – but I’m still not sure – ready to concede, the combination of monetary expansion and currency devaluation has a net expansionary effect on the whole world, and the critics of quantitative easing are wrong. Because the competitive devaluation argument has so often been made together with a criticism of quantitative easing, I assumed, carelessly it appears, that in criticizing my post, Frances was disagreeing with my support of currency depreciation in the context of monetary expansion and quantitative easing.

With that explanatory preface out of the way, let’s think about how to depreciate a fiat currency on the foreign exchange markets. A market-clearing exchange rate between two fiat currencies can be determined in two ways (though there is often a little of both in practice): 1) a currency peg and 2) a floating rate. Under a currency peg, one or both countries are committed to buying and selling the other currency in unlimited quantities at the pegged (official) rate. If neither country is prepared to buy or sell its currency in unlimited quantities at the pegged rate, the peg is not a true peg, because the peg will not withstand a sufficient shift in the relative market demands for the currencies. If the market demand is inconsistent with the quasi-peg, either the pegged rate will cease to be a market-clearing rate, with a rationing system imposed while the appearance of a peg is maintained, or the exchange rate will be allowed to float to clear the market. A peg can be one-sided or two-sided, but a two-sided peg is possible only so long as both countries agree on the exchange rate to be pegged; if they disagree, the system goes haywire. To use Nick Rowe’s terminology, the typical case of a currency peg involves an alpha (or dominant, or reserve) currency which is taken as a standard and a beta currency which is made convertible into the alpha currency at a rate chosen by the issuer of the beta currency.
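To see what the commitment to “unlimited quantities” amounts to, here is a stylized sketch of a one-sided peg facing a run; the pegged rate, reserve level, and order flows are all invented for the example:

```python
# The issuer of the beta currency must sell the alpha currency at the
# pegged rate in whatever quantity the market demands. Defending against
# sales of the beta currency drains its alpha-currency reserves; when they
# run out, the peg gives way (or rationing replaces market clearing).

PEG = 0.13        # hypothetical: alpha units per beta unit
reserves = 60.0   # hypothetical alpha-currency reserves

net_beta_sold = [50.0, 80.0, 120.0, 200.0, 300.0]  # a worsening run

for period, beta_sold in enumerate(net_beta_sold, start=1):
    alpha_owed = beta_sold * PEG   # alpha the bank must pay out at the peg
    if alpha_owed > reserves:
        print(f"Period {period}: reserves exhausted; the peg cannot "
              f"withstand the shift in demand.")
        break
    reserves -= alpha_owed
    print(f"Period {period}: paid out {alpha_owed:.1f}, "
          f"reserves left {reserves:.1f}")
```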

With floating currencies, the market is cleared by adjustment of the exchange rate rather than by currency purchases or sales by the monetary authority to maintain the peg. In practice, monetary authorities generally do buy and sell their currencies in the market — sometimes with, and sometimes without, an exchange-rate target — so the operation of actual foreign exchange markets lies somewhere in between the two poles of currency pegs and floating rates.

What does this tell us about currency depreciation? First, it is possible for a country to devalue its currency against another currency to which its currency is pegged by changing the peg unilaterally. If a peg is one-sided, i.e., a beta currency is tied to an alpha, the issuer of the beta currency chooses the peg unilaterally. If the peg is two-sided, then the peg cannot be changed unilaterally; the two currencies are merely different denominations of a single currency, and a unilateral change in the peg means that the common currency has been abandoned and replaced by two separate currencies.

So what happens if a beta currency pegged to an alpha currency, e.g., the Hong Kong dollar, which is pegged to the US dollar, is devalued? Say Hong Kong has an unemployment problem and attributes the problem to Hong Kong wages being too high for its exports to compete in world markets. Hong Kong decides to solve the problem by devaluing its dollar from 13 cents to 10 cents. Would the devaluation be expansionary or contractionary for the rest of the world?

Hong Kong is the paradigmatic small open economy. Its export prices are quoted in US dollars determined in world markets in which HK is a small player, so the prices of HK exports quoted in US dollars don’t change, but in HK dollars the prices rise by 30%. Suddenly, HK exporters become super-profitable, and hire as many workers as they can to increase output. Hong Kong’s unemployment problem is solved.
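The arithmetic behind the 30% figure, using the exchange rates given above and a hypothetical one-dollar export price:

```python
usd_per_hkd_old = 0.13    # 13 US cents per HK dollar
usd_per_hkd_new = 0.10    # devalued to 10 US cents

export_price_usd = 1.00   # world price of an HK export, fixed in US dollars

hkd_price_old = export_price_usd / usd_per_hkd_old   # ~7.69 HK dollars
hkd_price_new = export_price_usd / usd_per_hkd_new   # 10.00 HK dollars

print(f"HK-dollar export price rises by {hkd_price_new / hkd_price_old - 1:.0%}")
# 30% -- while wages and other costs are still fixed in HK dollars,
# hence the export boom.
```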

(Brief digression. There are those who reject this reasoning, because it supposedly assumes that Hong Kong workers suffer from money illusion. If workers are unemployed because their wages are too high relative to the Hong Kong producer price level, why don’t they accept a cut in nominal wages? We don’t know. But if they aren’t willing to accept a nominal-wage cut, why do they allow themselves to be tricked into accepting a real-wage cut by way of a devaluation, unless they are suffering from money illusion? And we all know that it’s irrational to suffer from money illusion, because money is neutral. The question is a good question, but the answer is that the argument for monetary neutrality and for the absence of money illusion presumes a comparison between two equilibrium states. But the devaluation analysis above did not start from an equilibrium; it started from a disequilibrium. So the analysis can’t be refuted by saying that it implies that workers suffer from money illusion.)

The result of the Hong Kong export boom and corresponding increase in output and employment is that US dollars will start flowing into Hong Kong as payment for all those exports. So the next question is what happens to those dollars. With no change in the demand of Hong Kong residents to hold US dollars, they will presumably want to exchange their US dollars for Hong Kong dollars, so that the quantity of Hong Kong dollars held by Hong Kong residents will increase. Because domestic income and expenditure in Hong Kong is rising, some of the new Hong Kong dollars will probably be held, but some will be spent. The increased spending as a result of rising incomes and a desire to convert some of the increased cash holdings into other assets will spill over into increased purchases by Hong Kong residents of imports and foreign assets. The increase in domestic income and expenditure and the increase in import prices will inevitably cause an increase in prices measured in HK dollars.

Thus, insofar as income, expenditure and prices are rising in Hong Kong, the immediate real exchange rate advantage resulting from devaluation will dissipate, though not necessarily completely, as the HK prices of non-tradables including labor services are bid up in response to the demand increase following devaluation. The increase in HK prices and increased spending by HK residents on imported goods will have an expansionary effect on the rest of the world (albeit a small one because Hong Kong is a small open economy). That’s the optimistic scenario.

But there is also a pessimistic scenario that was spelled out by Max Corden in his classic article on exchange-rate protection. In this scenario, the HK monetary authority either reduces the quantity of HK dollars to offset the increase in HK dollars caused by its export surplus, or it increases the demand for HK dollars to match the increase in the quantity of HK dollars. It can reduce the quantity of HK dollars by engaging in open-market sales of domestic securities in its portfolio, and it can increase the demand for HK dollars by increasing the required reserves that HK banks must hold against the HK dollars (either deposits or banknotes) that they create. Alternatively, the monetary authority could pay interest on the reserves held by HK banks at the central bank as a way of increasing the amount of HK dollars demanded. By eliminating the excess supply of HK dollars through one or more of these methods, the central bank prevents the increase in HK spending and the reduction in net exports that would otherwise have occurred in response to the HK devaluation. That was the great theoretical insight of Corden’s analysis: the beggar-my-neighbor effect of devaluation is not caused by the devaluation, but by the monetary policy that prevents the increase in domestic income associated with devaluation from spilling over into increased expenditure. This can only be accomplished by a monetary policy that deliberately creates a chronic excess demand for cash, an excess demand that can only be satisfied by way of an export surplus.
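Here is a schematic sketch, with invented magnitudes, of the difference sterilization makes: the export surplus brings new HK dollars into existence, and sterilization takes them back out (or, equivalently, raises the demand to hold them), so the excess demand for cash persists:

```python
money_held_by_public = 1000.0   # hypothetical stock of HK dollars
export_surplus_usd = 30.0       # hypothetical US dollars earned by exporters
usd_per_hkd = 0.10

# Exporters convert their dollar earnings; the monetary authority issues
# new HK dollars in exchange for the US dollars it adds to its reserves.
new_hkd = export_surplus_usd / usd_per_hkd   # 300 new HK dollars
money_held_by_public += new_hkd

# Unsterilized, the larger cash holdings get spent, raising HK incomes,
# prices, and imports. Sterilized -- open-market sales mopping up the new
# cash (or higher reserve requirements / interest on reserves raising the
# demand to hold it) -- the public stays short of the cash it wants, and
# the export surplus persists.
sterilized = True
if sterilized:
    money_held_by_public -= new_hkd

print(f"HK dollars held by the public: {money_held_by_public:.0f}")
# 1000 if sterilized (no expansionary spillover), 1300 if not.
```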

The effect (though just second-order) of the HK policy on US prices can also be determined, because the policy of the HK monetary authority involves an increase in its demand to hold US FX reserves. If it chooses to hold the additional dollar reserves in actual US dollars, the increase in the demand for US base money will, ceteris paribus, cause the US price level to fall. Alternatively, if the HK monetary authority chooses to hold its dollar reserves in the form of US Treasuries, the yield on those Treasuries will tend to fall. A reduced yield on Treasuries will increase the desired holdings of dollars, also implying a reduced US price level. Of course, the US is capable of nullifying the deflationary effect of HK currency manipulation by monetary expansion; the point is that the HK policy will have a (slight) deflationary effect on the US unless it is counteracted.

If I were writing a textbook, I would say that it is left as an exercise for the reader to work out the analysis of devaluation in the case of floating currencies. So if you feel like stopping here, you probably won’t be missing very much. But just to cover all the bases, I will go through the argument quickly. If a country wants to drive down the floating exchange rate between its currency and another currency, the monetary authority can buy the foreign currency in exchange for its own currency in the FX markets. It’s actually not necessary to intervene directly in FX markets to do this; issuing more currency by open-market operations (aka quantitative easing) would also work, but the effect in FX markets will show up more quickly with direct intervention than if the expansion is carried out by open-market purchases. So in the simplest case, currency depreciation is actually just another term for monetary expansion. However, the link between monetary expansion and currency depreciation can be broken if a central bank simultaneously buys the foreign currency with new issues of its own currency while making open-market sales of assets to mop up the home currency issued while intervening in the FX market. Alternatively, it can intervene in the FX market while imposing increased reserve requirements on banks, thereby forcing them to hold the newly issued currency, or while paying banks a high enough interest rate on reserves held at the central bank that they willingly hold the newly issued currency.

So, it is my contention that there is no such thing as pure currency depreciation without monetary expansion. If currency depreciation is to be achieved without monetary expansion, the central bank must also simultaneously either carry out open-market sales to mop up the currency issued in the process of driving down the exchange rate of the currency, or impose reserve requirements on banks, or pay interest on bank reserves, thereby creating an increased demand for the additional currency that was issued to drive down the exchange value of the home currency.

Competitive Devaluation Plus Monetary Expansion Does Create a Free Lunch

I want to begin this post by saying that I’m flattered by, and grateful to, Frances Coppola for the first line of her blog post yesterday. But – and I note that imitation is the sincerest form of flattery – I fear I have to take issue with her over competitive devaluation.

Frances quotes at length from a quotation from Hawtrey’s Trade Depression and the Way Out that I used in a post I wrote almost four years ago. Hawtrey explained why competitive devaluation in the 1930s was – and in my view still is – not a problem (except under extreme assumptions, which I will discuss at the end of this post). Indeed, I called competitive devaluation a free lunch, providing her with a title for her post. Here’s the passage that Frances quotes:

This competitive depreciation is an entirely imaginary danger. The benefit that a country derives from the depreciation of its currency is in the rise of its price level relative to its wage level, and does not depend on its competitive advantage. If other countries depreciate their currencies, its competitive advantage is destroyed, but the advantage of the price level remains both to it and to them. They in turn may carry the depreciation further, and gain a competitive advantage. But this race in depreciation reaches a natural limit when the fall in wages and in the prices of manufactured goods in terms of gold has gone so far in all the countries concerned as to regain the normal relation with the prices of primary products. When that occurs, the depression is over, and industry is everywhere remunerative and fully employed. Any countries that lag behind in the race will suffer from unemployment in their manufacturing industry. But the remedy lies in their own hands; all they have to do is to depreciate their currencies to the extent necessary to make the price level remunerative to their industry. Their tardiness does not benefit their competitors, once these latter are employed up to capacity. Indeed, if the countries that hang back are an important part of the world’s economic system, the result must be to leave the disparity of price levels partly uncorrected, with undesirable consequences to everybody. . . .

The picture of an endless competition in currency depreciation is completely misleading. The race of depreciation is towards a definite goal; it is a competitive return to equilibrium. The situation is like that of a fishing fleet threatened with a storm; no harm is done if their return to a harbor of refuge is “competitive.” Let them race; the sooner they get there the better. (pp. 154-57)

Here’s Frances’s take on Hawtrey and me:

The highlight “in terms of gold” is mine, because it is the key to why Glasner is wrong. Hawtrey was right in his time, but his thinking does not apply now. We do not value today’s currencies in terms of gold. We value them in terms of each other. And in such a system, competitive devaluation is by definition beggar-my-neighbour.

Let me explain. Hawtrey defines currency values in relation to gold, and advertises the benefit of devaluing in relation to gold. The fact that gold is the standard means there is no direct relationship between my currency and yours. I may devalue my currency relative to gold, but you do not have to: my currency will be worth less compared to yours, but if the medium of account is gold, this does not matter since yours will still be worth the same amount in terms of gold. Assuming that the world price of gold remains stable, devaluation therefore principally affects the DOMESTIC price level.  As Hawtrey says, there may additionally be some external competitive advantage, but this is not the principal effect and it does not really matter if other countries also devalue. It is adjusting the relationship of domestic wages and prices in terms of gold that matters, since this eventually forces down the price of finished goods and therefore supports domestic demand.

Conversely, in a floating fiat currency system such as we have now, if I devalue my currency relative to yours, your currency rises relative to mine. There may be a domestic inflationary effect due to import price rises, but we do not value domestic wages or the prices of finished goods in terms of other currencies, so there can be no relative adjustment of wages to prices such as Hawtrey envisages. Devaluing the currency DOES NOT support domestic demand in a floating fiat currency system. It only rebalances the external position by making imports relatively more expensive and exports relatively cheaper.

This difference is crucial. In a gold standard system, devaluing the currency is a monetary adjustment to support domestic demand. In a floating fiat currency system, it is an external adjustment to improve competitiveness relative to other countries.

Actually, Frances did not quote the entire passage from Hawtrey that I reproduced in my post, and Frances would have done well to quote from, and to think carefully about, what Hawtrey said in the paragraphs preceding the ones she quoted. Here they are:

When Great Britain left the gold standard, deflationary measures were everywhere resorted to. Not only did the Bank of England raise its rate, but the tremendous withdrawals of gold from the United States involved an increase of rediscounts and a rise of rates there, and the gold that reached Europe was immobilized or hoarded. . . .

The consequence was that the fall in the price level continued. The British price level rose in the first few weeks after the suspension of the gold standard, but then accompanied the gold price level in its downward trend. This fall of prices calls for no other explanation than the deflationary measures which had been imposed. Indeed what does demand explanation is the moderation of the fall, which was on the whole not so steep after September 1931 as before.

Yet when the commercial and financial world saw that gold prices were falling rather than sterling prices rising, they evolved the purely empirical conclusion that a depreciation of the pound had no effect in raising the price level, but that it caused the price level in terms of gold and of those currencies in relation to which the pound depreciated to fall.

For any such conclusion there was no foundation. Whenever the gold price level tended to fall, the tendency would make itself felt in a fall in the pound concurrently with the fall in commodities. But it would be quite unwarrantable to infer that the fall in the pound was the cause of the fall in commodities.

On the other hand, there is no doubt that the depreciation of any currency, by reducing the cost of manufacture in the country concerned in terms of gold, tends to lower the gold prices of manufactured goods. . . .

But that is quite a different thing from lowering the price level. For the fall in manufacturing costs results in a greater demand for manufactured goods, and therefore the derivative demand for primary products is increased. While the prices of finished goods fall, the prices of primary products rise. Whether the price level as a whole would rise or fall it is not possible to say a priori, but the tendency is toward correcting the disparity between the price levels of finished products and primary products. That is a step towards equilibrium. And there is on the whole an increase of productive activity. The competition of the country which depreciates its currency will result in some reduction of output from the manufacturing industry of other countries. But this reduction will be less than the increase in the country’s output, for if there were no net increase in the world’s output there would be no fall of prices.

So Hawtrey was refuting precisely the argument raised by Frances. Because the value of gold was not stable after Britain left the gold standard and depreciated its currency, the deflationary effect in other countries was mistakenly attributed to the British depreciation. But Hawtrey points out that this reasoning was backwards. The fall in prices in the rest of the world was caused by deflationary measures that were increasing the demand for gold and causing prices in terms of gold to continue to fall, as they had been since 1929. It was the fall in prices in terms of gold that was causing the pound to depreciate, not the other way around.

Frances identifies an important difference between an international system of fiat currencies in which currency values are determined in relationship to each other in foreign exchange markets and a gold standard in which currency values are determined relative to gold. However, she seems to be suggesting that currency values in a fiat money system affect only the prices of imports and exports. But that can’t be so, because if the prices of imports and exports are affected, then the prices of the goods that compete with imports and exports must also be affected. And if the prices of tradable goods are affected, then the prices of non-tradables will also — though probably with a lag — eventually be affected as well. Of course, insofar as relative prices before the change in currency values were not in equilibrium, one can’t predict that all prices will adjust proportionately after the change.

To make the point in more abstract terms, the principle of purchasing power parity (PPP) operates under both a gold standard and a fiat money standard, and one can’t just assume that the gold standard has some special property that allows PPP to hold, while PPP is somehow disabled under a fiat currency system. Absent an explanation of why PPP doesn’t hold in a floating fiat currency system, the assertion that devaluing a currency (i.e., driving down the exchange value of one currency relative to other currencies) “is an external adjustment to improve competitiveness relative to other countries” is baseless.
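To put the PPP point in symbols (with invented numbers): letting e be the nominal exchange rate in foreign currency per unit of home currency, the real exchange rate is q = e × P_home / P_foreign, and it is q, not e, that tends back toward its equilibrium level under a gold standard and a fiat standard alike. A minimal sketch:

```python
def real_exchange_rate(e, p_home, p_foreign):
    # q = e * P_home / P_foreign: the relative price of home goods.
    return e * p_home / p_foreign

q0 = real_exchange_rate(e=1.00, p_home=100.0, p_foreign=100.0)  # 1.00

# A 10% nominal devaluation with price levels momentarily unchanged
# lowers the real rate...
q1 = real_exchange_rate(e=0.90, p_home=100.0, p_foreign=100.0)  # 0.90

# ...but as tradable and, with a lag, non-tradable home prices are bid up,
# the real rate drifts back toward where it started.
q2 = real_exchange_rate(e=0.90, p_home=111.0, p_foreign=100.0)  # ~1.00

print(q0, q1, round(q2, 2))
```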

I would also add a semantic point about this part of Frances’s argument:

We do not value today’s currencies in terms of gold. We value them in terms of each other. And in such a system, competitive devaluation is by definition beggar-my-neighbour.

Unfortunately, Frances falls into the common trap of believing that a definition actually tells us something about the real world, when in fact a definition tells us no more than what meaning is supposed to be attached to a word. The real world is invariant with respect to our definitions; our definitions convey no information about reality. So for Frances to say – apparently with the feeling that she is thereby proving her point – that competitive devaluation is by definition beggar-my-neighbour is completely uninformative about what happens in the world; she is merely informing us about how she chooses to define the words she is using.

Frances goes on to refer to a graph, taken from Gavyn Davies in the Financial Times, concerning a speech made by Stanley Fischer about research done by Fed staff economists showing that the 20% appreciation in the dollar over the past 18 months has reduced the rate of US inflation by as much as 1% and is projected to cause US GDP in three years to be about 3% lower than it would have been without dollar appreciation.

Frances focuses on these two comments by Gavyn. First:

Importantly, the impact of the higher exchange rate does not reverse itself, at least in the time horizon of this simulation – it is a permanent hit to the level of GDP, assuming that monetary policy is not eased in the meantime.

And then:

According to the model, the annual growth rate should have dropped by about 0.5-1.0 per cent by now, and this effect should increase somewhat further by the end of this year.

Then, Frances continues:

But of course this assumes that the US does not ease monetary policy further. Suppose that it does?

The hit to net exports shown on the above graph is caused by imports becoming relatively cheaper and exports relatively more expensive as other countries devalue. If the US eased monetary policy in order to devalue the dollar and support nominal GDP, the relative prices of imports and exports would rebalance – to the detriment of those countries attempting to export to the US.

What Frances overlooks is that by easing monetary policy to support nominal GDP, the US, aside from moderating or reversing the increase in its real exchange rate, would have raised total US aggregate demand, causing US income and employment to increase as well. Increased US income and employment would have increased US demand for imports (and for the products of American exporters), thereby reducing US net exports and increasing aggregate demand in the rest of the world. That was Hawtrey’s argument for why competitive devaluation causes an increase in total world demand. Frances continues with a description of the predicament of the countries affected by US currency devaluation:

They have three choices: they respond with further devaluation of their own currencies to support exports, they impose import tariffs to support their own balance of trade, or they accept the deflationary shock themselves. The first is the feared “competitive devaluation” – exporting deflation to other countries through manipulation of the currency; the second, if widely practised, results in a general contraction of global trade, to everyone’s detriment; and you would think that no government would willingly accept the third.

But, as Hawtrey showed, competitive devaluation is not a problem. Depreciating your currency cushions the fall in nominal income and aggregate demand. If aggregate demand is kept stable, then the increased output, income, and employment associated with a falling exchange rate will spill over into a demand for the exports of other countries and an increase in the home demand for exportable home products. So it’s a win-win situation.

However, the Fed has permitted passive monetary tightening over the last eighteen months, and in December 2015 embarked on active monetary tightening in the form of interest rate rises. Davies questions the rationale for this, given the extraordinary rise in the dollar REER and the growing evidence that the US economy is weakening. I share his concern.

And I share his concern, too. So what are we even arguing about? Equally troubling, passive tightening has reduced US demand for imports and for US exportable products, thereby indirectly reducing aggregate demand in the rest of the world.

Although currency depreciation generally tends to increase the home demand for imports and for exportables, there are in fact conditions under which the general rule that competitive devaluation is expansionary for all countries may be violated. In a number of previous posts (e.g., this, this, this, this and this) about currency manipulation, I have explained that when currency depreciation is undertaken along with a contractionary monetary policy, the terms-of-trade effect predominates without any countervailing effect on aggregate demand. If a country depreciates its exchange rate by intervening in foreign-exchange markets, buying foreign currencies with its own currency, thereby raising the value of foreign currencies relative to its own currency, it is also increasing the quantity of the domestic currency in the hands of the public. Increasing the quantity of domestic currency tends to raise domestic prices, thereby reversing, though probably with a lag, the effect on the currency's real exchange rate. To prevent the real exchange rate from returning to its previous level, the monetary authority must sterilize the issue of domestic currency with which it purchased foreign currencies. This can be done by open-market sales of assets by the central bank, or by imposing increased reserve requirements on banks, thereby forcing banks to hold the new currency that had been created to depreciate the home currency.
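To make the accounting concrete, here is a minimal sketch, in Python, of the two cases (the balance-sheet numbers are hypothetical, not any actual central bank's):

```python
# Minimal sketch of exchange-rate intervention accounting.
# Hypothetical numbers; sterilize=True corresponds to the
# exchange-rate-protection case described above.

balance_sheet = {
    "fx_reserves": 100.0,      # assets: foreign-exchange reserves
    "domestic_assets": 200.0,  # assets: domestic bonds
    "monetary_base": 300.0,    # liabilities: currency + bank reserves
}

def intervene(bs, fx_purchase, sterilize):
    """Buy foreign currency with newly issued domestic money;
    optionally sterilize via open-market sales of domestic assets."""
    bs = dict(bs)
    bs["fx_reserves"] += fx_purchase
    bs["monetary_base"] += fx_purchase   # new domestic money issued
    if sterilize:
        bs["domestic_assets"] -= fx_purchase
        bs["monetary_base"] -= fx_purchase  # base money reabsorbed
    return bs

# Unsterilized: the base rises, domestic prices eventually rise,
# and the real depreciation is reversed.
print(intervene(balance_sheet, 50.0, sterilize=False))

# Sterilized: the base is unchanged, so the real depreciation,
# and the resulting export surplus, can persist.
print(intervene(balance_sheet, 50.0, sterilize=True))
```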

This sort of currency manipulation, or exchange-rate protection, as Max Corden referred to it in his classic paper (reprinted here), is very different from conventional currency depreciation brought about by monetary expansion. The combination of currency depreciation and tight money creates an ongoing shortage of cash, so that the desired additional cash balances can be obtained only by way of reduced expenditures and a consequent export surplus. Since World War II, Japan, Germany, Taiwan, South Korea, and China are among the countries that have used currency undervaluation and tight money as a mechanism for exchange-rate protectionism in promoting industrialization. But exchange-rate protection is possible not only under a fiat currency system. Currency manipulation was also possible under the gold standard, as when France restored the gold standard in 1928 and pegged the franc to the dollar at a lower exchange rate than the franc had reached prior to the restoration of convertibility. That depreciation was accompanied by increased reserve requirements on French banknotes, providing the Bank of France with a continuing inflow of foreign-exchange reserves with which it was able to pursue its insane policy of accumulating gold, thereby precipitating, with a major assist from the high-interest-rate policy of the Fed, the deflation that turned into the Great Depression.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick's article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes's 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics were to mimic how physicists do physics. This methodological scolding exemplifies the charming arrogance typical of New Classical economists. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon's scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
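For reference, the key step in that extension is the Fisher relation linking nominal and real interest rates (standard textbook notation, a sketch rather than any particular model):

$$1 + r = (1 + i)\,\frac{P_t}{E_t[P_{t+1}]}, \qquad r \approx i - E_t[\pi_{t+1}],$$

where $i$ is the nominal rate, $r$ the real rate, and $E_t[\pi_{t+1}]$ the inflation expected between the current and future periods. Without an expected future price level, neither $r$ nor expected inflation is even definable in the model.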

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense that Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, a.k.a. circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
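In schematic form (my notation, not Samuelson's), the comparative-statics method runs as follows. If equilibrium is defined by $F(x^*, \alpha) = 0$, with $x^*$ the endogenous variable and $\alpha$ the parameter, then

$$\frac{dx^*}{d\alpha} = -\,\frac{\partial F/\partial \alpha}{\partial F/\partial x},$$

and the sign of the denominator is pinned down only by the local-stability condition (e.g., $\partial F/\partial x < 0$ for a stable equilibrium). Remove the assumption of a locally stable equilibrium and the derivative has no determinate sign: the meaningful theorem evaporates.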

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which consumers can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
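To see what is conjectural, write the standard two-period problem in textbook notation (a sketch, not anyone's particular model):

$$c_1 + \frac{c_2}{1+r} \le y_1 + \frac{E[y_2]}{1+r}, \qquad u'(c_1) = \beta(1+r)\,E[u'(c_2)].$$

In the static problem every term in the constraint is a datum; here $E[y_2]$, and the prices underlying $r$, are conjectures, so both the constraint and the Euler condition shift whenever expectations are revised.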

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance – that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had [said] before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

How not to Win Friends and Influence People

Last week David Beckworth and Ramesh Ponnuru wrote a very astute op-ed article in the New York Times explaining how the Fed was tightening its monetary policy in 2008 even as the economy was rapidly falling into recession. Although there are a couple of substantive points on which I might take issue with Beckworth and Ponnuru (more about that below), I think that on the whole they do a very good job of covering the important points about the 2008 financial crisis given that their article had less than 1000 words.

That said, Beckworth and Ponnuru made a really horrible – to me incomprehensible – blunder. For some reason, in the second paragraph of their piece, after having recounted in their first paragraph the conventional narrative of the 2008 financial crisis as an inevitable result of the housing bubble and the associated misconduct of the financial industry, Beckworth and Ponnuru cite Ted Cruz as the spokesman for the alternative view that they are about to present. They compound that blunder in a disclaimer identifying one of them – presumably Ponnuru – as a friend of Ted Cruz (for some recent pro-Cruz pronouncements from Ponnuru see here, here, and here), thereby transforming what might have been a piece of neutral policy analysis into a pro-Cruz campaign document. Aside from the unseemliness of turning Cruz into the poster boy for Market Monetarism and NGDP Level Targeting, when, as recently as last October 28, Mr. Cruz was advocating resurrection of the gold standard while bashing the Fed for debasing the currency, a shout-out to Ted Cruz is obviously not a gesture calculated to engage readers (of the New York Times, for heaven's sake) and predispose them to be receptive to the message they want to convey.

I suppose that this would be the appropriate spot for me to add a disclaimer of my own. I do not know, and am no friend of, Ted Cruz, but I was an FTC employee during Cruz's brief tenure at the agency from July 2002 to December 2003. I can also affirm that I have absolutely no recollection of having ever seen or interacted with him while he was at the agency or since, and I have spoken to only one current FTC employee who does remember him.

Predictably, Beckworth and Ponnuru provoked a barrage of negative responses to their argument that the Fed was responsible for the 2008 financial crisis by not easing monetary policy for most of 2008 when, even before the financial crisis, the economy was sliding into a deep recession. Much of the criticism focuses on the ambiguous nature of the concepts of causation and responsibility when hardly any political or economic event is the direct result of just one cause. So to say that the Fed caused or was responsible for the 2008 financial crisis cannot possibly mean that the Fed single-handedly brought it about, and that, but for the Fed’s actions, no crisis would have occurred. That clearly was not the case; the Fed was operating in an environment in which not only its past actions but the actions of private parties and public and political institutions increased the vulnerability of the financial system. To say that the Fed’s actions of commission or omission “caused” the financial crisis in no way absolves all the other actors from responsibility for creating the conditions in which the Fed found itself and in which the Fed’s actions became crucial for the path that the economy actually followed.

Consider the Great Depression. I think it is totally reasonable to say that the Great Depression was the result of the combination of a succession of interest rate increases by the Fed in 1928 and 1929 and the insane policy adopted by the Bank of France in 1928, and continued for several years thereafter, of converting its holdings of foreign-exchange reserves into gold. But does saying that the Fed and the Bank of France caused the Great Depression mean that World War I, the abandonment of the gold standard, and the doubling of the price level in terms of gold during the war were irrelevant to the Great Depression? Of course not. Does it mean that the accumulation of World War I debt, the reparations obligations imposed on Germany by the Treaty of Versailles, and the debt issued by German state and local governments – debt and obligations that found their way onto the balance sheets of banks all over the world – were irrelevant to the Great Depression? Not at all.

Nevertheless, it does make sense to speak of the role of monetary policy as a specific cause of the Great Depression because the decisions made by the central bankers made a difference at critical moments when it would have been possible to avoid the calamity had they adopted policies that would have avoided a rapid accumulation of gold reserves by the Fed and the Bank of France, thereby moderating or counteracting, instead of intensifying, the deflationary pressures threatening the world economy. Interestingly, many of those objecting to the notion that Fed policy caused the 2008 financial crisis are not at all bothered by the idea that humans are causing global warming even though the world has evidently undergone previous cycles of rising and falling temperatures about which no one would suggest that humans played any causal role. Just as the existence of non-human factors that affect climate does not preclude one from arguing that humans are now playing a key role in the current upswing of temperatures, the existence of non-monetary factors contributing to the 2008 financial crisis need not preclude one from attributing a causal role in the crisis to the Fed.

So let’s have a look at some of the specific criticisms directed at Beckworth and Ponnuru. Here’s Paul Krugman’s take in which he refers back to an earlier exchange last December between Mr. Cruz and Janet Yellen when she testified before Congress:

Back when Ted Cruz first floated his claim that the Fed caused the Great Recession — and some neo-monetarists spoke up in support — I noted that this was a repeat of the old Milton Friedman two-step.

First, you declare that the Fed could have prevented a disaster — the Great Depression in Friedman’s case, the Great Recession this time around. This is an arguable position, although Friedman’s claims about the 30s look a lot less convincing now that we have tried again to deal with a liquidity trap. But then this morphs into the claim that the Fed caused the disaster. See, government is the problem, not the solution! And the motivation for this bait-and-switch is, indeed, political.

Now come Beckworth and Ponnuru to make the argument at greater length, and it’s quite direct: because the Fed “caused” the crisis, things like financial deregulation and runaway bankers had nothing to do with it.

As regular readers of this blog – if there are any – already know, I am not a big fan of Milton Friedman's work on the Great Depression, and I agree with Krugman's criticism that Friedman allowed his ideological preferences or commitments to exert an undue influence not only on his policy advocacy but on his substantive analysis. Thus, trying to make a case for his dumb k-percent rule as an alternative monetary regime to the classical gold standard regime generally favored by his libertarian, classical liberal and conservative ideological brethren, he went to great and unreasonable lengths to deny the obvious fact that the demand for money is anything but stable, because such an admission would have made the k-percent rule untenable on its face, as it proved to be when Paul Volcker misguidedly tried to follow Friedman's advice and conduct monetary policy by targeting monetary aggregates. Even worse, because he was so wedded to the naïve quantity-theory monetary framework he thought he was reviving – when in fact he was using a modified version of the Cambridge/Keynesian demand for money, even making the patently absurd claim that the quantity theory of money was a theory of the demand for money – Friedman insisted on conducting monetary analysis under the assumption – also made by Keynes – that the quantity of money is directly under the control of the monetary authority, when in fact, under a gold standard – which means during the Great Depression – the quantity of money for any country is endogenously determined. As a result, there was a total mismatch between Friedman's monetary model and the institutional setting in place at the time of the monetary phenomenon he was purporting to explain.

So although there were big problems with Friedman’s account of the Great Depression and his characterization of the Fed’s mishandling of the Great Depression, fixing those problems doesn’t reduce the Fed’s culpability. What is certainly true is that the Great Depression, the result of a complex set of circumstances going back at least 15 years to the start of World War I, might well have been avoided largely or entirely, but for the egregious conduct of the Fed and Bank of France. But it is also true that, at the onset of the Great Depression, there was no consensus about how to conduct monetary policy, even though Hawtrey and Cassel and a handful of others well understood how terribly monetary policy had gone off track. But theirs was a minority view, and Hawtrey and Cassel are still largely ignored or forgotten.

Ted Cruz may view the Fed’s mistakes in 2008 as a club with which to beat up on Janet Yellen, but for most of the rest of us who think that Fed mistakes were a critical element of the 2008 financial crisis, the point is not to make an ideological statement, it is to understand what went wrong and to try to keep it from happening again.

Krugman sends us to Mike Konczal for further commentary on Beckworth and Ponnuru.

Is Ted Cruz right about the Great Recession and the Federal Reserve? From a November debate, Cruz argued that “in the third quarter of 2008, the Fed tightened the money and crashed those asset prices, which caused a cascading collapse.”

Fleshing that argument out in the New York Times is David Beckworth and Ramesh Ponnuru, backing and expanding Cruz’s theory that “the Federal Reserve caused the crisis by tightening monetary policy in 2008.”

But wait, didn’t the Federal Reserve lower rates during that time?

Um, no. The Fed cut its interest rate target to 2.25% on March 18, 2008, and to 2% on April 30, which by my calculations would have been in the second quarter of 2008. There it remained until it was reduced to 1.5% on October 8, which by my calculations would have been in the fourth quarter of 2008. So on the face of it, Mr. Cruz was right that the Fed kept its interest rate target constant for over five months while the economy was contracting in real terms in the third quarter at a rate of 1.9% (and growing in nominal terms at a mere 0.8% rate).

Konczal goes on to accuse Cruz of inconsistency for blaming the Fed for tightening policy in 2008 before the crash while bashing the Fed for quantitative easing after the crash. That certainly is a just criticism, and I really hope someone asks Cruz to explain himself, though my expectations that that will happen are not very high. But that’s Cruz’s problem, not Beckworth’s or Ponnuru’s.

Konczal also focuses on the ambiguity in saying that the Fed caused the financial crisis by not cutting interest rates earlier:

I think a lot of people’s frustrations with the article – see Barry Ritholtz at Bloomberg here – is the authors slipping between many possible interpretations. Here’s the three that I could read them making, though these aren’t actual quotes from the piece:

(a) “The Federal Reserve could have stopped the panic in the financial markets with more easing.”

There’s nothing in the Valukas bankruptcy report on Lehman, or any of the numerous other reports that have since come out, that leads me to believe Lehman wouldn’t have failed if the short-term interest rate was lowered. One way to see the crisis was in the interbank lending spreads, often called the TED spread, which is a measure of banking panic. Looking at an image of the spread and its components, you can see a falling short-term t-bill rate didn’t ease that spread throughout 2008.

And, as Matt O’Brien noted, Bear Stearns failed before the passive tightening started.

The problem with this criticism is that it assumes that the only way that the Fed can be effective is by altering the interest rate that it effectively sets on overnight loans. It ignores the relationship between the interest rate that the Fed sets and total spending. That relationship is not entirely obvious, but almost all monetary economists have assumed that there is such a relationship, even if they can’t exactly agree on the mechanism by which the relationship is brought into existence. So it is not enough to look at the effect of the Fed’s interest rate on Lehman or Bear Stearns, you also have to look at the relationship between the interest rate and total spending and how a higher rate of total spending would have affected Lehman and Bear Stearns. If the economy had been performing better in the second and third quarters, the assets that Lehman and Bear Stearns were holding would not have lost as much of their value. And even if Lehman and Bear Stearns had not survived, arranging for their takeover by other firms might have been less difficult.

But beyond that, Beckworth and Ponnuru themselves overlook the fact that tightening by the Fed did not begin in the third quarter – or even the second quarter – of 2008. The tightening may have begun as early as the middle of 2006. The chart below shows the rate of expansion of the adjusted monetary base from January 2004 through September 2008. From 2004 through the middle of 2006, the biweekly rate of expansion of the monetary base was consistently at an annual rate exceeding 4%, with the exception of a six-month interval at the end of 2005 when the rate fell to the 3-4% range. But from the middle of 2006 through September 2008, the biweekly rate of expansion was consistently below 3%, and was well below 2% for most of 2008. Now, I am generally wary of reading too much into changes in the monetary aggregates, because those changes can reflect either changes in supply conditions or demand conditions. However, when the economy is contracting, with the rate of growth in total spending falling substantially below trend, and the rate of growth in the monetary aggregates is decreasing sharply, it isn't unreasonable to infer that monetary policy was being tightened. So monetary policy may well have been tightened as early as 2006, and, insofar as the rate of growth of the monetary base is indicative of the stance of monetary policy, that tightening was hardly passive.

[Chart: rate of expansion of the adjusted monetary base, January 2004 to September 2008]
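For readers who want to check such figures themselves, here is a minimal sketch of the arithmetic, converting a single biweekly observation of the monetary base into an annualized growth rate, using 26 biweekly periods per year (the numbers are hypothetical, not the actual FRED series):

```python
# Annualized percent growth implied by one biweekly observation of
# the monetary base. Hypothetical numbers, not the actual FRED data.

def annualized_rate(base_now, base_prev, periods_per_year=26):
    """Compound one period's growth over a full year, in percent."""
    return ((base_now / base_prev) ** periods_per_year - 1.0) * 100.0

print(round(annualized_rate(851.4, 850.0), 2))  # about 4.4% annualized
print(round(annualized_rate(850.6, 850.0), 2))  # about 1.9% annualized
```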

(b) “The Federal Reserve could have helped the recovery by acting earlier in 2008. Unemployment would have peaked at, say, 9.5 percent, instead of 10 percent.”

That would have been good! I would have been a fan of that outcome, and I’m willing to believe it. That’s 700,000 people with a job that they wouldn’t have had otherwise. The stimulus should have been bigger too, with a second round once it was clear how deep the hole was and how Treasuries were crashing too.

Again, there are two points. First, tightening may well have begun at least a year or two before the third quarter of 2008. Second, the economy started collapsing in the third quarter of 2008, and the run-up in the value of the dollar starting in July 2008, foolishly interpreted by the Fed as a vote of confidence in its anti-inflation policy, was really a cry for help as the economy was being starved of liquidity just as the demand for liquidity was becoming really intense. That denial of liquidity led to a perverse situation in which the return to holding cash began to exceed the return on real assets, setting the stage for a collapse in asset prices and a financial panic. The Fed could have prevented the panic by providing more liquidity. Had it done so, the financial crisis would have been avoided, and the collapse in the real economy and the rise in unemployment would have been substantially mitigated.

(c) “The Federal Reserve could have stopped the Great Recession from ever happening. Unemployment in 2009 wouldn’t have gone above 5.5 percent.”

This I don’t believe. Do they? There’s a lot of “might have kept that decline from happening or at least moderated it” back-and-forth language in the piece.

Is the argument that we’d somehow avoid the zero-lower bound? Ben Bernanke recently showed that interest rates would have had to go to about -4 percent to offset the Great Recession at the time. Hitting the zero-lower bound earlier than later is good policy, but it’s still there.

I think there’s an argument about “expectations,” and “expectations” wouldn’t have been set for a Great Recession. A lot of the “expectations” stuff has a magic and tautological quality to it once it leaves the models and enters the policy discussion, but the idea that a random speech about inflation worries could have shifted the Taylor Rule 4 percent seems really off base. Why doesn’t it go haywire all the time, since people are always giving speeches?

Well, I have shown in this paper that, starting in 2008, there was a strong empirical relationship between stock prices and inflation expectations, so it's not just tautological. And we're not talking about random speeches; we are talking about the decisions of the FOMC and the reasons that were given for those decisions. The markets pay a lot of attention to those reasons.

And couldn’t it be just as likely that since the Fed was so confident about inflation in mid-2008 it boosted nominal income, by giving people a higher level of inflation expectations than they’d have otherwise? Given the failure of the Evans Rule and QE3 to stabilize inflation (or even prevent it from collapsing) in 2013, I imagine transporting them back to 2008 wouldn’t have fundamentally changed the game.

The inflation in 2008 was induced not by monetary policy but by adverse supply shocks; given the Fed's inflation targeting, expectations of higher inflation were thus tantamount to predictions of further monetary tightening.

If your mental model is that the Federal Reserve delaying something three months is capable of throwing 8.7 million people out of work, you should probably want to have much more shovel-ready construction and automatic stabilizers, the second of which kicked in right away without delay, as part of your agenda. It seems odd to put all the eggs in this basket if you also believe that even the most minor of mistakes are capable of devastating the economy so greatly.

Once again, it's not a matter of just three months, but even if it were, in the summer of 2008 the economy was at a kind of inflection point, and the failure to ease monetary policy at that critical moment led directly to a financial crisis with cascading effects on the real economy. Had the Fed prevented total spending from dropping far below trend in the third quarter, the financial crisis might have been avoided, and the subsequent loss of output and employment could have been greatly mitigated.

And just to be clear, I have pointed out previously that the free market economy is fragile, because its smooth functioning depends on the coherence and consistency of expectations. That makes monetary policy very important, but I don’t dismiss shovel-ready construction and automatic stabilizers as means of anchoring expectations in a useful way, in contrast to the perverse way that inflation targeting stabilizes expectations.

The Sky Is Not Falling . . . Yet

Possibly responding to hints by ECB president Mario Draghi of monetary stimulus, stocks around the world are up today, the S&P 500 over 1900 (about 2% above yesterday's close). Anyone who wants to understand why stock markets have been swooning since the end of 2015 should take a look at this chart showing the breakeven TIPS spread on 10-year Treasuries over the past 10 years.

[Chart: 10-year breakeven TIPS spread, FRED]

Let's look at the peak spread (2.56%) reached in early July 2008, a couple of months before the onset of the financial crisis in September. Despite mounting evidence that the economy was contracting and unemployment rising, the Fed, transfixed by the threat of inflation (manifested in rising energy prices) and a supposed loss of Fed credibility (manifested in rising inflation expectations), refused to continue reducing its interest-rate target lest the markets conclude that the Fed was not serious about fighting inflation. That's when all hell started to break loose. By September 12, the Friday before the Lehman bankruptcy, the breakeven TIPS spread had fallen to 1.95%. It was not till October that the Fed finally relented and reduced its target rate, but it nullified whatever stimulus the lower target rate might have provided by initiating the payment of interest on reserves. As you can see, the breakeven spread continued to fall almost without interruption until reaching lows of about 0.10% by the end of 2008.
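For readers unfamiliar with the term, the breakeven spread is simply the nominal 10-year Treasury yield minus the 10-year TIPS yield, a market-implied forecast of average inflation over the next ten years. A minimal sketch (the yield pairs are hypothetical, chosen only so that the spreads match the figures quoted in this post):

```python
# Breakeven inflation = nominal Treasury yield minus TIPS (real) yield.
# The yield pairs are hypothetical; only the spreads match the post.

def breakeven(nominal_yield, tips_yield):
    """10-year breakeven inflation expectation, in percent."""
    return round(nominal_yield - tips_yield, 2)

print(breakeven(3.97, 1.41))  # 2.56 -- early July 2008 peak
print(breakeven(3.59, 1.64))  # 1.95 -- the Friday before Lehman failed
```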

There were three other episodes of falling inflation expectations which are evident on the graph, in 2010, 2011 and 2012, each episode precipitating a monetary response (so-called quantitative easing) by the Fed to reverse the fall in inflation expectations, thereby avoiding an untimely end to the weak recovery from the financial crisis and the subsequent Little Depression.

Despite falling inflation expectations during the second half of 2014, the lackluster expansion continued, a possible sign of normalization insofar as the momentum of recovery was sustained despite falling inflation expectations (due in part to a positive oil-supply shock). But after a brief pickup in the first half of 2015, inflation expectations resumed falling in the second half of 2015, and the drop has steepened over the past month, with the breakeven TIPS spread falling from 1.56% on January 5 to 1.28% yesterday, a steeper decline than in July 2008, when the TIPS spread stood at 2.56% on July 3 and did not fall to 2.30% until August 5.

I am not saying that the market turmoil of the past three weeks is totally attributable to falling inflation expectations; it seems very plausible that the bursting of the oil bubble has been a major factor in the decline of stock prices. Falling oil prices could affect stock prices in at least two different ways: 1) the decline in energy prices is itself deflationary – at least if monetary policy is not specifically aimed at reversing those deflationary effects – and 2) with oil and energy assets on the books of many financial institutions, a decline in the value of those assets may impair the solvency of those institutions, causing a deflationary increase in the demand for currency and reserves. But even if falling oil prices are an independent cause of market turmoil, they interact with and reinforce deflationary pressures; the only way to counteract those deflationary pressures is monetary expansion.

And with inflation expectations now lower than they have been since early 2009, further reductions in inflation expectations could put us back into a situation in which the expected yield from holding cash exceeds the expected yield from holding real capital. In such situations, with nominal interest rates at or near the zero lower bound, a perverse Fisher effect takes hold and asset prices have to fall sufficiently to make people willing to hold assets rather than cash. (I explained this perverse adjustment process in this paper, and used it to explain the 2008 financial crisis and its aftermath.) The result is a crash in asset prices. We haven’t reached that point yet, but I am afraid that we are getting too close for comfort.
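In rough terms, the mechanism sketched in that paper can be summarized as follows (my shorthand, not the paper's full model): at the zero lower bound the expected real yield on cash is

$$r_{\text{cash}} = i - E[\pi] \approx -E[\pi] \quad \text{when } i \approx 0,$$

so if expected inflation falls far enough that $-E[\pi]$ exceeds the expected real return on capital, cash dominates real assets, and asset prices must fall until the expected return on capital is driven back up to $-E[\pi]$.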

The 2008 crisis was caused by an FOMC so focused on the threat of inflation that it ignored ample and obvious signs of a rapidly deteriorating economy and falling inflation expectations, foolishly interpreting the plunge in TIPS spreads and the appreciation of the dollar relative to other currencies as an expression by the markets of confidence in Fed policy rather than as a cry for help.

In 2008, the Fed at least had the excuse of rising energy prices and headline inflation above its then informal 2% target for not cutting interest rates to provide serious monetary stimulus to a collapsing economy. This time, despite failing for over three years to meet its now official 2% inflation target, Dr. Yellen and her FOMC colleagues show no sign of thinking about anything other than when they can prove their mettle as central bankers by raising interest rates again. Now is not the time to worry about raising interest rates. Dr. Yellen's task now is to show that her top – indeed her only – priority is to ensure that the Fed's 2% inflation target will be met, or, if need be, exceeded, in 2016, and that the growth in nominal income in 2016 will be at least as large as it was in 2015. Those are goals that are eminently achievable, and if the FOMC has any credibility left after its recent failures, providing such assurance will prevent another unnecessary and destructive financial crisis.

The 2008 financial crisis ensured the election of Barack Obama as president. I shudder to think of who might be elected if we have another crisis this year.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
