Archive Page 3

What’s Wrong with Monetarism?

UPDATE: (05/06): In an email Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so that it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his following up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that only if the increased output generated during the recovery was of a particular composition would recovery be sustainable.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation — in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market dogmatism with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were attached so fervently was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering the gold standard retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued that a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternating periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, then why not just eliminate the middleman, and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish, even indirectly: a gradual steady increase in the quantity of money that would prevent monetary-induced booms and busts.

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone, imposing a rule on the monetary authority prescribing the time path of a targeted instrument over which the monetary authority has direct control: the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable; it is a reduced-form variable incorporating both supply-side and demand-side parameters. But Friedman and other Monetarists insisted on treating it as if it were a structural – and a deep structural variable at that – supply-side variable, so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s, when Friedman and his acolytes were confidently predicting – over and over again – the return of double-digit inflation, because the quantity of money was increasing for most of the 1980s at double-digit rates.
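To see why the multiplier is a reduced form, consider the standard textbook decomposition (a sketch of the familiar algebra, not anything Friedman himself wrote): let C be currency held by the public, D bank deposits, R bank reserves, c = C/D the public’s desired currency-deposit ratio, and r = R/D the banks’ desired reserve-deposit ratio. Then

```latex
M = C + D, \qquad B = C + R,
\qquad
m \equiv \frac{M}{B} = \frac{C + D}{C + R} = \frac{c + 1}{c + r}.
```

Both c and r are behavioral parameters – the public’s portfolio preference and the banks’ demand for reserves – and both respond to interest rates, expectations, and perceived risk. A rule fixing the path of B therefore fixes the path of M only if c and r are invariant to the rule itself, which is exactly the kind of invariance the Lucas Critique denies.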

So once the k-percent rule – the Monetarist alternative that Friedman had persuasively, though fallaciously, argued was, on strictly libertarian grounds, preferable to the gold standard – collapsed under an avalanche of contradictory evidence, the gold standard once again became the default position of laissez-faire dogmatists. There was, to be sure, some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition — the elimination of legal tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal-tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of cryptocurrencies like bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and for that matter with Friedman’s many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized, and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the supposed stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

Benjamin Cole Remembers Richard Nixon (of Blessed Memory?)

On Marcus Nunes’s Historinhas blog, Benjamin Cole has just written a guest post about Richard Nixon’s August 15, 1971 speech imposing a 90-day freeze on wages and prices, abolishing the last tenuous link between the dollar and gold and applying a 10% tariff on all imports into the US. Tinged with nostalgia for old times, the post actually refers to me in the title, perhaps because of my two recent posts on free trade and the gold standard. Well, rather than comment directly on Ben’s post, I will just refer to one of my first posts as a blogger marking the fortieth anniversary of Nixon’s announcement, which I recall with considerably less nostalgia than Ben, and explaining some of its, mostly disastrous, consequences.

Click here.

PS But Ben is right to point out that stock prices rose about 4 or 5 percent the day after the announcement, a reaction that, of course, was anything but rational.

What’s so Bad about the Gold Standard?

Last week Paul Krugman argued that Ted Cruz is more dangerous than Donald Trump, because Trump is merely a protectionist while Cruz wants to restore the gold standard. I’m not going to weigh in on the relative merits of Cruz and Trump, but I have previously suggested that Krugman may be too dismissive of the possibility that the Smoot-Hawley tariff did indeed play a significant, though certainly secondary, role in the Great Depression. In warning about the danger of a return to the gold standard, Krugman is certainly right that the gold standard was and could again be profoundly destabilizing to the world economy, but I don’t think he did such a good job of explaining why, largely because, like Ben Bernanke and, I am afraid, most other economists, Krugman isn’t totally clear on how the gold standard really worked.

Here’s what Krugman says:

[P]rotectionism didn’t cause the Great Depression. It was a consequence, not a cause – and much less severe in countries that had the good sense to leave the gold standard.

That’s basically right. But I note for the record, to spell out the point made in the post I alluded to in the opening paragraph, that protectionism might indeed have played a role in exacerbating the Great Depression, making it harder for Germany and other indebted countries to pay off their debts by making it more difficult for them to generate the exports required to discharge their obligations, thereby making their IOUs, widely held by European and American banks, worthless or nearly so, and undermining the solvency of many of those banks. It also increased the demand for the gold required to discharge debts, adding to the deflationary forces that had been unleashed by the Bank of France and the Fed, thereby triggering the debt-deflation mechanism described by Irving Fisher in his famous article.

Which brings us to Cruz, who is enthusiastic about the gold standard – which did play a major role in spreading the Depression.

Well, that’s half — or maybe a quarter — right. The gold standard did play a major role in spreading the Depression. But the role was not just major; it was dominant. And the role of the gold standard in the Great Depression was not just to spread it; the role was, as Hawtrey and Cassel warned a decade before it happened, to cause it. The causal mechanism was that in restoring the gold standard, the various central banks linking their currencies to gold would increase their demands for gold reserves so substantially that the value of gold would rise back to its value before World War I, which was about double what it was after the war. It was to avoid such a catastrophic increase in the value of gold that Hawtrey drafted the resolutions adopted at the 1922 Genoa monetary conference calling for central-bank cooperation to minimize the increase in the monetary demand for gold associated with restoring the gold standard. Unfortunately, when France officially restored the gold standard in 1928, it went on a gold-buying spree, joined in by the Fed in 1929 when it raised interest rates to suppress Wall Street stock speculation. The huge accumulation of gold by France and the US in 1929 led directly to the deflation that started in the second half of 1929, which continued unabated till 1933. The Great Depression was caused by a 50% increase in the value of gold that was the direct result of the restoration of the gold standard. In principle, if the Genoa Resolutions had been followed, the restoration of the gold standard could have been accomplished with no increase in the value of gold. But, obviously, the gold standard was a catastrophe waiting to happen.
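The arithmetic of the causal mechanism can be sketched simply (my gloss here, not Hawtrey’s own formulation): under a gold standard the nominal price of gold is fixed at the legal parity, so gold’s real value – its purchasing power – can change only through movements in the general price level:

```latex
P_g = \bar{P}_g \;\;\text{(fixed by parity)},
\qquad
v_g \equiv \frac{\bar{P}_g}{P}.
```

If the accumulation of reserves by central banks raises the real value of gold v_g, the only available margin of adjustment is a fall in the price level P. A 50% rise in the value of gold thus translates directly into a fall in the price level of roughly a third.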

The problem with gold is, first of all, that it removes flexibility. Given an adverse shock to demand, it rules out any offsetting loosening of monetary policy.

That’s not quite right; the problem with gold is, first of all, that it does not guarantee that the value of gold will be stable. The problem is exacerbated when central banks hold substantial gold reserves, which means that significant changes in the demand of central banks for gold reserves can have dramatic repercussions on the value of gold. Far from being a guarantee of price stability, the gold standard can be the source of price-level instability, depending on the policies adopted by individual central banks. The Great Depression was not caused by an adverse shock to demand; it was caused by a policy-induced shock to the value of gold. There was nothing inherent in the gold standard that would have prevented a loosening of monetary policy – a decline in the gold reserves held by central banks – to reverse the deflationary effects of the rapid accumulation of gold reserves, but the insane Bank of France was not inclined to reverse its policy, perversely viewing the increase in its gold reserves as evidence of the success of its catastrophic policy. However, once some central banks are accumulating gold reserves, other central banks inevitably feel that they must take steps to at least maintain their current levels of reserves, lest markets begin to lose confidence that convertibility into gold will be preserved. Bad policy tends to spread. Krugman seems to have this possibility in mind when he continues:

Worse, relying on gold can easily have the effect of forcing a tightening of monetary policy at precisely the wrong moment. In a crisis, people get worried about banks and seek cash, increasing the demand for the monetary base – but you can’t expand the monetary base to meet this demand, because it’s tied to gold.

But Krugman is being a little sloppy here. If the demand for the monetary base – meaning, presumably, currency plus reserves at the central bank — is increasing, then the public simply wants to increase its holdings of currency, not spend the added holdings. So what stops the central bank from accommodating that demand? Krugman says that “it” – meaning, presumably, the monetary base – is tied to gold. What does it mean for the monetary base to be “tied” to gold? Under the gold standard, the “tie” to gold is a promise to convert the monetary base, on demand, at a specified conversion rate.

Question: why would that promise to convert have prevented the central bank from increasing the monetary base? Answer: it would not and did not. Since, by assumption, the public is demanding more currency to hold, there is no reason why the central bank could not safely accommodate that demand. Of course, there would be a problem if the public feared that the central bank might not continue to honor its convertibility commitment and that the price of gold would rise. Then there would be an internal drain on the central bank’s gold reserves. But that is not — or doesn’t seem to be — the case that Krugman has in mind. Rather, what he seems to mean is that the quantity of base money is limited by a reserve ratio between the gold reserves held by the central bank and the monetary base. But if the tie between the monetary base and gold that Krugman is referring to is a legal reserve requirement, then he is confusing the legal reserve requirement with the gold standard, and the two are simply not the same, it being entirely possible, and actually desirable, for the gold standard to function with no legal reserve requirement – certainly not a marginal reserve requirement.

On top of that, a slump drives interest rates down, increasing the demand for real assets perceived as safe — like gold — which is why gold prices rose after the 2008 crisis. But if you’re on a gold standard, nominal gold prices can’t rise; the only way real prices can rise is a fall in the prices of everything else. Hello, deflation!

Note the implicit assumption here: that the slump just happens for some unknown reason. I don’t deny that such events are possible, but in the context of this discussion about the gold standard and its destabilizing properties, the historically relevant scenario is when the slump occurred because of a deliberate decision to raise interest rates, as the Fed did in 1929 to suppress stock-market speculation and as the Bank of England did for most of the 1920s, to restore and maintain the prewar sterling parity against the dollar. Under those circumstances, it was the increase in the interest rate set by the central bank that amounted to an increase in the monetary demand for gold, which is what caused gold appreciation and deflation.

What’s so Great about Free Trade?

Free trade is about as close to a sacred tenet as can be found in classical and neoclassical economic theory. And there is no economic heresy more sacrilegious than protectionism. An important part of what endears free trade to economists, it seems to me, is that it is both logically compelling and counter-intuitive. There is something both self-evident, yet paradoxical, about saying that the gains from trade consist in what you receive not in what you give up, in what you import not in what you export. And there is something even more paradoxical and counter-intuitive — and logically inescapable — in the idea of comparative advantage which teaches that every country, no matter how meager its resources and how unproductive its workers, will always be the lowest-cost producer of something, while every country, no matter how well-endowed with resources and how productive its workers, will always be the highest-cost producer of something.

Despite the love and devotion that the doctrine of free trade inspires in economists, the doctrine has had indifferent success in rallying public opinion to its side. Free trade has never been popular among the masses. Supporting free trade has sometimes been a way for politicians to establish that they are “serious,” high-minded, and principled, and therefore worthy of the support of those who fancy themselves as “serious,” high-minded and principled. And so there is a kind of moral pressure on politicians to pronounce themselves as free traders, though with the immediate qualification tacked on that they also believe in fair trade. So even that scourge of political correctness, and you know who I mean, felt obligated to say “I’m a free-trader.”

Although free trade has never been a position calculated to attract a popular following, protectionism has usually not been a winning issue either. But it has, on occasion, been an effective strategy by which political outsiders, or those like Pat Buchanan and Ross Perot, posing as political outsiders, could attract a following. In fact, it is remarkable how closely the message of economic nationalism, control of the borders, disengagement from international treaties and alliances, trumpeted by the Politically Incorrect One resembles the message propagated by Buchanan in his 1992 and 1996 campaigns.

And the protectionist anti-free-trade message clearly appeals to both ends of the political spectrum. Opposition to NAFTA and other free-trade agreements has been fueling the Sanders campaign just as much as it has fueled the campaign of the Golden-Haired One. The latter, of course, has benefited from being able to push a number of other hot-button issues that Sanders would not want to be associated with, and, above all, from having shrewdly chosen a group of incredibly weak opponents (AKA the deep Republican bench that we used to hear so much about) to run against. So the question that I want to explore is why there is such a disconnect between the public and professional economists (with a few noteworthy exceptions to be sure, but they are just that — exceptional) about free trade?

The key to understanding that disconnect is, I suggest, the way in which economists have been trained to think about individual and social welfare, which, it seems to me, is totally different from how most people think about their well-being. In the standard utility-maximization framework, individual well-being is a monotonically increasing function of individual consumption, leisure being one of the “goods” being consumed, so that a reduction in hours worked is, when consumption of everything else is held constant, welfare-increasing. Even at a superficial level, this seems totally wrong. While it is certainly true that people do value consumption, and increased consumption does tend to increase overall levels of well-being, I think that changes in consumption have a relatively minor effect on how people perceive the quality of their lives.
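The framework being criticized can be written down in a line (the standard formulation, not a quotation from any particular textbook): well-being is represented by a utility function defined over consumption goods and leisure alone,

```latex
U = U(c, \ell),
\qquad
\frac{\partial U}{\partial c} > 0,
\qquad
\frac{\partial U}{\partial \ell} > 0,
```

so that, holding consumption c constant, any reduction in hours worked (an increase in leisure ℓ) necessarily registers as a welfare gain. Nothing in the formalism distinguishes a chosen vacation from an involuntary layoff.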

What people do is a far more important determinant of their overall estimation of how well-off they are than what they consume. When you meet someone, you are likely, if you are at all interested in finding out about the person, to ask him or her about what he or she does, not about what he or she consumes. Most of the waking hours of an adult person are spent in work-related activities. If people are miserable in their jobs, their estimation of their well-being is likely to be low and if they are happy or fulfilled or challenged in their jobs, their estimation of their well-being is likely to be high.

And maybe I’m clueless, but I find it hard to believe that what makes people happy or unhappy with their lives depends in a really significant way on how much they consume. It seems to me that what matters to most people is the nature of their relationships with their family and friends and the people they work with, and whether they get satisfaction from their jobs or from a sense that they are accomplishing or are on their way to accomplish some important life goals. Compared to the satisfaction derived from their close personal relationships and from a sense of personal accomplishment, levels of consumption don’t seem to matter all that much.

Moreover, insofar as people depend on being employed in order to finance their routine consumption purchases, they know that being employed is a necessary condition for maintaining their current standard of living. For many if not most people, the unplanned loss of their current job would be a personal disaster, which means that being employed is the dominant – the overwhelming – determinant of their well-being. Ordinary people seem to understand how closely their well-being is tied to the stability of their employment, which is why people are so viscerally opposed to policies that, they fear, could increase the likelihood of losing their jobs.

To think that an increased chance of losing one’s job in exchange for a slight gain in purchasing power owing to the availability of low-cost imports is an acceptable trade-off for most workers does not seem at all realistic. Questioning the acceptability of this trade-off doesn’t mean that I am denying that, in principle, free trade increases aggregate income or that there are corresponding employment gains associated with the increased export opportunities created by free trade. Nor does it mean that I deny that, in principle, the gains from free trade are large enough to provide monetary compensation to workers who lose their jobs, but I do question whether such compensation is possible in practice or that the compensation would be adequate for the loss of psychic well-being associated with losing one’s job, even if money income is maintained.

Losing a job may cause a demoralization for which monetary compensation cannot compensate, because the compensation is incommensurate with the loss. The psychic effects of losing a job (an increase in leisure!) are ignored by the standard calculations of welfare effects in which well-being is identified with, and measured by, consumption. And these losses are compounded and amplified when they are concentrated in specific communities and regions, causing substantial further losses to the businesses dependent on the demand of newly unemployed workers. The hollowing out of large parts of the industrial northeast and midwest is sad testimony to these wider effects, which include the irreparable loss of intangible infrastructural capital resulting from the withering away of communities in which complex and extensive social networks formerly thrived.

The goal of this post is not to make an argument for protectionist policies, let alone for any of the candidates arguing for protectionist policies. The aim is to show how inadequate the standard arguments for free trade are in responding to the concerns of the people who feel that they have been hurt by free-trade policies or feel that the jobs that they have now are vulnerable to continued free trade and ever-increasing globalization. I don’t say that responses can’t be made, just that they haven’t been made.

The larger philosophical or methodological point is that the theory of utility maximization underlying neoclassical theory, though certainly useful as a basis for deriving what Samuelson called meaningful theorems – or, in philosophically more defensible terms, refutable predictions — about the effects of changes in specified exogenous variables on prices and output, is a poor basis for welfare comparisons. Thus, economic theory can tell us that an excise tax on sugar tends to cause an increase in the price, and a reduction in output, of sugar. But the idea that we can reliably make welfare comparisons between alternative states of the world when welfare is assumed to be a function of consumption, and that nothing else matters, is simply preposterous. And it’s about time that economists enlarged their notions of what constitutes well-being if they want to make useful recommendations about the welfare implications of public policy, especially trade policy.

Justice Scalia and the Original Meaning of Originalism


(I almost regret writing this post because it took a lot longer to write than I expected and I am afraid that I have ventured too deeply into unfamiliar territory. But having expended so much time and effort on this post, I must admit to being curious about what people will think of it.)

I resist the temptation to comment on Justice Scalia’s character beyond one observation: a steady stream of irate outbursts may have secured his status as a right-wing icon and burnished his reputation as a minor literary stylist, but his eruptions brought no credit to him or to the honorable Court on which he served.

But I will comment at greater length on the judicial philosophy, originalism, which he espoused so tirelessly. The first point to make, in discussing originalism, is that there are at least two concepts of originalism that have been advanced. The first and older concept is that the provisions of the US Constitution should be understood and interpreted as the framers of the Constitution intended those provisions to be understood and interpreted. The task of the judge, in interpreting the Constitution, would then be to reconstruct the collective or shared state of mind of the framers and, having ascertained that state of mind, to interpret the provisions of the Constitution in accord with that collective or shared state of mind.

A favorite originalist example is the “cruel and unusual punishment” provision of the Eighth Amendment to the Constitution. Originalists dismiss all arguments that capital punishment is cruel and unusual, because the authors of the Eighth Amendment could not have believed capital punishment to be cruel and unusual. If that’s what they believed, then why, having passed the Eighth Amendment, did the first Congress proceed to impose the death penalty for treason, counterfeiting and other offenses in 1790? So it seems obvious that the authors of the Eighth Amendment did not intend to ban capital punishment. If so, originalists argue, the “cruel and unusual” provision of the Eighth Amendment can provide no ground for ruling that capital punishment violates the Eighth Amendment.

There are a lot of problems with the original-intent version of originalism, the most obvious being the impossibility of attributing an unambiguous intention to the 39 delegates to the Constitutional Convention who signed the final document. The Constitutional text that emerged from the Convention was a compromise among many competing views and interests, and it did not necessarily conform to the intentions of any of the delegates, much less all of them. True, James Madison was the acknowledged author of the Bill of Rights, so if we are parsing the Eighth Amendment, we might, in theory, focus exclusively on what he understood the Eighth Amendment to mean. But focusing on Madison alone would be problematic, because Madison actually opposed adding a Bill of Rights to the original Constitution; Madison introduced the Bill of Rights as amendments to the Constitution in the first Congress, only because the Constitution would not have been approved without an understanding that the Bill of Rights that Madison had opposed would be adopted as amendments to the Constitution. The inherent ambiguity in the notion of intention, even in the case of a single individual acting out of mixed, if not conflicting, motives – an ambiguity compounded when action is undertaken collectively by individuals – causes the notion of original intent to dissolve into nothingness when one tries to apply it in practice.

Realizing that trying to determine the original intent of the authors of the Constitution (including the Amendments thereto) is a fool’s errand, many originalists, including Justice Scalia, tried to salvage the doctrine by shifting its focus from the inscrutable intent of the Framers to the objective meaning that a reasonable person would have attached to the provisions of the Constitution when it was ratified. Because the provisions of the Constitution are either ordinary words or legal terms, the meaning that would reasonably have been attached to those provisions can supposedly be ascertained by consulting the contemporary sources, either dictionaries or legal treatises, in which those words or terms were defined. It is this original meaning that, according to Scalia, must remain forever inviolable, because to change the meaning of provisions of the Constitution would allow unelected judges to covertly amend the Constitution, evading the amendment process spelled out in Article V of the Constitution, thereby nullifying the principle of a written constitution that constrains the authority and powers of all branches of government. Instead of being limited by the Constitution, judges not bound by the original meaning arrogate to themselves an unchecked power to impose their own values on the rest of the country.

To return to the Eighth Amendment, Scalia would say that the meaning attached to the term “cruel and unusual” when the Eighth Amendment was passed was clearly not so broad that it prohibited capital punishment. Otherwise, how could Congress, having voted to adopt the Eighth Amendment, proceed to make counterfeiting and treason and several other federal offenses capital crimes? Of course that’s a weak argument, because Congress, like any other representative assembly, is under no obligation or constraint to act consistently. It’s well known that democratic decision-making need not be consistent, and just because a general principle is accepted doesn’t mean that the principle will not be violated in specific cases. A written Constitution is supposed to impose some discipline on democratic decision-making for just that reason. But there was no mechanism in place to prevent such inconsistency, judicial review of Congressional enactments not having become part of the Constitutional fabric until John Marshall’s 1803 opinion in Marbury v. Madison made judicial review, quite contrary to the intention of many of the Framers, an organic part of the American system of governance.

Indeed, in 1798, less than ten years after the Bill of Rights was adopted, Congress enacted the Alien and Sedition Acts, which, I am sure even Justice Scalia would have acknowledged, violated the First Amendment prohibition against abridging the freedom of speech and the press. To be sure, the Congress that passed the Alien and Sedition Acts was not the same Congress that passed the Bill of Rights, but one would hardly think that the original meaning of abridging freedom of speech and the press had been forgotten in the intervening decade. Nevertheless, to uphold his version of originalism, Justice Scalia would have to argue either that the original meaning of the First Amendment had been forgotten or acknowledge that one can’t simply infer from the actions of a contemporaneous or nearly contemporaneous Congress what the original meaning of the provisions of the Constitution was, because it is clearly possible that the actions of Congress could have been contrary to some supposed original meaning of the provisions of the Constitution.

Be that as it may, for purposes of the following discussion, I will stipulate that we can ascertain an objective meaning that a reasonable person would have attached to the provisions of the Constitution at the time it was ratified. What I want to examine is Scalia’s idea that it is an abuse of judicial discretion for a judge to assign a meaning to any Constitutional term or provision that is different from that original meaning. To show what is wrong with Scalia’s doctrine, I must first explain that Scalia’s doctrine is based on the legal philosophy known as legal positivism. Whether Scalia realized that he was a legal positivist I don’t know, but it’s clear that he took the view that the validity and legitimacy of a law or a legal provision or a legal decision (including a Constitutional provision or decision) derives from an authority empowered to make law, and that no one other than an authorized law-maker or sovereign is empowered to make law.

According to legal positivism, all law, including Constitutional law, is understood as an exercise of will – a command. What distinguishes a legal command from, say, a mugger’s command to a victim to turn over his wallet is that the mugger is not a sovereign. Not only does the sovereign get what he wants, the sovereign, by definition, gets it legally; we are not only forced — compelled — to obey, but, to add insult to injury, we are legally obligated to obey. And morality has nothing to do with law or legal obligation. That’s the philosophical basis of legal positivism to which Scalia, wittingly or unwittingly, subscribed.

Luckily for us, we Americans live in a country in which the people are sovereign, but the power of the people to exercise their will collectively was delimited and circumscribed by the Constitution ratified in 1788. Under positivist doctrine, the sovereign people in creating the government of the United States of America laid down a system of rules whereby the valid and authoritative expressions of the will of the people would be given the force of law and would be carried out accordingly. The rule by which the legally valid, authoritative, command of the sovereign can be distinguished from the command of a mere thug or bully is what the legal philosopher H. L. A. Hart called a rule of recognition. In the originalist view, the rule of recognition requires that any judicial judgment accord with the presumed original understanding of the provisions of the Constitution when the Constitution was ratified, thereby becoming the authoritative expression of the sovereign will of the people, unless that original understanding has subsequently been altered by way of the amendment process spelled out in Article V of the Constitution. What Scalia and other originalists are saying is that any interpretation of a provision of the Constitution that conflicts with the original meaning of that provision violates the rule of recognition and is therefore illegitimate. Hence, Scalia’s simmering anger at decisions of the court that he regarded as illegitimate departures from the original meaning of the Constitution.

But legal positivism is not the only theory of law. F. A. Hayek, who, despite his good manners, somehow became a conservative and libertarian icon a generation before Scalia, subjected legal positivism to withering criticism in volume one of Law, Legislation and Liberty. But the classic critique of legal positivism was written a little over a half century ago by Ronald Dworkin, in his essay “Is Law a System of Rules?” (aka “The Model of Rules”). Dworkin’s main argument was that no system of rules can be sufficiently explicit and detailed to cover all possible fact patterns that would have to be adjudicated by a judge. Legal positivists view the exercise of discretion by judges as an exercise of personal will authorized by the sovereign in cases in which no legal rule exactly fits the facts of a case. Dworkin argued that rather than an imposition of judicial will authorized by the sovereign, the exercise of judicial discretion is an application of the deeper principles relevant to the case, thereby allowing the judge to determine which, among the many possible rules that could be applied to the facts of the case, best fits with the totality of the circumstances, including prior judicial decisions, that the judge must take into account. According to Dworkin, the law and the legal system as a whole are not an expression of sovereign will, but a continuing articulation of principles in terms of which specific rules of law must be understood, interpreted, and applied.

The meaning of a legal or Constitutional provision can’t be fixed at a single moment, because, like all social institutions, meaning evolves and develops organically. Not being an expression of the sovereign will, the meaning of a legal term or provision cannot be identified by a putative rule of recognition – e.g., the original-meaning doctrine — that freezes the meaning of the term at a particular moment in time. It is not true, as Scalia and originalists argue, that conceding that the meaning of Constitutional terms and provisions can change and evolve allows unelected judges to substitute their will for the sovereign will enshrined when the Constitution was ratified. When a judge acknowledges that the meaning of a term has changed, the judge does so because that new meaning has already been foreshadowed in earlier cases with which his decision in the case at hand must comport. There is always a danger that the reasoning of a judge is faulty, but faulty reasoning can beset judges claiming to apply the original meaning of a term, as Chief Justice Taney did in his infamous Dred Scott opinion, in which Taney argued that the original meaning of the term “property” included property in human beings.

Here is an example of how a change in meaning may be required by a change in our understanding of a concept. It may not be the best example to shed light on the legal issues, but it is the one that occurs to me as I write this. About a hundred years ago, Bertrand Russell and Alfred North Whitehead were writing one of the great philosophical works of the twentieth century, Principia Mathematica. Their objective was to prove that all of mathematics could be reduced to pure logic. It was a grand and heroic effort that they undertook, and their work will remain a milestone in the history of philosophy. If Russell and Whitehead had succeeded in their effort of reducing mathematics to logic, it could properly be said that mathematics is really the same as logic, and the meaning of the word “mathematics” would be no different from the meaning of the word “logic.” But if the meaning of mathematics were indeed the same as that of logic, it would not be the result of Russell and Whitehead having willed “mathematics” and “logic” to mean the same thing, Russell and Whitehead being possessed of no sovereign power to determine the meaning of “mathematics.” Whether mathematics is really the same as logic depends on whether all of mathematics can be logically deduced from a set of axioms. No matter how much Russell and Whitehead wanted mathematics to be reducible to logic, the factual question of whether mathematics can be reduced to logic has an answer, and the answer is completely independent of what Russell and Whitehead wanted it to be.

Unfortunately for Russell and Whitehead, the Austrian logician Kurt Gödel came along some two decades after they completed the third and final volume of their masterpiece and proved an “incompleteness theorem” showing that mathematics could not be reduced to logic – mathematics is therefore not the same as logic – because in any consistent axiomatic system rich enough to express arithmetic, some true propositions of arithmetic will be logically unprovable. The meaning of mathematics is therefore demonstrably not the same as the meaning of logic. This difference in meaning had to be discovered; it could not be willed.

Actually, it was Humpty Dumpty who famously anticipated the originalist theory that meaning is conferred by an act of will.

“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

In Humpty Dumpty’s doctrine, meaning is determined by a sovereign master. In originalist doctrine, the sovereign master is the presumed will of the people when the Constitution and the subsequent Amendments were ratified.

So the question whether capital punishment is “cruel and unusual” can’t be answered, as Scalia insisted, simply by invoking a rule of recognition that freezes the meaning of “cruel and unusual” at the presumed meaning it had in 1791, because the point of a rule of recognition is to identify the sovereign will that is given the force of law, while the meaning of “cruel and unusual” does not depend on anyone’s will. If a judge reaches a decision based on a meaning of “cruel and unusual” different from the supposed original meaning, the judge is not abusing his discretion; the judge is engaged in judicial reasoning. The reasoning may be good or bad, right or wrong, but judicial reasoning is not rendered illegitimate just because it assigns a meaning to a term different from the supposed original meaning. The test of judicial reasoning is how well it accords with the totality of judicial opinions and relevant principles from which the judge can draw in supporting his reasoning. Invoking a supposed original meaning of what “cruel and unusual” meant to Americans in 1791 does not tell us how to understand the meaning of “cruel and unusual,” just as the question whether logic and mathematics are synonymous cannot be answered by insisting that Russell and Whitehead were right in thinking that mathematics and logic are the same thing. (I note for the record that I personally have no opinion about whether capital punishment violates the Eighth Amendment.)

One reason meanings change is that circumstances change. The meaning of freedom of the press and freedom of speech may have been perfectly clear in 1789, but our conception of what is protected by the First Amendment has certainly expanded since the First Amendment was ratified. As new media for conveying speech have been introduced, the courts have brought those media under the protection of the First Amendment. Scalia made a big deal of joining with the majority in Texas v. Johnson, a 1989 case in which the conviction of a flag burner was overturned. Scalia liked to cite that case as proof of his fidelity to the text of the Constitution; while pouring scorn on the flag burner, Scalia announced that despite his righteous desire to exact a terrible retribution from the bearded weirdo who burned the flag, he had no choice but to follow – heroically, in his estimation — the text of the Constitution.

But flag-burning is certainly a form of symbolic expression, and it is far from obvious that the original meaning of the First Amendment included symbolic expression. To be sure, some forms of symbolic speech were recognized as speech in the eighteenth century, but it could be argued that the original meaning of freedom of speech and the press in the First Amendment was understood narrowly. The compelling reason for affording flag-burning First Amendment protection is not that flag-burning was covered by the original meaning of the First Amendment, but that a line of cases has gradually expanded the notion of what activities are included under what the First Amendment calls “speech.” That is the normal process by which law changes and meanings change, incremental adjustments taking into account unforeseen circumstances, eventually leading judges to expand the meanings ascribed to old terms, because the expanded meanings comport better with an accumulation of precedents and the relevant principles on which judges have relied in earlier cases.

But perhaps the best example of how changes in meaning emerge organically from our efforts to cope with changing and unforeseen circumstances, rather than being the willful impositions of a higher authority, is provided by originalism itself, because “originalism” was originally about the original intention of the Framers of the Constitution. It was only when it became widely accepted that the original intention of the Framers was not something that could be ascertained that people like Antonin Scalia decided to change the meaning of “originalism,” so that it was no longer about the original intention of the Framers, but about the original meaning of the Constitution when it was ratified. So what we have here is a perfect example of how the meaning of a well-understood term came to be changed, because the original meaning of the term was found to be problematic. And who was responsible for this change in meaning? Why, the very same people who insist that it is forbidden to tamper with the original meaning of the terms and provisions of the Constitution. Yet they had no problem changing the meaning of their own doctrine of Constitutional interpretation. Do I blame them for changing the meaning of the originalist doctrine? Not one bit. But if originalists were only marginally more introspective than they seem to be, they might have realized that changes in meaning are perfectly normal and legitimate, especially when trying to give concrete meaning to abstract terms in a way that best fits with the entire tradition of judicial interpretation embodied in the totality of all previous judicial decisions. That is the true task of a judge, not a pointless quest for original meaning.

Paul Krugman Suffers a Memory Lapse

Paul Krugman, who is very upset with Republicans on both sides of the Trump divide, ridiculed Mitt Romney’s attack on Trump for being a protectionist. Romney warned that if Trump implemented his proposed protectionist policies, the result would likely be a trade war and a recession. Now I totally understand Krugman’s frustration with what’s happening inside the Republican Party; it’s not a pretty sight. But Krugman seems just a tad too eager to find fault with Romney, especially since the danger that a trade war could trigger a recession, while perhaps overblown, is hardly delusional, and, as Krugman ought to recall, is a danger that Democrats have also warned against. (I’ll come back to that point later.) Here’s the quote that got Krugman’s back up:

If Donald Trump’s plans were ever implemented, the country would sink into prolonged recession. A few examples. His proposed 35 percent tariff-like penalties would instigate a trade war and that would raise prices for consumers, kill our export jobs and lead entrepreneurs and businesses of all stripes to flee America.

Krugman responded:

After all, doesn’t everyone know that protectionism causes recessions? Actually, no. There are reasons to be against protectionism, but that’s not one of them.

Think about the arithmetic (which has a well-known liberal bias). Total final spending on domestically produced goods and services is

Total domestic spending + Exports – Imports = GDP

Now suppose we have a trade war. This will cut exports, which other things equal depresses the economy. But it will also cut imports, which other things equal is expansionary. For the world as a whole, the cuts in exports and imports will by definition be equal, so as far as world demand is concerned, trade wars are a wash.

Actually, Krugman knows better than to argue that the comparative-statics response to a parameter change (especially a large change) can be inferred from an accounting identity. The accounting identity always holds, but the equilibrium position does change, and you can’t just assume that the equilibrium rate of spending is unaffected by the parameter change or by the adjustment path that follows the parameter change. So Krugman’s assertion that a trade war cannot cause a recession depends on an implicit assumption that a trade war would be accompanied by a smooth reallocation of resources from producing tradable to producing non-tradable goods and that the wealth losses from the depreciation of specific human and non-human capital invested in the tradable-goods sector would have small repercussions on aggregate demand. That might be true, but the bigger the trade war and the more rounds of reciprocal retaliation, the greater the danger of substantial wealth losses and other disruptions. The fall in oil prices over the past year or two was supposed to be a good thing for the world economy. I think that for a lot of reasons reduced oil prices are, on balance, a good thing, but we also have reason to believe that they have had negative effects, especially on financial institutions holding a lot of assets sensitive to the price of oil. A trade war would have all the negatives of a steep decline in oil prices, but none of the positives.
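The comparative-statics point can be made concrete with a toy open-economy Keynesian cross. This is a sketch with made-up parameters of my own, not anything Krugman or I have estimated: the accounting identity holds in every equilibrium, yet the equilibrium level of GDP still depends on the behavioral parameters that a trade war changes.

```python
# Toy open-economy Keynesian cross (illustrative parameters only).
# Domestic absorption: A + c*Y  (autonomous plus induced spending)
# Imports: m*Y; Exports: X (exogenous)
# Equilibrium condition: Y = A + c*Y + X - m*Y

def equilibrium_gdp(A, c, X, m):
    """Solve Y = A + c*Y + X - m*Y for Y."""
    return (A + X) / (1 - c + m)

def identity_holds(A, c, X, m):
    """Confirm the accounting identity is satisfied at equilibrium."""
    Y = equilibrium_gdp(A, c, X, m)
    domestic_spending = A + c * Y
    return abs(domestic_spending + X - m * Y - Y) < 1e-6

# Baseline: balanced trade (exports = imports = 50), Y = 250.
y_base = equilibrium_gdp(A=100, c=0.6, X=50, m=0.2)

# Trade war: exports collapse by more than import substitution offsets.
y_war = equilibrium_gdp(A=100, c=0.6, X=20, m=0.1)

print(round(y_base, 6), round(y_war, 6))  # 250.0 240.0
```

The identity is satisfied in both equilibria, yet GDP falls from 250 to 240: whether spending ends up higher or lower is determined by the parameters, which the identity alone cannot tell you.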

But didn’t the Smoot-Hawley tariff cause the Great Depression? No. There’s no evidence at all that it did. Yes, trade fell a lot between 1929 and 1933, but that was almost entirely a consequence of the Depression, not a cause. (Trade actually fell faster during the early stages of the 2008 Great Recession than it did after 1929.) And while trade barriers were higher in the 1930s than before, this was partly a response to the Depression, partly a consequence of deflation, which made specific tariffs (i.e., tariffs that are stated in dollars per unit, not as a percentage of value) loom larger.

I certainly would not claim to understand fully the effects of the Smoot-Hawley tariff, the question of effects being largely an empirical one that I haven’t studied, but I’m not sure that the profession has completely figured out those effects either. I know that Doug Irwin, who wrote the book on the Smoot-Hawley tariff and whose judgment I greatly respect, doesn’t think that the Smoot-Hawley tariff was a cause of the Great Depression, though he does think it made the Depression worse than it would otherwise have been. It certainly was not the chief cause, and I am not even saying that it was a leading cause, but there is a respectable argument to be made that it played a bigger role in the Depression than even Irwin acknowledges.

In brief, the argument is that there was a lot of international debt outstanding – especially allied war loans, German war reparations, and German local-government borrowing during the 1920s. To be able to make their scheduled debt payments, Germany and other debtor nations had to run trade surpluses. Increased tariffs on imported goods meant that, under the restored gold standard of the late 1920s, debtor nations could run the export surpluses necessary to meet their debt obligations only by reducing their domestic wage levels sufficiently to overcome the rising trade barriers. Germany, of course, was the country most severely affected, and the prospect of German default undoubtedly undermined the solvency of many financial institutions, in Europe and America, with German debt on their balance sheets. In other words, the Smoot-Hawley tariff intensified deflationary pressure and financial instability during the Great Depression, notwithstanding the tendency of tariffs to increase prices of protected goods.

Krugman takes a parting shot at Romney:

Protectionism was the only reason he gave for believing that Trump would cause a recession, which I think is kind of telling: the GOP’s supposedly well-informed, responsible adult, trying to save the party, can’t get basic economics right at the one place where economics is central to his argument.

I’m not sure what other reason there is to think that Trump would cause a recession. He is proposing to cut taxes by a lot, and to increase military spending by a lot without cutting entitlements. So given that his fiscal policy seems to be calculated to increase the federal deficit by a lot, what reason, besides starting a trade war, is there to think that Trump would cause a recession? And as I said, right or wrong, Romney is hardly alone in thinking that trade wars can cause recessions. Indeed, Romney didn’t even mention the Smoot-Hawley tariff, but Krugman evidently forgot the classic exchange between Al Gore and the previous incarnation of protectionist populist outrage in an anti-establishment billionaire candidate for President:

GORE I’ve heard Mr. Perot say in the past that, as the carpenters says, measure twice and cut once. We’ve measured twice on this. We have had a test of our theory and we’ve had a test of his theory. Over the last five years, Mexico’s tariffs have begun to come down because they’ve made a unilateral decision to bring them down some, and as a result there has been a surge of exports from the United States into Mexico, creating an additional 400,000 jobs, and we can create hundreds of thousands of more if we continue this trend. We know this works. If it doesn’t work, you know, we give six months notice and we’re out of it. But we’ve also had a test of his theory.

PEROT When?

GORE In 1930, when the proposal by Mr. Smoot and Mr. Hawley was to raise tariffs across the board to protect our workers. And I brought some pictures, too.

[Larry] KING You’re saying Ross is a protectionist?

GORE This is, this is a picture of Mr. Smoot and Mr. Hawley. They look like pretty good fellows. They sounded reasonable at the time; a lot of people believed them. The Congress passed the Smoot-Hawley Protection Bill. He wants to raise tariffs on Mexico. They raised tariffs, and it was one of the principal causes, many economists say the principal cause, of the Great Depression in this country and around the world. Now, I framed this so you can put it on your wall if you want to.

You can watch it here

Currency Depreciation and Monetary Expansion Redux

Last week Frances Coppola and I exchanged posts about competitive devaluation. Frances chided me for favoring competitive devaluation, which, in her view, accomplishes nothing in a world of fiat currencies, because the exchange rates end up unchanged. Say the US devalues the dollar by 10% against the pound and Britain devalues the pound by 10% against the dollar; it’s as if nothing happened. In reply, I pointed out that if the competitive devaluation is achieved by monetary expansion (the US buying pounds with dollars to drive up the value of the pound and the UK buying dollars with pounds to drive up the value of the dollar), the result must be increased prices in both the US and the UK. Frances responded that our disagreement was just a semantic misunderstanding, because she was talking about competitive devaluation in the absence of monetary expansion; so it’s all good.

I am, more or less, happy with that resolution of our disagreement, but I am not quite persuaded that the disagreement between us is merely semantic, as Frances seems conflicted about Hawtrey’s argument, carried out in the context of a gold standard, which served as my proof text for the proposition that competitive devaluation really is expansionary. On the one hand, she seems to distinguish between the expansionary effect of competitive devaluation relative to gold – Hawtrey’s case – and the beggar-my-neighbor effect of competitive devaluation of fiat currencies relative to each other; on the other hand, she also intimates that even Hawtrey got it wrong in arguing that competitive devaluation is expansionary. Now, much as I admire Hawtrey, I have no problem with criticizing him; it just seems that Frances hasn’t decided whether she does – or doesn’t – agree with him.

But what I want to do in this post is not to argue with Frances, though some disagreements may be impossible to cover up; I just want to explain the relationship between competitive devaluation and monetary expansion.

First some context. One of the reasons that I — almost exactly four years ago – wrote my post about Hawtrey and competitive devaluations (aka currency wars) is that critics of quantitative easing had started to make the argument that the real point of quantitative easing was to gain a competitive advantage over other countries by depreciating – or devaluing – their currencies. What I was trying to show was that if a currency is being depreciated by monetary expansion (aka quantitative easing), then, as Frances now seems – but I’m still not sure – ready to concede, the combination of monetary expansion and currency devaluation has a net expansionary effect on the whole world, and the critics of quantitative easing are wrong. Because the competitive devaluation argument has so often been made together with a criticism of quantitative easing, I assumed, carelessly it appears, that in criticizing my post, Frances was disagreeing with my support of currency depreciation in the context of monetary expansion and quantitative easing.

With that explanatory preface out of the way, let’s think about how to depreciate a fiat currency on the foreign exchange markets. A market-clearing exchange rate between two fiat currencies can be determined in two ways (though there is often a little of both in practice): 1) a currency peg and 2) a floating rate. Under a currency peg, one or both countries are committed to buying and selling the other currency in unlimited quantities at the pegged (official) rate. If neither country is prepared to buy or sell its currency in unlimited quantities at the pegged rate, the peg is not a true peg, because the peg will not withstand a sufficient shift in the relative market demands for the currencies. If the market demand is inconsistent with the quasi-peg, either the pegged rate will cease to be a market-clearing rate, with a rationing system imposed while the appearance of a peg is maintained, or the exchange rate will be allowed to float to clear the market. A peg can be one-sided or two-sided, but a two-sided peg is possible only so long as both countries agree on the exchange rate to be pegged; if they disagree, the system goes haywire. To use Nick Rowe’s terminology, the typical case of a currency peg involves an alpha (or dominant, or reserve) currency which is taken as a standard and a beta currency which is made convertible into the alpha currency at a rate chosen by the issuer of the beta currency.

With floating currencies, the market is cleared by adjustment of the exchange rate rather than by currency purchases or sales by the monetary authority to maintain the peg. In practice, monetary authorities generally do buy and sell their currencies in the market — sometimes with, and sometimes without, an exchange-rate target — so the operation of actual foreign-exchange markets lies somewhere in between the two poles of currency pegs and floating rates.

What does this tell us about currency depreciation? First, it is possible for a country to devalue its currency against another currency to which its currency is pegged by changing the peg unilaterally. If a peg is one-sided, i.e., a beta currency is tied to an alpha, the issuer of the beta currency chooses the peg unilaterally. If the peg is two-sided, then the peg cannot be changed unilaterally; the two currencies are merely different denominations of a single currency, and a unilateral change in the peg means that the common currency has been abandoned and replaced by two separate currencies.

So what happens if a beta currency pegged to an alpha currency, e.g., the Hong Kong dollar, which is pegged to the US dollar, is devalued? Say Hong Kong has an unemployment problem and attributes the problem to Hong Kong wages being too high for its exports to compete in world markets. Hong Kong decides to solve the problem by devaluing its dollar from 13 cents to 10 cents. Would the devaluation be expansionary or contractionary for the rest of the world?

Hong Kong is the paradigmatic small open economy. Its export prices are quoted in US dollars and determined in world markets in which HK is a small player, so the prices of HK exports quoted in US dollars don’t change, but measured in HK dollars those prices rise by 30%. Suddenly, HK exporters become super-profitable, and hire as many workers as they can to increase output. Hong Kong’s unemployment problem is solved.
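The 30% figure is just arithmetic on the pegs. A minimal sketch, using the post’s hypothetical 13-cent and 10-cent pegs (the actual HK dollar peg is about 12.8 US cents, but the hypothetical numbers are what matter here):

```python
# Hypothetical devaluation from the post: US$0.13 per HK$ down to US$0.10.
# World prices of HK exports are set in US dollars and don't move.
usd_export_price = 1.00   # an export priced at US$1 in world markets

old_peg = 0.13            # US dollars per HK dollar, before devaluation
new_peg = 0.10            # after devaluation

price_in_hkd_before = usd_export_price / old_peg   # ~7.69 HK$
price_in_hkd_after = usd_export_price / new_peg    # 10.00 HK$

pct_rise = (price_in_hkd_after / price_in_hkd_before - 1) * 100
print(f"{pct_rise:.0f}%")  # 30%
```

The unchanged US-dollar price divided by a cheaper HK dollar yields 0.13/0.10 = 1.3 times the old HK-dollar price, hence the 30% windfall to exporters.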

(Brief digression. There are those who reject this reasoning, because it supposedly assumes that Hong Kong workers suffer from money illusion. If workers are unemployed because their wages are too high relative to the Hong Kong producer price level, why don’t they accept a cut in nominal wages? We don’t know. But if they aren’t willing to accept a nominal-wage cut, why do they allow themselves to be tricked into accepting a real-wage cut by way of a devaluation, unless they are suffering from money illusion? And we all know that it’s irrational to suffer from money illusion, because money is neutral. The question is a good question, but the answer is that the argument for monetary neutrality and for the absence of money illusion presumes a comparison between two equilibrium states. But the devaluation analysis above did not start from an equilibrium; it started from a disequilibrium. So the analysis can’t be refuted by saying that it implies that workers suffer from money illusion.)

The result of the Hong Kong export boom and corresponding increase in output and employment is that US dollars will start flowing into Hong Kong as payment for all those exports. So the next question is what happens to those dollars. With no change in the demand of Hong Kong residents to hold US dollars, they will presumably want to exchange their US dollars for Hong Kong dollars, so that the quantity of Hong Kong dollars held by Hong Kong residents will increase. Because domestic income and expenditure in Hong Kong are rising, some of the new Hong Kong dollars will probably be held, but some will be spent. The increased spending as a result of rising incomes and a desire to convert some of the increased cash holdings into other assets will spill over into increased purchases by Hong Kong residents of imports and foreign assets. The increase in domestic income and expenditure and the increase in import prices will inevitably cause an increase in prices measured in HK dollars.

Thus, insofar as income, expenditure and prices are rising in Hong Kong, the immediate real exchange rate advantage resulting from devaluation will dissipate, though not necessarily completely, as the HK prices of non-tradables including labor services are bid up in response to the demand increase following devaluation. The increase in HK prices and increased spending by HK residents on imported goods will have an expansionary effect on the rest of the world (albeit a small one because Hong Kong is a small open economy). That’s the optimistic scenario.

But there is also a pessimistic scenario that was spelled out by Max Corden in his classic article on exchange rate protection. In this scenario, the HK monetary authority either reduces the quantity of HK dollars to offset the increase in HK dollars caused by its export surplus, or it increases the demand for HK dollars to match the increase in the quantity of HK dollars. It can reduce the quantity of HK dollars by engaging in open-market sales of domestic securities in its portfolio, and it can increase the demand for HK dollars by increasing the required reserves that HK banks must hold against the HK dollars (either deposits or banknotes) that they create. Alternatively, the monetary authority could pay interest on the reserves held by HK banks at the central bank as a way of increasing the amount of HK dollars demanded. By eliminating the excess supply of HK dollars through one or more of these methods, the central bank prevents the increase in HK spending and the reduction in net exports that would otherwise have occurred in response to the HK devaluation. That was the great theoretical insight of Corden’s analysis: the beggar-my-neighbor effect of devaluation is not caused by the devaluation, but by the monetary policy that prevents the increase in domestic income associated with devaluation from spilling over into increased expenditure. This can only be accomplished by a monetary policy that deliberately creates a chronic excess demand for cash, an excess demand that can only be satisfied by way of an export surplus.

The effect (though just second-order) of the HK policy on US prices can also be determined, because the policy of the HK monetary authority involves an increase in its demand to hold US FX reserves. If it chooses to hold the additional dollar reserves in actual US dollars, the increase in the demand for US base money will, ceteris paribus, cause the US price level to fall. Alternatively, if the HK monetary authority chooses to hold its dollar reserves in the form of US Treasuries, the yield on those Treasuries will tend to fall. A reduced yield on Treasuries will increase the desired holdings of dollars, also implying a reduced US price level. Of course, the US is capable of nullifying the deflationary effect of HK currency manipulation by monetary expansion; the point is that the HK policy will have a (slight) deflationary effect on the US unless it is counteracted.

If I were writing a textbook, I would say that it is left as an exercise for the reader to work out the analysis of devaluation in the case of floating currencies. So if you feel like stopping here, you probably won’t be missing very much. But just to cover all the bases, I will go through the argument quickly. If a country wants to drive down the floating exchange rate between its currency and another currency, the monetary authority can buy the foreign currency in exchange for its own currency in the FX markets. It’s actually not necessary to intervene directly in FX markets to do this; issuing more currency by open-market operations (aka quantitative easing) would also work, though the effect in FX markets will show up more quickly under direct intervention than if the expansion is carried out by open-market purchases. So in the simplest case, currency depreciation is actually just another term for monetary expansion. However, the link between monetary expansion and currency depreciation can be broken if a central bank simultaneously buys the foreign currency with new issues of its own currency while making open-market sales of assets to mop up the home currency issued while intervening in the FX market. Alternatively, it can intervene in the FX market while imposing increased reserve requirements on banks, thereby forcing them to hold the newly issued currency, or by paying banks a sufficiently high interest rate on reserves held at the central bank that they willingly hold the newly issued currency.

So, it is my contention that there is no such thing as pure currency depreciation without monetary expansion. If currency depreciation is to be achieved without monetary expansion, the central bank must also simultaneously either carry out open-market sales to mop up the currency issued in the process of driving down the exchange rate of the currency, or impose reserve requirements on banks, or pay interest on bank reserves, thereby creating an increased demand for the additional currency that was issued to drive down the exchange value of the home currency.

Competitive Devaluation Plus Monetary Expansion Does Create a Free Lunch

I want to begin this post by saying that I’m flattered by, and grateful to, Frances Coppola for the first line of her blog post yesterday. But – and I note that imitation is the sincerest form of flattery – I fear I have to take issue with her over competitive devaluation.

Frances quotes at length from a quotation from Hawtrey’s Trade Depression and the Way Out that I used in a post I wrote almost four years ago. Hawtrey explained why competitive devaluation in the 1930s was – and in my view still is – not a problem (except under extreme assumptions, which I will discuss at the end of this post). Indeed, I called competitive devaluation a free lunch, providing her with a title for her post. Here’s the passage that Frances quotes:

This competitive depreciation is an entirely imaginary danger. The benefit that a country derives from the depreciation of its currency is in the rise of its price level relative to its wage level, and does not depend on its competitive advantage. If other countries depreciate their currencies, its competitive advantage is destroyed, but the advantage of the price level remains both to it and to them. They in turn may carry the depreciation further, and gain a competitive advantage. But this race in depreciation reaches a natural limit when the fall in wages and in the prices of manufactured goods in terms of gold has gone so far in all the countries concerned as to regain the normal relation with the prices of primary products. When that occurs, the depression is over, and industry is everywhere remunerative and fully employed. Any countries that lag behind in the race will suffer from unemployment in their manufacturing industry. But the remedy lies in their own hands; all they have to do is to depreciate their currencies to the extent necessary to make the price level remunerative to their industry. Their tardiness does not benefit their competitors, once these latter are employed up to capacity. Indeed, if the countries that hang back are an important part of the world’s economic system, the result must be to leave the disparity of price levels partly uncorrected, with undesirable consequences to everybody. . . .

The picture of an endless competition in currency depreciation is completely misleading. The race of depreciation is towards a definite goal; it is a competitive return to equilibrium. The situation is like that of a fishing fleet threatened with a storm; no harm is done if their return to a harbor of refuge is “competitive.” Let them race; the sooner they get there the better. (pp. 154-57)

Here’s Frances’s take on Hawtrey and me:

The highlight “in terms of gold” is mine, because it is the key to why Glasner is wrong. Hawtrey was right in his time, but his thinking does not apply now. We do not value today’s currencies in terms of gold. We value them in terms of each other. And in such a system, competitive devaluation is by definition beggar-my-neighbour.

Let me explain. Hawtrey defines currency values in relation to gold, and advertises the benefit of devaluing in relation to gold. The fact that gold is the standard means there is no direct relationship between my currency and yours. I may devalue my currency relative to gold, but you do not have to: my currency will be worth less compared to yours, but if the medium of account is gold, this does not matter since yours will still be worth the same amount in terms of gold. Assuming that the world price of gold remains stable, devaluation therefore principally affects the DOMESTIC price level.  As Hawtrey says, there may additionally be some external competitive advantage, but this is not the principal effect and it does not really matter if other countries also devalue. It is adjusting the relationship of domestic wages and prices in terms of gold that matters, since this eventually forces down the price of finished goods and therefore supports domestic demand.

Conversely, in a floating fiat currency system such as we have now, if I devalue my currency relative to yours, your currency rises relative to mine. There may be a domestic inflationary effect due to import price rises, but we do not value domestic wages or the prices of finished goods in terms of other currencies, so there can be no relative adjustment of wages to prices such as Hawtrey envisages. Devaluing the currency DOES NOT support domestic demand in a floating fiat currency system. It only rebalances the external position by making imports relatively more expensive and exports relatively cheaper.

This difference is crucial. In a gold standard system, devaluing the currency is a monetary adjustment to support domestic demand. In a floating fiat currency system, it is an external adjustment to improve competitiveness relative to other countries.

Actually, Frances did not quote the entire passage from Hawtrey that I reproduced in my post, and Frances would have done well to quote from, and to think carefully about, what Hawtrey said in the paragraphs preceding the ones she quoted. Here they are:

When Great Britain left the gold standard, deflationary measures were everywhere resorted to. Not only did the Bank of England raise its rate, but the tremendous withdrawals of gold from the United States involved an increase of rediscounts and a rise of rates there, and the gold that reached Europe was immobilized or hoarded. . . .

The consequence was that the fall in the price level continued. The British price level rose in the first few weeks after the suspension of the gold standard, but then accompanied the gold price level in its downward trend. This fall of prices calls for no other explanation than the deflationary measures which had been imposed. Indeed what does demand explanation is the moderation of the fall, which was on the whole not so steep after September 1931 as before.

Yet when the commercial and financial world saw that gold prices were falling rather than sterling prices rising, they evolved the purely empirical conclusion that a depreciation of the pound had no effect in raising the price level, but that it caused the price level in terms of gold and of those currencies in relation to which the pound depreciated to fall.

For any such conclusion there was no foundation. Whenever the gold price level tended to fall, the tendency would make itself felt in a fall in the pound concurrently with the fall in commodities. But it would be quite unwarrantable to infer that the fall in the pound was the cause of the fall in commodities.

On the other hand, there is no doubt that the depreciation of any currency, by reducing the cost of manufacture in the country concerned in terms of gold, tends to lower the gold prices of manufactured goods. . . .

But that is quite a different thing from lowering the price level. For the fall in manufacturing costs results in a greater demand for manufactured goods, and therefore the derivative demand for primary products is increased. While the prices of finished goods fall, the prices of primary products rise. Whether the price level as a whole would rise or fall it is not possible to say a priori, but the tendency is toward correcting the disparity between the price levels of finished products and primary products. That is a step towards equilibrium. And there is on the whole an increase of productive activity. The competition of the country which depreciates its currency will result in some reduction of output from the manufacturing industry of other countries. But this reduction will be less than the increase in the country’s output, for if there were no net increase in the world’s output there would be no fall of prices.

So Hawtrey was refuting precisely the argument raised by Frances. Because the value of gold was not stable after Britain left the gold standard and depreciated its currency, the deflationary effect in other countries was mistakenly attributed to the British depreciation. But Hawtrey points out that this reasoning was backwards. The fall in prices in the rest of the world was caused by deflationary measures that were increasing the demand for gold and causing prices in terms of gold to continue to fall, as they had been since 1929. It was the fall in prices in terms of gold that was causing the pound to depreciate, not the other way around.

Frances identifies an important difference between an international system of fiat currencies in which currency values are determined in relationship to each other in foreign exchange markets and a gold standard in which currency values are determined relative to gold. However, she seems to be suggesting that currency values in a fiat money system affect only the prices of imports and exports. But that can’t be so, because if the prices of imports and exports are affected, then the prices of the goods that compete with imports and exports must also be affected. And if the prices of tradable goods are affected, then the prices of non-tradables will also — though probably with a lag — eventually be affected as well. Of course, insofar as relative prices before the change in currency values were not in equilibrium, one can’t predict that all prices will adjust proportionately after the change.

To make the point in more abstract terms, the principle of purchasing power parity (PPP) operates under both a gold standard and a fiat money standard, and one can’t just assume that the gold standard has some special property that allows PPP to hold, while PPP is somehow disabled under a fiat currency system. Absent an explanation of why PPP doesn’t hold in a floating fiat currency system, the assertion that devaluing a currency (i.e., driving down the exchange value of one currency relative to other currencies) “is an external adjustment to improve competitiveness relative to other countries” is baseless.
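For readers who prefer symbols, the PPP relation at issue can be written in its standard textbook form (this is the conventional statement, not a claim that it holds exactly at every moment):

```latex
% Absolute PPP: the nominal exchange rate E (home currency per unit of
% foreign currency) reflects the ratio of home to foreign price levels.
E = \frac{P}{P^{*}}
% Equivalently, the real exchange rate
q = \frac{E\,P^{*}}{P}
% is constant when PPP holds. A nominal devaluation raises E, but as the
% prices of tradables, import-competing goods, and (with a lag) non-tradables
% are bid up, P rises, pushing q back toward its PPP level.
```

This is why, absent an argument for why $q$ behaves differently under fiat money than under gold, a devaluation cannot be presumed to deliver a permanent competitiveness gain.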

I would also add a semantic point about this part of Frances’s argument:

We do not value today’s currencies in terms of gold. We value them in terms of each other. And in such a system, competitive devaluation is by definition beggar-my-neighbour.

Unfortunately, Frances falls into the common trap of believing that a definition actually tells us something about the real world, when in fact a definition tells us no more than what meaning is supposed to be attached to a word. The real world is invariant with respect to our definitions; our definitions convey no information about reality. So for Frances to say – apparently with the feeling that she is thereby proving her point – that competitive devaluation is by definition beggar-my-neighbour is completely uninformative about what happens in the world; she is merely informing us about how she chooses to define the words she is using.

Frances goes on to refer to this graph taken from Gavyn Davies in the Financial Times, concerning a speech made by Stanley Fischer about research done by Fed staff economists showing that the 20% appreciation in the dollar over the past 18 months has reduced the rate of US inflation by as much as 1% and is projected to cause US GDP in three years to be about 3% lower than it would have been without dollar appreciation.

[Gavyn Davies chart, Financial Times]

Frances focuses on these two comments by Gavyn. First:

Importantly, the impact of the higher exchange rate does not reverse itself, at least in the time horizon of this simulation – it is a permanent hit to the level of GDP, assuming that monetary policy is not eased in the meantime.

And then:

According to the model, the annual growth rate should have dropped by about 0.5-1.0 per cent by now, and this effect should increase somewhat further by the end of this year.

Then, Frances continues:

But of course this assumes that the US does not ease monetary policy further. Suppose that it does?

The hit to net exports shown on the above graph is caused by imports becoming relatively cheaper and exports relatively more expensive as other countries devalue. If the US eased monetary policy in order to devalue the dollar and support nominal GDP, the relative prices of imports and exports would rebalance – to the detriment of those countries attempting to export to the US.

What Frances overlooks is that by easing monetary policy to support nominal GDP, the US, aside from moderating or reversing the increase in its real exchange rate, would have raised total US aggregate demand, causing US income and employment to increase as well. Increased US income and employment would have increased US demand for imports (and for the products of American exporters), thereby reducing US net exports and increasing aggregate demand in the rest of the world. That was Hawtrey’s argument for why competitive devaluation causes an increase in total world demand. Frances continues with a description of the predicament of the countries affected by US currency devaluation:

They have three choices: they respond with further devaluation of their own currencies to support exports, they impose import tariffs to support their own balance of trade, or they accept the deflationary shock themselves. The first is the feared “competitive devaluation” – exporting deflation to other countries through manipulation of the currency; the second, if widely practised, results in a general contraction of global trade, to everyone’s detriment; and you would think that no government would willingly accept the third.

But, as Hawtrey showed, competitive devaluation is not a problem. Depreciating your currency cushions the fall in nominal income and aggregate demand. If aggregate demand is kept stable, then the increased output, income, and employment associated with a falling exchange rate will spill over into a demand for the exports of other countries and an increase in the home demand for exportable home products. So it’s a win-win situation.

However, the Fed has permitted passive monetary tightening over the last eighteen months, and in December 2015 embarked on active monetary tightening in the form of interest rate rises. Davies questions the rationale for this, given the extraordinary rise in the dollar REER and the growing evidence that the US economy is weakening. I share his concern.

And I share his concern, too. So what are we even arguing about? Equally troubling, passive tightening has reduced US demand for imports and for US exportable products, so it has also had negative indirect effects on aggregate demand in the rest of the world.

Although currency depreciation generally tends to increase the home demand for imports and for exportables, there are in fact conditions when the general rule that competitive devaluation is expansionary for all countries may be violated. In a number of previous posts (e.g., this, this, this, this and this) about currency manipulation, I have explained that when currency depreciation is undertaken along with a contractionary monetary policy, the terms-of-trade effect predominates without any countervailing effect on aggregate demand. If a country depreciates its exchange rate by intervening in foreign-exchange markets, buying foreign currencies with its own currency, thereby raising the value of foreign currencies relative to its own currency, it is also increasing the quantity of the domestic currency in the hands of the public. Increasing the quantity of domestic currency tends to raise domestic prices, thereby reversing, though probably with a lag, the effect on the currency’s real exchange rate. To prevent the real exchange rate from returning to its previous level, the monetary authority must sterilize the issue of domestic currency with which it purchased foreign currencies. This can be done by open-market sales of assets by the central bank, or by imposing increased reserve requirements on banks, thereby forcing banks to hold the new currency that had been created to depreciate the home currency.
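The sterilization mechanics can be laid out as a stylized balance-sheet exercise; the quantities below are hypothetical and stand in for no particular country.

```python
# Stylized balance-sheet accounting for sterilized FX intervention
# (hypothetical quantities, in units of domestic currency).

base_money = 1000.0          # domestic monetary base held by the public and banks
fx_reserves = 200.0          # central bank's foreign-exchange reserves
domestic_securities = 300.0  # central bank's holdings of domestic assets

# Step 1: intervene in the FX market, buying foreign currency
# with newly issued domestic currency.
intervention = 50.0
fx_reserves += intervention
base_money += intervention   # unsterilized, this expands the monetary base

# Step 2: sterilize with an open-market sale of domestic securities,
# mopping up the currency issued in step 1.
domestic_securities -= intervention
base_money -= intervention   # base money returns to its initial level

# Net effect: FX reserves are higher, base money unchanged -- the combination
# of depreciation and tight money described in the text.
print(base_money, fx_reserves, domestic_securities)
```

The design point is that step 2 is what turns an ordinary depreciation into exchange-rate protection: without it, the new base money would be spent and prices would adjust.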

This sort of currency manipulation, or exchange-rate protection, as Max Corden referred to it in his classic paper (reprinted here), is very different from conventional currency depreciation brought about by monetary expansion. The combination of currency depreciation and tight money creates an ongoing shortage of cash, so that the desired additional cash balances can be obtained only by way of reduced expenditures and a consequent export surplus. Since World War II, Japan, Germany, Taiwan, South Korea, and China are among the countries that have used currency undervaluation and tight money as a mechanism for exchange-rate protectionism in promoting industrialization. But exchange-rate protection is possible not only under a fiat currency system. Currency manipulation was also possible under the gold standard, as happened when France restored the gold standard in 1928, and pegged the franc to the dollar at a lower exchange rate than the franc had reached prior to the restoration of convertibility. That depreciation was accompanied by increased reserve requirements on French banknotes, providing the Bank of France with a continuing inflow of foreign exchange reserves with which it was able to pursue its insane policy of accumulating gold, thereby precipitating, with a major assist from the high-interest rate policy of the Fed, the deflation that turned into the Great Depression.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typically charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph. D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
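The extension described above amounts to appending an expected future price level to the model, which makes the standard Fisher decomposition of interest rates available (textbook form, given here only for reference):

```latex
% Nominal rate i, real rate r, expected inflation \pi^{e}:
1 + i = (1 + r)\,\frac{E\!\left[P_{t+1}\right]}{P_t}
\quad\Longrightarrow\quad
i \approx r + \pi^{e}
% With an expected future price level in the model, the nominal rate can be
% meaningfully decomposed into a real rate and expected inflation, which the
% original single-period IS-LM could not do.
```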

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which consumers can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

 

This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.
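To spell out the physics analogy in the simplest possible terms (my gloss, not Jason's): near an equilibrium x* of a dynamical system, where f(x*) = 0, a first-order Taylor expansion reduces the dynamics to a linear rule,

```latex
% Dynamics \dot{x} = f(x) near an equilibrium x^* with f(x^*) = 0:
\dot{x} = f(x) \approx f(x^*) + f'(x^*)\,(x - x^*) = f'(x^*)\,(x - x^*)
```

an approximation that is valid only while x − x* stays small. The comparative-statics "laws" of microeconomics have exactly this local character.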

How not to Win Friends and Influence People

Last week David Beckworth and Ramesh Ponnuru wrote a very astute op-ed article in the New York Times explaining how the Fed was tightening its monetary policy in 2008 even as the economy was rapidly falling into recession. Although there are a couple of substantive points on which I might take issue with Beckworth and Ponnuru (more about that below), I think that on the whole they do a very good job of covering the important points about the 2008 financial crisis, given that their article had fewer than 1,000 words.

That said, Beckworth and Ponnuru made a really horrible – to me incomprehensible – blunder. For some reason, in the second paragraph of their piece, after having recounted in their first paragraph the conventional narrative of the 2008 financial crisis as an inevitable result of the housing bubble and the associated misconduct of the financial industry, Beckworth and Ponnuru cite Ted Cruz as the spokesman for the alternative view that they are about to present. They compound that blunder in a disclaimer identifying one of them – presumably Ponnuru – as a friend of Ted Cruz – for some recent pro-Cruz pronouncements from Ponnuru see here, here, and here – thereby transforming what might have been a piece of neutral policy analysis into a pro-Cruz campaign document. Aside from the unseemliness of turning Cruz into the poster boy for Market Monetarism and NGDP Level Targeting, when, as recently as last October 28, Mr. Cruz was advocating resurrection of the gold standard while bashing the Fed for debasing the currency, a shout-out to Ted Cruz is obviously not a gesture calculated to engage readers (of the New York Times, for heaven's sake) and predispose them to be receptive to the message Beckworth and Ponnuru want to convey.

I suppose that this would be the appropriate spot for me to add a disclaimer of my own. I do not know, and am no friend of, Ted Cruz, but I was an FTC employee during Cruz's brief tenure at the agency from July 2002 to December 2003. I can also affirm that I have absolutely no recollection of having ever seen or interacted with him while he was at the agency or since, and have spoken to only one current FTC employee who does remember him.

Predictably, Beckworth and Ponnuru provoked a barrage of negative responses to their argument that the Fed was responsible for the 2008 financial crisis by not easing monetary policy for most of 2008 when, even before the financial crisis, the economy was sliding into a deep recession. Much of the criticism focuses on the ambiguous nature of the concepts of causation and responsibility when hardly any political or economic event is the direct result of just one cause. So to say that the Fed caused or was responsible for the 2008 financial crisis cannot possibly mean that the Fed single-handedly brought it about, and that, but for the Fed’s actions, no crisis would have occurred. That clearly was not the case; the Fed was operating in an environment in which not only its past actions but the actions of private parties and public and political institutions increased the vulnerability of the financial system. To say that the Fed’s actions of commission or omission “caused” the financial crisis in no way absolves all the other actors from responsibility for creating the conditions in which the Fed found itself and in which the Fed’s actions became crucial for the path that the economy actually followed.

Consider the Great Depression. I think it is totally reasonable to say that the Great Depression was the result of the combination of a succession of interest-rate increases by the Fed in 1928 and 1929 and the insane policy, adopted by the Bank of France in 1928 and continued for several years thereafter, of converting its holdings of foreign-exchange reserves into gold. But does saying that the Fed and the Bank of France caused the Great Depression mean that World War I, the abandonment of the gold standard, and the doubling of the price level in terms of gold during the war were irrelevant to the Great Depression? Of course not. Does it mean that the accumulation of World War I debt and the reparations obligations imposed on Germany by the Treaty of Versailles and the accumulation of debt issued by German state and local governments – debt and obligations that found their way onto the balance sheets of banks all over the world – were irrelevant to the Great Depression? Not at all.

Nevertheless, it does make sense to speak of the role of monetary policy as a specific cause of the Great Depression because the decisions made by the central bankers made a difference at critical moments when it would have been possible to avoid the calamity had they adopted policies that would have avoided a rapid accumulation of gold reserves by the Fed and the Bank of France, thereby moderating or counteracting, instead of intensifying, the deflationary pressures threatening the world economy. Interestingly, many of those objecting to the notion that Fed policy caused the 2008 financial crisis are not at all bothered by the idea that humans are causing global warming even though the world has evidently undergone previous cycles of rising and falling temperatures about which no one would suggest that humans played any causal role. Just as the existence of non-human factors that affect climate does not preclude one from arguing that humans are now playing a key role in the current upswing of temperatures, the existence of non-monetary factors contributing to the 2008 financial crisis need not preclude one from attributing a causal role in the crisis to the Fed.

So let’s have a look at some of the specific criticisms directed at Beckworth and Ponnuru. Here’s Paul Krugman’s take in which he refers back to an earlier exchange last December between Mr. Cruz and Janet Yellen when she testified before Congress:

Back when Ted Cruz first floated his claim that the Fed caused the Great Recession — and some neo-monetarists spoke up in support — I noted that this was a repeat of the old Milton Friedman two-step.

First, you declare that the Fed could have prevented a disaster — the Great Depression in Friedman’s case, the Great Recession this time around. This is an arguable position, although Friedman’s claims about the 30s look a lot less convincing now that we have tried again to deal with a liquidity trap. But then this morphs into the claim that the Fed caused the disaster. See, government is the problem, not the solution! And the motivation for this bait-and-switch is, indeed, political.

Now come Beckworth and Ponnuru to make the argument at greater length, and it’s quite direct: because the Fed “caused” the crisis, things like financial deregulation and runaway bankers had nothing to do with it.

As regular readers of this blog – if there are any – already know, I am not a big fan of Milton Friedman's work on the Great Depression, and I agree with Krugman's criticism that Friedman allowed his ideological preferences or commitments to exert an undue influence not only on his policy advocacy but on his substantive analysis. Thus, trying to make a case for his dumb k-percent rule as an alternative monetary regime to the classical gold standard generally favored by his libertarian, classical-liberal and conservative ideological brethren, he went to great and unreasonable lengths to deny the obvious fact that the demand for money is anything but stable, because such an admission would have made the k-percent rule untenable on its face, as it proved to be when Paul Volcker misguidedly tried to follow Friedman's advice and conduct monetary policy by targeting monetary aggregates. Even worse, because he was so wedded to the naïve quantity-theory framework he thought he was reviving – when in fact he was using a modified version of the Cambridge/Keynesian demand for money, even making the patently absurd claim that the quantity theory of money was a theory of the demand for money – Friedman insisted on conducting monetary analysis under the assumption – also made by Keynes – that the quantity of money is directly under the control of the monetary authority, when in fact, under a gold standard – which means during the Great Depression – the quantity of money in any country is endogenously determined. As a result, there was a total mismatch between Friedman's monetary model and the institutional setting in place at the time of the monetary phenomenon he was purporting to explain.

So although there were big problems with Friedman’s account of the Great Depression and his characterization of the Fed’s mishandling of the Great Depression, fixing those problems doesn’t reduce the Fed’s culpability. What is certainly true is that the Great Depression, the result of a complex set of circumstances going back at least 15 years to the start of World War I, might well have been avoided largely or entirely, but for the egregious conduct of the Fed and Bank of France. But it is also true that, at the onset of the Great Depression, there was no consensus about how to conduct monetary policy, even though Hawtrey and Cassel and a handful of others well understood how terribly monetary policy had gone off track. But theirs was a minority view, and Hawtrey and Cassel are still largely ignored or forgotten.

Ted Cruz may view the Fed’s mistakes in 2008 as a club with which to beat up on Janet Yellen, but for most of the rest of us who think that Fed mistakes were a critical element of the 2008 financial crisis, the point is not to make an ideological statement, it is to understand what went wrong and to try to keep it from happening again.

Krugman sends us to Mike Konczal for further commentary on Beckworth and Ponnuru.

Is Ted Cruz right about the Great Recession and the Federal Reserve? From a November debate, Cruz argued that “in the third quarter of 2008, the Fed tightened the money and crashed those asset prices, which caused a cascading collapse.”

Fleshing that argument out in the New York Times is David Beckworth and Ramesh Ponnuru, backing and expanding Cruz’s theory that “the Federal Reserve caused the crisis by tightening monetary policy in 2008.”

But wait, didn’t the Federal Reserve lower rates during that time?

Um, no. The Fed cut its interest-rate target to 2.25% on March 18, 2008, and to 2% on April 30, which by my calculations would have been in the second quarter of 2008. There it remained until it was reduced to 1.5% on October 8, which by my calculations would have been in the fourth quarter of 2008. So on the face of it, Mr. Cruz was right that the Fed kept its interest-rate target constant for over five months while the economy was contracting in real terms in the third quarter at a rate of 1.9% (and growing in nominal terms at a mere 0.8% rate).

Konczal goes on to accuse Cruz of inconsistency for blaming the Fed for tightening policy in 2008 before the crash while bashing the Fed for quantitative easing after the crash. That certainly is a just criticism, and I really hope someone asks Cruz to explain himself, though my expectations that that will happen are not very high. But that’s Cruz’s problem, not Beckworth’s or Ponnuru’s.

Konczal also focuses on the ambiguity in saying that the Fed caused the financial crisis by not cutting interest rates earlier:

I think a lot of people’s frustrations with the article – see Barry Ritholtz at Bloomberg here – is the authors slipping between many possible interpretations. Here’s the three that I could read them making, though these aren’t actual quotes from the piece:

(a) “The Federal Reserve could have stopped the panic in the financial markets with more easing.”

There’s nothing in the Valukas bankruptcy report on Lehman, or any of the numerous other reports that have since come out, that leads me to believe Lehman wouldn’t have failed if the short-term interest rate was lowered. One way to see the crisis was in the interbank lending spreads, often called the TED spread, which is a measure of banking panic. Looking at an image of the spread and its components, you can see a falling short-term t-bill rate didn’t ease that spread throughout 2008.

And, as Matt O’Brien noted, Bear Stearns failed before the passive tightening started.

The problem with this criticism is that it assumes that the only way the Fed can be effective is by altering the interest rate that it effectively sets on overnight loans. It ignores the relationship between the interest rate that the Fed sets and total spending. That relationship is not entirely obvious, but almost all monetary economists have assumed that there is such a relationship, even if they can't exactly agree on the mechanism by which the relationship is brought into existence. So it is not enough to look at the effect of the Fed's interest rate on Lehman or Bear Stearns; you also have to look at the relationship between the interest rate and total spending and at how a higher rate of total spending would have affected Lehman and Bear Stearns. If the economy had been performing better in the second and third quarters, the assets that Lehman and Bear Stearns were holding would not have lost as much of their value. And even if Lehman and Bear Stearns had not survived, arranging for their takeover by other firms might have been less difficult.

But beyond that, Beckworth and Ponnuru themselves overlook the fact that tightening by the Fed did not begin in the third quarter – or even the second quarter – of 2008. The tightening may have begun as early as the middle of 2006. The chart below shows the rate of expansion of the adjusted monetary base from January 2004 through September 2008. From 2004 through the middle of 2006, the biweekly rate of expansion of the monetary base was consistently at an annual rate exceeding 4%, with the exception of a six-month interval at the end of 2005 when the rate fell to the 3-4% range. But from the middle of 2006 through September 2008, the biweekly rate of expansion was consistently below 3%, and was well below 2% for most of 2008. Now, I am generally wary of reading too much into changes in the monetary aggregates, because those changes can reflect changes in either supply conditions or demand conditions. However, when the economy is contracting, with the rate of growth in total spending falling substantially below trend, and the rate of growth in the monetary aggregates is decreasing sharply, it isn't unreasonable to infer that monetary policy was being tightened. So monetary policy may well have been tightened as early as 2006, and, insofar as the rate of growth of the monetary base is indicative of the stance of monetary policy, that tightening was hardly passive.

[Chart: biweekly annualized growth rate of the adjusted monetary base, January 2004 – September 2008]
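For readers who want to see how an annualized rate is obtained from biweekly observations of the base, here is a minimal sketch. The function name and the figures are my own illustration, not the actual St. Louis Fed data; the point is only the compounding arithmetic.

```python
def annualized_growth(prev, curr, periods_per_year=26):
    """Compound one biweekly growth observation up to an annual rate.

    prev, curr: successive biweekly levels of the (hypothetical) series.
    There are roughly 26 two-week periods in a year, so the gross
    biweekly growth factor is raised to the 26th power.
    """
    return (curr / prev) ** periods_per_year - 1.0

# A base rising about 0.15% over two weeks compounds to roughly 4% a year
g = annualized_growth(800.0, 801.21)
```

A biweekly growth rate that looks negligible in isolation can therefore correspond to a meaningful annual rate, which is why the chart is drawn in annualized terms.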

(b) “The Federal Reserve could have helped the recovery by acting earlier in 2008. Unemployment would have peaked at, say, 9.5 percent, instead of 10 percent.”

That would have been good! I would have been a fan of that outcome, and I’m willing to believe it. That’s 700,000 people with a job that they wouldn’t have had otherwise. The stimulus should have been bigger too, with a second round once it was clear how deep the hole was and how Treasuries were crashing too.

Again, there are two points. First, tightening may well have begun at least a year or two before the third quarter of 2008. Second, the economy started collapsing in the third quarter of 2008, and the run-up in the value of the dollar starting in July 2008, foolishly interpreted by the Fed as a vote of confidence in its anti-inflation policy, was really a cry for help as the economy was being starved of liquidity just as the demand for liquidity was becoming really intense. That denial of liquidity led to a perverse situation in which the return to holding cash began to exceed the return on real assets, setting the stage for a collapse in asset prices and a financial panic. The Fed could have prevented the panic by providing more liquidity. Had it done so, the financial crisis would have been avoided, and the collapse in the real economy and the rise in unemployment would have been substantially mitigated.

(c) “The Federal Reserve could have stopped the Great Recession from ever happening. Unemployment in 2009 wouldn’t have gone above 5.5 percent.”

This I don’t believe. Do they? There’s a lot of “might have kept that decline from happening or at least moderated it” back-and-forth language in the piece.

Is the argument that we’d somehow avoid the zero-lower bound? Ben Bernanke recently showed that interest rates would have had to go to about -4 percent to offset the Great Recession at the time. Hitting the zero-lower bound earlier than later is good policy, but it’s still there.

I think there’s an argument about “expectations,” and “expectations” wouldn’t have been set for a Great Recession. A lot of the “expectations” stuff has a magic and tautological quality to it once it leaves the models and enters the policy discussion, but the idea that a random speech about inflation worries could have shifted the Taylor Rule 4 percent seems really off base. Why doesn’t it go haywire all the time, since people are always giving speeches?

Well, I have shown in this paper that, starting in 2008, there was a strong empirical relationship between stock prices and inflation expectations, so it's not just tautological. And we're not talking about random speeches; we are talking about the decisions of the FOMC and the reasons that were given for those decisions. The markets pay a lot of attention to those reasons.

And couldn’t it be just as likely that since the Fed was so confident about inflation in mid-2008 it boosted nominal income, by giving people a higher level of inflation expectations than they’d have otherwise? Given the failure of the Evans Rule and QE3 to stabilize inflation (or even prevent it from collapsing) in 2013, I imagine transporting them back to 2008 wouldn’t have fundamentally changed the game.

The inflation in 2008 was not induced by monetary policy, but by adverse supply shocks; expectations of higher inflation, given the Fed's inflation targeting, were thus tantamount to predictions of further monetary tightening.

If your mental model is that the Federal Reserve delaying something three months is capable of throwing 8.7 million people out of work, you should probably want to have much more shovel-ready construction and automatic stabilizers, the second of which kicked in right away without delay, as part of your agenda. It seems odd to put all the eggs in this basket if you also believe that even the most minor of mistakes are capable of devastating the economy so greatly.

Once again, it's not a matter of just three months, but even if it were, in the summer of 2008 the economy was at a kind of inflection point, and the failure to ease monetary policy at that critical moment led directly to a financial crisis with cascading effects on the real economy. If total spending had been prevented from dropping far below trend in the third quarter, the financial crisis might have been avoided, and the subsequent loss of output and employment could have been greatly mitigated.

And just to be clear, I have pointed out previously that the free market economy is fragile, because its smooth functioning depends on the coherence and consistency of expectations. That makes monetary policy very important, but I don’t dismiss shovel-ready construction and automatic stabilizers as means of anchoring expectations in a useful way, in contrast to the perverse way that inflation targeting stabilizes expectations.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
