Archive for the 'Uncategorized' Category

Why The Wall Street Journal Editorial Page is a Disgrace

In view of today’s absurdly self-righteous statement by the Wall Street Journal editorial board, I thought it would be a good idea to update one of my first posts (almost nine years ago) on this blog. Plus ça change, plus c’est la même chose (the more things change, the more they stay the same); the page just gets worse and worse, even with only occasional contributions by the estimable Mr. Stephen Moore.

Stephen Moore has the dubious honor of being a member of the editorial board of The Wall Street Journal.  He lives up (or down) to that honor by imparting his wisdom from time to time in signed columns appearing on the Journal’s editorial page. His contribution in today’s Journal (“Why Americans Hate Economics”) is noteworthy for typifying the sad decline of the Journal’s editorial page into a self-parody of obnoxious, philistine anti-intellectualism.

Mr. Moore begins by repeating a joke once told by Professor Christina Romer, formerly President Obama’s chief economist, now in the economics department at the University of California at Berkeley. The joke, not really that funny, is that there are two kinds of students: those who hate economics and those who really hate economics. Professor Romer apparently told the joke to explain that it’s not true. Mr. Moore repeats it to explain why he thinks it really is. Why does he? Let Mr. Moore speak for himself: “Because too often economic theories defy common sense.” That’s it in a nutshell for Mr. Moore: common sense — the ultimate standard of truth.

So what’s that you say, Galileo? The sun is stationary and the earth travels around it? You must be kidding! Why, any child can tell you that the sun rises in the east and moves across the sky every day and then travels beneath the earth at night to reappear in the east the next morning. And you expect anyone in his right mind to believe otherwise? What? It’s the earth rotating on its axis? Are you possessed of demons? And you say that the earth is round? If the earth were round, how could anybody stand at the bottom of the earth and not fall off? Galileo, you are a raving lunatic. And you, Mr. Einstein, you say that there is something called a space-time continuum, so that time slows down as the speed one travels approaches the speed of light. My God, where could you have come up with such an idea? By that reasoning, two people could not agree on which of two events happened first if one of them was stationary and the other traveling at half the speed of light. Away with you, and don’t ever dare speak such nonsense again, or, by God, you shall be really, really sorry.

The point of course is not to disregard common sense – that would not be very intelligent – but to recognize that common sense isn’t enough. Sometimes things are not what they seem – the earth, Mr. Moore, is not flat – and our common sense has to be trained to correspond with a reality that can be discerned only by the intensive application of our reasoning powers, in other words, by thinking harder about what the world is really like than just accepting what common sense seems to be telling us. But once you recognize that common sense has its limitations, the snide populist sneers in which Mr. Moore likes to indulge – the stock-in-trade of the Journal editorial page – mocking economists with degrees from elite universities are exposed for what they are: the puerile defensiveness of those unwilling to do the hard thinking required to push back the frontiers of their own ignorance.

In today’s column, Mr. Moore directs his ridicule at a number of Keynesian nostrums that I would not necessarily subscribe to, at least not without significant qualification. But Keynesian ideas are also rooted in certain common-sense notions, for example, the idea that income and expenditure are mutually interdependent, the income of one person being derived from the expenditure of another. So when Mr. Moore simply dismisses as “nonsensical” the idea of extending unemployment insurance to keep the unemployed from having to stop spending, he is in fact rejecting an idea that is no less grounded in common sense than the idea that paying people not to work discourages work. The problem is that our common sense cuts in both directions. Mr. Moore likes one direction and wants to ignore the other.

What we would like economists–even those unfortunate enough to have graduated from an elite university–to tell us is which effect is stronger or, perhaps, when is one effect stronger and when is the other stronger. But all that would be too complicated and messy for Mr. Moore’s–and the Journal‘s–cartoonish view of the world.

In that cartoonish view, the problem is that good old Adam Smith of “invisible hand” fame and his virtuous economic doctrines supporting free enterprise got tossed aside when the dastardly Keynes invented “macroeconomics” in the 1930s. And here is Mr. Moore’s understanding of macroeconomics.

Macroeconomics simply took basic laws of economics we know to be true for the firm or family –i.e., that demand curves are downward-sloping; that when you tax something, you get less of it; that debts have to be repaid—and turned them on their head as national policy.

Simple, isn’t it? The economics of Adam Smith (the microeconomics of firm and family) is good because it is based on common sense; the macroeconomics of Keynes is bad because it turns common sense on its head. Now I don’t know how much Mr. Moore knows about economics other than that demand curves are downward-sloping, but perhaps he has heard of, or even studied, the law of comparative advantage.

The law of comparative advantage says, in one of its formulations, that even if a country is less productive (because of, say, backward technology or a poor endowment of natural resources) than other countries in producing every single product that it produces, it would still have a comparatively lower cost of production (a lower opportunity cost) in at least one of those products, and could profitably export that product (or those products) in international markets in sufficient amounts to pay for its imports of other products. If there is a less common-sensical notion than that in all of economics, indeed in any scientific discipline, I would like to hear about it. And trust me, as a former university teacher of economics: there is no proposition in economics that students hate more or find harder to reconcile with their notions of common sense than the law of comparative advantage. Indeed, even most students who can correctly answer an exam question about comparative advantage don’t believe a word of what they wrote. The only students who actually do believe it are the ones who become economists.
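
To see just how counterintuitive the result is, here is a minimal numerical sketch; the labor-hour figures and country names are invented purely for illustration:

```python
# A minimal numerical sketch of comparative advantage (all figures invented).
# Labor hours required per unit of output in two hypothetical countries.
hours = {
    "A": {"cloth": 1, "wine": 2},  # A is more productive in both goods
    "B": {"cloth": 4, "wine": 5},  # B is less productive in both goods
}

for country, h in hours.items():
    # Opportunity cost of one unit of wine, measured in units of cloth forgone.
    opportunity_cost_of_wine = h["wine"] / h["cloth"]
    print(f"{country}: 1 wine costs {opportunity_cost_of_wine:.2f} cloth")

# Output:
#   A: 1 wine costs 2.00 cloth
#   B: 1 wine costs 1.25 cloth
# B is absolutely less productive in both goods, yet its relative cost of
# wine is lower, so B gains by exporting wine and importing cloth.
```

Country B cannot out-produce Country A in anything, yet it still has the lower opportunity cost of wine, which is all that profitable trade requires.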

But the law of comparative advantage is logically unassailable; you might as well try to disprove “two plus two equals four.” So, no, Mr. Moore, you don’t know why Americans hate economics, not unless, by Americans, you mean that (one hopes small) group of individuals who happen to think exactly the same way as does the editorial board of The Wall Street Journal.

Noah Smith Gives Elizabeth Warren’s Economic Patriotism Plan Two Cheers; I Give it a Bit Less

Update 2/25/20 4:41pm EST: I wrote this post many months ago; I actually don’t remember where or when, but never posted it. I don’t remember why I didn’t post it. I don’t even know how it got posted, because, having long forgotten about it, I certainly wasn’t trying to post it. I was just searching for another old and published post of mine that I wanted to look at. But since it’s seen the light of day, I guess I will just leave it out there for whoever is interested.

Elizabeth Warren issued another one of her policy documents, this one a plan for advancing what she calls “economic patriotism,” a term that certainly doesn’t resonate in my own ears. But to each his own. Noah Smith lost no time publishing his own analysis of Warren’s proposals, no doubt after giving them a careful reading and a lot of careful thought.

Being less diligent than Noah, I haven’t actually read Warren’s policy proposals, but I did read Noah’s analysis of them, and here are some quick reactions to Noah and, indirectly, to Senator Warren.

It’s safe to say that the postwar free-trade consensus in Washington has crumbled. The main agent of its destruction was President Donald Trump, who fulfilled his campaign promises by canceling free-trade deals and launching trade wars with almost every country with which the U.S. does business. But the turn against free trade is bipartisan — socialist presidential candidate Bernie Sanders also promised to pull out of some international deals, and some prominent Democrats have backed Trump’s tariffs against China.

Now Senator and 2020 presidential candidate Elizabeth Warren has released a trade plan that goes squarely against the old consensus. Warren’s “A Plan For Economic Patriotism” would seek to revive U.S. industry in a number of ways — some of them smart, some of them problematic. The plan would leverage government-funded research and development to boost industry — a very good but hardly novel idea — and promote manufacturing (Warren also released a companion proposal specifically about manufacturing). The plan also would aggressively promote U.S. exports.

Although the purpose of the plan may be to triangulate Trump on trade, doing more to promote exports is probably a good idea in its own right. There is a growing body of evidence that nudging developing-country manufacturers to export increases their productivity, and some studies suggest that the phenomenon extends to rich countries like the U.S. This makes sense — when a company starts competing in international markets, it must up its game against global competition, improving efficiency, developing new products and so on.  But the U.S. domestic market is so large that American companies are often tempted to ignore the outside world; export promotion would fight this corrosive complacency.

I am inclined to favor free trade, but as I have observed before, the standard case for unilateral free trade is based on a number of implicit welfare assumptions that are not necessarily true and may leave out important considerations that are relevant to an appropriate analysis of trade policy. If we are trying to promote high employment, the best way of doing so is not by raising the price of imports, which mainly benefits the owners of specialized domestic capital used in import-competing industries; it would be better to subsidize employment in export industries, encouraging their expansion.

Then there’s the trade deficit. Countries can’t all run trade surpluses at each other’s expense, and attempts to do so can easily degenerate into a game of beggar-thy-neighbor. If a country runs trade deficits in order to fuel a temporary investment boom, that can help growth. But for more than two decades now, the U.S. has run substantial trade deficits even as investment’s share of the economy has fallen.

That suggests that U.S. consumers are consistently living beyond their means, which seems unsustainable. Increasing exports, rather than trying to cut imports as Trump has done, is a smart way to try to make U.S. consumption levels more sustainable.

The problem is how exactly to do it. Warren would dramatically expand the Export-Import bank’s activities, and direct more of its loans to smaller companies instead of big ones — a great idea that I have called for in the past. Warren would also consolidate the vast array of federal government agencies responsible for industrial policy into a single Department of Economic Development — another smart move.

I’m not following Noah’s reasoning. If the trade deficit is, as Noah correctly suggests, a reflection of a low US saving rate compared to saving in other countries, how will subsidizing exports raise the US propensity to save? Increasing the exports of some products will not induce Americans to increase their saving rate. If aggregate US saving doesn’t increase, the size of the trade deficit will not change; only the composition of that deficit will change.
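
To make the accounting behind this point explicit, recall the standard national-income identity (nothing here is specific to Noah’s column or Warren’s plan):

$$Y = C + I + G + NX \quad\Longrightarrow\quad NX = (Y - C - G) - I = S - I$$

Net exports equal national saving minus domestic investment, so a policy that changes which goods get exported, without raising saving or reducing investment, can change the composition of the trade balance but not its size.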

A less savory tool is Warren’s proposal to have the federal government buy only American-made goods — a protectionist move that would do nothing to promote exports and would simply raise costs for the infrastructure that U.S. manufacturers need to be competitive. Warren should discard this piece of the plan.

Agreed.

More actively managing our currency value to promote exports and domestic manufacturing…We should consider a number of tools and work with other countries harmed by currency misalignment to produce a currency value that’s better for our workers and our industries.

The dollar now functions as the world’s so-called reserve currency — other countries hold dollar assets as buffers against capital outflows, and many internationally traded commodities are priced in dollars. This increases global demand for dollars, which pushes up their value against other currencies. That makes it easier for Americans to borrow, but harder for them to export.

It’s hard to see how Warren’s plan would change that state of affairs. Currency intervention would probably come from the Federal Reserve; if the Fed prints dollars, it puts downward pressure on the dollar. But because the U.S. doesn’t have the same control over its financial system that China does, creating all those dollars would risk inflation.  The difficulty of maintaining an independent monetary policy while also targeting exchange rates is a well-known dilemma in international economics.

Precisely. Noah seems to be referencing a policy of exchange-rate protection, which I have written about many times already on this blog, based on the classic article on the subject by the eminent Max Corden. The upshot of Corden’s article was that exchange-rate protection can work only if the monetary authority simultaneously intervenes to reduce the value of its currency in the foreign-exchange market, by selling its currency in exchange for foreign currencies, and tightens its domestic monetary policy. Exchange-rate intervention by itself means increasing the quantity of the domestic currency, thereby causing domestic prices to rise; if domestic prices rise along with the depreciation of the exchange rate of the domestic currency, exporters gain no advantage. If exports are to be promoted by exchange-rate intervention, the monetary authority must either reduce the domestic quantity of money or increase the demand for it (usually by increasing reserve requirements for the banking system) while the exchange rate is depreciated, creating an excess domestic demand for money. If the domestic economy is chronically short of cash, the only way for cash balances to be increased is through reduced expenditure, which means that imports will decrease and exports will increase as a result of the reduced domestic expenditure. That doesn’t sound like the sort of strategy for currency manipulation to reduce the real effective exchange rate that Senator Warren would be inclined to support.
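
In the spirit of the monetary approach to the balance of payments (my compression, not Corden’s own notation), the mechanism can be put in a single relation. Under a pegged exchange rate,

$$\Delta R = \Delta M^{d} - \Delta D$$

where $R$ is foreign-exchange reserves, $M^{d}$ the quantity of money demanded, and $D$ domestic credit created by the monetary authority. Depreciating the peg while holding down $\Delta D$ creates an excess demand for money that the public can satisfy only by cutting expenditure, and the resulting reserve inflow is the counterpart of the export surplus at which exchange-rate protection aims.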

As an alternative, the U.S. could try to stop other countries from holding U.S. dollar reserves and pricing commodities in dollars, thus forfeiting the dollar’s role as the global reserve currency. But this could destabilize the global financial system in ways that are poorly understood, and thus would be a risky move.

In the end, the best approach on the currency may simply be to put pressure on countries that intervene to reduce the value of their own currencies against the dollar. The problem is that China is by far the biggest of these — although it hasn’t had to intervene to hold down the yuan in recent years, its capital controls and currency management policies are still in place, limiting the potential for yuan appreciation. If other countries allow their own currencies to appreciate against the dollar, they’ll be putting themselves in an uncompetitive position relative to China.

Thus, the issues of economic patriotism, export promotion and currency revaluation will ultimately come back to China. Until and unless that giant country gives up its strategy of promoting manufactured exports to the U.S., it will be an uphill battle to correct the U.S.’s imbalances or revive its export competitiveness. A President Warren would be smarter than a President Trump on trade, but she would find herself confronting much the same challenges.

While China probably was a currency manipulator in the early years of this century, as reflected in China’s rapid accumulation of foreign exchange, the pace of foreign-exchange accumulation has since tapered off. China could be pressured to disgorge some of its enormous foreign-exchange holdings, which would require China to buy more foreign assets or increase imports from abroad. How that could be done is not exactly obvious, but the most likely way to achieve that result would be for the US to aim for a higher rate of inflation, thereby increasing the cost to China and other foreign holders of low-yielding US financial assets. Whether President Warren would find such a policy approach to her liking is far from obvious.

My Paper “Hayek, Hicks, Radner and Four Equilibrium Concepts” Is Now Available Online.

The paper, forthcoming in The Review of Austrian Economics, can be read online.

Here is the abstract:

Hayek was among the first to realize that for intertemporal equilibrium to obtain, all agents must have correct expectations of future prices. Before comparing four categories of intertemporal equilibrium, the paper explains Hayek’s distinction between correct expectations and perfect foresight. The four equilibrium concepts considered are: (1) the perfect-foresight equilibrium, of which the Arrow-Debreu-McKenzie (ADM) model of equilibrium with complete markets is an alternative version; (2) Radner’s sequential equilibrium with incomplete markets; (3) Hicks’s temporary equilibrium, as extended by Bliss; and (4) the Muth rational-expectations equilibrium, as extended by Lucas into macroeconomics. While Hayek’s understanding closely resembles Radner’s sequential equilibrium, described by Radner as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium seems to have been the natural extension of Hayek’s approach. The now dominant Lucas rational-expectations equilibrium misconceives intertemporal equilibrium, suppressing Hayek’s insights and thereby retreating to a sterile perfect-foresight equilibrium.

And here is my concluding paragraph:

Four score and three years after Hayek explained how challenging were the subtleties of the notion of intertemporal equilibrium, and how elusive any theoretical account of an empirical tendency toward intertemporal equilibrium, modern macroeconomics has built a formidable theoretical apparatus founded on a methodological principle that rejects all the concerns that Hayek found so vexing, denying that those difficulties even exist. Many macroeconomists feel proud of what modern macroeconomics has achieved, but there is reason to think that the path trod by Hayek, Hicks and Radner could have led macroeconomics in a more fruitful direction than the one on which it has been led by Lucas and his associates.

Yield-Curve Inversion and the Agony of Central Banking

Suddenly, we have been beset with a minor panic attack about our increasingly inverted yield curve. Since fear of yield-curve inversion became a thing a little over a year ago, a lot of people have taken notice of the fact that yield-curve inversion has often presaged recessions. In June 2018, when the yield curve was on the verge of flatlining, I tried to explain the phenomenon, and I think that I provided a pretty good — though perhaps a tad verbose — explanation, laying out the basic theory behind the typical upward slope of the yield curve as well as what seems the most likely, though not the only, reason for inversion, one that explains why inversion so often is a harbinger of recession.

But in a Tweet yesterday responding to Sri Thiruvadanthai, I think I framed the issue succinctly within the 280-character Twitter allotment. Here are the two tweets:

[The two embedded tweets do not reproduce here; see the original post.]

And here’s a longer version getting at the same point from my 2018 post:

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and in increased demands by the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only by (a) the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) the discretionary action of a monetary authority that can issue additional units of fiat currency.
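
For concreteness, here is a minimal numerical sketch of a complementary, textbook channel of inversion (invented figures, pure expectations hypothesis, zero term premium): long rates are roughly averages of expected future short rates, so expected rate cuts pull long yields below the current short rate.

```python
# Illustrative sketch: pure expectations hypothesis with a zero term premium.
# The n-year yield is approximated by the average of expected 1-year rates.
expected_short_rates = [2.4, 1.8, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5]  # percent, invented

def long_yield(expected, n, term_premium=0.0):
    """Approximate the n-year yield as the mean of the first n expected
    one-year rates, plus a term premium (zero here for simplicity)."""
    return sum(expected[:n]) / n + term_premium

print(f"1-year yield:  {long_yield(expected_short_rates, 1):.2f}%")   # 2.40%
print(f"10-year yield: {long_yield(expected_short_rates, 10):.2f}%")  # 1.62%
# With markets expecting cuts, the 10-year yield sits below the 1-year
# rate: an inverted curve, before the central bank has actually moved.
```

The liquidity story quoted above and the expectations story are not rivals; a scramble for liquidity is one reason markets come to expect lower future short rates.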

The question that I want to address now is why the yield curve, after having been only slightly inverted or flat for the past year, has suddenly — since about the beginning of August — become sharply inverted.

Last summer, when concerns about inversion were just beginning to be discussed, the Fed, which had been signaling a desire to raise short-term rates to “normal” levels, changed signals, indicating that it would not automatically continue raising rates as it had between 2004 and 2006, but would evaluate each rate increase in light of recent data bearing on the state of the economy. So after a further half-percent increase in the Fed’s target rate between June and the end of 2018, the Fed held off on further increases, and in July actually cut its rate by a quarter of a percent, even signaling a likely further quarter-percent decrease in September.

Now, to be sure, the Fed might have been well-advised not to have raised its target rate as much as it did, and to have cut its rate more steeply than it did in July. Nevertheless, it would be hard to identify any particular monetary cause for the recent steep further inversion of the yield curve. So the most likely reason for the sudden inversion is nervousness about the possibility of a trade war, which most people do not think is either good or easy to win.

After yesterday’s announcement by the administration that previously announced tariff increases on Chinese goods scheduled to take effect in September would be postponed until after the Christmas buying season, the stock market took some comfort in an apparent easing of tensions between the US and China over trade policy. But this interpretation was shot down by none other than Commerce Secretary Wilbur Ross who, before the start of trading, told CNBC that the administration’s postponement of the tariffs on China was done solely in the interest of American shoppers and not to ease tensions with China. The remark — so unnecessary and so counterproductive — immediately aroused suspicions that Ross, in sharing it on national television, had an ulterior motive, like, say, a short position in the S&P 500 index.

So what’s going on? Monetary policy has probably been marginally too tight for the past year, but only marginally. Unlike in other inverted-yield-curve episodes, the Fed has not been attempting to reduce the rate of inflation, and has even been giving lip service to the goal of raising the rate of inflation, so if the Fed’s target rate was raised too high, it was based on an expectation that the economy was in the midst of an expansion; it was not an attempt to reduce growth. But the economy has weakened, and all signs suggest that the weakness stems from an uncertain economic environment, particularly owing to the risk that new tariffs will be imposed or existing ones raised to even higher levels, triggering retaliatory measures by China and other affected countries.

In my 2018 post I mentioned a similar, but different, kind of uncertainty that held back recovery from the 2001-02 recession.

The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership in Afghanistan, the Bush Administration was planning for a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003 created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the Administration were clarified and executed and their consequences assessed.

The Fed responded to the uncertain environment of 2002 with a series of interest rate reductions that prevented a lapse into recession.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate increasing demands for liquidity, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession.

Recently, the uncertainty caused by the imposition of tariffs and the threat of a destructive trade war seems to have discouraged firms from going forward with plans to invest and to expand output, as decision-makers prefer to wait and see how events play out before making long-term commitments that would put assets and investments at serious risk if a trade war undermines the conditions necessary for those investments to be profitable. In the interim, the desire for short-term safety, and for the flexibility to deploy assets and resources profitably once future prospects become less uncertain, leads decision-makers to take highly liquid positions that don’t preclude taking advantage of profitable opportunities once they present themselves.

However, when everyone resists making commitments, economic activity doesn’t keep going as before; it gradually slows down. And so a state of heightened uncertainty eventually leads to stagnation, recession, or something worse. A reduction in interest rates by the central bank can prevent, or at least postpone, the onset of a recession, as the Fed succeeded in doing in 2002-03 by reducing its interest-rate target to 1%. Similar steps by the Fed may now be called for.

But there is another question that ought to be discussed. When the Fed reduced interest rates in 2002-03 because of the uncertainty created by the pending decision of the US government about whether to invade Iraq, the Fed was probably right to treat that uncertainty as exogenous, the product of a decision in which it had no role or voice. The decision to invade or not would be made based on considerations that the Fed rightly had no standing to evaluate or opine upon. However, the Fed does have a responsibility for creating a stable economic environment and eliminating avoidable uncertainty about economic conditions caused by bad policy-making. Insofar as the current uncertain economic environment is the result of deliberate economic-policy actions that increase uncertainty, reducing interest rates to cushion the uncertainty-increasing effects of imposing, or raising, tariffs or of promoting a trade war would enable those uncertainty-increasing actions to be continued.

The Fed, therefore, now faces a cruel dilemma. Should it try to mitigate, by reducing interest rates, the effects of policies that increase uncertainty, thereby acting as a perhaps unwitting enabler of those policies, or should it stand firm and refuse to cushion the effects of policies that are themselves the cause of the uncertainty whose destructive effects the Fed is being asked to mitigate? This is the sort of dilemma that Arthur Burns, in a somewhat different context, once referred to as “The Agony of Central Banking.”

August 15, 1971: Unhappy Anniversary (Update)

[Update 8/15/2019: It seems appropriate to republish this post originally published about 40 days after I started blogging. I have made a few small changes and inserted a few comments to reflect my improved understanding of certain concepts like “sterilization” that I was uncritically accepting. I actually have learned a thing or two in the eight plus years that I’ve been blogging. I am grateful to all my readers — both those who agreed and those who disagreed — for challenging me and inspiring me to keep thinking critically. It wasn’t easy, but we did survive August 15, 1971. Let’s hope we survive August 15, 2019.]

August 15, 1971 may not exactly be a day that will live in infamy, but it is hardly a day to celebrate 40 years later. It was the day on which one of the most cynical Presidents in American history committed one of his most cynical acts: violating solemn promises undertaken many times previously, both before and after his election as President, Richard Nixon declared a 90-day freeze on wages and prices. Nixon also announced the closing of the gold window at the US Treasury, severing the last shred of a link between gold and the dollar. Interestingly, the current (August 13th, 2011) Economist (Buttonwood column) and Forbes (Charles Kadlec op-ed) and today’s Wall Street Journal (Lewis Lehrman op-ed) mark the anniversary with critical commentaries on Nixon’s action, ruefully focusing on the baleful consequences of breaking the link to gold, while barely mentioning the 90-day freeze that became the prelude to the comprehensive wage and price controls imposed after the freeze expired.

Of the two events, the wage and price freeze and subsequent controls had by far the more adverse consequences, the closing of the gold window merely ratifying the demise of a gold standard that had long since ceased to function as it had for much of the 19th and early 20th centuries. In contrast to the final break with gold, no economic necessity or even a coherent economic argument on the merits lay behind the decision to impose a wage and price freeze, notwithstanding the ex-post rationalizations offered by Nixon’s economic advisers, including such estimable figures as Herbert Stein, Paul McCracken, and George Shultz, who surely knew better, but somehow were persuaded to fall into line behind a policy of massive, breathtaking intervention into private market transactions.

The argument for closing the gold window was that the official gold peg of $35 an ounce was probably at least 10-20% below any realistic estimate of the true market value of gold at the time, making it impossible to reestablish the old parity as an economically meaningful price without imposing an intolerable deflation on the world economy. An alternative response might have been to officially devalue the dollar to something like the market value of gold, $40-42 an ounce. But to have done so would merely have demonstrated that the official price of gold was a policy instrument subject to the whims of the US monetary authorities, undermining faith in the viability of a gold standard. In the event, an attempt to patch together the Bretton Woods System (the Smithsonian Agreement of December 1971) based on an official $38-an-ounce peg was made, but it quickly became obvious that a new monetary system based on any form of gold convertibility could no longer survive.

How did the $35-an-ounce price become unsustainable barely 25 years after the Bretton Woods System was created? The problem that emerged within a few years of its inception was that the main trading partners of the US systematically kept their own currencies undervalued in terms of the dollar, promoting their exports while sterilizing the consequent dollar inflow, allowing neither sufficient domestic inflation nor sufficient exchange-rate appreciation to eliminate the overvaluation of their currencies against the dollar. [DG 8/15/19: “sterilization” is a misleading term because it implies that persistent gold or dollar inflows just happen randomly; the persistent inflows occur only because they are induced by a persistently increased demand for reserves or an insufficient creation of cash.] After a burst of inflation during the Korean War, the Fed’s tight monetary policy and a persistently overvalued exchange rate kept US inflation low at the cost of sluggish growth and three recessions between 1953 and 1960. It was not until the Kennedy administration came into office on a pledge to get the country moving again that the Fed was pressured to loosen monetary policy, initiating the long boom of the 1960s some three years before the Kennedy tax cuts were posthumously enacted in 1964.

Monetary expansion by the Fed reduced the relative overvaluation of the dollar in terms of other currencies, but the increasing export of dollars left the $35-an-ounce peg increasingly dependent on the willingness of foreign governments to hold dollars. However, President Charles de Gaulle of France, having overcome domestic opposition to his rule, felt secure enough to assert [his conception of] French interests against the US, resuming the traditional French policy of accumulating physical gold reserves rather than mere claims on gold physically held elsewhere. By 1967 the London gold pool, a central-bank cartel acting to control the price of gold in the London gold market, was collapsing, as France withdrew from the cartel, demanding that gold be shipped to Paris from New York. In 1968, unable to hold down the market price of gold any longer, the US and other central banks let the gold price rise above the official price, but agreed to conduct official transactions among themselves at the official price of $35 an ounce. As market prices for gold, driven by US monetary expansion, inched steadily higher, the incentives for central banks to demand gold from the US at the official price became too strong to contain, so that the system was on the verge of collapse when Nixon acknowledged the inevitable and closed the gold window rather than allow depletion of US gold holdings.

Assertions that the Bretton Woods system could somehow have been saved simply ignore the economic reality that by 1971 the Bretton Woods System was broken beyond repair, or at least beyond any repair that could have been effected at a tolerable cost.

But Nixon clearly had another motivation in his August 15 announcement, less than 15 months before the next Presidential election. It was in effect the opening shot of his reelection campaign. Remembering all too well that he had lost the 1960 election to John Kennedy because the Fed had not provided enough monetary stimulus to cut short the 1960-61 recession, Nixon had appointed his long-time economic adviser, Arthur Burns, to replace William McChesney Martin as chairman of the Fed in 1970. A mild tightening of monetary policy in 1969, as inflation was rising above a 5% annual rate, had produced a recession in late 1969 and early 1970, without providing much relief from inflation. Burns eased policy enough to allow a mild recovery, but the economy seemed to be suffering the worst of both worlds — inflation still near 4 percent and unemployment at what then seemed an unacceptably high level of almost 6 percent. [For more on Burns and his deplorable role in all of this see this post.]

With an election looming ever closer on the horizon, Nixon in the summer of 1971 became consumed by the political imperative of speeding up the recovery. Meanwhile a Democratic Congress, assuming that Nixon really did mean his promises never to impose wage and price controls to stop inflation, began clamoring for controls as the way to stop inflation without the pain of a recession, even authorizing the President to impose controls, a dare they never dreamed he would accept. Arthur Burns, himself, perhaps unwittingly [I was being too kind], provided support for such a step by voicing frustration that inflation persisted in the face of a recession and high unemployment, suggesting that the old rules of economics were no longer operating as they once had. He even offered vague support for what was then called an incomes policy, generally understood as an informal attempt to bring down inflation by announcing a target for wage increases corresponding to productivity gains, thereby eliminating the need for businesses to raise prices to compensate for increased labor costs. What such proposals usually ignored was the necessity for a monetary policy that would limit the growth of total spending sufficiently to limit the growth of wage incomes to the desired target. [On incomes policies and how they might work if they were properly understood see this post.]

Having been persuaded that there was no acceptable alternative to closing the gold window — from Nixon’s perspective and from that of most conventional politicians, a painfully unpleasant admission of US weakness in the face of its enemies (all this was occurring at the height of the Vietnam War and the antiwar protests) – Nixon decided that he could now combine that decision, sugar-coated with an aggressive attack on international currency speculators and a protectionist 10% duty on imports into the United States, with the even more radical measure of a wage-price freeze to be followed by a longer-lasting program to control price increases, thereby snatching the most powerful and popular economic proposal of the Democrats right from under their noses.  Meanwhile, with the inflation threat neutralized, Arthur Burns could be pressured mercilessly to increase the rate of monetary expansion, ensuring that Nixon could stand for reelection in the middle of an economic boom.

But just as Nixon’s electoral triumph fell apart because of the Watergate fiasco, his economic success fell apart when an inflationary monetary policy combined with wage-and-price controls to produce increasing dislocations, shortages, and inefficiencies, gradually sapping the strength of an economic recovery fueled by excess demand rather than increasing productivity. Because broad-based, as opposed to narrowly targeted, price controls tend to be more popular before they are imposed than after (as too many expectations about favorable regulatory treatment are disappointed), the vast majority of controls were allowed to lapse when the original grant of Congressional authority to control prices expired in April 1974.

Already by the summer of 1973, shortages of gasoline and other petroleum products were becoming commonplace, and shortages of heating oil and natural gas had been widely predicted for the winter of 1973-74. But in October 1973, in the wake of the Yom Kippur War and the imposition of an Arab Oil Embargo against the United States and other Western countries sympathetic to Israel, the shortages turned into the first “Energy Crisis.” A Democratic Congress and the Nixon Administration sprang into action, enacting special legislation to keep controls on petroleum products of all sorts, together with emergency authority for the government to allocate products in short supply.

It still amazes me that almost all the dislocations that became manifest after the embargo and the associated energy crisis were attributed to excessive consumption of oil and petroleum products in general or to excessive dependence on imports, as if any of the shortages and dislocations would have occurred in the absence of price controls. And hardly anyone realizes that price controls tend to drive the prices of whatever portion of the supply is exempt from control even higher than they would have risen in the absence of any controls.
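
The last point can be verified with a stylized linear example; the demand and supply curves below are invented purely for illustration:

```python
# Stylized example (invented linear curves) of how capping the price of part
# of the supply raises the price of the *exempt* portion above the price
# that would have prevailed with no controls at all.

def demand(p):              # market demand: Q = 100 - p
    return 100 - p

def supply_per_group(p):    # each of two supplier groups: Q = 0.75 * p
    return 0.75 * p

# No controls: 1.5p = 100 - p  ->  p* = 40, with each group supplying 30.
p_star = 100 / 2.5
print(f"No-control price: {p_star:.1f}")         # 40.0

# Cap group 1 at p = 20: it now supplies only 15 units.
capped_price = 20
controlled_qty = supply_per_group(capped_price)  # 15.0

# Assume the controlled units are rationed to the highest-value buyers, so
# residual demand facing the exempt group is (100 - p) - 15 = 85 - p.
# Exempt-market clearing: 0.75p = 85 - p  ->  p = 85 / 1.75
p_exempt = (demand(0) - controlled_qty) / 1.75
print(f"Exempt-segment price: {p_exempt:.1f}")   # ~48.6, above 40.0
```

The control shrinks the supply offered at the capped price, and the unsatisfied demand spills over onto the exempt segment, bidding its price above the no-control equilibrium.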

About ten years after the first energy crisis, I published a book in which I tried to explain how all the dislocations that emerged from the Arab oil embargo and the 1978-79 crisis following the Iranian Revolution were attributable to the price controls first imposed by Richard Nixon on August 15, 1971.  But the connection between the energy crisis in all its ramifications and the Nixonian price controls unfortunately remains largely overlooked and ignored to this day.  If there is reason to reflect on what happened forty years ago on this date, it surely is for that reason and not because Nixon pulled the plug on a gold standard that had not been functioning for years.

The Mendacity of Yoram Hazony, Virtue Signaler

Yoram Hazony, an American-educated Israeli philosopher and political operator, and a former assistant to Benjamin Netanyahu, has become a rising star of the American Right. The week before last, Hazony made his media debut at the National Conservatism Conference in Washington DC, an event inspired by his book The Virtue of Nationalism. Sponsored by the shadowy Edmund Burke Foundation, the conference on “National Conservatism” – a title either remarkably tone-deaf or an in-your-face provocation echoing another ideological movement whose name began with “national” – featured a keynote address by Fox News personality and provocateur par excellence Tucker Carlson, along with various other right-wing notables of varying degrees of respectability, though self-avowed white nationalists were kept at a discreet distance — a distance sufficient to elicit resentful comments and nasty insinuations about Hazony’s origins and loyalties.

I had not planned to read Hazony’s book, having read enough of his articles to know that Hazony’s would not be a book to read for either pleasure or edification. But sometimes duty calls, so I bought Hazony’s book on Amazon at half price. I have now read the Introduction and the first three chapters. I plan to continue reading till the end, but I thought that I would write down some thoughts as I go along. So consider yourself warned: this may not be my last post about Hazony.

Hazony calls his Introduction “A Return to Nationalism;” it is not a good beginning.

Politics in Britain and America have taken a turn toward nationalism. This has been troubling to many, especially in educated circles, where global integration has long been viewed as a requirement of sound policy and moral decency. From this perspective, Britain’s vote to leave the European Union and the “America First” rhetoric coming out of Washington seem to herald a reversion to a more primitive stage in history, when war-mongering and racism were voiced openly and permitted to set the political agenda of nations. . . .

But nationalism was not always understood to be the evil that current public discourse suggests. . . . Progressives regarded Woodrow Wilson’s Fourteen Points and the Atlantic Charter of Franklin Roosevelt and Winston Churchill as beacons of hope for mankind – and this precisely because they were considered expressions of nationalism, promising national independence and self-determination to enslaved peoples around the world. (pp. 1-2)

Ahem. Hazony cleverly – though not truthfully – appropriates Wilson, FDR and Churchill to the cause of nationalism. Although it was a clever move by Hazony to try to disarm opposition to his brief for nationalism by misappropriating Wilson, FDR and Churchill to his side, it was not very smart, it being so obviously contradicted by well-known facts. Merely because Wilson, FDR, and Churchill all supported, with varying degrees of consistency and sincerity, the right of self-determination by national ethnic communities that had never, or not for a long time, enjoyed sovereign control over the territories in which they dwelled, does not mean that they did not also favor international cooperation and supra-national institutions.

For example, points 3 and 4 of Wilson’s Fourteen Points were the following:

The removal, so far as possible, of all economic barriers and the establishment of an equality of trade conditions among all the nations consenting to the peace and associating themselves for its maintenance.

Adequate guarantees given and taken that national armaments will be reduced to the lowest point consistent with domestic safety.

And here is point 14:

A general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike.

That association, of course, was realized as the League of Nations, which Wilson strove mightily to create, though he failed to convince the United States Senate to ratify the treaty whereby the US would have joined the League.

I don’t know about you, but to me that sounds awfully globalist.

Now what about The Atlantic Charter?

While it supported the right of self-determination of all peoples, it also called for the lowering of trade barriers and for global economic cooperation. Moreover, Churchill, far from endorsing the unqualified right of all peoples to self-determination, flatly rejected the idea that the right of self-determination extended to British India.

But besides withholding the right of self-determination from British colonial possessions and presumably those of other European powers, Churchill, in a famous speech, endorsed the idea of a United States of Europe. Now Churchill did not necessarily envision a federal union along the lines of the European Union as now constituted, but he obviously did not reject on principle the idea of some form of supra-national governance.

We must build a kind of United States of Europe. In this way only will hundreds of millions of toilers be able to regain the simple joys and hopes which make life worth living.

So it is simply a fabrication and a misrepresentation to suggest that nationalism has ever been regarded as anything like a universal principle of political action, governance or justice. It is one of many principles, all of which have some weight, but must be balanced against, and reconciled with, other principles of justice, policy and expediency.

Going from bad to worse, Hazony continues,

Conservatives from Teddy Roosevelt to Dwight Eisenhower likewise spoke of nationalism as a positive good. (Id.)

Where to begin? Hazony, who is not averse to footnoting (216 altogether, almost one per page, often providing copious references to sources and scholarly literature), offers not one documentary or secondary source for this assertion. To be sure, Teddy Roosevelt and Dwight Eisenhower were Republicans. But Roosevelt differed from most Republicans of his time, gaining the Presidency only because McKinley wanted to marginalize him by choosing him as a running mate at a time when no Vice-President since Van Buren had succeeded to the Presidency, except upon the death of the incumbent President.

Eisenhower had been a non-political military figure with no party affiliation until his candidacy for the Republican Presidential nomination, as an alternative to the preferred conservative choice, Robert Taft. Eisenhower did not self-identify as a conservative, preferring to describe himself as a “modern Republican” to the disgust of conservatives like Barry Goldwater, whose best-selling book The Conscience of a Conservative was a sustained attack on Eisenhower’s refusal even to try to roll back the New Deal.

Moreover, when TR coined the term “New Nationalism” in a famous speech he gave in 1910, he was laying the groundwork for his campaign for the Republican Presidential nomination against his chosen successor, William Howard Taft, by whom TR felt betrayed for trying to accommodate the conservative Republicans TR so detested. Failing to win the Republican nomination, TR ran as the candidate of the Progressive Party, splitting the Republican party and thereby ensuring the election of the progressive, though racist, Woodrow Wilson. Nor was that the end of it. Roosevelt was himself an imperialist, who had supported the War against Spain and the annexation of the Philippines, and an early and militant proponent of US entry into World War I against Germany on the side of Britain and France. And, after the war, Roosevelt supported US entry into the League of Nations. These are not obscure historical facts, but Hazony, despite his Princeton undergraduate degree and doctorate in philosophy from Rutgers, shows no awareness of them.

Hazony seems equally unaware that, in the American context, nationalism had an entirely different meaning from its nineteenth-century European meaning, the right of national ethnic populations, defined mainly by their common language, to form sovereign political units rather than remain in the multi-ethnic, largely undemocratic kingdoms and empires by which they were ruled. In America, nationalism was distinguished from sectionalism, expressing the idea that the United States had become an organic unit unto itself, not merely an association of separate and distinct states. This idea was emphasized by Hamilton and the Federalists, and later by the Whigs, against the states’-rights position of the Jeffersonian Democrats, who resisted the claims of national and federal primacy. The classic expression of the uniquely American national sensibility was provided by Lincoln in his Gettysburg Address.

Fourscore and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live.

Lincoln offered a conception of nationhood entirely different from that which inspired demands for the right of self-determination by European national ethnic and linguistic communities. If the notion of American exceptionalism is to have any clear meaning, it can only be in the context of Lincoln’s description of the origin and meaning of the American nationality.

After his clearly fraudulent appropriation of Theodore and Franklin Roosevelt, Winston Churchill and Dwight Eisenhower to the Nationalist Conservative cause, Hazony seizes upon Ronald Reagan and Margaret Thatcher. “In their day,” Hazony assures us, “Ronald Reagan and Margaret Thatcher were welcomed by conservatives for the ‘new nationalism’ they brought to political life.” For good measure, Hazony also adds David Ben-Gurion and Mahatma Gandhi to his nationalist pantheon, though, unaccountably, he omits any mention of their enthusiastic embrace by conservatives.

Hazony favors his readers with a single footnote at the end of this remarkable and fantastical paragraph. Forget the fact that “new nationalism” is a term peculiarly associated with Teddy Roosevelt, not with Reagan, who, to my knowledge, never uttered the phrase; the primary source cited by Hazony doesn’t even refer to Reagan in the same context as “new nationalism.” Here is the text of that footnote.

On Reagan’s “new nationalism,” see Norman Podhoretz, “The New American Majority,” Commentary (January 1981); Irving Kristol, “The Emergence of Two Republican Parties,” Reflections of a Neo-Conservative (New York: Basic Books, 1983), 111. (p. 237)

I am unable to find the Kristol text on the internet, but I did find Podhoretz’s article on the Commentary website. I will quote the entire paragraph in which the words “new nationalism” make their only appearance (it is also the only appearance of “nationalism” in the article). But before reproducing the paragraph, I will register my astonishment at the audacity of Hazony in invoking the two godfathers of neo-conservatism as validators of the spurious claim made by Hazony on Reagan’s behalf to posthumous recognition as a National Conservative hero, inasmuch as Hazony goes out of his way, as we shall see presently, to cast neo-conservatism into the Gehenna of imperialistic liberalism. But first, let us consider — and marvel at — Podhoretz’s discussion of the “new nationalism.”

In my opinion, because of Chappaquiddick alone, Edward Kennedy could not have become President of the United States in 1980. Yet even if Chappaquiddick had not been a factor, Edward Kennedy would still not have been a viable candidate — not for the Democratic nomination and certainly not for the Presidency in the general election. But if this is so, why did so many Democrats (over 50 percent in some of the early polls taken before he announced) declare their support for him? Here again it is impossible to say with complete assurance. But given the way the votes were subsequently cast in 1980, I think it is a reasonable guess that in those early days many people who had never paid close attention to him took Kennedy for the same kind of political figure his brother John had been. We know from all the survey data that the political mood had been shifting for some years in a consistent direction — away from the self-doubts and self-hatreds and the neo-isolationism of the immediate post-Vietnam period and toward what some of us have called a new nationalism. In the minds of many people caught up in the new nationalist spirit, John F. Kennedy stood for a powerful America, and in expressing enthusiasm for Edward Kennedy, they were in all probability identifying him with his older brother.

This is just an astoundingly brazen misrepresentation by Hazony in hypocritically misappropriating Reagan, to whose memory most Republicans and conservatives feel some lingering sentimental attachment, even as they discard and disavow many of his most characteristic political principles.

The extent to which Hazony repudiates the neo-conservative world view that was a major pillar of the Reagan Presidency becomes clear in a long paragraph in which Hazony sets up his deeply misleading dichotomy between the virtuous nationalism he espouses and the iniquitous liberal imperialism that he excoriates, as if they were the only two possible choices for organizing our political institutions.

This debate between nationalism and imperialism became acutely relevant again with the fall of the Berlin Wall in 1989. At that time, the struggle against Communism ended, and the minds of Western leaders became preoccupied with two great imperialist projects: the European Union, which has progressively relieved member nations of many of the powers usually associated with political independence; and the project of establishing an American “world order,” in which nations that do not abide by international law will be coerced into doing so principally by means of American military might. These projects are imperialist, even though their proponents do not like to call them that, for two reasons: First, their purpose is to remove decision-making from the hands of independent national governments and place it in the hands of international governments or bodies. And second, as you can immediately see from the literature produced by these individuals and institutions supporting these endeavors, they are consciously part of an imperialist political tradition, drawing their historical inspiration from the Roman Empire, the Austro-Hungarian Empire, and the British Empire. For example, Charles Krauthammer’s argument for American “Universal Dominion,” written at the dawn of the post-Cold War period, calls for America to create a “super-sovereign,” which will preside over the permanent “depreciation . . . of the notion of sovereignty” for all nations on earth. Krauthammer adopts the Latin term pax Americana to describe this vision, invoking the image of the United States as the new Rome: Just as the Roman Empire supposedly established a pax Romana . . . that obtained security and quiet for all of Europe, so America would now provide security and quiet for the entire world. (pp. 3-4)

I do not defend Krauthammer’s view of pax Americana or his support for invading Iraq in 2003. But the war in Iraq was largely instigated by a small group of right-wing ideologues with whom Krauthammer and other neo-conservatives like William Kristol and Robert Kagan were aligned. In the wake of September 11, 2001, they leveraged fear of another attack into a quixotic, poorly thought-out, and incompetently executed military adventure in Iraq.

That invasion was not, as Hazony falsely suggests, the inevitable result of liberal imperialism (as if liberalism and imperialism were cognate ideas). Moreover, it is deeply dishonest for Hazony to single out Krauthammer et al. for responsibility for that disaster, when Hazony’s mentor and sponsor, Benjamin Netanyahu, was a major supporter and outspoken advocate for the invasion of Iraq.

There is much more to be said about Hazony’s bad faith, but I have already said enough for one post.

Dr. Shelton Remains Outspoken: She Should Have Known Better

I started blogging in July 2011, and in one of my first blogposts I discussed an article in the now defunct Weekly Standard by Dr. Judy Shelton entitled “Gold Standard or Bust.” I wrote then:

I don’t know, and have never met, Dr. Shelton, but she has been a frequent op-ed contributor to the Wall Street Journal and various other publications of a like ideological orientation for 20 years or more, invariably advocating a return to the gold standard. In 1994, she published a book, Money Meltdown, touting the gold standard as a cure for all our monetary ills.

I was tempted to provide a line-by-line commentary on Dr. Shelton’s Weekly Standard piece, but it would be tedious and churlish to dwell excessively on her deficiencies as a wordsmith or lapses from lucidity.

So I was not very impressed by Dr. Shelton then. I have had occasion to write about her again a few times since, and I cannot report that I have detected any improvement in the lucidity of her thought or the clarity of her exposition.

Aside from, or perhaps owing to, her infatuation with the gold standard, Dr. Shelton seems to have developed a deep aversion to what is commonly, and usually misleadingly, known as currency manipulation. Deploying her modest entrepreneurial skills as a monetary-policy pundit, Dr. Shelton has tried to use the specter of currency manipulation as a talking point for gold-standard advocacy. So, in 2017 Dr. Shelton wrote an op-ed about currency manipulation for the Wall Street Journal that was so woefully uninformed and unintelligible that I felt obligated to write a blogpost just for her, a tutorial on the ABCs of currency manipulation, as I called it then. Here’s an excerpt from my tutorial:

[i]t was no surprise to see in Tuesday’s Wall Street Journal that monetary-policy entrepreneur Dr. Judy Shelton has written another one of her screeds promoting the gold standard, in which, showing no awareness of the necessary conditions for currency manipulation, she assures us that a) currency manipulation is a real problem and b) that restoring the gold standard would solve it.

Certainly the rules regarding international exchange-rate arrangements are not working. Monetary integrity was the key to making Bretton Woods institutions work when they were created after World War II to prevent future breakdowns in world order due to trade. The international monetary system, devised in 1944, was based on fixed exchange rates linked to a gold-convertible dollar.

No such system exists today. And no real leader can aspire to champion both the logic and the morality of free trade without confronting the practice that undermines both: currency manipulation.

Ahem, pray tell, which rules relating to exchange-rate arrangements does Dr. Shelton believe are not working? She doesn’t cite any. And what on earth does “monetary integrity” even mean, and what does that high-minded, but totally amorphous, concept have to do with the rules of exchange-rate arrangements that aren’t working?

Dr. Shelton mentions “monetary integrity” in the context of the Bretton Woods system, a system based — well, sort of — on fixed exchange rates, forgetting — or choosing not — to acknowledge that, under the Bretton Woods system, exchange rates were also unilaterally adjustable by participating countries. Not only were they adjustable, but currency devaluations were implemented on numerous occasions as a strategy for export promotion, the most notorious example being Britain’s 30% devaluation of sterling in 1949, just five years after the Bretton Woods agreement had been signed. Indeed, many other countries, including West Germany, Italy, and Japan, also had chronically undervalued currencies under the Bretton Woods system, as did France after it rejoined the gold standard in 1926 at a devalued rate deliberately chosen to ensure that its export industries would enjoy a competitive advantage.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard. And the most egregious recent example of currency manipulation was undertaken by the Chinese central bank when it effectively pegged the yuan to the dollar at a fixed rate. Keeping its exchange rate fixed against the dollar was precisely the offense that the currency-manipulation police accused the Chinese of committing.

I leave it to interested readers to go back and finish the rest of my tutorial for Dr. Shelton. And if you read carefully and attentively, you are likely to understand the concept of currency manipulation a lot more clearly than when you started.
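To make the adjustment mechanism described in that tutorial concrete, here is a minimal numerical sketch. The parameter values and the simple proportional price-adjustment rule are my own illustrative assumptions, not anything drawn from the tutorial; the point is only to show why a nominal depreciation confers a lasting competitive advantage only if the central bank absorbs the resulting cash inflow into ever-growing foreign-exchange reserves.

```python
# Illustrative sketch (my own assumptions, not from the tutorial): why a
# nominal depreciation confers only a transitory competitive advantage
# when the money inflow from the export surplus is allowed to raise
# domestic prices.

P_FOREIGN = 100.0      # foreign price level, held fixed
ADJUST_SPEED = 0.3     # assumed speed at which the cash inflow bids up
                       # domestic prices each period (illustrative)

def simulate(depreciation=0.20, periods=12, sterilized=False):
    """Track the real exchange rate after a one-time nominal depreciation.

    If sterilized, the central bank absorbs the inflow into
    foreign-exchange reserves, so domestic prices never adjust:
    the currency-manipulation case.
    """
    e = 1.0 + depreciation        # domestic currency per unit of foreign
    p_domestic = 100.0
    path = []
    for _ in range(periods):
        # real exchange rate: > 1 means domestic goods are cheap abroad
        q = e * P_FOREIGN / p_domestic
        path.append(round(q, 3))
        if not sterilized:
            # the export-surplus money inflow raises domestic prices,
            # closing part of the remaining competitive gap each period
            p_domestic += ADJUST_SPEED * (e * P_FOREIGN - p_domestic)
    return path

print("unsterilized:", simulate())                 # decays toward 1.0
print("sterilized:  ", simulate(sterilized=True))  # stays at 1.2
```

Run as written, the unsterilized path drifts back toward a real exchange rate of 1.0 as domestic prices rise, while the sterilized path, corresponding to the Bank of France and Chinese episodes mentioned above, preserves the 20% competitive advantage indefinitely.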

Alas, it’s obvious that Dr. Shelton has either not read or not understood the tutorial I wrote for her, because, in her latest pronouncement on the subject she covers substantially the same ground as she did two years ago, with no sign of increased comprehension of the subject on which she expounds with such misplaced self-assurance. Here are some samples of Dr. Shelton’s conceptual confusion and historical ignorance.

History can be especially informative when it comes to evaluating the relationship between optimal economic performance and monetary regimes. In the 1930s, for example, the “beggar thy neighbor” tactic of devaluing currencies against gold to gain a trade export advantage hampered a global economic recovery.

Beggar-thy-neighbor policies were indeed adopted by the United States, but they were adopted first in 1922 (the Fordney-McCumber Act) and again in 1930 (the Smoot-Hawley Act), when the US was on the gold standard with the value of the dollar pegged at $20.67 an ounce of gold. The Great Depression started in late 1929, but the stock market crash of 1929 may have been in part precipitated by fears that the Smoot-Hawley Act would be passed by Congress and signed into law by President Hoover.

At any rate, exchange rates among most major countries were pegged to either gold or the dollar until September 1931 when Britain suspended the convertibility of the pound into gold. The Great Depression was the result of a rapid deflation caused by gold accumulation by central banks as they rejoined the gold standard that had been almost universally suspended during World War I. Countries that remained on the gold standard during the Great Depression were condemned to suffer deflation as gold became ever more valuable in real terms, so that currency depreciation against gold was the only pathway to recovery. Thus, once convertibility was suspended and the pound allowed to depreciate, the British economy stopped contracting and began a modest recovery with slowly expanding output and employment.

The United States, however, kept the dollar pegged to its $20.67-an-ounce parity with gold until April 1933, when FDR saved the American economy by suspending convertibility and commencing a policy of deliberate reflation (i.e., inflation to restore the 1926 price level). An unprecedented expansion of output, employment and income accompanied the rise in prices following the suspension of the gold standard. Currency depreciation was the key to recovery from, not the cause of, depression.

Having exposed her ignorance of the causes of the Great Depression, Dr. Shelton then begins a descent into her confusion about the subject of currency manipulation, about which I had tried to tutor her, evidently without success.

The absence of rules aimed at maintaining a level monetary playing field invites currency manipulation that could spark a backlash against the concept of free trade. Countries engaged in competitive depreciation undermine the principles of genuine competition, and those that have sought to participate in good faith in the global marketplace are unfairly penalized by the monetary sleight of hand executed through central banks.

Currency manipulation is possible only under specific conditions. A depreciating currency is not normally a manipulated currency. Currencies fluctuate in relative values for many different reasons, but if prices adjust in rough proportion to the change in exchange rates, the competitive positions of the countries are only temporarily affected by the change in exchange rates. For a country to gain a sustained advantage for its export and import-competing industries by depreciating its exchange rate, it must adopt a monetary policy that consistently provides less cash than the public demands to satisfy its liquidity needs, forcing the public to obtain the desired cash balances through a balance-of-payments surplus and an inflow of foreign-exchange reserves into the country’s central bank or treasury.

U.S. leadership is necessary to address this fundamental violation of free-trade practices and its distortionary impact on free-market outcomes. When the United States’ trading partners engage in currency manipulation, it is not competing — it’s cheating.

That is why it is vital to weigh the implications of U.S. monetary policy on the dollar’s exchange-rate value against other currencies. Trade and financial flows can be substantially altered by speculative market forces responding to the public comments of officials at the helm of the European Central Bank, the Bank of Japan or the People’s Bank of China — with calls for “additional stimulus” alerting currency players to impending devaluation policies.

Dr. Shelton here reveals a comprehensive misunderstanding of the difference between a monetary policy that aims to stimulate economic activity in general by raising the price level, or increasing the rate of inflation, to stimulate expenditure and a policy of monetary restraint that aims to raise the price of domestic export and import-competing products relative to the prices of domestic non-tradable goods and services, e.g., new homes and apartments. It is only the latter combination of tight monetary policy and exchange-rate intervention to depreciate a currency in foreign-exchange markets that qualifies as currency manipulation.

And, under that understanding, it is obvious that currency manipulation is possible under a fixed-exchange-rate system, as France demonstrated in the 1920s and 1930s, and as most European countries and Japan demonstrated in the 1950s and early 1960s under the Bretton Woods system so well loved by Dr. Shelton.

In the 1950s and early 1960s, the US dollar was chronically overvalued. The situation was not remedied until the 1960s, under the Kennedy administration, when consistently loose monetary policy by the Fed made currency manipulation so costly for the Germans and Japanese that they revalued their currencies upward to avoid the inflationary consequences of US monetary expansion.

And then, in a final flourish, Dr. Shelton puts her ignorance of what happened in the Great Depression on public display with the following observation.

When currencies shift downward against the dollar, it makes U.S. exports more expensive for consumers in other nations. It also discounts the cost of imported goods compared with domestic U.S. products. Downshifting currencies against the dollar has the same punishing impact as a tariff. That is why, as in the 1930s during the Great Depression, currency devaluation prompts retaliatory tariffs.

The retaliatory tariffs were imposed in response to the US tariffs that preceded, or were imposed at the outset of, the Great Depression in 1930. The devaluations against gold promoted economic recovery, and they were accompanied by a general reduction in tariff levels under FDR after the US devalued the dollar against gold and the remaining gold-standard currencies. Whereof she knows nothing, thereof Dr. Shelton would do better to remain silent.

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false, not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making a prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.
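The underlying logic can be stated compactly. In the following schematic rendering, which is my own gloss rather than notation used by Popper or Lemoine, T is the theory under test, A_1, …, A_n are the auxiliary assumptions and background theories, and P is the prediction:

\[
(T \wedge A_1 \wedge \cdots \wedge A_n) \rightarrow P, \qquad \neg P \ \therefore\ \neg T \vee \neg A_1 \vee \cdots \vee \neg A_n
\]

The failed prediction establishes only that the conjunction is false somewhere; it does not identify which conjunct is the culprit, so the observation cannot, by itself, falsify T.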

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some, if not all, of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments and even anticipations of, and responses to, the criticisms. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false or because one of the auxiliary assumptions required to make the prediction is false, or even because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to protect their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather they should accept the falsification of their theories when confronted by observations that conflict with the implications of their theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 by scientists who posited (ad hoc) the existence of another planet to explain why the planet Uranus did not follow its predicted path. Rather than conclude that the Newtonian theory was falsified by the failure of Uranus to follow the orbital path predicted by Newtonian theory, the French astronomer Urbain Le Verrier posited the existence of another planet that would account for the path actually followed by Uranus. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement of being able to test independently any ad hoc assumption would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are ok, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory observation that the earth is flat and that the sun rises and sets. Empirical refutation could be avoided only by providing an alternative interpretation of the sensory data that could be reconciled with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory is available, one that predicts correctly whenever the theory under consideration predicts correctly and predicts more accurately when the latter fails, the alternative theory is surely preferable. But that simply underscores the point that evaluating any theory in isolation is not very meaningful. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (who was Oakeshott’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice intended to ensure the successful outcome of an undertaking that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of economists’ differing attitudes towards using formal probability theory as the basis for making empirical inferences from data. The underlying assumptions of probability theory about the nature of the data were long widely viewed as too extreme to make probability theory an acceptable basis for empirical inferences from the data. However, those early negative attitudes toward accepting probability theory as the basis for making statistical inferences from data were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships is strikingly similar to Oakeshott’s discussion, and it is depressing in its implications for the decline of expert judgment by economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.
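To illustrate what that mechanically objective procedure amounts to in practice, here is a minimal sketch; the sample data and the conventional 5% cutoff are made-up illustrative assumptions, not anything from Biddle’s paper. The data and the rule jointly determine the verdict, leaving no room for the analyst’s judgment about the circumstances behind the data.

```python
# A minimal sketch of the mechanically objective classical procedure:
# same data plus same rule yields the same verdict, with no room left
# for expert judgment about the circumstances behind the data.
import math

# made-up sample: (x, y) pairs with a positive but noisy relationship
data = [(1, 2.1), (2, 2.9), (3, 4.3), (4, 3.8), (5, 6.0), (6, 5.4)]
n = len(data)
mx = sum(x for x, _ in data) / n
my = sum(y for _, y in data) / n

# ordinary least squares slope and its standard error
sxx = sum((x - mx) ** 2 for x, _ in data)
beta = sum((x - mx) * (y - my) for x, y in data) / sxx
resid = [y - (my + beta * (x - mx)) for x, y in data]
s2 = sum(e * e for e in resid) / (n - 2)
se_beta = math.sqrt(s2 / sxx)

t_stat = beta / se_beta
T_CRIT = 2.776  # two-sided 5% critical value for t with n - 2 = 4 d.f.

print(f"slope = {beta:.3f}, t-statistic = {t_stat:.2f}")
print("verdict:", "statistically significant" if abs(t_stat) > T_CRIT
      else "not significant: by the rule, no claim may be made")
```

Any two researchers applying this rule to the same observations must announce the same verdict, which is exactly the sense in which the procedure is objective, and exactly the sense in which the interpretive questions emphasized by Working, Lewis, Keynes and Persons go unasked.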

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition against the kinds of expert judgments that many statistically knowledgeable economists, such as Nobel Prize winners Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that have almost totally been displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake even Popper was never guilty of.

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only make the further comment that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves — on the Positivists’ own criterion of meaning — meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, which was the ground on which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.

Judy Shelton Speaks Up for the Gold Standard

I have been working on a third installment in my series on how, with a huge assist from Arthur Burns, things fell apart in the 1970s. In my third installment, I will discuss the sad denouement of Burns’s misunderstandings and mistakes, when Paul Volcker administered a brutal dose of tight money that, in the deep recession of 1981-82, caused the worst downturn and highest unemployment since the Great Depression. But having seen another one of Judy Shelton’s less than enlightening op-eds arguing for a gold standard in the formerly respectable editorial section of the Wall Street Journal, I am going to pause from my account of Volcker’s monetary policy in the early 1980s to give Dr. Shelton my undivided attention.

The opening paragraph of Dr. Shelton’s op-ed is a less than auspicious start.

Since President Trump announced his intention to nominate Herman Cain and Stephen Moore to serve on the Federal Reserve’s board of governors, mainstream commentators have made a point of dismissing anyone sympathetic to a gold standard as crankish or unqualified.

That is a totally false charge. Since Herman Cain and Stephen Moore were nominated, they have been exposed as incompetent and unqualified to serve on the Board of Governors of the world’s most important central bank. It is not support for reestablishing the gold standard that demonstrates their incompetence and lack of qualifications. It is true that most economists, myself included, oppose restoring the gold standard. It is also true that most supporters of the gold standard, like, say — to choose a name more or less at random — Ron Paul, are indeed cranks unqualified to hold high office. But there is a minority of economists, including some outstanding ones like Larry White, George Selgin, Richard Timberlake and Nobel Laureate Robert Mundell, who do favor restoring the gold standard, at least under certain conditions.

But Cain and Moore are so unqualified and so incompetent that they are incapable of doing more than mouthing platitudes about how wonderful it would be to have a dollar as good as gold by restoring some unspecified link between the dollar and gold. Because of their manifest ignorance about how a gold standard would work now or how it did work when it was in operation, they were unprepared to defend their support of a gold standard when called upon to do so by inquisitive reporters. So they just lied and denied that they had ever supported returning to the gold standard. Thus, in addition to being ignorant, incompetent and unqualified to serve on the Board of Governors of the Federal Reserve, Cain and Moore exposed their own foolishness and stupidity, because it was easy for reporters to dig up multiple statements by both aspiring central bankers explicitly calling for a gold standard to be restored, as well as muddled utterances bearing at least a vague resemblance to support for the gold standard.

So, in accusing mainstream commentators of dismissing anyone sympathetic to a gold standard as crankish or unqualified, Dr. Shelton is charging them with a level of intolerance and closed-mindedness for which she supplies not a shred of evidence.

After making a defamatory accusation with no basis in fact, Dr. Shelton turns her attention to a strawman whom she slays mercilessly.

But it is wholly legitimate, and entirely prudent, to question the infallibility of the Federal Reserve in calibrating the money supply to the needs of the economy. No other government institution had more influence over the creation of money and credit in the lead-up to the devastating 2008 global meltdown.

Where to begin? The Federal Reserve has not been targeting the quantity of money in the economy as a policy instrument since the early 1980s, when the Fed misguidedly used the quantity of money as the policy target in its anti-inflation strategy. After acknowledging that mistake, the Fed has ever since eschewed attempts to conduct monetary policy by targeting any monetary aggregate. It is through the independent choices and decisions of individual agents and of many competing private banking institutions, not the dictate of the Federal Reserve, that the quantity of money in the economy at any given time is determined. Indeed, it is true that the Federal Reserve played a large role in the run-up to the 2008 financial crisis, but its mistake had nothing to do with the amount of money being created. Rather, the problem was that the Fed was setting its policy interest rate at too high a level throughout 2008 because of misplaced inflation fears fueled by temporary increases in commodity prices, fears that deterred the Fed from providing the monetary stimulus needed to counter a rapidly deepening recession.

But guess who was urging the Fed to raise its interest rate in 2008 exactly when a cut in interest rates was what the economy needed? None other than the Wall Street Journal editorial page. And guess who was the lead editorial writer on the Wall Street Journal in 2008 for economic policy? None other than Stephen Moore himself. Isn’t that special?

I will forbear from discussing Dr. Shelton’s comments on the Fed’s policy of paying interest on reserves, because I actually agree with her criticism of the policy. But I do want to say a word about her discussion of currency manipulation and the supposed role of the gold standard in minimizing such currency manipulation.

The classical gold standard established an international benchmark for currency values, consistent with free-trade principles. Today’s arrangements permit governments to manipulate their currencies to gain an export advantage.

Having previously explained to Dr. Shelton that currency manipulation to gain an export advantage depends not just on the exchange rate, but also on the monetary policy associated with that exchange rate, I have to admit some disappointment that my previous efforts to instruct her don’t seem to have improved her understanding of the ABCs of currency manipulation. But I will try again. Let me just quote from my last attempt to educate her.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard.

Dr. Shelton believes that restoring a gold standard would usher in a period of economic growth like the one that followed World War II under the Bretton Woods System. Well, Dr. Shelton might want to reconsider how well the Bretton Woods system worked to the advantage of the United States.

The fact is that, as Ralph Hawtrey pointed out in his Incomes and Money, the US dollar was overvalued relative to the currencies of most of its European trading partners, which is why unemployment in the US was chronically above 5% from 1954 to 1965. With undervalued currencies, West Germany, Italy, Belgium, Britain, France and Japan all had much lower unemployment than the US. It was only in 1961, after John Kennedy became President, when the Federal Reserve systematically loosened monetary policy, forcing Germany and other countries to revalue their currencies upward to avoid importing US inflation, that the US was able to redress the overvaluation of the dollar. But in doing so, the US also gradually rendered the $35-an-ounce price of gold, at which it maintained a kind of semi-convertibility of the dollar, unsustainable, leading a decade later to the final abandonment of the gold-dollar peg.

Dr. Shelton is obviously dedicated to restoring the gold standard, but she really ought to study up on how the gold standard actually worked in its previous incarnations and semi-incarnations, before she opines any further about how it might work in the future. At present, she doesn’t seem to be knowledgeable about how the gold standard worked in the past, and her confidence that it would work well in the future is entirely misplaced.

James Buchanan Calling the Kettle Black

In the wake of the tragic death of Alan Krueger, attention has been drawn to an implicitly defamatory statement by James Buchanan about those who, like Krueger, dared to question the orthodox position taken by most economists that minimum-wage laws increase unemployment among low-wage, low-skilled workers whose productivity, at the margin, is less than the minimum wage that employers are required to pay.

Here is Buchanan’s statement:

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the presupposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teachings of two centuries; we have not yet become a bevy of camp-following whores.

Wholly apart from its odious metaphorical characterization of those he was criticizing, Buchanan’s assertion was substantively problematic in two respects. The first, which is straightforward and well-known, and which Buchanan was wrong not to acknowledge, is that there are circumstances in which a minimum-wage law could simultaneously raise wages and reduce unemployment without contradicting the inverse relationship between quantity demanded and price. Such circumstances obtain whenever employers exercise monopsony power in the market for unskilled labor. If employers realize that hiring additional low-skilled workers drives up the wage paid to all the low-skilled workers they employ, not just the additional ones hired, the wage paid by employers will be less than the value of the marginal product of labor. If employers exercise monopsony power, then the divergence between the wage and the marginal product is not a violation, but an implication, of the inverse relationship between quantity demanded and price. If Buchanan had written on his price-theory preliminary exam for a Ph.D. at Chicago that support for a minimum wage could be rationalized only by denying the inverse relationship between quantity demanded and price, he would have been flunked.
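For readers who want to see the monopsony argument worked out numerically, here is a minimal sketch in Python. The labor-supply and marginal-product schedules are hypothetical, chosen only for illustration; the point is that a binding minimum wage set between the monopsony wage and the competitive wage raises both the wage and employment, with no violation of the inverse relationship between quantity demanded and price.

```python
# Minimal monopsony sketch (hypothetical schedules).
A, B = 5.0, 0.5          # labor supply: w = A + B*L
VMP0, SLOPE = 20.0, 1.0  # value of marginal product: VMP = VMP0 - SLOPE*L

# Monopsony: hiring one more worker raises the wage of every worker
# employed, so marginal labor cost is A + 2*B*L; the employer hires
# until marginal labor cost equals the value of the marginal product.
L_monopsony = (VMP0 - A) / (SLOPE + 2 * B)   # 7.5
w_monopsony = A + B * L_monopsony            # 8.75, below VMP of 12.5

# Competitive benchmark: wage equals VMP where supply meets demand.
L_competitive = (VMP0 - A) / (SLOPE + B)     # 10.0
w_competitive = A + B * L_competitive        # 10.0

# A binding minimum wage between 8.75 and 10.0 makes the supply curve
# flat to the employer up to the number willing to work at that wage.
w_min = 9.5
willing = (w_min - A) / B                    # 9.0 workers willing
demanded = (VMP0 - w_min) / SLOPE            # 10.5 workers demanded
L_minwage = min(willing, demanded)           # 9.0 employed

print(L_monopsony, w_monopsony)  # 7.5 8.75
print(L_minwage, w_min)          # 9.0 9.5: wage and employment both rise
```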

The second problem with Buchanan’s position is less straightforward and less well-known, but more important, than the first. The inverse relationship by which Buchanan set such great store is valid only if qualified by a ceteris paribus condition. Demand is a function of many variables of which price is only one. So the inverse relationship between price and quantity demanded is premised on the assumption that all the other variables affecting demand are held (at least approximately) constant.

Now it’s true that even the law of gravity is subject to a ceteris paribus condition; the law of gravity will not control the movement of objects in a magnetic field. And it would be absurd to call a physicist an advocate for ideological interests just because he recognized that possibility.

Of course, the presence or absence of a magnetic field is a circumstance that can be easily ascertained, thereby enabling a physicist to alter his prediction according to whether the relevant field acting on the object under consideration is gravitational or magnetic. But the magnitude and relevance of other factors affecting demand are not so easily taken into account by economists. That’s why applied economists try to focus on markets in which the effects of “other factors” are small or on markets in which “other factors” can easily be identified and measured or treated qualitatively as fixed effects.

But in some markets the factors affecting demand are themselves interrelated, so that the ceteris paribus assumption can’t be maintained. Such markets can’t be analyzed in isolation; they can only be analyzed as a system in which all the variables are jointly determined. Economists call the analysis of an isolated market partial-equilibrium analysis, and it is partial-equilibrium analysis that constitutes the core of price theory and microeconomics. The ceteris paribus assumption has to be maintained either by assuming that changes in the variables other than price affecting demand and supply are inconsequential, or by identifying the other variables whose changes could affect demand and supply and measuring them quantitatively or at least accounting for them qualitatively.
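The difference between analyzing a market in isolation and analyzing prices as jointly determined can be illustrated with a small linear system (all coefficients hypothetical). When demand in each of two markets depends on the price in the other, the mutually consistent equilibrium prices emerge only from solving the two market-clearing conditions simultaneously; treating each market alone, with the cross-effect ignored, gives different answers.

```python
# Minimal sketch (hypothetical coefficients) of partial vs. joint
# determination of prices in two interdependent markets.
import numpy as np

# Market i: demand_i = a_i - b_i*p_i + c_i*p_j; supply_i = s_i*p_i.
a = np.array([100.0, 80.0])  # demand intercepts
b = np.array([2.0, 1.5])     # own-price sensitivities
c = np.array([0.8, 0.6])     # cross-price effects linking the markets
s = np.array([1.0, 1.0])     # supply slopes

# Partial-equilibrium shortcut: each market alone, cross-effects ignored.
p_partial = a / (s + b)

# Joint determination: solve both market-clearing conditions at once,
# (s_i + b_i)*p_i - c_i*p_j = a_i for i = 1, 2.
M = np.array([[s[0] + b[0], -c[0]],
              [-c[1],       s[1] + b[1]]])
p_joint = np.linalg.solve(M, a)

print(p_partial)  # prices with the interdependence assumed away
print(p_joint)    # mutually consistent equilibrium prices
```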

But labor markets, except at a granular level, when the focus is on an isolated region or a specialized occupation, cannot be modeled usefully with the standard partial-equilibrium techniques of price theory, because income effects and interactions between related markets cannot appropriately be excluded from the partial-equilibrium analysis of supply and demand in a broadly defined market for labor. The determination of the equilibrium price in a market that encompasses a substantial share of economic activity cannot be isolated from the determination of the equilibrium prices in other markets.

Moreover, the idea that the equilibration of any labor market can be understood within a partial-equilibrium framework in which the wage responds to excess demands for, or excess supplies of, labor just as the price of a standardized commodity adjusts to excess demands for, or excess supplies of, that commodity reflects a gross misunderstanding of the incentives of employers and workers in reaching wage bargains for the differentiated services provided by individual workers. Those incentives are in no way comparable to the incentives of businesses to adjust the prices of their products in response to excess supplies of, or excess demands for, those products.

Buchanan was implicitly applying an inappropriate paradigm of price adjustment in a single market to the analysis of how wages adjust in the real world. The truth is we don’t have a good understanding of how wages adjust, and so we don’t have a good understanding of the effects of minimum wages. But in arrogantly and insultingly dismissing Krueger’s empirical research on the effects of minimum wage laws, Buchanan was unwittingly exposing not Krueger’s ideological advocacy but his own.


