Archive for the 'Uncategorized' Category

Yield-Curve Inversion and the Agony of Central Banking

Suddenly, we have been beset with a minor panic attack about our increasingly inverted yield curve. Since fear of yield-curve inversion became a thing a little over a year ago, a lot of people have taken notice of the fact that yield-curve inversion has often presaged recessions. In June 2018, when the yield curve was on the verge of flatlining, I tried to explain the phenomenon, and I think I gave a pretty good — though perhaps a tad verbose — account, laying out the basic theory behind the typical upward slope of the yield curve and identifying what seems the most likely, though not the only, reason for inversion, one that accounts for why inversion so often is a harbinger of recession.

But in a tweet yesterday responding to Sri Thiruvadanthai, I think I framed the issue succinctly within the 280-character Twitter allotment. Here are the two tweets.

 

 

And here’s a longer version getting at the same point from my 2018 post:

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and by an increase in the demands of the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only by (a) the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) by the discretionary action of a monetary authority that can issue additional units of fiat currency.
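To make the accounting logic in that passage concrete, here is a minimal sketch in Python (my own toy example, with made-up numbers and agents, not anything from the original post) showing why one firm's attempt to build up cash merely drains the cash of another, and why only the banking system's creation of deposits against illiquid IOUs raises total liquidity.

```python
# Toy accounting sketch (illustrative only): two firms and a bank.
# Firm A conserves liquidity by cutting spending, which is firm B's revenue,
# so aggregate liquid balances are unchanged. Only when the bank accepts B's
# illiquid IOU in exchange for a newly created deposit does total liquidity rise.

cash = {"A": 100.0, "B": 100.0}   # liquid balances (currency or deposits)
bank_loans = 0.0                  # illiquid IOUs held by the banking system

def cut_spending(amount):
    """A spends less to conserve cash; B's revenue, and hence cash, falls one-for-one."""
    cash["A"] += amount
    cash["B"] -= amount

def bank_lends(firm, amount):
    """The bank accepts the firm's IOU and credits it with a new deposit."""
    global bank_loans
    bank_loans += amount
    cash[firm] += amount          # newly created deposit adds to aggregate liquidity

print(sum(cash.values()))         # 200.0: initial aggregate liquidity
cut_spending(30)
print(cash, sum(cash.values()))   # A more liquid, B less liquid, total still 200.0
bank_lends("B", 30)
print(cash, sum(cash.values()))   # total liquidity now 230.0
```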

The question that I want to address now is why the yield curve, after having been only slightly inverted or flat for the past year, has suddenly — since about the beginning of August — become sharply inverted.

Last summer, when concerns about inversion were just beginning to be discussed, the Fed, which had been signaling a desire to raise short-term rates to “normal” levels, changed signals, indicating that it would not automatically continue raising rates as it had between 2004 and 2006, but would evaluate each rate increase in light of recent data bearing on the state of the economy. So after a further half-a-percent increase in the Fed’s target rate between June and the end of 2018, the Fed held off on further increases, and in July actually cut its rate by a quarter of a percent and even signaled a likely further quarter-of-a-percent decrease in September.

Now, to be sure, the Fed might have been well-advised not to raise its target rate as much as it did, and to cut its rate more steeply than it did in July. Nevertheless, it would be hard to identify any particular monetary cause for the recent steep further inversion of the yield curve. So the most likely reason for the sudden inversion is nervousness about the possibility of a trade war, which most people do not think is either good or easy to win.

After yesterday’s announcement by the administration that previously announced tariff increases on Chinese goods scheduled to take effect in September would be postponed until after the Christmas buying season, the stock market took some comfort in an apparent easing of tensions between the US and China over trade policy. But this interpretation was shot down by none other than Commerce Secretary Wilbur Ross who, before the start of trading, told CNBC that the administration’s postponement of the tariffs on China was done solely in the interest of American shoppers and not to ease tensions with China. The remark — so unnecessary and so counterproductive — immediately aroused suspicions that Ross had an ulterior motive, like, say, a short position in the S&P 500 index, for sharing it on national television.

So what’s going on? Monetary policy has probably been marginally too tight for the past year, but only marginally. Unlike in other episodes of yield-curve inversion, the Fed has not been attempting to reduce the rate of inflation; it has even been giving lip service to the goal of raising the rate of inflation. So if the Fed’s target rate was raised too high, it was based on an expectation that the economy was in the midst of an expansion; it was not an attempt to reduce growth. But the economy has weakened, and all signs suggest that the weakness stems from an uncertain economic environment, particularly owing to the risk that new tariffs will be imposed or existing ones raised to even higher levels, triggering retaliatory measures by China and other affected countries.

In my 2018 post I mentioned a similar, but different, kind of uncertainty that held back recovery from the 2001-02 recession.

The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership in Afghanistan, the Bush Administration was planning for a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003 created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the Administration were clarified and executed and their consequences assessed.

The Fed responded to the uncertain environment of 2002 with a series of interest rate reductions that prevented a lapse into recession.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate increasing demands for liquidity, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession.

Recently, the uncertainty caused by the imposition of tariffs and the threat of a destructive trade war seems to have discouraged firms from going forward with plans to invest and to expand output, as decision-makers prefer to wait and see how events play out before making long-term commitments that would put assets and investments at serious risk if a trade war undermines the conditions necessary for those investments to be profitable. In the interim, the desire of decision-makers for short-term safety and for the flexibility to deploy their assets and resources profitably once future prospects become less uncertain leads them to take highly liquid positions that don’t preclude acting on profitable opportunities once those opportunities present themselves.

However, when everyone resists making commitments, economic activity doesn’t keep going as before; it gradually slows down. And so a state of heightened uncertainty eventually leads to stagnation, or recession, or something worse. To mitigate that outcome, a reduction in interest rates by the central bank can prevent or at least postpone the onset of a recession, as the Fed succeeded in doing in 2002-03 by reducing its interest-rate target to 1%. Similar steps by the Fed may now be called for.

But there is another question that ought to be discussed. When the Fed reduced interest rates in 2002-03 because of the uncertainty created by the pending decision of the US government about whether to invade Iraq, the Fed was probably right to treat that uncertainty as the product of an exogenous decision in which it had no decision-making role or voice. The decision to invade or not would be made based on considerations that the Fed rightly had no role in evaluating or opining upon. However, the Fed does have a responsibility for creating a stable economic environment and eliminating avoidable uncertainty about economic conditions caused by bad policy-making. Insofar as the current uncertain economic environment is the result of deliberate economic-policy actions that increase uncertainty, reducing interest rates to cushion the uncertainty-increasing effects of imposing, or raising, tariffs or of promoting a trade war would enable those uncertainty-increasing actions to be continued.

The Fed, therefore, now faces a cruel dilemma. Should it try to mitigate, by reducing interest rates, the effects of policies that increase uncertainty, thereby acting as a perhaps unwitting enabler of those policies, or should it stand firm and refuse to cushion the effects of policies that are themselves the cause of the uncertainty whose destructive effects the Fed is being asked to mitigate? This is the sort of dilemma that Arthur Burns, in a somewhat different context, once referred to as “The Agony of Central Banking.”

August 15, 1971: Unhappy Anniversary (Update)

[Update 8/15/2019: It seems appropriate to republish this post originally published about 40 days after I started blogging. I have made a few small changes and inserted a few comments to reflect my improved understanding of certain concepts like “sterilization” that I was uncritically accepting. I actually have learned a thing or two in the eight plus years that I’ve been blogging. I am grateful to all my readers — both those who agreed and those who disagreed — for challenging me and inspiring me to keep thinking critically. It wasn’t easy, but we did survive August 15, 1971. Let’s hope we survive August 15, 2019.]

August 15, 1971 may not exactly be a day that will live in infamy, but it is hardly a day to celebrate 40 years later.  It was the day on which one of the most cynical Presidents in American history committed one of his most cynical acts:  violating solemn promises undertaken many times previously, both before and after his election as President, Richard Nixon declared a 90-day freeze on wages and prices.  Nixon also announced the closing of the gold window at the US Treasury, severing the last shred of a link between gold and the dollar.  Interestingly, the current (August 13th, 2011) Economist (Buttonwood column) and Forbes  (Charles Kadlec op-ed) and today’s Wall Street Journal (Lewis Lehrman op-ed) mark the anniversary with critical commentaries on Nixon’s action ruefully focusing on the baleful consequences of breaking the link to gold, while barely mentioning the 90-day freeze that became the prelude to  the comprehensive wage and price controls imposed after the freeze expired.

Of the two events, the wage and price freeze and subsequent controls had by far the more adverse consequences, the closing of the gold window merely ratifying the demise of a gold standard that long since had ceased to function as it had for much of the 19th and early 20th centuries.  In contrast to the final break with gold, no economic necessity or even a coherent economic argument on the merits lay behind the decision to impose a wage and price freeze, notwithstanding the ex-post rationalizations offered by Nixon’s economic advisers, including such estimable figures as Herbert Stein, Paul McCracken, and George Shultz, who surely knew better, but somehow were persuaded to fall into line behind a policy of massive, breathtaking intervention into private market transactions.

The argument for closing the gold window was that the official gold peg of $35 an ounce was probably at least 10-20% below any realistic estimate of the true market value of gold at the time, making it impossible to reestablish the old parity as an economically meaningful price without imposing an intolerable deflation on the world economy.  An alternative response might have been to devalue the dollar officially to something like the market value of gold, $40-42 an ounce.  But to have done so would merely have demonstrated that the official price of gold was a policy instrument subject to the whims of the US monetary authorities, undermining faith in the viability of a gold standard.  In the event, an attempt to patch together the Bretton Woods System (the Smithsonian Agreement of December 1971) based on an official $38 an ounce peg was made, but it quickly became obvious that a new monetary system based on any form of gold convertibility could no longer survive.

How did the $35 an ounce price become unsustainable barely 25 years after the Bretton Woods System was created?  The problem that emerged within a few years of its inception was that the main trading partners of the US systematically kept their own currencies undervalued in terms of the dollar, promoting their exports while sterilizing the consequent dollar inflow, allowing neither sufficient domestic inflation nor sufficient exchange-rate appreciation to eliminate the overvaluation of their currencies against the dollar. [DG 8/15/19: “sterilization” is a misleading term because it implies that persistent gold or dollar inflows just happen randomly; the persistent inflows occur only because they are induced by a persistently increased demand for reserves or insufficient creation of cash.] After a burst of inflation during the Korean War, the Fed’s tight monetary policy and a persistently overvalued exchange rate kept US inflation low at the cost of sluggish growth and three recessions between 1953 and 1960.  It was not until the Kennedy administration came into office on a pledge to get the country moving again that the Fed was pressured to loosen monetary policy, initiating the long boom of the 1960s some three years before the Kennedy tax cuts were posthumously enacted in 1964.

Monetary expansion by the Fed reduced the relative overvaluation of the dollar in terms of other currencies, but the increasing export of dollars left the $35 an ounce peg increasingly dependent on the willingness of foreign governments to hold dollars.  However, President Charles de Gaulle of France, having overcome domestic opposition to his rule, felt secure enough to assert [his conception of] French interests against the US, resuming the traditional French policy of accumulating physical gold reserves rather than mere claims on gold physically held elsewhere.  By 1967 the London gold pool, a central bank cartel acting to control the price of gold in the London gold market, was collapsing, as France withdrew from the cartel, demanding that gold be shipped to Paris from New York.  In 1968, unable to hold down the market price of gold any longer, the US and other central banks let the gold price rise above the official price, but agreed to conduct official transactions among themselves at the official price of $35 an ounce.  As market prices for gold, driven by US monetary expansion, inched steadily higher, the incentives for central banks to demand gold from the US at the official price became too strong to contain, so that the system was on the verge of collapse when Nixon acknowledged the inevitable and closed the gold window rather than allow depletion of US gold holdings.

Assertions that the Bretton Woods system could somehow have been saved simply ignore the economic reality that by 1971 the Bretton Woods System was broken beyond repair, or at least beyond any repair that could have been effected at a tolerable cost.

But Nixon clearly had another motivation in his August 15 announcement, less than 15 months before the next Presidential election.  It was in effect the opening shot of his reelection campaign.  Remembering all too well that he lost the 1960 election to John Kennedy because the Fed had not provided enough monetary stimulus to cut short the 1960-61 recession, Nixon had appointed his long-time economic adviser, Arthur Burns, to replace William McChesney Martin as chairman of the Fed in 1970.  A mild tightening of monetary policy in 1969, as inflation was rising above a 5% annual rate, had produced a recession in late 1969 and early 1970, without providing much relief from inflation.  Burns eased policy enough to allow a mild recovery, but the economy seemed to be suffering the worst of both worlds — inflation still near 4 percent and unemployment at what then seemed an unacceptably high level of almost 6 percent. [For more on Burns and his deplorable role in all of this see this post.]

With an election looming ever closer on the horizon, Nixon in the summer of 1971 became consumed by the political imperative of speeding up the recovery.  Meanwhile a Democratic Congress, assuming that Nixon really did mean his promises never to impose wage and price controls to stop inflation, began clamoring for controls as the way to stop inflation without the pain of a recession, even authorizing the President to impose controls, a dare they never dreamed he would accept.  Arthur Burns, himself, perhaps unwittingly [I was being too kind], provided support for such a step by voicing frustration that inflation persisted in the face of a recession and high unemployment, suggesting that the old rules of economics were no longer operating as they once had.  He even offered vague support for what was then called an incomes policy, generally understood as an informal attempt to bring down inflation by announcing a target  for wage increases corresponding to productivity gains, thereby eliminating the need for businesses to raise prices to compensate for increased labor costs.  What such proposals usually ignored was the necessity for a monetary policy that would limit the growth of total spending sufficiently to limit the growth of wage incomes to the desired target. [On incomes policies and how they might work if they were properly understood see this post.]

Having been persuaded that there was no acceptable alternative to closing the gold window — from Nixon’s perspective and from that of most conventional politicians, a painfully unpleasant admission of US weakness in the face of its enemies (all this was occurring at the height of the Vietnam War and the antiwar protests) – Nixon decided that he could now combine that decision, sugar-coated with an aggressive attack on international currency speculators and a protectionist 10% duty on imports into the United States, with the even more radical measure of a wage-price freeze to be followed by a longer-lasting program to control price increases, thereby snatching the most powerful and popular economic proposal of the Democrats right from under their noses.  Meanwhile, with the inflation threat neutralized, Arthur Burns could be pressured mercilessly to increase the rate of monetary expansion, ensuring that Nixon could stand for reelection in the middle of an economic boom.

But just as Nixon’s electoral triumph fell apart because of his Watergate fiasco, his economic success fell apart when an inflationary monetary policy combined with wage-and-price controls to produce increasing dislocations, shortages and inefficiencies, gradually sapping the strength of an economic recovery fueled by excess demand rather than increasing productivity.  Because broad-based, as opposed to narrowly targeted, price controls tend to be more popular before they are imposed than after (as too many expectations about favorable regulatory treatment are disappointed), the vast majority of controls were allowed to lapse when the original grant of Congressional authority to control prices expired in April 1974.

Already by the summer of 1973, shortages of gasoline and other petroleum products were becoming commonplace, and shortages of heating oil and natural gas had been widely predicted for the winter of 1973-74.  But in October 1973 in the wake of the Yom Kippur War and the imposition of an Arab Oil Embargo against the United States and other Western countries sympathetic to Israel, the shortages turned into the first “Energy Crisis.”  A Democratic Congress and the Nixon Administration sprang into action, enacting special legislation to allow controls to be kept on petroleum products of all sorts together with emergency authority to authorize the government to allocate products in short supply.

It still amazes me that almost all the dislocations manifested after the embargo and the associated energy crisis were attributed to excessive consumption of oil and petroleum products in general or to excessive dependence on imports, as if any of the shortages and dislocations would have occurred in the absence of price controls.  And hardly anyone realizes that price controls tend to drive the prices of whatever portion of the supply is exempt from control even higher than they would have risen in the absence of any controls.

About ten years after the first energy crisis, I published a book in which I tried to explain how all the dislocations that emerged from the Arab oil embargo and the 1978-79 crisis following the Iranian Revolution were attributable to the price controls first imposed by Richard Nixon on August 15, 1971.  But the connection between the energy crisis in all its ramifications and the Nixonian price controls unfortunately remains largely overlooked and ignored to this day.  If there is reason to reflect on what happened forty years ago on this date, it surely is for that reason and not because Nixon pulled the plug on a gold standard that had not been functioning for years.

The Mendacity of Yoram Hazony, Virtue Signaler

Yoram Hazony, an American-educated Israeli philosopher and political operator and a former assistant to Benjamin Netanyahu, has become a rising star of the American Right. The week before last, Hazony made his media debut at the Washington DC National Conservatism Conference inspired by his book The Virtue of Nationalism. Sponsored by the shadowy Edmund Burke Foundation, the Conference on “National Conservatism” – a title either remarkably tone-deaf or an in-your-face provocation echoing another “national ‘ism’” ideological movement – featured a keynote address by Fox News personality and provocateur par excellence Tucker Carlson, and various other right-wing notables of varying degrees of respectability, though self-avowed white nationalists were kept at a discreet distance — a distance sufficient to elicit resentful comments and nasty insinuations about Hazony’s origins and loyalties.

I had not planned to read Hazony’s book, having read enough of his articles to know that Hazony’s would not be a book to read for either pleasure or edification. But sometimes duty calls, so I bought Hazony’s book on Amazon at half price. I have now read the Introduction and the first three chapters. I plan to continue reading till the end, but I thought that I would write down some thoughts as I go along. So consider yourself warned: this may not be my last post about Hazony.

Hazony calls his Introduction “A Return to Nationalism”; it is not a good beginning.

Politics in Britain and America have taken a turn toward nationalism. This has been troubling to many, especially in educated circles, where global integration has long been viewed as a requirement of sound policy and moral decency. From this perspective, Britain’s vote to leave the European Union and the “America First” rhetoric coming out of Washington seem to herald a reversion to a more primitive stage in history, when war-mongering and racism were voiced openly and permitted to set the political agenda of nations. . . .

But nationalism was not always understood to be the evil that current public discourse suggests. . . . Progressives regarded Woodrow Wilson’s Fourteen Points and the Atlantic Charter of Franklin Roosevelt and Winston Churchill as beacons of hope for mankind – and this precisely because they were considered expressions of nationalism, promising national independence and self-determination to enslaved peoples around the world. (pp. 1-2)

Ahem, Hazony cleverly – though not truthfully — appropriates Wilson, FDR and Churchill to the cause of nationalism. It was a clever move to try to disarm opposition to his brief for nationalism by enlisting Wilson, FDR and Churchill on his side, but it was not very smart, being so obviously contradicted by well-known facts. Merely because Wilson, FDR, and Churchill all supported, with varying degrees of consistency and sincerity, the right of self-determination of national ethnic communities that had never, or not for a long time, enjoyed sovereign control over the territories in which they dwelled does not mean that they did not also favor international cooperation and supra-national institutions.

For example, points 3 and 4 of Wilson’s Fourteen Points were the following:

The removal, so far as possible, of all economic barriers and the establishment of an equality of trade conditions among all the nations consenting to the peace and associating themselves for its maintenance.

Adequate guarantees given and taken that national armaments will be reduced to the lowest point consistent with domestic safety.

And here is point 14:

A general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike.

That association was, of course, realized as the League of Nations, which Wilson strove mightily to create, though he failed to convince the United States Senate to ratify the treaty whereby the US would have joined the League.

I don’t know about you, but to me that sounds awfully globalist.

Now what about The Atlantic Charter?

While it supported the right of self-determination of all peoples, it also called for the lowering of trade barriers and for global economic cooperation. Moreover, Churchill, far from endorsing the unqualified right of all peoples to self-determination, flatly rejected the idea that the right of self-determination extended to British India.

But besides withholding the right of self-determination from British colonial possessions and presumably those of other European powers, Churchill, in a famous speech, endorsed the idea of a United States of Europe. Now Churchill did not necessarily envision a federal union along the lines of the European Union as now constituted, but he obviously did not reject on principle the idea of some form of supra-national governance.

We must build a kind of United States of Europe. In this way only will hundreds of millions of toilers be able to regain the simple joys and hopes which make life worth living.

So it is simply a fabrication and a misrepresentation to suggest that nationalism has ever been regarded as anything like a universal principle of political action, governance or justice. It is one of many principles, all of which have some weight, but must be balanced against, and reconciled with, other principles of justice, policy and expediency.

Going from bad to worse, Hazony continues,

Conservatives from Teddy Roosevelt to Dwight Eisenhower likewise spoke of nationalism as a positive good. (Id.)

Where to begin? Hazony, who is not averse to footnoting (216 footnotes altogether, almost one per page, often providing copious references to sources and scholarly literature), offers not one documentary or secondary source for this assertion. To be sure, Teddy Roosevelt and Dwight Eisenhower were Republicans. But Roosevelt differed from most Republicans of his time, gaining the Presidency only because McKinley wanted to marginalize him by choosing him as a running mate at a time when no Vice-President since Van Buren had succeeded to the Presidency, except upon the death of the incumbent President.

Eisenhower had been a non-political military figure with no party affiliation until his candidacy for the Republican Presidential nomination, as an alternative to the preferred conservative choice, Robert Taft. Eisenhower did not self-identify as a conservative, preferring to describe himself as a “modern Republican” to the disgust of conservatives like Barry Goldwater, whose best-selling book The Conscience of a Conservative was a sustained attack on Eisenhower’s refusal even to try to roll back the New Deal.

Moreover, when TR coined the term “New Nationalism” in a famous speech he gave in 1910, he was preparing to challenge his chosen successor, William Howard Taft, for the Republican Presidential nomination, feeling betrayed by Taft’s efforts to accommodate the conservative Republicans TR so detested. Failing to win the Republican nomination, TR ran as the candidate of the Progressive Party, splitting the Republican party, thereby ensuring the election of the progressive, though racist, Woodrow Wilson. Nor was that the end of it. Roosevelt was himself an imperialist, who had supported the War against Spain and the annexation of the Philippines, and an early and militant proponent of US entry into World War I against Germany on the side of Britain and France. And, after the war, Roosevelt supported US entry into the League of Nations. These are not obscure historical facts, but Hazony, despite his Princeton undergraduate degree and doctorate in philosophy from Rutgers, shows no awareness of them.

Hazony seems equally unaware that, in the American context, nationalism had an entirely different meaning from its nineteenth-century European meaning as the right of national ethnic populations, defined mainly by their common language, to form sovereign political units in place of the multi-ethnic, largely undemocratic kingdoms and empires by which they were ruled. In America, nationalism was distinguished from sectionalism, expressing the idea that the United States had become an organic unit unto itself, not merely an association of separate and distinct states. This idea was emphasized by Hamilton and the Federalists, and later by the Whigs, against the states’-rights position of the Jeffersonian Democrats, who resisted the claims of national and federal primacy. The classic expression of the uniquely American national sensibility was provided by Lincoln in his Gettysburg Address.

Fourscore and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live.

Lincoln offered a conception of nationhood entirely different from that which inspired demands for the right of self-determination by European national ethnic and linguistic communities. If the notion of American exceptionalism is to have any clear meaning, it can only be in the context of Lincoln’s description of the origin and meaning of the American nationality.

After his clearly fraudulent appropriation of Theodore and Franklin Roosevelt, Winston Churchill and Dwight Eisenhower to the Nationalist Conservative cause, Hazony seizes upon Ronald Reagan and Margaret Thatcher. “In their day,” Hazony assures us, “Ronald Reagan and Margaret Thatcher were welcomed by conservatives for the ‘new nationalism’ they brought to political life.” For good measure, Hazony also adds David Ben-Gurion and Mahatma Gandhi to his nationalist pantheon, though, unaccountably, he omits any mention of their enthusiastic embrace by conservatives.

Hazony favors his readers with a single footnote at the end of this remarkable and fantastical paragraph. Set aside the fact that “new nationalism” is a term peculiarly associated with Teddy Roosevelt, not with Reagan, who, to my knowledge, never uttered the phrase; the primary source cited by Hazony doesn’t even refer to Reagan in the same context as “new nationalism.” Here is the text of that footnote.

On Reagan’s “new nationalism,” see Norman Podhoretz, “The New American Majority,” Commentary (January 1981); Irving Kristol, “The Emergence of Two Republican Parties,” Reflections of a Neo-Conservative (New York: Basic Books, 1983), 111. (p. 237)

I am unable to find the Kristol text on the internet, but I did find Podhoretz’s article on the Commentary website. I will quote the entire paragraph in which the words “new nationalism” make their only appearance (it is also the only appearance of “nationalism” in the article). But before reproducing the paragraph, I will register my astonishment at the audacity of Hazony in invoking the two godfathers of neo-conservatism as validators of the spurious claim made by Hazony on Reagan’s behalf to posthumous recognition as a National Conservative hero, inasmuch as Hazony goes out of his way, as we shall see presently, to cast neo-conservatism into the Gehenna of imperialistic liberalism. But first, let us consider — and marvel at — Podhoretz’s discussion of the “new nationalism.”

In my opinion, because of Chappaquiddick alone, Edward Kennedy could not have become President of the United States in 1980. Yet even if Chappaquiddick had not been a factor, Edward Kennedy would still not have been a viable candidate — not for the Democratic nomination and certainly not for the Presidency in the general election. But if this is so, why did so many Democrats (over 50 percent in some of the early polls taken before he announced) declare their support for him? Here again it is impossible to say with complete assurance. But given the way the votes were subsequently cast in 1980, I think it is a reasonable guess that in those early days many people who had never paid close attention to him took Kennedy for the same kind of political figure his brother John had been. We know from all the survey data that the political mood had been shifting for some years in a consistent direction — away from the self-doubts and self-hatreds and the neo-isolationism of the immediate post-Vietnam period and toward what some of us have called a new nationalism. In the minds of many people caught up in the new nationalist spirit, John F. Kennedy stood for a powerful America, and in expressing enthusiasm for Edward Kennedy, they were in all probability identifying him with his older brother.

This is just an astoundingly brazen misrepresentation by Hazony in hypocritically misappropriating Reagan, to whose memory most Republicans and conservatives feel some lingering sentimental attachment, even as they discard and disavow many of his most characteristic political principles.

The extent to which Hazony repudiates the neo-conservative world view that was a major pillar of the Reagan Presidency becomes clear in a long paragraph in which Hazony sets up his deeply misleading dichotomy between the virtuous nationalism he espouses and the iniquitous liberal imperialism that he excoriates as the only two possible choices for organizing our political institutions.

This debate between nationalism and imperialism became acutely relevant again with the fall of the Berlin Wall in 1989. At that time, the struggle against Communism ended, and the minds of Western leaders became preoccupied with two great imperialist projects: the European Union, which has progressively relieved member nations of many of the powers usually associated with political independence; and the project of establishing an American “world order,” in which nations that do not abide by international law will be coerced into doing so principally by means of American military might. These are imperialist projects, even though their proponents do not like to call them that, for two reasons: First, their purpose is to remove decision-making from the hands of independent national governments and place it in the hands of international governments or bodies. And second, as you can immediately see from the literature produced by these individuals and institutions supporting these endeavors, they are consciously part of an imperialist political tradition, drawing their historical inspiration from the Roman Empire, the Austro-Hungarian Empire, and the British Empire. For example, Charles Krauthammer’s argument for American “Universal Dominion,” written at the dawn of the post-Cold War period, calls for America to create a “super-sovereign,” which will preside over the permanent “depreciation . . . of the notion of sovereignty” for all nations on earth. Krauthammer adopts the Latin term pax Americana to describe this vision, invoking the image of the United States as the new Rome: Just as the Roman Empire supposedly established a pax Romana . . . that obtained security and quiet for all of Europe, so America would now provide security and quiet for the entire world. (pp. 3-4)

I do not defend Krauthammer’s view of pax Americana and his support for invading Iraq in 2003. But the war in Iraq was largely instigated by a small group of right-wing ideologists with whom Krauthammer and other neo-conservatives like William Kristol and Robert Kagan were aligned. In the wake of September 11, 2001, they leveraged fear of another attack into a quixotic, poorly thought-out, and incompetently executed military adventure in Iraq.

That invasion was not, as Hazony falsely suggests, the inevitable result of liberal imperialism (as if liberalism and imperialism were cognate ideas). Moreover, it is deeply dishonest for Hazony to single out Krauthammer et al. for responsibility for that disaster, when Hazony’s mentor and sponsor, Benjamin Netanyahu, was a major supporter and outspoken advocate for the invasion of Iraq.

There is much more to be said about Hazony’s bad faith, but I have already said enough for one post.

Dr. Shelton Remains Outspoken: She Should Have Known Better

I started blogging in July 2011, and in one of my first blogposts I discussed an article in the now defunct Weekly Standard by Dr. Judy Shelton entitled “Gold Standard or Bust.” I wrote then:

I don’t know, and have never met Dr. Shelton, but she has been a frequent op-ed contributor to the Wall Street Journal and various other publications of a like ideological orientation for 20 years or more, invariably advocating a return to the gold standard.  In 1994, she published a book Money Meltdown touting the gold standard as a cure for all our monetary ills.

I was tempted to provide a line-by-line commentary on Dr. Shelton’s Weekly Standard piece, but it would be tedious and churlish to dwell excessively on her deficiencies as a wordsmith or lapses from lucidity.

So I was not very impressed by Dr. Shelton then. I have had occasion to write about her again a few times since, and I cannot report that I have detected any improvement in the lucidity of her thought or the clarity of her exposition.

Aside from, or perhaps owing to, her infatuation with the gold standard, Dr. Shelton seems to have developed a deep aversion to what is commonly, and usually misleadingly, known as currency manipulation. Using her modest entrepreneurial skills as a monetary-policy pundit, Dr. Shelton has tried to use the specter of currency manipulation as a talking point for gold-standard advocacy. So, in 2017 Dr. Shelton wrote an op-ed about currency manipulation for the Wall Street Journal that was so woefully uninformed and unintelligible that I felt obligated to write a blogpost just for her, a tutorial on the ABCs of currency manipulation, as I called it then. Here’s an excerpt from my tutorial:

[i]t was no surprise to see in Tuesday’s Wall Street Journal that monetary-policy entrepreneur Dr. Judy Shelton has written another one of her screeds promoting the gold standard, in which, showing no awareness of the necessary conditions for currency manipulation, she assures us that a) currency manipulation is a real problem and b) that restoring the gold standard would solve it.

Certainly the rules regarding international exchange-rate arrangements are not working. Monetary integrity was the key to making Bretton Woods institutions work when they were created after World War II to prevent future breakdowns in world order due to trade. The international monetary system, devised in 1944, was based on fixed exchange rates linked to a gold-convertible dollar.

No such system exists today. And no real leader can aspire to champion both the logic and the morality of free trade without confronting the practice that undermines both: currency manipulation.

Ahem, pray tell, which rules relating to exchange-rate arrangements does Dr. Shelton believe are not working? She doesn’t cite any. And, what, on earth does “monetary integrity” even mean, and what does that high-minded, but totally amorphous, concept have to do with the rules of exchange-rate arrangements that aren’t working?

Dr. Shelton mentions “monetary integrity” in the context of the Bretton Woods system, a system based — well, sort of — on fixed exchange rates, forgetting – or choosing not — to acknowledge that, under the Bretton Woods system, exchange rates were also unilaterally adjustable by participating countries. Not only were they adjustable, but currency devaluations were implemented on numerous occasions as a strategy for export promotion, the most notorious example being Britain’s 30% devaluation of sterling in 1949, just five years after the Bretton Woods agreement had been signed. Indeed, many other countries, including West Germany, Italy, and Japan, also had chronically undervalued currencies under the Bretton Woods system, as did France after it rejoined the gold standard in 1926 at a devalued rate deliberately chosen to ensure that its export industries would enjoy a competitive advantage.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.
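The adjustment mechanism described in the preceding paragraph can be illustrated with a deliberately crude simulation (my own toy model, not anything from the tutorial itself): after a one-time 10% depreciation, the export surplus brings in money each period, and the resulting advantage either erodes as domestic prices rise or persists only because the central bank keeps absorbing the inflow into its reserve holdings.

```python
# Toy illustration (not a calibrated model) of the adjustment described above.
# A 10% nominal depreciation generates an export surplus proportional to the
# real undervaluation. If the inflow expands the money stock, domestic prices
# rise and the real depreciation erodes; if the central bank absorbs the inflow
# into foreign-exchange reserves, the real depreciation persists but reserves
# accumulate without limit.

def simulate(absorb_into_reserves, periods=20):
    e = 1.10           # nominal exchange rate (domestic per foreign), 10% depreciated
    p_foreign = 1.0    # foreign price level, held fixed
    p_domestic = 1.0   # domestic price level
    money = 100.0      # domestic money stock
    reserves = 0.0     # central-bank foreign-exchange reserves
    for _ in range(periods):
        real_rate = e * p_foreign / p_domestic       # real exchange rate
        surplus = 50.0 * (real_rate - 1.0)           # export surplus from undervaluation
        if absorb_into_reserves:
            reserves += surplus                      # inflow accumulates as reserves
        else:
            old_money = money
            money += surplus                         # inflow expands the money stock...
            p_domestic *= money / old_money          # ...and prices rise in proportion
    return round(e * p_foreign / p_domestic, 3), round(reserves, 1)

print(simulate(absorb_into_reserves=False))  # real rate drifts back toward 1.0
print(simulate(absorb_into_reserves=True))   # real rate stays at 1.10; reserves keep growing
```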

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard. And the most egregious recent example of currency manipulation was undertaken by the Chinese central bank when it effectively pegged the yuan to the dollar at a fixed rate. Keeping its exchange rate fixed against the dollar was precisely the offense that the currency-manipulation police accused the Chinese of committing.

I leave it to interested readers to go back and finish the rest of my tutorial for Dr. Shelton. And if you read carefully and attentively, you are likely to understand the concept of currency manipulation a lot more clearly than when you started.

Alas, it’s obvious that Dr. Shelton has either not read or not understood the tutorial I wrote for her, because, in her latest pronouncement on the subject she covers substantially the same ground as she did two years ago, with no sign of increased comprehension of the subject on which she expounds with such misplaced self-assurance. Here are some samples of Dr. Shelton’s conceptual confusion and historical ignorance.

History can be especially informative when it comes to evaluating the relationship between optimal economic performance and monetary regimes. In the 1930s, for example, the “beggar thy neighbor” tactic of devaluing currencies against gold to gain a trade export advantage hampered a global economic recovery.

Beggar-thy-neighbor policies were indeed adopted by the United States, but they were adopted first in 1922 (the Fordney-McCumber Act) and again in 1930 (the Smoot-Hawley Act), when the US was on the gold standard with the value of the dollar pegged at $20.67 an ounce of gold. The Great Depression started in late 1929, but the stock market crash of 1929 may have been in part precipitated by fears that the Smoot-Hawley Act would be passed by Congress and signed into law by President Hoover.

At any rate, exchange rates among most major countries were pegged to either gold or the dollar until September 1931 when Britain suspended the convertibility of the pound into gold. The Great Depression was the result of a rapid deflation caused by gold accumulation by central banks as they rejoined the gold standard that had been almost universally suspended during World War I. Countries that remained on the gold standard during the Great Depression were condemned to suffer deflation as gold became ever more valuable in real terms, so that currency depreciation against gold was the only pathway to recovery. Thus, once convertibility was suspended and the pound allowed to depreciate, the British economy stopped contracting and began a modest recovery with slowly expanding output and employment.

The United States, however, kept the dollar pegged to its $20.67 an ounce parity with gold until April 1933, when FDR saved the American economy by suspending convertibility and commencing a policy of deliberate reflation (i.e., inflation to restore the 1926 price level). An unprecedented expansion of output, employment and income accompanied the rise in prices following the suspension of the gold standard. Currency depreciation was the key to recovery from, not the cause of, depression.

Having exposed her ignorance of the causes of the Great Depression, Dr. Shelton then begins a descent into her confusion about the subject of currency manipulation, about which I had tried to tutor her, evidently without success.

The absence of rules aimed at maintaining a level monetary playing field invites currency manipulation that could spark a backlash against the concept of free trade. Countries engaged in competitive depreciation undermine the principles of genuine competition, and those that have sought to participate in good faith in the global marketplace are unfairly penalized by the monetary sleight of hand executed through central banks.

Currency manipulation is possible only under specific conditions. A depreciating currency is not normally a manipulated currency. Currencies fluctuate in relative values for many different reasons, but if prices adjust in rough proportion to the change in exchange rates, the competitive positions of the countries are only temporarily affected by the change in exchange rates. For a country to gain a sustained advantage for its export and import-competing industries by depreciating its exchange rate, it must adopt a monetary policy that consistently provides less cash than the public demands to satisfy its liquidity needs, forcing the public to obtain the desired cash balances through a balance-of-payments surplus and an inflow of foreign-exchange reserves into the country’s central bank or treasury.
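One compact way to state that condition, using the standard identity from the monetary approach to the balance of payments (my notation, not Dr. Shelton's or the original tutorial's), is

\Delta R = \Delta M^{d} - \Delta D

where \Delta R is the accumulation of foreign-exchange reserves (the balance-of-payments surplus), \Delta M^{d} is the growth in the public's demand for cash balances, and \Delta D is domestic credit creation by the monetary authority. A sustained competitive advantage from a depressed exchange rate requires the central bank to hold \Delta D below \Delta M^{d} period after period, the gap being filled by a continuing export surplus and reserve inflow.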

U.S. leadership is necessary to address this fundamental violation of free-trade practices and its distortionary impact on free-market outcomes. When the United States’ trading partners engage in currency manipulation, it is not competing — it’s cheating.

That is why it is vital to weigh the implications of U.S. monetary policy on the dollar’s exchange-rate value against other currencies. Trade and financial flows can be substantially altered by speculative market forces responding to the public comments of officials at the helm of the European Central Bank, the Bank of Japan or the People’s Bank of China — with calls for “additional stimulus” alerting currency players to impending devaluation policies.

Dr. Shelton here reveals a comprehensive misunderstanding of the difference between a monetary policy that aims to stimulate economic activity in general by raising the price level, or increasing the rate of inflation, to stimulate expenditure, and a policy of monetary restraint that aims to raise the prices of domestic export and import-competing products relative to the prices of domestic non-tradable goods and services, e.g., new homes and apartments. It is only the latter combination of tight monetary policy and exchange-rate intervention to depreciate a currency in foreign-exchange markets that qualifies as currency manipulation.

And, under that understanding, it is obvious that currency manipulation is possible under a fixed-exchange-rate system, as France demonstrated in the 1920s and 1930s, and as most European countries and Japan did in the 1950s and early 1960s under the Bretton Woods system so well loved by Dr. Shelton.

In the 1950s and early 1960s, the US dollar was chronically overvalued. The situation was not remedied until the 1960s under the Kennedy administration, when consistently loose monetary policy by the Fed made currency manipulation so costly for the Germans and Japanese that they revalued their currencies upward to avoid the inflationary consequences of US monetary expansion.

And then, in a final flourish, Dr. Shelton puts her ignorance of what happened in the Great Depression on public display with the following observation.

When currencies shift downward against the dollar, it makes U.S. exports more expensive for consumers in other nations. It also discounts the cost of imported goods compared with domestic U.S. products. Downshifting currencies against the dollar has the same punishing impact as a tariff. That is why, as in the 1930s during the Great Depression, currency devaluation prompts retaliatory tariffs.

The retaliatory tariffs were imposed in response to the US tariffs that preceded the Great Depression or were imposed at its outset in 1930. The devaluations against gold promoted economic recovery and were accompanied by a general reduction in tariff levels under FDR after the US devalued the dollar against gold and the remaining gold-standard currencies. Whereof she knows nothing, thereof Dr. Shelton would do better to remain silent.

Dr. Popper: Or How I Learned to Stop Worrying and Love Metaphysics

Introduction to Falsificationism

Although his reputation among philosophers was never quite as exalted as it was among non-philosophers, Karl Popper was a pre-eminent figure in 20th century philosophy. As a non-philosopher, I won’t attempt to adjudicate which take on Popper is the more astute, but I think I can at least sympathize, if not fully agree, with philosophers who believe that Popper is overrated by non-philosophers. In an excellent blog post, Philippe Lemoine gives a good explanation of why philosophers look askance at falsificationism, Popper’s most important contribution to philosophy.

According to Popper, what distinguishes or demarcates a scientific statement from a non-scientific (metaphysical) statement is whether the statement can, or could be, disproved or refuted – falsified (in the sense of being shown to be false not in the sense of being forged, misrepresented or fraudulently changed) – by an actual or potential observation. Vulnerability to potentially contradictory empirical evidence, according to Popper, is what makes science special, allowing it to progress through a kind of dialectical process of conjecture (hypothesis) and refutation (empirical testing) leading to further conjecture and refutation and so on.

Theories purporting to explain anything and everything are thus non-scientific or metaphysical. Claiming to be able to explain too much is a vice, not a virtue, in science. Science advances by risk-taking, not by playing it safe. Trying to explain too much is actually playing it safe. If you’re not willing to take the chance of putting your theory at risk, by saying that this and not that will happen — rather than saying that this or that will happen — you’re playing it safe. This view of science, portrayed by Popper in modestly heroic terms, was not unappealing to scientists, and in part accounts for the positive reception of Popper’s work among scientists.

But this heroic view of science, as Lemoine nicely explains, was just a bit oversimplified. Theories never exist in a vacuum; there is always implicit or explicit background knowledge that informs and provides context for the application of any theory from which a prediction is deduced. To deduce a prediction from any theory, background knowledge, including complementary theories that are presumed to be valid for purposes of making a prediction, is necessary. Any prediction relies not just on a single theory but on a system of related theories and auxiliary assumptions.

So when a prediction is deduced from a theory, and the predicted event is not observed, it is never unambiguously clear which of the multiple assumptions underlying the prediction is responsible for the failure of the predicted event to be observed. The one-to-one logical dependence between a theory and a prediction upon which Popper’s heroic view of science depends doesn’t exist. Because the heroic view of science is too simplified, Lemoine considers it false, at least in the naïve and heroic form in which it is often portrayed by its proponents.

But, as Lemoine himself acknowledges, Popper was not unaware of these issues and actually dealt with some if not all of them. Popper therefore dismissed those criticisms, pointing to his various acknowledgments of, and even anticipations of and responses to, the criticisms. Nevertheless, his rhetorical style was generally not to qualify his position but to present it in stark terms, thereby reinforcing the view of his critics that he actually did espouse the naïve version of falsificationism that, only under duress, would be toned down to meet the objections raised to the usual unqualified version of his argument. Popper, after all, believed in making bold conjectures and framing a theory in the strongest possible terms, and he characteristically adopted an argumentative and polemical stance in staking out his positions.

Toned-Down Falsificationism

In his toned-down version of falsificationism, Popper acknowledged that one can never know whether a prediction fails because the underlying theory is false or because one of the auxiliary assumptions required to make the prediction is false, or even because of an error in measurement. But that acknowledgment, Popper insisted, does not refute falsificationism, because falsificationism is not a scientific theory about how scientists do science; it is a normative theory about how scientists ought to do science. The normative implication of falsificationism is that scientists should not try to shield their theories from empirical disproof by making just-so adjustments through ad hoc auxiliary assumptions, e.g., ceteris paribus assumptions. Rather, they should accept the falsification of their theories when confronted by observations that conflict with the implications of their theories and then formulate new and better theories to replace the old ones.

But a strict methodological rule against adjusting auxiliary assumptions or making further assumptions of an ad hoc nature would have ruled out many fruitful theoretical developments resulting from attempts to account for failed predictions. For example, the planet Neptune was discovered in 1846 because, rather than conclude that Newtonian theory was falsified by the failure of Uranus to follow its predicted orbital path, the French astronomer Urbain Le Verrier posited (ad hoc) the existence of another planet that would account for the path Uranus actually followed. Now in this case, it was possible to observe the predicted position of the new planet, and its discovery in the predicted location turned out to be a sensational confirmation of Newtonian theory.

Popper therefore admitted that making an ad hoc assumption in order to save a theory from refutation was permissible under his version of normative falsificationism, but only if the ad hoc assumption was independently testable. But suppose that, under the circumstances, it would have been impossible to observe the existence of the predicted planet, at least with the observational tools then available, making the ad hoc assumption testable only in principle, but not in practice. Strictly adhering to Popper’s methodological requirement of being able to test independently any ad hoc assumption would have meant accepting the refutation of the Newtonian theory rather than positing the untestable — but true — ad hoc other-planet hypothesis to account for the failed prediction of the orbital path of Uranus.

My point is not that ad hoc assumptions to save a theory from falsification are ok, but that a strict methodological rule requiring rejection of any theory once it appears to be contradicted by empirical evidence, and prohibiting the use of any ad hoc assumption to save the theory unless the ad hoc assumption is independently testable, might well lead to the wrong conclusion, given the nuances and special circumstances associated with every case in which a theory seems to be contradicted by observed evidence. Such contradictions are rarely so blatant that the theory cannot be reconciled with the evidence. Indeed, as Popper himself recognized, all observations are themselves understood and interpreted in the light of theoretical presumptions. It is only in extreme cases that evidence cannot be interpreted in a way that more or less conforms to the theory under consideration. At first blush, the Copernican heliocentric view of the world seemed obviously contradicted by the direct sensory evidence that the earth is flat and stationary and that the sun rises and sets. Empirical refutation could be avoided only by an alternative interpretation of the sensory data that reconciled the heliocentric theory with the apparent — and obvious — flatness and stationarity of the earth and the movement of the sun and moon in the heavens.

So the problem with falsificationism as a normative theory is that it’s not obvious why a moderately good, but less than perfect, theory should be abandoned simply because it’s not perfect and suffers from occasional predictive failures. To be sure, if a better theory than the one under consideration is available, predicting correctly whenever the one under consideration predicts correctly and predicting more accurately than the one under consideration when the latter fails to predict correctly, the alternative theory is surely preferable, but that simply underscores the point that evaluating any theory in isolation is not very important. After all, every theory, being a simplification, is an imperfect representation of reality. It is only when two or more theories are available that scientists must try to determine which of them is preferable.

Oakeshott and the Poverty of Falsificationism

These problems with falsificationism were brought into clearer focus by Michael Oakeshott in his famous essay “Rationalism in Politics,” which, though not directed at Popper himself (Oakeshott was Popper’s colleague at the London School of Economics), can be read as a critique of Popper’s attempt to prescribe methodological rules for scientists to follow in carrying out their research. Methodological rules of the kind propounded by Popper are precisely the sort of supposedly rational rules of practice, intended to ensure the successful outcome of an undertaking, that Oakeshott believed to be ill-advised and hopelessly naïve. The rationalist conceit, in Oakeshott’s view, is that there are demonstrably correct answers to practical questions and that practical activity is rational only when it is based on demonstrably true moral or causal rules.

The entry on Michael Oakeshott in the Stanford Encyclopedia of Philosophy summarizes Oakeshott’s position as follows:

The error of Rationalism is to think that making decisions simply requires skill in the technique of applying rules or calculating consequences. In an early essay on this theme, Oakeshott distinguishes between “technical” and “traditional” knowledge. Technical knowledge is of facts or rules that can be easily learned and applied, even by those who are without experience or lack the relevant skills. Traditional knowledge, in contrast, means “knowing how” rather than “knowing that” (Ryle 1949). It is acquired by engaging in an activity and involves judgment in handling facts or rules (RP 12–17). The point is not that rules cannot be “applied” but rather that using them skillfully or prudently means going beyond the instructions they provide.

The idea that a scientist’s decision about when to abandon one theory and replace it with another can be reduced to the application of a Popperian falsificationist maxim ignores all the special circumstances and all the accumulated theoretical and practical knowledge that a truly expert scientist will bring to bear in studying and addressing such a problem. Here is how Oakeshott addresses the problem in his famous essay.

These two sorts of knowledge, then, distinguishable but inseparable, are the twin components of the knowledge involved in every human activity. In a practical art such as cookery, nobody supposes that the knowledge that belongs to the good cook is confined to what is or what may be written down in the cookery book: technique and what I have called practical knowledge combine to make skill in cookery wherever it exists. And the same is true of the fine arts, of painting, of music, of poetry: a high degree of technical knowledge, even where it is both subtle and ready, is one thing; the ability to create a work of art, the ability to compose something with real musical qualities, the ability to write a great sonnet, is another, and requires in addition to technique, this other sort of knowledge. Again these two sorts of knowledge are involved in any genuinely scientific activity. The natural scientist will certainly make use of observation and verification that belong to his technique, but these rules remain only one of the components of his knowledge; advances in scientific knowledge were never achieved merely by following the rules. . . .

Technical knowledge . . . is susceptible of formulation in rules, principles, directions, maxims – comprehensively, in propositions. It is possible to write down technical knowledge in a book. Consequently, it does not surprise us that when an artist writes about his art, he writes only about the technique of his art. This is so, not because he is ignorant of what may be called the aesthetic element, or thinks it unimportant, but because what he has to say about that he has said already (if he is a painter) in his pictures, and he knows no other way of saying it. . . . And it may be observed that this character of being susceptible of precise formulation gives to technical knowledge at least the appearance of certainty: it appears to be possible to be certain about a technique. On the other hand, it is characteristic of practical knowledge that it is not susceptible of formulation of that kind. Its normal expression is in a customary or traditional way of doing things, or, simply, in practice. And this gives it the appearance of imprecision and consequently of uncertainty, of being a matter of opinion, of probability rather than truth. It is indeed knowledge that is expressed in taste or connoisseurship, lacking rigidity and ready for the impress of the mind of the learner. . . .

Technical knowledge, in short, can be both taught and learned in the simplest meanings of these words. On the other hand, practical knowledge can neither be taught nor learned, but only imparted and acquired. It exists only in practice, and the only way to acquire it is by apprenticeship to a master – not because the master can teach it (he cannot), but because it can be acquired only by continuous contact with one who is perpetually practicing it. In the arts and in natural science what normally happens is that the pupil, in being taught and in learning the technique from his master, discovers himself to have acquired also another sort of knowledge than merely technical knowledge, without it ever having been precisely imparted and often without being able to say precisely what it is. Thus a pianist acquires artistry as well as technique, a chess-player style and insight into the game as well as knowledge of the moves, and a scientist acquires (among other things) the sort of judgement which tells him when his technique is leading him astray and the connoisseurship which enables him to distinguish the profitable from the unprofitable directions to explore.

Now, as I understand it, Rationalism is the assertion that what I have called practical knowledge is not knowledge at all, the assertion that, properly speaking, there is no knowledge which is not technical knowledge. The Rationalist holds that the only element of knowledge involved in any human activity is technical knowledge and that what I have called practical knowledge is really only a sort of nescience which would be negligible if it were not positively mischievous. (Rationalism in Politics and Other Essays, pp. 12-16)

Almost three years ago, I attended the History of Economics Society meeting at Duke University at which Jeff Biddle of Michigan State University delivered his Presidential Address, “Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice,” published in the June 2017 issue of the Journal of the History of Economic Thought. The paper is a remarkable survey of the differing attitudes towards using formal probability theory as the basis for making empirical inferences from the data. The underlying assumptions of probability theory about the nature of the data were widely viewed as being too extreme to make probability theory an acceptable basis for empirical inferences from the data. However, the early negative attitudes toward accepting probability theory as the basis for making statistical inferences from data were gradually overcome (or disregarded). But as late as the 1960s, even though econometric techniques were becoming more widely accepted, a great deal of empirical work, including work by some of the leading empirical economists of the time, avoided using the techniques of statistical inference to assess empirical data using regression analysis. Only in the 1970s was there a rapid sea-change in professional opinion that made statistical inference based on explicit probabilistic assumptions about underlying data distributions the requisite technique for drawing empirical inferences from the analysis of economic data. In the final section of his paper, Biddle offers an explanation for this rapid change in professional attitude toward the use of probabilistic assumptions about data distributions as the required method of the empirical assessment of economic data.

By the 1970s, there was a broad consensus in the profession that inferential methods justified by probability theory—methods of producing estimates, of assessing the reliability of those estimates, and of testing hypotheses—were not only applicable to economic data, but were a necessary part of almost any attempt to generalize on the basis of economic data. . . .

This paper has been concerned with beliefs and practices of economists who wanted to use samples of statistical data as a basis for drawing conclusions about what was true, or probably true, in the world beyond the sample. In this setting, “mechanical objectivity” means employing a set of explicit and detailed rules and procedures to produce conclusions that are objective in the sense that if many different people took the same statistical information, and followed the same rules, they would come to exactly the same conclusions. The trustworthiness of the conclusion depends on the quality of the method. The classical theory of inference is a prime example of this sort of mechanical objectivity.

Porter [Trust in Numbers: The Pursuit of Objectivity in Science and Public Life] contrasts mechanical objectivity with an objectivity based on the “expert judgment” of those who analyze data. Expertise is acquired through a sanctioned training process, enhanced by experience, and displayed through a record of work meeting the approval of other experts. One’s faith in the analyst’s conclusions depends on one’s assessment of the quality of his disciplinary expertise and his commitment to the ideal of scientific objectivity. Elmer Working’s method of determining whether measured correlations represented true cause-and-effect relationships involved a good amount of expert judgment. So, too, did Gregg Lewis’s adjustments of the various estimates of the union/non-union wage gap, in light of problems with the data and peculiarities of the times and markets from which they came. Keynes and Persons pushed for a definition of statistical inference that incorporated space for the exercise of expert judgment; what Arthur Goldberger and Lawrence Klein referred to as ‘statistical inference’ had no explicit place for expert judgment.

Speaking in these terms, I would say that in the 1920s and 1930s, empirical economists explicitly acknowledged the need for expert judgment in making statistical inferences. At the same time, mechanical objectivity was valued—there are many examples of economists of that period employing rule-oriented, replicable procedures for drawing conclusions from economic data. The rejection of the classical theory of inference during this period was simply a rejection of one particular means for achieving mechanical objectivity. By the 1970s, however, this one type of mechanical objectivity had become an almost required part of the process of drawing conclusions from economic data, and was taught to every economics graduate student.

Porter emphasizes the tension between the desire for mechanically objective methods and the belief in the importance of expert judgment in interpreting statistical evidence. This tension can certainly be seen in economists’ writings on statistical inference throughout the twentieth century. However, it would be wrong to characterize what happened to statistical inference between the 1940s and the 1970s as a displacement of procedures requiring expert judgment by mechanically objective procedures. In the econometric textbooks published after 1960, explicit instruction on statistical inference was largely limited to instruction in the mechanically objective procedures of the classical theory of inference. It was understood, however, that expert judgment was still an important part of empirical economic analysis, particularly in the specification of the models to be estimated. But the disciplinary knowledge needed for this task was to be taught in other classes, using other textbooks.

And in practice, even after the statistical model had been chosen, the estimates and standard errors calculated, and the hypothesis tests conducted, there was still room to exercise a fair amount of judgment before drawing conclusions from the statistical results. Indeed, as Marcel Boumans (2015, pp. 84–85) emphasizes, no procedure for drawing conclusions from data, no matter how algorithmic or rule bound, can dispense entirely with the need for expert judgment. This fact, though largely unacknowledged in the post-1960s econometrics textbooks, would not be denied or decried by empirical economists of the 1970s or today.

This does not mean, however, that the widespread embrace of the classical theory of inference was simply a change in rhetoric. When application of classical inferential procedures became a necessary part of economists’ analyses of statistical data, the results of applying those procedures came to act as constraints on the set of claims that a researcher could credibly make to his peers on the basis of that data. For example, if a regression analysis of sample data yielded a large and positive partial correlation, but the correlation was not “statistically significant,” it would simply not be accepted as evidence that the “population” correlation was positive. If estimation of a statistical model produced a significant estimate of a relationship between two variables, but a statistical test led to rejection of an assumption required for the model to produce unbiased estimates, the evidence of a relationship would be heavily discounted.

So, as we consider the emergence of the post-1970s consensus on how to draw conclusions from samples of statistical data, there are arguably two things to be explained. First, how did it come about that using a mechanically objective procedure to generalize on the basis of statistical measures went from being a choice determined by the preferences of the analyst to a professional requirement, one that had real consequences for what economists would and would not assert on the basis of a body of statistical evidence? Second, why was it the classical theory of inference that became the required form of mechanical objectivity? . . .

Perhaps searching for an explanation that focuses on the classical theory of inference as a means of achieving mechanical objectivity emphasizes the wrong characteristic of that theory. In contrast to earlier forms of mechanical objectivity used by economists, such as standardized methods of time series decomposition employed since the 1920s, the classical theory of inference is derived from, and justified by, a body of formal mathematics with impeccable credentials: modern probability theory. During a period when the value placed on mathematical expression in economics was increasing, it may have been this feature of the classical theory of inference that increased its perceived value enough to overwhelm long-standing concerns that it was not applicable to economic data. In other words, maybe the chief causes of the profession’s embrace of the classical theory of inference are those that drove the broader mathematization of economics, and one should simply look to the literature that explores possible explanations for that phenomenon rather than seeking a special explanation of the embrace of the classical theory of inference.

I would suggest one more factor that might have made the classical theory of inference more attractive to economists in the 1950s and 1960s: the changing needs of pedagogy in graduate economics programs. As I have just argued, since the 1920s, economists have employed both judgment based on expertise and mechanically objective data-processing procedures when generalizing from economic data. One important difference between these two modes of analysis is how they are taught and learned. The classical theory of inference as used by economists can be taught to many students simultaneously as a set of rules and procedures, recorded in a textbook and applicable to “data” in general. This is in contrast to the judgment-based reasoning that combines knowledge of statistical methods with knowledge of the circumstances under which the particular data being analyzed were generated. This form of reasoning is harder to teach in a classroom or codify in a textbook, and is probably best taught using an apprenticeship model, such as that which ideally exists when an aspiring economist writes a thesis under the supervision of an experienced empirical researcher.

During the 1950s and 1960s, the ratio of PhD candidates to senior faculty in PhD-granting programs was increasing rapidly. One consequence of this, I suspect, was that experienced empirical economists had less time to devote to providing each interested student with individualized feedback on his attempts to analyze data, so that relatively more of a student’s training in empirical economics came in an econometrics classroom, using a book that taught statistical inference as the application of classical inference procedures. As training in empirical economics came more and more to be classroom training, competence in empirical economics came more and more to mean mastery of the mechanically objective techniques taught in the econometrics classroom, a competence displayed to others by application of those techniques. Less time in the training process being spent on judgment-based procedures for interpreting statistical results meant fewer researchers using such procedures, or looking for them when evaluating the work of others.

This process, if indeed it happened, would not explain why the classical theory of inference was the particular mechanically objective method that came to dominate classroom training in econometrics; for that, I would again point to the classical theory’s link to a general and mathematically formalistic theory. But it does help to explain why the application of mechanically objective procedures came to be regarded as a necessary means of determining the reliability of a set of statistical measures and the extent to which they provided evidence for assertions about reality. This conjecture fits in with a larger possibility that I believe is worth further exploration: that is, that the changing nature of graduate education in economics might sometimes be a cause as well as a consequence of changing research practices in economics. (pp. 167-70)

Biddle’s account of the change in the economics profession’s attitude about how inferences should be drawn from data about empirical relationships corresponds strikingly to Oakeshott’s discussion, and it is depressing in its implications for the decline of expert judgment among economists, expert judgment having been replaced by mechanical and technical knowledge that can be objectively summarized in the form of rules or tests for statistical significance, itself an entirely arbitrary convention lacking any logical, or self-evident, justification.

But my point is not to condemn using rules derived from classical probability theory to assess the significance of relationships statistically estimated from historical data, but to challenge the methodological prohibition against the kinds of expert judgments that many statistically knowledgeable economists, including Nobel Prize winners like Simon Kuznets, Milton Friedman, Theodore Schultz and Gary Becker, routinely made in their empirical studies. As Biddle notes:

In 1957, Milton Friedman published his theory of the consumption function. Friedman certainly understood statistical theory and probability theory as well as anyone in the profession in the 1950s, and he used statistical theory to derive testable hypotheses from his economic model: hypotheses about the relationships between estimates of the marginal propensity to consume for different groups and from different types of data. But one will search his book almost in vain for applications of the classical methods of inference. Six years later, Friedman and Anna Schwartz published their Monetary History of the United States, a work packed with graphs and tables of statistical data, as well as numerous generalizations based on that data. But the book contains no classical hypothesis tests, no confidence intervals, no reports of statistical significance or insignificance, and only a handful of regressions. (p. 164)

Friedman’s work on the Monetary History is still regarded as authoritative. My own view is that much of the Monetary History was either wrong or misleading. But my quarrel with the Monetary History mainly pertains to the era in which the US was on the gold standard, inasmuch as Friedman simply did not understand how the gold standard worked, either in theory or in practice, as McCloskey and Zecher showed in two important papers (here and here). Also see my posts about the empirical mistakes in the Monetary History (here and here). But Friedman’s problem was bad monetary theory, not bad empirical technique.

Friedman’s theoretical misunderstandings have no relationship to the misguided prohibition against doing quantitative empirical research without obeying the arbitrary methodological requirement that statistical estimates be derived in a way that measures the statistical significance of the estimated relationships. These methodological requirements have been adopted to support a self-defeating pretense to scientific rigor, necessitating the use of relatively advanced mathematical techniques to perform quantitative empirical research. The methodological requirements for measuring statistical relationships were never actually shown to generate more accurate or reliable statistical results than those derived from the less technically advanced, but in some respects more economically sophisticated, techniques that they have almost totally displaced. It is one more example of the fallacy that there is but one technique of research that ensures the discovery of truth, a mistake that even Popper was never guilty of.

Methodological Prescriptions Go from Bad to Worse

The methodological requirement for the use of formal tests of statistical significance before any quantitative statistical estimate could be credited was a prelude, though it would be a stretch to link them causally, to another and more insidious form of methodological tyrannizing: the insistence that any macroeconomic model be derived from explicit micro-foundations based on the solution of an intertemporal-optimization exercise. Of course, the idea that such a model was in any way micro-founded was a pretense, the solution being derived only through the fiction of a single representative agent, rendering the entire optimization exercise fundamentally illegitimate and the exact opposite of a micro-founded model. Having already explained in previous posts why transforming microfoundations from a legitimate theoretical goal into a methodological necessity has taken a generation of macroeconomists down a blind alley (here, here, here, and here), I will only make the further comment that this is yet another example of the danger of elevating technique over practice and substance.

Popper’s More Important Contribution

This post has largely concurred with the negative assessment of Popper’s work registered by Lemoine. But I wish to end on a positive note, because I have learned a great deal from Popper, and even if he is overrated as a philosopher of science, he undoubtedly deserves great credit for suggesting falsifiability as the criterion by which to distinguish between science and metaphysics. Even if that criterion does not hold up, or holds up only when qualified to a greater extent than Popper admitted, Popper made a hugely important contribution by demolishing the startling claim of the Logical Positivists who in the 1920s and 1930s argued that only statements that can be empirically verified through direct or indirect observation have meaning, all other statements being meaningless or nonsensical. That position itself now seems to verge on the nonsensical. But at the time many of the world’s leading philosophers, including Ludwig Wittgenstein, no less, seemed to accept that remarkable view.

Thus, Popper’s demarcation between science and metaphysics had a two-fold significance. First, that it is not verifiability, but falsifiability, that distinguishes science from metaphysics. That’s the contribution for which Popper is usually remembered now. But it was really the other aspect of his contribution that was more significant: that even metaphysical, non-scientific, statements can be meaningful. According to the Logical Positivists, unless you are talking about something that can be empirically verified, you are talking nonsense. In other words, they were unwittingly hoisting themselves on their own petard, because their discussions about what is and what is not meaningful, being discussions about concepts, not empirically verifiable objects, were themselves – on the Positivists’ own criterion of meaning — meaningless and nonsensical.

Popper made the world safe for metaphysics, and the world is a better place as a result. Science is a wonderful enterprise, rewarding for its own sake and because it contributes to the well-being of many millions of human beings, though like many other human endeavors, it can also have unintended and unfortunate consequences. But metaphysics, because it was used as a term of abuse by the Positivists, is still, too often, used as an epithet. It shouldn’t be.

Certainly economists should aspire to tease out whatever empirical implications they can from their theories. But that doesn’t mean that an economic theory with no falsifiable implications is useless, which was the judgment on the basis of which Mark Blaug declared general equilibrium theory to be unscientific and useless, a judgment that I don’t think has stood the test of time. And even if general equilibrium theory is simply metaphysical, my response would be: so what? It could still serve as a source of inspiration and insight to us in framing other theories that may have falsifiable implications. And even if, in its current form, a theory has no empirical content, there is always the possibility that, through further discussion, critical analysis and creative thought, empirically falsifiable implications may yet become apparent.

Falsifiability is certainly a good quality for a theory to have, but even an unfalsifiable theory may be worth paying attention to and worth thinking about.

Judy Shelton Speaks Up for the Gold Standard

I have been working on a third installment in my series on how, with a huge assist from Arthur Burns, things fell apart in the 1970s. In my third installment, I will discuss the sad denouement of Burns’s misunderstandings and mistakes when Paul Volcker administered a brutal dose of tight money that, in the recession of 1981-82, caused the worst downturn and highest unemployment since the Great Depression. But having seen another one of Judy Shelton’s less than enlightening op-eds arguing for a gold standard in the formerly respectable editorial section of the Wall Street Journal, I am going to pause from my account of Volcker’s monetary policy in the early 1980s to give Dr. Shelton my undivided attention.

The opening paragraph of Dr. Shelton’s op-ed is a less than auspicious start.

Since President Trump announced his intention to nominate Herman Cain and Stephen Moore to serve on the Federal Reserve’s board of governors, mainstream commentators have made a point of dismissing anyone sympathetic to a gold standard as crankish or unqualified.

That is a totally false charge. Since Herman Cain and Stephen Moore were nominated, they have been exposed as incompetent and unqualified to serve on the Board of Governors of the world’s most important central bank. It is not support for reestablishing the gold standard that demonstrates their incompetence and lack of qualifications. It is true that most economists, myself included, oppose restoring the gold standard. It is also true that most supporters of the gold standard, like, say — to choose a name more or less at random — Ron Paul, are indeed cranks and unqualified to hold high office, but there is indeed a minority of economists, including some outstanding ones like Larry White, George Selgin, Richard Timberlake and Nobel Laureate Robert Mundell, who do favor restoring the gold standard, at least under certain conditions.

But Cain and Moore are so unqualified and so incompetent that they are incapable of doing more than mouthing platitudes about how wonderful it would be to have a dollar as good as gold by restoring some unspecified link between the dollar and gold. Because of their manifest ignorance about how a gold standard would work now or how it did work when it was in operation, they were unprepared to defend their support of a gold standard when called upon to do so by inquisitive reporters. So they just lied and denied that they had ever supported returning to the gold standard. Thus, in addition to being ignorant, incompetent and unqualified to serve on the Board of Governors of the Federal Reserve, Cain and Moore exposed their own foolishness and stupidity, because it was easy for reporters to dig up multiple statements by both aspiring central bankers explicitly calling for a gold standard to be restored, as well as muddled utterances bearing at least a vague resemblance to support for the gold standard.

So Dr. Shelton, in accusing mainstream commentators of dismissing anyone sympathetic to a gold standard as crankish or unqualified, is accusing them of a level of intolerance and closed-mindedness for which she supplies not a shred of evidence.

After making a defamatory accusation with no basis in fact, Dr. Shelton turns her attention to a strawman whom she slays mercilessly.

But it is wholly legitimate, and entirely prudent, to question the infallibility of the Federal Reserve in calibrating the money supply to the needs of the economy. No other government institution had more influence over the creation of money and credit in the lead-up to the devastating 2008 global meltdown.

Where to begin? The Federal Reserve has not been targeting the quantity of money in the economy as a policy instrument since the early 1980s, when the Fed misguidedly used the quantity of money as its policy target in its anti-inflation strategy. After acknowledging that mistake, the Fed has, ever since, eschewed attempts to conduct monetary policy by targeting any monetary aggregate. It is through the independent choices and decisions of individual agents and of many competing private banking institutions, not the dictate of the Federal Reserve, that the quantity of money in the economy at any given time is determined. Indeed, it is true that the Federal Reserve played a great role in the run-up to the 2008 financial crisis, but its mistake had nothing to do with the amount of money being created. Rather, the problem was that the Fed was setting its policy interest rate at too high a level throughout 2008 because of misplaced inflation fears, fueled by temporary increases in commodity prices, that deterred the Fed from providing the monetary stimulus needed to counter a rapidly deepening recession.

But guess who was urging the Fed to raise its interest rate in 2008 exactly when a cut in interest rates was what the economy needed? None other than the Wall Street Journal editorial page. And guess who was the lead editorial writer on the Wall Street Journal in 2008 for economic policy? None other than Stephen Moore himself. Isn’t that special?

I will forbear from discussing Dr. Shelton’s comments on the Fed’s policy of paying interest on reserves, because I actually agree with her criticism of the policy. But I do want to say a word about her discussion of currency manipulation and the supposed role of the gold standard in minimizing such currency manipulation.

The classical gold standard established an international benchmark for currency values, consistent with free-trade principles. Today’s arrangements permit governments to manipulate their currencies to gain an export advantage.

Having previously explained to Dr. Shelton that currency manipulation to gain an export advantage depends not just on the exchange rate, but the monetary policy that is associated with that exchange rate, I have to admit some disappointment that my previous efforts to instruct her don’t seem to have improved her understanding of the ABCs of currency manipulation. But I will try again. Let me just quote from my last attempt to educate her.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard.

Dr. Shelton believes that restoring a gold standard would usher in a period of economic growth like the one that followed World War II under the Bretton Woods System. Well, Dr. Shelton might want to reconsider how well the Bretton Woods system worked to the advantage of the United States.

The fact is that, as Ralph Hawtrey pointed out in his Incomes and Money, the US dollar was overvalued relative to the currencies of most of its European trading partners, which is why unemployment in the US was chronically above 5% from 1954 to 1965. With undervalued currencies, West Germany, Italy, Belgium, Britain, France and Japan all had much lower unemployment than the US. It was only after 1961, when, with John Kennedy as President, the Federal Reserve systematically loosened monetary policy, forcing Germany and other countries to revalue their currencies upward to avoid importing US inflation, that the US was able to redress the overvaluation of the dollar. But in doing so, the US also gradually rendered the $35/ounce price of gold, at which it maintained a kind of semi-convertibility of the dollar, unsustainable, leading a decade later to the final abandonment of the gold-dollar peg.

Dr. Shelton is obviously dedicated to restoring the gold standard, but she really ought to study up on how the gold standard actually worked in its previous incarnations and semi-incarnations, before she opines any further about how it might work in the future. At present, she doesn’t seem to be knowledgeable about how the gold standard worked in the past, and her confidence that it would work well in the future is entirely misplaced.

James Buchanan Calling the Kettle Black

In the wake of the tragic death of Alan Krueger, attention has been drawn to an implicitly defamatory statement by James Buchanan about those who, like Krueger, dared question the orthodox position taken by most economists that minimum-wage laws increase unemployment among low-wage, low-skilled workers whose productivity, at the margin, is less than the minimum wage that employers are required to pay employees.

Here is Buchanan’s statement:

The inverse relationship between quantity demanded and price is the core proposition in economic science, which embodies the presupposition that human choice behavior is sufficiently rational to allow predictions to be made. Just as no physicist would claim that “water runs uphill,” no self-respecting economist would claim that increases in the minimum wage increase employment. Such a claim, if seriously advanced, becomes equivalent to a denial that there is even minimal scientific content in economics, and that, in consequence, economists can do nothing but write as advocates for ideological interests. Fortunately, only a handful of economists are willing to throw over the teachings of two centuries; we have not yet become a bevy of camp-following whores.

Wholly apart from its odious metaphorical characterization of those he was criticizing, Buchanan’s assertion was substantively problematic in two respects. The first, which is straightforward and well-known, and which Buchanan was obviously wrong not to acknowledge, is that there are obvious circumstances in which a minimum-wage law could simultaneously raise wages and reduce unemployment without contradicting the inverse relationship between quantity demanded and price. Such circumstances obtain whenever employers exercise monopsony power in the market for unskilled labor. If employers realize that hiring additional low-skilled workers drives up the wage paid to all the low-skilled workers that they employ, not just the additional ones hired, the wage paid by employers will be less than the value of the marginal product of labor. If employers exercise monopsony power, then a divergence between the wage and the marginal product is not a violation, but an implication, of the inverse relationship between quantity demanded and price. If Buchanan had written on his price-theory preliminary exam for a Ph.D. at Chicago that support for a minimum wage could be rationalized only by denying the inverse relationship between quantity demanded and price, he would have been flunked.
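To make the monopsony case concrete, here is a minimal numerical sketch — my own illustration with made-up parameters, not anything drawn from Buchanan or from Krueger’s work — of a textbook monopsonist facing a linear labor-supply curve and a declining marginal revenue product. In the example, a minimum wage set between the monopsony wage and the competitive wage raises both the wage and employment, which is exactly the possibility Buchanan’s statement ruled out.

```python
# Hypothetical linear monopsony example (all numbers are illustrative).
# Inverse labor supply: w(L) = a + b*L  ->  marginal cost of labor = a + 2*b*L
# Marginal revenue product of labor:    MRP(L) = c - d*L

a, b = 5.0, 0.1     # labor-supply intercept and slope
c, d = 25.0, 0.1    # MRP intercept and slope

# Unregulated monopsony: hire where the marginal cost of labor equals MRP.
L_monopsony = (c - a) / (2 * b + d)      # about 66.7 workers
w_monopsony = a + b * L_monopsony        # about 11.67, well below MRP of about 18.33

# Competitive benchmark: wage = MRP = supply price.
L_competitive = (c - a) / (b + d)        # 100 workers
w_competitive = a + b * L_competitive    # 15.0

# A minimum wage set between the monopsony wage and the competitive wage.
w_min = 13.0
L_supplied_at_min = (w_min - a) / b      # 80 workers willing to work at w_min
L_demanded_at_min = (c - w_min) / d      # 120 workers worth hiring at w_min
L_with_min = min(L_supplied_at_min, L_demanded_at_min)   # 80 > 66.7

print(f"Monopsony:   L = {L_monopsony:.1f}, w = {w_monopsony:.2f}")
print(f"Competitive: L = {L_competitive:.1f}, w = {w_competitive:.2f}")
print(f"Min wage {w_min}: L = {L_with_min:.1f}  (both wage and employment rise)")
```

Nothing in the sketch depends on denying the inverse relationship between quantity demanded and price; the employer is simply moving along its marginal-revenue-product schedule once the minimum wage flattens the labor-supply curve it faces.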

The second problem with Buchanan’s position is less straightforward and less well-known, but more important, than the first. The inverse relationship by which Buchanan set such great store is valid only if qualified by a ceteris paribus condition. Demand is a function of many variables of which price is only one. So the inverse relationship between price and quantity demanded is premised on the assumption that all the other variables affecting demand are held (at least approximately) constant.

Now it’s true that even the law of gravity is subject to a ceteris paribus condition; the law of gravity will not, by itself, control the movement of a magnetized object in a magnetic field. And it would be absurd to call a physicist an advocate for ideological interests just because he recognized that possibility.

Of course, the presence or absence of a magnetic field is a circumstance that can be easily ascertained, thereby enabling a physicist to alter his prediction of the movement of an object according as the relevant field for predicting the motion of the object under consideration is gravitational or magnetic. But the magnitude and relevance of other factors affecting demand are not so easily taken into account by economists. That’s why applied economists try to focus on markets in which the effects of “other factors” are small or on markets in which “other factors” can easily be identified and measured or treated qualitatively as fixed effects.

But in some markets the factors affecting demand are themselves interrelated, so that the ceteris paribus assumption can’t be maintained. Such markets can’t be analyzed in isolation; they can only be analyzed as a system in which all the variables are jointly determined. Economists call the analysis of an isolated market partial-equilibrium analysis. And it is partial-equilibrium analysis that constitutes the core of price theory and microeconomics. The ceteris paribus assumption has to be maintained either by assuming that changes in the variables other than price affecting demand and supply are inconsequential or by identifying other variables whose changes could affect demand and supply and either measuring them quantitatively or at least accounting for them qualitatively.

But labor markets, except at a granular level, when the focus is on an isolated region or a specialized occupation, cannot be modeled usefully with the standard partial-equilibrium techniques of price theory, because income effects and interactions between related markets cannot appropriately be excluded from the partial-equilibrium analysis of supply and demand in a broadly defined market for labor. The determination of the equilibrium price in a market that encompasses a substantial share of economic activity cannot be isolated from the determination of the equilibrium prices in other markets.

Moreover, the idea that the equilibration of any labor market can be understood within a partial-equilibrium framework in which the wage responds to excess demands for, or excess supplies of, labor just as the price of a standardized commodity adjusts to excess demands for, or excess supplies of, that commodity, reflects a gross misunderstanding of the incentives of employers and workers in reaching wage bargains for the differentiated services provided by individual workers. Those incentives are in no way comparable to the incentives of businesses to adjust the prices of their products in response to excess supplies of or excess demands for those products.

Buchanan was implicitly applying an inappropriate paradigm of price adjustment in a single market to the analysis of how wages adjust in the real world. The truth is we don’t have a good understanding of how wages adjust, and so we don’t have a good understanding of the effects of minimum wages. But in arrogantly and insultingly dismissing Krueger’s empirical research on the effects of minimum wage laws, Buchanan was unwittingly exposing not Krueger’s ideological advocacy but his own.

Was There a Blue Wave?

In the 2018 midterm elections two weeks ago, on November 6, Democrats gained about 38 seats in the House of Representatives, with results for a few seats still incomplete. Polls and special elections for vacancies in the House and Senate and state legislatures had indicated that a swing toward the Democrats was likely, raising hopes among Democrats that a blue wave would sweep Democrats into control of the House of Representatives and possibly, despite an unfavorable election map with many more Democratic than Republican Senate seats at stake, even the Senate.

On election night, when results in the Florida Senate and Governor races suddenly swung toward the Republicans, the high hopes for a blue wave began to ebb, especially as results from Indiana, Missouri, and North Dakota showed Democratic incumbent Senators trailing by substantial margins. Other results seemed like a mixed bag, with some Democratic gains, but hardly providing clear signs of a blue wave. The mood was not lifted when the incumbent Democratic Senator from Montana fell behind his Republican challenger, Ted Cruz seemed to be maintaining a slim lead over his charismatic opponent Beto O’Rourke, and the Republican candidate for the open Senate seat held by the retiring Jeff Flake of Arizona was leading the Democratic candidate.

As the night wore on, although it seemed that the Democrats would gain a majority in the House of Representatives, estimates of the number of seats gained were only in the high twenties or low thirties, while it appeared that Republicans might gain as many as five Senate seats. President Trump was able to claim, almost credibly, the next morning at his White House news conference that the election results had been an almost total victory for himself and his party.

It was not till later the next day that it became clear that the Democratic gains in the House would not be just barely enough (23) to gain a majority in the House but would likely be closer to 40 than to 30. The apparent loss of the Montana seat was reversed by late results, and the delayed results from Nevada showed that a Democrat had defeated the Republican incumbent, while the Democratic candidate in Arizona had substantially cut into the lead built up by the Republican candidate, with most of the uncounted votes in Democratic strongholds. Instead of winning 56 Senate seats, a pickup of 5, as seemed likely on Tuesday night, the Republicans’ gains were cut to no more than 2, and the apparent defeat of an incumbent in the Florida election was thrown into doubt, as late returns showed a steadily shrinking Republican margin, sending Republicans into an almost hysterical panic at the prospect of gaining no more than one seat rather than the five they had been expecting on Tuesday night.

So, within a day or two after the election, the narrative of a Democratic wave began to reemerge. Many commentators accepted the narrative of a covert Democratic wave, but others disagreed. For example, Sean Trende at Real Clear Politics argues that there really wasn’t a Blue Wave, even though Democratic House gains of nearly 40 seats, taken in isolation, might qualify for that designation. Trende thinks the Democratic losses in the Senate, though not as large as they seemed originally, are inconsistent with a wave election as were Democratic gains in governorships and state legislatures.

However, a pickup of seven governorships, while not spectacular, is hardly to be sneezed at, and Democratic gains in state legislative seats would have been substantially greater than they were had it not been for extremely effective gerrymandering that kept Democratic gains in state legislatures well below the Democrats’ share of the vote, even though the effect of gerrymandering on races for the House was fairly minimal. So I think that the best measure of the wave-like character of the 2018 elections is provided by the results for the House of Representatives.

Now the problem with judging whether the House results were a wave or were not a wave is that midterm election results are sensitive to economic conditions, so before you can compare results you need to adjust for how well or poorly the economy was performing. You also need to adjust for how many seats the President’s party has going into the election. The more seats the President’s Party has to defend, the greater its potential loss in the election.

To test this idea, I estimated a simple regression model with the change in the number of seats held by the President’s party in the midterm election as the dependent variable, and with the number of seats held by the President’s party before the election as one independent variable and the ratio of real GDP in the year of the midterm election to real GDP in the year of the previous Presidential election as the other independent variable. One would expect the President’s party to perform better in the midterm elections the higher the ratio of real GDP in the midterm year to real GDP in the year of the previous Presidential election.

My regression equation is thus ΔSeats = C + aSeats + bRGDPratio + ε,

where ΔSeats is the change in the number of seats held by the President’s party after the midterm election, Seats is the number of seats held before the midterm, RGDPratio is the ratio of real GDP in the midterm election year to the real GDP in the previous Presidential election year, C is a constant reflecting the average change in the number of seats of the President’s party in the midterm elections, and a and b are the coefficients reflecting the marginal effect of a change in the corresponding independent variables on the dependent variable, with the other independent variable held constant.

I estimated this equation using data in the 18 midterm elections from 1946 through 2014. The estimated regression equation was the following:

ΔSeats = 24.63 – .26Seats + 184.48RGDPratio

The t values for Seats and RGDPratio are both slightly greater than 2 in absolute value, indicating that they are statistically significant at the 10% level and nearly significant at the 5% level. But given the small number of observations, I wouldn’t put much store on the significance levels except as an indication of plausibility. The assumption that Seats is linearly related to ΔSeats doesn’t seem right, but I haven’t tried alternative specifications. The R-squared and adjusted R-squared statistics are .31 and .22, which seem pretty high.
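For readers who want to see the mechanics, here is a minimal sketch of how a regression of this form might be estimated and used for prediction in Python with statsmodels. The data arrays below are placeholders that I made up purely to make the snippet runnable; the actual exercise would use the observed seat counts and real-GDP ratios for the 18 midterms from 1946 through 2014, and nothing in the snippet reproduces the coefficient estimates reported above.

```python
# Sketch of the midterm regression: DeltaSeats = C + a*Seats + b*RGDPratio + e
# (illustrative placeholder data only; not the actual historical observations)
import numpy as np
import statsmodels.api as sm

seats       = np.array([242, 263, 232, 201, 255, 292, 243, 176])            # seats held before the midterm (placeholders)
rgdp_ratio  = np.array([1.04, 1.07, 1.02, 1.06, 1.05, 1.03, 1.08, 1.01])    # midterm-year GDP / prior election-year GDP (placeholders)
delta_seats = np.array([-45, -29, -4, -48, -15, -26, 8, -30])               # change in seats held (placeholders)

# Design matrix columns: constant, Seats, RGDPratio.
X = sm.add_constant(np.column_stack([seats, rgdp_ratio]))
fit = sm.OLS(delta_seats, X).fit()

print(fit.params)                      # estimated C, a, b
print(fit.tvalues)                     # t statistics
print(fit.rsquared, fit.rsquared_adj)  # fit statistics

# Predicted seat change for a hypothetical midterm with 241 seats held
# and a real-GDP ratio of 1.05 (column order matches X above).
x_new = np.array([[1.0, 241.0, 1.05]])
print(fit.predict(x_new))
```

The snippet is only meant to show the structure of the estimation and prediction steps; with so few observations, the reported significance levels should, as noted above, be read as rough indications of plausibility.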

At any rate, when I plotted the predicted changes in the number of seats against the actual changes for the elections from 1946 to 2018, I came up with the following chart:

[Chart: actual vs. predicted midterm seat changes for the President’s party, 1946-2018]

The blue line in the chart represents the actual number of seats gained or lost in each midterm election since 1946, and the orange line represents the change in the number of seats predicted by the model. One can see that the President’s party did substantially better than expected in the 1962, 1978, 1998, and 2002 elections, while it did substantially worse than expected in the 1958, 1966, 1974, 1994, 2006, 2010 and 2018 elections.

In 2018, the Democrats gained approximately 38 seats compared to the 22 seats the model predicted, so the Democrats overperformed by about 16 seats. In 2010, the Republicans gained 63 seats compared to a predicted gain of 35. In 2006, the Democrats gained 32 seats compared to a predicted gain of 22. In 1994, Republicans gained 54 seats compared to a predicted gain of 26 seats. In 1974, Democrats gained 48 seats compared to a predicted gain of 20 seats. In 1966, Republicans gained 47 seats compared to a predicted gain of 26 seats. And in 1958, Democrats gained 48 seats compared to a predicted gain of 20 seats.

So the Democrats in 2018 did not over-perform as much as they did in 1958 and 1974, or as much as the Republicans did in 1966, 1994, and 2010. But the Democrats overperformed by more in 2018 than they did in 2006 when Mrs. Pelosi became Speaker of the House the first time, and actually came close to the Republicans’ overperformance of 1966. So, my tentative conclusion is yes, there was a blue wave in 2018, but it was a light blue wave.

 

More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

In that post, I made a few preliminary points about wage stickiness before getting to my main point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So, the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place when workers have jobs, while search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are offering and mistakenly turn down job offers that they later regret not having accepted, which is a very skewed way of interpreting what happens in recessions; it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And requiring a workforce of many workers, the employer cannot generally avoid the morale-damaging effects of a wage cut on his workforce by replacing current workers with another set of workers at a lower wage than the old workers were getting.
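To make the incentive calculation of the first, capacity-constrained type of firm concrete, here is a minimal numerical sketch. The cost, the competitors’ price, and the probability-of-order function are purely hypothetical illustrations, not anything estimated or asserted in the post.

def expected_profit(price, cost, prob_of_order):
    # With at most one unit of capacity, expected profit is the probability
    # of landing the single order times the margin on that unit.
    return prob_of_order * (price - cost)

cost = 80.0
competitor_price = 100.0

def prob_of_order(price):
    # Hypothetical: matching the competitors' price gives a 60% chance of an
    # order, and each 1% cut below their price adds 2 percentage points.
    discount_pct = max(0.0, (competitor_price - price) / competitor_price * 100)
    return min(1.0, 0.6 + 0.02 * discount_pct)

for price in (100.0, 97.0, 94.0, 91.0):
    p = prob_of_order(price)
    print(f"price {price:5.1f}: P(order) = {p:.2f}, "
          f"expected profit = {expected_profit(price, cost, p):.2f}")

With these illustrative numbers, no price cut raises the probability of landing the single order enough to offset the thinner margin, which is exactly why the incentive of a type-one firm to cut price is so weak.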

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, one that has since come to seem increasingly significant to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions about whether to accept a wage cut, would it be rational for you or any of them to accept a wage cut? Whether it would or wouldn’t might depend on what each worker was expecting other workers to do. But given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer to accept a wage cut? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would offering to accept a wage cut that no one else was offering to accept make the worker willing to work for less appear more desirable to the employer than the others who wouldn’t accept a wage cut? One reaction by the employer might be: what’s this guy’s problem?
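The coordination problem just described can be put in the form of a simple threshold model. In the hypothetical sketch below, the employer maintains output, and cancels its planned layoffs, only if at least some critical share of the workforce accepts a wage cut; the threshold, workforce size, and layoff rate are made-up numbers used only for illustration.

def layoff_probability(share_accepting, threshold=0.8, planned_layoff_rate=0.25):
    # Hypothetical rule: if at least `threshold` of the workforce accepts the
    # wage cut, the employer maintains output and lays no one off; otherwise
    # the planned layoffs proceed regardless of any individual's offer.
    return 0.0 if share_accepting >= threshold else planned_layoff_rate

workforce = 200
for others_accepting in (0, 20, 100, 159):
    # The individual's layoff risk if he adds his own acceptance, versus if
    # he refuses, holding everyone else's choice fixed.
    p_accept = layoff_probability((others_accepting + 1) / workforce)
    p_refuse = layoff_probability(others_accepting / workforce)
    print(f"{others_accepting:3d} others accept: "
          f"P(layoff | accept) = {p_accept:.2f}, "
          f"P(layoff | refuse) = {p_refuse:.2f}")

Except in the knife-edge case in which one worker’s acceptance is exactly pivotal, an individual’s offer to accept a cut leaves his probability of being laid off unchanged, so no worker acting independently has an incentive to make the offer.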

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I am now inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because an overhang of unsold labor is flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire, but the situation will not change until wage expectations change. And wage expectations do not change because an excess demand for workers creates any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers decided to immediately offer to accept sufficiently large wage cuts, some employers might forego plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would be contingent on a level of coordination between workers and employers and a convergence of expectations about future outcomes that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money-wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money-wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities . . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct. But I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly – though too often without recognizing its limiting implications – assumes that prices and incomes in other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption can be justified are themselves not unproblematic, but I am assuming here that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have a sound instinct for when it is OK to make that assumption and when it is not.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand curves, cost curves, and supply curves end up where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that shift the very demand, cost and supply curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, they can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space as intersections of stable demand curves with stable supply curves.
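The point about open borders between markets can be illustrated with a toy system of two linear markets linked by cross-price effects in demand. The coefficients below are entirely hypothetical; the sketch only shows that, once spillovers matter, equilibrium prices must be found by solving the markets’ equations simultaneously rather than one market at a time.

import numpy as np

# Demand:  q1 = 100 - 2.0*p1 + 0.8*p2      Supply:  q1 = 3.0*p1
#          q2 = 120 - 2.5*p2 + 0.8*p1               q2 = 2.0*p2
# Setting demand equal to supply in each market gives two linear equations
# in the two prices:
#    5.0*p1 - 0.8*p2 = 100
#   -0.8*p1 + 4.5*p2 = 120
A = np.array([[5.0, -0.8],
              [-0.8, 4.5]])
b = np.array([100.0, 120.0])
p1, p2 = np.linalg.solve(A, b)
print(f"simultaneous solution: p1 = {p1:.2f}, p2 = {p2:.2f}")

# The partial-equilibrium shortcut clears market 1 alone, holding p2 fixed
# at some prior value; the answer misses the feedback from market 2.
p2_held = 25.0
p1_partial = (100 + 0.8 * p2_held) / 5.0
print(f"market 1 cleared alone (p2 held at {p2_held}): p1 = {p1_partial:.2f}")

The larger the cross-price spillovers, the further the partial-equilibrium shortcut strays from the simultaneous solution, which is the formal counterpart of the claim that partial-equilibrium analysis is legitimate only when the market being analyzed is small relative to the rest of the economy.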

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the assumption underlying such curves, that all other prices are held constant, cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another point on the curve is simply untenable, because not only is the assumption that other variables are being held constant untenable and self-contradictory, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy always starts from, and remains in, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all markets but one are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.

Only Idiots Think that Judges Are Umpires and Only Cads Say that They Think So

It now seems beside the point, but I want to go back and consider something Judge Kavanaugh said in his initial testimony three weeks ago before the Senate Judiciary Committee, now largely, and deservedly, forgotten.

In his earlier testimony, Judge Kavanaugh made the following ludicrous statement, echoing a similar statement by (God help us) Chief Justice Roberts at his confirmation hearing before the Senate Judiciary Committee:

A good judge must be an umpire, a neutral and impartial arbiter who favors no litigant or policy. As Justice Kennedy explained in Texas versus Johnson, one of his greatest opinions, judges do not make decisions to reach a preferred result. Judges make decisions because “the law and the Constitution, as we see them, compel the result.”

I don’t decide cases based on personal or policy preferences.

Kavanaugh’s former law professor Akhil Amar offered an embarrassingly feeble defense of Kavanaugh’s laughable comparison. To put the most generous possible gloss on it, Amar’s defense was a touching gesture of loyalty to a former student; but it was also a deeply inappropriate defense of an indefensible trivialization of what judging is all about.

According to the Chief Justice and to Judge Kavanaugh, judges, like umpires, are there to call balls and strikes. An umpire calls balls and strikes with no concern for the consequences of calling a ball or a strike on the outcome of the game. Think about it: do judges reach decisions about cases, make their rulings, write their opinions, with no concern for the consequences of their decisions?

Umpires make their calls based on split-second responses to their visual perceptions of what happens in front of their eyes, with no reflection on what implications their decisions have for anyone else, or the expectations held by the players whom they are watching. Think about it: would you want a judge to decide a case without considering the effects of his decision on the litigants and on the society at large?

Umpires make their decisions without hearing arguments from the players before rendering their decisions. Players, coaches, managers, or their spokesmen do not submit written briefs, or make oral arguments, to umpires in an effort to explain to umpires why justice requires that a decision be rendered in their favor. Umpires don’t study briefs or do research on decisions rendered by earlier umpires in previous contests. Think about it: would you want a judge to decide a case within the time that an umpire takes to call balls and strikes and do so with no input from the litigants?

Umpires never write opinions in which they explain (or at least try to explain) why their decisions are right and just after having taken into account all the arguments advanced by the opposing sides and any other relevant considerations that might properly be taken into account in reaching a decision. Think about it: would you want a judge to decide a case without having to write an opinion explaining why his or her decision is the right and just one?

Umpires call balls and strikes instinctively, unreflectively, and without hesitation. But to judge means to think, to reflect, to consider both (or all) sides, to consider the consequences of the decision for the litigants and for society, and for future judges in future cases who will be guided by the decision being rendered in the case at hand. Judging — especially appellate judging — is a deeply intellectual and reflective vocation requiring knowledge, erudition, insight, wisdom, temperament, and, quite often, empathy and creativity.

To reduce this venerable vocation to the mere calling of balls and strikes is deeply dishonorable, and, coming from a judge who presumes to be worthy of sitting on the highest court in the land, supremely offensive.

What could possibly possess a judge — and a judge presumably neither an idiot nor so lacking in self-awareness as not to understand what he is actually doing — to engage in such obvious sophistry? The answer, I think, is that it has come to be in the obvious political and ideological self-interest of many lawyers and judges to deliberately adopt the pretense that judging is — or should be — a mechanical activity that can be reduced to simply looking up and following already existing rules that have already been written down somewhere, and that applying those rules requires nothing more than knowing how to read them properly. That idea can be summed up in two eight-letter words, one of which is nonsense, and those who knowingly propagate it are just, well, dare I say it, deplorable.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
