
What’s Wrong with Econ 101?

Josh Hendrickson responded recently to criticisms of Econ 101 made by Noah Smith and Mark Thoma. Mark Thoma thinks that Econ 101 has a conservative bias, presumably because Econ 101 teaches students that markets equilibrate supply and demand and allocate resources to their highest valued use and that sort of thing. If markets are so wonderful, then shouldn’t we keep hands off the market and let things take care of themselves? Noah Smith is especially upset that Econ 101, slighting the ambiguous evidence about whether minimum-wage laws actually do increase unemployment, is too focused on theory and pays too little attention to empirical techniques.

I sympathize with Josh’s defense of Econ 101, and I think he makes a good point that there is nothing in Econ 101 that quantifies the effect on unemployment of minimum-wage legislation, so that the disconnect between theory and evidence isn’t as stark as Noah suggests. Josh also emphasizes, properly, that whatever the effect of an increase in the minimum wage implied by economic theory, that implication by itself can’t tell us whether the minimum wage should be raised. An ought statement can’t be derived from an is statement. Philosophers are not as uniformly in agreement about the positive-normative distinction as they used to be, but I am old-fashioned enough to think that it’s still valid. If there is a conservative bias in Econ 101, the problem is not Econ 101; the problem is bad teaching.

Having said all that, however, I don’t think that Josh’s defense addresses the real problems with Econ 101. Noah Smith’s complaints about the implied opposition of Econ 101 to minimum-wage legislation and Mark Thoma’s about the conservative bias of Econ 101 are symptoms of a deeper problem with Econ 101, a problem inherent in the current state of economic theory, and unlikely to go away any time soon.

The deeper problem that I think underlies much of the criticism of Econ 101 is the fragility of its essential propositions. These propositions, what Paul Samuelson misguidedly called “meaningful theorems,” are deducible from the basic postulates of utility maximization and wealth maximization by applying the method of comparative statics. Not only are the propositions based on questionable psychological assumptions; the comparative-statics method imposes further restrictive assumptions designed to isolate a single purely theoretical relationship. These assumptions aren’t just the kind of simplifications necessary for the theoretical models of any empirical science to be applicable to the real world; they subvert the powerful logic used to derive the implications. It’s not just that the assumptions may not be fully consistent with the conditions actually observed; the meaningful theorems themselves are highly sensitive to the assumptions of the model.

The bread and butter of Econ 101 is the microeconomic theory of market adjustment, in which price and quantity adjust to equilibrate what consumers demand with what suppliers produce. This is the partial-equilibrium analysis derived from Alfred Marshall, and gradually perfected in the 1920s and 1930s, after Marshall’s death, with the development of the theories of the firm and of perfect and imperfect competition. As I have pointed out before in a number of posts (e.g., here and here), just as macroeconomics depends on microfoundations, microeconomics depends on macrofoundations. All partial-equilibrium analysis relies on the (usually implicit) assumption that all markets but the single market under analysis are in equilibrium. Without that assumption, it is logically impossible to derive any of Samuelson’s meaningful theorems, and the logical necessity of microeconomics is severely compromised.

The underlying idea is very simple. Samuelson’s meaningful theorems are meant to isolate the effect of a change in a single parameter on a particular endogenous variable in an economic system. The only way to isolate the effect of the parameter on the variable is to start from an equilibrium state in which the system is, as it were, at rest. A small (aka infinitesimal) change in the parameter induces an adjustment in the equilibrium, and a comparison of the small change in the variable of interest between the new equilibrium and the old equilibrium relative to the parameter change identifies the underlying relationship between the variable and the parameter, all else being held constant. If the analysis did not start from equilibrium, the pure effect of the parameter change on the variable of interest could not be isolated, because the variable would also be changing for reasons having nothing to do with the parameter change.
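To make the method concrete, here is a schematic statement of the comparative-statics logic in my own notation (not Samuelson’s). Suppose equilibrium in the market of interest is characterized by a condition $F(x, \theta) = 0$ linking the endogenous variable $x$ to the parameter $\theta$. Then

$$
F\bigl(x^{*}(\theta), \theta\bigr) = 0 \quad\Longrightarrow\quad \frac{dx^{*}}{d\theta} = -\,\frac{\partial F/\partial \theta}{\partial F/\partial x},
$$

with the derivative evaluated at the initial equilibrium, and with everything not appearing in $F$ held fixed at its own equilibrium value. The sign of $\partial F/\partial x$, on which the sign of the meaningful theorem depends, is typically pinned down only by an appeal to the local stability of the equilibrium (Samuelson’s correspondence principle).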

Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium (the usual assumption of a unique equilibrium being made to ensure tractability rather than deduced from any plausible premises) or simply veer off on some explosive or indeterminate path.

Even aside from all these restrictive assumptions, the standard partial-equilibrium analysis is restricted to markets that can be assumed to be very small relative to the entire system. For small markets, it is safe to assume that changes in the single market under analysis will have effects on all the other markets in the economy small enough that the feedback from those induced effects onto the market of interest is negligible.

But the partial-equilibrium method surely breaks down when the market under analysis is large relative to the entire economy, like, shall we say, the market for labor. The feedback effects are simply too strong for the small-market assumptions underlying the partial-equilibrium analysis to be satisfied by the labor market. And even aside from the size issue, the essence of the partial-equilibrium method is the assumption that all markets other than the market under analysis are in equilibrium; the very premise that the labor market is not in equilibrium renders that assumption untenable. I would suggest that the proper way to think about what Keynes was trying, not necessarily successfully, to do in the General Theory when discussing nominal wage cuts as a way to reduce unemployment is to view that discussion as a critique of using the partial-equilibrium method to analyze a state of general unemployment, as opposed to a situation in which unemployment is confined to a particular occupation or a particular geographic area.

So the question naturally arises: If the logical basis of Econ 101 is as flimsy as I have been suggesting, should we stop teaching Econ 101? My answer is an emphatic, but qualified, no. Econ 101 is the distillation of almost a century and a half of rigorous thought about how to analyze human behavior. What we have come up with so far is very imperfect, but it is still the most effective tool we have for systematically thinking about human conduct and its consequences, especially its unintended consequences. But we should be more forthright about its limitations and the nature of the assumptions that underlie the analysis. We should also be more aware of the logical gaps between the theory – Samuelson’s meaningful theorems — and the applications of the theory.

In fact, many meaningful theorems are consistently corroborated by statistical tests, presumably because observations by and large occur when the economy operates in the neighborhood of a general equilibrium and feedback effects are small, so that the extraneous forces – other than those derived from theory – impinge on actual observations more or less randomly, and thus don’t significantly distort the predicted relationship. And undoubtedly there are also cases in which the random effects overwhelm the theoretically identified relationships, preventing the relationships from being identified statistically, at least when the number of observations is relatively small, as is usually the case with economic data. But we should also acknowledge that the theoretically predicted relationships may simply not hold in the real world, because the extreme conditions required for the predicted partial-equilibrium relationships to hold – near-equilibrium conditions and the absence of feedback effects – may often not be satisfied.

What’s So Bad about the Trade Deficit?

The ravings of a certain candidate for President — in case you have been living under a rock, I mean the one that has been having a bad hair decade — about the evils of the US trade deficits with China, Mexico, Japan, and assorted other countries reminded me of a column I wrote for the New York Times in another election year, 1984 to be exact. So I went back and looked for it, and it seemed sadly still to be relevant, so I will reproduce it here. Not that I expect it to contribute much to general enlightenment, but, at times like these, one feels that one just has to do something to resist the madness.

The Much Maligned Trade Gap

No economic statistic is reported more dolefully these days than the country’s trade balance.

Ever on the alert for signs of impending economic disaster, the press routinely couples reports of record monthly trade deficits with warnings from experts and Government officials about the dangers of the deficit.

Just what is so dangerous about receiving more goods from foreigners than we give them back is never actually explained, but it is often suggested that it causes a loss of American jobs.

News reports sometimes even provide estimates of the number of jobs lost owing to every billion dollar increase in the trade deficit. Heaven only knows how these estimates are made, but presumably they are based on the assumption that imports deprive Americans of jobs they could have had producing domestic substitutes for the imports.

It almost seems tedious to do so, but it apparently still needs to be pointed out that buying less from foreigners means that they will buy less from us for the simple reason that they will have fewer dollars with which to purchase our products.

Thus, even if reducing imports increases employment in industries that compete with imports, it must also reduce employment in export industries.

Moreover, the notion that the trade deficit destroys domestic jobs is contradicted by the tendency of the deficit to increase during economic expansions and to decrease during contractions.

The demand for imports rises with income, so imports normally tend to rise faster than exports when a country expands more rapidly than its trading partners. The trade deficit is a symptom of rising employment — not the cause of rising unemployment.

That balance-of-trade figures are misunderstood and misused is not surprising, since their function has never been to inform or to enlighten. Their real purpose is to provide spurious statistical and pseudo-scientific support to groups seeking protectionist legislation. These groups try to cloak their appeals to protection with an invocation of the general interest in a favorable balance of trade.

Anyone who has ever thought about it has probably wondered why a country that gives up more goods in trade than it gets back is said to have a favorable balance of trade.

If you have ever wondered about it and couldn’t think of an answer, don’t worry, because you are in good company. Adam Smith couldn’t either. “Nothing,” Smith once observed, “can be more absurd than this whole doctrine of the balance of trade, upon which . . . almost all the . . . regulations of commerce are founded.”

The absurdity of the doctrine ought now to be manifest owing to the current international debt crisis. The crisis, as we all know, arose because large numbers of developing countries are apparently unable to make the scheduled payments on the loans they took out from American banks.

It is, I believe, just about universally acknowledged that it would be a bad thing if the debtor countries failed to repay their loans.

The debtor countries would suffer because they would be less able to borrow in the future, and thus less able to import the products they need to take care of their populations and to promote development.

Creditor countries would  also suffer because default would impose huge losses on the banks and their shareholders. And since such losses might undermine the domestic and international banking systems, they would undoubtedly be made up, at least in part, by the Government and the taxpayers.

Yet it is remarkable how little, even now, the relationship between the ability of the debtor countries to repay their debts and the size of the American trade deficit is understood. For everyone continues to rail against the trade deficit even though reducing it would make the default of the debtor countries all the more likely.

A simple example will help to explain why that is so.

Suppose I borrow money from you and promise to repay you next year. And, for simplicity, suppose that neither of us engages in transactions with third parties. Thus, I produce goods, some of which I consume myself and the rest of which I sell to you, and you produce goods, some of which you consume and the rest of which you sell to me.

Now the reason that I am borrowing from you is that the value of the goods I want from you this year exceeds the value of what I am willing to sell you this year. But next year I shall have to sell you enough not only to cover what I buy from you but also to earn the money with which to repay you.

Thus, to avoid default, I must run a trade surplus with you next year. And if you want to be repaid, you have to reconcile yourself to the idea of running a trade deficit, because repayment consists in, and is equivalent to, my trade surplus and your trade deficit.

The debtor nations are faced with default because they haven’t enough dollars to repay the banks from whom they have borrowed. Why not? Because their trade surplus with the United States — our trade deficit with them — is too small for them to earn the dollars they need for repayment.

How could they earn more dollars?

1. They could reduce their imports from us.
2. They could increase their exports to us.
3. They could borrow more dollars from us.
4. We could give them the dollars.

The first two options both imply an increase in our trade deficit. That sounds bad only if you ignore the alternatives. The third option might have some attraction if the debtor countries could repay their existing debts. But since they can’t even do that, further lending seems inadvisable.

The fourth option, it goes without saying, is the economic equivalent of default.

Those who insist that the United States trade deficit must be reduced had better think through the implications. They should ask themselves whether they really want to drive the debtor countries to the wall and whether they are prepared to absorb the losses associated with a default by the debtor countries just to stop American consumers from buying as much of the products of debtor countries as they want.

Allowing unrestricted access of those products into our markets would not necessarily prevent default, but maintaining or tightening restrictions can only increase the likelihood and the severity of an eventual default.

And at a more fundamental level, isn’t there something perverse in first lending to someone, and then, after having refused to accept payment, hauling him into court because he won’t pay his debts?

Which brings to mind the following thought: the very same candidate who is promising to convert the US trade deficit into a trade surplus is also the candidate who wants America’s European and Asian allies to increase their payments to the US to cover the costs incurred by the US in defending them. How does this candidate suppose that these allies can increase their dollar payments to the US while the US trade deficit shrinks, without either exporting more to, or importing less from, the US? Or perhaps the candidate believes that these allies can just print the money with which to purchase the dollars from the US, and then pay us back with the dollars they acquired with the money they printed. But that would mean that their currencies would depreciate against the dollar. And this candidate doesn’t like it when other countries devalue their currencies against the dollar; the candidate calls that currency manipulation. My oh my, what a tangled web we weave . . .
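To make the accounting constraint behind that question explicit, here is a stylized balance-of-payments identity in my own notation (abstracting from third countries and from exchange-rate changes), not something spelled out above. For the allies taken together,

$$
\underbrace{M_{US} - X_{US}}_{\text{US trade deficit with the allies}} \;=\; \underbrace{T}_{\text{allies' net dollar payments to the US}} \;+\; \underbrace{\Delta A}_{\text{allies' net accumulation of dollar assets}},
$$

since every dollar the allies earn by selling to us must either be spent buying from us, be paid over to us, or be added to their holdings of dollar assets. If $T$ is to rise while the trade deficit on the left shrinks, then $\Delta A$ must turn negative: the allies would have to run down, or borrow against, their dollar assets, which is just another way of stating the dilemma posed in the paragraph above.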

P. H. Wicksteed, the Coase Theorem, and the Real Cost Fallacy

I am now busy writing a paper with my colleague Paul Zimmerman, documenting a claim that I made just over four years ago that P. H. Wicksteed discovered the Coase Theorem. The paper is due to be presented at the History of Economics Society Conference next month at Duke University. At some point soon after the paper is written, I plan to post it on SSRN.

Briefly, the point of the paper is Wicksteed’s argument that there is no such thing as a supply curve, in the sense that the supply curve of a commodity in fixed supply is just the reverse of a certain section of the demand curve, the section depending on how the given stock of the commodity is initially distributed among market participants. However the initial stock is distributed, the final price and the final allocation of the commodity are determined by the preferences of the market participants reflected in their individual demands for the commodity. But this is exactly the reasoning underlying the Coase Theorem: the initial assignment of liability for damages has no effect on the final allocation of resources if transactions costs are zero (as Wicksteed implicitly assumed in his argument). Coase’s originality was not in his reasoning, but in recognizing that economic exchange is not the mere trading of physical goods but the trading of rights to property or rights to engage in certain types of conduct affecting property.
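A minimal numerical sketch of the fixed-supply argument, using a toy example of my own (hypothetical linear demands, zero transactions costs, and no income effects, assumptions in the spirit of Wicksteed’s implicit ones rather than anything in his text):

```python
# Two traders with linear inverse demands p = a - b*q for a commodity whose
# total stock S is fixed. With zero transactions costs, the market-clearing
# price and the final holdings depend only on the demands and on S, not on
# how the stock happens to be distributed initially.

def clearing_price(demands, total_stock):
    """Solve sum_i (a_i - p)/b_i = S for the price p."""
    numerator = sum(a / b for a, b in demands) - total_stock
    denominator = sum(1.0 / b for a, b in demands)
    return numerator / denominator

demands = [(10.0, 1.0), (8.0, 2.0)]   # hypothetical (a, b) pairs for traders 1 and 2
S = 6.0                               # fixed total stock

p = clearing_price(demands, S)
holdings = [(a - p) / b for a, b in demands]

# No initial allocation appears anywhere in the calculation: whether trader 1
# starts with all 6 units or the two traders start with 3 each affects only
# who pays whom, not the price or the final holdings.
print(f"clearing price = {p:.2f}, final holdings = {holdings}")
```

Whatever the initial distribution, the stock ends up allocated according to the traders’ demands at the common clearing price; only the direction and size of the payments differ, which is exactly the logic Coase later applied to initial assignments of liability.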

But Wicksteed went further than just showing that the initial distribution of a commodity in fixed supply does not affect the equilibrium price of the commodity or its equilibrium distribution. He showed that in a production economy, cost has no effect on equilibrium price or the equilibrium allocation of resources and goods and services, which seems a remarkably sweeping assertion. But I think that Wicksteed was right in that assertion, and I think that, in making that assertion, he anticipated a point that I have made numerous times on this blog (e.g., here), namely, that just as macroeconomics requires microfoundations, microeconomics requires macrofoundations. The whole of standard microeconomics, e.g., assertions about the effects of an excise tax on price and output, presumes the existence of equilibrium in all markets other than the one being subjected to micro-analysis. Without the background assumption of equilibrium, it would be impossible to derive what Paul Samuelson (incorrectly) called “meaningful theorems” (the mistake stemming from the absurd positivist presumption that empirically testable statements are the only statements that are meaningful).

So let me quote from Wicksteed’s 1914 paper “The Scope and Method of Political Economy in the Light of the Marginal Theory of Value and Distribution.”

[S]o far we have only dealt with the market in the narrower sense. Our investigations throw sufficient light on the distribution of the hay harvest, for instance, or on the “catch” of a fishing fleet. But where the production is continuous, as in mining or in ironworks, will the same theory still suffice to guide us? Here again we encounter the attempt to establish two co-ordinate principles, diagrammatically represented by two intersecting curves; for though the “cost of production” theory of value is generally repudiated, we are still too often taught to look for the forces that determine the stream of supply along two lines, the value of the product, regulated by the law of the market, and the cost of production. But what is cost of production? In the market of commodities I am ready to give as much as the article is worth to me, and I cannot get it unless I give as much as it is worth to others. In the same way, if I employ land or labour or tools to produce something, I shall be ready to give as much as they are worth to me, and I shall have to give as much as they are worth to others – always, of course, differentially. Their worth to me is determined by their differential effect upon my product, their worth to others by the like effect upon their products . . . Again we have an alias merely. Cost of production is merely the form in which the desiredness a thing possesses for someone else presents itself to me. When we take the collective curve of demand for any factor of production we see again that it is entirely composed of demands, and my adjustment of my own demands to the conditions imposed by the demands of others is of exactly the same nature whether I am buying cabbages or factors for the production of steel plates. I have to adjust my desire for a thing to the desires of others for the same thing, not to find some principle other than that of desiredness, co-ordinate with it as a second determinant of market price. The second determinant, here as everywhere, is the supply. It is not until we have perfectly grasped the truth that costs of production of one thing are nothing whatever but an alias of efficiencies in production of other things that we shall be finally emancipated from the ancient fallacy we have so often thrust out at the door, while always leaving the window open for its return.

The upshot of Wicksteed’s argument appears to be that cost, viewed as an independent determinant of price or the allocation of resources, is a redundant concept. Cost as a determinant of value is useful only in the context of a background of general equilibrium in which the prices of all but a single commodity have already been determined. The usual partial-equilibrium apparatus for determining the price of a single commodity in terms of the demand for and the supply of that single product, presumes a given technology for converting inputs into output, and given factor prices, so that the costs can be calculated based on those assumptions. In principle, that exercise is no different from finding the intersection between the demand-price curve and the supply-price curve for a commodity in fixed supply, the demand-price curve and the supply-price curve being conditional on a particular arbitrary assumption about the initial distribution of the commodity among market participants. In the analysis of a production economy, the determination of equilibrium price and output in a single market can proceed in terms of a demand curve for the product and a supply curve (reflecting the aggregate of individual firm marginal-cost curves). However, in this case the supply curve is conditional on the assumption that prices of all other outputs and all factor prices have already been determined. But from the perspective of general equilibrium, the determination of the vector of prices, including all factor prices, that is consistent with general equilibrium cannot be carried out by computing production costs for each individual output, because the factor prices required for a computation of the production costs for any product are unknown until the general equilibrium solution has itself been found.
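One schematic way to put the point (my notation, not Wicksteed’s): in partial equilibrium the supply curve of good $i$ is built up from marginal cost evaluated at given factor prices,

$$
p_i = MC_i\bigl(q_i;\, w_1, \dots, w_m\bigr), \qquad i = 1, \dots, n,
$$

but in general equilibrium the factor prices $w_1, \dots, w_m$ are not data; they are determined jointly with all the $p_i$ and $q_i$ by the full system of demand, supply, and factor-market-clearing conditions. The “costs” on the right-hand side therefore cannot be computed until the equilibrium they are supposed to explain has already been found.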

Thus, the notion that cost can serve as an independent determinant of equilibrium price is an exercise in question begging, because cost is no less an equilibrium concept than price. Cost cannot be logically prior to price if both are determined simultaneously and are mutually interdependent. All that is logically prior to equilibrium price in a basic economic model are the preferences of market participants and the technology for converting inputs into outputs. Cost is not an explanatory variable; it is an explained variable. That is the ultimate fallacy in the doctrine of real costs defended so tenaciously by Jacob Viner in chapter eight of his classic Studies in the Theory of International Trade. That Paul Samuelson in one of his many classic papers, “International Trade and the Equalization of Factor Prices,” could have defended Viner and the real-cost doctrine, failing to realize that costs are simultaneously determined with prices in equilibrium, and are indeterminate outside of equilibrium, seems to me to be a quite remarkable lapse of reasoning on Samuelson’s part.

Saving a Very Old Paper from Oblivion — UPDATE

UPDATE (May 15, 2016): My account below of the events surrounding the writing of my paper elicited the following letter from James Dorn, the unnamed organizer of the conference on Alternatives to Government Fiat Money to whom I refer in the post. His letter makes it clear that my recollection of the events I describe was inaccurate or incomplete in several respects and that, most important, Cato did not intend to suppress my paper. Their original intent was to publish the paper in a separate volume to be published by Kluwer, but the intended volume was never published. Dorn refers to correspondence between us in 1999 in which he apologized for the delay in publication and invited me either to submit the paper for publication in the Cato Journal or to submit it elsewhere. As I mentioned in my post, I did revise the paper into the current version dated June 2000. Why I did not submit it to the Cato Journal or to another journal I am unable to say, but I subsequently somehow came under the impression that I had been discouraged from doing so by Cato. Evidently, my recollection was faulty. In any event, I should not have posted my recollections of how this paper came to languish unpublished for almost three decades without communicating with James Dorn. That, at least, is one lesson to be learned. I can also take some minimal comfort in learning that my own conduct was not quite as wimpy as I had thought. On the other hand, I must apologize to Brad DeLong and Paul Krugman, who linked to this post on their blogs, for having led them into this discussion. All in all, not a great performance on my part. Here is the text of Jim Dorn’s letter.

Dear David,

A colleague directed me to your blog of May 13, in which you state:

“After about 10 years [1989–99] passed, it occurred to me that the paper, which I had more or less forgotten about in the interim, would be worth updating to take into account the literature on network externalities that had subsequently developed. Just to work out for myself the connections between my old arguments and the new literature, I revised the paper to incorporate the network-externalities literature into the discussion. Then, hoping that Cato might no longer care about the paper, I contacted the conference organizer, who was still at Cato, to inquire whether, after a lapse of 10 years, Cato still had objections to my submitting the paper for publication. The response I got was that, at least for the time being, Cato would not allow me to publish the paper, but might reconsider at some unspecified future time.”

—David Glasner (Uneasy Money blog, May 13, 2016)

If you remember, the reason your article from the 1989 conference was not included in the Fall 1989 CJ (vol. 9, no. 2) was because I had deliberately omitted several conference papers from the CJ b/c Kluwer was going to co-publish a book with the title “Alternatives to Government Fiat Money” and wanted me to differentiate it from the CJ conference issue with the same title.  So the intention was to use your paper in the book (I sent you a letter to that effect, of which I have a copy).  Unfortunately, my many other duties at Cato at that time, plus my full-time teaching schedule, put the book project on the back burner.  I accept full responsibility for that delay.

I wrote to you on December 1, 1999, apologizing for the delay in the book project and explicitly stated: “If you wish to withdraw your paper from consideration and use it elsewhere, go ahead.”  I also stated in a separate email on December 1, 1999, that “if you revise your paper, at your own pace, then as soon as I receive it, I will consider it for use in the Cato Journal.  Also, I will reserve the right to use it in the book, if the CJ comes out first.”  You had mentioned in your email (Dec. 1, 1999) that the original paper had to be updated to take account of “network effects and externalities.”  And in a separate reply to my offer, you wrote (Dec. 1, 1999): “Sounds reasonable.  I’ll have to dredge up a copy of the paper and look it over again, before I give you a definite yes or no.  I’ll try to do that by Monday at the latest.”  As far as I can tell from my files, you never did get back to me.

David, I’m sorry that your paper did not see the light of day; I wish I had used it immediately in the conference issue of the CJ in 1989.

I did, however, give you permission to publish elsewhere, as noted above, albeit with a significant lag.  If you had sent me your revised paper when I requested it, I would have certainly used it in the CJ.  I think your blog is incorrect at points and that you were unfair to Cato.  Things happen to delay publications. I value honesty and do the best to maintain my own and Cato’s reputation.  Indeed, if you have revised your paper and would like me to consider it for use in the CJ, I would be glad to do so.

Best regards,

Jim

I have just posted a paper (“How ‘Natural’ Is the Government Monopoly over Money”) on SSRN. It’s a paper I wrote about 28 years ago, shortly after arriving in Washington to start working at the FTC, for a Cato Monetary Conference on Alternatives to Fiat Money. I don’t remember how I came up with the idea for the paper, but it’s possible that I thought, having become an FTC antitrust economist, that it would be worth applying industrial organization concepts to analyze whether any of the traditional arguments for a state monopoly over money could withstand scrutiny. At any rate, the conference organizers seemed to like the idea, and I was offered an honorarium of about — I don’t remember exactly — $1000 to $1500; I was told that the conference papers would be published in a future edition of the Cato Journal.

I am afraid that I no longer have any recollection of the conference or of my presentation, except that, after the conference, I was moderately pleased with myself and my paper, and I was probably more than moderately happy with a four-figure honorarium to supplement my modest government salary. Unfortunately, my happy feelings about the experience were short-lived: not long after the conference, I was informed by one of the conference organizers that the original plans had been changed, so that my paper would not be published in the Cato Journal. That surprise was a bit annoying, but hardly devastating, because I simply assumed that what I had been told meant that I would just have to go through the tedious process of sending the paper out to be published in some economics journal. Feeling moderately pleased with what I had written, I thought that I might try my luck with, say, The Journal of Money, Credit and Banking, a step up — actually several steps up — from the Cato Journal. So when I replied – I thought fairly tactfully – that I certainly understood how plans could change, and had no hard feelings about Cato’s unwillingness to publish my paper, and that I would work on it some more before submitting it elsewhere for publication, I was totally unprepared for the response that was forthcoming: by accepting that four-figure honorarium for writing the paper for the Cato conference, I had relinquished to the Cato Institute all rights to the paper, so that I was not free to submit it to any publication or journal, and Cato would take legal action against me and against any publication that published the paper. My interlocutor did say that there was a chance that Cato might decide to publish the paper in the future, but I well understood that that contingency was not very likely.

Shocked at what had just happened, I felt helpless and violated, and I now reproach myself bitterly for my timidity in acquiescing to Cato’s suppression of my work, not even insisting on a written explanation of Cato’s decision to stop me from publishing my own paper. Nor did I seek legal advice about challenging Cato’s conduct. I could have at least tried writing an article exposing how Cato – an institution whose “mission is to originate, disseminate, and increase understanding of public policies based on the principles of individual liberty, limited government, free markets, and peace” – was engaged in suppressing, with no obvious justification, the original research that it had sponsored.

After about 10 years passed, it occurred to me that the paper, which I had more or less forgotten about in the interim, would be worth updating to take into account the literature on network externalities that had subsequently developed. Just to work out for myself the connections between my old arguments and the new literature, I revised the paper to incorporate the network-externalities literature into the discussion. Then, hoping that Cato might no longer care about the paper, I contacted the conference organizer, who was still at Cato, to inquire whether, after a lapse of 10 years, Cato still had objections to my submitting the paper for publication. The response I got was that, at least for the time being, Cato would not allow me to publish the paper, but might reconsider at some unspecified future time. At that point, I put the paper away, and forgot about it again, until I came across it recently, and decided that it was finally time to at least post it on the internet. If Cato wants to come after me for doing so, I guess they know how to find me.

Here’s the introductory section of the paper.

I have chosen the title of this paper to underscore an ambiguity in how the term “natural monopoly” describes the role of the government in monetary affairs. One possible meaning of “natural monopoly” in this context is the narrow one that economists attach to it in their taxonomies of market structure. The term then denotes special technical conditions of production that make it cheaper for just one firm to produce the entire output of an industry than for two or more firms to share in that production. However, the term “natural monopoly” is not necessarily confined to that narrow meaning even when the term is used by economists. In these looser usages, “natural monopoly” is intended to refer to any conditions or forces that either explain or rationalize why the production of money is or should be monopolized by the government. But since the production of money (or at least of a non-trivial subset of monetary instruments) is nearly universally monopolized by governments, it seems reasonable to infer that there must be some forces (that economic theory ought to be able to identify) that can account for the universality or, if you will, “naturalness,” of a government monopoly over money. While there is nothing sacred about the narrow usage of the term “natural monopoly,” the presumption among economists that natural monopolies in the strict sense should be regulated or taken over by the government is so strong that it seems worthwhile to distinguish between those explanations of the monopoly over money that can, and those that cannot, be subsumed under the narrow meaning of that term.

My first task, therefore, will be to review and evaluate some of the reasons commonly advanced to support the proposition that the production of money is a natural monopoly. The strict natural-monopoly explanations for a government monopoly do not seem to me to withstand critical scrutiny. But in discussing them, I shall point out that several of what purport to be natural-monopoly explanations are actually arguments that the production or use of money involves some type of externality that can be internalized only by a monopoly. The specific externality is often not explicitly described, but, upon careful analysis, one can distinguish between a few possible externalities that might justify some government role in monetary affairs. These disguised externality arguments will draw me into an explicit discussion of the externality justification for a government monopoly. But even if one grants that one or more of these possible externalities may justify some government role in monetary affairs, it does not necessarily follow that a government monopoly is necessary to compensate for the externality. Since I have elsewhere (Glasner 1989, 1998) tried to account for the pervasiveness of the monopoly on national-defense grounds, I shall not repeat myself here. I shall merely conclude by mentioning some implications of the national-defense explanation that suggest that the basis for the government monopoly is weakening and that the supply of money has become, and will continue to become, increasingly competitive.

Click here to continue reading.

Whither Conservatism?

I’m not sure why – well, maybe I can guess – but I have been thinking about an article (“Hayek and the Conservatives”) I wrote in 1992 for Commentary. I just reread it — probably for the first time this century — and although I can’t say that I agree with everything I wrote over 20 years ago, it somehow still seems relevant, perhaps even more so now than then. So I thought I would share it.

At the time of his death on March 23, 1992, less than two months before his ninety-third birthday, F.A. Hayek was widely if not universally acknowledged as this century’s preeminent intellectual advocate of the free market and one of its leading opponents of socialism. His death, coming so soon after the collapse of Communism in Eastern Europe and the abandonment of Marxism and socialism as intellectual ideals, occasioned understandable comment by his admirers about the vindication that Hayek, after years of vilification at the hands of critics, had received at the hands of history.

Though long in coming, however, Hayek’s vindication did not occur all at once. For his work had exerted a crucial, though basically indirect, influence over the renascent conservative and libertarian movements that had grown up after World War II in the United States and Great Britain. Indeed, the revival of those movements culminated in the rise to power of two politicians, Ronald Reagan in America and Margaret Thatcher in England, who were proud to list Hayek among their intellectual mentors. And his vindication had also been presaged, though in an oddly ambiguous way, when Hayek was named co-winner, with the Swedish socialist economist Gunnar Myrdal, of the 1974 Nobel prize in economics.

Still, most of Hayek’s career was spent in the relative obscurity befitting an expatriate Central European intellectual of reserve, urbanity, erudition—and unfashionable views. Hayek’s economic theories had apparently been superseded, first by those of John Maynard Keynes and then by the increasingly mathematical economic analysis of the postwar period, and his political philosophy was considered either a relic of an obsolete Victorian liberalism or, less charitably, an apology for the worst excesses of capitalist exploitation. Before winning the Nobel prize, Hayek, who never served either officially or unofficially as an adviser to any political figure and never sought a mass audience, only twice transcended the obscurity in which he labored for so long: first in the early 1930’s when, as a young man newly arrived in Great Britain, he was briefly considered the chief intellectual rival of Keynes, and a decade later in the mid-1940’s when, much to Hayek’s own surprise, his book, The Road to Serfdom, became a trans-Atlantic best-seller. . . .

Ironically, Hayek’s death occurred not only after his critique of socialism had just received decisive historical confirmation, but when the conservative movement in the United States, whose free-market and free-trade principles he, perhaps more than anyone else, had shaped, was undergoing a fundamental crisis. To understand the nature of the crisis, one must first understand how Hayek came to play such a crucial role in the development of conservatism.

Before World War II what passed for conservatism in the United States was an amalgam of views and prejudices which lacked sufficient coherence to be summarized by any clear set of principles. The chief characteristics of the Old Right were a fanatical opposition to . . . Roosevelt’s New Deal or indeed to any national measures aimed at improving the lot of the least well-off groups or individuals in the country; opposition to international alliances, coupled with decidedly nativist tendencies and a bias in favor of protectionist trade policies; a primitive bias against banks, speculation, high finance, and Wall Street; and complacent toleration of, or occasionally even active support for, racial and religious discrimination against blacks, Jews, and other minorities. . . .

Even more disastrously, American conservatives, mistrusting the federal government and its tendency to become involved in European conflicts, and viscerally hating Roosevelt, bitterly opposed any U.S. efforts to resist or contain the spread of European fascism. These attitudes gave birth to the America First movement of the 1930’s, whose often implicit and occasionally explicit anti-Semitic overtones can only be understood in the light of the broader set of fears, hatreds, and neuroses that animated the movement.

The onset of World War II, the attack on Pearl Harbor, and the subsequent horrific revelations about the Holocaust perpetrated by the Nazis (with whom the America Firsters had uniformly urged coexistence and for whom some of them had expressed sympathy) left American conservatism discredited both morally and intellectually, just as the Depression and a reflexive opposition to the New Deal had discredited conservatism programmatically.

Thus, when The Road to Serfdom was published in 1945, it filled a gaping moral and intellectual vacuum. For here was a book, written by an Austrian expatriate of impeccable anti-Nazi credentials, fundamentally opposed to the socialist ideas now guiding progressive thought everywhere. Moreover, in a profound and eloquent argument, The Road to Serfdom contended that the path the fascists had followed to absolute power had been prepared for them by the very instruments of central planning and the ideology of an all-powerful state which socialists had created before them. The Nazis, after all, had been National Socialists, and Mussolini had been a leader of the Italian Socialist party before starting the Fascist party. The common characteristic of all such movements was to subordinate the individual to the supposed interests of some abstract collective entity—class, nation, race, or simply society.

In a relatively brief span of time, Hayek’s version of free-market, free-trade liberalism (in the traditional European sense of the word), and political internationalism, which had never before taken root among either American conservatives or liberals, became the bedrock on which the generation of American conservatives who came of age after 1945 built a political movement. Liberated from nativist, protectionist, and isolationist tendencies, this generation could turn its energies to the struggle against Communism and other forms of collectivism, and to the promotion of a free-market economy.

Naturally the transformation of American conservatism was never complete. . . The lingering Old Right influence is today most noticeable in the conspiratorial cast of mind, the obsession with betrayal and disloyalty, the search for alien influences, the siege mentality, the anti-intellectualism, the chauvinism, and the free-floating anger that unfortunately still pervade parts of the conservative movement. It is just these qualities that Patrick J. Buchanan would restore if he should ever succeed in “taking back” the movement from those who, in his words, have hijacked it. . . .

The key distinction for Hayek was not big government versus small government, but between a government of laws in which all coercive action is constrained by general and impartial rules, and a government of men in which coercion may be arbitrarily exercised to achieve whatever ends the government, or even the majority on whose behalf it acts, wishes to accomplish. Though Hayek contemplated with little enthusiasm the absorption by the state of a third or more of national income, the amount and character of government spending were to him very much a secondary issue that directly involved no fundamental principle. . . .

Hayek’s point is that there is no deductive proof from self-evident axioms that will establish the case for liberty. Rather, he argues, liberty is a condition and a value that has evolved with society. If we value liberty, it is because Western civilization has evolved in such a way that liberty has become part of its tradition. That tradition, the provisional outcome of a contingent historical and evolutionary process, cannot be explained in purely rational terms.

This approach to social theory, the product of a thoroughgoing philosophical skepticism, is decidedly incompatible with the religious beliefs to which a large segment of the conservative movement subscribes, and equally unattractive to those, conservative or libertarian, who ground their political beliefs in natural law or in any other set of self-evident truths. This no doubt explains the regrettably limited influence that Hayek’s later work has exerted on American conservatives—particularly unfortunate because his rich contributions to legal and constitutional theory have much to offer both conservatives and liberals struggling to formulate a coherent philosophy of adjudication.

That apart, however, it remains undeniable that the primary goals of American conservatism in the postwar era evolved steadily from an Old Right toward a Hayekian agenda: from isolationism to containing the military expansion of Communism and other aggressive totalitarian movements; from protectionism to reducing the extent of government interference with and disruption of the free-market economy both domestically and internationally; from wholesale opposition to the New Deal to reforming and rationalizing its social-insurance measures along more market-oriented lines, and focusing government efforts on helping the least well-off rather than redistributing income generally.

Given the conflicting pressures under which policies are made, this agenda has been far from perfectly implemented even under conservative administrations. Yet it was only by embracing such an agenda that conservatism attracted not just new intellectual supporters—the neoconservatives—but, at least in presidential elections, a majority of votes. A retreat to the Old Right stance advocated by Patrick J. Buchanan and his supporters would mean not just throwing overboard the neoconservative “parvenus,” it would mean eradicating root and branch the fundamental consensus that enabled American conservatism to grow and to thrive in the postwar era.

What’s Wrong with Monetarism?

UPDATE (05/06): In an email Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so that it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his following up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that recovery would be sustainable only if the increased output generated during the recovery was of a particular composition.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation — in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market dogmatism with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were attached so fervently was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering the gold standard retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued that a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternative periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, then why not just eliminate the middleman and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish, even indirectly: a gradual steady increase in the quantity of money that would prevent monetary-induced booms and busts.

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone, by imposing a rule prescribing the time path of a single targeted instrument over which the monetary authority was presumed to have direct control: the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable; it is a reduced-form variable incorporating both supply-side and demand-side parameters. But Friedman and other Monetarists insisted on treating it as if it were a structural – and a deep structural variable at that – supply variable, so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s, when Friedman and his acolytes were confidently predicting – over and over again – the return of double-digit inflation because the quantity of money was increasing for most of the 1980s at double-digit rates.
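For reference, here is the textbook relationship in standard notation (nothing quoted from DeLong or Krugman). With $M$ the money stock, $B$ the monetary base, $c$ the public’s currency-deposit ratio, and $r$ the banks’ reserve-deposit ratio,

$$
M = m \cdot B, \qquad m = \frac{1+c}{r+c},
$$

so the multiplier $m$ is built out of $c$ and $r$, which summarize the portfolio choices of the public and the banks rather than anything the monetary authority directly controls. That is what makes $m$ a reduced-form magnitude: when those choices shift, a forecast premised on a stable $m$, or on the central bank’s ability to control $M$ through $B$, goes astray.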

So once the k-percent rule, the Monetarist alternative that Friedman had persuasively, though fallaciously, argued was preferable to the gold standard on strictly libertarian grounds, collapsed under an avalanche of contradictory evidence, the gold standard once again became the default position of laissez-faire dogmatists. There was, to be sure, some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition – the elimination of legal tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal-tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of cryptocurrencies like bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and for that matter with Friedman’s many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized, and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

Benjamin Cole Remembers Richard Nixon (of Blessed Memory?)

On Marcus Nunes’s Historinhas blog, Benjamin Cole has just written a guest post about Richard Nixon’s August 15, 1971 speech imposing a 90-day freeze on wages and prices, abolishing the last tenuous link between the dollar and gold, and applying a 10% tariff on all imports into the US. Tinged with nostalgia for old times, the post actually refers to me in the title, perhaps because of my two recent posts on free trade and the gold standard. Well, rather than comment directly on Ben’s post, I will just refer to one of my first posts as a blogger, marking the fortieth anniversary of Nixon’s announcement, which I recall with considerably less nostalgia than Ben does, and explaining some of its mostly disastrous consequences.

Click here.

PS But Ben is right to point out that stock prices rose about 4 or 5 percent the day after the announcement, a reaction that, of course, was anything but rational.

What’s so Bad about the Gold Standard?

Last week Paul Krugman argued that Ted Cruz is more dangerous than Donald Trump, because Trump is merely a protectionist while Cruz wants to restore the gold standard. I’m not going to weigh in on the relative merits of Cruz and Trump, but I have previously suggested that Krugman may be too dismissive of the possibility that the Smoot-Hawley tariff did indeed play a significant, though certainly secondary, role in the Great Depression. In warning about the danger of a return to the gold standard, Krugman is certainly right that the gold standard was and could again be profoundly destabilizing to the world economy, but I don’t think he did such a good job of explaining why, largely because, like Ben Bernanke and, I am afraid, most other economists, Krugman isn’t totally clear on how the gold standard really worked.

Here’s what Krugman says:

[P]rotectionism didn’t cause the Great Depression. It was a consequence, not a cause – and much less severe in countries that had the good sense to leave the gold standard.

That’s basically right. But I note for the record, to spell out the point made in the post I alluded to in the opening paragraph, that protectionism might indeed have played a role in exacerbating the Great Depression, making it harder for Germany and other indebted countries to pay off their debts by making it more difficult for them to generate the exports required to discharge their obligations, thereby rendering their IOUs, widely held by European and American banks, worthless or nearly so and undermining the solvency of many of those banks. Protectionism also increased the demand for the gold required to discharge debts, adding to the deflationary forces that had been unleashed by the Bank of France and the Fed, thereby triggering the debt-deflation mechanism described by Irving Fisher in his famous article.

Which brings us to Cruz, who is enthusiastic about the gold standard – which did play a major role in spreading the Depression.

Well, that’s half — or maybe a quarter — right. The gold standard did play a major role in spreading the Depression. But the role was not just major; it was dominant. And the role of the gold standard in the Great Depression was not just to spread it; the role was, as Hawtrey and Cassel warned a decade before it happened, to cause it. The causal mechanism was that in restoring the gold standard, the various central banks linking their currencies to gold would increase their demands for gold reserves so substantially that the value of gold would rise back to its value before World War I, which was about double what it was after the war. It was to avoid such a catastrophic increase in the value of gold that Hawtrey drafted the resolutions adopted at the 1922 Genoa monetary conference calling for central-bank cooperation to minimize the increase in the monetary demand for gold associated with restoring the gold standard. Unfortunately, when France officially restored the gold standard in 1928, it went on a gold-buying spree, joined in by the Fed in 1929 when it raised interest rates to suppress Wall Street stock speculation. The huge accumulation of gold by France and the US in 1929 led directly to the deflation that started in the second half of 1929, which continued unabated till 1933. The Great Depression was caused by a 50% increase in the value of gold that was the direct result of the restoration of the gold standard. In principle, if the Genoa Resolutions had been followed, the restoration of the gold standard could have been accomplished with no increase in the value of gold. But, obviously, the gold standard was a catastrophe waiting to happen.
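
A toy model may make the mechanism concrete (this is my own illustration, with made-up parameters, not anything drawn from Hawtrey or Cassel): with a fixed world stock of gold, an increase in central-bank reserve demand raises the real value of gold, and under a gold standard the price level must fall in inverse proportion.

# Toy gold-market model (hypothetical parameters).  World gold stock G is fixed;
# non-monetary demand for gold falls with gold's real value v; central banks
# demand R units as reserves.  Market clearing: (a - b*v) + R = G.
# Under a gold standard the price level varies inversely with v.
a, b, G = 350.0, 100.0, 250.0

def equilibrium(R):
    v = (a - G + R) / b              # real value of gold that clears the market
    p = 1.0 / v                      # normalized price level
    return v, p

for R in (100.0, 150.0, 200.0):      # central banks accumulate more gold reserves
    v, p = equilibrium(R)
    print(f"reserves = {R:>5.0f}: value of gold = {v:.2f}, price level = {p:.2f}")
# Doubling reserve demand raises the value of gold by 50% and pushes the price
# level down by a third -- deflation induced entirely by central-bank behavior,
# with no change in the world stock of gold.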

The problem with gold is, first of all, that it removes flexibility. Given an adverse shock to demand, it rules out any offsetting loosening of monetary policy.

That’s not quite right; the problem with gold is, first of all, that it does not guarantee that the value of gold will be stable. The problem is exacerbated when central banks hold substantial gold reserves, which means that significant changes in central banks’ demands for gold reserves can have dramatic repercussions on the value of gold. Far from being a guarantee of price stability, the gold standard can be a source of price-level instability, depending on the policies adopted by individual central banks. The Great Depression was not caused by an adverse shock to demand; it was caused by a policy-induced shock to the value of gold. There was nothing inherent in the gold standard that would have prevented a loosening of monetary policy – a decline in the gold reserves held by central banks – to reverse the deflationary effects of the rapid accumulation of gold reserves, but the insane Bank of France was not inclined to reverse its policy, perversely viewing the increase in its gold reserves as evidence of the success of its catastrophic policy. However, once some central banks are accumulating gold reserves, other central banks inevitably feel that they must take steps at least to maintain their current levels of reserves, lest markets begin to lose confidence that convertibility into gold will be preserved. Bad policy tends to spread. Krugman seems to have this possibility in mind when he continues:

Worse, relying on gold can easily have the effect of forcing a tightening of monetary policy at precisely the wrong moment. In a crisis, people get worried about banks and seek cash, increasing the demand for the monetary base – but you can’t expand the monetary base to meet this demand, because it’s tied to gold.

But Krugman is being a little sloppy here. If the demand for the monetary base – meaning, presumably, currency plus reserves at the central bank – is increasing, then the public simply wants to increase its holdings of currency, not spend the added holdings. So what stops the central bank from accommodating that demand? Krugman says that “it” – meaning, presumably, the monetary base – is tied to gold. What does it mean for the monetary base to be “tied” to gold? Under the gold standard, the “tie” to gold is a promise to convert the monetary base, on demand, at a specified conversion rate.

Question: why would that promise to convert have prevented the central bank from increasing the monetary base? Answer: it would not and did not. Since, by assumption, the public is demanding more currency to hold, there is no reason why the central bank could not safely accommodate that demand. Of course, there would be a problem if the public feared that the central bank might not continue to honor its convertibility commitment and that the price of gold would rise. Then there would be an internal drain on the central bank’s gold reserves. But that is not — or doesn’t seem to be — the case that Krugman has in mind. Rather, what he seems to mean is that the quantity of base money is limited by a reserve ratio between the gold reserves held by the central bank and the monetary base. But if the tie between the monetary base and gold that Krugman is referring to is a legal reserve requirement, then he is confusing the legal reserve requirement with the gold standard, and the two are simply not the same, it being entirely possible, and actually desirable, for the gold standard to function with no legal reserve requirement – certainly not a marginal reserve requirement.
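
Some stylized balance-sheet arithmetic (hypothetical numbers) may make the distinction clearer: under a bare convertibility commitment the central bank can expand the base to meet an increased demand to hold currency, and only a statutory reserve ratio would forbid it.

# Hypothetical central-bank balance sheet under a convertibility commitment.
# Gold reserves are unchanged; the public's demand to hold currency rises and
# the bank accommodates it by expanding the monetary base.
gold_reserves = 40.0
for base in (100.0, 120.0):
    ratio = gold_reserves / base
    print(f"monetary base = {base:.0f}: gold reserve ratio = {ratio:.0%}")
# Convertibility is at risk only if holders actually present currency for gold;
# a lower reserve ratio is not, by itself, a breach of the gold standard.  A
# legal minimum reserve requirement of, say, 35% would forbid the expansion --
# but that is a statutory constraint, not the gold standard itself.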

On top of that, a slump drives interest rates down, increasing the demand for real assets perceived as safe — like gold — which is why gold prices rose after the 2008 crisis. But if you’re on a gold standard, nominal gold prices can’t rise; the only way real prices can rise is a fall in the prices of everything else. Hello, deflation!

Note the implicit assumption here: that the slump just happens for some unknown reason. I don’t deny that such events are possible, but in the context of this discussion about the gold standard and its destabilizing properties, the historically relevant scenario is one in which the slump occurred because of a deliberate decision to raise interest rates, as the Fed did in 1929 to suppress stock-market speculation and as the Bank of England did for most of the 1920s to restore and maintain the prewar sterling parity against the dollar. Under those circumstances, it was the increase in the interest rate set by the central bank that amounted to an increase in the monetary demand for gold, which is what caused gold appreciation and deflation.

What’s so Great about Free Trade?

Free trade is about as close to a sacred tenet as can be found in classical and neoclassical economic theory. And there is no economic heresy more sacrilegious than protectionism. An important part of what endears free trade to economists, it seems to me, is that it is both logically compelling and counter-intuitive. There is something both self-evident, yet paradoxical, about saying that the gains from trade consist in what you receive not in what you give up, in what you import not in what you export. And there is something even more paradoxical and counter-intuitive — and logically inescapable — in the idea of comparative advantage which teaches that every country, no matter how meager its resources and how unproductive its workers, will always be the lowest-cost producer of something, while every country, no matter how well-endowed with resources and how productive its workers, will always be the highest-cost producer of something.
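
A stylized two-country, two-good example in the spirit of Ricardo’s famous illustration (the numbers below are invented for illustration, not taken from anywhere in particular) shows the paradox at work: even a country that is the lower-cost producer of both goods gains by specializing in the good in which its relative cost advantage is greatest.

# Stylized Ricardian example (illustrative numbers): labor hours required to
# produce one unit of each good in each country.
hours = {
    "Portugal": {"wine": 80, "cloth": 90},     # lower cost in both goods
    "England":  {"wine": 120, "cloth": 100},
}
labor = {"Portugal": 170, "England": 220}      # total labor available (hours)

def world_output(allocation):
    # allocation[country][good] = hours devoted to producing that good
    return {g: sum(allocation[c].get(g, 0) / hours[c][g] for c in hours)
            for g in ("wine", "cloth")}

# Autarky: each country splits its labor evenly between the two goods.
autarky = {c: {"wine": labor[c] / 2, "cloth": labor[c] / 2} for c in hours}
# Specialization by comparative advantage: Portugal in wine, England in cloth.
specialized = {"Portugal": {"wine": labor["Portugal"]},
               "England":  {"cloth": labor["England"]}}

for name, alloc in (("autarky", autarky), ("specialized", specialized)):
    out = world_output(alloc)
    print(name, {g: round(q, 2) for g, q in out.items()})
# World output of *both* goods is higher under specialization, even though
# Portugal is the lowest-cost producer of both -- comparative, not absolute,
# advantage is what matters.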

Despite the love and devotion that the doctrine of free trade inspires in economists, the doctrine has had indifferent success in rallying public opinion to its side. Free trade has never been popular among the masses. Supporting free trade has sometimes been a way for politicians to establish that they are “serious,” high-minded, and principled, and therefore worthy of the support of those who fancy themselves “serious,” high-minded, and principled. And so there is a kind of moral pressure on politicians to pronounce themselves free traders, though with the immediate qualification tacked on that they also believe in fair trade. So even that scourge of political correctness, and you know who I mean, felt obligated to say “I’m a free-trader.”

Although free trade has never been a position calculated to attract a popular following, protectionism has usually not been a winning issue either. But it has, on occasion, been an effective strategy by which political outsiders, or those like Pat Buchanan and Ross Perot, posing as political outsiders, could attract a following. In fact, it is remarkable how closely the message of economic nationalism, control of the borders, disengagement from international treaties and alliances, trumpeted by the Politically Incorrect One resembles the message propagated by Buchanan in his 1992 and 1996 campaigns.

And the protectionist, anti-free-trade message clearly appeals to both ends of the political spectrum. Opposition to NAFTA and other free-trade agreements has been fueling the Sanders campaign just as much as it has fueled the campaign of the Golden-Haired One. The latter, of course, has benefited from being able to push a number of other hot-button issues that Sanders would not want to be associated with, and, above all, from having shrewdly chosen a group of incredibly weak opponents (AKA the deep Republican bench that we used to hear so much about) to run against. So the question that I want to explore is why there is such a disconnect about free trade between the public and professional economists (with a few noteworthy exceptions to be sure, but they are just that – exceptional).

The key to understanding that disconnect is, I suggest, the way in which economists have been trained to think about individual and social welfare, which, it seems to me, is totally different from how most people think about their well-being. In the standard utility-maximization framework, individual well-being is a monotonically increasing function of individual consumption, leisure being one of the “goods” being consumed, so that a reduction in hours worked is, when consumption of everything else is held constant, welfare-increasing. Even at a superficial level, this seems totally wrong. While it is certainly true that people do value consumption, and increased consumption does tend to increase overall levels of well-being, I think that changes in consumption have a relatively minor effect on how people perceive the quality of their lives.
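
Concretely, the framework being described assumes something like the following toy specification (purely illustrative, nobody’s estimated model):

# Stylized textbook utility function: well-being depends only on consumption and
# leisure, and both enter positively (illustrative functional form).
import math

def utility(consumption, leisure):
    return math.log(consumption) + 0.5 * math.log(leisure)

# Holding consumption constant, more leisure always counts as a welfare gain, so
# a worker who loses a job but is fully compensated in money terms is recorded
# as better off -- precisely the implication questioned in what follows.
print(utility(100.0, 40.0))    # employed, 40 hours of weekly leisure
print(utility(100.0, 80.0))    # jobless but fully compensated: more "leisure"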

What people do is a far more important determinant of their overall estimation of how well-off they are than what they consume. When you meet someone, you are likely, if you are at all interested in finding out about the person, to ask him or her about what he or she does, not about what he or she consumes. Most of the waking hours of an adult person are spent in work-related activities. If people are miserable in their jobs, their estimation of their well-being is likely to be low and if they are happy or fulfilled or challenged in their jobs, their estimation of their well-being is likely to be high.

And maybe I’m clueless, but I find it hard to believe that what makes people happy or unhappy with their lives depends in a really significant way on how much they consume. It seems to me that what matters to most people is the nature of their relationships with their family and friends and the people they work with, and whether they get satisfaction from their jobs or from a sense that they are accomplishing or are on their way to accomplish some important life goals. Compared to the satisfaction derived from their close personal relationships and from a sense of personal accomplishment, levels of consumption don’t seem to matter all that much.

Moreover, insofar as people depend on being employed in order to finance their routine consumption purchases, they know that being employed is a necessary condition for maintaining their current standard of living. For many if not most people, the unplanned loss of their current job would be a personal disaster, which means that being employed is the dominant – the overwhelming – determinant of their well-being. Ordinary people seem to understand how closely their well-being is tied to the stability of their employment, which is why people are so viscerally opposed to policies that, they fear, could increase the likelihood of losing their jobs.

To think that an increased chance of losing one’s job in exchange for a slight gain in purchasing power owing to the availability of low-cost imports is an acceptable trade-off for most workers does not seem at all realistic. Questioning the acceptability of this trade-off doesn’t mean that I am denying that, in principle, free trade increases aggregate income or that there are corresponding employment gains associated with the increased export opportunities created by free trade. Nor does it mean that I deny that, in principle, the gains from free trade are large enough to provide monetary compensation to workers who lose their jobs, but I do question whether such compensation is possible in practice or that the compensation would be adequate for the loss of psychic well-being associated with losing one’s job, even if money income is maintained.

Losing a job may cause a demoralization for which no monetary payment can compensate, because the payment is incommensurate with the loss. The psychic effects of losing a job (an increase in leisure!) are ignored by the standard calculations of welfare effects in which well-being is identified with, and measured by, consumption. And these losses are compounded and amplified when they are concentrated in specific communities and regions, causing substantial further losses to the businesses dependent on the demand of newly unemployed workers. The hollowing out of large parts of the industrial northeast and midwest is sad testimony to these wider effects, which include the irreparable loss of intangible infrastructural capital resulting from the withering away of communities in which complex and extensive social networks formerly thrived.

The goal of this post is not to make an argument for protectionist policies, let alone for any of the candidates arguing for protectionist policies. The aim is to show how inadequate the standard arguments for free trade are in responding to the concerns of the people who feel that they have been hurt by free-trade policies or feel that the jobs that they have now are vulnerable to continued free trade and ever-increasing globalization. I don’t say that responses can’t be made, just that they haven’t been made.

The larger philosophical or methodological point is that, although the theory of utility maximization underlying neoclassical theory is certainly useful as a basis for deriving what Samuelson called meaningful theorems – or, in philosophically more defensible terms, refutable predictions – about the effects of changes in specified exogenous variables on prices and output, it is a far shakier basis for judgments about welfare. Economic theory can tell us that an excise tax on sugar tends to cause an increase in the price, and a reduction in the output, of sugar. But the idea that we can reliably make welfare comparisons between alternative states of the world when welfare is assumed to be a function of consumption, and that nothing else matters, is simply preposterous. And it’s about time that economists enlarged their notions of what constitutes well-being if they want to make useful recommendations about the welfare implications of public policy, especially trade policy.
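
The sugar-tax example can be made concrete with a minimal linear supply-and-demand sketch (all parameter values invented for illustration):

# Comparative-statics sketch: a per-unit excise tax in a linear market
# (all numbers are made up for illustration).
# Demand:  Qd = a - b*P            (P is the price consumers pay)
# Supply:  Qs = c + d*(P - tax)    (producers receive the price net of the tax)
a, b, c, d = 100.0, 2.0, -20.0, 4.0

def equilibrium(tax):
    P = (a - c + d * tax) / (b + d)   # solve a - b*P = c + d*(P - tax)
    Q = a - b * P
    return P, Q

for tax in (0.0, 3.0):
    P, Q = equilibrium(tax)
    print(f"tax = {tax:.0f}: price = {P:.2f}, quantity = {Q:.2f}")
# The tax raises the price consumers pay and reduces the quantity sold -- the
# qualitative "meaningful theorem" -- but by itself says nothing about whether
# anyone is better or worse off.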

Justice Scalia and the Original Meaning of Originalism

(I almost regret writing this post because it took a lot longer to write than I expected and I am afraid that I have ventured too deeply into unfamiliar territory. But having expended so much time and effort on this post, I must admit to being curious about what people will think of it.)

I resist the temptation to comment on Justice Scalia’s character beyond one observation: a steady stream of irate outbursts may have secured his status as a right-wing icon and burnished his reputation as a minor literary stylist, but his eruptions brought no credit to him or to the honorable Court on which he served.

But I will comment at greater length on the judicial philosophy, originalism, which he espoused so tirelessly. The first point to make, in discussing originalism, is that there are at least two concepts of originalism that have been advanced. The first and older concept is that the provisions of the US Constitution should be understood and interpreted as the framers of the Constitution intended those provisions to be understood and interpreted. The task of the judge, in interpreting the Constitution, would then be to reconstruct the collective or shared state of mind of the framers and, having ascertained that state of mind, to interpret the provisions of the Constitution in accord with that collective or shared state of mind.

A favorite originalist example is the “cruel and unusual punishment” provision of the Eighth Amendment to the Constitution. Originalists dismiss all arguments that capital punishment is cruel and unusual, because the authors of the Eighth Amendment could not have believed capital punishment to be cruel and unusual. If that is what they had believed, why, having passed the Eighth Amendment, did the first Congress proceed in 1790 to impose the death penalty for treason, counterfeiting, and other offenses? So it seems obvious that the authors of the Eighth Amendment did not intend to ban capital punishment. If so, originalists argue, the “cruel and unusual” provision of the Eighth Amendment can provide no ground for ruling that capital punishment violates the Eighth Amendment.

There are a lot of problems with the original-intent version of originalism, the most obvious being the impossibility of attributing an unambiguous intention to the 39 delegates to the Constitutional Convention who signed the final document. The Constitutional text that emerged from the Convention was a compromise among many competing views and interests, and it did not necessarily conform to the intentions of any of the delegates, much less all of them. True, James Madison was the acknowledged author of the Bill of Rights, so if we are parsing the Eighth Amendment, we might, in theory, focus exclusively on what he understood the Eighth Amendment to mean. But focusing on Madison alone would be problematic, because Madison actually opposed adding a Bill of Rights to the original Constitution; he introduced the Bill of Rights as amendments in the first Congress only because the Constitution would not have been approved without an understanding that the Bill of Rights he had opposed would be adopted as amendments to the Constitution. The inherent ambiguity in the notion of intention, even in the case of a single individual acting out of mixed, if not conflicting, motives – an ambiguity compounded when action is undertaken collectively by many individuals – causes the notion of original intent to dissolve into nothingness when one tries to apply it in practice.

Realizing that trying to determine the original intent of the authors of the Constitution (including the Amendments thereto) is a fool’s errand, many originalists, including Justice Scalia, tried to salvage the doctrine by shifting its focus from the inscrutable intent of the Framers to the objective meaning that a reasonable person would have attached to the provisions of the Constitution when it was ratified. Because the provisions of the Constitution are either ordinary words or legal terms, the meaning that would reasonably have been attached to those provisions can supposedly be ascertained by consulting the contemporary sources, either dictionaries or legal treatises, in which those words or terms were defined. It is this original meaning that, according to Scalia, must remain forever inviolable, because to change the meaning of provisions of the Constitution would allow unelected judges to covertly amend the Constitution, evading the amendment process spelled out in Article V of the Constitution, thereby nullifying the principle of a written constitution that constrains the authority and powers of all branches of government. Instead of being limited by the Constitution, judges not bound by the original meaning arrogate to themselves an unchecked power to impose their own values on the rest of the country.

To return to the Eighth Amendment, Scalia would say that the meaning attached to the term “cruel and unusual” when the Eighth Amendment was passed was clearly not so broad that it prohibited capital punishment. Otherwise, how could Congress, having voted to adopt the Eighth Amendment, proceed to make counterfeiting and treason and several other federal offenses capital crimes? Of course that’s a weak argument, because Congress, like any other representative assembly, is under no obligation or constraint to act consistently. It’s well known that democratic decision-making need not be consistent, and just because a general principle is accepted doesn’t mean that the principle will not be violated in specific cases. A written Constitution is supposed to impose some discipline on democratic decision-making for just that reason. But there was no mechanism in place to prevent such inconsistency, judicial review of Congressional enactments not having become part of the Constitutional fabric until John Marshall’s 1803 opinion in Marbury v. Madison made judicial review, quite contrary to the intention of many of the Framers, an organic part of the American system of governance.

Indeed, in 1798, less than ten years after the Bill of Rights was adopted, Congress enacted the Alien and Sedition Acts, which, I am sure even Justice Scalia would have acknowledged, violated the First Amendment prohibition against abridging the freedom of speech and the press. To be sure, the Congress that passed the Alien and Sedition Acts was not the same Congress that passed the Bill of Rights, but one would hardly think that the original meaning of abridging freedom of speech and the press had been forgotten in the intervening decade. Nevertheless, to uphold his version of originalism, Justice Scalia would have had to argue either that the original meaning of the First Amendment had been forgotten or to acknowledge that one can’t simply infer from the actions of a contemporaneous or nearly contemporaneous Congress what the original meaning of a provision of the Constitution was, because it is clearly possible for the actions of Congress to be contrary to some supposed original meaning of a provision of the Constitution.

Be that as it may, for purposes of the following discussion, I will stipulate that we can ascertain an objective meaning that a reasonable person would have attached to the provisions of the Constitution at the time it was ratified. What I want to examine is Scalia’s idea that it is an abuse of judicial discretion for a judge to assign a meaning to any Constitutional term or provision that is different from that original meaning. To show what is wrong with Scalia’s doctrine, I must first explain that it is based on the legal philosophy known as legal positivism. Whether Scalia realized that he was a legal positivist I don’t know, but it is clear that he took the view that the validity and legitimacy of a law or a legal provision or a legal decision (including a Constitutional provision or decision) derives from an authority empowered to make law, and that no one other than an authorized law-maker or sovereign is empowered to make law.

According to legal positivism, all law, including Constitutional law, is understood as an exercise of will – a command. What distinguishes a legal command from, say, a mugger’s command to a victim to turn over his wallet is that the mugger is not a sovereign. Not only does the sovereign get what he wants, the sovereign, by definition, gets it legally; we are not only forced — compelled — to obey, but, to add insult to injury, we are legally obligated to obey. And morality has nothing to do with law or legal obligation. That’s the philosophical basis of legal positivism to which Scalia, wittingly or unwittingly, subscribed.

Luckily for us, we Americans live in a country in which the people are sovereign, but the power of the people to exercise their will collectively was delimited and circumscribed by the Constitution ratified in 1788. Under positivist doctrine, the sovereign people in creating the government of the United States of America laid down a system of rules whereby the valid and authoritative expressions of the will of the people would be given the force of law and would be carried out accordingly. The rule by which the legally valid, authoritative, command of the sovereign can be distinguished from the command of a mere thug or bully is what the legal philosopher H. L. A. Hart called a rule of recognition. In the originalist view, the rule of recognition requires that any judicial judgment accord with the presumed original understanding of the provisions of the Constitution when the Constitution was ratified, thereby becoming the authoritative expression of the sovereign will of the people, unless that original understanding has subsequently been altered by way of the amendment process spelled out in Article V of the Constitution. What Scalia and other originalists are saying is that any interpretation of a provision of the Constitution that conflicts with the original meaning of that provision violates the rule of recognition and is therefore illegitimate. Hence, Scalia’s simmering anger at decisions of the court that he regarded as illegitimate departures from the original meaning of the Constitution.

But legal positivism is not the only theory of law. F. A. Hayek, who, despite his good manners, somehow became a conservative and libertarian icon a generation before Scalia, subjected legal positivism to withering criticism in volume one of Law, Legislation and Liberty. But the classic critique of legal positivism was written a little over a half century ago by Ronald Dworkin in his essay “Is Law a System of Rules?” (aka “The Model of Rules”). Dworkin’s main argument was that no system of rules can be sufficiently explicit and detailed to cover all possible fact patterns that a judge might have to adjudicate. Legal positivists view the exercise of discretion by judges as an exercise of personal will, authorized by the sovereign in cases in which no legal rule exactly fits the facts of a case. Dworkin argued that, rather than an imposition of judicial will authorized by the sovereign, the exercise of judicial discretion is an application of the deeper principles relevant to the case, thereby allowing the judge to determine which, among the many possible rules that could be applied to the facts of the case, best fits with the totality of the circumstances, including prior judicial decisions, that the judge must take into account. According to Dworkin, law and the legal system as a whole are not an expression of sovereign will, but a continuing articulation of principles in terms of which specific rules of law must be understood, interpreted, and applied.

The meaning of a legal or Constitutional provision can’t be fixed at a single moment, because, like all social institutions, meaning evolves and develops organically. Not being an expression of the sovereign will, the meaning of a legal term or provision cannot be identified by a putative rule of recognition – e.g., the original-meaning doctrine – that freezes the meaning of the term at a particular moment in time. It is not true, as Scalia and other originalists argue, that conceding that the meaning of Constitutional terms and provisions can change and evolve allows unelected judges to substitute their will for the sovereign will enshrined when the Constitution was ratified. When a judge acknowledges that the meaning of a term has changed, the judge does so because that new meaning has already been foreshadowed in earlier cases with which his decision in the case at hand must comport. There is always a danger that the reasoning of a judge is faulty, but faulty reasoning can also beset judges claiming to apply the original meaning of a term, as Chief Justice Taney did in his infamous Dred Scott opinion, in which Taney argued that the original meaning of the term “property” included property in human beings.

Here is an example of how a change in meaning may be required by a change in our understanding of a concept. It may not be the best example to shed light on the legal issues, but it is the one that occurs to me as I write this. About a hundred years ago, Bertrand Russell and Alfred North Whitehead were writing one of the great philosophical works of the twentieth century, Principia Mathematica. Their objective was to prove that all of mathematics could be reduced to pure logic. It was a grand and heroic effort that they undertook, and their work will remain a milestone in the history of philosophy. If Russell and Whitehead had succeeded in their effort to reduce mathematics to logic, it could properly be said that mathematics is really the same as logic, and the meaning of the word “mathematics” would be no different from the meaning of the word “logic.” But if the meaning of mathematics were indeed the same as that of logic, it would not be the result of Russell and Whitehead having willed “mathematics” and “logic” to mean the same thing, Russell and Whitehead being possessed of no sovereign power to determine the meaning of “mathematics.” Whether mathematics is really the same as logic depends on whether all of mathematics can be logically deduced from a set of axioms. No matter how much Russell and Whitehead wanted mathematics to be reducible to logic, the factual question of whether mathematics can be reduced to logic has an answer, and the answer is completely independent of what Russell and Whitehead wanted it to be.

Unfortunately for Russell and Whitehead, the Viennese mathematician Kurt Gödel came along nearly two decades after they completed the third and final volume of their masterpiece and proved an “incompleteness theorem” showing that mathematics cannot be reduced to logic – mathematics is therefore not the same as logic – because in any consistent axiomatic system rich enough to contain arithmetic, some true propositions of arithmetic will be logically unprovable within the system. The meaning of mathematics is therefore demonstrably not the same as the meaning of logic. This difference in meaning had to be discovered; it could not be willed.

Actually, it was Humpty Dumpty who famously anticipated the originalist theory that meaning is conferred by an act of will.

“I don’t know what you mean by ‘glory,’ ” Alice said.
Humpty Dumpty smiled contemptuously. “Of course you don’t—till I tell you. I meant ‘there’s a nice knock-down argument for you!’ ”
“But ‘glory’ doesn’t mean ‘a nice knock-down argument’,” Alice objected.
“When I use a word,” Humpty Dumpty said, in rather a scornful tone, “it means just what I choose it to mean—neither more nor less.”
“The question is,” said Alice, “whether you can make words mean so many different things.”
“The question is,” said Humpty Dumpty, “which is to be master—that’s all.”

In Humpty Dumpty’s doctrine, meaning is determined by a sovereign master. In originalist doctrine, the sovereign master is the presumed will of the people when the Constitution and the subsequent Amendments were ratified.

So the question whether capital punishment is “cruel and unusual” can’t be answered, as Scalia insisted, simply by invoking a rule of recognition that freezes the meaning of “cruel and unusual” at the presumed meaning it had in 1790, because the point of a rule of recognition is to identify the sovereign will that is given the force of law, while the meaning of “cruel and unusual” does not depend on anyone’s will. If a judge reaches a decision based on a meaning of “cruel and unusual” different from the supposed original meaning, the judge is not abusing his discretion; the judge is engaged in judicial reasoning. The reasoning may be good or bad, right or wrong, but judicial reasoning is not rendered illegitimate just because it assigns a meaning to a term different from the supposed original meaning. The test of judicial reasoning is how well it accords with the totality of judicial opinions and relevant principles from which the judge can draw in supporting his reasoning. Invoking a supposed original meaning of what “cruel and unusual” meant to Americans in 1789 does not tell us how to understand the meaning of “cruel and unusual,” just as the question whether logic and mathematics are synonymous cannot be answered by insisting that Russell and Whitehead were right in thinking that mathematics and logic are the same thing. (I note for the record that I personally have no opinion about whether capital punishment violates the Eighth Amendment.)

One reason meanings change is that circumstances change. The meaning of freedom of the press and freedom of speech may have been perfectly clear in 1789, but our conception of what is protected by the First Amendment has certainly expanded since the First Amendment was ratified. As new media for conveying speech have been introduced, the courts have brought those media under the protection of the First Amendment. Scalia made a big deal of joining with the majority in Texas v. Johnson, a 1989 case in which the conviction of a flag burner was overturned. Scalia liked to cite that case as proof of his fidelity to the text of the Constitution; while pouring scorn on the flag burner, Scalia announced that despite his righteous desire to exact a terrible retribution from the bearded weirdo who burned the flag, he had no choice but to follow – heroically, in his estimation – the text of the Constitution.

But flag-burning is certainly a form of symbolic expression, and it is far from obvious that the original meaning of the First Amendment included symbolic expression. To be sure some forms of symbolic speech were recognized as speech in the eighteenth century, but it could be argued that the original meaning of freedom of speech and the press in the First Amendment was understood narrowly. The compelling reason for affording flag-burning First Amendment protection is not that flag-burning was covered by the original meaning of the First Amendment, but that a line of cases has gradually expanded the notion of what activities are included under what the First Amendment calls “speech.” That is the normal process by which law changes and meanings change, incremental adjustments taking into account unforeseen circumstances, eventually leading judges to expand the meanings ascribed to old terms, because the expanded meanings comport better with an accumulation of precedents and the relevant principles on which judges have relied in earlier cases.

But perhaps the best example of how changes in meaning emerge organically from our efforts to cope with changing and unforeseen circumstances, rather than being the willful impositions of a higher authority, is provided by originalism itself, because “originalism” was originally about the original intention of the Framers of the Constitution. It was only when it became widely accepted that the original intention of the Framers was not something that could be ascertained that people like Antonin Scalia decided to change the meaning of “originalism,” so that it was no longer about the original intention of the Framers, but about the original meaning of the Constitution when it was ratified. So what we have here is a perfect example of how the meaning of a well-understood term came to be changed, because the original meaning of the term was found to be problematic. And who was responsible for this change in meaning? Why, the very same people who insist that it is forbidden to tamper with the original meaning of the terms and provisions of the Constitution. But they had no problem in changing the meaning of their own doctrine of Constitutional interpretation. Do I blame them for changing the meaning of the originalist doctrine? Not one bit. But if originalists were only marginally more introspective than they seem to be, they might have realized that changes in meaning are perfectly normal and legitimate, especially when trying to give concrete meaning to abstract terms in a way that best fits in with the entire tradition of judicial interpretation embodied in the totality of all previous judicial decisions. That is the true task of a judge, not a pointless quest for original meaning.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
