Jack Schwartz on the Weaknesses of the Mathematical Mind

I was recently rereading an essay by Karl Popper, "A Realist View of Logic, Physics, and History," published in his collection of essays Objective Knowledge: An Evolutionary Approach, because it discusses the role of reductionism in science and philosophy, a topic about which I've written a number of previous posts in connection with the microfoundations of macroeconomics.

Here is an important passage from Popper’s essay:

What I should wish to assert is (1) that criticism is a most important methodological device: and (2) that if you answer criticism by saying, “I do not like your logic: your logic may be all right for you, but I prefer a different logic, and according to my logic this criticism is not valid”, then you may undermine the method of critical discussion.

Now I should distinguish between two main uses of logic, namely (1) its use in the demonstrative sciences – that is to say, the mathematical sciences – and (2) its use in the empirical sciences.

In the demonstrative sciences logic is used in the main for proofs – for the transmission of truth – while in the empirical sciences it is almost exclusively used critically – for the retransmission of falsity. Of course, applied mathematics comes in too, which implicitly makes use of the proofs of pure mathematics, but the role of mathematics in the empirical sciences is somewhat dubious in several respects. (There exists a wonderful article by Schwartz to this effect.)

The article to which Popper refers was written by Jack Schwartz and appears in a volume edited by Ernst Nagel, Patrick Suppes, and Alfred Tarski, Logic, Methodology and Philosophy of Science. The title of the essay, "The Pernicious Influence of Mathematics on Science," caught my eye, so I tried to track it down. The essay being unavailable on the internet except behind a paywall, I bought a used copy of the volume for $6 including postage. The essay was well worth the $6 I paid to read it.

Before quoting from the essay, I would just note that Jacob T. (Jack) Schwartz was far from being innocent of mathematical and scientific knowledge. Here’s a snippet from the Wikipedia entry on Schwartz.

His research interests included the theory of linear operators, von Neumann algebras, quantum field theory, time-sharing, parallel computing, programming language design and implementation, robotics, set-theoretic approaches in computational logic, proof and program verification systems, multimedia authoring tools, experimental studies of visual perception, and multimedia and other high-level software techniques for analysis and visualization of bioinformatic data.

He authored 18 books and more than 100 papers and technical reports.

He was also the inventor of the Artspeak programming language that historically ran on mainframes and produced graphical output using a single-color graphical plotter.

He served as Chairman of the Computer Science Department (which he founded) at the Courant Institute of Mathematical Sciences, New York University, from 1969 to 1977. He also served as Chairman of the Computer Science Board of the National Research Council and was the former Chairman of the National Science Foundation Advisory Committee for Information, Robotics and Intelligent Systems. From 1986 to 1989, he was the Director of DARPA's Information Science and Technology Office (DARPA/ISTO) in Arlington, Virginia.

Here is a link to his obituary.

Though not trained as an economist, Schwartz, an autodidact, wrote two books on economic theory.

With that introduction, I quote from, and comment on, Schwartz’s essay.

Our announced subject today is the role of mathematics in the formulation of physical theories. I wish, however, to make use of the license permitted at philosophical congresses, in two regards: in the first place, to confine myself to the negative aspects of this role, leaving it to others to dwell on the amazing triumphs of the mathematical method; in the second place, to comment not only on physical science but also on social science, in which the characteristic inadequacies which I wish to discuss are more readily apparent.

Computer programmers often make a certain remark about computing machines, which may perhaps be taken as a complaint: that computing machines, with a perfect lack of discrimination, will do any foolish thing they are told to do. The reason for this lies of course in the narrow fixation of the computing machine's "intelligence" upon the basely typographical details of its own perceptions – its inability to be guided by any large context. In a psychological description of the computer intelligence, three related adjectives push themselves forward: single-mindedness, literal-mindedness, simple-mindedness. Recognizing this, we should at the same time recognize that this single-mindedness, literal-mindedness, simple-mindedness also characterizes theoretical mathematics, though to a lesser extent.

It is a continual result of the fact that science tries to deal with reality that even the most precise sciences normally work with more or less ill-understood approximations toward which the scientist must maintain an appropriate skepticism. Thus, for instance, it may come as a shock to the mathematician to learn that the Schrodinger equation for the hydrogen atom, which he is able to solve only after a considerable effort of functional analysis and special function theory, is not a literally correct description of this atom, but only an approximation to a somewhat more correct equation taking account of spin, magnetic dipole, and relativistic effects; that this corrected equation is itself only an ill-understood approximation to an infinite set of quantum field-theoretic equations; and finally that the quantum field theory, besides diverging, neglects a myriad of strange-particle interactions whose strength and form are largely unknown. The physicist, looking at the original Schrodinger equation, learns to sense in it the presence of many invisible terms, integral, integrodifferential, perhaps even more complicated types of operators, in addition to the differential terms visible, and this sense inspires an entirely appropriate disregard for the purely technical features of the equation which he sees. This very healthy self-skepticism is foreign to the mathematical approach. . . .

Schwartz, in other words, is noting that the mathematical equations that physicists use in many contexts cannot be relied upon without qualification as accurate or exact representations of reality. The mathematics that physicists and other physical scientists use to express their theories is often inexact or approximate, because reality is more complicated than our theories can capture mathematically. Part of what goes into the making of a good scientist is a kind of artistic feeling for how to adjust or interpret a mathematical model to take into account what the bare mathematics cannot describe in a manageable way.

The literal-mindedness of mathematics . . . makes it essential, if mathematics is to be appropriately used in science, that the assumptions upon which mathematics is to elaborate be correctly chosen from a larger point of view, invisible to mathematics itself. The single-mindedness of mathematics reinforces this conclusion. Mathematics is able to deal successfully only with the simplest of situations, more precisely, with a complex situation only to the extent that rare good fortune makes this complex situation hinge upon a few dominant simple factors. Beyond the well-traversed path, mathematics loses its bearing in a jungle of unnamed special functions and impenetrable combinatorial particularities. Thus, mathematical technique can only reach far if it starts from a point close to the simple essentials of a problem which has simple essentials. That form of wisdom which is the opposite of single-mindedness, the ability to keep many threads in hand, to draw for an argument from many disparate sources, is quite foreign to mathematics. The inability accounts for much of the difficulty which mathematics experiences in attempting to penetrate the social sciences. We may perhaps attempt a mathematical economics – but how difficult would be a mathematical history! Mathematics adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased. Only with difficulty does it find its way to the scientist’s ready grasp of the relative importance of many factors. Quite typically, science leaps ahead and mathematics plods behind.

Schwartz having referenced mathematical economics, let me try to restate his point more concretely than he did by referring to the Walrasian theory of general equilibrium. "Mathematics," Schwartz writes, "adjusts only with reluctance to the external, and vitally necessary, approximating of the scientists, and shudders each time a batch of small terms is cavalierly erased." The Walrasian theory is at once too general and too special to be relied on as an applied theory. It is too general because the functional forms of most of its relevant equations can't be specified, or even meaningfully restricted, except on very special simplifying assumptions; it is too special because the simplifying assumptions about the agents, the technologies, the constraints, and the price-setting mechanism are at best only approximations and, at worst, entirely divorced from reality.

Related to this deficiency of mathematics, and perhaps more productive of rueful consequence, is the simple-mindedness of mathematics – its willingness, like that of a computing machine, to elaborate upon any idea, however absurd; to dress scientific brilliancies and scientific absurdities alike in the impressive uniform of formulae and theorems. Unfortunately however, an absurdity in uniform is far more persuasive than an absurdity unclad. The very fact that a theory appears in mathematical form, that, for instance, a theory has provided the occasion for the application of a fixed-point theorem, or of a result about difference equations, somehow makes us more ready to take it seriously. And the mathematical-intellectual effort of applying the theorem fixes in us the particular point of view of the theory with which we deal, making us blind to whatever appears neither as a dependent nor as an independent parameter in its mathematical formulation. The result, perhaps most common in the social sciences, is bad theory with a mathematical passport. The present point is best established by reference to a few horrible examples. . . . I confine myself . . . to the citation of a delightful passage from Keynes’ General Theory, in which the issues before us are discussed with a characteristic wisdom and wit:

“It is the great fault of symbolic pseudomathematical methods of formalizing a system of economic analysis . . . that they expressly assume strict independence between the factors involved and lose all their cogency and authority if this hypothesis is disallowed; whereas, in ordinary discourse, where we are not blindly manipulating but know all the time what we are doing and what the words mean, we can keep ‘at the back of our heads’ the necessary reserves and qualifications and adjustments which we shall have to make later on, in a way in which we cannot keep complicated partial differentials ‘at the back’ of several pages of algebra which assume they all vanish. Too large a proportion of recent ‘mathematical’ economics are mere concoctions, as imprecise as the initial assumptions they rest on, which allow the author to lose sight of the complexities and interdependencies of the real world in a maze of pretentious and unhelpful symbols.”

Although it would have been helpful if Keynes had specifically identified the pseudomathematical methods that he had in mind, I am inclined to think that he was expressing his impatience with the Walrasian general-equilibrium approach that was characteristic of the Marshallian tradition that he carried forward even as he struggled to transcend it. Walrasian general-equilibrium analysis, he seems to be suggesting, is too far removed from reality to provide any reliable guide to macroeconomic policy-making, because the qualifications required to make general-equilibrium analysis practically relevant are simply unmanageable within the framework of general-equilibrium analysis. A different kind of analysis is required. As a Marshallian, he was less skeptical of partial-equilibrium analysis than of general-equilibrium analysis. But he also recognized that partial-equilibrium analysis could not be usefully applied in situations, e.g., analysis of an overall "market" for labor, where the usual ceteris paribus assumptions underlying the use of stable demand and supply curves as analytical tools cannot be maintained. For some reason, though, that didn't stop Keynes from trying to explain the nominal rate of interest by positing a demand curve for holding money and a fixed stock of money supplied by a central bank. But we all have our blind spots and miss obvious implications of familiar ideas that we have already encountered and, at least partially, understand.

Schwartz concludes his essay with an arresting thought that should give us pause about how we often uncritically accept probabilistic and statistical propositions as if we actually knew how they matched up with the stochastic phenomena that we are seeking to analyze. But although there is a lot to unpack in his conclusion, I am afraid someone more capable than I will have to do the unpacking.

[M]athematics, concentrating our attention, makes us blind to its own omissions – what I have already called the single-mindedness of mathematics. Typically, mathematics knows better what to do than why to do it. Probability theory is a famous example. . . . Here also, the mathematical formalism may be hiding as much as it reveals.

Stigler Confirms that Wicksteed Did Indeed Discover the Coase Theorem

The world is full of surprises, a fact with which rational-expectations theorists have not yet come to grips. Yesterday I was surprised to find that a post of mine from May 2016 was attracting lots of traffic. When published, that post had not attracted much attention, and I had more or less forgotten about it, but when I quickly went back to look at it, I recalled that I had thought well of it, because in the process of calling attention to Wicksteed's anticipation of the Coase Theorem, I thought that I had done a good job of demonstrating one of my favorite talking points: that what we think of as microeconomics (supply-demand analysis, aka partial-equilibrium analysis) requires a macrofoundation, namely that all markets but the one under analysis are in equilibrium. In particular, Wicksteed showed that to use cost as a determinant of price in the context of partial-equilibrium analysis, one must assume that the prices of everything else have already been determined, because costs don't exist independently of the prices of all other outputs. But, unfortunately, the post went pretty much unnoticed. Until yesterday.

After noticing all the traffic that an old post was suddenly receiving, I found that the source was Tyler Cowen’s Marginal Revolution blog, a link to my three-year-old post having been included in a post with five other links. I was curious to see if readers of Tyler’s blog would react to my post, so I checked the comments to his post. Most of them were directed towards the other links that Tyler included, but there were a few that mentioned mine. None of the comments really engaged with my larger point about Wicksteed; most of them focused on my claim that Wicksteed had anticipated the Coase Theorem. Here’s the most pointed comment, by Alan Gunn.

If Wicksteed didn’t mention transaction costs, he didn’t discover the Coase theorem. The importance of transaction costs and the errors economists make when they ignore them are what make Coase’s work important. The stuff about how initial assignment of rights doesn’t matter if transaction costs are zero is obvious and trivial.

A bit later I found that Scott Sumner, whose recent post on Econlib was also linked to by Tyler, had added a comment to my post that, more gently, makes precisely the opposite point from Alan Gunn's.

Very good post. Some would argue that the essence of the Coase Theorem is not that the initial distribution of property rights doesn’t matter, but rather that it doesn’t matter if there are no transactions costs. I seem to recall that that was Coase’s view.

I agree with Scott that the essential point of the Coase Theorem is that if there are zero transactions costs, the initial allocation doesn't matter. To credit Wicksteed with anticipating the Coase Theorem, you have to assume that Wicksteed understood that transactions costs had to be zero. But zero transactions costs was the default assumption. The question is then whether the observation that the final allocation is independent of the initial allocation is a real discovery even if the assumption of zero transactions costs is made only implicitly. Wicksteed obviously did make that assumption, because his result would not have followed if transactions costs had been positive. Articulating explicitly an assumption that was previously made only implicitly is important, but the substance of the argument is unchanged.
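To make the point concrete, here is a minimal numerical sketch in Python. The numbers and the factory-and-laundry setup are entirely hypothetical, chosen only to illustrate the zero-transaction-cost logic; they are not taken from Wicksteed, Coase, or Stigler.

```python
# Illustrative (made-up) numbers: the factory's smoke causes the neighboring
# laundry $100 of damage, and the factory can eliminate the smoke for $60.
DAMAGE = 100
ABATEMENT_COST = 60

def final_allocation(laundry_has_right: bool) -> str:
    """Resource allocation reached under costless (zero-transaction-cost) bargaining."""
    if laundry_has_right:
        # The factory must either abate or compensate the laundry for its damage,
        # so it abates whenever abatement is the cheaper option.
        return "abate" if ABATEMENT_COST < DAMAGE else "pollute"
    # The factory may pollute, but the laundry will pay it up to the amount of
    # the damage to stop, so a deal is struck whenever abatement is cheaper.
    return "abate" if ABATEMENT_COST < DAMAGE else "pollute"

# The same allocation results under either assignment of rights; only the
# distribution of wealth between the two parties differs.
print(final_allocation(True), final_allocation(False))  # abate abate
```

Had the abatement cost exceeded the damage, the smoke would have continued under either assignment; the initial assignment of rights determines who pays whom, not how resources are used.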

I can't comment on what Coase's view of his theorem was, but Stigler clearly did view the theorem as referring to a situation in which transactions costs are zero. And it was Stigler who attached the name "Coase Theorem" to Coase's discovery, and he clearly thought that it was a discovery, because the chapter in Stigler's autobiography Memoirs of an Unregulated Economist in which he recounts the events surrounding the discovery of the Coase Theorem is entitled "Eureka!" (exclamation point is Stigler's).

The chapter begins as follows:

Scientific discoveries are usually the product of dozens upon dozens of tentative explorations, with almost as many blind alleys followed too long. The rare idea that grows into a hypothesis, even more rarely overcomes the difficulties and contradictions it soon encounters. An Archimedes who suddenly has a marvelous idea and shouts "Eureka!" is the hero of the rarest of events. I have spent all of my professional life in the company of first-class scholars but only once have I encountered something like the sudden Archimedean revelation – as an observer. (p. 73)

After recounting the history of the Marshallian doctrine of external economies and its development by Pigou into a deviation between private and social costs, Stigler continues:

The disharmonies between private and social interests produced by external economies and diseconomies became gospel to the economics profession. . . . When, in 1960, Ronald Coase criticized Pigou’s theory rather casually, in the course of a masterly analysis of the Federal Communications Commission’s work, Chicago economists could not understand how so fine an economist as Coase could make so obvious a mistake. Since he persisted [he persisted!], we invited Coase . . . to come and give a talk on it. Some twenty economists from the University of Chicago and Ronald Coase assembled one evening at the home of Aaron Director. Ronald asked us to assume, for a time, a world without transactions costs. That seemed reasonable because economists . . .  are accustomed . . . to deal with simplified . . . “models” and problems. . . .

Ronald asked us to believe . . . [that] whatever the assignment of legal liability for damages, or whatever assignment of legal rights of ownership, the assignments would have no effect upon the way economic resources would be used! We strongly objected to this heresy. Milton Friedman did most of the talking, as usual. He also did much of the thinking, as usual. In the course of two hours of argument the vote went from twenty against and one for Coase to twenty-one for Coase. What an exhilarating event! I lamented afterward that we had not the clairvoyance to tape it (pp. 74-76)

Stigler then summarizes Coase’s argument and proceeds to tell his understanding of the proposition that he called the Coase Theorem.

This proposition, that when there are no transactions costs the assignments of legal rights have no effect upon the allocation of resources among economic enterprises, will, I hope, be reasonable and possibly even obvious once it is explained. Nevertheless there were a fair number of “refutations” published in the economic journals. I christened the proposition the “Coase Theorem” and that is how it is known today. Scientific theories are hardly ever named after their first discoverers . . . so this is a rare example of correct attribution of a priority.

Well, not so much. Coase’s real insight was to see that all economic exchange involves an exchange of rights over resources rather than over the resources themselves. But the insight that the final allocation is independent of the initial allocation was Wicksteed’s.

What’s Wrong with DSGE Models Is Not Representative Agency

The basic DSGE macroeconomic model taught to students is based on a representative agent. Many critics of modern macroeconomics and DSGE models have therefore latched on to the representative agent as the key – and disqualifying – feature of DSGE models and, by extension, of modern macroeconomics. Criticism of representative-agent models is certainly appropriate, because, as Alan Kirman admirably explained some 25 years ago, the simplification inherent in a macroeconomic model based on a representative agent renders the model entirely inappropriate and unsuitable for most of the problems that a macroeconomic model might be expected to address, like explaining why economies might suffer from aggregate fluctuations in output, employment, and the price level.

While altogether fitting and proper, criticism of the representative-agent model in macroeconomics had an unfortunate unintended consequence: it focused attention on representative agency rather than on the deeper problems with DSGE models, problems that cannot be solved by just throwing the Representative Agent under the bus.

Before explaining why representative agency is not the root problem with DSGE models, let’s take a moment or two to talk about where the idea of representative agency comes from. The idea can be traced back to F. Y. Edgeworth who, in his exposition of the ideas of W. S. Jevons – one of the three marginal revolutionaries of the 1870s – introduced two “representative particulars” to illustrate how trade could maximize the utility of each particular subject to the benchmark utility of the counterparty. That analysis of two different representative particulars, reflected in what is now called the Edgeworth Box, remains one of the outstanding achievements and pedagogical tools of economics. (See a superb account of the historical development of the Box and the many contributions to economic theory that it facilitated by Thomas Humphrey). But Edgeworth’s analysis and its derivatives always focused on the incentives of two representative agents rather than a single isolated representative agent.

Only a few years later, Alfred Marshall in his Principles of Economics, offered an analysis of how the equilibrium price for the product of a competitive industry is determined by the demand for (derived from the marginal utility accruing to consumers from increments of the product) and the supply of that product (derived from the cost of production). The concepts of the marginal cost of an individual firm as a function of quantity produced and the supply of an individual firm as a function of price not yet having been formulated, Marshall, in a kind of hand-waving exercise, introduced a hypothetical representative firm as a stand-in for the entire industry.

The completely ad hoc and artificial concept of a representative firm was not well-received by Marshall’s contemporaries, and the young Lionel Robbins, starting his long career at the London School of Economics, subjected the idea to withering criticism in a 1928 article. Even without Robbins’s criticism, the development of the basic theory of a profit-maximizing firm quickly led to the disappearance of Marshall’s concept from subsequent economics textbooks. James Hartley wrote about the short and unhappy life of Marshall’s Representative Firm in the Journal of Economic Perspectives.

One might have thought that the inauspicious career of Marshall’s Representative Firm would have discouraged modern macroeconomists from resurrecting the Representative Firm in the barely disguised form of a Representative Agent in their DSGE models, but the convenience and relative simplicity of solving a DSGE model for a single agent was too enticing to be resisted.

Therein lies the difference between the theory of the firm and macroeconomic theory. The gain in convenience from adopting the Representative Firm was radically reduced once Marshall's Cambridge students and successors, dispensing with it, provided a more rigorous, more satisfying, and more flexible exposition of the industry supply curve and the corresponding partial-equilibrium analysis than Marshall had provided with it. Offering no advantages of realism, logical coherence, analytical versatility, or heuristic intuition, the Representative Firm was unceremoniously expelled from the polite company of economists.

However, as a heuristic device for portraying certain properties of an equilibrium state — whose existence is assumed, not derived — even a single representative individual or agent proved to be a serviceable device with which to display the defining first-order conditions, the simultaneous equality of the marginal rates of substitution in consumption and production with the marginal rate of substitution at market prices. Unlike the Edgeworth Box, populated by two representative agents whose different endowments or preference maps result in mutually beneficial trade, the representative agent, even if afforded the opportunity to trade, can find no gain from engaging in it.

An excellent example of this heuristic was provided by Jack Hirshleifer in his 1970 textbook Investment, Interest, and Capital, wherein he adapted the basic Fisherian model of intertemporal consumption, production and exchange opportunities, representing the canonical Fisherian exposition in a single basic diagram. But the representative agent necessarily represents a state of no trade, because, for a single isolated agent, production and consumption must coincide, and the equilibrium price vector must have the property that the representative agent chooses not to trade at that price vector. I reproduce Hirshleifer’s diagram (Figure 4-6) in the attached chart.

Here is how Hirshleifer explained what was going on.

Figure 4-6 illustrates a technique that will be used often from now on: the representative-individual device. If one makes the assumption that all individuals have identical tastes and are identically situated with respect to endowments and productive opportunities, it follows that the individual optimum must be a microcosm of the social equilibrium. In this model the productive and consumptive solutions coincide, as in the Robinson Crusoe case. Nevertheless, market opportunities exist, as indicated by the market line M’M’ through the tangency point P* = C*. But the price reflected in the slope of M’M’ is a sustaining price, such that each individual prefers to hold the combination attained by productive transformations rather than engage in market transactions. The representative-individual device is helpful in suggesting how the equilibrium will respond to changes in exogenous data—the proviso being that such changes do not modify the distribution of wealth among individuals.

While not spelling out the limitations of the representative-individual device, Hirshleifer makes it clear that the representative-agent device is being used as an expository technique to describe, not as an analytical tool to determine, intertemporal equilibrium. The existence of intertemporal equilibrium does not depend on the assumptions necessary to allow a representative individual to serve as a stand-in for all other agents. The representative-individual is portrayed only to provide the student with a special case serving as a visual aid with which to gain an intuitive grasp of the necessary conditions characterizing an intertemporal equilibrium in production and consumption.
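In symbols (my notation, not Hirshleifer's), the sustaining-price property described in the quoted passage amounts to a tangency condition at the common production-consumption point:

```latex
\frac{u_{c_0}(c_0^*, c_1^*)}{u_{c_1}(c_0^*, c_1^*)} \;=\; 1 + r \;=\; \left|\frac{dq_1}{dq_0}\right|_{(q_0^*,\, q_1^*)},
\qquad (c_0^*, c_1^*) = (q_0^*, q_1^*),
```

where c_0 and c_1 are current and future consumption, q_0 and q_1 current and future output, and 1 + r the slope of the market line M'M'. At the sustaining rate of interest r, the marginal rate of substitution in consumption, the marginal rate of transformation in production, and the market rate of exchange between present and future goods all coincide, so the representative individual's desired net trade at market prices is exactly zero.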

But the role of the representative agent in the DSGE model is very different from that of the representative individual in Hirshleifer's exposition of the canonical Fisherian theory. In Hirshleifer's exposition, the representative individual is just a special case and a visual aid with no independent analytical importance. In contrast, the representative agent in the DSGE model is an assumption whereby an analytical solution to the model can be derived, allowing the modeler to generate quantitative results to be compared with existing time-series data, to generate forecasts of future economic conditions, and to evaluate the effects of alternative policy rules.

The prominent and dubious role of the representative agent in DSGE models provided a convenient target at which critics of DSGE models could direct their criticisms. In Congressional testimony, Robert Solow famously attacked DSGE models and used their reliance on the representative agent to make them seem, well, simply ridiculous.

Most economists are willing to believe that most individual "agents" – consumers, investors, borrowers, lenders, workers, employers – make their decisions so as to do the best that they can for themselves, given their possibilities and their information. Clearly they do not always behave in this rational way, and systematic deviations are well worth studying. But this is not a bad first approximation in many cases. The DSGE school populates its simplified economy – remember that all economics is about simplified economies just as biology is about simplified cells – with exactly one single combination worker-owner-consumer-everything-else who plans ahead carefully and lives forever. One important consequence of this "representative agent" assumption is that there are no conflicts of interest, no incompatible expectations, no deceptions.

This all-purpose decision-maker essentially runs the economy according to its own preferences. Not directly, of course: the economy has to operate through generally well-behaved markets and prices. Under pressure from skeptics and from the need to deal with actual data, DSGE modellers have worked hard to allow for various market frictions and imperfections like rigid prices and wages, asymmetries of information, time lags, and so on. This is all to the good. But the basic story always treats the whole economy as if it were like a person, trying consciously and rationally to do the best it can on behalf of the representative agent, given its circumstances. This cannot be an adequate description of a national economy, which is pretty conspicuously not pursuing a consistent goal. A thoughtful person, faced with the thought that economic policy was being pursued on this basis, might reasonably wonder what planet he or she is on.

An obvious example is that the DSGE story has no real room for unemployment of the kind we see most of the time, and especially now: unemployment that is pure waste. There are competent workers, willing to work at the prevailing wage or even a bit less, but the potential job is stymied by a market failure. The economy is unable to organize a win-win situation that is apparently there for the taking. This sort of outcome is incompatible with the notion that the economy is in rational pursuit of an intelligible goal. The only way that DSGE and related models can cope with unemployment is to make it somehow voluntary, a choice of current leisure or a desire to retain some kind of flexibility for the future or something like that. But this is exactly the sort of explanation that does not pass the smell test.

While Solow’s criticism of the representative agent was correct, he left himself open to an effective rejoinder by defenders of DSGE models who could point out that the representative agent was adopted by DSGE modelers not because it was an essential feature of the DSGE model but because it enabled DSGE modelers to simplify the task of analytically solving for an equilibrium solution. With enough time and computing power, however, DSGE modelers were able to write down models with a few heterogeneous agents (themselves representative of particular kinds of agents in the model) and then crank out an equilibrium solution for those models.

Unfortunately for Solow, V. V. Chari also testified at the same hearing, and he responded directly to Solow, denying that DSGE models necessarily entail the assumption of a representative agent and identifying numerous examples even in 2010 of DSGE models with heterogeneous agents.

What progress have we made in modern macro? State of the art models in, say, 1982, had a representative agent, no role for unemployment, no role for financial factors, no sticky prices or sticky wages, no role for crises and no role for government. What do modern macroeconomic models look like? The models have all kinds of heterogeneity in behavior and decisions. This heterogeneity arises because people's objectives differ; they differ by age, by information, by the history of their past experiences. Please look at the seminal work by Rao Aiyagari, Per Krusell and Tony Smith, Tim Kehoe and David Levine, Victor Rios Rull, Nobu Kiyotaki and John Moore. All of them . . . prominent macroeconomists at leading departments . . . much of their work is explicitly about models without representative agents. Any claim that modern macro is dominated by representative-agent models is wrong.

So on the narrow question of whether DSGE models are necessarily members of the representative-agent family, Solow was debunked by Chari. But debunking the claim that DSGE models must be representative-agent models doesn’t mean that DSGE models have the basic property that some of us at least seek in a macro-model: the capacity to explain how and why an economy may deviate from a potential full-employment time path.

Chari actually addressed the charge that DSGE models cannot explain lapses from full employment (to use Pigou’s rather anodyne terminology for depressions). Here is Chari’s response:

In terms of unemployment, the baseline model used in the analysis of labor markets in modern macroeconomics is the Mortensen-Pissarides model. The main point of this model is to focus on the dynamics of unemployment. It is specifically a model in which labor markets are beset with frictions.

Chari's response was thus to treat lapses from full employment as "frictions." To treat unemployment as the result of one or more frictions is to take a very narrow view of the potential causes of unemployment. The argument that Keynes made in the General Theory was that unemployment is a systemic failure of a market economy, which lacks an error-correction mechanism capable of returning the economy to a full-employment state, at least within a reasonable period of time.

The basic approach of DSGE is to treat the solution of the model as the optimal solution of a problem. In the representative-agent version of a DSGE model, the optimal solution is the optimal solution for a single agent, so optimality is already baked into the model. With heterogeneous agents, the solution of the model is a set of mutually consistent optimal plans, so optimality is baked into the heterogeneous-agent DSGE model as well. Sophisticated heterogeneous-agent models can incorporate various frictions and constraints that cause the solution to deviate from a hypothetical frictionless, unconstrained first-best optimum.
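To see what "treating the solution of the model as an optimal solution" means, consider the textbook representative-agent planning problem, written here in generic notation (a standard formulation offered for illustration, not a quotation from any of the models under discussion):

```latex
\max_{\{c_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t)
\quad \text{subject to} \quad c_t + k_{t+1} = f(k_t, z_t) + (1-\delta)k_t ,
```

with the associated first-order (Euler) condition

```latex
u'(c_t) \;=\; \beta\, E_t\!\left[ u'(c_{t+1}) \left( f_k(k_{t+1}, z_{t+1}) + 1 - \delta \right) \right].
```

Any time path of consumption and capital satisfying these conditions is, by construction, optimal for the representative agent, which is the sense in which optimality is baked into the model before it ever confronts the data.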

The policy message emerging from this modeling approach is that unemployment is attributable to frictions and other distortions that prevent the economy from reaching the first-best optimum that would be achieved automatically in their absence. The possibility that the optimal plans of individuals might be incompatible, resulting in a systemic breakdown — that there could be a failure to coordinate — does not even come up for discussion.

One needn’t accept Keynes’s own theoretical explanation of unemployment to find the attribution of cyclical unemployment to frictions deeply problematic. But, as I have asserted in many previous posts (e.g., here and here) a modeling approach that excludes a priori any systemic explanation of cyclical unemployment, attributing instead all cyclical unemployment to frictions or inefficient constraints on market pricing, cannot be regarded as anything but an exercise in question begging.

 

My Paper “Hawtrey and Keynes” Is Now Available on SSRN

About five or six years ago, I was invited by Robert Dimand and Harald Hagemann to contribute an article on Hawtrey for The Elgar Companion to John Maynard Keynes, which they edited. I have now posted an early (2014) version of my article on SSRN.

Here is the abstract of my article on Hawtrey and Keynes:

R. G. Hawtrey, like his younger contemporary J. M. Keynes, was a Cambridge graduate in mathematics, an Apostle, deeply influenced by the Cambridge philosopher G. E. Moore, attached, if only peripherally, to the Bloomsbury group, and largely an autodidact in economics. Both entered the British Civil Service shortly after graduation, publishing their first books on economics in 1913. Though eventually overshadowed by Keynes, Hawtrey, after publishing Currency and Credit in 1919, was in the front rank of monetary economists in the world and a major figure at the 1922 Genoa International Monetary Conference planning for a restoration of the international gold standard. This essay explores their relationship during the 1920s and 1930s, focusing on their interactions concerning the plans for restoring an international gold standard immediately after World War I, the 1925 decision to restore the convertibility of sterling at the prewar dollar parity, Hawtrey’s articulation of what became known as the Treasury view, Hawtrey’s commentary on Keynes’s Treatise on Money, including his exposition of the multiplier, Keynes’s questioning of Hawtrey after his testimony before the Macmillan Committee, their differences over the relative importance of the short-term and long-term rates of interest as instruments of monetary policy, Hawtrey’s disagreement with Keynes about the causes of the Great Depression, and finally the correspondence between Keynes and Hawtrey while Keynes was writing the General Theory, a correspondence that failed to resolve theoretical differences culminating in Hawtrey’s critical review of the General Theory and their 1937 exchange in the Economic Journal.

Mankiw’s Phillips-Curve Agonistes

The steady expansion of employment and reduction in unemployment since the recovery from the financial crisis of 2008 and the subsequent Little Depression, even as inflation remained almost continuously between 1.5 and 2% (with only a slight uptick to 3% in 2011), has led many observers to conclude that the negative correlation between inflation and unemployment posited by the Phillips Curve is no longer valid. So, almost a month ago, Greg Mankiw wrote a column in the New York Times Sunday Business Section defending the Phillips Curve as an analytical tool that ought to inform monetary policy-making by the Federal Reserve and other monetary authorities.

Mankiw starts with a brief potted history of the Phillips Curve.

The economist George Akerlof, a Nobel laureate and the husband of the former Federal Reserve chair Janet Yellen, once called the Phillips curve “probably the single most important macroeconomic relationship.” So it is worth recalling what the Phillips curve is, why it plays a central role in mainstream economics and why it has so many critics.

The story begins in 1958, when the economist A. W. Phillips published an article reporting an inverse relationship between unemployment and inflation in Britain. He reasoned that when unemployment is high, workers are easy to find, so employers hardly raise wages, if they do so at all.

But when unemployment is low, employers have trouble attracting workers, so they raise wages faster. Inflation in wages soon turns into inflation in the prices of goods and services.

Let's pause for a moment and think about that explanation. Translate it into a supply-demand framework in which the equilibrium corresponds to the intersection of a downward-sloping demand curve for labor with an upward-sloping supply curve of labor. The equilibrium is associated with some amount of unemployment inasmuch as there are always some workers transitioning from one job to another. The fewer and the more rapid the transitions, the less unemployment. The farther to the right the intersection of the demand curve and the supply curve, the more workers are employed and the fewer are transitioning between jobs in any time period.

I must note parenthetically that, as I have written recently, a supply-demand framework (aka partial equilibrium analysis) is not really the appropriate way to think about unemployment, because the equilibrium level of wages and the rates of unemployment must be analyzed, as, using different terminology, Keynes argued, in a general equilibrium, not a partial equilibrium, framework. But for ease of exposition, I use the partial equilibrium supply-demand paradigm.

Mankiw’s assertion that when unemployment is low, employers have trouble attracting workers, and therefore have to raise wages to hire, or retain, as many workers as they would like to employ, could be true. And that is the way that Mankiw wants us to focus on the relationship between wages and unemployment. Mankiw is focusing on the demand side as an explanation for low unemployment. But low unemployment could also reflect the eagerness of workers to be employed, even at low wages, so that workers quickly accept job offers rather than searching or holding out for better offers.

This is a classic issue in empirical estimates of demand. If high prices are associated with high output, does that mean that the data show that demand curves are upward-sloping, so that when suppliers raise price, customers want to buy more of their products? Obviously not. Suppliers raise their price because, at low prices, their customers want to buy more than producers want to sell. So to estimate a demand curve, there must be some way of identifying factors that cause the entire demand curve to shift.

The identification problem in estimating demand curves has been understood since the early years of the 20th century. It is incredible that economists, especially one as steeped in the history of the discipline as Mankiw, talk about the Phillips Curve as if they had never heard of the identification problem.
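One quick way to see the identification problem is to simulate it. The following sketch, in Python with made-up linear demand and supply curves and random shifts in each (nothing here is estimated from actual data), shows that a naive regression of quantity on price recovers whichever curve happens to shift less, or an uninterpretable blend of the two, rather than "the" demand curve:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitted_slope(sd_demand, sd_supply, n=100_000):
    """Slope from regressing quantity on price when both curves shift.
    Illustrative model: demand q = 10 - p + u_d, supply q = 2 + p + u_s."""
    u_d = rng.normal(0, sd_demand, n)   # shifts in the demand curve
    u_s = rng.normal(0, sd_supply, n)   # shifts in the supply curve
    p = (8 + u_d - u_s) / 2             # market-clearing price
    q = 2 + p + u_s                     # market-clearing quantity
    return np.cov(p, q)[0, 1] / np.var(p)

print(round(fitted_slope(1.0, 0.1), 2))  # demand shifts dominate: slope near +1, tracing out the supply curve
print(round(fitted_slope(0.1, 1.0), 2))  # supply shifts dominate: slope near -1, tracing out the demand curve
print(round(fitted_slope(1.0, 1.0), 2))  # both shift equally: slope near 0, a blend of the two curves
```

Only with variables that shift one curve but not the other (the identification step) can the observed price-quantity pairs be interpreted as tracing out a demand curve.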

When you estimate a demand curve without being able to identify shifts in demand, you are not estimating a demand curve, you are estimating a reduced form that combines — and fails to identify or distinguish between — both demand and supply. The Phillips Curve is a reduced form that captures both factors that affect the demand for labor and the supply of labor, though as I mentioned above, talking about the demand for labor and the supply of labor in the normal partial equilibrium sense of those terms is itself misleading and inappropriate.

What is unambiguously true, however, is that whatever else the Phillips Curve may be, it is a reduced form and not a deep structural relationship in an economy. It therefore is of little if any use in helping policy makers figure out whether to tighten or ease monetary policy.

For centuries, economists have understood that inflation is ultimately a monetary phenomenon. They noticed that when the world's economies operated under a gold standard, gold discoveries resulted in higher prices for goods and services. And when central banks in economies with fiat money created large quantities [of money] — Germany in the interwar period, Zimbabwe in 2008, or Venezuela recently — the result was hyperinflation.

But economists also noticed that monetary conditions affect economic activity. Gold discoveries often lead to booming economies, and central banks easing monetary policy usually stimulate production and employment, at least for a while.

The Phillips curve helps explain how inflation and economic activity are related. At every moment, central bankers face a trade-off. They can stimulate production and employment at the cost of higher inflation. Or they can fight inflation at the cost of slower economic growth.

The Phillips curve, a reduced form, a mere correlation revealing no deep or necessary structural relationship between inflation and unemployment, explains nothing. It merely reflects the fact that, under a certain set of conditions, monetary expansion is associated with increased output and employment and, accordingly, with reduced unemployment. And, under a certain set of conditions, monetary contraction is associated with reduced output and employment and, accordingly, with increased unemployment. We know now – and have long known – all about those relationships, without the Phillips Curve.

But under other conditions, high inflation may be associated with non-monetary factors (negative supply shocks) causing falling output and employment. And under still other conditions, low inflation may be associated with rising output and employment. There is no deep structural reason causing low unemployment to be incompatible with low inflation or causing high unemployment to be incompatible with high inflation. To suggest that the Phillips Curve is somehow a necessary relationship rather than a coincidental correlation between inflation and unemployment is a shockingly superficial reading of the evidence, betraying an embarrassing misunderstanding of elementary theory.
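A toy simulation makes the point starkly. In the sketch below (Python; the coefficients are invented for illustration, not estimated from any data), whether inflation and unemployment are negatively or positively correlated depends entirely on whether nominal-demand shocks or supply shocks happen to dominate the sample:

```python
import numpy as np

rng = np.random.default_rng(1)

def inflation_unemployment_corr(sd_demand, sd_supply, n=100_000):
    """Correlation of inflation and unemployment in a toy reduced form:
    a demand shock raises inflation and lowers unemployment, while a
    supply shock raises inflation and raises unemployment."""
    d = rng.normal(0, sd_demand, n)     # nominal-demand (monetary) shocks
    s = rng.normal(0, sd_supply, n)     # adverse supply shocks
    inflation = 2.0 + d + s
    unemployment = 5.0 - d + s
    return np.corrcoef(inflation, unemployment)[0, 1]

print(round(inflation_unemployment_corr(1.0, 0.2), 2))  # demand shocks dominate: strongly negative ("Phillips curve")
print(round(inflation_unemployment_corr(0.2, 1.0), 2))  # supply shocks dominate: strongly positive (no trade-off)
```

The same reduced form produces a "Phillips curve" in one sample and its opposite in another, which is exactly what one expects from a correlation with no structural content.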

Mankiw seems to be vaguely aware of his own confusion when he writes the following.

Today, most economists believe there is a trade-off between inflation and unemployment in the sense that actions taken by a central bank push these variables in opposite directions.

Mankiw's qualification that the trade-off between inflation and unemployment reflects the tendency of rapid monetary expansion to cause output and employment to expand (at least temporarily) and prices to rise is a tacit admission that there is no necessary trade-off between inflation and unemployment. What we refer to as a Phillips Curve is simply the tendency for changes in monetary policy to affect inflation and unemployment in opposite directions, not a necessary structural relationship between inflation and unemployment. This is not rocket science. I can't understand why Mankiw has trouble seeing that the fact that monetary policy may cause unemployment to rise when it causes inflation to fall, and vice versa, is not the same thing as a necessary inverse relationship between inflation and unemployment.

As a corollary, they also believe there must be a minimum level of unemployment that the economy can sustain without inflation rising too high. But for various reasons, that level fluctuates and is difficult to determine.

The level of unemployment that can be sustained without inflation is both unobservable and subject to change. Monetary policy can stimulate a rapid reduction in unemployment when unemployment is clearly higher than normal, and, as unemployment falls, ongoing monetary expansion carries a risk of increasing inflation. But that doesn't mean that unemployment cannot continue to fall without triggering an increase in inflation.

Mankiw concludes with the following bit of advice.

The Fed’s job is to balance the competing risks of rising unemployment and rising inflation. Striking just the right balance is never easy. The first step, however, is to recognize that the Phillips curve is always out there lurking.

That is just silly. The risk of increasing inflation is there whether or not we recognize that the Phillips curve is out there lurking. Avoiding the risk of inflation in a responsible way means using monetary policy to keep the rate of increase in nominal GDP within a reasonably narrow band, providing enough room for the normal rate of growth in output with a rate of inflation sufficient to keep nominal interest rates moderately low, but substantially above zero. You don't need a Phillips curve to figure that out.

Yield-Curve Inversion and the Agony of Central Banking

Suddenly, we have been beset with a minor panic attack about our increasingly inverted yield curve. Since fear of yield-curve inversion became a thing a little over a year ago, a lot of people have taken notice of the fact that yield-curve inversion has often presaged recessions. In June 2018, when the yield curve was on the verge of flatlining, I tried to explain the phenomenon, and I think that I provided a pretty good – though perhaps a tad verbose – explanation, laying out the basic theory behind the typical upward slope of the yield curve as well as what seems the most likely, though not the only, reason for inversion, one that explains why inversion so often is a harbinger of recession.

But in a Tweet yesterday responding to Sri Thiruvadanthai I think I framed the issue succinctly within the 280 character Twitter allotment. Here are the two tweets.

 

 

And here’s a longer version getting at the same point from my 2018 post:

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and by an increase in the demands of the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only by (a) the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) by the discretionary action of a monetary authority that can issue additional units of fiat currency.

The question that I want to address now is why the yield curve, after having been only slightly inverted or flat for the past year, has suddenly — since about the beginning of August — become sharply inverted.

Last summer, when concerns about inversion were just beginning to be discussed, the Fed, which had been signaling a desire to raise short-term rates to "normal" levels, changed signals, indicating that it would not automatically continue raising rates as it had between 2003 and 2006, but would evaluate each rate increase in light of recent data bearing on the state of the economy. So, after a further half-a-percent increase in the Fed's target rate between June and the end of 2018, the Fed held off on further increases, and in July actually cut its rate by a quarter of a percent, and even signaled a likely further quarter-of-a-percent decrease in September.

Now, to be sure, the Fed might have been well-advised not to have raised its target rate as much as it did, and to have cut its rate more steeply than it did in July. Nevertheless, it would be hard to identify any particular monetary cause for the recent steep further inversion of the yield curve. So the most likely reason for the sudden inversion is nervousness about the possibility of a trade war, which most people do not think is either good or easy to win.

After yesterday’s announcement by the administration that previously announced tariff increases on Chinese goods scheduled to take effect in September would be postponed until after the Christmas buying season, the stock market took some comfort in an apparent easing of tensions between the US and China over trade policy. But this interpretation was shot down by none other than Commerce Secretary Wilbur Ross who, before the start of trading, told CNBC that the administration’s postponement of the tariffs on China was done solely in the interest of American shoppers and not to ease tensions with China. The remark — so unnecessary and so counterproductive — immediately aroused suspicions that Ross had an ulterior motive, like, say, a short position in the S&P 500 index, in sharing it on national television.

So what's going on? Monetary policy has probably been marginally too tight for the past year, but only marginally. Unlike in other episodes of yield-curve inversion, the Fed has not been attempting to reduce the rate of inflation, and has even been giving lip service to the goal of raising the rate of inflation, so if the Fed's target rate was raised too high, it was based on an expectation that the economy was in the midst of an expansion; it was not an attempt to reduce growth. But the economy has weakened, and all signs suggest that the weakness stems from an uncertain economic environment, particularly owing to the risk that new tariffs will be imposed or existing ones raised to even higher levels, triggering retaliatory measures by China and other affected countries.

In my 2018 post I mentioned a similar, but different, kind of uncertainty that held back recovery from the 2001-02 recession.

The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership in Afghanistan, the Bush Administration was planning for a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003 created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the Administration were clarified and executed and their consequences assessed.

The Fed responded to the uncertain environment of 2002 with a series of interest rate reductions that prevented a lapse into recession.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate increasing demands for liquidity, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession.

Recently, the uncertainty caused by the imposition of tariffs and the threat of a destructive trade war seems to have discouraged firms from going forward with plans to invest and to expand output, as decision-makers prefer to wait and see how events play out before making long-term commitments that would put assets and investments at serious risk if a trade war undermines the conditions necessary for those investments to be profitable. In the interim, the desire for short-term safety, and for the flexibility to deploy assets and resources profitably once future prospects become less uncertain, leads decision-makers to take highly liquid positions that don't preclude taking profitable actions once opportunities present themselves.

However, when everyone resists making commitments, economic activity doesn't keep going as before; it gradually slows down. And so a state of heightened uncertainty eventually leads to stagnation, recession, or something worse. To mitigate that outcome, a reduction in interest rates by the central bank can prevent, or at least postpone, the onset of a recession, as the Fed succeeded in doing in 2002-03 by reducing its interest-rate target to 1%. Similar steps by the Fed may now be called for.

But there is another question that ought to be discussed. When the Fed reduced interest rates in 2002-03 because of the uncertainty created by the pending decision of the US government about whether to invade Iraq, the Fed was probably right to treat that uncertainty as exogenous, stemming from a decision in which it had no role or voice. The decision to invade or not would be made on considerations that it was not the Fed’s place to evaluate or opine upon. However, the Fed does have a responsibility for creating a stable economic environment and for eliminating avoidable uncertainty about economic conditions caused by bad policy-making. Insofar as the current uncertain economic environment is the result of deliberate economic-policy actions that increase uncertainty, reducing interest rates to cushion the uncertainty-increasing effects of imposing or raising tariffs, or of promoting a trade war, would enable those uncertainty-increasing actions to be continued.

The Fed, therefore, now faces a cruel dilemma. Should it try to mitigate, by reducing interest rates, the effects of policies that increase uncertainty, thereby acting as a perhaps unwitting enabler of those policies, or should it stand firm and refuse to cushion the effects of policies that are themselves the cause of the uncertainty whose destructive effects the Fed is being asked to mitigate? This is the sort of dilemma that Arthur Burns, in a somewhat different context, once referred to as “The Anguish of Central Banking.”

August 15, 1971: Unhappy Anniversary (Update)

[Update 8/15/2019: It seems appropriate to republish this post originally published about 40 days after I started blogging. I have made a few small changes and inserted a few comments to reflect my improved understanding of certain concepts like “sterilization” that I was uncritically accepting. I actually have learned a thing or two in the eight plus years that I’ve been blogging. I am grateful to all my readers — both those who agreed and those who disagreed — for challenging me and inspiring me to keep thinking critically. It wasn’t easy, but we did survive August 15, 1971. Let’s hope we survive August 15, 2019.]

August 15, 1971 may not exactly be a day that will live in infamy, but it is hardly a day to celebrate 40 years later.  It was the day on which one of the most cynical Presidents in American history committed one of his most cynical acts:  violating solemn promises undertaken many times previously, both before and after his election as President, Richard Nixon declared a 90-day freeze on wages and prices.  Nixon also announced the closing of the gold window at the US Treasury, severing the last shred of a link between gold and the dollar.  Interestingly, the current (August 13th, 2011) Economist (Buttonwood column) and Forbes  (Charles Kadlec op-ed) and today’s Wall Street Journal (Lewis Lehrman op-ed) mark the anniversary with critical commentaries on Nixon’s action ruefully focusing on the baleful consequences of breaking the link to gold, while barely mentioning the 90-day freeze that became the prelude to  the comprehensive wage and price controls imposed after the freeze expired.

Of the two events, the wage and price freeze and subsequent controls had by far the more adverse consequences, the closing of the gold window merely ratifying the demise of a gold standard that had long since ceased to function as it had for much of the 19th and early 20th centuries. In contrast to the final break with gold, no economic necessity or even a coherent economic argument on the merits lay behind the decision to impose a wage and price freeze, notwithstanding the ex-post rationalizations offered by Nixon’s economic advisers, including such estimable figures as Herbert Stein, Paul McCracken, and George Shultz, who surely knew better, but somehow were persuaded to fall into line behind a policy of massive, breathtaking intervention into private market transactions.

The argument for closing the gold window was that the official gold peg of $35 an ounce was probably at least 10-20% below any realistic estimate of the true market value of gold at the time, making it impossible to reestablish the old parity as an economically meaningful price without imposing an intolerable deflation on the world economy. An alternative response might have been to officially devalue the dollar to something like the market value of gold, $40-42 an ounce. But to have done so would merely have demonstrated that the official price of gold was a policy instrument subject to the whims of the US monetary authorities, undermining faith in the viability of a gold standard. In the event, an attempt to patch together the Bretton Woods System (the Smithsonian Agreement of December 1971) based on an official $38-an-ounce peg was made, but it quickly became obvious that a new monetary system based on any form of gold convertibility could no longer survive.

How did the $35-an-ounce price become unsustainable barely 25 years after the Bretton Woods System was created? The problem that emerged within a few years of its inception was that the main trading partners of the US systematically kept their own currencies undervalued in terms of the dollar, promoting their exports while sterilizing the consequent dollar inflow, allowing neither sufficient domestic inflation nor sufficient exchange-rate appreciation to eliminate the overvaluation of their currencies against the dollar. [DG 8/15/19: “sterilization” is a misleading term because it implies that persistent gold or dollar inflows just happen randomly; the persistent inflows occur only because they are induced by a persistently increased demand for reserves or an insufficient creation of cash.] After a burst of inflation during the Korean War, the Fed’s tight monetary policy and a persistently overvalued exchange rate kept US inflation low at the cost of sluggish growth and three recessions between 1953 and 1960. It was not until the Kennedy administration came into office on a pledge to get the country moving again that the Fed was pressured to loosen monetary policy, initiating the long boom of the 1960s some three years before the Kennedy tax cuts were posthumously enacted in 1964.

Monetary expansion by the Fed reduced the relative overvaluation of the dollar in terms of other currencies, but the increasing export of dollars left the $35-an-ounce peg increasingly dependent on the willingness of foreign governments to hold dollars. However, President Charles de Gaulle of France, having overcome domestic opposition to his rule, felt secure enough to assert [his conception of] French interests against the US, resuming the traditional French policy of accumulating physical gold reserves rather than mere claims on gold physically held elsewhere. By 1967 the London gold pool, a central-bank cartel acting to control the price of gold in the London gold market, was collapsing, as France withdrew from the cartel, demanding that gold be shipped to Paris from New York. In 1968, unable to hold down the market price of gold any longer, the US and other central banks let the gold price rise above the official price, but agreed to conduct official transactions among themselves at the official price of $35 an ounce. As market prices for gold, driven by US monetary expansion, inched steadily higher, the incentives for central banks to demand gold from the US at the official price became too strong to contain, so that the system was on the verge of collapse when Nixon acknowledged the inevitable and closed the gold window rather than allow depletion of US gold holdings.

Assertions that the Bretton Woods system could somehow have been saved simply ignore the economic reality that by 1971 the Bretton Woods System was broken beyond repair, or at least beyond any repair that could have been effected at a tolerable cost.

But Nixon clearly had another motivation in his August 15 announcement, less than 15 months before the next Presidential election. It was in effect the opening shot of his reelection campaign. Remembering all too well that he lost the 1960 election to John Kennedy because the Fed had not provided enough monetary stimulus to cut short the 1960-61 recession, Nixon had appointed his long-time economic adviser, Arthur Burns, to replace William McChesney Martin as chairman of the Fed in 1970. A mild tightening of monetary policy in 1969, as inflation rose above a 5% annual rate, had produced a recession in late 1969 and early 1970 without providing much relief from inflation. Burns eased policy enough to allow a mild recovery, but the economy seemed to be suffering the worst of both worlds — inflation still near 4 percent and unemployment at what then seemed an unacceptably high level of almost 6 percent. [For more on Burns and his deplorable role in all of this see this post.]

With an election looming ever closer on the horizon, Nixon in the summer of 1971 became consumed by the political imperative of speeding up the recovery.  Meanwhile a Democratic Congress, assuming that Nixon really did mean his promises never to impose wage and price controls to stop inflation, began clamoring for controls as the way to stop inflation without the pain of a recession, even authorizing the President to impose controls, a dare they never dreamed he would accept.  Arthur Burns, himself, perhaps unwittingly [I was being too kind], provided support for such a step by voicing frustration that inflation persisted in the face of a recession and high unemployment, suggesting that the old rules of economics were no longer operating as they once had.  He even offered vague support for what was then called an incomes policy, generally understood as an informal attempt to bring down inflation by announcing a target  for wage increases corresponding to productivity gains, thereby eliminating the need for businesses to raise prices to compensate for increased labor costs.  What such proposals usually ignored was the necessity for a monetary policy that would limit the growth of total spending sufficiently to limit the growth of wage incomes to the desired target. [On incomes policies and how they might work if they were properly understood see this post.]

Having been persuaded that there was no acceptable alternative to closing the gold window — from Nixon’s perspective and from that of most conventional politicians, a painfully unpleasant admission of US weakness in the face of its enemies (all this was occurring at the height of the Vietnam War and the antiwar protests) – Nixon decided that he could now combine that decision, sugar-coated with an aggressive attack on international currency speculators and a protectionist 10% duty on imports into the United States, with the even more radical measure of a wage-price freeze to be followed by a longer-lasting program to control price increases, thereby snatching the most powerful and popular economic proposal of the Democrats right from under their noses.  Meanwhile, with the inflation threat neutralized, Arthur Burns could be pressured mercilessly to increase the rate of monetary expansion, ensuring that Nixon could stand for reelection in the middle of an economic boom.

But just as Nixon’s electoral triumph fell apart because of his Watergate fiasco, his economic success fell apart when an inflationary monetary policy combined with wage-and-price controls to produce increasing dislocations, shortages and inefficiencies, gradually sapping the strength of an economic recovery fueled by excess demand rather than increasing productivity.  Because broad based, as opposed to narrowly targeted, price controls tend to be more popular before they are imposed than after (as too many expectations about favorable regulatory treatment are disappointed), the vast majority of controls were allowed to lapse when the original grant of Congressional authority to control prices expired in April 1974.

Already by the summer of 1973, shortages of gasoline and other petroleum products were becoming commonplace, and shortages of heating oil and natural gas had been widely predicted for the winter of 1973-74. But in October 1973, in the wake of the Yom Kippur War and the imposition of an Arab oil embargo against the United States and other Western countries sympathetic to Israel, the shortages turned into the first “Energy Crisis.” A Democratic Congress and the Nixon Administration sprang into action, enacting special legislation to keep controls on petroleum products of all sorts, together with emergency authority for the government to allocate products in short supply.

It still amazes me that almost all the dislocations manifested after the embargo and the associated energy crisis were attributed to excessive consumption of oil and petroleum products in general or to excessive dependence on imports, as if any of the shortages and dislocations would have occurred in the absence of price controls.  And hardly anyone realizes that price controls tend to drive the prices of whatever portion of the supply is exempt from control even higher than they would have risen in the absence of any controls.

About ten years after the first energy crisis, I published a book in which I tried to explain how all the dislocations that emerged from the Arab oil embargo and the 1978-79 crisis following the Iranian Revolution were attributable to the price controls first imposed by Richard Nixon on August 15, 1971.  But the connection between the energy crisis in all its ramifications and the Nixonian price controls unfortunately remains largely overlooked and ignored to this day.  If there is reason to reflect on what happened forty years ago on this date, it surely is for that reason and not because Nixon pulled the plug on a gold standard that had not been functioning for years.

Irving Fisher Demolishes the Loanable-Funds Theory of Interest

In some recent posts (here, here and here) I have discussed the inappropriate application of partial-equilibrium analysis (aka supply-demand analysis) when the conditions required for the ceteris paribus assumption underlying partial-equilibrium analysis are not satisfied. The two examples of inappropriate application of partial-equilibrium analysis I mentioned were: 1) drawing a supply curve of labor and a demand curve for labor to explain aggregate unemployment in the economy, and 2) drawing a supply curve of loanable funds and a demand curve for loanable funds to explain the rate of interest. In neither case can one assume that a change in the wage of labor or in the rate of interest can occur without at the same time causing the demand curve and the supply curve to shift from their original positions to new ones. Because feedback effects from a change in the wage or in the rate of interest inevitably cause the demand and supply curves to shift, the standard supply-and-demand analysis breaks down in the face of such feedback effects.
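To make the point concrete, here is a minimal toy sketch in Python. It is entirely my own stylized construction, not a model drawn from Keynes or from the posts linked above, and every functional form and parameter in it is invented for illustration. Labor demand is assumed to depend on aggregate income as well as on the wage, and aggregate income in turn depends on employment, so the demand curve shifts whenever the wage changes. Solving for the market-clearing wage while holding income fixed (the ceteris paribus exercise) gives a different answer from solving the system with the feedback operating.

```python
# Toy illustration of why partial-equilibrium supply-and-demand reasoning can
# mislead when the curves themselves shift with the price being determined.
# All functional forms and numbers are invented for illustration only.

def labor_demand(w, Y):
    # hypothetical linear demand: a higher wage reduces, and higher aggregate
    # income increases, the quantity of labor demanded
    return 100 - 40 * w + 0.5 * Y

def labor_supply(w):
    # hypothetical linear supply: a higher wage increases labor offered
    return 20 + 30 * w

def income(N):
    # stylized link from employment back to aggregate income
    return 1.2 * N

def solve_wage(Y_fixed=None, iterations=200):
    """Find the market-clearing wage by simple tatonnement.
    If Y_fixed is given, income is held constant (the ceteris paribus,
    partial-equilibrium exercise); otherwise income adjusts with employment,
    so the demand curve shifts as the wage changes."""
    w = 1.0
    for _ in range(iterations):
        Y = Y_fixed if Y_fixed is not None else income(labor_supply(w))
        excess_demand = labor_demand(w, Y) - labor_supply(w)
        w += 0.01 * excess_demand  # nudge the wage toward clearing the market
    return w

w_partial = solve_wage(Y_fixed=60.0)  # income frozen at an arbitrary level
w_general = solve_wage()              # income feedback allowed to operate

print(f"wage with income held fixed (partial equilibrium): {w_partial:.2f}")
print(f"wage with income feedback (general equilibrium):   {w_general:.2f}")
```

The two wages differ, which is the whole point: the intersection found while holding everything else constant is not, in general, an equilibrium of the full system once the feedback from the wage to income is allowed to operate.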

I pointed out that while Keynes correctly observed that demand-and-supply analysis of the labor market was inappropriate, it is puzzling that it did not occur to him that demand-and-supply analysis could not be used to explain the rate of interest.

Keynes explained the rate of interest as a measure of the liquidity premium commanded by holders of money for parting with liquidity when lending money to a borrower. That view is sometimes contrasted with Fisher’s explanation of the rate of interest as a measure of the productivity of capital in shifting output from the present to the future and of the time preference of individuals for consuming in the present rather than waiting to consume in the future. Sometimes the Fisherian theory of the rate of interest is juxtaposed with the Keynesian theory by contrasting the liquidity-preference theory with a loanable-funds theory. But that contrast between liquidity preference and loanable funds misrepresents Fisher’s view, because a loanable-funds theory is also an inappropriate misapplication of partial-equilibrium analysis when general-equilibrium analysis is required.

I recently came upon a passage from Fisher’s classic 1907 treatise, The Rate of Interest: Its Nature, Determination and Relation to Economic Phenomena, which explicitly rejects supply-demand analysis of the market for loanable funds as a useful way of explaining the rate of interest. Here is how Fisher made that fundamental point.

If a modern business man is asked what determines the rate of interest, he may usually be expected to answer, “the supply and demand of loanable money.” But “supply and demand” is a phrase which has been too often into service to cover up difficult problems. Even economists have been prone to employ it to describe economic causation which they could not unravel. It was once wittily remarked of the early writers on economic problems, “Catch a parrot and teach him to say ‘supply and demand,’ and you have an excellent economist.” Prices, wages, rent, interest, and profits were thought to be fully “explained” by this glib phrase. It is true that every ratio of exchange is due to the resultant of causes operating on the buyer and seller, and we may classify these as “demand” and supply.” But this fact does not relieve us of the necessity of examining specifically the two sets of causes, including utility in its effect on demand, and cost in its effect on supply. Consequently, when we say that the rate of interest is due to the supply and demand of “capital” or of “money” or of “loans,” we are very far from having an adequate explanation. It is true that when merchants seek to discount bills at a bank in large numbers and for large amounts, the rate of interest will tend to be low. But we must inquire for what purposes and from what causes merchants thus apply to a bank for the discount of loans and others supply the bank with the funds to be loaned. The real problem is: What causes make the demand for loans and what causes make the supply? This question is not answered by the summary “supply and demand” theory. The explanation is not simply that those who have little capital demand them. In fact, the contrary is often the case. The depositors in savings banks are the lenders, and they are usually poor, whereas those to whom the savings bank in turn lends the funds are relatively rich. (pp. 6-7)

The Mendacity of Yoram Hazony, Virtue Signaler

Yoram Hazony, an American-educated Israeli philosopher and political operator, and a former assistant to Benjamin Netanyahu, has become a rising star of the American Right. The week before last, Hazony made his media debut at the National Conservatism Conference in Washington DC, a conference inspired by his book The Virtue of Nationalism. Sponsored by the shadowy Edmund Burke Foundation, the Conference on “National Conservatism” (a title either remarkably tone-deaf or an in-your-face provocation echoing another “national ism” ideological movement) featured a keynote address by Fox News personality and provocateur par excellence Tucker Carlson, along with various other right-wing notables of varying degrees of respectability, though self-avowed white nationalists were kept at a discreet distance, a distance sufficient to elicit resentful comments and nasty insinuations about Hazony’s origins and loyalties.

I had not planned to read Hazony’s book, having read enough of his articles to know that Hazony’s would not be a book to read for either pleasure or edification. But sometimes duty calls, so I bought Hazony’s book on Amazon at half price. I have now read the Introduction and the first three chapters. I plan to continue reading till the end, but I thought that I would write down some thoughts as I go along. So consider yourself warned: this may not be my last post about Hazony.

Hazony calls his Introduction “A Return to Nationalism;” it is not a good beginning.

Politics in Britain and America have taken a turn toward nationalism. This has been troubling to many, especially in educated circles, where global integration has long been viewed as a requirement of sound policy and moral decency. From this perspective, Britain’s vote to leave the European Union and the “America First” rhetoric coming out of Washington seem to herald a reversion to a more primitive stage in history, when war-mongering and racism were voiced openly and permitted to set the political agenda of nations. . . .

But nationalism was not always understood to be the evil that current public discourse suggests. . . . Progressives regarded Woodrow Wilson’s Fourteen Points and the Atlantic Charter of Franklin Roosevelt and Winston Churchill as beacons of hope for mankind – and this precisely because they were considered expressions of nationalism, promising national independence and self-determination to enslaved peoples around the world. (pp. 1-2)

Ahem, Hazony cleverly, though not truthfully, appropriates Wilson, FDR and Churchill to the cause of nationalism. It was a clever move to try to disarm opposition to his brief for nationalism by enlisting Wilson, FDR and Churchill on his side, but it was not a very smart one, being so obviously contradicted by well-known facts. Merely because Wilson, FDR, and Churchill all supported, with varying degrees of consistency and sincerity, the right of self-determination of national ethnic communities that had never, or not for a long time, enjoyed sovereign control over the territories in which they dwelled does not mean that they did not also favor international cooperation and supra-national institutions.

For example, points 3 and 4 of Wilson’s Fourteen Points were the following:

The removal, so far as possible, of all economic barriers and the establishment of an equality of trade conditions among all the nations consenting to the peace and associating themselves for its maintenance.

Adequate guarantees given and taken that national armaments will be reduced to the lowest point consistent with domestic safety.

And here is point 14:

A general association of nations must be formed under specific covenants for the purpose of affording mutual guarantees of political independence and territorial integrity to great and small states alike.

That association was of course realized as the League of Nations, which Wilson strove mightily to create, though he failed to convince the United States Senate to ratify the treaty whereby the US would have joined the League.

I don’t know about you, but to me that sounds awfully globalist.

Now what about The Atlantic Charter?

While it supported the right of self-determination of all peoples, it also called for the lowering of trade barriers and for global economic cooperation. Moreover, Churchill, far from endorsing the unqualified right of all peoples to self-determination, flatly rejected the idea that the right of self-determination extended to British India.

But besides withholding the right of self-determination from British colonial possessions and presumably those of other European powers, Churchill, in a famous speech, endorsed the idea of a United States of Europe. Now Churchill did not necessarily envision a federal union along the lines of the European Union as now constituted, but he obviously did not reject on principle the idea of some form of supra-national governance.

We must build a kind of United States of Europe. In this way only will hundreds of millions of toilers be able to regain the simple joys and hopes which make life worth living.

So it is simply a fabrication and a misrepresentation to suggest that nationalism has ever been regarded as anything like a universal principle of political action, governance or justice. It is one of many principles, all of which have some weight, but must be balanced against, and reconciled with, other principles of justice, policy and expediency.

Going from bad to worse, Hazony continues,

Conservatives from Teddy Roosevelt to Dwight Eisenhower likewise spoke of nationalism as a positive good. (Id.)

Where to begin? Hazony, who is not averse to footnoting (216 footnotes altogether, almost one per page, often providing copious references to sources and scholarly literature), offers not one documentary or secondary source for this assertion. To be sure, Teddy Roosevelt and Dwight Eisenhower were Republicans. But Roosevelt differed from most Republicans of his time, gaining the Presidency only because McKinley wanted to marginalize him by choosing him as a running mate at a time when no Vice-President since Van Buren had succeeded to the Presidency except upon the death of the incumbent President.

Eisenhower had been a non-political military figure with no party affiliation until his candidacy for the Republican Presidential nomination, as an alternative to the preferred conservative choice, Robert Taft. Eisenhower did not self-identify as a conservative, preferring to describe himself as a “modern Republican” to the disgust of conservatives like Barry Goldwater, whose best-selling book The Conscience of a Conservative was a sustained attack on Eisenhower’s refusal even to try to roll back the New Deal.

Moreover, when TR coined the term “New Nationalism” in a famous speech he gave in 1910, he was already breaking with his chosen successor, William Howard Taft, by whom TR felt betrayed for trying to accommodate the conservative Republicans TR so detested; TR went on to challenge Taft for the 1912 Republican Presidential nomination. Failing to win the nomination, TR ran as the candidate of the Progressive Party, splitting the Republican party and thereby ensuring the election of the progressive, though racist, Woodrow Wilson. Nor was that the end of it. Roosevelt was himself an imperialist, who had supported the war against Spain and the annexation of the Philippines, and an early and militant proponent of US entry into World War I against Germany on the side of Britain and France. And, after the war, Roosevelt supported US entry into the League of Nations. These are not obscure historical facts, but Hazony, despite his Princeton undergraduate degree and doctorate in philosophy from Rutgers, shows no awareness of them.

Hazony seems equally unaware that, in the American context, nationalism had an entirely different meaning from its nineteenth-century European meaning, which was the right of national ethnic populations, defined mainly by their common language, to form sovereign political units in place of the multi-ethnic, largely undemocratic kingdoms and empires by which they were ruled. In America, nationalism was distinguished from sectionalism, expressing the idea that the United States had become an organic unit unto itself, not merely an association of separate and distinct states. This idea was emphasized by Hamilton and the Federalists, and later by the Whigs, against the states’-rights position of the Jeffersonian Democrats, who resisted claims of national and federal primacy. The classic expression of the uniquely American national sensibility was provided by Lincoln in his Gettysburg Address.

Fourscore and seven years ago our fathers brought forth on this continent, a new nation, conceived in Liberty, and dedicated to the proposition that all men are created equal.

Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived and so dedicated, can long endure. We are met on a great battle-field of that war. We have come to dedicate a portion of that field, as a final resting place for those who here gave their lives that that nation might live.

Lincoln offered a conception of nationhood entirely different from that which inspired demands for the right of self-determination by European national ethnic and linguistic communities. If the notion of American exceptionalism is to have any clear meaning, it can only be in the context of Lincoln’s description of the origin and meaning of the American nationality.

After his clearly fraudulent appropriation of Theodore and Franklin Roosevelt, Winston Churchill and Dwight Eisenhower to the Nationalist Conservative cause, Hazony seizes upon Ronald Reagan and Margaret Thatcher. “In their day,” Hazony assures us, “Ronald Reagan and Margaret Thatcher were welcomed by conservatives for the ‘new nationalism’ they brought to political life.” For good measure, Hazony also adds David Ben-Gurion and Mahatma Gandhi to his nationalist pantheon, though, unaccountably, he omits any mention of their enthusiastic embrace by conservatives.

Hazony favors his readers with a single footnote at the end of this remarkable and fantastical paragraph. Never mind that “new nationalism” is a term peculiarly associated with Teddy Roosevelt, not with Reagan, who, to my knowledge, never uttered the phrase; the primary source cited by Hazony doesn’t even refer to Reagan in the same context as “new nationalism.” Here is the text of that footnote.

On Reagan’s “new nationalism,” see Norman Podhoretz, “The New American Majority,” Commentary (January 1981); Irving Kristol, “The Emergence of Two Republican Parties,” Reflections of a Neo-Conservative (New York: Basic Books, 1983), 111. (p. 237)

I am unable to find the Kristol text on the internet, but I did find Podhoretz’s article on the Commentary website. I will quote the entire paragraph in which the words “new nationalism” make their only appearance (it is also the only appearance of “nationalism” in the article). But before reproducing the paragraph, I will register my astonishment at the audacity of Hazony in invoking the two godfathers of neo-conservatism as validators of the spurious claim he makes on Reagan’s behalf to posthumous recognition as a National Conservative hero, inasmuch as Hazony goes out of his way, as we shall see presently, to cast neo-conservatism into the Gehenna of imperialistic liberalism. But first, let us consider — and marvel at — Podhoretz’s discussion of the “new nationalism.”

In my opinion, because of Chappaquiddick alone, Edward Kennedy could not have become President of the United States in 1980. Yet even if Chappaquiddick had not been a factor, Edward Kennedy would still not have been a viable candidate — not for the Democratic nomination and certainly not for the Presidency in the general election. But if this is so, why did so many Democrats (over 50 percent in some of the early polls taken before he announced) declare their support for him? Here again it is impossible to say with complete assurance. But given the way the votes were subsequently cast in 1980, I think it is a reasonable guess that in those early days many people who had never paid close attention to him took Kennedy for the same kind of political figure his brother John had been. We know from all the survey data that the political mood had been shifting for some years in a consistent direction — away from the self-doubts and self-hatreds and the neo-isolationism of the immediate post-Vietnam period and toward what some of us have called a new nationalism. In the minds of many people caught up in the new nationalist spirit, John F. Kennedy stood for a powerful America, and in expressing enthusiasm for Edward Kennedy, they were in all probability identifying him with his older brother.

This is just an astoundingly brazen misrepresentation by Hazony in hypocritically misappropriating Reagan, to whose memory most Republicans and conservatives feel some lingering sentimental attachment, even as they discard and disavow many of his most characteristic political principles.

The extent to which Hazony repudiates the neo-conservative world view that was a major pillar of the Reagan Presidency becomes clear in a long paragraph in which Hazony sets up his deeply misleading dichotomy between the virtuous nationalism he espouses and the iniquitous liberal imperialism he excoriates, presenting them as the only two possible choices for organizing our political institutions.

This debate between nationalism and imperialism became acutely relevant again with the fall of the Berlin Wall in 1989. At that time, the struggle against Communism ended, and the minds of Western leaders became preoccupied with two great imperialist projects: the European Union, which has progressively relieved member nations of many of the powers usually associated with political independence; and the project of establishing an American “world order,” in which nations that do not abide by international law will be coerced into doing so principally by means of American military might. These [are] imperialist projects, even though their proponents do not like to call them that, for two reasons: First, their purpose is to remove decision-making from the hands of independent national governments and place it in the hands of international governments or bodies. And second, as you can immediately see from the literature produced by these individuals and institutions supporting these endeavors, they are consciously part of an imperialist political tradition, drawing their historical inspiration from the Roman Empire, the Austro-Hungarian Empire, and the British Empire. For example, Charles Krauthammer’s argument for American “Universal Dominion,” written at the dawn of the post-Cold War period, calls for America to create a “super-sovereign,” which will preside over the permanent “depreciation . . . of the notion of sovereignty” for all nations on earth. Krauthammer adopts the Latin term pax Americana to describe this vision, invoking the image of the United States as the new Rome: Just as the Roman Empire supposedly established a pax Romana . . . that obtained security and quiet for all of Europe, so America would now provide security and quiet for the entire world. (pp. 3-4)

I do not defend Krauthammer’s view of pax Americana or his support for invading Iraq in 2003. But the war in Iraq was largely instigated by a small group of right-wing ideologists with whom Krauthammer and other neo-conservatives like William Kristol and Robert Kagan were aligned. In the wake of September 11, 2001, they leveraged fear of another attack into a quixotic, poorly-thought-out, and incompetently executed military adventure in Iraq.

That invasion was not, as Hazony falsely suggests, the inevitable result of liberal imperialism (as if liberalism and imperialism were cognate ideas). Moreover, it is deeply dishonest for Hazony to single out Krauthammer et al. for responsibility for that disaster, when Hazony’s mentor and sponsor, Benjamin Netanyahu, was a major supporter and outspoken advocate for the invasion of Iraq.

There is much more to be said about Hazony’s bad faith, but I have already said enough for one post.

Dr. Shelton Remains Outspoken: She Should Have Known Better

I started blogging in July 2011, and in one of my first blogposts I discussed an article in the now defunct Weekly Standard by Dr. Judy Shelton entitled “Gold Standard or Bust.” I wrote then:

I don’t know, and have never met Dr. Shelton, but she has been a frequent op-ed contributor to the Wall Street Journal and various other publications of a like ideological orientation for 20 years or more, invariably advocating a return to the gold standard.  In 1994, she published a book Money Meltdown touting the gold standard as a cure for all our monetary ills.

I was tempted to provide a line-by-line commentary on Dr. Shelton’s Weekly Standard piece, but it would be tedious and churlish to dwell excessively on her deficiencies as a wordsmith or lapses from lucidity.

So I was not very impressed by Dr. Shelton then. I have had occasion to write about her again a few times since, and I cannot report that I have detected any improvement in the lucidity of her thought or the clarity of her exposition.

Aside from, or perhaps owing to, her infatuation with the gold standard, Dr. Shelton seems to have developed a deep aversion to what is commonly, and usually misleadingly, known as currency manipulation. Using her modest entrepreneurial skills as a monetary-policy pundit, Dr. Shelton has tried to use the specter of currency manipulation as a talking point for gold-standard advocacy. So, in 2017 Dr. Shelton wrote an op-ed about currency manipulation for the Wall Street Journal that was so woefully uninformed and unintelligible that I felt obligated to write a blogpost just for her, a tutorial on the ABCs of currency manipulation, as I called it then. Here’s an excerpt from my tutorial:

[i]t was no surprise to see in Tuesday’s Wall Street Journal that monetary-policy entrepreneur Dr. Judy Shelton has written another one of her screeds promoting the gold standard, in which, showing no awareness of the necessary conditions for currency manipulation, she assures us that a) currency manipulation is a real problem and b) that restoring the gold standard would solve it.

Certainly the rules regarding international exchange-rate arrangements are not working. Monetary integrity was the key to making Bretton Woods institutions work when they were created after World War II to prevent future breakdowns in world order due to trade. The international monetary system, devised in 1944, was based on fixed exchange rates linked to a gold-convertible dollar.

No such system exists today. And no real leader can aspire to champion both the logic and the morality of free trade without confronting the practice that undermines both: currency manipulation.

Ahem, pray tell, which rules relating to exchange-rate arrangements does Dr. Shelton believe are not working? She doesn’t cite any. And, what, on earth does “monetary integrity” even mean, and what does that high-minded, but totally amorphous, concept have to do with the rules of exchange-rate arrangements that aren’t working?

Dr. Shelton mentions “monetary integrity” in the context of the Bretton Woods system, a system based — well, sort of — on fixed exchange rates, forgetting, or choosing not, to acknowledge that, under the Bretton Woods system, exchange rates were also unilaterally adjustable by participating countries. Not only were they adjustable, but currency devaluations were implemented on numerous occasions as a strategy for export promotion, the most notorious example being Britain’s 30% devaluation of sterling in 1949, just five years after the Bretton Woods agreement had been signed. Indeed, many other countries, including West Germany, Italy, and Japan, also had chronically undervalued currencies under the Bretton Woods system, as did France after it rejoined the gold standard in 1926 at a devalued rate deliberately chosen to ensure that its export industries would enjoy a competitive advantage.

The key point to keep in mind is that for a country to gain a competitive advantage by lowering its exchange rate, it has to prevent the automatic tendency of international price arbitrage and corresponding flows of money to eliminate competitive advantages arising from movements in exchange rates. If a depreciated exchange rate gives rise to an export surplus, a corresponding inflow of foreign funds to finance the export surplus will eventually either drive the exchange rate back toward its old level, thereby reducing or eliminating the initial depreciation, or, if the lower rate is maintained, the cash inflow will accumulate in reserve holdings of the central bank. Unless the central bank is willing to accept a continuing accumulation of foreign-exchange reserves, the increased domestic demand and monetary expansion associated with the export surplus will lead to a corresponding rise in domestic prices, wages and incomes, thereby reducing or eliminating the competitive advantage created by the depressed exchange rate. Thus, unless the central bank is willing to accumulate foreign-exchange reserves without limit, or can create an increased demand by private banks and the public to hold additional cash, thereby creating a chronic excess demand for money that can be satisfied only by a continuing export surplus, a permanently reduced foreign-exchange rate creates only a transitory competitive advantage.
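The mechanism described in the preceding paragraph can be illustrated with a small, deliberately crude simulation. This is my own toy construction, not anything from the original tutorial, and the numbers are arbitrary: a central bank pegs its currency 20 percent below its purchasing-power-parity level, and the sketch compares what happens when the resulting export-surplus inflow is allowed to expand the domestic money stock with what happens when the central bank keeps the inflow locked up as reserves.

```python
# Toy sketch of the arbitrage mechanism described above: an undervalued peg
# produces an export surplus and a money inflow; if the inflow circulates,
# domestic prices rise and the real undervaluation erodes, but if the central
# bank keeps absorbing the inflow as reserves, the undervaluation persists
# only at the cost of ever-growing reserve holdings. Parameters are arbitrary.

def simulate(absorb_inflow, periods=40):
    peg = 0.8          # nominal rate pegged 20% below the "fair" value of 1.0
    p_domestic = 1.0   # domestic price level
    p_foreign = 1.0    # foreign price level, held constant
    money = 100.0      # domestic money stock
    reserves = 0.0     # central bank's foreign-exchange reserves
    real_rate = peg
    for _ in range(periods):
        # a real rate below 1.0 means domestic goods are artificially cheap
        real_rate = peg * p_domestic / p_foreign
        # export surplus (and money inflow) proportional to the undervaluation
        surplus = 50.0 * max(0.0, 1.0 - real_rate)
        reserves += surplus
        if not absorb_inflow:
            # the inflow becomes domestic money and prices rise roughly in step
            money += surplus
            p_domestic *= 1 + surplus / money
    return real_rate, reserves

for absorb_inflow in (False, True):
    rr, res = simulate(absorb_inflow)
    label = "inflow absorbed as reserves" if absorb_inflow else "inflow allowed to circulate"
    print(f"{label}: real exchange rate after 40 periods = {rr:.2f}, reserves = {res:.0f}")
```

In the first case the real exchange rate drifts back toward parity and the surplus peters out; in the second the real rate stays at 0.8 and reserves pile up without limit, which is just the verbal argument above restated in arithmetic.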

I don’t say that currency manipulation is not possible. It is not only possible, but we know that currency manipulation has been practiced. But currency manipulation can occur under a fixed-exchange rate regime as well as under flexible exchange-rate regimes, as demonstrated by the conduct of the Bank of France from 1926 to 1935 while it was operating under a gold standard. And the most egregious recent example of currency manipulation was undertaken by the Chinese central bank when it effectively pegged the yuan to the dollar at a fixed rate. Keeping its exchange rate fixed against the dollar was precisely the offense that the currency-manipulation police accused the Chinese of committing.

I leave it to interested readers to go back and finish the rest of my tutorial for Dr. Shelton. And if you read carefully and attentively, you are likely to understand the concept of currency manipulation a lot more clearly than when you started.

Alas, it’s obvious that Dr. Shelton has either not read or not understood the tutorial I wrote for her, because, in her latest pronouncement on the subject she covers substantially the same ground as she did two years ago, with no sign of increased comprehension of the subject on which she expounds with such misplaced self-assurance. Here are some samples of Dr. Shelton’s conceptual confusion and historical ignorance.

History can be especially informative when it comes to evaluating the relationship between optimal economic performance and monetary regimes. In the 1930s, for example, the “beggar thy neighbor” tactic of devaluing currencies against gold to gain a trade export advantage hampered a global economic recovery.

Beggar-thy-neighbor policies were indeed adopted by the United States, but they were adopted first in 1922 (the Fordney-McCumber Act) and again in 1930 (the Smoot-Hawley Act), when the US was on the gold standard with the value of the dollar pegged at $20.67 an ounce of gold. The Great Depression started in late 1929, but the stock market crash of 1929 may have been in part precipitated by fears that the Smoot-Hawley Act would be passed by Congress and signed into law by President Hoover.

At any rate, exchange rates among most major countries were pegged to either gold or the dollar until September 1931 when Britain suspended the convertibility of the pound into gold. The Great Depression was the result of a rapid deflation caused by gold accumulation by central banks as they rejoined the gold standard that had been almost universally suspended during World War I. Countries that remained on the gold standard during the Great Depression were condemned to suffer deflation as gold became ever more valuable in real terms, so that currency depreciation against gold was the only pathway to recovery. Thus, once convertibility was suspended and the pound allowed to depreciate, the British economy stopped contracting and began a modest recovery with slowly expanding output and employment.

The United States, however, kept the dollar pegged to its $20.67-an-ounce parity with gold until April 1933, when FDR saved the American economy by suspending convertibility and commencing a policy of deliberate reflation (i.e., inflation to restore the 1926 price level). An unprecedented expansion of output, employment and income accompanied the rise in prices following the suspension of the gold standard. Currency depreciation was the key to recovery from, not the cause of, depression.

Having exposed her ignorance of the causes of the Great Depression, Dr. Shelton then begins a descent into her confusion about the subject of currency manipulation, about which I had tried to tutor her, evidently without success.

The absence of rules aimed at maintaining a level monetary playing field invites currency manipulation that could spark a backlash against the concept of free trade. Countries engaged in competitive depreciation undermine the principles of genuine competition, and those that have sought to participate in good faith in the global marketplace are unfairly penalized by the monetary sleight of hand executed through central banks.

Currency manipulation is possible only under specific conditions. A depreciating currency is not normally a manipulated currency. Currencies fluctuate in relative value for many different reasons, but if prices adjust in rough proportion to the change in exchange rates, the competitive positions of the countries are only temporarily affected by the change in exchange rates. For a country to gain a sustained advantage for its export and import-competing industries by depreciating its exchange rate, it must adopt a monetary policy that consistently provides less cash than the public demands to satisfy its liquidity needs, forcing the public to obtain the desired cash balances through a balance-of-payments surplus and an inflow of foreign-exchange reserves into the country’s central bank or treasury.

U.S. leadership is necessary to address this fundamental violation of free-trade practices and its distortionary impact on free-market outcomes. When the United States’ trading partners engage in currency manipulation, it is not competing — it’s cheating.

That is why it is vital to weigh the implications of U.S. monetary policy on the dollar’s exchange-rate value against other currencies. Trade and financial flows can be substantially altered by speculative market forces responding to the public comments of officials at the helm of the European Central Bank, the Bank of Japan or the People’s Bank of China — with calls for “additional stimulus” alerting currency players to impending devaluation policies.

Dr. Shelton here reveals a comprehensive misunderstanding of the difference between a monetary policy that aims to stimulate economic activity in general, by raising the price level or the rate of inflation to stimulate expenditure, and a policy of monetary restraint that aims to raise the prices of domestic export and import-competing products relative to the prices of domestic non-tradable goods and services, e.g., new homes and apartments. It is only the latter combination of tight monetary policy and exchange-rate intervention to depreciate a currency in foreign-exchange markets that qualifies as currency manipulation.

And, under that understanding, it is obvious that currency manipulation is possible under a fixed-exchange-rate system, as France practiced it in the 1920s and 1930s, and as most European countries and Japan did in the 1950s and early 1960s under the Bretton Woods system so well loved by Dr. Shelton.

In the 1950s and early 1960s, the US dollar was chronically overvalued. The situation was not remedied until the 1960s, under the Kennedy administration, when consistently loose monetary policy by the Fed made currency manipulation so costly for the Germans and Japanese that they revalued their currencies upward to avoid the inflationary consequences of US monetary expansion.

And then, in a final flourish, Dr. Shelton puts her ignorance of what happened in the Great Depression on public display with the following observation.

When currencies shift downward against the dollar, it makes U.S. exports more expensive for consumers in other nations. It also discounts the cost of imported goods compared with domestic U.S. products. Downshifting currencies against the dollar has the same punishing impact as a tariff. That is why, as in the 1930s during the Great Depression, currency devaluation prompts retaliatory tariffs.

The retaliatory tariffs were imposed in response to the US tariffs that either preceded the Great Depression or were imposed at its outset in 1930. The devaluations against gold promoted economic recovery and were accompanied by a general reduction in tariff levels under FDR after the US devalued the dollar against gold and the remaining gold-standard currencies. Whereof she knows nothing, thereof Dr. Shelton would do better to remain silent.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
