
Correct Foresight, Perfect Foresight, and Intertemporal Equilibrium

In my previous post, I discussed Hayek’s path-breaking insight into the meaning of intertemporal equilibrium. His breakthrough was to see that an equilibrium can be understood not as a stationary state in which nothing changes, but as a state in which decentralized plans are both optimal from the point of view of the individuals formulating the plans and mutually consistent, so that the individually optimal plans, at least potentially, could be simultaneously executed. In the simple one-period model, the plans of individuals extending over a single-period time horizon are constrained by the necessary equality for each agent between the value of all planned purchases and the value of all planned sales in that period. A single-period or stationary equilibrium, if it exists, is characterized by a set of prices such that, under the optimal plans corresponding to that set of prices, the total amount demanded of each product equals the total amount supplied of each product. Thus, an equilibrium price vector has the property that every individual is choosing optimally based on the choice criteria and the constraints governing the decisions of each individual and that those individually optimal choices are mutually consistent, that mutual consistency being manifested in the equality of the total amount demanded and the total amount supplied of each product in that single period.
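
Stated schematically (my own shorthand, not notation appearing in the original post): if p is the vector of prices for the single period and z_i(p) denotes agent i’s optimal planned net purchases at those prices, the two conditions are

\[ p \cdot z_i(p) = 0 \quad \text{for every agent } i, \qquad \sum_i z_i(p^*) = 0 \quad \text{in every market,} \]

the first expressing the budget constraint on each individual plan, the second the mutual consistency of the plans at the equilibrium price vector p*.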

The problem posed by the concept of intertemporal equilibrium is how to generalize the single-period notion of an equilibrium as a vector of all the observed prices of goods and services actually traded in that single period into a multi-period concept in which the prices on which optimal choices depend include both the actual prices of goods traded in the current period and the prices of goods and services that agents plan to buy or sell only in some future time period. In an intertemporal context, the prices on the basis of which optimal plans are chosen cannot be just those prices at which transactions are being executed in the current period; the relevant set of prices must also include those prices at which transactions now being planned will be executed in future periods. Because even choices about transactions today may depend on the prices at which future transactions will take place, future prices can affect not only future demands and supplies but also current demands and supplies.

But because prices in future periods are typically not observable by individuals in the present, it is expected, not observed, future prices on the basis of which individual agents are making the optimal choices reflected in their intertemporal plans. And insofar as optimal plans depend on expected future prices, those optimal plans can be mutually consistent only if they are based on the same expected future prices, because if choices are based on different expected future prices, then it is not possible for all expectations to be realized. The expectations of at least one agent, and probably of many agents, will be disappointed, implying that the plans of at least one, and probably of many, agents will not be optimal and will have to be revised.

The recognition that the mutual consistency of optimal plans requires individuals to accurately foresee the future prices upon which their optimal choices are based suggested that individual agents must be endowed with remarkable capacities to foresee the future. To assume that all individual agents would be endowed with the extraordinary ability to foresee correctly all the future prices relevant to their optimal choices about their intertemporal plans seemed an exceedingly unrealistic assumption on which to premise an economic model.

This dismissive attitude toward the concept of intertemporal equilibrium and the seemingly related assumption of “perfect foresight” necessary for an intertemporal equilibrium to exist was stridently expressed by Oskar Morgenstern in his famous 1935 article “Perfect Foresight and Economic Equilibrium.”

The impossibly high claims which are attributed to the intellectual efficiency of the economic subject immediately indicate that there are included in this equilibrium system not ordinary men, but rather, at least to one another, exactly equal demi-gods, in case the claim of complete foresight is fulfilled. If this is the case, there is, of course, nothing more to be done. If “full” or “perfect” foresight is to provide the basis of the theory of equilibrium in the strictly specified sense, and in the meaning obviously intended by the economic authors, then, a completely meaningless assumption is being considered. If limitations are introduced in such a way that the perfection of foresight is not reached, then these limitations are to be stated very precisely. They would have to be so narrowly drawn that the fundamental aim of producing ostensibly full rationality of the system by means of high, de facto unlimited, foresight, would be lost. For the theoretical economist, there is no way out of this dilemma. In this discussion, “full” and “perfect” foresight are not only used synonymously, but both are employed, moreover, in the essentially more exact sense of limitlessness. This expression would have to be preferred because with the words “perfect” or “imperfect”, there arise superficial valuations which play no role here at all.

Morgenstern then went on to make an even more powerful attack on the idea of perfect foresight: that the idea is itself self-contradictory. Interestingly, he did so by positing an example that would figure in Morgenstern’s later development of game theory with his collaborator John von Neumann (and, as we now know, with his research assistant Abraham Wald, who in fact was his mathematical guide and mentor, though uncredited as a co-author of The Theory of Games and Economic Behavior).

Sherlock Holmes, pursued by his opponent, Moriarity, leaves London for Dover. The train stops at a station on the way, and he alights there rather than traveling on to Dover. He has seen Moriarity at the railway station, recognizes that he is very clever and expects that Moriarity will take a faster special train in order to catch him in Dover. Holmes’ anticipation turns out to be correct. But what if Moriarity had been still more clever, had estimated Holmes’ mental abilities better and had foreseen his actions accordingly? Then, obviously, he would have traveled to the intermediate station. Holmes, again, would have had to calculate that, and he himself would have decided to go on to Dover. Whereupon, Moriarity would again have “reacted” differently. Because of so much thinking they might not have been able to act at all or the intellectually weaker of the two would have surrendered to the other in the Victoria Station, since the whole flight would have become unnecessary. Examples of this kind can be drawn from everywhere. However, chess, strategy, etc. presuppose expert knowledge, which encumbers the example unnecessarily.

One may be easily convinced that here lies an insoluble paradox. And the situation is not improved, but, rather, greatly aggravated if we assume that more than two individuals-as, for example, is the case with exchange-are brought together into a position, which would correspond to the one brought forward here. Always, there is exhibited an endless chain of reciprocally conjectural reactions and counter-reactions. This chain can never be broken by an act of knowledge but always only through an arbitrary act-a resolution. This resolution, again, would have to be foreseen by the two or more persons concerned. The paradox still remains no matter how one attempts to twist or turn things around. Unlimited foresight and economic equilibrium are thus irreconcilable with one another. But can equilibrium really take place with a faulty, heterogeneous foresight, however, it may be disposed? This is the question which arises at once when an answer is sought. One can even say this: has foresight been truly introduced at all into the consideration of equilibrium, or, rather, does not the theorem of equilibrium generally stand in no proven connection with the assumptions about foresight, so that a false assumption is being considered?

As Carlo Zappia has shown, it was probably Morgenstern’s attack on the notion of intertemporal equilibrium and perfect foresight that led Hayek to his classic restatement of the idea in his 1937 paper “Economics and Knowledge.” The point that Hayek clarified in his 1937 version, but had not been clear in his earlier expositions of the concept, is that correct foresight is not an assumption from which the existence of an intertemporal equilibrium can be causally deduced; there is no assertion that a state of equilibrium is the result of correct foresight. Rather, correct foresight is the characteristic that defines what is meant when the term “intertemporal equilibrium” is used in economic theory. Morgenstern’s conceptual error was to mistake a tautological statement about what would have to be true if an intertemporal equilibrium were to obtain for a causal statement about what conditions would bring an intertemporal equilibrium into existence.

The idea of correct foresight does not attribute any special powers to the economic agents who might under hypothetical circumstances possess correct expectations of future prices. The term is not meant to be a description of an actual state of affairs, but a description of what would have to be true for a state of affairs to be an equilibrium state of affairs.

As an aside, I would simply mention that many years ago when I met Hayek and had the opportunity to ask him about his 1937 paper and his role in developing the concept of intertemporal equilibrium, he drew my attention to his 1928 paper in which he first described an intertemporal equilibrium as a state of affairs in which agents had correct expectations about future prices. My recollection of that conversation is unfortunately rather vague, but I do remember that he expressed some regret for not having had the paper translated into English, which would have established his priority in articulating the intertemporal equilibrium concept. My recollection is that the reason he gave for not having had the paper translated into English was that there was something about the paper about which he felt dissatisfied, but I can no longer remember what it was that he said he was dissatisfied with. However, I would now be inclined to conjecture that he was dissatisfied with not having distinguished, as he did in the 1937 paper, between correct foresight as a defining characteristic of what intertemporal equilibrium means and perfect foresight as the cause that brings intertemporal equilibrium into existence.

It is also interesting to note that the subsequent development of game theory in which Morgenstern played a not insubstantial role, shows that under a probabilistic interpretation of the interaction between Holmes and Moriarity, there could be an optimal mixed strategy that would provide an equilibrium solution of repeated Holmes-Moriarity interactions. But if the interaction is treated as a single non-repeatable event with no mixed strategy available to either party, the correct interpretation of the interaction is certainly that there is no equilibrium solution to the interaction. If there is no equilibrium solution, then it is precisely the absence of an equilibrium solution that implies the impossibility of correct foresight, correct foresight and the existence of an equilibrium being logically equivalent concepts.
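
To make the game-theoretic point concrete, here is a minimal sketch with symmetric payoffs of my own choosing, purely for illustration. Treat the pursuit as a zero-sum game in which Holmes chooses where to leave the train, Dover or the intermediate station, and Moriarity chooses where to lie in wait; Moriarity wins if they choose the same place, and Holmes escapes if they choose differently. No pair of pure strategies is an equilibrium, which is exactly Morgenstern’s regress. But if Moriarity goes to Dover with probability q, Holmes is indifferent between his two pure strategies only when his chance of escape is the same either way:

\[ 1 - q = q \;\Longrightarrow\; q = \tfrac{1}{2}, \]

and by the same reasoning Holmes randomizes half-and-half as well. At those mixed strategies neither player can gain by deviating, so the randomized strategies constitute an equilibrium of the probabilistic (or repeated) interaction even though no pure-strategy equilibrium exists.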

A Draft of my Paper on Rules versus Discretion Is Now Available on SSRN

My paper “Rules versus Discretion in Monetary Policy Historically Contemplated” which I spoke about last September at the Mercatus Center Conference on rules for a post-crisis world has been accepted by the Journal of Macroeconomics. I posted a draft of the concluding section of the paper on this blog several weeks ago. An abstract, and a complete draft, of the paper are available on the journal website, but only the abstract is ungated.

I have posted a draft of the paper on SSRN where it may now be downloaded. Here is the abstract of the paper.

Monetary-policy rules are attempts to cope with the implications of having a medium of exchange whose value exceeds its cost of production. Two classes of monetary rules can be identified: (1) price rules that target the value of money in terms of a real commodity, e.g., gold, or in terms of some index of prices, and (2) quantity rules that target the quantity of money in circulation. Historically, price rules, e.g., the gold standard, have predominated, but the Bank Charter Act of 1844 imposed a quantity rule as an adjunct to the gold standard, because the gold standard had performed unsatisfactorily after being restored in Britain at the close of the Napoleonic Wars. A quantity rule was not proposed independently of a price rule until Henry Simons proposed a constant money supply consisting of government-issued fiat currency and deposits issued by banks operating on a 100-percent reserve basis. Simons argued that such a plan would be ideal if it could be implemented because it would deprive the monetary authority of any discretionary decision-making power. Nevertheless, Simons concluded that such a plan was impractical and supported a price rule to stabilize the price level. Simons’s student Milton Friedman revived Simons’s argument against discretion and modified Simons’s plan for 100-percent reserve banking and a constant money supply into his k-percent rule for monetary growth. This paper examines the doctrinal and ideological origins and background that lay behind the rules versus discretion distinction.

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having originally been borrowed from physics, the meaning originally attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if there is no exogenous change introduced into the system, e.g., in population, technology, tastes, etc., it would seem that there would be no reason for the prices paid and quantities produced to change in that system. But that view of an economic system was a very restrictive one, because such a large share of economic activity – savings and investment – is predicated on the assumption and expectation of change.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices based on some scale of preferences subject to limitations or constraints imposed by their capacities, endowments, technology and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between the marginal rates of substitution of goods in consumption and in production and their corresponding price ratios, an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required in the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but, instead, ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
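
In schematic terms (again my own shorthand): where the static condition equates an agent’s marginal rate of substitution between two goods to the ratio of their current prices, the corresponding intertemporal condition equates the marginal rate of substitution between consumption of a good at date t and at date t+1 to the ratio of the good’s current price to its expected future price,

\[ MRS^{i}_{t,\,t+1} = \frac{p_t}{p^{e}_{i,\,t+1}}, \]

and the plans optimized against these ratios can be mutually consistent only if the expected prices p^{e}_{i,t+1} are the same for every agent i.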

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article “Economics and Knowledge.” But it was through J. R. Hicks’s Value and Capital published two years later in 1939 that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium in which current markets clear, but individual expectations of future prices are not the same, Hicks did not claim originality, but instead of crediting Hayek for the concept, or even mentioning Hayek’s 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek’s 1937 article, Hayek had already explained the concept in a 1928 article “Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des ‘Geldwertes.’” (English translation: “Intertemporal price equilibrium and movements in the value of money.“)

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek’s early work – his 1928 and 1937 articles but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting – but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model – in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek’s contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets characterizing the ADM model, an intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can’t fully hedge against, or insure against, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before “time” begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say’s Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources that were beyond those at Hayek’s, or probably anyone else’s, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of the solution, and for offering seriously inappropriate policy recommendations based on that seriously incomplete understanding.

In Value and Capital, Hicks also expressed doubts whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term “clear”) while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified and possibly abandoned. In The Pure Theory of Capital, Hayek discussed Hicks’s temporary-equilibrium method as a possible way of achieving the modification of the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek’s rejection of the temporary-equilibrium method seems in retrospect to have been one of Hayek’s worst theoretical – or perhaps, meta-theoretical – blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both holding an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions – in a very general sense – under which the system of financial intermediaries breaks down. The key variable, of course, is the vector of expected prices conditional on which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss’s analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. In that case, price flexibility in current markets cannot, even in principle, result in a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn’t lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are in some sense “too far out of whack.”

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in his Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate for most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.
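
For concreteness, the comparative-statics logic can be sketched as follows (a generic textbook formulation, not Samuelson’s own notation). If the equilibrium value x* is defined implicitly by an equilibrium condition f(x*, θ) = 0, where θ is the parameter being changed, then, provided the equilibrium is locally unique and stable, the effect of the parameter change is

\[ \frac{dx^{*}}{d\theta} = -\,\frac{\partial f/\partial \theta}{\partial f/\partial x}, \]

where the stability condition pins down the sign of the denominator. The exercise compares two equilibria and says nothing about whether, or how, the economy actually moves from one to the other, which is why it is so ill-suited to questions about the failure to reach equilibrium.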

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of a single market dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.

What’s so Great about Science? or, How I Learned to Stop Worrying and Love Metaphysics

A couple of weeks ago, a lot of people in a lot of places marched for science. What struck me about those marches is that there is almost nobody out there who is openly and explicitly campaigning against science. There are, of course, a few flat-earthers who, if one looks for them very diligently, can be found. But does anyone – including the flat-earthers themselves – think that they are serious? There are also Creationists who believe that the earth was created and designed by a Supreme Being – usually along the lines of the Biblical account in the Book of Genesis. But Creationists don’t reject science in general; they reject a particular scientific theory, because they believe it to be untrue, and try to defend their beliefs with a variety of arguments couched in scientific terms. I don’t defend Creationist arguments, but just because someone makes a bad scientific argument, it doesn’t mean that the person making the argument is an opponent of science. To be sure, the reason that Creationists make bad arguments is that they hold a set of beliefs about how the world came to exist that aren’t based on science but on some religious or ideological belief system. But people come up with arguments all the time to justify beliefs for which they have no evidentiary or “scientific” basis.

I mean, one of the two greatest scientists who ever lived criticized quantum mechanics, because he couldn’t accept that the world was not fully determined by the laws of nature, or, as he put it so pithily: “God does not play dice with the universe.” I understand that Einstein was not religious, and wasn’t making a religious argument, but he was basing his scientific view of what an acceptable theory should be on certain metaphysical predispositions that he held, and he was expressing his disinclination to accept a theory inconsistent with those predispositions. A scientific argument is judged on its merits, not on the motivations for advancing the argument. And I won’t even discuss the voluminous writings of the other one of the two greatest scientists who ever lived on alchemy and other occult topics.

Similarly, there are climate-change deniers who question the scientific basis for asserting that temperatures have been rising around the world, and that the increase in temperatures results from human activity that discharges greenhouse gasses into the atmosphere. Deniers of global warming may be biased and may be making bad scientific arguments, but the mere fact – and for purposes of this discussion I don’t dispute that it is a fact – that global warming is real and caused by human activity does not mean that someone who disputes those facts is thereby unmasked as an opponent of science. R. A. Fisher, the greatest mathematical statistician of the first half of the twentieth century, who developed most of the statistical techniques now used in experimental research, severely damaged his reputation by rejecting or dismissing evidence that smoking tobacco is a primary cause of cancer. Some critics accused Fisher of having been compromised by financial inducements from the tobacco industry, while others attributed his position to his own smoking habits or anti-puritanical tendencies. In any event, Fisher’s arguments against a causal link between smoking tobacco and lung cancer are now viewed as an embarrassing stain on an otherwise illustrious career. But Fisher’s lapse of judgment, and perhaps of ethics, doesn’t justify accusing him of opposition to science. Climate-change deniers don’t reject science; they reject or disagree with the conclusions of most climate scientists. They may have lousy reasons for their views – either that the climate is not changing or that whatever change has occurred is unrelated to the human production of greenhouse gasses – but holding wrong or biased views doesn’t make someone an opponent of science.

I don’t say that there are no people who dislike science – I mean who don’t like it because of what it stands for, not because they find it difficult or boring. Such people may be opposed to teaching science and to funding scientific research and don’t want scientific knowledge to influence public policy or the way people live. But, as far as I can tell, they have little influence. There is just no one out there who wants to outlaw scientific research or who is trying to criminalize the teaching of science. They may not want to fund science, but they aren’t trying to ban it. In fact, I doubt that the prestige and authority of science has ever been higher than it is now. Certainly religion, especially organized religion, to which science was once subordinate if not subservient, no longer exercises anything near the authority that science now does.

The reason for this extended introduction into the topic that I really want to discuss is to provide some context for my belief that economists worry too much about whether economics is really a science. It was such a validation for economists when the Swedish Central Bank piggy-backed on the storied Nobel Prize to create its ersatz “Nobel Memorial Prize” for economic science. (I note with regret the recent passing of William Baumol, whose failure to receive the Nobel Prize in economics, like that of Armen Alchian, was in fact a deplorable failure of good judgment on the part of the Nobel Committee.) And the self-consciousness of economists about the possibly dubious status of economics as a science is a reflection of the exalted status of science in society. So naturally, if one is seeking to increase the prestige of one’s own occupation and of the intellectual discipline in which one does research, it helps enormously to be able to say: “Oh, yes, I am an economist, and economics is a science, which means that I really am a scientist, just like those guys that win Nobel Prizes.” It also helps to be able to show that your scientific research involves a lot of mathematics, because scientists use math in their theories, sometimes a lot of math, which makes it hard for non-scientists to understand what scientists are doing. “We economists also use math in our theories, sometimes a lot of math, and that’s why it’s just as hard for non-economists to understand what we economists are doing as it is to understand what real scientists are doing. So we really are scientists, aren’t we?”

Where did this obsession with science come from? I think it’s fairly recent, but my sketchy knowledge of the history of science prevents me from getting too deeply into that discussion. Until relatively modern times, science was subsumed under the heading of philosophy – Greek for the love of wisdom. But philosophy is a very broad subject, so eventually that part of philosophy that was concerned with the world as it actually exists was called natural philosophy, as opposed to, say, ethical and moral philosophy. After the stunning achievements of Newton and his successors, and after Francis Bacon outlined an inductive method for achieving knowledge of the world, the disjunction between mere speculative thought and empirically based research, which is what science supposedly exemplifies, became increasingly sharp. And the inductive method seemed to be the right way to do science.

David Hume and Immanuel Kant struggled with limited success to make sense of induction, because a general proposition cannot be logically deduced from a set of observations, however numerous. Despite the logical problem of induction, early in the twentieth century a philosophical movement based in Vienna, called logical positivism, arrived at the conclusion that not only is all scientific knowledge acquired inductively through sensory experience and observation, but no meaning can be attached to any statement unless the statement makes reference to something about which we have or could have sensory experience; to be meaningful, a statement must be verified or at least verifiable, so that its truth could in principle be either confirmed or refuted. Any reference to concepts that have no basis in sensory experience is simply meaningless, i.e., a form of nonsense. Thus, science became not just the epitome of valid, certain, reliable, verified knowledge, which is what people were led to believe by the stunning success of Newton’s theory; it became the exemplar of meaningful discourse. Unless our statements refer to some observable, verifiable object, we are talking nonsense. And in the first half of the twentieth century, logical positivism dominated academic philosophy, at least in the English-speaking world, thereby exercising great influence over how economists thought about their own discipline and its scientific status.

Logical positivism was subjected to rigorous criticism by Karl Popper in his early work Logik der Forschung (English translation The Logic of Scientific Discovery). His central point was that scientific theories are not so much about what is or has been observed as about what cannot be observed. The empirical content of a scientific proposition consists in the range of observations that the theory says are not possible. The more observations excluded by the theory, the greater its empirical content. A theory that is consistent with any observation has no empirical content. Thus, paradoxically, scientific theories, under the logical-positivist doctrine, would have to be considered nonsensical, because they tell us what can’t be observed. And because it is always possible that an excluded observation – the black swan – which our scientific theory tells us can’t be observed, will be observed, scientific theories can never be definitively verified. If a scientific theory can’t be verified, then according to the positivists’ own criterion, the theory is nonsense. Of course, this just shows that the positivist criterion of meaning was nonsensical, because obviously scientific theories are completely meaningful despite being unverifiable.

Popper therefore concluded that verification or verifiability can’t be a criterion of meaning. In its place he proposed the criterion of falsification (i.e., refutation, not misrepresentation), but falsification became a criterion not for distinguishing between what is meaningful and what is meaningless, but between science and metaphysics. There is no reason why metaphysical statements (statements lacking empirical content) cannot be perfectly meaningful; they just aren’t scientific. Popper was misinterpreted by many to have simply substituted falsifiability for verifiability as a criterion of meaning; that was a mistaken interpretation, which Popper explicitly rejected.

So, in using the term “meaningful theorems” to refer to potentially refutable propositions that can be derived from economic theory using the method of comparative statics, Paul Samuelson in his Foundations of Economic Analysis adopted the interpretation of Popper’s demarcation criterion between science and metaphysics as if it were a demarcation criterion between meaning and nonsense. I conjecture that Samuelson’s unfortunate lapse into the discredited verbal usage of logical positivism may have reinforced the unhealthy inclination of economists to feel the need to prove their scientific credentials in order to even engage in meaningful discourse.

While Popper certainly performed a valuable service in clearing up the positivist confusion about meaning, he adopted a very prescriptive methodology aimed at making scientific practice more scientific in the sense of exposing theories to, rather than immunizing them against, attempts at refutation, because, according to Popper, it is only after our theories survive powerful attempts to show that they are false that we can have confidence that those theories may be true or at least close to the truth. In principle, Popper was not wrong in encouraging scientists to formulate theories that are empirically testable by specifying what kinds of observations would be inconsistent with their theories. But in practice, that advice has been difficult to follow, and not only because researchers try to avoid subjecting their pet theories to tests that might prove them wrong.

Although Popper often cited historical examples to support his view that science progresses through an ongoing process of theoretical conjecture and empirical refutation, historians of science have had no trouble finding instances in which scientists did not follow Popper’s methodological rules and continued to maintain theories even after they had been refuted by evidence or after other theories had been shown to generate more accurate predictions than their own theories. Popper parried this objection by saying that his methodological rules were not positive (i.e., descriptive of science), but normative (i.e., prescriptive of how to do good science). In other words, Popper’s scientific methodology was itself not empirically refutable and scientific, but empirically irrefutable and metaphysical. I point out the unscientific character of Popper’s methodology of science, not to criticize Popper, but to point out that Popper himself did not believe that science is itself the final authority and ultimate arbiter of scientific practice.

But the more important lesson from the critical discussions of Popper’s methodological rules seems to me to be that they are too rigid to accommodate all the considerations that are relevant to assessing scientific theories and deciding whether those theories should be discarded or, at least tentatively, maintained. And Popper’s methodological rules are especially ill-suited for economics and other disciplines in which the empirical implications of theories depend on a large number of jointly maintained hypotheses, so that it is hard to identify which of several maintained hypotheses is responsible for the failure of a predicted outcome to match the observed outcome. That of course is the well-known ceteris paribus problem, and it requires a very capable practitioner to know when to apply the ceteris paribus condition and which variables to hold constant and which to allow to vary. Popper’s methodological rules tell us to reject a theory when its predictions are mistaken, and Popper regarded the ceteris paribus qualification quite skeptically as an illegitimate immunizing stratagem. That describes a profound dilemma for economics. On the one hand, it is hard to imagine how economic theory could be applied without using the ceteris paribus qualification; on the other hand, the qualification diminishes the empirical content of economic theory.

Empirical problems are amplified by the infirmities of the data that economists typically use to derive quantitative predictions from their models. The accuracy of the data is often questionable, and the relationships between the data and the theoretical concepts they are supposed to measure are often dubious. Moreover, the assumptions about the data-generating process (e.g., that the random disturbances are independent and identically distributed, that observations are randomly selected, that omitted explanatory variables are uncorrelated with the included explanatory variables) necessary for the classical statistical techniques to generate unbiased estimates of the theoretical coefficients are almost impossibly stringent. Econometricians are certainly well aware of these issues, and they have discovered methods of mitigating them, but the problems with the data routinely used by economists and the complicated issues involved in developing and applying techniques to cope with those problems make it very difficult to use statistical techniques to reach definitive conclusions about empirical questions.
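
As a minimal illustration of how much work those assumptions do, here is a short simulation sketch (a toy example of my own, not drawn from any of the studies discussed here) in which a single omitted variable correlated with the included regressor pushes the estimated coefficient well away from its true value:

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Data-generating process: y depends on x and on z, and z is correlated with x.
z = rng.standard_normal(n)
x = 0.8 * z + rng.standard_normal(n)            # included regressor
y = 1.0 * x + 2.0 * z + rng.standard_normal(n)  # true coefficient on x is 1.0

# OLS slope of y on x alone (z omitted): cov(x, y) / var(x)
slope_omitting_z = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# OLS including z: regress y on a constant, x, and z
X = np.column_stack([np.ones(n), x, z])
coefs = np.linalg.lstsq(X, y, rcond=None)[0]

print(f"slope with z omitted:  {slope_omitting_z:.2f}  (biased; true value is 1.0)")
print(f"slope with z included: {coefs[1]:.2f}")

The particular numbers do not matter; the point is that the violation of a single maintained assumption, one the researcher may have no way of detecting from the data alone, is enough to make the estimate systematically misleading.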

Jeff Biddle, one of the leading contemporary historians of economics, has a wonderful paper (“Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice”) – his 2016 presidential address to the History of Economics Society – discussing how the modern statistical techniques based on concepts and methods derived from probability theory gradually became the standard empirical and statistical techniques used by economists, even though many distinguished earlier researchers who were neither unaware of, nor unschooled in, the newer techniques believed them to be inappropriate for analyzing economic data. Here is the abstract of Biddle’s paper.

This paper reviews changes over time in the meaning that economists in the US attributed to the phrase “statistical inference”, as well as changes in how inference was conducted. Prior to WWII, leading statistical economists rejected probability theory as a source of measures and procedures to be used in statistical inference. Haavelmo and the econometricians associated with the early Cowles Commission developed an approach to statistical inference based on concepts and measures derived from probability theory, but the arguments they offered in defense of this approach were not always responsive to the concerns of earlier empirical economists that the data available to economists did not satisfy the assumptions required for such an approach. Despite this, after a period of about 25 years, a consensus developed that methods of inference derived from probability theory were an almost essential part of empirical research in economics. I close the paper with some speculation on possible reasons for this transformation in thinking about statistical inference.

I quote one passage from Biddle’s paper:

As I have noted, the leading statistical economists of the 1920s and 1930s were also unwilling to assume that any sample they might have was representative of the universe they cared about. This was particularly true of time series, and Haavelmo’s proposal to think of time series as a random selection of the output of a stable mechanism did not really address one of their concerns – that the structure of the “mechanism” could not be expected to remain stable for long periods of time. As Schultz pithily put it, “‘the universe’ of our time series does not ‘stay put’” (Schultz 1938, p. 215). Working commented that there was nothing in the theory of sampling that warranted our saying that “the conditions of covariance obtaining in the sample (would) hold true at any time in the future” (Advisory Committee 1928, p. 275). As I have already noted, Persons went further, arguing that treating a time series as a sample from which a future observation would be a random draw was not only inaccurate but ignored useful information about unusual circumstances surrounding various observations in the series, and the unusual circumstances likely to surround the future observations about which one wished to draw conclusions (Persons 1924, p. 7). And, the belief that samples were unlikely to be representative of the universe in which the economists had an interest applied to cross section data as well. The Cowles econometricians offered little to assuage these concerns except the hope that it would be possible to specify the equations describing the systematic part of the mechanism of interest in a way that captured the impact of factors that made for structural change in the case of time series, or factors that led cross section samples to be systematically different from the universe of interest.

It is not my purpose to argue that the economists who rejected the classical theory of inference had better arguments than the Cowles econometricians, or had a better approach to analyzing economic data given the nature of those data, the analytical tools available, and the potential for further development of those tools. I only wish to offer this account of the differences between the Cowles econometricians and the previously dominant professional opinion on appropriate methods of statistical inference as an example of a phenomenon that is not uncommon in the history of economics. Revolutions in economics, or “turns”, to use a currently more popular term, typically involve new concepts and analytical methods. But they also often involve a willingness to employ assumptions considered by most economists at the time to be too unrealistic, a willingness that arises because the assumptions allow progress to be made with the new concepts and methods. Obviously, in the decades after Haavelmo’s essay on the probability approach, there was a significant change in the list of assumptions about economic data that empirical economists were routinely willing to make in order to facilitate empirical research.

Let me now quote from a recent book (To Explain the World) by Steven Weinberg, perhaps – even though a movie about his life has not (yet) been made – the greatest living physicist:

Newton’s theory of gravitation made successful predictions for simple phenomena like planetary motion, but it could not give a quantitative account of more complicated phenomena, like the tides. We are in a similar position today with regard to the strong forces that hold quarks together inside the protons and neutrons inside the atomic nucleus, a theory known as quantum chromodynamics. This theory has been successful in accounting for certain processes at high energy, such as the production of various strongly interacting particles in the annihilation of energetic electrons and their antiparticles, and its successes convince us that the theory is correct. We cannot use the theory to calculate precise values for other things that we would like to explain, like the masses of the proton and neutron, because the calculations are too complicated. Here, as for Newton’s theory of the tides, the proper attitude is patience. Physical theories are validated when they give us the ability to calculate enough things that are sufficiently simple to allow reliable calculations, even if we can’t calculate everything that we might want to calculate.

So Weinberg is very much aware of the limits that even physics faces in making accurate predictions. Only a small subset (relative to the universe of physical phenomena) of simple effects can be calculated, but the capacity of physics to make very accurate predictions of simple phenomena gives us a measure of confidence that the theory would be reliable in making more complicated predictions if only we had the computing capacity to make those more complicated predictions. But in economics the set of simple predictions that can be accurately made is almost nil, because economics is inherently a theory of complex social phenomena, and simplifying the real-world problems to which we apply the theory sufficiently to allow testable predictions to be made is extremely difficult and hardly ever possible. Experimental economists try to create conditions in which this can be done in controlled settings, but whether these experimental results have much relevance for real-world applications is open to question.

The problematic relationship between economic theory and empirical evidence is deeply rooted in the nature of economic theory and the very complex nature of the phenomena that economic theory seeks to explain. It is very difficult to isolate simple real-world events in which competing economic theories can be put to decisive empirical tests based on unambiguous observations that are either consistent with or contrary to the predictions generated by those theories. Under those circumstances, if we apply the Popperian criterion of demarcation between science and metaphysics to economics, it is not at all clear to me whether economics is more on the science side of the line than on the metaphysics side.

Certainly, there are refutable implications of economic theory that can be deduced, but these implications are often subject to qualification, so the refutable implications are often refutable only in principle, not in practice. Many fastidious economic methodologists, notably Mark Blaug, voiced unhappiness about this state of affairs and blamed economists for not being more ruthless in applying Popperian tests of empirical refutation to their theories. Surely Blaug had a point, but the infrequency of empirical refutation of theories in economics is, I think, less attributable to bad methodological practice on the part of economists than to the nature of the theories that economists work with and the inherent ambiguities of the empirical evidence with which those theories can be tested. We might as well just face up to the fact that, to a large extent, empirical evidence is simply not clear-cut enough to force us to discard well-entrenched economic theories, because well-entrenched economic theories can be adjusted and reformulated in response to apparently contrary evidence in ways that allow those theories to live on to fight another day, theories typically having enough moving parts to allow them to be adjusted as needed to accommodate anomalous or inconvenient empirical evidence.

Popper’s somewhat disloyal disciple, Imre Lakatos, talked about scientific theories in the context of scientific research programs, a research program being an amalgam of related theories that share a common inner core of theoretical principles or axioms which are not subject to refutation. Lakatos called this deep core of axiomatic principles the hard core of the research program. The hard core defines the program, so it is fundamentally fixed and not open to refutation. The empirical content of the research program is provided by a protective belt of specific theories that are subject to refutation and, when refuted, can be replaced as needed with alternative theories that are consistent with both the theoretical hard core and the empirical evidence. What determines the success of a scientific research program is whether it is progressive or degenerating. A progressive research program accumulates an increasingly dense, but evolving, protective belt of theories in response to new theoretical and empirical problems or puzzles that are generated within the research program to keep researchers busy and to attract into the program new researchers seeking problems to solve. In contrast, a degenerating research program is unable to find enough interesting new problems or puzzles to keep researchers busy, much less to attract new ones.

Despite its Popperian origins, the largely sociological Lakatosian account of how science evolves and progresses was hardly congenial to Popper’s sensibilities, because the success of a research program is not strictly determined by the process of conjecture and refutation envisioned by Popper. But the important point for me is that a Lakatosian research program can be progressive even if it is metaphysical and not scientific. What matters is that it offer opportunities for researchers to find and to solve or even just to talk about solving new problems, thereby attracting new researchers into the program.

It does appear that economics has for at least two centuries been a progressive research program. But it is not clear that it is really a scientific research program, because the nature of economic theory is so flexible that it can be adapted as needed to explain almost any set of observations. Almost any observation can be rationalized as the solution of some sort of constrained optimization problem. What the task requires is sufficient ingenuity on the part of the theorist to formulate the problem in such a way that the desired outcome can be derived as the solution of a constrained optimization problem. The hard core of the research program is therefore never at risk, and the protective belt can always be modified as needed to generate the sort of solution that is compatible with the theoretical hard core. The scope for refutation has thus been effectively narrowed to the vanishing point, leaving us with a progressive metaphysical research program.

I am not denying that it would be preferable if economics could be a truly scientific research program, but it is not clear to me how much can be done about it. The complexity of the phenomena, the multiplicity of the hypotheses required to explain the data, and the ambiguous and not fully reliable nature of most of the data that economists have available devilishly conspire to render Popperian falsificationism an illusory ideal in economics. That is not an excuse for cynicism, just a warning against unrealistic expectations about what economics can accomplish. And the last thing that I am suggesting is that we stop paying attention to the data that we have or stop trying to improve the quality of the data that we have to work with.

Rules vs. Discretion Historically Contemplated

Here is a new concluding section which I have just written for my paper “Rules versus Discretion in Monetary Policy: Historically Contemplated” which I spoke about last September at the Mercatus Conference on Monetary Rules in a Post-Crisis World. I have been working a lot on the paper over the past month or so; I hope to post a draft soon on SSRN, and it is now under review for publication. I apologize for having written very little in the past month and for having failed to respond to any comments on my previous posts. I simply have been too busy with work and life to have any energy left for blogging. I look forward to being more involved in the blog over the next few months and expect to be posting some sections of a couple of papers I am going to be writing. But I’m offering no guarantees. It is gratifying to know that people are still visiting the blog and reading some of my old posts.

Although recognition of a need for some rule to govern the conduct of the monetary authority originated in the perceived incentive of the authority to opportunistically abuse its privileged position, the expectations of the public (including that small, but modestly influential, segment consisting of amateur and professional economists) about what monetary rules might actually accomplish have evolved and expanded over the course of the past two centuries. As Laidler (“Economic Ideas, the Monetary Order, and the Uneasy Case for Monetary Rules”) shows, that evolution has been driven by both the evolution of economic and monetary institutions and the evolution of economic and monetary doctrines about how those institutions work.

I distinguish between two types of rules: price rules and quantity rules. The simplest price rule involved setting the price of a commodity – usually gold or silver – in terms of a monetary unit whose supply was controlled by the monetary authority, or defining a monetary unit as a specific quantity of a particular commodity. Under the classical gold standard, for example, the monetary authority stood ready to buy or sell gold on demand at a legally determined price of gold in terms of the monetary unit. Thus, the fixed price of gold under the gold standard was originally thought to serve as both the policy target of the rule and the operational instrument for implementing the rule.
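
To make the operational side of a price rule concrete, here is a minimal sketch – in Python, with a purely illustrative official price and a function name of my own invention – of how a convertibility commitment works: the authority passively buys or sells gold at the official price, and private arbitrage keeps the market price pinned to it.

# Minimal sketch of a price rule under a gold standard.
# The official price and quantities are illustrative, not historical.
OFFICIAL_PRICE = 4.25  # monetary units per ounce of gold (illustrative)

def authority_trade(market_price, quantity=1.0):
    """Return (gold bought by the authority, notes issued) for one arbitrage round."""
    if market_price < OFFICIAL_PRICE:
        # Gold is cheap in the market: arbitrageurs sell gold to the authority,
        # which pays with newly issued notes, expanding the money stock.
        return quantity, quantity * OFFICIAL_PRICE
    if market_price > OFFICIAL_PRICE:
        # Gold is dear: arbitrageurs redeem notes for gold, contracting the money stock.
        return -quantity, -quantity * OFFICIAL_PRICE
    return 0.0, 0.0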

However, as monetary institutions and theories evolved, it became apparent that there were policy objectives other than simply maintaining the convertibility of the monetary unit into the standard commodity that required the attention of the monetary authority. The first attempt to impose an additional policy goal on a monetary authority was the Bank Charter Act of 1844, which specified a quantity target – the aggregate of banknotes in circulation in Britain – that the monetary authority – the Bank of England – was required to achieve by following a simple mechanical rule. By imposing a 100-percent marginal gold-reserve requirement on the notes issued by the Bank of England, the Bank Charter Act made the quantity of banknotes issued by the Bank of England both the target of the quantity rule and the instrument by which the rule was implemented.
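
To see in the simplest terms why the note issue was simultaneously the target and the instrument under the Act, here is a minimal sketch – in Python, with the Act’s £14 million fiduciary issue used only as an illustrative number – of a 100-percent marginal reserve requirement: beyond the fiduciary allowance, notes can expand or contract only one-for-one with the Bank’s gold.

# Minimal sketch of the Bank Charter Act's quantity rule. Numbers are illustrative.
FIDUCIARY_ISSUE = 14_000_000  # notes that may be backed by securities rather than gold

def maximum_note_issue(gold_reserve):
    """Notes outstanding can never exceed the fiduciary issue plus gold held."""
    return FIDUCIARY_ISSUE + gold_reserve

# A gold outflow of 1,000,000 forces an equal contraction of the note issue:
print(maximum_note_issue(20_000_000) - maximum_note_issue(19_000_000))  # 1000000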

Owing to deficiencies in the monetary theory on the basis of which the Act was designed and to the evolution of British monetary practices and institutions, the conceptual elegance of the Bank Charter Act was not matched by its efficacy in practice. But despite, or, more likely, because of, the ultimate failure of the Bank Charter Act, the gold standard, surviving recurring financial crises in Great Britain in the middle third of the nineteenth century, was eventually adopted by many other countries in the 1870s, becoming the de facto international monetary system from the late 1870s until the start of World War I. Operation of the gold standard was defined by, and depended on, the observance of a single price rule in which the value of a currency was defined by its legal gold content, so that corresponding to each gold-standard currency, there was an official gold price at which the monetary authority was obligated to buy or sell gold on demand.

The value – the purchasing power – of gold was relatively stable in the 35 or so years of the gold standard era, but that stability could not survive the upheavals associated with World War I, and so the problem of reconstructing the postwar monetary system became one of deciding what kind of monetary rule should govern the postwar economy. Was it enough merely to restore the old currency parities – perhaps adjusted for differences in the extent of wartime and postwar currency depreciation – that governed the classical gold standard, or was it necessary to take into account other factors, e.g., the purchasing power of gold, in restoring the gold standard? This basic conundrum was never satisfactorily answered, and the failure to do so undoubtedly was a contributing, and perhaps dominant, factor in the economic collapse that began at the end of 1929, ultimately leading to the abandonment of the gold standard.

Searching for a new monetary regime to replace the failed gold standard, but to some extent inspired by the Bank Charter Act of the previous century, Henry Simons and ten fellow University of Chicago economists devised a totally new monetary system based on 100-percent reserve banking. The original Chicago proposal for 100-percent reserve banking included a monetary rule for stabilizing the purchasing power of fiat money. The 100-percent banking proposal would give the monetary authority complete control over the quantity of money, thereby enhancing the power of the monetary authority to achieve its price-level target. The Chicago proposal was thus inspired by a desire to increase the likelihood that the monetary authority could successfully implement the desired price rule. The price level was the target, and the quantity of money was the instrument. But as long as private fractional-reserve banks remained in operation, the monetary authority would lack effective control over the instrument. That was the rationale for replacing fractional-reserve banks with 100-percent reserve banks.

But Simons eventually decided in his paper (“Rules versus Authorities in Monetary Policy”) that a price-level target was undesirable in principle, because allowing the monetary authority to choose which price level to stabilize, thereby favoring some groups at the expense of others, would grant too much discretion to the monetary authority. Rejecting price-level stabilization as a monetary rule, Simons concluded that the exercise of discretion could be avoided only if the quantity of money was the target as well as the instrument of a monetary rule. Simons’s ideal monetary rule was therefore to keep the quantity of money in the economy constant — forever. But having found the ideal rule, Simons immediately rejected it, because he realized that the reforms in the financial and monetary systems necessary to make such a rule viable over the long run would never be adopted. And so he reluctantly and unhappily reverted to the price-level stabilization rule that he and his Chicago colleagues had proposed in 1933.

Simons’s student Milton Friedman continued to espouse his teacher’s opposition to discretion, and as late as 1959 (A Program for Monetary Stability) he continued to advocate 100-percent reserve banking. In the early 1960s, however, he adopted his k-percent rule and gave up his support for 100-percent banking. But despite giving up on 100-percent banking, Friedman continued to argue that the k-percent rule was less discretionary than the gold standard or a price-level rule, because neither the gold standard nor a price-level rule eliminated the exercise of discretion by the monetary authority in implementing policy. In making that argument, he failed to acknowledge that, under any of the definitions that he used (usually M1 and sometimes M2), the quantity of money was itself a target, not an instrument. Of course, Friedman did eventually abandon his k-percent rule, but that acknowledgment came at least a decade after almost everyone else had recognized its unsuitability as a guide for conducting monetary policy, let alone as a legally binding rule, and long after Friedman’s repeated predictions that rapid growth of the monetary aggregates in the 1980s presaged the return of near-double-digit inflation.
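
For readers who have not seen it stated, the k-percent rule itself is mechanically trivial; here is a minimal sketch – in Python, with an illustrative starting level and growth rate – of the money-stock path the rule prescribes, regardless of current economic conditions.

# Minimal sketch of Friedman's k-percent rule: the chosen monetary aggregate
# grows at a constant rate k every year. Starting level and k are illustrative.
def money_path(m0, k, years):
    """Rule-implied path of the money stock: M_t = M_0 * (1 + k)**t."""
    return [m0 * (1 + k) ** t for t in range(years + 1)]

print(money_path(100.0, 0.04, 5))  # 4-percent growth for five years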

However, the work of Kydland and Prescott (“Rules Rather than Discretion: The Inconsistency of Optimal Plans”) on time inconsistency has provided an alternative basis on which to argue against discretion: that the lack of commitment to a long-run policy would lead to self-defeating short-term attempts to deviate from the optimal long-term policy.[1]

It is now, I think, generally understood that a monetary authority has available to it four primary instruments in conducting monetary policy: the quantity of base money, the lending rate it charges to banks, the deposit rate it pays banks on reserves, and an exchange rate against some other currency or some asset. A variety of goals remains available as well: nominal goals like inflation, the price level, or nominal income, or even an index of stock prices, as well as real goals like real GDP and employment.

Ever since Friedman and Phelps independently argued that the long-run Phillips Curve is vertical, a consensus has developed that countercyclical monetary policy is basically ineffectual, because the effects of countercyclical policy will be anticipated, so that the only long-run effect of countercyclical policy is to raise the average rate of inflation without affecting output and employment. Because the reasoning that generates this result is essentially that money is neutral in the long run, the reasoning is not as compelling as the professional consensus in its favor would suggest. The monetary-neutrality result only applies under the very special assumptions of a comparative-statics exercise comparing an initial equilibrium with a final equilibrium. But the whole point of countercyclical policy is to speed the adjustment from a disequilibrium with high unemployment back to a low-unemployment equilibrium. A comparative-statics exercise provides no theoretical, much less empirical, support for the proposition that anticipated monetary policy cannot have real effects.
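
For concreteness, the consensus argument being questioned here can be summarized with a standard expectations-augmented Phillips curve; the following is a minimal sketch – in Python, with illustrative parameter values and variable names of my own – of the comparative-statics result that only unanticipated inflation moves unemployment away from its natural rate.

# Minimal sketch of the Friedman-Phelps comparative-statics argument.
# Parameter values are illustrative.
NATURAL_RATE = 0.05  # "natural" rate of unemployment
SLOPE = 2.0          # responsiveness of inflation to the unemployment gap

def unemployment(inflation, expected_inflation):
    """Invert pi = pi_e - SLOPE * (u - u_n) to get the unemployment rate u."""
    return NATURAL_RATE - (inflation - expected_inflation) / SLOPE

print(unemployment(0.06, 0.02))  # surprise inflation: unemployment below the natural rate
print(unemployment(0.06, 0.06))  # fully anticipated inflation: unemployment at the natural rate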

So the range of possible targets and the range of possible instruments now provide considerable latitude to supporters of monetary rules to recommend alternative monetary rules incorporating many different combinations of alternative instruments and alternative targets. As of now, we have few solid theoretical conclusions about the relative effectiveness of alternative rules and even less empirical evidence about their effectiveness. But at least we know that, to be viable, a monetary rule will almost certainly have to be expressed in terms of one or more targets while allowing the monetary authority at least some discretion to adjust its control over its chosen instruments in order to achieve its target effectively (McCallum 1987, 1988). That does not seem like a great deal of progress to have made in the two centuries since economists began puzzling over how to construct an appropriate rule to govern the behavior of the monetary authority, but it is progress nonetheless. And, if we are so inclined, we can at least take some comfort in knowing that earlier generations have left us a lot of room for improvement.

Footnote:

[1] Friedman in fact recognized the point in his writings, but he emphasized the dangers of allowing discretion in the choice of instruments rather than the time-inconsistency problem, because it was only the former argument that provided a basis for preferring his quantity rule over price rules.

A Tale of Three Posts

Since I started blogging in July 2011, I have published 521 posts (not including this one). A number of my posts have achieved a fair amount of popularity, as measured by the number of views, which WordPress allows me to keep track of. Many, though not all, of my most widely viewed posts were mentioned by Paul Krugman in his blog. Whenever I noticed an unusually large uptick in the number of viewers visiting the blog, I usually found Krugman had linked to my post, causing a surge of viewers to my blog.

The most visitors I ever had in one day was on August 7, 2012. It was the day after I wrote a post mocking an op-ed in the Wall Street Journal by Arthur Laffer (“Arthur Laffer, Anti-Enlightenment Economist”) in which, based on some questionable data and embarrassingly bad logic, Laffer maintained that countries that had adopted fiscal stimulus after the 2008-09 downturn had weaker recoveries than countries that had practiced fiscal austerity. This was not the first or last time that Krugman linked to a post of mine, but what made it special was that Krugman linked to it while he was on vacation, so that for three days, everyone who visited Krugman’s blog found his post linking to my post. On August 7 alone, my post was viewed 7885 times, with 3004 views on August 8, 1591 on August 9, and 953 on August 10. In the entire month of August, the Laffer post was viewed 15,399 times. To this day, that post remains the most viewed post that I have ever written, having been viewed a total of 17,604 times.

As you can see, the post has not maintained its popular appeal, over 87 percent of all views having occurred within three and a half weeks of its having been published. And there’s no reason why it should have retained its popularity. It was a well-written post, properly taking a moderately well-known right-wing economist to task for publishing a silly piece of ideological drivel in a once-great newspaper, but there was nothing especially profound or original about it. It was just the sort of post that Krugman loves to link to, and I was at the top of his blog for three days before he published his next post.

Exactly a year and a half later, on February 6, 2014, I wrote another post (“Why Are Wages Sticky?”) that Krugman mentioned on his blog. I wasn’t mocking or attacking anyone, but suggesting what I think is an original theoretical explanation for why wages are more sticky than most other prices, while also reminding people that in the General Theory, Keynes actually tried to explain why wage stickiness was not an essential element of his theoretical argument for the existence of involuntary unemployment. Because it wasn’t as polemical as the earlier post, and because I didn’t have Krugman’s blog all to myself for three days, Krugman’s link did not generate anywhere near the traffic for this post that it did for the Laffer post. The day that Krugman linked to my post, February 7, it was viewed by 1034 viewers (333 of whom were referred by Krugman). Very good, but nowhere near the traffic I got a year and a half earlier. For the entire month of February, the post was viewed 2145 times. Again, that’s pretty good, but probably below average for a post to which Krugman posted a link. But the nice thing about the wage-stickiness post is that although the traffic to that post dropped off over the next few months, the decline was not nearly as precipitous as the dropoff in traffic to the Laffer post. During all of 2014, the wage-stickiness post was viewed a total of 6622 times.

What I also noticed was that after traffic gradually dropped off in the months after February, it picked up again in September and again in October, before dropping off slightly in December and January, only to pick up again in February. That pattern, which has continued ever since, suggests to me that somehow econ students, on their own or perhaps at the suggestion of their professors, are looking up what I had to say about wage stickiness. Here is a WordPress table tracking monthly views of this post.

So unlike the Laffer post, the vast majority of the visits to the wage-stickiness post (almost 88%) have occurred since the month in which it was published. For about two years I have been watching the wage-stickiness post gradually move up in the rankings of my all-time most viewed posts, waiting until I could announce that it had eclipsed the fluke Laffer post as my number one post. The wage-stickiness post is now within less than fifty views of passing the Laffer post. Yes, I know it’s not a big deal, but I feel good about it.

But over the past six months, and quite suddenly since October, a third post (“Gold Standard or Gold Exchange Standard: What’s the Difference?”), originally published on July 1, 2015, has been attracting a lot of traffic. When first published, it was moderately successful, drawing 569 visits on July 2, 2015, which is still the most visits it has received on any single day, mostly via links from Mark Thoma’s blog and Brad DeLong’s blog. The post was not terribly original, but I think it did a nice job of describing the evolution of the gold standard from an almost accidental, and peculiarly British, institution into a totem of late nineteenth-century international monetary orthodoxy, whose principal features remain to this day surprisingly obscure even to well-trained and sophisticated monetary economists and financial experts.

And I also tried to show that the supposed differences between the pre-World-War I gold standard and the attempted and ultimately disastrous resurrection of the gold standard (GS 2.0) in the 1920s, in the form of what was called a gold-exchange standard, were really pretty trivial. So if the gold standard failed when it was reconstituted after World War I, the reason was not that what was tried was not the real thing. It was because of deeper systemic problems that had no direct connection to the nominal difference between the original gold standard and the gold exchange standard. I concluded the post with three lengthy quotations from J. M. Keynes’s first book on economics, Indian Currency and Finance, which displayed an excellent understanding of the workings of the gold standard and the gold exchange standard, the latter having been the system by which India was linked to gold while under British control before World War I. Here is the WordPress table tracking monthly views of my post on the gold exchange standard.

The number of views this month alone is a staggering amount of traffic for any post — the second most views in a month for any post I have written. And what is more amazing is that the traffic has not been driven by links from other blogs, but has been driven, as best as I can tell, at least partially, by search engines.

The other amazing thing about the burst of traffic to this post is that most of the visitors seem to be coming from India. In the 30 days since February 28, this blog has been viewed 17,165 times. The most-often viewed post in that time period was my gold-exchange standard post, which was viewed 7385 times, i.e., over 40% of all views were of that one single post. In the past 30 days, my blog was viewed from India 6446 times, while it was viewed from the United States only 4863 times. Over the entire history of this blog, about 50% of views have been from within the US. So India is clearly where it’s at now.

Now I know that the Indian monetary system was implicated in this post owing to my extended quotation from Keynes’s book, but that reference is largely incidental. So I am at a loss to explain why all these Indian visitors have been attracted to the blog, and why the attraction seems to be growing exponentially, though I suspect that traffic may have peaked over the last week.

At any rate, here is a WordPress table with my 11 most popular posts (as of today at 3:07 pm EDST).

So, as I write this, it is not clear whether my hope that the wage-stickiness post will become my all-time most viewed post will ever come to pass, because my gold-exchange standard post may very well pass it before it passes the Laffer post. Even so, over the very long run, I still have a feeling that the wage-stickiness post will eventually come out on top. We shall see.

At any rate, if you have ever viewed either one of those posts in the past, I would be interested in hearing from you how you got to it.

PS I realized that, by identifying Paul Krugman’s blog as the blog from which many of my most popular posts have received the largest number of viewers, I inadvertently slighted Mark Thoma’s indispensable blog (Economistsview.typepad.com), which really is the heart and soul of the econ blogosphere. I just checked, and I see that since my blog started in 2011, over 79,000 viewers have visited my blog via Mark’s blog compared to 53,000 viewers who have visited via Krugman. And I daresay that when Krugman has linked to one of my posts, it’s probably only after he followed Thoma’s link to my blog, so I’m doubly indebted to Mark.

Deconstructing Judge Bybee’s Disingenuous Dissent

On January 27, 2017, Executive Order 13769 was issued; among other things the order instructed cabinet secretaries to stop immigration from seven previously identified countries (Iran, Iraq, Libya, Somalia, Sudan, Syria, and Yemen), the officials being authorized to issue exemptions on a case-by-case basis. The order was immediately challenged in a number of suits in the federal district courts, with at least one court (in Boston) upholding the order. However, the court in the Western district of Washington, finding that the order was likely to be ruled unconstitutional in a trial on the merits, issued a temporary restraining order (TRO) blocking the government from enforcing the order. The government immediately appealed the TRO to the Ninth Circuit Court of Appeals. A three-judge panel of the court heard the appeal, and unanimously dismissed the government’s request for a stay of the TRO in a per curiam decision. Rather than appeal the decision of the 3-judge panel to the full court of appeals, or to the Supreme Court, the government chose to withdraw the initial order, mooting the decision, and began to redraft the order to address the defects in the original order identified by the district court trial judge and the 3-judge panel of the Ninth Circuit.

The opinion of the 3-judge panel upholding the TRO focused on three provisions of the order: first, the 90-day ban on entry into the US by any nationals from the seven listed countries, including nationals who are legal permanent residents, green-card holders, or holders of other valid non-immigrant visas permitting them to work or reside in the US; second, the suspension for 120 days of the refugee resettlement program for nationals of the seven listed countries, and, upon completion of the 120-day period, the prioritization of granting refugee status to religious minorities (i.e., non-Muslims) from those countries; and third, the indefinite suspension of all Syrians from the refugee resettlement program.

Although the cause of action underlying the Washington case was removed by the withdrawal of Executive Order 13769, the decision of the 3-judge panel remains valid and may be cited as authority by other courts. However, one (unnamed) judge on the Ninth Circuit moved for the opinion to be vacated, a technical term meaning that the decision and the opinion are reduced to the approximate status of, say, a law review article, devoid of any precedential authority. A motion by a judge on the court of appeals to vacate a decision is typically not made unless a judge wants to signal his or her strong disagreement with the decision, and that disagreement was expressed in an opinion written by Judge Jay Bybee of the Ninth Circuit and concurred in by four other judges of the Ninth Circuit, including the former Chief Judge, Alex Kozinski.

The main points of the opinion of the 3-judge panel were: 1) the states of Washington and Minnesota had standing to act as plaintiffs on behalf of resident aliens and on behalf of citizens whose rights or interests were incidentally harmed by the executive order; 2) the executive order was subject to judicial review notwithstanding broad Constitutional powers assigned to the executive branch in matters of foreign policy and explicit grants of authority by Congress over immigration policy; 3) the TRO issued by the district court was a procedural order based on a finding by the court that the plaintiffs had established a substantial likelihood of success at trial; 4) in seeking to stay the TRO, the government bore the burden of rebutting the decision of the trial court that plaintiffs would prevail on the merits, which could be done either by proving that the wrong standard of judicial review was applied, or by showing that there was a compelling national security justification for the order; 5) the district court was correct in ruling that the plaintiffs had a strong likelihood of success in establishing that the Constitutionally granted rights of due process to which nationals from the seven listed countries who were either legal resident aliens, green-card holders, or holders of valid travel visas are entitled had been violated by the executive order; 6) the likelihood that claims by plaintiffs that they were victims of religious discrimination would be upheld is not clear, but a likelihood of success in establishing their due-process claims having been established, plaintiffs could continue to raise their religious discrimination claims in subsequent proceedings.

In his opinion arguing for the decision and opinion of the 3-judge panel to be vacated, Judge Bybee focused his attention primarily on the standard under which Executive Order 13769 may properly be reviewed. The key point of contention is whether the Supreme Court’s decision in Kleindienst v. Mandel sets the limits on what factors a court may take into consideration in reviewing the Executive Order, the failure of the 3-judge panel to abide by the Mandel standard constituting the fundamental error justifying vacatur of the panel’s per curiam opinion. But before considering the relevance of Kleindienst v. Mandel to the Washington case, I want to take note of some of Judge Bybee’s remarks about the Constitutional status of aliens and the rights to which they are entitled.

Having acknowledged that decisions by the government in the fields of foreign affairs and immigration policy are not entirely beyond the scope of judicial review, Judge Bybee asks how the requirements of judicial review can be reconciled with the deference owed to the political branches in those areas. He responds by invoking an old case:

The Supreme Court has given us a way to analyze these knotty questions, but it depends on our ability to distinguish between two groups of aliens: those who are present within our borders and those who are seeking admission. As the Court explained in Leng May Ma v. Barber,

It is important to note at the outset that our immigration laws have long made a distinction between those aliens who have come to our shores seeking admission, . . . and those who are within the United States after an entry, irrespective of its legality. In the latter instance the Court has recognized additional rights and privileges not extended to those in the former category who are merely “on the threshold of initial entry.” 357 U.S. 185, 187 (1958) (quoting Mezei, 345 U.S. at 212). (pp. 10-11)

The panel did not recognize that critical distinction and it led to manifest error.

This is a quite remarkable assertion by Judge Bybee, because two paragraphs earlier, criticizing the 3-judge panel for having merely paid lip-service to the deference owed to the President in the field of foreign affairs, Judge Bybee commented acidly:

The panel began its analysis from two important premises: first, that it is an “uncontroversial principle” that we “owe substantial deference to the immigration and national security policy determinations of the political branches,” 847 F.3d at 1161; second, that courts can review constitutional challenges to executive actions, see id. at 1164. I agree with both of these propositions. Unfortunately, that was both the beginning and the end of the deference the panel gave the President. (p. 9)

A rather peculiar criticism for Judge Bybee to have made inasmuch as his invocation of the critical distinction between aliens coming to our shores seeking admission and those already within the US after entry is both the beginning and the end of his own recognition of that distinction. But aside from its peculiarity, the criticism was completely misplaced, the distinction between two classes of aliens actually being central to the reasoning by which the panel justified its opinion.

The bedrock of Judge Bybee’s dissent is the case Kleindienst v. Mandel, decided in 1972. Before Mandel, the doctrine of Consular Nonreviewability was absolute. Thus, in Knauff v. Shaughnessy the Supreme Court rejected the appeal of a former American soldier who wanted to bring his German wife to America under the War Brides Act. His wife’s application for a visa having been denied on the basis of confidential undisclosed information transmitted to the consular official processing Mrs. Knauff’s visa application, Mr. Knauff filed suit seeking judicial review of the consular decision. The Court ruled that, as an alien applying for admission to the United States, Mrs. Knauff had no due-process claim for a review of the consular decision. The best commentary on the Court’s reprehensible decision was delivered by Justice Jackson in his dissenting opinion (which follows Justice Frankfurter’s dissent in the link). “Security is like liberty,” wrote Justice Jackson, “in that many are the crimes committed in its name.”

In Mandel, the doctrine of consular nonreviewability was extended, and modified ever-so slightly, to take into account not the non-existent right to due process of non-resident aliens, but the implicated rights of American citizens claiming some injury as a result of the consular official’s rejection of the alien’s visa application. Mandel, a Marxist journalist and scholar invited to speak at an academic conference, had unsuccessfully applied for a visa to enter the United States to attend the conference, his application having been denied by a consular official. In an earlier visit to the US to lecture and participate in academic conferences, Mandel had made an unscheduled appearance not authorized by his visa. Mandel and co-plaintiffs brought suit against Richard Kleindienst to require him to grant a waiver to the denial of Mandel’s visa request on the grounds that denial of Mandel’s request had violated the First and Fifth Amendment rights, not of Mandel, but of the US citizens who had invited him to participate in their conference. Mandel is, sadly, a well-established precedent, but its holding is orthogonal to the point of law – the rights to due process of aliens legally present within our borders – for which Judge Bybee invokes its undeserved authority.

Having both acknowledged and lamented Mandel’s status as an authoritative precedent on which much current immigration law depends, I will digress briefly to note that a fair reading of the dissents by Justice Douglas and especially Justice Marshall ought to create substantial doubt in the mind of any disinterested reader that the case was correctly decided. Justice Marshall’s powerful and eloquent dissent deserves particular attention.

Today’s majority apparently holds that Mandel may be excluded and Americans’ First Amendment rights restricted because the Attorney General has given a “facially legitimate and bona fide reason” for refusing to waive Mandel’s visa ineligibility. I do not understand the source of this unusual standard. Merely “legitimate” governmental interests cannot override constitutional rights. Moreover, the majority demands only “facial” legitimacy and good faith, by which it means that this Court will never “look behind” any reason the Attorney General gives. No citation is given for this kind of unprecedented deference to the Executive, nor can I imagine (nor am I told) the slightest justification for such a rule.

Even the briefest peek behind the Attorney General’s reason for refusing a waiver in this case would reveal that it is a sham. The Attorney General informed appellees’ counsel that the waiver was refused because Mandel’s activities on a previous American visit “went far beyond the stated purposes of his trip . . . and represented a flagrant abuse of the opportunities afforded him to express his views in this country.” App. 68. But, as the Department of State had already conceded to appellees’ counsel, Dr. Mandel “was apparently not informed that [his previous] visa was issued only after obtaining a waiver of ineligibility and therefore [Mandel] may not have been aware of the conditions and limitations attached to the [previous] visa issuance.” App. 22. There is no basis in the present record for concluding that Mandel’s behavior on his previous visit was a “flagrant abuse” — or even willful or knowing departure — from visa restrictions. For good reason, the Government in this litigation has never relied on the Attorney General’s reason to justify Mandel’s exclusion. In these circumstances, the Attorney General’s reason cannot possibly support a decision for the Government in this case. But without even remanding for a factual hearing to see if there is any support for the Attorney General’s determination, the majority declares that his reason is sufficient to override appellees’ First Amendment interests.

Thus, the Mandel court’s own invocation of the “facially legitimate and bona fide reason” by which it justified the government’s refusal to grant Mandel a visa was itself neither facially legitimate nor bona fide, but a flagrant exercise of bad faith by the majority, invoking a made-up and pretextual justification for the refusal to grant Mandel a visa that even the government had not offered as a justification of its position. After disposing of this sham argument, Justice Marshall addressed the heart of the majority opinion, the broad grant of power to the Executive to exclude whole classes of aliens from the US.

The heart of appellants’ position in this case . . . is that the Government’s power is distinctively broad and unreviewable because “the regulation in question is directed at the admission of aliens.” Brief for Appellants 33. Thus, in the appellants’ view, this case is no different from a long line of cases holding that the power to exclude aliens is left exclusively to the “political” branches of Government, Congress, and the Executive.

These cases are not the strongest precedents in the United States Reports, and the majority’s baroque approach reveals its reluctance to rely on them completely. They include such milestones as The Chinese Exclusion Case, 130 U.S. 581 (1889), and Fong Yue Ting v. United States, 149 U.S. 698 (1893), in which this Court upheld the Government’s power to exclude and expel Chinese aliens from our midst.

But none of these old cases must be “reconsidered” or overruled to strike down Dr. Mandel’s exclusion, for none of them was concerned with the rights of American citizens. All of them involved only rights of the excluded aliens themselves. At least when the rights of Americans are involved, there is no basis for concluding that the power to exclude aliens is absolute. “When Congress’ exercise of one of its enumerated powers clashes with those individual liberties protected by the Bill of Rights, it is our ‘delicate and difficult task’ to determine whether the resulting restriction on freedom can be tolerated.” United States v. Robel, 389 U.S. 258, 264 (1967). As Robel and many other cases show, all governmental power — even the war power, the power to maintain national security, or the power to conduct foreign affairs — is limited by the Bill of Rights. When individual freedoms of Americans are at stake, we do not blindly defer to broad claims of the Legislative Branch or Executive Branch, but rather we consider those claims in light of the individual freedoms. This should be our approach in the present case, even though the Government urges that the question of admitting aliens may involve foreign relations and national defense policies.

The majority recognizes that the right of American citizens to hear Mandel is “implicated” in our case. There were no rights of Americans involved in any of the old alien exclusion cases, and therefore their broad counsel about deference to the political branches is inapplicable. Surely a Court that can distinguish between pre-indictment and post-indictment lineups, Kirby v. Illinois, 406 U.S. 682 (1972), can distinguish between our case and cases which involve only the rights of aliens.

I do not mean to suggest that simply because some Americans wish to hear an alien speak, they can automatically compel even his temporary admission to our country. Government may prohibit aliens from even temporary admission if exclusion is necessary to protect a compelling governmental interest. Actual threats to the national security, public health needs, and genuine requirements of law enforcement are the most apparent interests that would surely be compelling. But in Dr. Mandel’s case, the Government has, and claims, no such compelling interest. Mandel’s visit was to be temporary. His “ineligibility” for a visa was based solely on § 212(a)(28). The only governmental interest embodied in that section is the Government’s desire to keep certain ideas out of circulation in this country. This is hardly a compelling governmental interest. Section (a)(28) may not be the basis for excluding an alien when Americans wish to hear him. Without any claim that Mandel “live” is an actual threat to this country, there is no difference between excluding Mandel because of his ideas and keeping his books out because of their ideas. Neither is permitted. Lamont v. Postmaster General, supra.

Writing for the majority, Justice Blackmun – yes, that Justice Blackmun – attempted to deflect the clear violation of the First Amendment rights of American citizens resulting from the denial of Mandel’s visa application.

Appellees’ First Amendment argument would prove too much. In almost every instance of an alien excludable under § 212(a)(28), there are probably those who would wish to meet and speak with him. The ideas of most such aliens might not be so influential as those of Mandel, nor his American audience so numerous, nor the planned discussion forums so impressive. But the First Amendment does not protect only the articulate, the well known, and the popular. Were we to endorse the proposition that governmental power to withhold a waiver must yield whenever a bona fide claim is made that American citizens wish to meet and talk with an alien excludable under § 212(a)(28), one of two unsatisfactory results would necessarily ensue. Either every claim would prevail, in which case the plenary discretionary authority Congress granted the Executive becomes a nullity, or courts in each case would be required to weigh the strength of the audience’s interest against that of the Government in refusing a waiver to the particular alien applicant, according to some as yet undetermined standard. The dangers and the undesirability of making that determination on the basis of factors such as the size of the audience or the probity of the speaker’s ideas are obvious. Indeed, it is for precisely this reason that the waiver decision has, properly, been placed in the hands of the Executive.

This response might have been persuasive if there had in fact been a bona fide reason for denying Mandel’s visa application. However, the stated reason was clearly pretextual and a sham; the real reason for denying the application was Mandel’s political opinions, so the First Amendment argument raised by Appellees was entirely correct and unrebutted by Justice Blackmun’s majority opinion. Kleindienst v. Mandel was wrongly and dishonestly decided, and, like similar wrongly decided cases, e.g., Korematsu v. United States, deserves, as a matter of simple justice, no precedential weight.

Despite its having been demolished by Justice Marshall’s dissent, I am willing to stipulate for present purposes that the majority opinion in Mandel would be controlling if it were not distinguishable from the case decided by the 3-judge panel. But let us keep in mind two important takeaway points from Justice Marshall’s discussion: first, the disgraceful, racist lineage of the plenary powers doctrine as it relates to immigration, and second, and more importantly for assessing Judge Bybee’s dissent, the absence in Kleindienst v. Mandel of any distinction between the Constitutional rights or interests of citizens that are incidentally abridged by the refusal to admit non-resident aliens into the United States and the Constitutional due process rights of aliens legally residing in the United States, precisely the distinction that, Judge Bybee incorrectly asserts, is addressed by Mandel.

Judge Bybee begins by criticizing the 3-judge panel for distinguishing Mandel, which involved the Attorney General’s refusal to grant a waiver allowing Mandel entry to the US after a consular official denied his visa application, from an Executive Order promulgating sweeping immigration policy. Judge Bybee offers the following rebuttal:

First, the panel’s declaration that we cannot look behind the decision of a consular officer, but can examine the decision of the President stands the separation of powers on its head. We give deference to a consular officer making an individual determination, but not the President when making a broad, national security-based decision? With a moment’s thought, that principle cannot withstand the gentlest inquiry, and we have said so. See Bustamante v. Mukasey , 531 F.3d 1059, 1062 n.1 (9th Cir. 2008) (“We are unable to distinguish Mandel on the grounds that the exclusionary decision challenged in that case was not a consular visa denial, but rather the Attorney General’s refusal to waive Mandel’s inadmissibility. The holding is plainly stated in terms of the power delegated by Congress to the Executive.’ The Supreme Court said nothing to suggest that the reasoning or outcome would vary according to which executive officer is exercising the Congressionally-delegated power to exclude.”) (pp. 12-13)

Judge Bybee’s sarcasm is as misplaced as it is inappropriate. Mandel is a case about the exercise of a Congressionally authorized power to make a factual determination, normally delegated to a consular official, but in this case the determination at issue was made by the Attorney General reviewing the consular decision. In Bustamante the decision was made at the consular level. Big deal! The Mandel court ruled that such consular decisions to deny visas or higher-level decisions to deny waivers to lower-level decisions were not reviewable on the merits, even if the denials incidentally infringed upon the Constitutional rights of American citizens, provided that “a facially legitimate and bona fide reason” for the decision was provided. The deference accorded by Mandel to the factual decision of a consular official – or his superior – to deny the visa application of a non-resident alien, albeit one that incidentally affected the rights of an American citizen, is in no way comparable to a Presidential decision denying or abridging the Constitutional due-process rights of legally resident aliens, green-card holders and non-immigrant aliens holding valid visas.

Second, the promulgation of broad policy is precisely what we expect the political branches to do; Presidents rarely, if ever, trouble themselves with decisions to admit or exclude individual visa-seekers. See Knauff, 338 U.S. at 543 (“[B]ecause the power of exclusion of aliens is also inherent in the executive department of the sovereign, Congress may in broad terms authorize the executive to exercise the power . . . for the best interests of the country during a time of national emergency.”). If the panel is correct, it just wiped out any principle of deference to the executive. (p. 13)

Is there no deference to the executive unless we allow the Constitutional rights of American citizens and legally resident aliens to be trampled upon by the executive? Since when does “deference” mean “abject submission?” The implications of Judge Bybee’s argument lead straight to Korematsu v. United States. If Judge Bybee is correct, what Constitutional rights may not be abridged by the executive in the process of excluding aliens? Deference to the executive need not entail acquiescence in the denial of due process rights on an industrial scale.

Judge Bybee then invokes Fiallo v. Bell to support his position that broad policy decisions – in this case by Congress, which accorded preferential treatment to the natural mothers of illegitimate children over the natural fathers – are immune from scrutiny despite having discriminatory effects (pp. 13-14). In Fiallo, the Supreme Court upheld a provision of the 1952 Immigration and Nationality Act giving preference for immigration into the US to the legitimate parents of American citizens and to the illegitimate mothers (but not illegitimate fathers) of American citizens as well as to the legitimate children of American parents and to the illegitimate children of American mothers (but not American fathers). A group of illegitimate fathers of American children and illegitimate offspring of American fathers challenged this provision for discriminating on the basis of sex and legitimacy. The Fiallo Court relied on the Mandel “facially legitimate and bona fide reason” test to rule against the plaintiffs.

The panel’s holding that “exercises of policy making authority at the highest levels of the political branches are plainly not subject to the Mandel standard,” id., is simply irreconcilable with the Supreme Court’s holding that it could “see no reason to review the broad congressional policy choice at issue [there] under a more exacting standard than was applied in Kleindienst v. Mandel,” Fiallo, 430 U.S. at 795.

Having thoughtlessly embarked on the wrong road, Judge Bybee keeps marching relentlessly forward. Fiallo, like Mandel, is a case brought by American citizens claiming that their Constitutional rights not to be discriminated against had been incidentally abridged by a Congressional policy decision concerning which aliens, not otherwise eligible for entry into the US, shall be granted special waivers. While the case is related to Mandel, it was not entailed by Mandel, because deference to a consular decision about a question of fact need not entail deference to Congress about a matter of policy. Indeed, both the majority and the minority in Fiallo suggested reasons why the Congressional policy might have been judged to serve a legitimate public purpose. But again the key point is simply that the holding of the Fiallo court did not address the issue addressed by Washington, which is whether the President, by Executive Order, may deny the Constitutional rights of resident aliens, green card holders, and aliens holding valid visas.

Judge Bybee’s wrongheaded attack on the decision of the 3-judge panel reaches a crescendo of confusion in his discussion of Kerry v. Din (pp. 14-16), once again citing a case involving the Constitutional claim of an American citizen as a basis for challenging the denial of a visa to a non-resident alien. In Din, a US citizen whose Afghan husband had been denied an entry visa claimed that her Constitutional right to live with her husband had been violated without due process. After the Ninth Circuit Court of Appeals upheld her claim, the Supreme Court reversed that decision on appeal. Not only does Judge Bybee misunderstand the relevance of Din to the issues addressed by the 3-judge panel, he fails to recognize that the holding of the Din court has essentially no precedential weight, because the majority that upheld the decision not to grant Din’s husband a visa did not agree on the grounds for rejecting Din’s claim, three justices rejecting Din’s claim that she had a Constitutional right to live with her husband, and two justices arguing that even if she had such a Constitutional right, the consular decision to deny her husband’s visa request satisfied the Mandel “facially legitimate and bona fide reason” test.

Believing that, because Justice Kennedy’s opinion invoking the Mandel test was controlling, that opinion has precedential authority for other cases, Judge Bybee admonishes the 3-judge panel for ignoring Din. Judge Bybee is wrong on both counts; Din is irrelevant to the opinion of the 3-judge panel, and, even if it were relevant, the 3-judge panel would not have had to reckon with it, because the majority could not agree on the basis of the decision. And I can’t help but observe that, on its face, Justice Kennedy’s opinion that the decision of the consular official that Din’s husband was a terrorist threat merely because he had held a civil-service position under the Taliban government did not obviously satisfy even the weak Mandel test, as Justice Breyer cogently observed in his dissenting opinion.

When Judge Bybee finally does get to a discussion of relevant precedents 16 pages into his 25-page opinion, the best he is able to come up with is Rajah v. Mukasey. After the September 11 attacks, non-immigrant resident males over the age of 16 from 24 Muslim-majority countries plus North Korea were required to appear for registration and fingerprinting. The Second Circuit Court of Appeals upheld this requirement in view of potential risks of further terrorist attacks. Although these requirements were burdensome and discriminatory, they were hardly comparable to exclusion from the United States, so the willingness of the Rajah court to approve such provisions in the wake of the worst terrorist attack in US history does not come close to proving what Judge Bybee wants it to prove: that the law allows the President to revoke the Constitutional rights of resident aliens and prevent them from re-entering the country without even granting them a hearing. In other words, under Judge Bybee’s understanding, resident aliens denied re-entry into the country by Executive Order 13769 would be denied even the minimal “additional rights and privileges not extended to those on the threshold of entry” that, according to Leng May Ma v. Barber, quoted above, the Court has recognized.

The logical confusion of Judge Bybee’s conflation of two completely different classes of cases is actually quite impressive.

Judge Bybee (p. 20) also invokes 8 U.S.C. § 1182(f) as a legal basis for the executive order at issue. However, the statutory authority of the US Code does not automatically override the Constitutional right to a hearing of a legal resident alien denied re-entry into the United States. Nor is it obvious that the statute in question, referring to “the entry of any aliens or of any class of aliens into the United States,” includes resident aliens seeking re-entry into the United States. That is a question of statutory interpretation, and the courts are entitled to have the final say on such questions.

Judge Bybee (pp. 20-21) considers that the reasons offered by the President in issuing the executive order were facially legitimate and bona fide reasons, but he acknowledges that in Din, Justice Kennedy indicated that evidence of bad faith on the part of a consular officer who denied a visa might be grounds for questioning whether the reasons offered by the consular officer were “facially legitimate and bona fide.” After again chiding the 3-judge panel for not discussing Din, Judge Bybee (pp. 21-22) then makes the interesting remark that “it would be a huge leap to suggest that Din’s ‘bad faith’ exception also applies to the motives of broad-policy makers as opposed to those of consular officials.” Because the grounds for suspecting that the executive order was issued in bad faith are so varied and abundant, it is astonishing that Judge Bybee would consider it a leap to conclude that a bad-faith exception might apply to a policy maker, especially after Judge Bybee was so insistent earlier in his opinion that the Mandel “facially legitimate and bona fide reason” test, originally applied under the consular nonreviewability doctrine, applies seamlessly to both consular decisions and to broad policy decisions.

There are other defects of Judge Bybee’s decision that I could have touched on, but this post is already much too long, and I have devoted too much of my time to tracking them down and explaining them. But I hope others will continue.

Samuelson Rules the Seas

I think Nick Rowe is a great economist; I really do. And on top of that, he recently has shown himself to be a very brave economist, fearlessly claiming to have shown that Paul Samuelson’s classic 1980 takedown (“A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade“) of David Hume’s classic 1752 articulation of the price-specie-flow mechanism (PSFM) (“Of the Balance of Trade“) was all wrong. Although I am a great admirer of Paul Samuelson, I am far from believing that he was error-free. But I would be very cautious about attributing an error in pure economic theory to Samuelson. So if you were placing bets, Nick would certainly be the longshot in this match-up.

Of course, I should admit that I am not an entirely disinterested observer of this engagement, because in the early 1970s, long before I discovered the Samuelson article that Nick is challenging, Earl Thompson had convinced me that Hume’s account of PSFM was all wrong, the international arbitrage of tradable-goods prices implying that gold movements between countries couldn’t cause the price levels of those countries in terms of gold to deviate from a common level by more than the limits imposed by the operation of international commodity arbitrage. And Thompson’s reasoning was largely restated in the ensuing decade by Jacob Frenkel and Harry Johnson (“The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins”) and by Donald McCloskey and Richard Zecher (“How the Gold Standard Really Worked”), both in the 1976 volume The Monetary Approach to the Balance of Payments edited by Johnson and Frenkel, and by David Laidler in his essay “Adam Smith as a Monetary Economist,” explaining why in The Wealth of Nations Smith ignored his best friend Hume’s classic essay on PSFM. So the main point of Samuelson’s takedown of Hume and the PSFM was not even original. What was original about Samuelson’s classic article was his dismissal of the rationalization that PSFM applies when there are both non-tradable and tradable goods, so that national price levels can deviate from the common international price level in terms of tradables, showing that the inclusion of non-tradables into the analysis serves only to slow down the adjustment process after a gold-supply shock.

So let’s follow Nick in his daring quest to disprove Samuelson, and see where that leads us.

Assume that durable sailing ships are costly to build, but have low (or zero for simplicity) operating costs. Assume apples are the only tradeable good, and one ship can transport one apple per year across the English Channel between Britain and France (the only countries in the world). Let P be the price of apples in Britain, P* be the price of apples in France, and R be the annual rental of a ship, (all prices measured in gold), then R=ABS(P*-P).

I am sorry to report that Nick has not gotten off to a good start here. There cannot be only one tradable good. It takes two to tango and two to trade. If apples are being traded, they must be traded for something, and that something is something other than apples. And, just to avoid misunderstanding, let me say that that something is also something other than gold. Otherwise, there couldn’t possibly be a difference between the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of PSFM and the PSFM itself. We need at least three goods – two real goods plus gold – providing a relative price between the two real goods and two absolute prices quoted in terms of gold (the numeraire). So if there are at least two absolute prices, then Nick’s equation for the annual rental of a ship R must be rewritten as follows: R=ABS[P(A)*-P(A)]+ABS[P(SE)*-P(SE)], where P(A) is the price of apples in Britain, P(A)* is the price of apples in France, P(SE) is the price of something else in Britain, and P(SE)* is the price of that same something else in France.
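To make the two-good point concrete, here is a minimal numerical sketch in Python. The prices are made up purely for illustration (they are not taken from Nick’s post), and the calculation simply assumes that each good is carried toward the market where it fetches the higher price, so that a ship earns both price gaps over a round trip:

    # Illustrative prices in terms of gold (made-up numbers, not from Nick's post)
    P_A, P_A_star = 10.0, 12.0     # apples in Britain and in France
    P_SE, P_SE_star = 8.0, 6.5     # "something else" in Britain and in France

    # If apples flow toward France and the other good toward Britain, a ship
    # earns both price gaps over a round trip, so the annual rental reflects
    # the sum of the two gaps rather than the gap in any single good.
    R = abs(P_A_star - P_A) + abs(P_SE_star - P_SE)
    print(R)    # 2.0 + 1.5 = 3.5 (in gold)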

OK, now back to Nick:

In this model, the Law of One Price (P=P*) will only hold if the volume of exports of apples (in either direction) is unconstrained by the existing stock of ships, so rentals on ships are driven to zero. But then no ships would be built to export apples if ship rentals were expected to be always zero, which is a contradiction of the Law of One Price because arbitrage is impossible without ships. But an existing stock of ships represents a sunk cost (sorry) and they keep on sailing even as rentals approach zero. They sail around Samuelson’s Iceberg model (sorry) of transport costs.

This is a peculiar result in two respects. First, it suggests, perhaps inadvertently, that the law of one price requires equality between the prices of a good in every location, when in fact it only requires that prices in different locations not differ by more than the cost of transportation. The second, more serious, peculiarity is that, with only one good being traded, the price difference in that single good between the two locations would have to be sufficient to cover the cost of building the ship. That suggests that a very large price difference in that single good is needed to justify building the ship, but in fact there are at least two goods being shipped, so it is the sum of the price differences of the two goods that must be sufficient to cover the cost of building the ship. The more tradable goods there are, the smaller the price difference in any single good necessary to cover the cost of building the ship.

Again, back to Nick:

Start with zero exports, zero ships, and P=P*. Then suppose, like Hume, that some of the gold in Britain magically disappears. (And unlike Hume, just to keep it simple, suppose that gold magically reappears in France.)

Uh-oh. Just to keep it simple? I don’t think so. To me, keeping it simple would mean looking at one change in initial conditions at a time. The one relevant change – the one discussed by Hume – is a reduction in the stock of gold in Britain. But Nick is looking at two changes – a reduced stock of gold in Britain and an increased stock of gold in France – simultaneously. Why does it matter? Because the key point at issue is whether a national price level – i.e., Britain’s – can deviate from the international price level. In Nick’s two-country example, there should be one national price level and one international price level, which means that the only price level subject to change as a result of the change in initial conditions should be, as in Hume’s example, the British price level, while the French price level – representing the international price level – remains constant. In a two-country model, this can only be made plausible by assuming that France is large compared to Britain, so that a loss of gold could affect the British price level without changing the French price level. Once again back to Nick.

The price of apples in Britain drops, the price of apples in France rises, and so the rent on a ship is now positive because you can use it to export apples from Britain to France. If that rent is big enough, and expected to stay big long enough, some ships will be built, and Britain will export apples to France in exchange for gold. Gold will flow from France to Britain, so the stock of gold will slowly rise in Britain and slowly fall in France, and the price of apples will likewise slowly rise in Britain and fall in France, so ship rentals will slowly fall, and the price of ships (the Present Value of those rents) will eventually fall below the cost of production, so no new ships will be built. But the ships already built will keep on sailing until rentals fall to zero or they rot (whichever comes first).

So notice what Nick has done. Instead of confronting the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of Hume, which asserts that a world price level determines the national price level, Nick has simply begged the question by not assuming that the world price of gold, which determines the world price level, is constant. Instead, he posits a decreased value of gold in France, owing to an increased French stock of gold, and an increased value of gold in Britain, owing to a decreased British stock of gold, and then conflates the resulting adjustment in the value of gold with the operation of commodity arbitrage. Why Nick thinks his discussion is relevant to the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique escapes me.

The flow of exports and hence the flow of specie is limited by the stock of ships. And only a finite number of ships will be built. So we observe David Hume’s price-specie flow mechanism playing out in real time.

This bugs me. Because it’s all sorta obvious really.

Yes, it bugs me, too. And, yes, it is obvious. But why is it relevant to the question under discussion, which is whether there is an international price level in terms of gold that constrains movements in national price levels in countries in which gold is the numeraire? In other words, if there is a shock to the gold stock of a small open economy, how much will the price level in that small open economy change? By the percentage change in the stock of gold in that country – as Hume maintained – or by the minuscule percentage change in the international stock of gold, prices in terms of gold in the country that has lost gold being constrained from changing by more than the cost of arbitrage operations allows? Nick’s little example is simply orthogonal to the question under discussion.
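A back-of-the-envelope calculation, with made-up magnitudes, shows how far apart the two answers are. Suppose Britain holds 50 units of a world gold stock of 1,000, and 25 of Britain’s units magically disappear, as in Hume’s thought experiment:

    # Made-up magnitudes, for illustration only
    world_gold = 1000.0      # world stock of gold
    britain_gold = 50.0      # Britain's share (a small open economy)
    shock = 25.0             # gold that magically disappears from Britain

    # Hume's answer: Britain's price level falls in proportion to Britain's own loss.
    hume_change = -shock / britain_gold          # -50%

    # Arbitrage answer: prices in terms of gold are tied to the world price level,
    # which falls only in proportion to the loss in the world stock of gold.
    arbitrage_change = -shock / world_gold       # -2.5%

    print(hume_change, arbitrage_change)

On the arbitrage view, the British price level can deviate from that world figure only within the band allowed by the cost of moving goods between the two countries.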

I skip Nick’s little exegetical discussion of Hume’s essay and proceed to what I think is the final substantive point that Nick makes.

Prices don’t just arbitrage themselves. Even if we take the limit of my model, as the cost of building ships approaches zero, we need to explain what process ensures the Law of One Price holds in equilibrium. Suppose it didn’t…then people would buy low and sell high…..you know the rest.

There are different equilibrium conditions being confused here. The equilibrium arbitrage conditions are not the same as the conditions for international monetary equilibrium. Arbitrage conditions for individual commodities can hold even if the international distribution of gold is not in equilibrium. So I really don’t know what conclusion Nick is alluding to here.

But let me end on what I hope is a conciliatory and constructive note. As always, Nick is making an insightful argument, even if it is misplaced in the context of Hume and PSFM. And the upshot of Nick’s argument is that transportation costs are a function of the dispersion of prices, because, as the incentive to ship products to capture arbitrage profits increases, the cost of shipping will increase as arbitragers bid up the value of resources specialized to the process of transporting stuff. So the assumption that the cost of transportation can be treated as a parameter is not really valid, which means that the constraints imposed on national price-level movements are not really parametric; they are endogenously determined within an appropriately specified general-equilibrium model. If Nick is willing to settle for that proposition, I don’t think that our positions are that far apart.
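Here is a minimal sketch of that endogeneity, with made-up functional forms chosen only to illustrate the mechanism: the marginal cost of shipping rises with the volume arbitragers try to move, while the price gap they are chasing shrinks as shipments increase, so both the equilibrium volume of arbitrage and the sustainable price gap are determined inside the model rather than given as parameters.

    # Made-up functional forms, purely illustrative
    def shipping_cost(q):
        return 0.5 + 0.1 * q     # marginal cost of moving the q-th unit rises with volume

    def price_gap(q):
        return 4.0 - 0.5 * q     # the gap being arbitraged shrinks as shipments increase

    q = 0.0
    while price_gap(q) > shipping_cost(q):   # keep shipping as long as arbitrage pays
        q += 0.01

    print(round(q, 2), round(price_gap(q), 2))   # roughly 5.84 units shipped, gap of about 1.08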

Cyclical versus Secular Causes of Stagnation

Nick Rowe and Scott Sumner have recently had an interesting little debate about whether the slowdown in real GDP growth and labor productivity since the 2007-09 downturn is the result of cyclical or secular factors. Nick argues that successful inflation targeting in the two decades before the 2007 downturn had given rise to entrepreneurial expectations of stable aggregate demand, thereby providing a supportive macroeconomic environment for the long-term investment that generates rising labor productivity over time. By undermining confidence in macroeconomic stability, the 2007-09 downturn diminished the willingness of businesses to continue making long-term investments and thus compromised one of the institutional pillars supporting long-term investment and productivity growth. Despite a recovery, expectations of future aggregate demand are now held with less confidence – a higher perceived variance – than previously, thereby reducing entrepreneurial willingness to commit to the long-term capital expansion that increases productivity.

Scott is skeptical of the argument, because productivity growth had already started to decline after the 2001 downturn. Of course, one could argue that geopolitical uncertainty after the 9/11 attack and the invasions of Afghanistan and Iraq could have had a similar depressing effect on investment well before the 2007 downturn. So the decline in productivity growth that was already underway at the time of the 2007 downturn is not necessarily inconsistent with Nick’s basic story. But Scott at least partially defends himself against that response by showing that real long-term investment as a share of GDP rose sharply after the 2001 downturn and was well above the levels of the 1950s and 1960s.

Seeing no reason why the pace of productivity growth couldn’t have been affected by both cyclical and secular forces, I am happy to agree with both Nick and Scott. But I also have my own theory about the slowdown in productivity growth, which I have discussed previously, so this seems like a good time to weigh in again on the topic. As I pointed out in a 2015 post, one characteristic that distinguishes the 2007-09 downturn from earlier downturns is that it was associated with relatively large sectoral shifts in demand. Thus, the 2007-09 downturn was characterized by a higher percentage of jobs lost that were not subsequently restored than was the case in earlier downturns. In earlier downturns, the decline in aggregate demand caused workers to be laid off temporarily when demand and output fell, but a large percentage of laid-off workers were later rehired by their former employers when demand and output recovered. And even many of the laid-off workers who weren’t rehired by their previous employers eventually found jobs doing work very similar to what they had been doing before losing their old jobs.

The depth and severity of recessions can be measured not just by the unemployment rate, but also by the long-term unemployment rate. What set the 2007-09 downturn and recovery apart from earlier downturns – even the 1981-82 downturn, in which the unemployment rate rose to almost 11 percent, higher than the 10 percent rate at the depth of the 2007-09 downturn – was a long-term unemployment rate substantially higher, and subsequently slower to decline, than in any other post-World-War II downturn. I quote from a recent article on long-term unemployment:

In January 2017, there were 1.85 million long-term unemployed. The number first dropped below two million in May 2015. That means 24.2 percent of the unemployed have been looking for work for six months. That’s better than the record high of 46 percent in the second quarter of 2010.

Sadly, it’s barely better than the darkest days of the 1981 recession. At that point, 26 percent of the unemployed were out of work for more than six months. On the other hand, total unemployment was worse than it is today. There was a 10.8 percent overall unemployment rate. In other words, the Great Recession created a higher percent of long-term unemployment. (Source: “Potential Causes and Implications of the Rise in Long-Term Unemployment,” The Federal Reserve Bank of Richmond, September 2011.)

Here’s how I put it in 2015.

[T]he 2008-09 downturn was associated with major sectoral shifts that caused an unusually large reallocation of labor from industries like construction and finance to other industries so that an unusually large number of workers have had to find new jobs doing work different from what they were doing previously. In many recessions, laid-off workers are either re-employed at their old jobs or find new jobs doing basically the same work that they had been doing at their old jobs. When workers transfer from one job to another similar job, there is little reason to expect a decline in their productivity after they are re-employed, but when workers are re-employed doing something very different from what they did before, a significant drop in their productivity in their new jobs is likely.

In addition, the number of long-term unemployed (27 weeks or more) since the 2008-09 downturn has been unusually high. Workers who remain unemployed for an extended period of time tend to suffer an erosion of skills, causing their productivity to drop when they are re-employed, even if they are able to find a new job in their old occupation. It seems likely that the percentage of long-term unemployed workers that switch occupations is larger than the percentage of short-term unemployed workers that switch occupations, so the unusually high rate of long-term unemployment has probably had a doubly negative effect on labor productivity.

Long-term unemployment has adverse effects on health and many other metrics of well-being, effects that aren’t confined to the unemployed, but extend to their families, friends and communities. An increase in long-term unemployment, even if originally caused by an aggregate demand shock, is associated with a long-term negative supply shock. So it’s not surprising that the unusually and persistently high rate of long-term unemployment after the 2007-09 downturn, causing a massive loss of human capital, has depressed the subsequent growth in labor productivity. In my 2015 post, I tried to provide an optimistic interpretation of this phenomenon, but my optimism was misplaced, because the damage inflicted by long-term unemployment is very often irreversible, and rates of long-term unemployment have remained stubbornly high notwithstanding the steady decline in the overall unemployment rate.

Accounting for a disproportionate share of the long-term unemployed, discouraged older workers, chronically unable to find new jobs, have prematurely departed from the labor force. These older workers have presumably been replaced by younger entrants into the labor force, and one would suppose that the productivity of the younger workers is, on average, substantially lower than that of the older and more experienced workers whom they have replaced, though, as they gain experience and acquire skills, the productivity of the new workers will rise over time. Thus the demographic shift in the labor force is another reason for the low productivity growth since the 2007-09 downturn. But that effect, though largely demographic, has also had a cyclical component, making it difficult to disentangle the cyclical from the secular causes of sluggish productivity growth.

That difficulty is further compounded by another contributory cause of slow productivity growth. In my 2016 post, I discussed the late Walter Oi’s idea that labor is not really a variable factor of production as it is typically treated in simplified models, but a quasi-fixed factor. Here’s how Oi explained the idea:

For analytic purposes fixed employment costs can be separated into two categories called, for convenience, hiring and training costs. Hiring costs are defined as those costs that have no effect on a worker’s productivity and include outlays for recruiting, for processing payroll records, and for supplements such as unemployment compensation. These costs are closely related to the number of new workers and only indirectly related to the flow of labor’s services. Training expenses, on the other hand, are investments in the human agent, specifically designed to improve a worker’s productivity.

The training activity typically entails direct money outlays as well as numerous implicit costs such as the allocation of old workers to teaching skills and rejection of unqualified workers during the training period.

So, if the 2007-09 downturn and recovery were associated with an unusually high flow of workers from old jobs into new jobs, then firms have incurred an unusually high level of training expenses as they have brought workers into new jobs. Those large investments in training new workers have inevitably caused measured labor productivity to lag below trends established in earlier periods, when the fraction of workers entering new jobs or requiring training to learn new skills was likely smaller than it has been since 2009. This idea, at any rate, does provide some reason to hope for at least a modest improvement in productivity and economic growth over time, even if the human cost of almost a decade of extremely high long-term unemployment is now largely irremediable and irretrievable.
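A simple illustration of Oi’s point, with made-up numbers: treat training as a quasi-fixed cost that temporarily lowers the output of newly hired workers (and absorbs some of their trainers’ time), and measured output per worker falls mechanically when the share of workers starting new jobs is unusually high.

    # Made-up numbers, purely illustrative of the quasi-fixed-cost idea
    def measured_productivity(new_hire_share, full_output=1.0, training_drag=0.5):
        # Experienced workers produce full_output; new hires produce less while
        # training absorbs part of their (and their trainers') time.
        return (1 - new_hire_share) * full_output + new_hire_share * full_output * (1 - training_drag)

    print(round(measured_productivity(0.05), 3))   # normal turnover:    0.975
    print(round(measured_productivity(0.20), 3))   # heavy reallocation: 0.9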

Richard Lipsey and the Phillips Curve Redux

Almost three and a half years ago, I published a post about Richard Lipsey’s paper “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” The paper, originally presented at the 2013 meeting of the History of Economics Society, has just been published in the Journal of the History of Economic Thought, under a slightly revised title, “The Phillips Curve and an Assumed Unique Macroeconomic Equilibrium in Historical Context.” The abstract of the revised published version of the paper is different from the earlier abstract included in my 2013 post. Here is the new abstract.

An early post-WWII debate concerned the most desirable demand and inflationary pressures at which to run the economy. Context was provided by Keynesian theory devoid of a full employment equilibrium and containing its mainly forgotten, but still relevant, microeconomic underpinnings. A major input came with the estimates provided by the original Phillips curve. The debate seemed to be rendered obsolete by the curve’s expectations-augmented version with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with stable inflation. The current behavior of economies with successful inflation targeting is inconsistent with this natural-rate view, but is consistent with evolutionary theory in which economies have a wide range of GDP compatible with stable inflation. Now the early post-WWII debates are seen not to be as misguided as they appeared to be when economists came to accept the assumptions implicit in the expectations-augmented Phillips curve.

Publication of Lipsey’s article nicely coincides with the publication of Roger Farmer’s new book Prosperity for All, which I discussed in my previous post. A key point that Roger makes is that the assumption of a unique equilibrium, which underlies modern macroeconomics and the vertical long-run Phillips Curve, is neither theoretically compelling nor consistent with the empirical evidence. Lipsey’s article powerfully reinforces those arguments. Access to Lipsey’s article is gated on the JHET website, so in addition to the abstract, I will quote the introduction and a couple of paragraphs from the conclusion.

One important early post-WWII debate, which took place particularly in the UK, concerned the demand and inflationary pressures at which it was best to run the economy. The context for this debate was provided by early Keynesian theory with its absence of a unique full-employment equilibrium and its mainly forgotten, but still relevant, microeconomic underpinnings. The original Phillips Curve was highly relevant to this debate. All this changed, however, with the introduction of the expectations-augmented version of the curve with its natural rate of unemployment, and associated unique equilibrium GDP, as the only values consistent with a stable inflation rate. This new view of the economy found easy acceptance partly because most economists seem to feel deeply in their guts — and their training predisposes them to do so — that the economy must have a unique equilibrium to which market forces inevitably propel it, even if the approach is sometimes, as some believe, painfully slow.

The current behavior of economies with successful inflation targeting is inconsistent with the existence of a unique non-accelerating-inflation rate of unemployment (NAIRU) but is consistent with evolutionary theory in which the economy is constantly evolving in the face of path-dependent, endogenously generated, technological change, and has a wide range of unemployment and GDP over which the inflation rate is stable. This view explains what otherwise seems mysterious in the recent experience of many economies and makes the early post-WWII debates not seem as silly as they appeared to be when economists came to accept the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment. One thing that stands in the way of accepting this view, however, is the tyranny of the generally accepted assumption of a unique, self-sustaining macroeconomic equilibrium.

This paper covers some of the key events in the theory concerning, and the experience of, the economy’s behavior with respect to inflation and unemployment over the post-WWII period. The stage is set by the pressure-of-demand debate in the 1950s and the place that the simple Phillips curve came to play in it. The action begins with the introduction of the expectations-augmented Phillips curve and the acceptance by most Keynesians of its implication of a unique, self-sustaining macro equilibrium. This view seemed not inconsistent with the facts of inflation and unemployment until the mid-1990s, when the successful adoption of inflation targeting made it inconsistent with the facts. An alternative view is proposed, one that is capable of explaining current macro behavior and reinstates the relevance of the early pressure-of-demand debate. (pp. 415-16)

In reviewing the evidence that stable inflation is consistent with a range of unemployment rates, Lipsey generalizes the concept of a unique NAIRU to a non-accelerating-inflation band of unemployment (NAIBU) within which multiple rates of unemployment are consistent with a basically stable expected rate of inflation. In an interesting footnote, Lipsey addresses a possible argument against the relevance of the empirical evidence for policy makers based on the Lucas critique.

Some might raise the Lucas critique here, arguing that one finds the NAIBU in the data because policymakers are credibly concerned only with inflation. As soon as policymakers made use of the NAIBU, the whole unemployment-inflation relation that has been seen since the mid-1990s might change or break. For example, unions, particularly in the European Union, where they are typically more powerful than in North America, might alter their behavior once they became aware that the central bank was actually targeting employment levels directly and appeared to have the power to do so. If so, the Bank would have to establish that its priorities were lexicographically ordered with control of inflation paramount so that any level-of-activity target would be quickly dropped whenever inflation threatened to go outside of the target bands. (pp. 426-27)

I would just mention in this context that in this 2013 post about the Lucas critique, I pointed out that in the paper in which Lucas articulated his critique, he assumed that the only possible source of disequilibrium was a mistake in expected inflation. If everything else is working well, then incorrect inflation expectations will make things worse. But if there are other sources of disequilibrium, it is not clear that incorrect inflation expectations will make things worse; they could make things better. That is a point that Lipsey and Kelvin Lancaster taught the profession in their classic article “The General Theory of Second Best,” published 20 years before Lucas published his critique of econometric policy evaluation.
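To make the contrast at the heart of Lipsey’s argument concrete, here is a stylized sketch (not Lipsey’s formal model, and with made-up parameters) of an accelerationist Phillips curve with a unique NAIRU alongside a NAIBU over which inflation is simply stable:

    # Stylized sketch with made-up parameters; not Lipsey's formal model
    def accelerationist(inflation, u, u_star=0.05, slope=2.0):
        # Inflation keeps rising (or falling) unless u equals the unique NAIRU u_star.
        return inflation + slope * (u_star - u)

    def naibu(inflation, u, u_low=0.04, u_high=0.08, slope=2.0):
        # Inside the band [u_low, u_high] inflation stays put; outside it, it moves.
        if u < u_low:
            return inflation + slope * (u_low - u)
        if u > u_high:
            return inflation + slope * (u_high - u)
        return inflation

    pi = 0.02
    for _ in range(10):
        pi = accelerationist(pi, u=0.045)
    print(round(pi, 3))    # 0.12: inflation ratchets up whenever u sits below the NAIRU

    pi = 0.02
    for _ in range(10):
        pi = naibu(pi, u=0.045)
    print(round(pi, 3))    # 0.02: the same unemployment rate is consistent with stable inflation

The band’s boundaries and slopes here are arbitrary; the point is only that, within the NAIBU, a stable inflation rate does not pick out a unique equilibrium rate of unemployment.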

I conclude by quoting Lipsey’s penultimate paragraph (the final paragraph being a quote from Lipsey’s paper on the Phillips Curve in the Blaug and Lloyd volume Famous Figures and Diagrams in Economics, which I quoted in full in my 2013 post).

So we seem to have gone full circle from the early Keynesian view in which there was no unique level of GDP to which the economy was inevitably drawn, through a simple Phillips curve with its implied trade-off, to an expectations-augmented Phillips curve (or any of its more modern equivalents) with its associated unique level of GDP, and finally back to the early Keynesian view in which policymakers had an option as to the average pressure of aggregate demand at which economic activity could be sustained. However, the modern debate about whether to aim for [the high or low range of stable unemployment rates] is not a debate about inflation versus growth, as it was in the 1950s, but between those who would risk an occasional rise of inflation above the target band as the price of getting unemployment as low as possible and those who would risk letting unemployment fall below that indicated by the lower boundary of the NAIBU as the price of never risking an acceleration of inflation above the target rate. (p. 427)


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
