Archive for the 'Samuelson' Category

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having been borrowed from physics, the meaning originally attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if no exogenous change – in population, technology, tastes, and so on – is introduced into the system, it would seem that there would be no reason for the prices paid and quantities produced in that system to change. But that view of an economic system was a very restrictive one, because such a large share of economic activity – savings and investment – is predicated on the assumption and expectation of change.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill, who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices based on some scale of preferences subject to limitations or constraints imposed by their capacities, endowments, technology and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between marginal rates of substitution of goods in consumption and in production to their corresponding price ratios, an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required in the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but, instead, ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
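The condition can be sketched in symbols (the notation is mine, not Hayek's or Hicks's): in a static equilibrium every agent equates his marginal rate of substitution between any two goods to their price ratio; in an intertemporal equilibrium the same condition holds for dated goods, except that the unobserved future price is replaced by the price the agent expects, and mutual consistency of plans then requires that those expected prices coincide across agents.

```latex
% Static equilibrium: for any agent h and any pair of goods i and j
MRS^{h}_{ij} = \frac{p_i}{p_j}

% Intertemporal equilibrium: goods dated t and t+1 are distinct commodities, but the
% future price is unobserved, so the relevant ratio involves agent h's expected price
MRS^{h}_{t,\,t+1} = \frac{p_t}{p^{e,h}_{t+1}}

% Mutual consistency of the optimal plans of agents h and k requires
p^{e,h}_{t+1} = p^{e,k}_{t+1} \quad \text{for all } h,\, k
```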

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article “Economics and Knowledge.” But it was through J. R. Hicks’s Value and Capital published two years later in 1939 that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium in which current markets clear, but individual expectations of future prices are not the same, Hicks did not claim originality, but instead of crediting Hayek for the concept, or even mentioning Hayek’s 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek’s 1937 article, Hayek had already explained the concept in a 1928 article “Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des ‘Geldwertes.’” (English translation: “Intertemporal price equilibrium and movements in the value of money.”)

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek’s early work – not only his 1928 and 1937 articles but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting – but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model – in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.
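A schematic way of writing down the commodity space of the ADM model may make the point clearer (again, my notation, and only a sketch of the standard textbook formulation): every commodity is indexed not just by its physical description but also by its delivery date and by the state of the world in which it is delivered, and every agent faces a single budget constraint over the complete set of markets at time zero.

```latex
% A commodity is indexed by good g, date t, and state s, with price p_{g,t,s} quoted at time zero
x_{g,t,s}, \qquad p_{g,t,s}

% Agent h's single lifetime budget constraint over the complete set of markets,
% where \omega^{h}_{g,t,s} is h's endowment of the (g,t,s) commodity
\sum_{g,t,s} p_{g,t,s}\, x^{h}_{g,t,s} \;\le\; \sum_{g,t,s} p_{g,t,s}\, \omega^{h}_{g,t,s}
```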

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek’s contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets that characterizes the ADM model, intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can’t fully hedge against, or insure, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before “time” begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say’s Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources that were beyond those at Hayek’s, or probably anyone else’s, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of it, and for offering seriously inappropriate policy recommendations based on that incomplete understanding.

In Value and Capital, Hicks also expressed doubts about whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations, which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term “clear”) while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified and possibly abandoned. In The Pure Theory of Capital, Hayek discussed Hicks’s temporary-equilibrium method as a possible way of achieving the modification in the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek’s rejection of the temporary-equilibrium method seems in retrospect to have been one of his worst theoretical – or perhaps meta-theoretical – blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both holding an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions – in a very general sense – under which the system of financial intermediaries breaks down. The key variables, of course, are the vectors of expected prices subject to which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss’s analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. In that case, price flexibility in current markets cannot, even in principle, produce a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn’t lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are in some sense “too far out of whack.”
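A deliberately crude toy example (nothing like Bliss’s actual model, and with functional forms invented purely for illustration) may convey the flavor of the non-existence problem: if current demand for a good depends on the price agents expect to prevail later, then for sufficiently pessimistic expectations there may be no current price at which the market clears.

```python
# Toy illustration (not Bliss's model): current demand for a storable good is purely
# speculative, rising with the gap between the expected future price and the current
# price; current supply is fixed. If the expected price is low enough, demand falls
# short of supply at every nonnegative current price, so no market-clearing price exists.

import numpy as np

def excess_demand(p, p_expected, supply=1.0):
    """Excess demand in the current market, given an expected future price."""
    demand = max(0.0, p_expected - p)  # buy only if a capital gain is expected
    return demand - supply

prices = np.linspace(0.0, 5.0, 501)  # grid of candidate current prices

for p_expected in (3.0, 0.5):
    roots = [p for p in prices if abs(excess_demand(p, p_expected)) < 1e-9]
    if roots:
        print(f"expected price {p_expected}: market clears at p = {roots[0]:.2f}")
    else:
        print(f"expected price {p_expected}: no market-clearing current price exists")
```

With the optimistic expectation the market clears; with the pessimistic one there is excess supply at every nonnegative price, which is the sense in which expectations that are “too far out of whack” can preclude even a temporary equilibrium.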

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in his Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate to most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.
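In schematic form (a sketch in my notation, not Samuelson’s), the comparative-statics method works roughly as follows: an equilibrium condition links an endogenous variable to a parameter, and the effect of a parameter change is obtained by implicit differentiation at an equilibrium assumed to exist, to be locally unique, and to be stable.

```latex
% Equilibrium condition relating an endogenous variable x to a parameter \alpha
F(x, \alpha) = 0

% Comparative-statics effect of the parameter change, from the implicit function theorem
\frac{dx}{d\alpha} = -\,\frac{\partial F / \partial \alpha}{\partial F / \partial x},
\qquad \frac{\partial F}{\partial x} \neq 0

% The sign of dx/d\alpha is informative only if the economy actually moves from the old
% equilibrium to the new one, i.e., only if the equilibrium is locally unique and stable.
```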

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of the behavior of a single market in which the market was dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.


What’s so Great about Science? or, How I Learned to Stop Worrying and Love Metaphysics

A couple of weeks ago, a lot of people in a lot of places marched for science. What struck me about those marches is that there is almost nobody out there who is openly and explicitly campaigning against science. There are, of course, a few flat-earthers who, if one looks for them very diligently, can be found. But does anyone – including the flat-earthers themselves – think that they are serious? There are also Creationists who believe that the earth was created and designed by a Supreme Being – usually along the lines of the Biblical account in the Book of Genesis. But Creationists don’t reject science in general; they reject a particular scientific theory, because they believe it to be untrue, and they try to defend their beliefs with a variety of arguments couched in scientific terms. I don’t defend Creationist arguments, but just because someone makes a bad scientific argument, it doesn’t mean that the person making the argument is an opponent of science. To be sure, the reason that Creationists make bad arguments is that they hold a set of beliefs about how the world came to exist that aren’t based on science but on some religious or ideological belief system. But people come up with arguments all the time to justify beliefs for which they have no evidentiary or “scientific” basis.

I mean one of the two greatest scientists that ever lived criticized quantum mechanics, because he couldn’t accept that the world was not fully determined by the laws of nature, or, as he put it so pithily: “God does not play dice with the universe.” I understand that Einstein was not religious, and wasn’t making a religious argument, but he was basing his scientific view of what an acceptable theory should be on certain metaphysical predispositions that he held, and he was expressing his disinclination to accept a theory inconsistent with those predispositions. A scientific argument is judged on its merits, not on the motivations for advancing the argument. And I won’t even discuss the voluminous writings of the other one of the two greatest scientists who ever lived on alchemy and other occult topics.

Similarly, there are climate-change deniers who question the scientific basis for asserting that temperatures have been rising around the world, and that the increase in temperatures results from human activity that discharges greenhouse gasses into the atmosphere. Deniers of global warming may be biased and may be making bad scientific arguments, but the mere fact – and for purposes of this discussion I don’t dispute that it is a fact – that global warming is real and caused by human activity does not mean that disputing those facts unmasks the disputer as an opponent of science. R. A. Fisher, the greatest mathematical statistician of the first half of the twentieth century, who developed most of the statistical techniques now used in experimental research, severely damaged his reputation by rejecting or dismissing evidence that smoking tobacco is a primary cause of cancer. Some critics accused Fisher of having been compromised by financial inducements from the tobacco industry, while others attributed his position to his own smoking habits or anti-puritanical tendencies. In any event, Fisher’s arguments against a causal link between smoking tobacco and lung cancer are now viewed as an embarrassing stain on an otherwise illustrious career. But Fisher’s lapse of judgment, and perhaps of ethics, doesn’t justify accusing him of opposition to science. Climate-change deniers don’t reject science; they reject or disagree with the conclusions of most climate scientists. They may have lousy reasons for their views – either that the climate is not changing or that whatever change has occurred is unrelated to the human production of greenhouse gasses – but holding wrong or biased views doesn’t make someone an opponent of science.

I don’t say that there are no people who dislike science – I mean who don’t like it because of what it stands for, not because they find it difficult or boring. Such people may be opposed to teaching science and to funding scientific research, and they don’t want scientific knowledge to influence public policy or the way people live. But, as far as I can tell, they have little influence. There is just no one out there who wants to outlaw scientific research or who is trying to criminalize the teaching of science. They may not want to fund science, but they aren’t trying to ban it. In fact, I doubt that the prestige and authority of science have ever been higher than they are now. Certainly religion, especially organized religion, to which science was once subordinate if not subservient, no longer exercises anything near the authority that science now does.

The reason for this extended introduction into the topic that I really want to discuss is to provide some context for my belief that economists worry too much about whether economics is really a science. It was such a validation for economists when the Swedish Central Bank piggy-backed on the storied Nobel Prize to create its ersatz “Nobel Memorial Prize” for economic science. (I note with regret the recent passing of William Baumol, whose failure to receive the Nobel Prize in economics, like that of Armen Alchian, was in fact a deplorable failure of good judgment on the part of the Nobel Committee.) And the self-consciousness of economists about the possibly dubious status of economics as a science is a reflection of the exalted status of science in society. So naturally, if one is seeking to increase the prestige of one’s own occupation and of the intellectual discipline in which one does research, it helps enormously to be able to say: “Oh, yes, I am an economist, and economics is a science, which means that I really am a scientist, just like those guys that win Nobel Prizes. It also helps to be able to show that your scientific research involves a lot of mathematics, because scientists use math in their theories, sometimes a lot of math, which makes it hard for non-scientists to understand what scientists are doing. We economists also use math in our theories, sometimes a lot of math, and that’s why it’s just as hard for non-economists to understand what we economists are doing as it is to understand what real scientists are doing. So we really are scientists, aren’t we?”

Where did this obsession with science come from? I think it’s fairly recent, but my sketchy knowledge of the history of science prevents me from getting too deeply into that discussion. Until relatively modern times, science was subsumed under the heading of philosophy – Greek for the love of wisdom. But philosophy is a very broad subject, so eventually that part of philosophy that was concerned with the world as it actually exists came to be called natural philosophy, as opposed to, say, ethical and moral philosophy. After the stunning achievements of Newton and his successors, and after Francis Bacon outlined an inductive method for achieving knowledge of the world, the disjunction between mere speculative thought and empirically based research, which is what science supposedly exemplifies, became increasingly sharp. And the inductive method seemed to be the right way to do science.

David Hume and Immanuel Kant struggled with limited success to make sense of induction, because a general proposition cannot be logically deduced from a set of observations, however numerous. Despite the logical problem of induction, early in the twentieth century a philosophical movement based in Vienna, called logical positivism, arrived at the conclusion that not only is all scientific knowledge acquired inductively through sensory experience and observation, but no meaning can be attached to any statement unless the statement makes reference to something about which we have or could have sensory experience; to be meaningful a statement must be at least verifiable, so that its truth could in principle be either confirmed or refuted. Any reference to concepts that have no basis in sensory experience is simply meaningless, i.e., a form of nonsense. Thus, science became not just the epitome of valid, certain, reliable, verified knowledge, which is what people were led to believe by the stunning success of Newton’s theory; it became the exemplar of meaningful discourse. Unless our statements refer to some observable, verifiable object, we are talking nonsense. And in the first half of the twentieth century, logical positivism dominated academic philosophy, at least in the English-speaking world, thereby exercising great influence over how economists thought about their own discipline and its scientific status.

Logical positivism was subjected to rigorous criticism by Karl Popper in his early work Logik der Forschung (English translation: The Logic of Scientific Discovery). His central point was that scientific theories are not so much about what is or has been observed as about what cannot be observed. The empirical content of a scientific proposition consists in the range of observations that the theory says are not possible. The more observations excluded by the theory, the greater its empirical content. A theory that is consistent with any observation has no empirical content. Thus, paradoxically, scientific theories, under the logical-positivist doctrine, would have to be considered nonsensical, because they tell us what can’t be observed. And because it is always possible that an excluded observation – the black swan – which our scientific theory tells us can’t be observed, will be observed, scientific theories can never be definitively verified. If a scientific theory can’t be verified, then, according to the positivists’ own criterion, the theory is nonsense. Of course, this just shows that the positivist criterion of meaning was nonsensical, because obviously scientific theories are completely meaningful despite being unverifiable.

Popper therefore concluded that verification or verifiability can’t be a criterion of meaning. In its place he proposed the criterion of falsification (i.e., refutation, not misrepresentation), but falsification became a criterion not for distinguishing between what is meaningful and what is meaningless, but between science and metaphysics. There is no reason why metaphysical statements (statements lacking empirical content) cannot be perfectly meaningful; they just aren’t scientific. Popper was misinterpreted by many to have simply substituted falsifiability for verifiability as a criterion of meaning; that was a mistaken interpretation, which Popper explicitly rejected.

So, in using the term “meaningful theorems” to refer to potentially refutable propositions that can be derived from economic theory using the method of comparative statics, Paul Samuelson, in his Foundations of Economic Analysis, treated Popper’s demarcation criterion between science and metaphysics as if it were a demarcation criterion between meaning and nonsense. I conjecture that Samuelson’s unfortunate lapse into the discredited verbal usage of logical positivism may have reinforced the unhealthy inclination of economists to feel the need to prove their scientific credentials in order to even engage in meaningful discourse.

While Popper certainly performed a valuable service in clearing up the positivist confusion about meaning, he adopted a very prescriptive methodology aimed at making scientific practice more scientific in the sense of exposing theories to, rather than immunizing them against, attempts at refutation, because, according to Popper, it is only after our theories survive powerful attempts to show that they are false that we can have confidence that those theories may be true or at least come close to being true. In principle, Popper was not wrong in encouraging scientists to formulate theories that are empirically testable by specifying what kinds of observations would be inconsistent with their theories. But in practice, that advice has been difficult to follow, and not only because researchers try to avoid subjecting their pet theories to tests that might prove them wrong.

Although Popper often cited historical examples to support his view that science progresses through an ongoing process of theoretical conjecture and empirical refutation, historians of science have had no trouble finding instances in which scientists did not follow Popper’s methodological rules and continued to maintain theories even after they had been refuted by evidence or after other theories had been shown to generate more accurate predictions than their own theories. Popper parried this objection by saying that his methodological rules were not positive (i.e., descriptive of science), but normative (i.e., prescriptive of how to do good science). In other words, Popper’s scientific methodology was itself not empirically refutable and scientific, but empirically irrefutable and metaphysical. I point out the unscientific character of Popper’s methodology of science, not to criticize Popper, but to point out that Popper himself did not believe that science is itself the final authority and ultimate arbiter of scientific practice.

But the more important lesson from the critical discussions of Popper’s methodological rules seems to me to be that they are too rigid to accommodate all the considerations that are relevant to assessing scientific theories and deciding whether those theories should be discarded or, at least tentatively, maintained. And Popper’s methodological rules are especially ill-suited for economics and other disciplines in which the empirical implications of theories depend on a large number of jointly maintained hypotheses, so that it is hard to identify which of several maintained hypotheses is responsible for the failure of a predicted outcome to match the observed outcome. That, of course, is the well-known ceteris-paribus problem, and it requires a very capable practitioner to know when to apply the ceteris-paribus condition and which variables to hold constant and which to allow to vary. Popper’s methodological rules tell us to reject a theory when its predictions are mistaken, and Popper regarded the ceteris-paribus qualification quite skeptically as an illegitimate immunizing stratagem. That poses a profound dilemma for economics. On the one hand, it is hard to imagine how economic theory could be applied without using the ceteris-paribus qualification; on the other hand, the qualification diminishes the empirical content of economic theory.

Empirical problems are amplified by the infirmities of the data that economists typically use to derive quantitative predictions from their models. The accuracy of the data is often questionable, and the relationships between the data and the theoretical concepts they are supposed to measure are often dubious. Moreover, the assumptions about the data-generating process (e.g., independent and identically distributed random variables, randomly selected observations, omitted explanatory variables uncorrelated with the included explanatory variables) necessary for the classical statistical techniques to generate unbiased estimates of the theoretical coefficients are almost impossibly stringent. Econometricians are certainly well aware of these issues, and they have discovered methods of mitigating them, but the problems with the data routinely used by economists and the complicated issues involved in developing and applying techniques to cope with those problems make it very difficult to use statistical techniques to reach definitive conclusions about empirical questions.
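To take the textbook case (a sketch of the standard result, not anything specific to the papers discussed below): in the simple linear regression model, unbiasedness requires that the error term be unrelated to the regressors, and omitting a relevant variable that is correlated with an included regressor contaminates the estimated coefficient.

```latex
% Simple linear model with a relevant variable z omitted from the estimated equation
y_i = \beta x_i + \gamma z_i + \varepsilon_i, \qquad E[\varepsilon_i \mid x_i, z_i] = 0

% Regressing y on x alone yields, in large samples, the omitted-variable bias formula
\operatorname{plim} \hat{\beta} \;=\; \beta + \gamma\,\frac{\operatorname{Cov}(x, z)}{\operatorname{Var}(x)}

% The bias vanishes only if the omitted variable is uncorrelated with the included regressor
```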

Jeff Biddle, one of the leading contemporary historians of economics, has a wonderful paper (“Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice”)– his 2016 presidential address to the History of Economics Society – discussing how the modern statistical techniques based on concepts and methods derived from probability theory gradually became the standard empirical and statistical techniques used by economists, even though many distinguished earlier researchers who were neither unaware of, nor unschooled in, the newer techniques believed them to be inappropriate for analyzing economic data. Here is the abstract of Biddle’s paper.

This paper reviews changes over time in the meaning that economists in the US attributed to the phrase “statistical inference”, as well as changes in how inference was conducted. Prior to WWII, leading statistical economists rejected probability theory as a source of measures and procedures to be used in statistical inference. Haavelmo and the econometricians associated with the early Cowles Commission developed an approach to statistical inference based on concepts and measures derived from probability theory, but the arguments they offered in defense of this approach were not always responsive to the concerns of earlier empirical economists that the data available to economists did not satisfy the assumptions required for such an approach. Despite this, after a period of about 25 years, a consensus developed that methods of inference derived from probability theory were an almost essential part of empirical research in economics. I close the paper with some speculation on possible reasons for this transformation in thinking about statistical inference.

I quote one passage from Biddle’s paper:

As I have noted, the leading statistical economists of the 1920s and 1930s were also unwilling to assume that any sample they might have was representative of the universe they cared about. This was particularly true of time series, and Haavelmo’s proposal to think of time series as a random selection of the output of a stable mechanism did not really address one of their concerns – that the structure of the “mechanism” could not be expected to remain stable for long periods of time. As Schultz pithily put it, “‘the universe’ of our time series does not ‘stay put’” (Schultz 1938, p. 215). Working commented that there was nothing in the theory of sampling that warranted our saying that “the conditions of covariance obtaining in the sample (would) hold true at any time in the future” (Advisory Committee 1928, p. 275). As I have already noted, Persons went further, arguing that treating a time series as a sample from which a future observation would be a random draw was not only inaccurate but ignored useful information about unusual circumstances surrounding various observations in the series, and the unusual circumstances likely to surround the future observations about which one wished to draw conclusions (Persons 1924, p. 7). And, the belief that samples were unlikely to be representative of the universe in which the economists had an interest applied to cross section data as well. The Cowles econometricians offered little to assuage these concerns except the hope that it would be possible to specify the equations describing the systematic part of the mechanism of interest in a way that captured the impact of factors that made for structural change in the case of time series, or factors that led cross section samples to be systematically different from the universe of interest.

It is not my purpose to argue that the economists who rejected the classical theory of inference had better arguments than the Cowles econometricians, or had a better approach to analyzing economic data given the nature of those data, the analytical tools available, and the potential for further development of those tools. I only wish to offer this account of the differences between the Cowles econometricians and the previously dominant professional opinion on appropriate methods of statistical inference as an example of a phenomenon that is not uncommon in the history of economics. Revolutions in economics, or “turns”, to use a currently more popular term, typically involve new concepts and analytical methods. But they also often involve a willingness to employ assumptions considered by most economists at the time to be too unrealistic, a willingness that arises because the assumptions allow progress to be made with the new concepts and methods. Obviously, in the decades after Haavelmo’s essay on the probability approach, there was a significant change in the list of assumptions about economic data that empirical economists were routinely willing to make in order to facilitate empirical research.

Let me now quote from a recent book (To Explain the World) by Steven Weinberg, perhaps – even though a movie about his life has not (yet) been made — the greatest living physicist:

Newton’s theory of gravitation made successful predictions for simple phenomena like planetary motion, but it could not give a quantitative account of more complicated phenomena, like the tides. We are in a similar position today with regard to the strong forces that hold quarks together inside the protons and neutrons inside the atomic nucleus, a theory known as quantum chromodynamics. This theory has been successful in accounting for certain processes at high energy, such as the production of various strongly interacting particles in the annihilation of energetic electrons and their antiparticles, and its successes convince us that the theory is correct. We cannot use the theory to calculate precise values for other things that we would like to explain, like the masses of the proton and neutron, because the calculations are too complicated. Here, as for Newton’s theory of the tides, the proper attitude is patience. Physical theories are validated when they give us the ability to calculate enough things that are sufficiently simple to allow reliable calculations, even if we can’t calculate everything that we might want to calculate.

So Weinberg is very much aware of the limits that even physics faces in making accurate predictions. Only a small subset (relative to the universe of physical phenomena) of simple effects can be calculated, but the capacity of physics to make very accurate predictions of simple phenomena gives us a measure of confidence that the theory would be reliable in making more complicated predictions if only we had the computing capacity to make those more complicated predictions. But in economics the set of simple predictions that can be accurately made is almost nil, because economics is inherently a theory of complex social phenomena, and simplifying the real-world problems to which we apply the theory enough to allow testable predictions to be made is extremely difficult and hardly ever possible. Experimental economists try to create conditions in which this can be done in controlled settings, but whether these experimental results have much relevance for real-world applications is open to question.

The problematic relationship between economic theory and empirical evidence is deeply rooted in the nature of economic theory and the very complex nature of the phenomena that economic theory seeks to explain. It is very difficult to isolate simple real-world events in which competing economic theories can be put to decisive empirical tests based on unambiguous observations that are either consistent with or contrary to the predictions generated by those theories. Under those circumstances, if we apply the Popperian criterion for demarcation between science and metaphysics to economics, it is not at all clear to me whether economics is more on the science side of the line than on the metaphysics side.

Certainly, there are refutable implications of economic theory that can be deduced, but these implications are often subject to qualification, so the refutable implications are often refutable only in principle, not in practice. Many fastidious economic methodologists, notably Mark Blaug, voiced unhappiness about this state of affairs and blamed economists for not being more ruthless in applying the Popperian test of empirical refutation to their theories. Surely Blaug had a point, but the infrequency of empirical refutation of theories in economics is, I think, less attributable to bad methodological practice on the part of economists than to the nature of the theories that economists work with and the inherent ambiguities of the empirical evidence with which those theories can be tested. We might as well face up to the fact that, to a large extent, empirical evidence is simply not clear-cut enough to force us to discard well-entrenched economic theories, because those theories can be adjusted and reformulated in response to apparently contrary evidence in ways that allow them to live on to fight another day, typically having enough moving parts to be adjusted as needed to accommodate anomalous or inconvenient empirical evidence.

Popper’s somewhat disloyal disciple, Imre Lakatos, talked about scientific theories in the context of scientific research programs, a research program being an amalgam of related theories which share a common inner core of theoretical principles or axioms that are not subject to refutation. Lakatos called this deep axiomatic core of principles the hard core of the research program. The hard core defines the program, so it is fundamentally fixed and not open to refutation. The empirical content of the research program is provided by a protective belt of specific theories that are subject to refutation and, when refuted, can be replaced as needed with alternative theories that are consistent with both the theoretical hard core and the empirical evidence. What determines the success of a scientific research program is whether it is progressive or degenerating. A progressive research program accumulates an increasingly dense, but evolving, protective belt of theories in response to new theoretical and empirical problems or puzzles that are generated within the research program, keeping researchers busy and attracting into the program new researchers seeking problems to solve. In contrast, a degenerating research program is unable to find enough interesting new problems or puzzles to keep researchers busy, much less attract new ones.

Despite its Popperian origins, the largely sociological Lakatosian account of how science evolves and progresses was hardly congenial to Popper’s sensibilities, because the success of a research program is not strictly determined by the process of conjecture and refutation envisioned by Popper. But the important point for me is that a Lakatosian research program can be progressive even if it is metaphysical and not scientific. What matters is that it offer opportunities for researchers to find and to solve or even just to talk about solving new problems, thereby attracting new researchers into the program.

It does appear that economics has for at least two centuries been a progressive research program. But it is not clear that it is a really scientific research program, because the nature of economic theory is so flexible that it can be adapted as needed to explain almost any set of observations. Almost any observation can be rationalized as the solution of some sort of constrained optimization problem. What the task requires is sufficient ingenuity on the part of the theorist to formulate the problem in such a way that the desired outcome can be derived as the solution of a constrained optimization problem. The hard core of the research program is therefore never at risk, and the protective belt can always be modified as needed to generate the sort of solution that is compatible with the theoretical hard core. The scope for genuine refutation has thus been effectively narrowed almost to the vanishing point, leaving us with a progressive metaphysical research program.

I am not denying that it would be preferable if economics could be a truly scientific research program, but it is not clear to me how much can be done about it. The complexity of the phenomena, the multiplicity of the hypotheses required to explain the data, and the ambiguous and not fully reliable nature of most of the data that economists have available devilishly conspire to render Popperian falsificationism an illusory ideal in economics. That is not an excuse for cynicism, just a warning against unrealistic expectations about what economics can accomplish. And the last thing that I am suggesting is that we stop paying attention to the data that we have or stop trying to improve the quality of the data that we have to work with.

Samuelson Rules the Seas

I think Nick Rowe is a great economist; I really do. And on top of that, he recently has shown himself to be a very brave economist, fearlessly claiming to have shown that Paul Samuelson’s classic 1980 takedown (“A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade“) of David Hume’s classic 1752 articulation of the price-specie-flow mechanism (PSFM) (“Of the Balance of Trade“) was all wrong. Although I am a great admirer of Paul Samuelson, I am far from believing that he was error-free. But I would be very cautious about attributing an error in pure economic theory to Samuelson. So if you were placing bets, Nick would certainly be the longshot in this match-up.

Of course, I should admit that I am not an entirely disinterested observer of this engagement, because in the early 1970s, long before I discovered the Samuelson article that Nick is challenging, Earl Thompson had convinced me that Hume’s account of PSFM was all wrong, the international arbitrage of tradable-goods prices implying that gold movements between countries couldn’t cause the relative price levels of those countries in terms of gold to deviate from a common level beyond the limits imposed by the operation of international commodity arbitrage. And Thompson’s reasoning was largely restated in the ensuing decade by Jacob Frenkel and Harry Johnson (“The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins”) and by Donald McCloskey and Richard Zecher (“How the Gold Standard Really Worked”), both in the 1976 volume on The Monetary Approach to the Balance of Payments edited by Johnson and Frenkel, and by David Laidler in his essay “Adam Smith as a Monetary Economist,” explaining why in The Wealth of Nations Smith ignored his best friend Hume’s classic essay on PSFM. So the main point of Samuelson’s takedown of Hume and the PSFM was not even original. What was original about Samuelson’s classic article was his dismissal of the rationalization that PSFM applies when there are both non-tradable and tradable goods, so that national price levels can deviate from the common international price level in terms of tradables; Samuelson showed that the inclusion of non-tradables in the analysis serves only to slow down the adjustment process after a gold-supply shock.

So let’s follow Nick in his daring quest to disprove Samuelson, and see where that leads us.

Assume that durable sailing ships are costly to build, but have low (or zero for simplicity) operating costs. Assume apples are the only tradeable good, and one ship can transport one apple per year across the English Channel between Britain and France (the only countries in the world). Let P be the price of apples in Britain, P* be the price of apples in France, and R be the annual rental of a ship, (all prices measured in gold), then R=ABS(P*-P).

I am sorry to report that Nick has not gotten off to a good start here. There cannot be only one tradable good. It takes two to tango and two to trade. If apples are being traded, they must be traded for something, and that something is something other than apples. And, just to avoid misunderstanding, let me say that that something is also something other than gold. Otherwise, there couldn’t possibly be a difference between the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of PSFM and the PSFM. We need at least three goods – two real goods plus gold – providing a relative price between the two real goods and two absolute prices quoted in terms of gold (the numeraire). So if there are at least two absolute prices, then Nick’s equation for the annual rental of a ship R must be rewritten as follows: R=ABS[P(A)*-P(A)+P(SE)*-P(SE)], where P(A) is the price of apples in Britain, P(A)* is the price of apples in France, P(SE) is the price of something else in Britain, and P(SE)* is the price of that same something else in France.

OK, now back to Nick:

In this model, the Law of One Price (P=P*) will only hold if the volume of exports of apples (in either direction) is unconstrained by the existing stock of ships, so rentals on ships are driven to zero. But then no ships would be built to export apples if ship rentals were expected to be always zero, which is a contradiction of the Law of One Price because arbitrage is impossible without ships. But an existing stock of ships represents a sunk cost (sorry) and they keep on sailing even as rentals approach zero. They sail around Samuelson’s Iceberg model (sorry) of transport costs.

This is a peculiar result in two respects. First, it suggests, perhaps inadvertently, that the law of one price requires equality between the prices of goods in every location, when in fact it only requires that prices in different locations not differ by more than the cost of transportation. The second, more serious, peculiarity is that with only one good being traded the price difference in that single good between the two locations has to be sufficient to cover the cost of building the ship. That suggests that there has to be a very large price difference in that single good to justify building the ship, but in fact there are at least two goods being shipped, so it is the sum of the price differences of the two goods that must be sufficient to cover the cost of building the ship. The more tradable goods there are, the smaller the price differences in any single good necessary to cover the cost of building the ship.
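Put schematically (my notation, and only a sketch of the verbal argument above): arbitrage bounds the price gap for each good by its transport cost, and a new ship is worth building only if the sum of the per-leg price differentials it can earn on a round trip covers the annualized cost of building it.

```latex
% Law of one price with a per-unit transport cost c: arbitrage only bounds the gap
\lvert P^{*} - P \rvert \;\le\; c

% With apples carried on one leg and something else on the return leg, new ships are
% built only if the sum of the two price differentials covers the annualized cost K of a ship
\bigl\lvert P(A)^{*} - P(A) \bigr\rvert + \bigl\lvert P(SE)^{*} - P(SE) \bigr\rvert \;\ge\; K
```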

Again, back to Nick:

Start with zero exports, zero ships, and P=P*. Then suppose, like Hume, that some of the gold in Britain magically disappears. (And unlike Hume, just to keep it simple, suppose that gold magically reappears in France.)

Uh-oh. Just to keep it simple? I don’t think so. To me, keeping it simple would mean looking at one change in initial conditions at a time. The one relevant change – the one discussed by Hume – is a reduction in the stock of gold in Britain. But Nick is looking at two changes – a reduced stock of gold in Britain and an increased stock of gold in France – simultaneously. Why does it matter? Because the key point at issue is whether a national price level – i.e., Britain’s – can deviate from the international price level. In Nick’s two-country example, there should be one national price level and one international price level, which means that the only price level subject to change as a result of the change in initial conditions should be, as in Hume’s example, the British price level, while the French price level – representing the international price level – remains constant. In a two-country model, this can only be made plausible by assuming that France is large compared to Britain, so that a loss of gold could potentially affect the British price level without changing the French price level. Once again back to Nick.

The price of apples in Britain drops, the price of apples in France rises, and so the rent on a ship is now positive because you can use it to export apples from Britain to France. If that rent is big enough, and expected to stay big long enough, some ships will be built, and Britain will export apples to France in exchange for gold. Gold will flow from France to Britain, so the stock of gold will slowly rise in Britain and slowly fall in France, and the price of apples will likewise slowly rise in Britain and fall in France, so ship rentals will slowly fall, and the price of ships (the Present Value of those rents) will eventually fall below the cost of production, so no new ships will be built. But the ships already built will keep on sailing until rentals fall to zero or they rot (whichever comes first).

So notice what Nick has done. Instead of confronting the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique of Hume, which asserts that a world price level determines the national price level, Nick has simply begged the question by not assuming that the world price of gold, which determines the world price level, is constant. Instead, he posits a decreased value of gold in France, owing to an increased French stock of gold, and an increased value of gold in Britain, owing to a decreased British stock of gold, and then conflates the resulting adjustment in the value of gold with the operation of commodity arbitrage. Why Nick thinks his discussion is relevant to the Thompson-Frenkel-Johnson-McCloskey-Zecher-Laidler-Samuelson critique escapes me.

The flow of exports and hence the flow of specie is limited by the stock of ships. And only a finite number of ships will be built. So we observe David Hume’s price-specie flow mechanism playing out in real time.

This bugs me. Because it’s all sorta obvious really.

Yes, it bugs me, too. And, yes, it is obvious. But why is it relevant to the question under discussion, which is whether there is an international price level in terms of gold that constrains movements in national price levels in countries in which gold is the numeraire? In other words, if there is a shock to the gold stock of a small open economy, how much will the price level in that small open economy change? By the percentage change in the stock of gold in that country – as Hume maintained – or by the minuscule percentage change in the international stock of gold, gold prices in the country that has lost gold being constrained from changing by more than allowed by the cost of arbitrage operations? Nick’s little example is simply orthogonal to the question under discussion.
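A back-of-the-envelope illustration, with hypothetical numbers chosen only to fix the orders of magnitude, may make the contrast concrete. Suppose the small country holds 1 percent of a world gold stock of 100 units and loses half of its gold:

```latex
% Hume's PSFM: the domestic price level falls roughly in proportion to the domestic gold loss
\frac{\Delta P_{\text{domestic}}}{P_{\text{domestic}}} \approx \frac{-0.5}{1} = -50\%

% Arbitrage view: only the world gold stock matters for the world (and hence the domestic)
% price level, which the loss of 0.5 units out of 100 barely moves; domestic prices stay
% within transport-cost bounds of world prices
\frac{\Delta P_{\text{world}}}{P_{\text{world}}} \approx \frac{-0.5}{100} = -0.5\%
```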

I skip Nick’s little exegetical discussion of Hume’s essay and proceed to what I think is the final substantive point that Nick makes.

Prices don’t just arbitrage themselves. Even if we take the limit of my model, as the cost of building ships approaches zero, we need to explain what process ensures the Law of One Price holds in equilibrium. Suppose it didn’t…then people would buy low and sell high…..you know the rest.

There are different equilibrium conditions being confused here. The equilibrium arbitrage conditions are not the same as the equilibrium conditions for international monetary equilibrium. Arbitrage conditions for individual commodities can hold even if the international distribution of gold is not in equilibrium. So I really don’t know what conclusion Nick is alluding to here.

But let me end on what I hope is a conciliatory and constructive note. As always, Nick is making an insightful argument, even if it is misplaced in the context of Hume and PSFM. And the upshot of Nick’s argument is that transportation costs are a function of the dispersion of prices, because, as the incentive to ship products to capture arbitrage profits increases, the cost of shipping will increase as arbitragers bid up the value of resources specialized to the processes of transporting stuff. So the assumption that the cost of transportation can be treated as a parameter is not really valid, which means that the constraints imposed on national price-level movements are not really parametric; they are endogenously determined within an appropriately specified general-equilibrium model. If Nick is willing to settle for that proposition, I don’t think that our positions are that far apart.

What’s Wrong with Econ 101?

Josh Hendrickson responded recently to criticisms of Econ 101 made by Noah Smith and Mark Thoma. Mark Thoma thinks that Econ 101 has a conservative bias, presumably because Econ 101 teaches students that markets equilibrate supply and demand and allocate resources to their highest valued use and that sort of thing. If markets are so wonderful, then shouldn’t we keep hands off the market and let things take care of themselves? Noah Smith is especially upset that Econ 101, slighting the ambiguous evidence about whether minimum-wage laws actually do increase unemployment, is too focused on theory and pays too little attention to empirical techniques.

I sympathize with Josh’s defense of Econ 101, and I think he makes a good point that there is nothing in Econ 101 that quantifies the effect on unemployment of minimum-wage legislation, so that the disconnect between theory and evidence isn’t as stark as Noah suggests. Josh also emphasizes, properly, that whatever the effect of an increase in the minimum wage implied by economic theory, that implication by itself can’t tell us whether the minimum wage should be raised. An ought statement can’t be derived from an is statement. Philosophers are not as uniformly in agreement about the positive-normative distinction as they used to be, but I am old-fashioned enough to think that it’s still valid. If there is a conservative bias in Econ 101, the problem is not Econ 101; the problem is bad teaching.

Having said all that, however, I don’t think that Josh’s defense addresses the real problems with Econ 101. Noah Smith’s complaints about the implied opposition of Econ 101 to minimum-wage legislation and Mark Thoma’s about the conservative bias of Econ 101 are symptoms of a deeper problem with Econ 101, a problem inherent in the current state of economic theory, and unlikely to go away any time soon.

The deeper problem that I think underlies much of the criticism of Econ 101 is the fragility of its essential propositions. These propositions, what Paul Samuelson misguidedly called “meaningful theorems,” are deducible from the basic postulates of utility maximization and wealth maximization by applying the method of comparative statics. Not only are the propositions based on questionable psychological assumptions, but the comparative-statics method imposes further restrictive assumptions designed to isolate a single purely theoretical relationship. Those assumptions aren’t just the kind of simplifications necessary for the theoretical models of any empirical science to be applicable to the real world; they subvert the powerful logic used to derive those implications. It’s not just that the assumptions may not be fully consistent with the conditions actually observed; the meaningful theorems themselves are highly sensitive to the assumptions of the model.

The bread and butter of Econ 101 is the microeconomic theory of market adjustment in which price and quantity adjust to equilibrate what consumers demand with what suppliers produce. This is the partial-equilibrium analysis derived from Alfred Marshall, and gradually perfected in the 1920s and 1930s, after Marshall’s death, with the development of the theories of the firm and of perfect and imperfect competition. As I have pointed out before in a number of posts, just as macroeconomics depends on microfoundations, microeconomics depends on macrofoundations (e.g., here and here). All partial-equilibrium analysis relies on the usually implicit assumption that all markets but the single market under analysis are in equilibrium. Without that assumption, it is logically impossible to derive any of Samuelson’s meaningful theorems, and the logical necessity of microeconomics is severely compromised.

The underlying idea is very simple. Samuelson’s meaningful theorems are meant to isolate the effect of a change in a single parameter on a particular endogenous variable in an economic system. The only way to isolate the effect of the parameter on the variable is to start from an equilibrium state in which the system is, as it were, at rest. A small (aka infinitesimal) change in the parameter induces an adjustment in the equilibrium, and a comparison of the small change in the variable of interest between the new equilibrium and the old equilibrium relative to the parameter change identifies the underlying relationship between the variable and the parameter, all else being held constant. If the analysis did not start from equilibrium, then the effect of the parameter change on the variable could not be isolated, because the variable would be changing for reasons having nothing to do with the parameter change, making it impossible to isolate the pure effect of the parameter change on the variable of interest.
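
For concreteness, here is a toy comparative-statics exercise in Python, entirely my own illustration rather than anything from a particular textbook: linear supply and demand, a per-unit excise tax as the parameter, and the “meaningful theorem” dp/dt = d/(b + d) recovered by perturbing the tax and comparing the new equilibrium with the old one, everything else held constant.

```python
# Toy comparative statics: demand Qd = a - b*p, supply Qs = c + d*(p - t),
# where t is a per-unit excise tax paid by sellers. All other markets are
# implicitly assumed to remain in equilibrium throughout.

def equilibrium_price(a, b, c, d, t):
    # a - b*p = c + d*(p - t)  =>  p = (a - c + d*t) / (b + d)
    return (a - c + d * t) / (b + d)

a, b, c, d = 100.0, 2.0, 10.0, 3.0   # made-up parameters

t0, dt = 0.0, 1e-6                   # start from equilibrium, perturb the tax slightly
p0 = equilibrium_price(a, b, c, d, t0)
p1 = equilibrium_price(a, b, c, d, t0 + dt)

numerical_dp_dt = (p1 - p0) / dt     # the comparative-statics relationship
analytic_dp_dt = d / (b + d)

print(f"equilibrium price with no tax: {p0:.2f}")
print(f"dp/dt numerical = {numerical_dp_dt:.4f}, analytic = {analytic_dp_dt:.4f}")
```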

Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium — the usual assumption of a unique equilibrium being an assumption to ensure tractability rather than a deduction from any plausible assumptions – or simply veer off on some explosive or indeterminate path.

Even aside from all these restrictive assumptions, the standard partial-equilibrium analysis is restricted to markets that can be assumed to be very small relative to the entire system. For small markets, it is safe to assume that changes in the single market under analysis have effects on the other markets in the economy that are small enough for the induced feedback on the market of interest to be negligible.

But the partial-equilibrium method surely breaks down when the market under analysis is a market that is large relative to the entire economy, like, shall we say, the market for labor. The feedback effects are simply too strong for the small-market assumptions underlying the partial-equilibrium analysis to be satisfied by the labor market. But even aside from the size issue, the essence of the partial-equilibrium method is the assumption that all markets other than the market under analysis are in equilibrium. But the very assumption that the labor market is not in equilibrium renders the partial-equilibrium assumption that all other markets are in equilibrium untenable. I would suggest that the proper way to think about what Keynes was trying, not necessarily successfully, to do in the General Theory when discussing nominal wage cuts as a way to reduce unemployment is to view that discussion as a critique of using the partial-equilibrium method to analyze a state of general unemployment, as opposed to a situation in which unemployment is confined to a particular occupation or a particular geographic area.
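The feedback problem can be illustrated with a two-market toy model of my own devising (hypothetical linear demands and supplies, with a cross-price effect e standing in, crudely, for how strongly the market under analysis interacts with the rest of the economy). When e is small, the partial-equilibrium answer to a demand shock is almost right; as e grows, the answer obtained by holding the other market’s price fixed diverges from the answer obtained by letting both markets adjust.

```python
import numpy as np

# Two interdependent linear markets. Market 1 is the market "under analysis";
# market 2 stands in for the rest of the economy. The cross-price effect e is a
# crude proxy for how strongly the two markets feed back on one another.
# Parameters are made up purely for illustration.

B1, D1, B2, D2 = 2.0, 1.0, 2.0, 1.0      # demand and supply slopes

def general_equilibrium(a1, a2, e):
    # Market 1: a1 - B1*p1 + e*p2 = D1*p1   =>  (B1 + D1)*p1 - e*p2 = a1
    # Market 2: a2 - B2*p2 + e*p1 = D2*p2   =>  -e*p1 + (B2 + D2)*p2 = a2
    A = np.array([[B1 + D1, -e], [-e, B2 + D2]])
    return np.linalg.solve(A, np.array([a1, a2]))     # (p1, p2)

a1, a2, shock = 30.0, 30.0, 3.0          # shock: exogenous shift in demand for good 1

for e in (0.1, 1.0, 2.5):
    p1_old, p2_old = general_equilibrium(a1, a2, e)
    # Partial equilibrium: hold p2 at its old value and ignore any feedback.
    p1_partial = (a1 + shock + e * p2_old) / (B1 + D1)
    # General equilibrium: let both markets adjust to the shock.
    p1_general, _ = general_equilibrium(a1 + shock, a2, e)
    print(f"e = {e}: partial-eq p1 = {p1_partial:.3f}, "
          f"general-eq p1 = {p1_general:.3f}, gap = {p1_general - p1_partial:.3f}")
```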

So the question naturally arises: If the logical basis of Econ 101 is as flimsy as I have been suggesting, should we stop teaching Econ 101? My answer is an emphatic, but qualified, no. Econ 101 is the distillation of almost a century and a half of rigorous thought about how to analyze human behavior. What we have come up with so far is very imperfect, but it is still the most effective tool we have for systematically thinking about human conduct and its consequences, especially its unintended consequences. But we should be more forthright about its limitations and the nature of the assumptions that underlie the analysis. We should also be more aware of the logical gaps between the theory – Samuelson’s meaningful theorems — and the applications of the theory.

In fact, many meaningful theorems are consistently corroborated by statistical tests, presumably because observations by and large occur when the economy is operating in the neighborhood of a general equilibrium and feedback effects are small, so that the extraneous forces – other than those derived from theory – impinge on actual observations more or less randomly, and thus don’t significantly distort the predicted relationships. And undoubtedly there are also cases in which the random effects overwhelm the theoretically identified relationships, preventing the relationships from being identified statistically, at least when the number of observations is relatively small, as is usually the case with economic data. But we should also acknowledge that the theoretically predicted relationships may simply not hold in the real world, because the extreme conditions required for the predicted partial-equilibrium relationships to hold – near-equilibrium conditions and the absence of feedback effects – may often not be satisfied.

P. H. Wicksteed, the Coase Theorem, and the Real Cost Fallacy

I am now busy writing a paper with my colleague Paul Zimmerman, documenting a claim that I made just over four years ago that P. H. Wicksteed discovered the Coase Theorem. The paper is due to be presented at the History of Economics Society Conference next month at Duke University. At some point soon after the paper is written, I plan to post it on SSRN.

Briefly, the point of the paper is that Wicksteed argued that there is no such thing as a supply curve, in the sense that the supply curve of a commodity in fixed supply is just the reverse of a certain section of the demand curve, the section depending on how the given stock of the commodity is initially distributed among market participants. However the initial stock is distributed, the final price and the final allocation of the commodity are determined by the preferences of the market participants reflected in their individual demands for the commodity. But this is exactly the reasoning underlying the Coase Theorem: the initial assignment of liability for damages has no effect on the final allocation of resources if transactions costs are zero (as Wicksteed implicitly assumed in his argument). Coase’s originality was not in his reasoning, but in recognizing that economic exchange is not the mere trading of physical goods but the trading of rights to property or rights to engage in certain types of conduct affecting property.
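
A minimal sketch of the logic, with two hypothetical traders and made-up linear demand schedules for a commodity in fixed supply, shows the invariance: however the fixed stock is initially divided between them, the market-clearing price and the final holdings are the same; only the direction and size of the trades needed to get there differ. This is the same zero-transaction-cost logic the Coase Theorem relies on.

```python
from scipy.optimize import brentq

# Two hypothetical traders with linear reservation-demand schedules for a
# commodity in fixed total supply S. The schedules and numbers are made up;
# the point is only that the clearing price and final holdings are invariant
# to the initial distribution of the stock (zero transaction costs assumed).

def holding_demand(p, v, k):
    """Quantity an agent with reservation value v wants to hold at price p."""
    return max(0.0, (v - p) / k)

agents = [(10.0, 1.0), (8.0, 0.5)]        # (v, k) for each trader
S = 6.0                                    # fixed total stock

def excess_demand(p):
    return sum(holding_demand(p, v, k) for v, k in agents) - S

p_star = brentq(excess_demand, 0.0, 10.0)                  # market-clearing price
final_holdings = [holding_demand(p_star, v, k) for v, k in agents]

for endowment in ([6.0, 0.0], [0.0, 6.0], [3.0, 3.0]):     # alternative initial distributions
    net_trades = [round(q - w, 2) for q, w in zip(final_holdings, endowment)]
    print(f"endowment {endowment}: price = {p_star:.2f}, "
          f"final holdings = {[round(q, 2) for q in final_holdings]}, net trades = {net_trades}")
```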

But Wicksteed went further than just showing that the initial distribution of a commodity in fixed supply does not affect the equilibrium price of the commodity or its equilibrium distribution. He showed that in a production economy, cost has no effect on equilibrium price or the equilibrium allocation of resources and goods and services, which seems a remarkably sweeping assertion. But I think that Wicksteed was right in that assertion, and I think that, in making it, he anticipated a point that I have made numerous times on this blog (e.g., here), namely, that just as macroeconomics requires microfoundations, microeconomics requires macrofoundations. The whole of standard microeconomics, e.g., assertions about the effects of an excise tax on price and output, presumes the existence of equilibrium in all markets other than the one being subjected to micro-analysis. Without the background assumption of equilibrium, it would be impossible to derive what Paul Samuelson (incorrectly) called “meaningful theorems” (the mistake stemming from the absurd positivist presumption that empirically testable statements are the only statements that are meaningful).

So let me quote from Wicksteed’s 1914 paper “The Scope and Method of Political Economy in the Light of the Marginal Theory of Value and Distribution.”

[S]o far we have only dealt with the market in the narrower sense. Our investigations throw sufficient light on the distribution of the hay harvest, for instance, or on the “catch” of a fishing fleet. But where the production is continuous, as in mining or in ironworks, will the same theory still suffice to guide us? Here again we encounter the attempt to establish two co-ordinate principles, diagrammatically represented by two intersecting curves; for though the “cost of production” theory of value is generally repudiated, we are still too often taught to look for the forces that determine the stream of supply along two lines, the value of the product, regulated by the law of the market, and the cost of production. But what is cost of production? In the market of commodities I am ready to give as much as the article is worth to me, and I cannot get it unless I give as much as it is worth to others. In the same way, if I employ land or labour or tools to produce something, I shall be ready to give as much as they are worth to me, and I shall have to give as much as they are worth to others – always, of course, differentially. Their worth to me is determined by their differential effect upon my product, their worth to others by the like effect upon their products . . . Again we have an alias merely. Cost of production is merely the form in which the desiredness a thing possesses for someone else presents itself to me. When we take the collective curve of demand for any factor of production we see again that it is entirely composed of demands, and my adjustment of my own demands to the conditions imposed by the demands of others is of exactly the same nature whether I am buying cabbages or factors for the production of steel plates. I have to adjust my desire for a thing to the desires of others for the same thing, not to find some principle other than that of desiredness, co-ordinate with it as a second determinant of market price. The second determinant, here as everywhere, is the supply. It is not until we have perfectly grasped the truth that costs of production of one thing are nothing whatever but an alias of efficiencies in production of other things that we shall be finally emancipated from the ancient fallacy we have so often thrust out at the door, while always leaving the window open for its return.

The upshot of Wicksteed’s argument appears to be that cost, viewed as an independent determinant of price or the allocation of resources, is a redundant concept. Cost as a determinant of value is useful only in the context of a background of general equilibrium in which the prices of all but a single commodity have already been determined. The usual partial-equilibrium apparatus for determining the price of a single commodity in terms of the demand for and the supply of that single product, presumes a given technology for converting inputs into output, and given factor prices, so that the costs can be calculated based on those assumptions. In principle, that exercise is no different from finding the intersection between the demand-price curve and the supply-price curve for a commodity in fixed supply, the demand-price curve and the supply-price curve being conditional on a particular arbitrary assumption about the initial distribution of the commodity among market participants. In the analysis of a production economy, the determination of equilibrium price and output in a single market can proceed in terms of a demand curve for the product and a supply curve (reflecting the aggregate of individual firm marginal-cost curves). However, in this case the supply curve is conditional on the assumption that prices of all other outputs and all factor prices have already been determined. But from the perspective of general equilibrium, the determination of the vector of prices, including all factor prices, that is consistent with general equilibrium cannot be carried out by computing production costs for each individual output, because the factor prices required for a computation of the production costs for any product are unknown until the general equilibrium solution has itself been found.
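To illustrate why costs cannot be computed in advance of the general-equilibrium solution, here is a sketch of a tiny two-good, two-factor economy with hypothetical Cobb-Douglas technology and preferences (none of this is Wicksteed’s own formalism). Unit costs depend on the wage and the land rent, but those factor prices are themselves determined only together with demand: change the consumers’ spending shares and the equilibrium “costs of production” change with them.

```python
# A tiny 2-good, 2-factor general-equilibrium sketch with hypothetical
# Cobb-Douglas forms. The "cost of production" of each good depends on the
# factor prices (w, r), and the factor prices are determined only as part of
# the full equilibrium, which depends on demand.

L, T = 100.0, 50.0                   # fixed endowments of labor and land
a = {"x": 0.7, "y": 0.3}             # labor's share in producing each good
A = {"x": 1.0, "y": 1.0}             # productivity constants

def unit_cost(good, w, r):
    ai = a[good]
    return (w / ai) ** ai * (r / (1.0 - ai)) ** (1.0 - ai) / A[good]

def equilibrium(s_x):
    """Given the spending share s_x on good x, return (w, r, prices)."""
    s = {"x": s_x, "y": 1.0 - s_x}
    # With Cobb-Douglas technology, labor receives the fraction a_i of each
    # industry's revenue, so factor-market clearing gives
    #   w*L / (r*T) = sum_i a_i*s_i / sum_i (1 - a_i)*s_i.
    labor_share = sum(a[g] * s[g] for g in a)
    r = 1.0                                       # land rent as numeraire
    w = (labor_share / (1.0 - labor_share)) * (T / L) * r
    prices = {g: unit_cost(g, w, r) for g in a}   # price = unit cost in equilibrium
    return w, r, prices

for s_x in (0.3, 0.7):                # two different demand patterns
    w, r, p = equilibrium(s_x)
    print(f"spending share on x = {s_x}: w = {w:.3f}, r = {r:.3f}, "
          f"cost/price of x = {p['x']:.3f}, cost/price of y = {p['y']:.3f}")
```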

Thus, the notion that cost can serve as an independent determinant of equilibrium price is an exercise in question begging, because cost is no less an equilibrium concept than price. Cost cannot be logically prior to price if both are determined simultaneously and are mutually interdependent. All that is logically prior to equilibrium price in a basic economic model are the preferences of market participants and the technology for converting inputs into outputs. Cost is not an explanatory variable; it is an explained variable. That is the ultimate fallacy in the doctrine of real costs defended so tenaciously by Jacob Viner in chapter eight of his classic Studies in the Theory of International Trade. That Paul Samuelson in one of his many classic papers, “International Trade and the Equalization of Factor Prices,” could have defended Viner and the real-cost doctrine, failing to realize that costs are simultaneously determined with prices in equilibrium, and are indeterminate outside of equilibrium, seems to me to be a quite remarkable lapse of reasoning on Samuelson’s part.

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review “Mathiness in the Theory of Economic Growth,” an attack on two papers, one by McGrattan and Prescott and the other by Lucas and Moll on aspects of growth theory. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what he calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomic time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow had written a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas holds to it:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In his 1959 paper (“Toward a Theory of Price Adjustment”), which I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Sterilizing Gold Inflows: The Anatomy of a Misconception

In my previous post about Milton Friedman’s problematic distinction between real and pseudo-gold standards, I mentioned that one of the signs that Friedman pointed to in asserting that the Federal Reserve Board in the 1920s was managing a pseudo gold standard was the “sterilization” of gold inflows to the Fed. What Friedman meant by sterilization is that the incremental gold reserves flowing into the Fed did not lead to a commensurate increase in the stock of money held by the public, the failure of the stock of money to increase commensurately with an inflow of gold being the standard understanding of sterilization in the context of the gold standard.

Of course “commensurateness” is in the eye of the beholder. Because Friedman felt that, given the size of the gold inflow, the US money stock did not increase “enough,” he argued that the gold standard in the 1920s did not function as a “real” gold standard would have functioned. Now Friedman’s denial that a gold standard in which gold inflows are sterilized is a “real” gold standard may have been uniquely his own, but his understanding of sterilization was hardly unique; it was widely shared. In fact it was so widely shared that I myself have had to engage in a bit of an intellectual struggle to free myself from its implicit reversal of the causation between money creation and the holding of reserves. For direct evidence of my struggles, see some of my earlier posts on currency manipulation (here, here and here), in which I began by using the concept of sterilization as if it actually made sense in the context of international adjustment, and did not fully grasp that the concept leads only to confusion. In an earlier post about Hayek’s 1932 defense of the insane Bank of France, I did not explicitly refer to sterilization, and got the essential analysis right. Of course Hayek, in his 1932 defense of the Bank of France, was using — whether implicitly or explicitly I don’t recall — the idea of sterilization to defend the Bank of France against critics by showing that the Bank of France was not guilty of sterilization, but Hayek’s criterion for what qualifies as sterilization was stricter than Friedman’s. In any event, it would be fair to say that Friedman’s conception of how the gold standard works was broadly consistent with the general understanding at the time of how the gold standard operates, though, even under the orthodox understanding, he had no basis for asserting that the 1920s gold standard was fraudulent and bogus.

To sort out the multiple layers of confusion operating here, it helps to go back to the classic discussion of international monetary adjustment under a pure gold currency, which was the basis for later discussions of international monetary adjustment under a gold standard (i.e., a paper currency convertible into gold at a fixed exchange rate). I refer to David Hume’s essay “Of the Balance of Trade,” in which he argued that there is an equilibrium distribution of gold across different countries, working through a famous thought experiment in which four-fifths of the gold held in Great Britain was annihilated to show that an automatic adjustment process would redistribute the international stock of gold to restore Britain’s equilibrium share of the total world stock of gold.

The adjustment process, which came to be known as the price-specie flow mechanism (PSFM), is widely considered one of Hume’s greatest contributions to economics and to monetary theory. Applying the simple quantity theory of money, Hume argued that the loss of 80% of Britain’s gold stock would mean that prices and wages in Britain would fall by 80%. But with British prices 80% lower than prices elsewhere, Britain would stop importing goods that could now be obtained more cheaply at home than abroad, while foreigners would buy all they could from Britain to take advantage of low British prices. British exports would rise and imports fall, causing an inflow of gold into Britain. But, as gold flowed into Britain, British prices would rise, thereby reducing the British competitive advantage, causing imports to increase and exports to decrease, and consequently reducing the inflow of gold. The adjustment process would continue until British prices and wages had risen to a level equal to that in other countries, thus eliminating the British balance-of-trade surplus and terminating the inflow of gold.
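
Hume’s story is easy to put into a toy simulation. Here is a sketch with made-up numbers: domestic prices proportional to the domestic gold stock (the simple quantity theory), a trade balance that responds to the gap between home and foreign price levels, and gold flowing until the price levels equalize. This is the mechanism as Hume described it, not, as will become clear below, how adjustment under the gold standard actually works.

```python
# Toy simulation of Hume's price-specie-flow story with made-up parameters:
# the domestic price level is proportional to the domestic gold stock, and the
# trade surplus (hence the gold inflow) responds to the home/foreign price gap.

k = 0.01                # quantity-theory proportionality between gold stock and prices
flow_response = 10.0    # gold inflow per period per unit of price-level gap

home_gold, foreign_gold = 100.0, 100.0
home_gold *= 0.2        # Hume's thought experiment: four-fifths of Britain's gold annihilated

for period in range(12):
    home_p, foreign_p = k * home_gold, k * foreign_gold
    inflow = flow_response * (foreign_p - home_p)   # cheap home goods -> trade surplus -> gold inflow
    home_gold += inflow
    foreign_gold -= inflow
    print(f"period {period:2d}: home P = {home_p:.3f}, foreign P = {foreign_p:.3f}, "
          f"gold inflow = {inflow:.2f}")
```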

This was a very nice argument, and Hume, a consummate literary stylist, expressed it beautifully. There is only one problem: Hume ignored that the prices of tradable goods (those that can be imported or exported or those that compete with imports and exports) are determined not in isolated domestic markets, but in international markets, so the premise that all British prices, like the British stock of gold, would fall by 80% was clearly wrong. Nevertheless, the disconnect between the simple quantity theory and the idea that the prices of tradable goods are determined in international markets was widely ignored by subsequent writers. Adam Smith, David Ricardo, and J. S. Mill avoided the fallacy, though without explicitly criticizing Hume, while Henry Thornton, in his great work The Paper Credit of Great Britain, alternately embraced and rejected it; by the end of the nineteenth century, if not earlier, the Humean analysis had become the established orthodoxy.

Towards the middle of the nineteenth century, there was a famous series of controversies over the Bank Charter Act of 1844, in which two groups of economists, the Currency School in support and the Banking School in opposition, argued about the key provisions of the Act: to centralize the issue of banknotes in Great Britain within the Bank of England and to prohibit the Bank of England from issuing additional banknotes, beyond the fixed quantity of “unbacked” notes (i.e., without gold cover) already in circulation, unless the additional banknotes were issued in exchange for a corresponding amount of gold coin or bullion. In other words, the Bank Charter Act imposed a 100% marginal reserve requirement on the issue of additional banknotes by the Bank of England, thereby codifying what was then known as the Currency Principle, the idea being that the fluctuation in the total quantity of banknotes ought to track exactly the Humean mechanism in which the quantity of money in circulation changes pound for pound with the import or export of gold.

The doctrinal history of the controversies about the Bank Charter Act is very confused, and I have written about them at length in several papers (this, this, and this) and in my book on free banking, so I don’t want to go over that ground again here. But until the advent of the monetary approach to the balance of payments in the late 1960s and early 1970s, the thinking of the economics profession about monetary adjustment under the gold standard was largely in a state of confusion, the underlying fallacy of PSFM having remained largely unrecognized. One of the few who avoided the confusion was R. G. Hawtrey, who had anticipated all the important elements of the monetary approach to the balance of payments, but whose work had been largely forgotten in the wake of the General Theory.

Two important papers changed the landscape. The first was a 1976 paper by Donald McCloskey and Richard Zecher “How the Gold Standard Really Worked” which explained that a whole slew of supposed anomalies in the empirical literature on the gold standard were easily explained if the Humean PSFM was disregarded. The second was Paul Samuelson’s 1980 paper “A Corrected Version of Hume’s Equilibrating Mechanisms for International Trade,” showing that the change in relative price levels — the mechanism whereby international monetary equilibrium is supposedly restored according to PSFM — is irrelevant to the adjustment process when arbitrage constraints on tradable goods are effective. The burden of the adjustment is carried by changes in spending patterns that restore desired asset holdings to their equilibrium levels, independently of relative-price-level effects. Samuelson further showed that even when, owing to the existence of non-tradable goods, there are relative-price-level effects, those effects are irrelevant to the adjustment process that restores equilibrium.

What was missing from Hume’s analysis was the concept of a demand to hold money (or gold). The difference between desired and actual holdings of cash implies corresponding changes in expenditure, and those changes in expenditure restore equilibrium in money (gold) holdings independently of any price effects. Lacking any theory of the demand to hold money (or gold), Hume had to rely on a price-level adjustment to explain how equilibrium is restored after a change in the quantity of gold in one country. Hume’s misstep set monetary economics off on a two-century detour, avoided by only a relative handful of economists, in explaining the process of international adjustment.

So historically there have been two paradigms of international adjustment under the gold standard: 1) the better-known, but incorrect, Humean PSFM based on relative-price-level differences which induce self-correcting gold flows that, in turn, are supposed to eliminate the price-level differences, and 2) the not-so-well-known, but correct, arbitrage-monetary-adjustment theory. Under the PSFM, the adjustment can occur only if gold flows give rise to relative-price-level adjustments. But, under PSFM, for those relative-price-level adjustments to occur, gold flows have to change the domestic money stock, because it is the quantity of domestic money that governs the domestic price level.

That is why if you believe, as Milton Friedman did, in PSFM, sterilization is such a big deal. Relative domestic price levels are correlated with relative domestic money stocks, so if a gold inflow into a country does not change its domestic money stock, the necessary increase in the relative price level of the country receiving the gold inflow cannot occur. The “automatic” adjustment mechanism under the gold standard has been blocked, implying that if there is sterilization, the gold standard is rendered fraudulent.

But we now know that that is not how the gold standard works. The point of gold flows was not to change relative price levels. International adjustment required changes in domestic money supplies to be sure, but, under the gold standard, changes in domestic money supplies are essentially unavoidable. Thus, in his 1932 defense of the insane Bank of France, Hayek pointed out that the domestic quantity of money had in fact increased in France along with French gold holdings. To Hayek, this meant that the Bank of France was not sterilizing the gold inflow. Friedman would have said that, given the gold inflow, the French money stock ought to have increased by a far larger amount than it actually did.

Neither Hayek nor Friedman understood what was happening. The French public wanted to increase their holdings of money. Because the French government imposed high gold reserve requirements (but less than 100%) on the creation of French banknotes and deposits, increasing holdings of money required the French to restrict their spending sufficiently to create a balance-of-trade surplus large enough to induce the inflow of gold needed to satisfy the reserve requirements on the desired increase in cash holdings. The direction of causation was exactly the opposite of what Friedman thought. It was the desired increase in the amount of francs that the French wanted to hold that (given the level of gold reserve requirements) induced the increase in French gold holdings.
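
A back-of-the-envelope sketch, with numbers that are purely hypothetical, makes the direction of causation explicit: given the Bank of France’s marginal gold cover on notes and deposits, it is the desired increase in franc holdings that determines how much gold has to be drawn in through a trade surplus, not the gold inflow that determines the money stock.

```python
# Hypothetical numbers illustrating the reversed causation described above.
reserve_ratio = 0.35                    # assumed marginal gold cover on notes and deposits
desired_increase_in_money = 10_000.0    # extra francs the public wishes to hold (arbitrary units)

required_gold_inflow = reserve_ratio * desired_increase_in_money
required_trade_surplus = required_gold_inflow    # the gold must be earned by an export surplus

print(f"To add {desired_increase_in_money:,.0f} francs of money holdings under a "
      f"{reserve_ratio:.0%} gold cover, spending must be cut enough to run a cumulative "
      f"trade surplus of {required_trade_surplus:,.0f} in gold.")
```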

But this doesn’t mean, as Hayek argued, that the insane Bank of France was not wreaking havoc on the international monetary system. By advocating a banking law that imposed very high gold reserve requirements and by insisting on redeeming almost all of its non-gold foreign exchange reserves into gold bullion, the insane Bank of France, along with the clueless Federal Reserve, generated a huge increase in the international monetary demand for gold, which was the proximate cause of the worldwide deflation that began in 1929 and continued till 1933. The problem was not a misalignment between relative price levels, which is what sterilization supposedly causes; the problem was a worldwide deflation that afflicted all countries on the gold standard, and was avoidable only by escaping from the gold standard.

At any rate, the concept of sterilization does nothing to enhance our understanding of that deflationary process. And whatever defects there were in the way that central banks were operating under the gold standard in the 1920s, the concept of sterilization diverts attention from the critical problem, which was the increasing demand of the world’s central banks, especially the Bank of France and the Federal Reserve, for gold reserves.

Macroeconomic Science and Meaningful Theorems

Greg Hill has a terrific post on his blog, providing the coup de grace to Stephen Williamson’s attempt to show that the way to increase inflation is for the Fed to raise its Federal Funds rate target. Williamson’s problem, Hill points out, is that he attempts to derive his results from relationships that exist in equilibrium. But equilibrium relationships in and of themselves are sterile. What we care about is how a system responds to some change that disturbs a pre-existing equilibrium.

Williamson acknowledged that “the stories about convergence to competitive equilibrium – the Walrasian auctioneer, learning – are indeed just stories . . . [they] come from outside the model” (here).  And, finally, this: “Telling stories outside of the model we have written down opens up the possibility for cheating. If everything is up front – written down in terms of explicit mathematics – then we have to be honest. We’re not doing critical theory here – we’re doing economics, and we want to be treated seriously by other scientists.”

This self-conscious scientism on Williamson’s part is not just annoyingly self-congratulatory. “Hey, look at me! I can write down mathematical models, so I’m a scientist, just like Richard Feynman.” It’s wildly inaccurate, because the mere statement of equilibrium conditions is theoretically vacuous. Back to Greg:

The most disconcerting thing about Professor Williamson’s justification of “scientific economics” isn’t its uncritical “scientism,” nor is it his defense of mathematical modeling. On the contrary, the most troubling thing is Williamson’s acknowledgement-cum-proclamation that his models, like many others, assume that markets are always in equilibrium.

Why is this assumption a problem?  Because, as Arrow, Debreu, and others demonstrated a half-century ago, the conditions required for general equilibrium are unimaginably stringent.  And no one who’s not already ensconced within Williamson’s camp is likely to characterize real-world economies as always being in equilibrium or quickly converging upon it.  Thus, when Williamson responds to a question about this point with, “Much of economics is competitive equilibrium, so if this is a problem for me, it’s a problem for most of the profession,” I’m inclined to reply, “Yes, Professor, that’s precisely the point!”

Greg proceeds to explain that the Walrasian general equilibrium model involves the critical assumption (implemented by the convenient fiction of an auctioneer who announces prices and computes supply and demand at those prices before allowing trade to take place) that no trading takes place except at the equilibrium price vector (where the number of elements in the vector equals the number of goods traded in the economy). Without an auctioneer there is no way to ensure that the equilibrium price vector, even if it exists, will ever be found.
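
Here is a sketch of that auctioneer fiction in code, using a two-good exchange economy with two hypothetical Cobb-Douglas traders (good 2 as numeraire). The auctioneer calls out a price, tallies the notional excess demand, adjusts the price, and allows no trade until excess demand vanishes. With these gross-substitute preferences the tatonnement happens to converge; nothing in the general case guarantees that it will.

```python
# Sketch of the Walrasian auctioneer (tatonnement) with two hypothetical
# Cobb-Douglas traders in a two-good exchange economy. Good 2 is the numeraire.
# No trade occurs until the quoted price clears the market.

traders = [
    {"alpha": 0.6, "endow": (1.0, 9.0)},   # expenditure share on good 1, endowments of goods 1 and 2
    {"alpha": 0.3, "endow": (9.0, 1.0)},
]

def excess_demand_good1(p1):
    z = 0.0
    for t in traders:
        income = p1 * t["endow"][0] + t["endow"][1]        # p2 = 1
        z += t["alpha"] * income / p1 - t["endow"][0]      # notional demand minus endowment
    return z

p1, step = 1.0, 0.05
rounds = 0
for rounds in range(1, 201):
    z = excess_demand_good1(p1)
    if abs(z) < 1e-10:
        break
    p1 += step * z        # raise the price if there is excess demand, lower it if excess supply

print(f"tatonnement stopped after {rounds} rounds at p1 = {p1:.4f} "
      f"(excess demand {excess_demand_good1(p1):.2e}); only now is trade allowed")
```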

Franklin Fisher has shown that decisions made out of equilibrium will only converge to equilibrium under highly restrictive conditions (in particular, “no favorable surprises,” i.e., all “sudden changes in expectations are disappointing”).  And since Fisher has, in fact, written down “the explicit mathematics” leading to this conclusion, mustn’t we conclude that the economists who assume that markets are always in equilibrium are really the ones who are “cheating”?

An alternative general equilibrium story is that learning takes place allowing the economy to converge on a general equilibrium time path over time, but Greg easily disposes of that story as well.

[T]he learning narrative also harbors massive problems, which come out clearly when viewed against the background of the Arrow-Debreu idealized general equilibrium construction, which includes a complete set of intertemporal markets in contingent claims.  In the world of Arrow-Debreu, every price in every possible state of nature is known at the moment when everyone’s once-and-for-all commitments are made.  Nature then unfolds – her succession of states is revealed – and resources are exchanged in accordance with the (contractual) commitments undertaken “at the beginning.”

In real-world economies, these intertemporal markets are woefully incomplete, so there’s trading at every date, and a “sequence economy” takes the place of Arrow and Debreu’s timeless general equilibrium.  In a sequence economy, buyers and sellers must act on their expectations of future events and the prices that will prevail in light of these outcomes.  In the limiting case of rational expectations, all agents correctly forecast the equilibrium prices associated with every possible state of nature, and no one’s expectations are disappointed. 

Unfortunately, the notion that rational expectations about future prices can replace the complete menu of Arrow-Debreu prices is hard to swallow.  Frank Hahn, who co-authored “General Competitive Analysis” with Kenneth Arrow (1972), could not begin to swallow it, and, in his disgorgement, proceeded to describe in excruciating detail why the assumption of rational expectations isn’t up to the job (here).  And incomplete markets are, of course, but one departure from Arrow-Debreu.  In fact, there are so many more that Hahn came to ridicule the approach of sweeping them all aside, and “simply supposing the economy to be in equilibrium at every moment of time.”

Just to pile on, I would also point out that any general equilibrium model assumes that there is a given state of knowledge that is available to all traders collectively, but not necessarily to each trader. In this context, learning means that traders gradually learn what the pre-existing facts are. But in the real world, knowledge increases and evolves through time. As knowledge changes, capital — both human and physical — embodying that knowledge becomes obsolete and has to be replaced or upgraded, at unpredictable moments of time, because it is the nature of new knowledge that it cannot be predicted. The concept of learning incorporated in these sorts of general equilibrium constructs is a travesty of the kind of learning that characterizes the growth of knowledge in the real world. The implications for the existence of a general equilibrium in a world in which knowledge grows in an unpredictable way are devastating.

Greg aptly sums up the absurdity of using general equilibrium theory (the description of a decentralized economy in which the component parts are in a state of perfect coordination) as the microfoundation for macroeconomics (the study of decentralized economies that are less than perfectly coordinated) as follows:

What’s the use of “general competitive equilibrium” if it can’t furnish a sturdy, albeit “external,” foundation for the kind of modeling done by Professor Williamson, et al?  Well, there are lots of other uses, but in the context of this discussion, perhaps the most important insight to be gleaned is this: Every aspect of a real economy that Keynes thought important is missing from Arrow and Debreu’s marvelous construction.  Perhaps this is why Axel Leijonhufvud, in reviewing a state-of-the-art New Keynesian DSGE model here, wrote, “It makes me feel transported into a Wonderland of long ago – to a time before macroeconomics was invented.”

To which I would just add that nearly 70 years ago, Paul Samuelson published his magnificent Foundations of Economic Analysis, a work undoubtedly read and mastered by Williamson. But the central contribution of the Foundations was the distinction between equilibrium conditions and what Samuelson (owing to the influence of the still fashionable philosophical school called logical positivism) mislabeled meaningful theorems. A mere equilibrium condition is not the same as a meaningful theorem, but Samuelson showed how a meaningful theorem can be mathematically derived from an equilibrium condition. The link between equilibrium conditions and meaningful theorems was the foundation of economic analysis. Without a mathematical connection between equilibrium conditions and meaningful theorems analogous to the one provided by Samuelson in the Foundations, claims to have provided microfoundations for macroeconomics are, at best, premature.
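
To make Samuelson’s distinction concrete, here is a minimal textbook-style sketch in my own notation (not a quotation from the Foundations). Take the single-market equilibrium condition

$$D(p, \alpha) - S(p) = 0,$$

where $\alpha$ is a parameter shifting demand. By itself the condition is vacuous: differentiating it yields

$$\frac{dp}{d\alpha} = \frac{D_\alpha}{S_p - D_p},$$

whose sign is indeterminate. But if the equilibrium is also hypothesized to be stable under the adjustment rule $\dot{p} = k\,[D(p,\alpha) - S(p)]$, with $k > 0$, then stability requires $D_p - S_p < 0$, so the denominator is positive and the sign of $dp/d\alpha$ is the sign of $D_\alpha$: a demand-increasing shift must raise the equilibrium price. That refutable qualitative prediction, obtained by combining the equilibrium condition with a stability hypothesis (Samuelson’s correspondence principle), is the sort of result he called a meaningful theorem.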

The Road to Serfdom: Good Hayek or Bad Hayek?

A new book by Angus Burgin about the role of F. A. Hayek, Milton Friedman, and the Mont Pelerin Society (an organization of free-market economists, plus some scholars in other disciplines, founded by Hayek and later headed by Friedman) in resuscitating free-market capitalism as a political ideal, after its nineteenth-century version had been discredited by the twin catastrophes of the Great War and the Great Depression, was the subject of an interesting and in many ways insightful review by Robert Solow in the latest New Republic. Despite some unfortunate memory lapses and apologetics concerning his own errors and those of his good friend and colleague Paul Samuelson in their assessments of the efficiency of central planning, thereby minimizing the analytical contributions of Hayek and Friedman, Solow does a good job of highlighting the complexity and nuances of Hayek’s thought — a complexity often ignored not only by Hayek’s critics but by many of his most vocal admirers — and of contrasting Hayek’s complexity and nuance with Friedman’s rhetorically and strategically compelling, but intellectually dubious, penchant for simplification.

First, let’s get the apologetics out of the way. Tyler Cowen pounced on this comment by Solow:

The MPS [Mont Pelerin Society] was no more influential inside the economics profession. There were no publications to be discussed. The American membership was apparently limited to economists of the Chicago School and its scattered university outposts, plus a few transplanted Europeans. “Some of my best friends” belonged. There was, of course, continuing research and debate among economists on the good and bad properties of competitive and noncompetitive markets, and the capacities and limitations of corrective regulation. But these would have gone on in the same way had the MPS not existed. It has to be remembered that academic economists were never optimistic about central planning. Even discussion about the economics of some conceivable socialism usually took the form of devising institutions and rules of behavior that would make a socialist economy function like a competitive market economy (perhaps more like one than any real-world market economy does). Maybe the main function of the MPS was to maintain the morale of the free-market fellowship.

And one of Tyler’s commenters unearthed this gem from Samuelson’s legendary textbook:

The Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive.

Tyler also dug up this nugget from the classic paper by Samuelson and Solow on the Phillips Curve (but see this paper by James Forder for some revisionist history about the Samuelson-Solow paper):

We have not here entered upon the important question of what feasible institutional reforms might be introduced to lessen the degree of disharmony between full employment and price stability. These could of course involve such wide-ranging issues as direct price and wage controls, antiunion and antitrust legislation, and a host of other measures hopefully designed to move the American Phillips’ curves downward and to the left.

But actually, Solow was undoubtedly right that the main function of the MPS was morale-building! Plus networking. Nothing to be sneered at, and nothing to apologize for. The real heavy lifting was done in the 51 weeks of the year when the MPS was not in session.

Anyway, enough score settling, because Solow does show a qualified, but respectful, appreciation for Hayek’s virtues as an economist, scholar, and social philosopher, suggesting that there was a Good Hayek, who struggled to reformulate a version of liberalism that transcended the inadequacies (practical and theoretical) that doomed the laissez-faire liberalism of the nineteenth century, and a Bad Hayek, who engaged in a black versus white polemical struggle with “socialists of all parties.” The trope strikes me as a bit unfair, but Hayek could sometimes be injudicious in his policy pronouncements, or in his off-the-cuff observations and remarks. Despite his natural reserve, Hayek sometimes indulged in polemical exaggeration. The appetite for rhetorical overkill was especially hard for Hayek to resist when the topic of discussion was J. M. Keynes, the object of both Hayek’s admiration and his disdain. Hayek seemingly could not help but caricature Keynes in a way calculated to make him seem both ridiculous and irresistible.  Have a look.

So I would not dispute that Hayek occasionally committed rhetorical excesses when wearing his policy-advocate hat. And there were some other egregious lapses on Hayek’s part, like his unqualified support for General Pinochet, reflecting perhaps a quixotic hope that somewhere there was a benevolent despot waiting to be persuaded to implement Hayek’s ideas for a new liberal political constitution in which the principle of the separation of powers would be extended to separate the law-making powers of the legislative body from the governing powers of the representative assembly.

But Solow exaggerates by characterizing the Road to Serfdom as an example of the Bad Hayek, despite acknowledging that the Road to Serfdom was very far from advocating a return to nineteenth-century laissez-faire. What Solow finds troubling is the thesis that

the standard regulatory interventions in the economy have any inherent tendency to snowball into “serfdom.” The correlations often run the other way. Sixty-five years later, Hayek’s implicit prediction is a failure, rather like Marx’s forecast of the coming “immiserization of the working class.”

This is a common interpretation of Hayek’s thesis in the Road to Serfdom. And it is true that Hayek did intimate that piecemeal social engineering (to borrow a phrase coined by Hayek’s friend Karl Popper) created tendencies, which, if not held in check by strict adherence to liberal principles, could lead to comprehensive central planning. But that argument is different from the main argument of the Road to Serfdom, which is that comprehensive central planning could be carried out effectively only by a government exercising unlimited power over individuals. And there is no empirical evidence that refutes Hayek’s main thesis.

A few years ago, in perhaps his last published article, Paul Samuelson wrote a brief historical assessment of Hayek, including personal recollections of their mostly friendly interactions and of one not so pleasant exchange they had in Hayek’s old age, when Hayek wrote to Samuelson demanding that Samuelson retract the statement in his textbook (essentially the same as the one made by Solow) that the empirical evidence, showing little or no correlation between economic and political freedom, refutes the thesis of the Road to Serfdom that intervention leads to totalitarianism. Hayek complained that this charge misrepresented what he had argued in the Road to Serfdom. Observing that Hayek, with whom he had long been acquainted, never previously complained about the passage, Samuelson explained that he tried to placate Hayek with an empty promise to revise the passage, attributing Hayek’s belated objection to the irritability of old age and a bad heart. Whether Samuelson’s evasive response to Hayek was an appropriate one is left as an exercise for the reader.

Defenders of Hayek expressed varying degrees of outrage at the condescending tone taken by Samuelson in his assessment of Hayek. I think that they were overreacting. Samuelson, an academic enfant terrible if there ever was one, may have treated his elders and peers with condescension, but, speaking from experience, I can testify that he treated his inferiors with the utmost courtesy. Samuelson was not dismissing Hayek; he was just being who he was.

The question remains: what was Hayek trying to say in the Road to Serfdom, and in subsequent works? Well, believe it or not, he was trying to say many things, but the main thesis of the Road to Serfdom was clearly what he always said it was: comprehensive central planning is, and always will be, incompatible with individual and political liberty. Samuelson and Solow were not testing Hayek’s main thesis. None of the examples of interventionist governments that they cite, mostly European social democracies, adopted comprehensive central planning, so Hayek’s thesis was not refuted by those counterexamples. Samuelson once acknowledged “considerable validity . . . for the nonnovel part [my emphasis] of Hayek’s warning” in the Road to Serfdom: “controlled socialist societies are rarely efficient and virtually never freely democratic.” Presumably Samuelson assumed that Hayek must have been saying something more than what had previously been said by other liberal economists. After all, if Hayek were saying no more than that liberty and democracy are incompatible with comprehensive central planning, what claim to originality could Hayek have been making? None.

Yep, that’s exactly right; Hayek was not making any claim to originality in the Road to Serfdom. But sometimes old truths have to be restated in a new and more persuasive form than that in which they were originally stated. That was especially the case in the early 1940s when collectivism and planning were widely viewed as the wave of the future, and even so thoroughly conservative and so eminent an economic theorist as Joseph Schumpeter could argue without embarrassment that there was no practical or theoretical reason why socialist central planning could not be implemented. And besides, the argument that every intervention leads to another one until the market system becomes paralyzed was not invented by Hayek either, having been made by Ludwig von Mises some twenty years earlier, and quite possibly by other writers before that.  So even the argument that Samuelson tried to pin on Hayek was not really novel either.

To be sure, Hayek’s warning that central planning would inevitably lead to totalitarianism was not the only warning he made in the Road to Serfdom, but conceptually distinct arguments should not be conflated. Hayek clearly wanted to make the argument that an unprincipled policy of economic interventions was dangerous, because interventions introduce distortions that beget further interventions, producing a cumulative process of ever-more intrusive interventions, thereby smothering market forces and eventually sapping the productive capacity of the free enterprise system. That is an argument about how it is possible to stumble into central planning without really intending to do so.  Hayek clearly believed in that argument, often invoking it in tandem with, or as a supplement to, his main argument about the incompatibility of central planning with liberty and democracy. Despite the undeniable tendency for interventions to create pressure (for both political and economic reasons) to adopt additional interventions, Hayek clearly overestimated the power of that tendency, failing to understand, or at least to take sufficient account of, the countervailing political forces resisting further interventions. So although Hayek was right that no intellectual principle enables one to say “so much intervention and not a drop more,” there could still be a kind of (messy) democratic political equilibrium that effectively limits the extent to which new interventions can be piled on top of old ones. That surely was a significant gap in Hayek’s too narrow, and overly critical, view of how the democratic political process operates.

That said, I think that Solow came close to getting it right in this paragraph:

THE GOOD HAYEK was not happy with the reception of The Road to Serfdom. He had not meant to provide a manifesto for the far right. Careless readers ignored his rejection of unqualified laissez-faire, and the fact that he reserved a useful, limited economic role for government. He had not actually claimed that the descent into serfdom was inevitable. There is no reason to doubt Hayek’s sincerity in this (although the Bad Hayek occasionally made other appearances). Perhaps he would be appalled at the thought of a Congress full of Tea Party Hayekians. But it was his book, after all. The fact that natural allies such as Knight and moderates such as Viner thought that he had overreached suggests that the Bad Hayek really was there in the text.

But not exactly right. Hayek was not totally good. Who is? Hayek made mistakes. Let him who is without sin cast the first stone. Frank Knight didn’t like the Road to Serfdom. But as Solow himself observed earlier in his review, Knight was a curmudgeon, and had previously crossed swords with Hayek over arcane issues of capital theory. So any inference from Knight’s reaction to the Road to Serfdom must be taken with a large grain of salt. And one might also want to consider what Schumpeter said about Hayek in his review of the Road to Serfdom, criticizing Hayek for “politeness to a fault,” because Hayek would “hardly ever attribute to opponents anything beyond intellectual error.” Was the Bad Hayek really there in the text? Was it really “not a good book”? The verdict has to be: unproven.

PS In his review, Solow expressed a wish for a full list of the original attendees at the founding meeting of the Mont Pelerin Society. Hayek included the list as a footnote to his “Opening Address to a Conference at Mont Pelerin,” published in his Studies in Philosophy, Politics and Economics. There is a slightly different list of original members in Wikipedia.

Maurice Allais, Paris

Carlo Antoni, Rome

Hans Barth, Zurich

Karl Brandt, Stanford, Calif.

John Davenport, New York

Stanley R. Dennison, Cambridge

Walter Eucken, Freiburg i. B.

Erich Eyck, Oxford

Milton Friedman, Chicago

H. D. Gideonse, Brooklyn

F. D. Graham, Princeton

F. A. Harper, Irvington-on-Hudson, NY

Henry Hazlitt, New York

T. J. B. Hoff, Oslo

Albert Hunold, Zurich

Bertrand de Jouvenel, Chexbres, Vaud

Carl Iversen, Copenhagen

John Jewkes, Manchester

F. H. Knight, Chicago

Fritz Machlup, Buffalo

L. B. Miller, Detroit

Ludwig von Mises, New York

Felix Morley, Washington, DC

Michael Polanyi, Manchester

Karl R. Popper, London

William E. Rappard, Geneva

L. E. Read, Irvington-on-Hudson, NY

Lionel Robbins, London

Wilhelm Roepke, Geneva

George J. Stigler, Providence, RI

Herbert Tingsten, Stockholm

François Trevoux, Lyon

V. O. Watts, Irvington-on-Hudson, NY

C. V. Wedgwood, London

In addition, Hayek included the names of others who were invited but unable to attend and who joined the MPS as original members:

Costantino Bresciani-Turroni, Rome

William H. Chamberlin, New York

Rene Courtin, Paris

Max Eastman, New York

Luigi Einaudi, Rome

Howard Ellis, Berkeley, Calif.

A. G. B. Fisher, London

Eli Heckscher, Stockholm

Hans Kohn, Northampton, Mass

Walter Lippmann, New York

Friedrich Lutz, Princeton

Salvador de Madariaga, Oxford

Charles Morgan, London

W. A. Orton, Northampton, Mass.

Arnold Plant, London

Charles Rist, Paris

Michael Roberts, London

Jacques Rueff, Paris

Alexander Rustow, Istanbul

F. Schnabel, Heidelberg

W. J. H. Sprott, Nottingham

Roger Truptil, Paris

D. Villey, Poitiers

E. L. Woodward, Oxford

H. M. Wriston, Providence, RI

G. M. Young, London


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
