Archive for the 'Mark Blaug' Category

What’s so Great about Science? or, How I Learned to Stop Worrying and Love Metaphysics

A couple of weeks ago, a lot of people in a lot of places marched for science. What struck me about those marches is that there is almost nobody out there who is openly and explicitly campaigning against science. There are, of course, a few flat-earthers who, if one looks for them very diligently, can be found. But does anyone – including the flat-earthers themselves – think that they are serious? There are also Creationists who believe that the earth was created and designed by a Supreme Being – usually along the lines of the Biblical account in the Book of Genesis. But Creationists don’t reject science in general; they reject a particular scientific theory, because they believe it to be untrue, and they try to defend their beliefs with a variety of arguments couched in scientific terms. I don’t defend Creationist arguments, but just because someone makes a bad scientific argument, it doesn’t mean that the person making the argument is an opponent of science. To be sure, the reason that Creationists make bad arguments is that they hold a set of beliefs about how the world came to exist that aren’t based on science but on some religious or ideological belief system. But people come up with arguments all the time to justify beliefs for which they have no evidentiary or “scientific” basis.

I mean, one of the two greatest scientists who ever lived criticized quantum mechanics, because he couldn’t accept that the world is not fully determined by the laws of nature, or, as he put it so pithily: “God does not play dice with the universe.” I understand that Einstein was not religious and wasn’t making a religious argument, but he was basing his view of what an acceptable scientific theory should be on certain metaphysical predispositions that he held, and he was expressing his disinclination to accept a theory inconsistent with those predispositions. A scientific argument is judged on its merits, not on the motivations for advancing it. And I won’t even discuss the voluminous writings of the other of the two greatest scientists who ever lived on alchemy and other occult topics.

Similarly, there are climate-change deniers who question the scientific basis for asserting that temperatures have been rising around the world, and that the increase in temperatures results from human activity that discharges greenhouse gasses into the atmosphere. Deniers of global warming may be biased and may be making bad scientific arguments, but the mere fact – and for purposes of this discussion I don’t dispute that it is a fact – that global warming is real and caused by human activity does not mean that someone who disputes those facts is thereby unmasked as an opponent of science. R. A. Fisher, the greatest mathematical statistician of the first half of the twentieth century, who developed most of the statistical techniques now used in experimental research, severely damaged his reputation by rejecting or dismissing evidence that smoking tobacco is a primary cause of cancer. Some critics accused Fisher of having been compromised by financial inducements from the tobacco industry, while others attributed his position to his own smoking habits or anti-puritanical tendencies. In any event, Fisher’s arguments against a causal link between smoking tobacco and lung cancer are now viewed as an embarrassing stain on an otherwise illustrious career. But Fisher’s lapse of judgment, and perhaps of ethics, doesn’t justify accusing him of opposition to science. Climate-change deniers don’t reject science; they reject or disagree with the conclusions of most climate scientists. They may have lousy reasons for their views – either that the climate is not changing or that whatever change has occurred is unrelated to the human production of greenhouse gasses – but holding wrong or biased views doesn’t make someone an opponent of science.

I don’t say that there are no people who dislike science – I mean who dislike it because of what it stands for, not because they find it difficult or boring. Such people may be opposed to teaching science and to funding scientific research, and they don’t want scientific knowledge to influence public policy or the way people live. But, as far as I can tell, they have little influence. There is just no one out there who wants to outlaw scientific research or to criminalize the teaching of science. They may not want to fund science, but they aren’t trying to ban it. In fact, I doubt that the prestige and authority of science have ever been higher than they are now. Certainly religion, especially organized religion, to which science was once subordinate if not subservient, no longer exercises anything near the authority that science now does.

The reason for this extended introduction into the topic that I really want to discuss is to provide some context for my belief that economists worry too much about whether economics is really a science. It was such a validation for economists when the Swedish Central Bank piggy-backed on the storied Nobel Prize to create its ersatz “Nobel Memorial Prize” for economic science. (I note with regret the recent passing of William Baumol, whose failure to receive the Nobel Prize in economics, like that of Armen Alchian, was in fact a deplorable failure of good judgment on the part of the Nobel Committee.) And the self-consciousness of economists about the possibly dubious status of economics as a science is a reflection of the exalted status of science in society. So naturally, if one is seeking to increase the prestige of one’s own occupation and of the intellectual discipline in which one does research, it helps enormously to be able to say: “oh, yes, I am an economist, and economics is a science, which means that I really am a scientist, just like those guys that win Nobel Prizes.” It also helps to be able to show that your scientific research involves a lot of mathematics, because scientists use math in their theories, sometimes a lot of math, which makes it hard for non-scientists to understand what scientists are doing. We economists also use math in our theories, sometimes a lot of math, and that’s why it’s just as hard for non-economists to understand what we economists are doing as it is to understand what real scientists are doing. “So we really are scientists, aren’t we?”

Where did this obsession with science come from? I think it’s fairly recent, but my sketchy knowledge of the history of science prevents me from getting too deeply into that discussion. Until relatively modern times, science was subsumed under the heading of philosophy — Greek for the love of wisdom. But philosophy is a very broad subject, so eventually that part of philosophy concerned with the world as it actually exists came to be called natural philosophy, as opposed to, say, ethical and moral philosophy. After the stunning achievements of Newton and his successors, and after Francis Bacon outlined an inductive method for acquiring knowledge of the world, the disjunction between mere speculative thought and empirically based research, which is what science supposedly exemplifies, became increasingly sharp. And the inductive method seemed to be the right way to do science.

David Hume and Immanuel Kant struggled, with limited success, to make sense of induction, because a general proposition cannot be logically deduced from a set of observations, however numerous. Despite the logical problem of induction, early in the twentieth century a philosophical movement based in Vienna, called logical positivism, arrived at the conclusion that not only is all scientific knowledge acquired inductively through sensory experience and observation, but no meaning can be attached to any statement unless it refers to something about which we have or could have sensory experience; to be meaningful, a statement must be at least verifiable, so that its truth could in principle be either confirmed or refuted. Any reference to concepts that have no basis in sensory experience is simply meaningless, i.e., a form of nonsense. Thus, science became not just the epitome of valid, certain, reliable, verified knowledge, which is what people were led to believe by the stunning success of Newton’s theory; it became the exemplar of meaningful discourse. Unless our statements refer to some observable, verifiable object, we are talking nonsense. And in the first half of the twentieth century, logical positivism dominated academic philosophy, at least in the English-speaking world, thereby exercising great influence over how economists thought about their own discipline and its scientific status.

Logical positivism was subjected to rigorous criticism by Karl Popper in his early work Logik der Forschung (translated into English as The Logic of Scientific Discovery). His central point was that scientific theories are less about what is or has been observed than about what cannot be observed. The empirical content of a scientific proposition consists in the range of observations that the theory says are not possible; the more observations excluded by the theory, the greater its empirical content. A theory that is consistent with any conceivable observation has no empirical content. Thus, paradoxically, scientific theories, under the logical-positivist doctrine, would have to be considered nonsensical, because they tell us what can’t be observed. And because it is always possible that an excluded observation – the black swan – which our scientific theory tells us can’t be observed, will in fact be observed, scientific theories can never be definitively verified. If a scientific theory can’t be verified, then, according to the positivists’ own criterion, the theory is nonsense. Of course, this just shows that the positivist criterion of meaning was itself nonsensical, because scientific theories are obviously meaningful despite being unverifiable.

Popper therefore concluded that verification or verifiability can’t be a criterion of meaning. In its place he proposed the criterion of falsification (i.e., refutation, not misrepresentation), but falsification became a criterion not for distinguishing between what is meaningful and what is meaningless, but between science and metaphysics. There is no reason why metaphysical statements (statements lacking empirical content) cannot be perfectly meaningful; they just aren’t scientific. Popper was misinterpreted by many to have simply substituted falsifiability for verifiability as a criterion of meaning; that was a mistaken interpretation, which Popper explicitly rejected.

So, in using the term “meaningful theorems” to refer to potentially refutable propositions that can be derived from economic theory by the method of comparative statics, Paul Samuelson, in his Foundations of Economic Analysis, treated Popper’s demarcation criterion between science and metaphysics as if it were a criterion of demarcation between meaning and nonsense. I conjecture that Samuelson’s unfortunate lapse into the discredited verbal usage of logical positivism may have reinforced the unhealthy inclination of economists to feel the need to prove their scientific credentials in order even to engage in meaningful discourse.

While Popper certainly performed a valuable service in clearing up the positivist confusion about meaning, he adopted a very prescriptive methodology aimed at making scientific practice more scientific in the sense of exposing theories to, rather than immunizing them against, attempts at refutation, because, according to Popper, it is only after our theories survive powerful attempts to show that they are false that we can have confidence that those theories may be true, or at least close to the truth. In principle, Popper was not wrong in encouraging scientists to formulate theories that are empirically testable by specifying what kinds of observations would be inconsistent with them. But in practice, that advice has been difficult to follow, and not only because researchers try to avoid subjecting their pet theories to tests that might prove them wrong.

Although Popper often cited historical examples to support his view that science progresses through an ongoing process of theoretical conjecture and empirical refutation, historians of science have had no trouble finding instances in which scientists did not follow Popper’s methodological rules and continued to maintain theories even after they had been refuted by evidence or after other theories had been shown to generate more accurate predictions. Popper parried this objection by saying that his methodological rules were not positive (i.e., descriptive of science), but normative (i.e., prescriptive of how to do good science). In other words, Popper’s scientific methodology was itself not empirically refutable and scientific, but empirically irrefutable and metaphysical. I point out the unscientific character of Popper’s methodology of science not to criticize Popper, but to show that Popper himself did not believe that science is the final authority and ultimate arbiter of scientific practice.

But the more important lesson from the critical discussions of Popper’s methodological rules seems to me to be that they are too rigid to accommodate all the considerations that are relevant to assessing scientific theories and deciding whether those theories should be discarded or, at least tentatively, maintained. Popper’s methodological rules are especially ill-suited for economics and other disciplines in which the empirical implications of theories depend on a large number of jointly maintained hypotheses, so that it is hard to identify which of several maintained hypotheses is responsible for the failure of a predicted outcome to match the observed outcome. That, of course, is the well-known ceteris paribus problem, and it requires a very capable practitioner to know when to apply the ceteris paribus condition and which variables to hold constant and which to allow to vary. Popper’s methodological rules tell us to reject a theory when its predictions fail, and Popper regarded the ceteris paribus qualification quite skeptically, as an illegitimate immunizing stratagem. That poses a profound dilemma for economics: on the one hand, it is hard to imagine how economic theory could be applied without the ceteris paribus qualification; on the other hand, the qualification diminishes the empirical content of economic theory.

Empirical problems are amplified by the infirmities of the data that economists typically use to derive quantitative predictions from their models. The accuracy of the data is often questionable, and the relationships between the data and the theoretical concepts they are supposed to measure are often dubious. Moreover, the assumptions about the data-generating process (e.g., independent and identically distributed random variables, randomly selected observations, omitted explanatory variables uncorrelated with the included explanatory variables) necessary for classical statistical techniques to generate unbiased estimates of the theoretical coefficients are almost impossibly stringent. Econometricians are certainly well aware of these issues and have devised methods of mitigating them, but the problems with the data routinely used by economists, and the complicated issues involved in developing and applying techniques to cope with those problems, make it very difficult to use statistical techniques to reach definitive conclusions about empirical questions.
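To make concrete why the assumption about omitted variables matters, here is a minimal simulation sketch (my own illustration, not anything from the post; the variable names are hypothetical). It shows the standard textbook result that leaving out a regressor correlated with an included one biases the estimated coefficient, which is exactly the kind of fragility that makes unbiased estimation so demanding with non-experimental data.

```python
# Minimal sketch (illustration only): omitted-variable bias.
# True model: y = 1.0*x + 1.0*z + noise, with x and z positively correlated.
# Regressing y on x alone attributes part of z's effect to x.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)            # x is correlated with the omitted z
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

# OLS of y on x only (with intercept): the coefficient on x is biased upward
X_short = np.column_stack([np.ones(n), x])
beta_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

# OLS of y on both x and z: estimates are close to the true values (1.0, 1.0)
X_long = np.column_stack([np.ones(n), x, z])
beta_long = np.linalg.lstsq(X_long, y, rcond=None)[0]

print("short regression coefficient on x:", beta_short[1])   # roughly 1.5, not 1.0
print("long regression coefficients on x, z:", beta_long[1:])
```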

Jeff Biddle, one of the leading contemporary historians of economics, has a wonderful paper (“Statistical Inference in Economics 1920-1965: Changes in Meaning and Practice”)– his 2016 presidential address to the History of Economics Society – discussing how the modern statistical techniques based on concepts and methods derived from probability theory gradually became the standard empirical and statistical techniques used by economists, even though many distinguished earlier researchers who were neither unaware of, nor unschooled in, the newer techniques believed them to be inappropriate for analyzing economic data. Here is the abstract of Biddle’s paper.

This paper reviews changes over time in the meaning that economists in the US attributed to the phrase “statistical inference”, as well as changes in how inference was conducted. Prior to WWII, leading statistical economists rejected probability theory as a source of measures and procedures to be used in statistical inference. Haavelmo and the econometricians associated with the early Cowles Commission developed an approach to statistical inference based on concepts and measures derived from probability theory, but the arguments they offered in defense of this approach were not always responsive to the concerns of earlier empirical economists that the data available to economists did not satisfy the assumptions required for such an approach. Despite this, after a period of about 25 years, a consensus developed that methods of inference derived from probability theory were an almost essential part of empirical research in economics. I close the paper with some speculation on possible reasons for this transformation in thinking about statistical inference.

I quote one passage from Biddle’s paper:

As I have noted, the leading statistical economists of the 1920s and 1930s were also unwilling to assume that any sample they might have was representative of the universe they cared about. This was particularly true of time series, and Haavelmo’s proposal to think of time series as a random selection of the output of a stable mechanism did not really address one of their concerns – that the structure of the “mechanism” could not be expected to remain stable for long periods of time. As Schultz pithily put it, “‘the universe’ of our time series does not ‘stay put’” (Schultz 1938, p. 215). Working commented that there was nothing in the theory of sampling that warranted our saying that “the conditions of covariance obtaining in the sample (would) hold true at any time in the future” (Advisory Committee 1928, p. 275). As I have already noted, Persons went further, arguing that treating a time series as a sample from which a future observation would be a random draw was not only inaccurate but ignored useful information about unusual circumstances surrounding various observations in the series, and the unusual circumstances likely to surround the future observations about which one wished to draw conclusions (Persons 1924, p. 7). And, the belief that samples were unlikely to be representative of the universe in which the economists had an interest applied to cross section data as well. The Cowles econometricians offered little to assuage these concerns except the hope that it would be possible to specify the equations describing the systematic part of the mechanism of interest in a way that captured the impact of factors that made for structural change in the case of time series, or factors that led cross section samples to be systematically different from the universe of interest.

It is not my purpose to argue that the economists who rejected the classical theory of inference had better arguments than the Cowles econometricians, or had a better approach to analyzing economic data given the nature of those data, the analytical tools available, and the potential for further development of those tools. I only wish to offer this account of the differences between the Cowles econometricians and the previously dominant professional opinion on appropriate methods of statistical inference as an example of a phenomenon that is not uncommon in the history of economics. Revolutions in economics, or “turns”, to use a currently more popular term, typically involve new concepts and analytical methods. But they also often involve a willingness to employ assumptions considered by most economists at the time to be too unrealistic, a willingness that arises because the assumptions allow progress to be made with the new concepts and methods. Obviously, in the decades after Haavelmo’s essay on the probability approach, there was a significant change in the list of assumptions about economic data that empirical economists were routinely willing to make in order to facilitate empirical research.

Let me now quote from a recent book (To Explain the World) by Steven Weinberg, perhaps – even though a movie about his life has not (yet) been made — the greatest living physicist:

Newton’s theory of gravitation made successful predictions for simple phenomena like planetary motion, but it could not give a quantitative account of more complicated phenomena, like the tides. We are in a similar position today with regard to the strong forces that hold quarks together inside the protons and neutrons inside the atomic nucleus, a theory known as quantum chromodynamics. This theory has been successful in accounting for certain processes at high energy, such as the production of various strongly interacting particles in the annihilation of energetic electrons and their antiparticles, and its successes convince us that the theory is correct. We cannot use the theory to calculate precise values for other things that we would like to explain, like the masses of the proton and neutron, because the calculations are too complicated. Here, as for Newton’s theory of the tides, the proper attitude is patience. Physical theories are validated when they give us the ability to calculate enough things that are sufficiently simple to allow reliable calculations, even if we can’t calculate everything that we might want to calculate.

So Weinberg is very much aware of the limits that even physics faces in making accurate predictions. Only a small subset (relative to the universe of physical phenomena) of simple effects can be calculated, but the capacity of physics to make very accurate predictions of simple phenomena gives us a measure of confidence that the theory would be reliable in making more complicated predictions if only we had the computing capacity to make them. But in economics the set of simple predictions that can be accurately made is almost nil, because economics is inherently a theory of complex social phenomena, and simplifying the real-world problems to which we apply the theory enough to allow testable predictions to be made is extremely difficult and hardly ever possible. Experimental economists try to create conditions in which this can be done in controlled settings, but whether these experimental results have much relevance for real-world applications is open to question.

The problematic relationship between economic theory and empirical evidence is deeply rooted in the nature of economic theory and the very complex nature of the phenomena that economic theory seeks to explain. It is very difficult to isolate simple real-world events that allow competing economic theories to be put to decisive empirical tests based on unambiguous observations that are either consistent with or contrary to the predictions generated by those theories. Under those circumstances, if we apply the Popperian criterion for demarcation between science and metaphysics to economics, it is not at all clear to me whether economics is more on the science side of the line than on the metaphysics side.

Certainly, there are refutable implications that can be deduced from economic theory, but those implications are usually subject to qualification, so they are often refutable only in principle, not in practice. Many fastidious economic methodologists, notably Mark Blaug, voiced unhappiness about this state of affairs and blamed economists for not being more ruthless in applying Popperian tests of empirical refutation to their theories. Surely Blaug had a point, but the infrequency of empirical refutation of theories in economics is, I think, less attributable to bad methodological practice on the part of economists than to the nature of the theories that economists work with and the inherent ambiguities of the empirical evidence with which those theories can be tested. We might as well face up to the fact that, to a large extent, empirical evidence is simply not clear-cut enough to force us to discard well-entrenched economic theories, because such theories typically have enough moving parts to be adjusted and reformulated in response to apparently contrary or anomalous evidence, allowing them to live on to fight another day.

Popper’s somewhat disloyal disciple, Imre Lakatos, talked about scientific theories in the context of scientific research programs, a research program being an amalgam of related theories sharing a common inner core of theoretical principles or axioms that are not subject to refutation. Lakatos called this deep axiomatic core of principles the hard core of the research program. The hard core defines the program, so it is fundamentally fixed and not open to refutation. The empirical content of the research program is provided by a protective belt of specific theories that are subject to refutation and, when refuted, can be replaced as needed with alternative theories that are consistent with both the theoretical hard core and the empirical evidence. What determines the success of a scientific research program is whether it is progressive or degenerating. A progressive research program accumulates an increasingly dense, but evolving, protective belt of theories in response to new theoretical and empirical problems or puzzles generated within the research program, keeping researchers busy and attracting into the program new researchers seeking problems to solve. In contrast, a degenerating research program is unable to find enough interesting new problems or puzzles to keep researchers busy, much less to attract new ones.

Despite its Popperian origins, the largely sociological Lakatosian account of how science evolves and progresses was hardly congenial to Popper’s sensibilities, because the success of a research program is not strictly determined by the process of conjecture and refutation envisioned by Popper. But the important point for me is that a Lakatosian research program can be progressive even if it is metaphysical and not scientific. What matters is that it offer opportunities for researchers to find and to solve or even just to talk about solving new problems, thereby attracting new researchers into the program.

It does appear that economics has for at least two centuries been a progressive research program. But it is not clear that it is really a scientific research program, because economic theory is so flexible that it can be adapted as needed to explain almost any set of observations. Almost any observation can be set up and solved in terms of some sort of constrained optimization problem. What the task requires is sufficient ingenuity on the part of the theorist to formulate the problem in such a way that the desired outcome can be derived as the solution of a constrained optimization problem. The hard core of the research program is therefore never at risk, and the protective belt can always be modified as needed to generate the sort of solution that is compatible with the theoretical hard core. The scope for refutation has thus been effectively narrowed to the point of eliminating any real risk of refutation, leaving us with a progressive metaphysical research program.
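To illustrate what recasting almost any observation as a constrained optimization problem amounts to, here is the generic template in symbols (a schematic sketch of my own, not anything drawn from the post): the theorist is free to choose the objective and the constraint so that the observed behavior emerges as the solution.

```latex
% Generic constrained-optimization template (schematic illustration only):
% choose U, g, and c so that the observed behavior appears as the solution x*.
\[
  \max_{x} \; U(x) \quad \text{subject to} \quad g(x) \le c ,
\]
% with the observed outcome rationalized by the first-order (Kuhn-Tucker) condition
\[
  \nabla U(x^{*}) = \lambda \, \nabla g(x^{*}), \qquad \lambda \ge 0 .
\]
```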

I am not denying that it would be preferable if economics could be a truly scientific research program, but it is not clear to me how much can be done about it. The complexity of the phenomena, the multiplicity of the hypotheses required to explain the data, and the ambiguous and not fully reliable nature of most of the data that economists have available devilishly conspire to render Popperian falsificationism an illusory ideal in economics. That is not an excuse for cynicism, just a warning against unrealistic expectations about what economics can accomplish. And the last thing that I am suggesting is that we stop paying attention to the data that we have or stop trying to improve the quality of the data that we have to work with.

Sumner Sticks with Friedman

Scott Sumner won’t let go. Scott had another post today trying to show that the Cambridge theory of the demand for money was already in place before Keynes arrived on the scene. He quotes from Hicks’s classic article “Mr. Keynes and the Classics” to dispute the quotation from another classic article by Hicks, “A Suggestion for Simplifying the Theory of Money,” which I presented in a post last week, demonstrating that Hicks credited Keynes with an important contribution to the theory of the demand for money that went beyond what Pigou, and even Lavington, had provided in their discussions of the demand for money.

In this battle of dueling quotations, I will now call upon Mark Blaug, perhaps the greatest historian of economics since Schumpeter, who in his book Economic Theory in Retrospect devotes an entire chapter (15) to the neoclassical theory of money, interest and prices. I quote from pp. 636-37 (4th edition).

Marshall and his followers went some way to move the theory of the demand for money in the direction of ordinary demand analysis, first, by relating money to net output or national income rather than the broader category of total transactions, and, second, by shifting from money’s rate of turnover to the proportion of annual income that the public wishes to hold in the form of money. In purely formal terms, there is nothing to choose between the Fisherian transaction approach and the Cambridge cash-balance approach, but the Cambridge formulation held out the potential of a genuine portfolio theory of the demand for money, which potential, however, was never fully exploited. . . .

The Cambridge formulation implies a demand for money equation, D_m = kPY, which contains no variable to represent the opportunity costs of holding cash, namely the rate of interest or the yield of alternative non-money assets, analogous to the relative price arguments of ordinary demand functions.
Yet a straight-forward application of utility-maximizing principles would have suggested that a rise in interest rates is likely to induce a fall in k as people strive to substitute interest-earning assets for passive money balances in their asset portfolios. Similarly, a fall in interest rates, by lowering the opportunity cost of holding money, is likely to cause a rise in k. Strangely enough, however, the Cambridge monetary theory never explicitly recognized the functional dependence of k on either the rate of interest or the rate on all non-monetary assets. After constructing a framework highly suggestive of a study of all the factors influencing cash-holding decisions, the Cambridge writers tended to lapse back to a list of the determinants of k that differed in no important respects from the list of institutional factors that Fisher had cited in his discussion of V. One can find references in Marshall, Pigou and particularly Lavington to a representative individual striking a balance between the costs of cash holdings in terms of interest foregone (minus the brokerage costs that would have been incurred by the movement into stocks and bonds) and their returns in terms of convenience and security against default but such passages were never systematically integrated with the cash-balance equation. As late as 1923, we find the young Keynes in A Tract on Monetary Reform interpreting k as a stable constant, representing an invariant link in the transmission mechanism connecting money to prices. If only Keynes at that date had read Wicksell instead of Marshall, he might have arrived at a money demand function that incorporates variations in the interest rate years before The General Theory (1936).
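Blaug’s point can be put compactly in symbols (my own notation, not Blaug’s or the Cambridge economists’): keep the cash-balance equation, but let k depend on the yield of alternative assets.

```latex
% Cambridge cash-balance equation as usually written, with k treated as an
% institutional constant (this much is in Blaug's passage above):
\[
  D_m = k P Y
\]
% The utility-maximizing extension Blaug says was never explicitly drawn:
% let k depend on the yield i of alternative non-money assets, falling as
% that yield rises (notation mine, offered only as an illustration):
\[
  D_m = k(i)\, P Y, \qquad \frac{dk}{di} < 0 ,
\]
% so a rise in interest rates induces substitution out of money balances,
% and a fall in rates induces substitution back into them.
```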

Moving to pp. 645-46, we find the following under the heading “The Demand for Money after Keynes.”

In giving explicit consideration to the yields on assets that compete with money, Keynes became one of the founders of the portfolio balance approach to monetary analysis. However, it is Hicks rather than Keynes who ought to be regarded as the founder of the view that the demand for money is simply an aspect of the problem of choosing an optimum portfolio of assets. In a remarkable paper published a year before the appearance of the General Theory, modestly entitled “A Suggestion for Simplifying the Theory of Money,” Hicks argued that money held at least partly as a store of value must be considered a type of capital asset. Hence the demand for money equation must include total wealth and expected rates of return on non-monetary assets as explanatory variables. Because individuals can choose to hold their entire wealth portfolios in the form of cash, the wealth variable represents the budget constraint on money holdings. The yield variables, on the other hand, represent both the opportunity costs of holding money and the substitution effects of changes in relative rates of return. Individuals optimize their portfolio balances by comparing these yields with the imputed yield in terms of convenience and security of holding money. By these means, Hicks in effect treated the demand for money as a problem of balance sheet equilibrium analyzed along the same lines as those employed in ordinary demand theory.

It was Milton Friedman who carried this Hicksian analysis of money as a capital asset to its logical conclusion. In a 1956 essay, he set out a precise and complete specification of the relevant constraints and opportunity cost variables entering a household’s money demand function. His independent variables included wealth or permanent income – the present value of expected future receipts from all sources, whether personal earnings or the income from real property and financial assets – the ratio of human to non-human wealth, expected rates of return on stocks, bonds and real assets, the nominal interest rate, the actual price level, and, finally, the expected percentage change in the price level. Like Hicks, Friedman specified wealth as the appropriate budget constraint but his concept of wealth was much broader than that adopted by Hicks. Whereas Keynes had viewed bonds as the only asset competing with cash, Friedman regarded all types of wealth as potential substitutes for cash holdings in an individual’s balance sheet; thus, instead of a single interest variable in the Keynesian liquidity preference equation, we get a whole list of relative yield variables in Friedman. An additional novel feature, entirely original with Friedman, is the inclusion of the expected rate of change in P as a measure of the anticipated rate of depreciation in the purchasing power of cash balances.

This formulation of the money demand function was offered in a paper entitled “The Quantity Theory of Money: A Restatement.” Friedman claimed not only that the quantity theory of money had always been a theory about the demand for money but also that his reformulation corresponded closely to what some of the great Chicago monetary economists, such as H.C. Simons and L. W. Mints, had always meant by the quantity theory. It is clear, however, from our earlier discussion that the quantity theory of money, while embodying an implicit conception of the demand for money, had always stood first and foremost for a theory of the determination of prices and nominal income; it contained much more than a particular theory of the demand for money.
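For readers who want Blaug’s verbal list of Friedman’s arguments in equation form, the conventional textbook rendering of Friedman’s 1956 demand function looks roughly like the following (my transcription of the standard form, offered only as an illustration, not a quotation from Friedman or Blaug):

```latex
% A conventional rendering of Friedman's (1956) money demand function,
% following Blaug's verbal description above; the notation is the standard
% textbook one and is offered only as an illustration.
\[
  \frac{M^{d}}{P} \;=\; f\!\left( Y_{p},\; w;\; r_{b},\; r_{e},\; \frac{1}{P}\frac{dP}{dt};\; u \right)
\]
% Y_p           : permanent income (present value of expected future receipts)
% w             : ratio of human to non-human wealth
% r_b, r_e      : expected rates of return on bonds and equities
% (1/P)(dP/dt)  : expected rate of change of the price level (the return on real
%                 assets and the rate of depreciation of cash balances)
% u             : tastes and other factors affecting the usefulness of money
```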

Finally, Blaug remarks in his “notes for further reading” at the end of chapter 15,

In an influential essay, “The Quantity Theory of Money – A Restatement,” . . . M. Friedman claimed that his restatement was nothing more than the University of Chicago “oral” tradition. That claim was effectively destroyed by D. Patinkin, “The Chicago Tradition, the Quantity Theory, and Friedman,” JMCB, 1969.

Well, just a couple of quick comments on Blaug. I don’t entirely agree with everything he says about Cambridge monetary theory or about the relative importance of Hicks and Keynes in advancing the theory of the demand for money. The Cambridge economists may have been a bit more aware that the demand for money is a function of the rate of interest than Blaug admits, and I think that Keynes, in chapter 17 of the General Theory, definitely formulated a theory of the demand for money in a portfolio-balance context, so I think that Friedman was indebted to both Hicks and Keynes for his theory of the demand for money.

As for Scott Sumner’s quotation from Hicks’s “Mr. Keynes and the Classics,” I think the point of that paper was not so much the theory of the demand for money, which had already been addressed in the 1935 paper from which I quoted, as to sketch out a way of generalizing the argument of the General Theory to encompass both the liquidity-trap and the non-liquidity-trap cases within a single graph. From the standpoint of the IS-LM diagram, the distinctive Keynesian contribution was the case of absolute liquidity preference, but that doesn’t mean that Hicks thought nothing had been added to the theory of the demand for money since Lavington. If that were the case, Hicks would have been denying that his 1935 paper had made any contribution, and I don’t think that’s what he meant to suggest.

To sum up: 1) there was no Chicago oral tradition of the demand for money; 2) Friedman’s restatement of the quantity theory owed more to Keynes (and Hicks) than he admitted; 3) Friedman adapted the Cambridge/Keynes/Hicks theory of the demand for money in novel ways that allowed him to develop an analysis of price-level changes that was more straightforward than was possible in the IS-LM model, thereby de-emphasizing the link between money and interest rates, which had been such a prominent feature of Keynesian models. That of course is a point that Scott Sumner likes to stress. In an upcoming post, I will comment on the fact that it was not just Keynesian models that stressed the link between money and interest rates. Pre-Keynesian monetary models also stressed the connection between easy money and low interest rates, and between dear money and high interest rates. Friedman’s argument was thus an innovation not only relative to Keynesian models but also relative to orthodox monetary models. What accounts for this innovation?

About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
