Posts Tagged 'Paul Samuelson'

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article, the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typical of the charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.
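[For readers who want the equation Jason is alluding to, the standard consumption Euler equation can be written, in textbook notation of my choosing (not Krugman’s or Jason’s), as

u'(c_t) = \beta (1 + r_t) E_t[ u'(c_{t+1}) ],

where c_t is consumption, \beta the subjective discount factor, r_t the real interest rate, and E_t the expectation conditional on information available at t. At an optimum the agent is indifferent at the margin between consuming one more unit today and saving it to consume tomorrow, which is the condition Jason describes.]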

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
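To see how little was needed to patch the model, write the augmented version schematically (my notation, a sketch rather than a full model). Once an expected future price level is added, the Fisher relation

r_t = i_t - E_t \pi_{t+1}

distinguishes the real rate r_t from the nominal rate i_t, and the two curves become

IS: y = C(y, i - E\pi) + I(i - E\pi) + G,    LM: M/P = L(y, i),

so that expected inflation E\pi shifts the IS curve in (i, y) space, and it becomes possible to talk coherently about real and nominal interest rates, inflation, and an equilibrium rate of spending.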

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense that Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, a.k.a. circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
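The point can be stated compactly. Suppose the equilibrium condition is F(x, \alpha) = 0, where x is the endogenous variable and \alpha the parameter. Differentiating the equilibrium condition gives the comparative-statics derivative

dx*/d\alpha = - (\partial F/\partial \alpha) / (\partial F/\partial x),

which is well defined only if \partial F/\partial x \neq 0, and Samuelson’s correspondence principle signs it by assuming local stability of the adjustment process (for example, \partial F/\partial x < 0 when x adjusts according to \dot{x} = F(x, \alpha)). Every step presupposes that the system is at, and returns to, an equilibrium in the neighborhood of the original one.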

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he or she can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
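In a two-period sketch (my notation), the contrast is this. The static consumer faces

p \cdot x \le p \cdot \omega

with the price vector p known. The intertemporal consumer is supposed to face

c_1 + c_2/(1 + r) \le y_1 + y_2^e/(1 + r),

but y_2^e (and, in a fuller treatment, future prices and interest rates as well) is only an expectation, so the right-hand side (the “constraint”) is itself a conjecture that shifts whenever expectations are revised, and the Euler equations derived from it are conjectural in the same way.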

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.
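A crude way to see how the repercussions spread is the textbook multiplier logic run in reverse. The following toy sketch is mine, not a model anyone in this exchange proposed, and every number in it is an assumption made purely for illustration:

# Toy sketch: a group of agents had planned spending on over-optimistic
# income expectations; when the optimism is corrected they cut spending,
# and each round of cuts reduces someone else's income, prompting further
# cuts. All numbers are illustrative assumptions.

MPC = 0.8                      # marginal propensity to consume
optimists = 25                 # agents who expected 150 but will earn 100
error_per_agent = 50.0         # size of each optimist's expectational error

initial_cut = MPC * error_per_agent * optimists   # first-round spending cut
total_income_loss = 0.0
cut = initial_cut

for round_number in range(1, 11):
    total_income_loss += cut   # this round's lost spending is lost income
    cut *= MPC                 # the income losers cut their own spending
    print(f"after round {round_number}: cumulative income loss = {total_income_loss:,.0f}")

# Geometric-series limit of the cumulative loss: initial_cut / (1 - MPC)
print(f"limit: {initial_cut / (1 - MPC):,.0f}")

With these assumed numbers a first-round cut of 1,000 cumulates toward a total income loss of 5,000. The only point of the exercise is that an expectational error made by one group does not stay confined to that group.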

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Two Cheers (Well, Maybe Only One and a Half) for Falsificationism

Noah Smith recently wrote a defense (sort of) of falsificationism in response to Sean Carroll’s suggestion that the time has come for scientists to throw falsificationism overboard as a guide for scientific practice. While Noah isn’t ready to throw out falsification as a scientific ideal, he does acknowledge that not everything that scientists do is really falsifiable.

But, as Carroll himself seems to understand in arguing against falsificationism, even though a particular concept or entity may itself be unobservable (and thus unfalsifiable), the larger theory of which it is a part may still have implications that are falsifiable. This is the case in economics. A utility function or a preference ordering is not observable, but by imposing certain conditions on that utility function, one can derive some (weakly) testable implications. This is exactly what Karl Popper, who introduced and popularized the idea of falsificationism, meant when he said that the aim of science is to explain the known by the unknown. To posit an unobservable utility function or an unobservable string is not necessarily to engage in purely metaphysical speculation, but to do exactly what scientists have always done, to propose explanations that would somehow account for some problematic phenomenon that they had already observed. The explanations always (or at least frequently) involve positing something unobservable (e.g., gravitation) whose existence can only be indirectly perceived by comparing the implications (predictions) inferred from the existence of the unobservable entity with what we can actually observe. Here’s how Popper once put it:

Science is valued for its liberalizing influence as one of the greatest of the forces that make for human freedom.

According to the view of science which I am trying to defend here, this is due to the fact that scientists have dared (since Thales, Democritus, Plato’s Timaeus, and Aristarchus) to create myths, or conjectures, or theories, which are in striking contrast to the everyday world of common experience, yet able to explain some aspects of this world of common experience. Galileo pays homage to Aristarchus and Copernicus precisely because they dared to go beyond this known world of our senses: “I cannot,” he writes, “express strongly enough my unbounded admiration for the greatness of mind of these men who conceived [the heliocentric system] and held it to be true […], in violent opposition to the evidence of their own senses.” This is Galileo’s testimony to the liberalizing force of science. Such theories would be important even if they were no more than exercises for our imagination. But they are more than this, as can be seen from the fact that we submit them to severe tests by trying to deduce from them some of the regularities of the known world of common experience by trying to explain these regularities. And these attempts to explain the known by the unknown (as I have described them elsewhere) have immeasurably extended the realm of the known. They have added to the facts of our everyday world the invisible air, the antipodes, the circulation of the blood, the worlds of the telescope and the microscope, of electricity, and of tracer atoms showing us in detail the movements of matter within living bodies.  All these things are far from being mere instruments: they are witness to the intellectual conquest of our world by our minds.

So I think that Sean Carroll, rather than arguing against falsificationism, is really thinking of falsificationism in the broader terms that Popper himself laid out a long time ago. And I think that Noah’s shrug-ability suggestion is also, with appropriate adjustments for changes in expository style, entirely in the spirit of Popper’s view of falsificationism. But to make that point clear, one needs to understand what motivated Popper to propose falsifiability as a criterion for distinguishing between science and non-science. Popper’s aim was to overturn logical positivism, a philosophical doctrine associated with the group of eminent philosophers who made up what was known as the Vienna Circle in the 1920s and 1930s. Building on the British empiricist tradition in science and philosophy, the logical positivists argued that our knowledge of the external world is based on sensory experience, and that apart from the tautological truths of pure logic (of which mathematics is a part) there is no other knowledge. Furthermore, no meaning could be attached to any statement whose validity could not be checked either by examining its logical validity as an inference from explicit premises or by verifying it through sensory experience. According to this criterion, much of human discourse about ethics, morals, aesthetics, religion and much of philosophy was simply meaningless, a.k.a. metaphysics.

Popper, who grew up in Vienna and was on the periphery of the Vienna Circle, rejected the idea that logical tautologies and statements potentially verifiable by observation are the only conveyors of meaning between human beings. Metaphysical statements can be meaningful even if they can’t be confirmed by observation. Metaphysical statements are meaningful if they are coherent and are not nonsensical. If there is a problem with metaphysical statements, the problem is not necessarily because they have no meaning. In making this argument, Popper suggested an alternative criterion of demarcation to that between meaning and non-meaning: a criterion of demarcation between science and metaphysics. Science is indeed different from metaphysics, but the difference is not that science is meaningful and metaphysics is not. The difference is that scientific statements can be refuted (or falsified) by observations while metaphysical statements cannot be refuted by observations. As a matter of logic, the only way to refute a proposition by an observation is for the proposition to assert that the observation was not possible. Unless you can say what observation would refute what you are saying, you are engaging in metaphysical, not scientific, talk. This gave rise to Popper’s then very surprising result. If you positively assert the existence of something – an assertion potentially verifiable by observation, and hence for logical positivists the quintessential scientific statement — you are making a metaphysical, not a scientific, statement. The statement that something (e.g., God, a string, or a utility function) exists cannot be refuted by any observation. However the unobservable phenomenon may be part of a theory with implications that could be refuted by some observation. But in that case it would be the theory not the posited object that was refuted.

In fact, Popper thought that metaphysical statements not only could be meaningful, but could even be extremely useful, coining the term “metaphysical research programs,” because a metaphysical, unfalsifiable idea or theory could be the impetus for further research, possibly becoming scientifically fruitful in the way that evolutionary biology eventually sprang from the possibly unfalsifiable idea of survival of the fittest. That sounds to me pretty much like Noah’s idea of shrug-ability.

Popper was largely successful in overthrowing logical positivism, though whether it was entirely his doing (as he liked to claim) and whether it was fully overthrown are not so clear. One reason to think that it was not all his doing is that there is still a lot of confusion about what the falsification criterion actually means. Reading Noah Smith and Sean Carroll, I almost get the impression that they think the falsification criterion distinguishes not just between science and non-science but between meaning and non-meaning. Otherwise, why would anyone think that there is any problem with introducing an unfalsifiable concept into scientific discussion? When Popper argued that science should aim at proposing and testing falsifiable theories, he meant that one should not design a theory so that it can’t be tested, or adopt stratagems — ad hoc hypotheses — that serve only to account for otherwise falsifying observations. But if someone comes up with a creative new idea, and the idea can’t be tested, at least given the current observational technology, that is not a reason to reject the theory, especially if the new theory accounts for otherwise unexplained observations.

Another manifestation of Popper’s imperfect success in overthrowing logical positivism is that Paul Samuelson in his classic Foundations of Economic Analysis chose to call the falsifiable implications of economic theory “meaningful theorems.” By naming those implications “meaningful theorems,” Samuelson clearly was operating under the positivist presumption that only a proposition that could (at least in principle) be falsified by observation was meaningful. However, that formulation reflected an untenable compromise between Popper’s criterion for distinguishing science from metaphysics and the logical positivist criterion for distinguishing meaningful from meaningless statements. Instead of referring to meaningful theorems, Samuelson should have called them, more modestly, testable or scientific theorems.

So, at least as I read Popper, Noah Smith and Sean Carroll are only discovering what Popper already understood a long time ago.

At this point, some readers may be wondering why, having said all that, I seem to have trouble giving falsificationism (and Popper) even two cheers. So I am afraid that I will have to close this post on a somewhat critical note. The problem with Popper is that his rhetoric suggests that scientific methodology is a lot more important than it really is. Apart from some egregious examples like Marxism and Freudianism, which were deliberately formulated to exclude the possibility of refutation, there really aren’t that many theories entertained by scientists that can be ruled out of order on strictly methodological grounds. Popper can occasionally provide some methodological reminders to scientists to avoid relying on ad hoc theorizing — at least when a non-ad-hoc alternative is handy — but beyond that I don’t think methodology counts for very much in the day-to-day work of scientists. Many theories are difficult to falsify, but the difficulty is not necessarily the result of deliberate choices by the theorists; it is the result of the nature of the problem and the nature of the evidence that could potentially refute the theory. The evidence is what it is. It is nice to come up with a theory that predicts a novel fact that can be observed, but nature is not always so accommodating to our theories.

There is a kind of rationalistic (I am using “rationalistic” in the pejorative sense of Michael Oakeshott) faith that following the methodological rules that Popper worked so hard to formulate will guarantee scientific progress. Those rules tend to encourage an unrealistic focus on making theories testable (especially in economics) when by their nature the phenomena are too complex for theories to be formulated in ways that are susceptible to decisive testing. And although Popper recognized that empirical testing of a theory has very limited usefulness unless the theory is being compared to some alternative theory, too often discussions of theory testing are in the context of testing a single theory in isolation. Kuhn and others have pointed out that science is not routinely carried out in the way that Popper suggested it should be. To some extent Popper acknowledged the truth of that observation; although he liked to cite examples from the history of science to illustrate his thesis, he argued that he was offering a normative, not a positive, theory of scientific discovery. But why should we assume that Popper had more insight into the process of discovery for particular sciences than the practitioners of those sciences actually doing the research? That is the nub of the criticism of Popper that I take away from Oakeshott’s work. Life and any form of endeavor involve the transmission of ways of doing things, traditions, that cannot be reduced to a set of rules, but require education, training, practice and experience. That’s what Kuhn called normal science. Normal science can go off the tracks too, but it is naïve to think that a list of methodological rules is what will keep science moving constantly in the right direction. Why should Popper’s rules necessarily trump the lessons that practitioners have absorbed from the scientific traditions in which they have been trained? I don’t believe that there is any surefire recipe for scientific progress.

Nevertheless, when I look at the way economics is now being practiced and taught, I can’t help but think that a dose of Popperianism might not be the worst thing that could be administered to modern economics. But that’s a discussion for another day.

Richard Lipsey and the Phillips Curve

Richard Lipsey has had an extraordinarily long and productive career as both an economic theorist and an empirical economist, making numerous important contributions in almost all branches of economics. (See, for example, the citation about Lipsey as a fellow of the Canadian Economics Association.) In addition, his many textbooks have been enormously influential in advocating that economists should strive to make their discipline empirically relevant by actually subjecting their theories to meaningful empirical tests in which refutation is a realistic possibility not just a sign that the researcher was insufficiently creative in theorizing or in performing the data analysis.

One of Lipsey’s most important early contributions was his 1960 paper on the Phillips Curve, “The Relationship between Unemployment and the Rate of Change of Money Wages in the United Kingdom 1862-1957: A Further Analysis,” in which he extended A. W. Phillips’s original results, and he has continued to write about the Phillips Curve ever since. Lipsey, in line with his empiricist philosophical position, has consistently argued that a well-supported empirical relationship should not be dismissed simply because of a purely theoretical argument about how expectations are formed. In other words, the argument that adjustments in inflation expectations would cause the short-run Phillips curve relation captured by empirical estimates of the relationship between inflation and unemployment to shift may well be valid in some general qualitative sense (as was actually recognized early on by Samuelson and Solow in their famous paper suggesting that the Phillips Curve could be interpreted as a menu of alternative combinations of inflation and unemployment from which policy-makers could choose). But that does not mean that it had to be accepted as an indisputable axiom of economics that the long-run relationship between unemployment and inflation is necessarily vertical, as Friedman and Phelps and Lucas convinced most of the economics profession in the late 1960s and early 1970s.
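For reference, the expectations-augmented relation at issue can be written in its simplest textbook form (my notation) as

\pi_t = \pi_t^e - \alpha (u_t - u^*),  with \alpha > 0,

where \pi_t is inflation, \pi_t^e expected inflation, u_t unemployment, and u^* the natural rate. If expected inflation always catches up fully with actual inflation, the long-run locus collapses to the vertical line u_t = u^*; Lipsey’s point is that accepting the expectations argument in some general qualitative sense does not compel one to accept that collapse as an axiom.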

A few months ago, Lipsey was kind enough to send me a draft of the paper that he presented at the annual meeting of the History of Economics Society; the paper is called “The Phillips Curve and the Tyranny of an Assumed Unique Macro Equilibrium.” Here is the abstract of the paper.

To make the argument that the behaviour of modern industrial economies since the 1990s is inconsistent with theories in which there is a unique ergodic macro equilibrium, the paper starts by reviewing both the early Keynesian theory in which there was no unique level of income to which the economy was inevitably drawn and the debate about the amount of demand pressure at which it was best to maintain the economy: high aggregate demand and some inflationary pressure or lower aggregate demand and a stable price level. It then covers the rise of the simple Phillips curve and its expectations-augmented version, which introduced into current macro theory a natural rate of unemployment (and its associated equilibrium level of national income). This rate was also a NAIRU, the only rate consistent with stable inflation. It is then argued that the current behaviour of many modern economies in which there is a credible policy to maintain a low and steady inflation rate is inconsistent with the existence of either a unique natural rate or a NAIRU but is consistent with evolutionary theory in which there is perpetual change driven by endogenous technological advance. Instead of a NAIRU, evolutionary economies have a non-inflationary band of unemployment (a NAIBU) indicating a range of unemployment and income over which the inflation rate is stable. The paper concludes with the observation that the great pre-Phillips curve debates of the 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflationary pressure, were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, long-run Phillips curve located at the unique equilibrium level of unemployment.
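One way to put the NAIBU idea in the same shorthand (my gloss, not Lipsey’s own formalism): instead of a unique u^*, there is a band [u_L, u_H] such that

\Delta\pi_t \approx 0 for u_t in [u_L, u_H],  \Delta\pi_t > 0 for u_t < u_L,  \Delta\pi_t < 0 for u_t > u_H,

so inflation is roughly stable anywhere inside the band and tends to accelerate or decelerate only outside it.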

Back in January, I wrote a post about the Lucas Critique in which I pointed out that his “proof” that the Phillips Curve is vertical in his celebrated paper on econometric policy evaluation was no proof at all, but simply a very special example in which the only disequilibrium permitted in the model – a misperception of the future price level – would lead an econometrician to estimate a negatively sloped relation between inflation and unemployment even though under correct expectations of inflation the relationship would be vertical. Allowing for a wider range of behavioral responses, I suggested, might well change the relation between inflation and output even under correctly expected inflation. In his new paper, Lipsey correctly points out that Friedman and Phelps and Lucas, and subsequent New Classical and New Keynesian theoreticians, who have embraced the vertical Phillips Curve doctrine as an article of faith, are also assuming, based on essentially no evidence, that there is a unique macro equilibrium. But there is very strong evidence to suggest that, in fact, any deviation from an initial equilibrium (or equilibrium time path) is likely to cause changes that, in and of themselves, cause a change in conditions that will propel the system toward a new and different equilibrium time path, rather than return to the time path the system had been moving along before it was disturbed. (See my post of almost a year ago about a paper, “Does history matter?: Empirical analysis of evolutionary versus stationary equilibrium views of the economy,” by Carlaw and Lipsey.)

Lipsey concludes his paper with a quotation from his article “The Phillips Curve” published in the volume Famous Figures and Diagrams in Economics edited by Mark Blaug and Peter Lloyd.

Perhaps [then] Keynesians were too hasty in following the New Classical economists in accepting the view that follows from static [and all EWD] models that stable rates of wage and price inflation are poised on the razor’s edge of a unique NAIRU and its accompanying Y*. The alternative does not require a long term Phillips curve trade off, nor does it deny the possibility of accelerating inflations of the kind that have bedevilled many third world countries. It merely states that industrialised economies with low expected inflation rates may be less precisely responsive than current theory assumes because they are subject to many lags and inertias, and are operating in an ever-changing and uncertain world of endogenous technological change, which has no unique long term static equilibrium. If so, the economy may not be similar to the smoothly functioning mechanical world of Newtonian mechanics but rather to the imperfectly evolving world of evolutionary biology. The Phillips relation then changes from being a precise curve to being a band within which various combinations of inflation and unemployment are possible but outside of which inflation tends to accelerate or decelerate. Perhaps then the great [pre-Phillips curve] debates of the 1940s and early 1950s that assumed that there was a range within which the economy could be run with varying pressures of demand, and varying amounts of unemployment and inflation[ary pressure], were not as silly as they were made to seem when both Keynesian and New Classical economists accepted the assumption of a perfectly inelastic, one-dimensional, long run Phillips curve located at a unique equilibrium Y* and NAIRU.

Armen Alchian, The Economists’ Economist

The first time that I ever heard of Armen Alchian was when I took introductory economics at UCLA as a freshman, and his book (co-authored with his colleague William R. Allen who was probably responsible for the macro and international chapters) University Economics (the greatest economics textbook ever written) was the required text. I had only just started to get interested in economics, and was still more interested in political philosophy than in economics, but I found myself captivated by what I was reading in Alchian’s textbook, even though I didn’t find the professor teaching the course very exciting. And after 10 weeks (the University of California had switched to a quarter system) of introductory micro, I changed my major to economics. So there is no doubt that I became an economist because the textbook that I was taught from was written by Alchian.

In my four years as an undergraduate at UCLA, I took three classes from Axel Leijonhufvud, two from Ben Klein, two from Bill Allen, and one each from Robert Rooney, Nicos Devletoglou, James Buchanan, Jack Hirshleifer, George Murphy, and Jean Balbach. But Alchian, who in those days was not teaching undergrads, was a looming presence. It became obvious that Alchian was the central figure in the department, the leader and the role model that everyone else looked up to. I would see him occasionally on campus, but was too shy or too much in awe of him to introduce myself to him. One incident that I particularly recall is when, in my junior year, F. A. Hayek visited UCLA in the fall and winter quarters (in the department of philosophy!) teaching an undergraduate course in the philosophy of the social sciences and a graduate seminar on the first draft of Law, Legislation and Liberty. I took Hayek’s course on the philosophy of the social sciences, and audited his graduate seminar, and I occasionally used to visit his office to ask him some questions. I once asked his advice about which graduate programs he would suggest that I apply to. He mentioned two schools, Chicago, of course, and Princeton where his friends Fritz Machlup and Jacob Viner were still teaching, before asking, “but why would you think of going to graduate school anywhere else than UCLA? You will get the best training in economics in the world from Alchian, Hirshleifer and Leijonhufvud.” And so it was, I applied to, and was accepted at, Chicago, but stayed at UCLA.

As a first year graduate student, I took the (three-quarter) microeconomics sequence from Jack Hirshleifer (who in the scholarly hierarchy at UCLA ranked only slightly below Alchian) and the two-quarter macroeconomics sequence from Leijonhufvud. Hirshleifer taught a great course. He was totally prepared, very organized, and his lectures were always clear and easy to follow. To do well, you had to sit back, listen, review the lecture notes, read through the reading assignments, and do the homework problems. For me at least, with the benefit of four years of UCLA undergraduate training, it was a breeze.

Great as Hirshleifer was as a teacher, I still felt that I was missing out by not having been taught by Alchian. Perhaps Alchian felt that the students who took the microeconomics sequence from Hirshleifer should get some training from him as well, so the next year he taught a graduate seminar in topics in price theory, to give us an opportunity to learn from him how to do economics. You could also see how Alchian operated if you went to a workshop or lecture by a visiting scholar, when Alchian would start to ask questions. He would smile, put his hand on his forehead, and say something like, “I just don’t understand that,” and force whoever it was to try to explain the logic by which he had arrived at some conclusion. And Alchian would just keep smiling, explain what the problem was with the answer he got, and ask more questions. Alchian didn’t shout or rant or rave, but if Alchian was questioning you, you were not in a very comfortable position.

So I was more than a bit apprehensive going into Alchian’s seminar. There were all kinds of stories told by graduate students about how tough Alchian could be on his students if they weren’t able to respond adequately when subjected to his questioning in the Socratic style. But the seminar could not have been more enjoyable. There was give and take, but I don’t remember seeing any blood spilled. Perhaps by the time I got to his seminar, Alchian, then about 57, had mellowed a bit, or, maybe, because we had all gone through the graduate microeconomics sequence, he felt that we didn’t require such an intense learning environment. At any rate, the seminar, which met twice a week for an hour and a quarter for 10 weeks, usually involved Alchian picking a story from the newspaper and asking us how to analyze the economics underlying the story. Armed with nothing but a chalkboard and a piece of chalk, Alchian would lead us relatively painlessly from confusion to clarity, from obscurity to enlightenment. The key concepts with which to approach any problem were to understand the choices available to those involved, to define the relevant costs, and to understand the constraints under which choices are made, the constraints being determined largely by the delimitation of the property rights under which the resources can be used or exchanged, or, to be more precise, under which the property rights to use those resources can be exchanged.

Ultimately, the lesson that I learned from Alchian is that, at its best, economic theory is a tool for solving actual real problems, and the nature of the problem ought to dictate the way in which the theory (verbal, numerical, graphical, higher mathematical) is deployed, not the other way around. The goal is not to reach any particular conclusion, but to apply the tools in the best and most authentic way that they can be applied. Alchian did not wear his politics on his sleeve, though it wasn’t too hard to figure out that he was politically conservative with libertarian tendencies. But you never got the feeling that his politics dictated his economic analysis. In many respects, Alchian’s closest disciple was Earl Thompson, who studied under Alchian as an undergraduate and then, after playing minor-league baseball for a couple of years, went to Harvard for graduate school, eventually coming back to UCLA as an assistant professor, where he remained for his entire career. Earl, discarding his youthful libertarianism early on, developed many completely original, often eccentric, theories about the optimality of all kinds of government interventions – even protectionism – opposed by most economists, but Alchian took them all in stride. Mere policy disagreements never affected their close personal bond, and Alchian wrote the foreword to Earl’s book with Charles Hickson, Ideology and the Evolution of Vital Economic Institutions. If Alchian was friendly with and an admirer of Milton Friedman, he was just as friendly with, and just as admiring of, Paul Samuelson and Kenneth Arrow, with whom he collaborated on several projects in the 1950s when they consulted for the Rand Corporation. Alchian cared less about the policy conclusion than he did about the quality of the underlying economic analysis.

As I have pointed out on several prior occasions, it is simply scandalous that Alchian was not awarded the Nobel Prize. His published output was not as voluminous as that of some other luminaries, but there is a remarkably high proportion of classics among his publications. So many important ideas came from him, especially thinking about economic competition as an evolutionary process, the distinction between how costs vary with the total volume of output and how they vary with the rate of output, the effect of incomplete information on economic action, the economics of property rights, and the effects of inflation on economic activity. (Two volumes of his Collected Works, a must for anyone really serious about economics, contain a number of previously unpublished or hard-to-find papers, and are available here.) Perhaps in the future I will discuss some of my favorites among his articles.

Although Alchian did not win the Nobel Prize, in 1990 the Nobel Prize was awarded to Harry Markowitz, Merton Miller, and William F. Sharpe for their work on financial economics. Sharpe went to UCLA, writing his Ph.D. dissertation on securities prices under Alchian, and worked at the Rand Corporation in the 1950s and 1960s with Markowitz. Here’s what Sharpe wrote about Alchian:

Armen Alchian, a professor of economics, was my role model at UCLA. He taught his students to question everything; to always begin an analysis with first principles; to concentrate on essential elements and abstract from secondary ones; and to play devil’s advocate with one’s own ideas. In his classes we were able to watch a first-rate mind work on a host of fascinating problems. I have attempted to emulate his approach to research ever since.

And if you go to the Amazon page for University Economics and look at the comments you will see a comment from none other than Harry Markowitz:

I am about to order this book. I have just read its quite favorable reviews, and I am not a bit surprised at their being impressed by Armen Alchian’s writings. I was a colleague of Armen’s, at the Rand Corporation “think tank,” during the 1950s, and hold no economist in higher regard. When I sat down at my keyboard just now it was to find out what happened to Armen’s works. One Google response was someone saying that Armen should get a Nobel Prize. I concur. My own Nobel Prize in Economics was awarded in 1990 along with the prize for Wm. Sharpe. I see in Wikipedia that Armen “influenced” Bill, and that Armen is still alive and is 96 years old. I’ll see if I can contact him, but first I’ll buy this book.

I will always remember Alchian’s air of amused, philosophical detachment, occasionally bemused (though, perhaps, only apparently so, as he tried to guide his students and colleagues with questions to figure out a point that he already grasped), always curious, always eager for the intellectual challenge of discovery and problem solving. Has there ever been a greater teacher of economics than Alchian? Perhaps, but I don’t know who. I close with one more quotation, this one from Axel Leijonhufvud, written about Alchian 25 years ago. It still rings true.

[Alchian’s] unique brand of price theory is what gave UCLA Economics its own intellectual profile and achieved for us international recognition as an independent school of some importance—as a group of scholars who did not always take their leads from MIT, Chicago or wherever. When I came here (in 1964) the Department had Armen’s intellectual stamp on it (and he remained the obvious leader until just a couple of years ago ….). Even people outside Armen’s fields, like myself, learned to do Armen’s brand of economic analysis and a strong esprit de corps among both faculty and graduate students sprang from the consciousness that this ‘New Institutional Economics’ was one of the waves of the future and that we, at UCLA, were surfing it way ahead of the rest. But Armen’s true importance to the UCLA school did not stem just from the new ideas he taught or the outwardly recognized “brandname” that he created for us. For many of his young colleagues he embodied qualities of mind and character that seemed the more important to seek to emulate the more closely you got to know him.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and on the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book, Studies in the History of Monetary Theory: Controversies and Clarifications, has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
