Archive for January, 2014

Two Cheers (Well, Maybe Only One and a Half) for Falsificationism

Noah Smith recently wrote a defense (sort of) of falsificationism in response to Sean Carroll’s suggestion that the time has come for scientists to throw falsificationism overboard as a guide for scientific practice. While Noah isn’t ready to throw out falsification as a scientific ideal, he does acknowledge that not everything that scientists do is really falsifiable.

But, as Carroll himself seems to understand in arguing against falsificationism, even though a particular concept or entity may itself be unobservable (and thus unfalsifiable), the larger theory of which it is a part may still have implications that are falsifiable. This is the case in economics. A utility function or a preference ordering is not observable, but by imposing certain conditions on that utility function, one can derive some (weakly) testable implications. This is exactly what Karl Popper, who introduced and popularized the idea of falsificationism, meant when he said that the aim of science is to explain the known by the unknown. To posit an unobservable utility function or an unobservable string is not necessarily to engage in purely metaphysical speculation, but to do exactly what scientists have always done, to propose explanations that would somehow account for some problematic phenomenon that they had already observed. The explanations always (or at least frequently) involve positing something unobservable (e.g., gravitation) whose existence can only be indirectly perceived by comparing the implications (predictions) inferred from the existence of the unobservable entity with what we can actually observe. Here’s how Popper once put it:

Science is valued for its liberalizing influence as one of the greatest of the forces that make for human freedom.

According to the view of science which I am trying to defend here, this is due to the fact that scientists have dared (since Thales, Democritus, Plato’s Timaeus, and Aristarchus) to create myths, or conjectures, or theories, which are in striking contrast to the everyday world of common experience, yet able to explain some aspects of this world of common experience. Galileo pays homage to Aristarchus and Copernicus precisely because they dared to go beyond this known world of our senses: “I cannot,” he writes, “express strongly enough my unbounded admiration for the greatness of mind of these men who conceived [the heliocentric system] and held it to be true […], in violent opposition to the evidence of their own senses.” This is Galileo’s testimony to the liberalizing force of science. Such theories would be important even if they were no more than exercises for our imagination. But they are more than this, as can be seen from the fact that we submit them to severe tests by trying to deduce from them some of the regularities of the known world of common experience by trying to explain these regularities. And these attempts to explain the known by the unknown (as I have described them elsewhere) have immeasurably extended the realm of the known. They have added to the facts of our everyday world the invisible air, the antipodes, the circulation of the blood, the worlds of the telescope and the microscope, of electricity, and of tracer atoms showing us in detail the movements of matter within living bodies.  All these things are far from being mere instruments: they are witness to the intellectual conquest of our world by our minds.

So I think that Sean Carroll, rather than arguing against falsificationism, is really thinking of falsificationism in the broader terms that Popper himself laid out a long time ago. And I think that Noah’s shrug-ability suggestion is also, with appropriate adjustments for changes in expository style, entirely in the spirit of Popper’s view of falsificationism. But to make that point clear, one needs to understand what motivated Popper to propose falsifiability as a criterion for distinguishing between science and non-science. Popper’s aim was to overturn logical positivism, a philosophical doctrine associated with the group of eminent philosophers who made up what was known as the Vienna Circle in the 1920s and 1930s. Building on the British empiricist tradition in science and philosophy, the logical positivists argued that our knowledge of the external world is based on sensory experience, and that apart from the tautological truths of pure logic (of which mathematics is a part) there is no other knowledge. Furthermore, no meaning could be attached to any statement whose validity could not be checked either by examining its logical validity as an inference from explicit premises or by verifying it against sensory experience. According to this criterion, much of human discourse about ethics, morals, aesthetics, religion and much of philosophy was simply meaningless, aka metaphysics.

Popper, who grew up in Vienna and was on the periphery of the Vienna Circle, rejected the idea that logical tautologies and statements potentially verifiable by observation are the only conveyors of meaning between human beings. Metaphysical statements can be meaningful even if they can’t be confirmed by observation. Metaphysical statements are meaningful if they are coherent and are not nonsensical. If there is a problem with metaphysical statements, the problem is not necessarily that they have no meaning. In making this argument, Popper suggested an alternative criterion of demarcation to that between meaning and non-meaning: a criterion of demarcation between science and metaphysics. Science is indeed different from metaphysics, but the difference is not that science is meaningful and metaphysics is not. The difference is that scientific statements can be refuted (or falsified) by observations while metaphysical statements cannot be refuted by observations. As a matter of logic, the only way to refute a proposition by an observation is for the proposition to assert that the observation was not possible. Unless you can say what observation would refute what you are saying, you are engaging in metaphysical, not scientific, talk. This gave rise to Popper’s then very surprising result. If you positively assert the existence of something – an assertion potentially verifiable by observation, and hence for logical positivists the quintessential scientific statement — you are making a metaphysical, not a scientific, statement. The statement that something (e.g., God, a string, or a utility function) exists cannot be refuted by any observation. However, the unobservable phenomenon may be part of a theory with implications that could be refuted by some observation. But in that case it would be the theory, not the posited object, that was refuted.

In fact, Popper thought that metaphysical statements not only could be meaningful, but could even be extremely useful, coining the term “metaphysical research programs,” because a metaphysical, unfalsifiable idea or theory could be the impetus for further research, possibly becoming scientifically fruitful in the way that evolutionary biology eventually sprang from the possibly unfalsifiable idea of survival of the fittest. That sounds to me pretty much like Noah’s idea of shrug-ability.

Popper was largely successful in overthrowing logical positivism, though whether it was entirely his doing (as he liked to claim) and whether it was fully overthrown are not so clear. One reason to think that it was not all his doing is that there is still a lot of confusion about what the falsification criterion actually means. Reading Noah Smith and Sean Carroll, I almost get the impression that they think the falsification criterion distinguishes not just between science and non-science but between meaning and non-meaning. Otherwise, why would anyone think that there is any problem with introducing an unfalsifiable concept into scientific discussion? When Popper argued that science should aim at proposing and testing falsifiable theories, he meant that one should not design a theory so that it can’t be tested, or adopt stratagems — ad hoc hypotheses — that serve only to account for otherwise falsifying observations. But if someone comes up with a creative new idea, and the idea can’t be tested, at least given the current observational technology, that is not a reason to reject the theory, especially if the new theory accounts for otherwise unexplained observations.

Another manifestation of Popper’s imperfect success in overthrowing logical positivism is that Paul Samuelson, in his classic The Foundations of Economic Analysis, chose to call the falsifiable implications of economic theory “meaningful theorems.” By naming those implications “meaningful theorems,” Samuelson clearly was operating under the positivist presumption that only a proposition that could (at least in principle) be falsified by observation was meaningful. However, that formulation reflected an untenable compromise between Popper’s criterion for distinguishing science from metaphysics and the logical positivist criterion for distinguishing meaningful from meaningless statements. Instead of referring to meaningful theorems, Samuelson should have called them, more modestly, testable or scientific theorems.

So, at least as I read Popper, Noah Smith and Sean Carroll are only discovering what Popper already understood a long time ago.

At this point, some readers may be wondering why, having said all that, I seem to have trouble giving falsificationism (and Popper) even two cheers. So I am afraid that I will have to close this post on a somewhat critical note. The problem with Popper is that his rhetoric suggests that scientific methodology is a lot more important than it really is. Apart from some egregious examples like Marxism and Freudianism, which were deliberately formulated to exclude the possibility of refutation, there really aren’t that many theories entertained by scientists that can be ruled out of order on strictly methodological grounds. Popper can occasionally provide some methodological reminders to scientists to avoid relying on ad hoc theorizing — at least when a non-ad-hoc alternative is handy — but beyond that I don’t think methodology counts for very much in the day-to-day work of scientists. Many theories are difficult to falsify, but the difficulty is not necessarily the result of deliberate choices by the theorists; it is the result of the nature of the problem and the nature of the evidence that could potentially refute the theory. The evidence is what it is. It is nice to come up with a theory that predicts a novel fact that can be observed, but nature is not always so accommodating to our theories.

There is a kind of rationalistic (I am using “rationalistic” in the pejorative sense of Michael Oakeshott) faith that following the methodological rules that Popper worked so hard to formulate will guarantee scientific progress. Those rules tend to encourage an unrealistic focus on making theories testable (especially in economics) when by their nature the phenomena are too complex for theories to be formulated in ways that are susceptible to decisive testing. And although Popper recognized that empirical testing of a theory has very limited usefulness unless the theory is being compared to some alternative theory, too often discussions of theory testing are conducted in the context of testing a single theory in isolation. Kuhn and others have pointed out that science is not routinely carried out in the way that Popper suggested it should be. Popper acknowledged, to some extent, the truth of that observation, though he liked to cite examples from the history of science to illustrate his thesis; his response was that he was offering a normative, not a positive, theory of scientific discovery. But why should we assume that Popper had more insight into the process of discovery for particular sciences than the practitioners of those sciences actually doing the research? That is the nub of the criticism of Popper that I take away from Oakeshott’s work. Life, and any form of endeavor, involves the transmission of ways of doing things, traditions, that cannot be reduced to a set of rules, but require education, training, practice and experience. That’s what Kuhn called normal science. Normal science can go off the tracks too, but it is naïve to think that a list of methodological rules is what will keep science moving constantly in the right direction. Why should Popper’s rules necessarily trump the lessons that practitioners have absorbed from the scientific traditions in which they have been trained? I don’t believe that there is any surefire recipe for scientific progress.

Nevertheless, when I look at the way economics is now being practiced and taught, I can’t help but think that a dose of Popperianism might not be the worst thing that could be administered to modern economics. But that’s a discussion for another day.

Barro and Krugman Yet Again on Regular Economics vs. Keynesian Economics

A lot of people have been getting all worked up about Paul Krugman’s acerbic takedown of Robert Barro for suggesting in a Wall Street Journal op-ed in 2011 that increased government spending would not stimulate the economy. Barro’s target was a claim by Agriculture Secretary Tom Vilsack that every additional dollar spent on food stamps would actually result in a net increase of $1.84 in total spending. This statement so annoyed Barro that, in a fit of pique, he wrote the following.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

Sorry to brag, but it was actually none other than moi that (via Mark Thoma) brought this little gem to Krugman’s attention. In what is still my third most visited blog post, I expressed incredulity that Barro could ask where the market failure was in a situation in which unemployment suddenly rises to more than double its pre-recession level. I also pointed out that Barro had himself previously acknowledged in a Wall Street Journal op-ed that monetary expansion could alleviate a cyclical increase in unemployment. If monetary policy (printing money on worthless pieces of paper) can miraculously reduce unemployment, why is it out of the question that government spending could also reduce unemployment, especially when it is possible to view government spending as a means of transferring cash from people with an unlimited demand for money to those unwilling to increase their holdings of cash? So, given Barro’s own explicit statement that monetary policy could be stimulative, it seemed odd for him to suggest, without clarification, that it would be a miracle if fiscal policy were effective.

Apparently, Krugman felt compelled to revisit this argument of Barro’s because of the recent controversy about extending unemployment insurance, an issue to which Barro made only passing reference in his 2011 piece. Krugman again ridiculed the idea that just because regular economics says that a policy will have adverse effects under “normal” conditions, the policy must be wrongheaded even in a recession.

But if you follow right-wing talk — by which I mean not Rush Limbaugh but the Wall Street Journal and famous economists like Robert Barro — you see the notion that aid to the unemployed can create jobs dismissed as self-evidently absurd. You think that you can reduce unemployment by paying people not to work? Hahahaha!

Quite aside from the fact that this ridicule is dead wrong, and has had a malign effect on policy, think about what it represents: it amounts to casually trashing one of the most important discoveries economists have ever made, one of my profession’s main claims to be useful to humanity.

Krugman was subsequently accused of bad faith in making this argument because he, like other Keynesians, has acknowledged that unemployment insurance tends to increase the unemployment rate. Therefore, his critics argue, it was hypocritical of Krugman to criticize Barro and the Wall Street Journal for making precisely the same argument that he himself has made. Well, you can perhaps accuse Krugman of being a bit artful in his argument by not acknowledging explicitly that a full policy assessment might in fact legitimately place some limit on UI benefits, but Krugman’s main point is obviously not to assert that “regular economics” is necessarily wrong, just that Barro and the Wall Street Journal are refusing to acknowledge that countercyclical policy of some type could ever, under any circumstances, be effective. Or, to put it another way, Krugman could (and did) easily agree that increasing UI will increase the natural rate of unemployment, but, in a recession, actual unemployment is above the natural rate, and UI can cause the actual rate to fall even as it causes the natural rate to rise.

Now Barro might respond that all he was really saying in his 2011 piece was that the existence of a government spending multiplier significantly greater than zero is not supported by the empirical evidence. But there are two problems with that response. First, it would still not resolve the theoretical inconsistency between Barro’s acknowledgment that monetary policy does have magical properties in a recession and his position that fiscal policy has no such magical powers. Second, and perhaps less obviously, the empirical evidence on which Barro relies does not necessarily distinguish between periods of severe recession or depression and periods when the economy is close to full employment. If so, the empirical estimates of government spending multipliers are subject to the Lucas critique. Parameter estimates may not be stable over time, because those parameters may change depending on the cyclical phase of the economy. The multiplier at the trough of a deep business cycle may be much greater than the multiplier close to full employment. The empirical estimates for the multiplier cited by Barro make no real allowance for different cyclical phases in estimating the multiplier.
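Just to make that point concrete, here is a minimal simulation of the problem (the numbers are purely illustrative assumptions of mine, not estimates from anyone’s data): if the true multiplier is, say, 1.8 in a deep slump and 0.3 near full employment, a single multiplier estimated from pooled data describes neither regime.

```python
# Illustrative sketch of the Lucas-critique point above: a "multiplier"
# estimated from data pooled across cyclical phases blends two different
# structural parameters and describes neither phase.
import numpy as np

rng = np.random.default_rng(0)
n = 200
slump = rng.random(n) < 0.3              # assume ~30% of periods are deep-recession periods
g = rng.normal(0.0, 1.0, n)              # government-spending shocks
true_mult = np.where(slump, 1.8, 0.3)    # assumed multipliers: 1.8 in a slump, 0.3 near full employment
y = true_mult * g + rng.normal(0.0, 0.5, n)   # output response plus noise

pooled = np.polyfit(g, y, 1)[0]                      # single multiplier from all periods
in_slump = np.polyfit(g[slump], y[slump], 1)[0]      # multiplier estimated from slump periods only
near_full = np.polyfit(g[~slump], y[~slump], 1)[0]   # multiplier estimated near full employment

print(f"pooled: {pooled:.2f}, slump: {in_slump:.2f}, near full employment: {near_full:.2f}")
```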

PS Scott Sumner also comes away from reading Barro’s 2011 piece perplexed by what Barro is really saying and why, and does an excellent job of trying in vain to find some coherent conceptual framework within which to understand Barro. The problem is that there is none. That’s why Barro deserves the rough treatment he got from Krugman.

G. L. S. Shackle and the Indeterminacy of Economics

A post by Greg Hill, which inspired a recent post of my own, and Greg’s comment on that post, have reminded me of the importance of the undeservedly neglected English economist, G. L. S. Shackle, many of whose works I read and profited from as a young economist, but which I have hardly looked at for many years. A student of Hayek’s at the London School of Economics in the 1930s, Shackle renounced his early Hayekian views and the doctoral dissertation on capital theory that he had already started writing under Hayek’s supervision, after hearing a lecture by Joan Robinson in 1935 about the new theory of income and employment that Keynes was then in the final stages of writing up, to be published the following year as The General Theory of Employment, Interest and Money. When Shackle, with considerable embarrassment, had to face Hayek to inform him that, no longer believing in what he had written and having been converted to Keynes’s new theory, he could not finish the dissertation he had started, and that he was planning to find a new advisor under whom to write a new dissertation on another topic, Hayek, in a gesture of extraordinary magnanimity, responded that of course Shackle was free to write on whatever topic he desired, and that he would be happy to continue to serve as Shackle’s advisor regardless of the topic Shackle chose.

Although Shackle became a Keynesian, he retained and developed a number of characteristic Hayekian ideas (possibly extending them even further than Hayek would have), especially the notion that economic fluctuations result from the incompatibility between the plans that individuals are trying to implement, an incompatibility stemming from the imperfect and inconsistent expectations about the future that individuals hold, at least some plans therefore being doomed to failure. For Shackle the conception of a general equilibrium in which all individual plans are perfectly reconciled was a purely mental construct that might be useful in specifying the necessary conditions for the harmonization of individually formulated plans, but lacking descriptive or empirical content. Not only is a general equilibrium never in fact achieved, the very conception of such a state is at odds with the nature of reality. For example, the phenomenon of surprise (and, I would add, regret) is, in Shackle’s view, a characteristic feature of economic life, but under the assumption of most economists (though not of Knight, Keynes or Hayek) that all events can at least be forecast in terms of their underlying probability distributions, the phenomenon of surprise cannot be understood. There are some observed events – black swans in Taleb’s terminology – that we can’t incorporate into the standard probability calculus, and that are completely inconsistent with the general equilibrium paradigm.

A rational-expectations model allows for stochastic variables (e.g., will it be rainy or sunny two weeks from tomorrow), but those variables are assumed to be drawn from distributions known by the agents, who can also correctly anticipate the future prices conditional on any realization (at a precisely known future moment in time) of a random variable. Thus, all outcomes correspond to expectations conditional on all future realizations of random variables; there are no surprises and no regrets. For a model to be correct and determinate in this sense, it must have accounted fully for all the non-random factors that could affect outcomes. If any important variable(s) were left out, the predictions of the model could not be correct. In other words, unless the model is properly specified, all causal factors having been identified and accounted for, the model will not generate correct predictions for all future states and all possible realizations of random variables. And unless the agents in the model can predict prices as accurately as the fully determined model can predict them, the model will not unfold through time on an equilibrium time path. This capability of forecasting future prices contingent on the realization of all random variables affecting the actual course of the model through time is called rational expectations, which differs from perfect foresight only in being unable to predict in advance the realizations of the random variables. But all prices conditional on those realizations are correctly expected. Which is the more demanding assumption – rational expectations or perfect foresight — is actually not entirely clear to me.

Now there are two ways to think about rational expectations — one benign and one terribly misleading. The benign way is that the assumption of rational expectations is a means of checking the internal consistency of a model. In other words, if we are trying to figure out whether a model is coherent, we can suppose that the model is the true model; if we then posit that the expectations of the agents correspond to the solution of the model – i.e., the agents expect the equilibrium outcome – the solution of the model will confirm the expectations that have been plugged into the minds of the agents of the model. This is sometimes called a fixed-point property. If the model doesn’t have this fixed-point property, then there is something wrong with the model. So the assumption of rational expectations does not necessarily involve any empirical assertion about the real world; it does not necessarily assert anything about how expectations are formed or whether they ever are rational in the sense that agents can predict the outcome of the relevant model. The assumption merely allows the model to be tested for latent inconsistencies. Equilibrium expectations being a property of equilibrium, it makes no sense for equilibrium expectations not to generate an equilibrium.
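As a toy illustration of that fixed-point property (a made-up expectational model of my own, not anyone’s actual model), suppose the price depends partly on the price agents expect, p = a + b·E[p] + shock, with |b| < 1. The rational expectation is simply the expectation that the model maps back into itself.

```python
# A minimal sketch of rational expectations as a fixed-point (internal
# consistency) check, for the toy model p = a + b*E[p] + shock, with |b| < 1.
a, b = 2.0, 0.5

def model_outcome(expected_price):
    """Mean price the model generates when agents hold this expectation."""
    return a + b * expected_price

# The rational expectation is the expectation the model maps into itself.
rational = a / (1.0 - b)
print(model_outcome(rational), rational)   # both 4.0: the expectation is confirmed

# Consistency check: iterating the expectation-to-outcome map converges to
# that fixed point, so this model harbors no latent inconsistency.
guess = 0.0
for _ in range(30):
    guess = model_outcome(guess)
print(round(guess, 6))                     # -> 4.0
```

Nothing in this exercise says anything about how real people actually form expectations; it only confirms that the model does not contradict itself.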

But the other way of thinking about rational expectations is as an empirical assertion about what the expectations of people actually are or how those expectations are formed. If that is how we think about rational expectations, then we are saying that people always anticipate the solution of the model. And if the model is internally consistent, then the empirical assumption that agents really do have rational expectations means that we are making an empirical assumption that the economy is in fact always in equilibrium, i.e., that it is moving through time along an equilibrium path. If agents in the true model expect the equilibrium of the true model, the agents must be in equilibrium. To break out of that tight circle, either expectations have to be wrong (non-rational) or the model from which people derive their expectations must be wrong.

Of course, one way to finesse this problem is to say that the model is not actually true and expectations are not fully rational, but that the assumptions are close enough to being true for the model to be a decent approximation of reality. That is a defensible response, but one either has to take that assertion on faith, or there has to be strong evidence that the real world corresponds to the predictions of the model. Rational-expectations models do reasonably well in predicting the performance of economies near full employment, but not so well in periods like the Great Depression and the Little Depression. In other words, they work pretty well when we don’t need them, and not so well when we do need them.

The relevance of the rational-expectations assumption was discussed a year and a half ago by David Levine of Washington University. Levine was an undergraduate at UCLA after I had left, and went on to get his Ph.D. from MIT. He later returned to UCLA and held the Armen Alchian chair in economics from 1997 to 2006. Along with Michele Boldrin, Levine wrote a wonderful book, Against Intellectual Monopoly. More recently he has written a little book (Is Behavioral Economics Doomed?) defending the rationality assumption in all its various guises, a book certainly worth reading even (or especially) if one doesn’t agree with all of its conclusions. So, although I have a high regard for Levine’s capabilities as an economist, I am afraid that I have to criticize what he has to say about rational expectations. I should also add that despite my criticism of Levine’s defense of rational expectations, I think the broader point that he makes, that people do learn from experience and that public policies should not be premised on the assumption that people will not eventually figure out how those policies are working, is valid.

In particular, let’s look at a post that Levine contributed to the Huffington Post blog defending the economics profession against the accusation that the economics profession is useless as demonstrated by their failure to predict the financial crisis of 2008. To counter this charge, Levine compared economics to physics — not necessarily the strategy I would have recommended for casting economics in a favorable light, but that’s merely an aside. Just as there is an uncertainty principle in physics, which says that you cannot identify simultaneously both the location and the speed of an electron, there’s an analogous uncertainty principle in economics, which says that the forecast affects the outcome.

The uncertainty principle in economics arises from a simple fact: we are all actors in the economy and the models we use determine how we behave. If a model is discovered to be correct, then we will change our behavior to reflect our new understanding of reality — and when enough of us do so, the original model stops being correct. In this sense future human behavior must necessarily be uncertain.

Levine is certainly right that insofar as the discovery of a new model changes expectations, the model itself can change outcomes. If the model predicts a crisis, the model, if it is believed, may be what causes the crisis. Fair enough, but Levine believes that this uncertainty principle entails the rationality of expectations.

The uncertainty principle in economics leads directly to the theory of rational expectations. Just as the uncertainty principle in physics is consistent with the probabilistic predictions of quantum mechanics (there is a 20% chance this particle will appear in this location with this speed) so the uncertainty principle in economics is consistent with the probabilistic predictions of rational expectations (there is a 3% chance of a stock market crash on October 28).

This claim, if I understand it, is shocking. The equations of quantum mechanics may be able to predict the probability that a particle will appear at a given location with a given speed, but I am unaware of any economic model that can provide even an approximately accurate prediction of the probability that a financial crisis will occur within a given time period.

Note what rational expectations are not: they are often confused with perfect foresight — meaning we perfectly anticipate what will happen in the future. While perfect foresight is widely used by economists for studying phenomena such as long-term growth where the focus is not on uncertainty — it is not the theory used by economists for studying recessions, crises or the business cycle. The most widely used theory is called DSGE for Dynamic Stochastic General Equilibrium. Notice the word stochastic — it means random — and this theory reflects the necessary randomness brought about by the uncertainty principle.

I have already observed that the introduction of random variables into a general equilibrium is not a significant relaxation of the predictive capacities of agents — and perhaps not even a relaxation, but an enhancement of the predictive capacities of the agents. The problem with this distinction between perfect foresight and stochastic disturbances is that there is no relaxation of the requirement that all agents share the same expectations of all future prices in all possible future states of the world. The world described is a world without surprise and without regret. From the standpoint of the informational requirements imposed on agents, the distinction between perfect foresight and rational expectations is not worth discussing.

In simple language what rational expectations means is “if people believe this forecast it will be true.”

Well, I don’t know about that. If the forecast is derived from a consistent, but empirically false, model, the assumption of rational expectations will ensure that the forecast of the model coincides with what people expect. But the real world may not cooperate, producing an outcome different from what was forecast and what was rationally expected. The expectation of a correct forecast does not guarantee the truth of the forecast unless the model generating the forecast is true. Is Levine convinced that the models used by economists are sufficiently close to being true to generate valid forecasts with a frequency approaching that of the Newtonian model in forecasting, say, solar eclipses? More generally, Levine seems to be confusing the substantive content of a theory — what motivates the agents populating the theory and what constrains the choices of those agents in their interactions with other agents and with nature — with an assumption about how agents form expectations. This confusion becomes palpable in the next sentence.

By contrast if a theory is not one of rational expectations it means “if people believe this forecast it will not be true.”

I don’t know what it means to say that “a theory is not one of rational expectations.” Almost every economic theory depends in some way on the expectations of the agents populating the theory. There are many possible assumptions to make about how expectations are formed. Most of those assumptions about how expectations are formed allow, though they do not require, expectations to correspond to the predictions of the model. In other words, expectations can be viewed as an equilibrating variable of a model. To make a stronger assertion than that is to make an empirical claim about how closely the real world corresponds to the equilibrium state of the model. Levine goes on to make just such an assertion. Referring to a non-rational-expectations theory, he continues:

Obviously such a theory has limited usefulness. Or put differently: if there is a correct theory, eventually most people will believe it, so it must necessarily be rational expectations. Any other theory has the property that people must forever disbelieve the theory regardless of overwhelming evidence — for as soon as the theory is believed it is wrong.

It is hard to interpret what Levine is saying. What theory or class of theories is being dismissed as having limited usefulness? Presumably, all theories that are not “of rational expectations.” OK, but why is their usefulness limited? Is it that they are internally inconsistent, i.e., they lack the fixed-point property whose absence signals internal inconsistency, or is there some other deficiency? Levine seems to be conflating the two very different ways of understanding rational expectations (a test for internal consistency v. a substantive empirical hypothesis). Perhaps that’s why Levine feels compelled to paraphrase. But the paraphrase makes it clear that he is not distinguishing between the substantive theory and the specific expectational hypothesis. I also can’t tell whether his premise (“if there is a correct theory”) is meant to be a factual statement or a hypothetical. If it is the former, it would be nice if the correct theory were identified. If the correct theory can’t even be identified, how are people supposed to know which theory they are supposed to believe, so that they can form their expectations accordingly? Rather than an explanation for why the correct rational-expectations theory will eventually be recognized, this sounds like an explanation for why the correct theory is unknowable. Unless, of course, we assume that rational expectations are a necessary feature of reality, in which case people have been forming expectations based on the one true model all along, and all economists are doing is trying to formalize a pre-existing process of expectations formation that already solves the problem. But the rest of his post (see part two here) makes it clear that Levine (properly) does not hold that extreme position about rational expectations.

So, in the end, I find myself unable to make sense of rational expectations except as a test for the internal consistency of an economic model and, perhaps also, as a tool for policy analysis. Just as one does not want to work with a model that is internally inconsistent, one does not want to formulate a policy based on the assumption that people will fail to understand the effects of the policy being proposed. But as a tool for understanding how economies actually work and what can go wrong, the rational-expectations assumption abstracts from precisely the key problem, the inconsistencies between the expectations held by different agents, which are an inevitable, though certainly not the only, cause of the surprise and regret that are so characteristic of real life.

Did Raising Interest Rates under the Gold Standard Really Increase Aggregate Demand?

I hope that I can write this quickly just so people won’t think that I’ve disappeared. I’ve been a bit under the weather this week, and the post that I’ve been working on needs more attention and it’s not going to be ready for a few more days. But the good news, from my perspective at any rate, is that Scott Sumner, as he has done so often in the past, has come through for me by giving me something to write about. In his most recent post at his second home on Econlog, Scott writes the following:

I recently did a post pointing out that higher interest rates don’t reduce AD.  Indeed even higher interest rates caused by a decrease in the money supply don’t reduce AD. Rather the higher rates raise velocity, but that effect is more than offset by the decrease in the money supply.

Of course that’s not the way Keynesians typically look at things.  They believe that higher interest rates actually cause AD to decrease.  Except under the gold standard. Back in 1988 Robert Barsky and Larry Summers wrote a paper showing that higher interest rates were expansionary when the dollar was pegged to gold.  Now in fairness, many Keynesians understand that higher interest rates are often associated with higher levels of AD.  But Barsky and Summers showed that the higher rates actually caused AD to increase.  Higher nominal rates increase the opportunity cost of holding gold. This reduces gold demand, and thus lowers its value.  Because the nominal price of gold is fixed under the gold standard, the only way for the value of gold to decrease is for the price level to increase. Thus higher interest rates boost AD and the price level.  This explains the “Gibson Paradox.”

Very clever on Scott’s part, and I am sure that he will have backfooted a lot of Keynesians. There’s just one problem with Scott’s point, which is that he forgets that an increase in interest rates by the central bank under the gold standard corresponds to an increase in the demand of the central bank for gold, which, as Scott certainly knows better than almost anyone else, is deflationary. What Barsky and Summers were talking about when they were relating interest rates to the value of gold was movements in the long-term interest rate (the yield on consols), not movements in the central-bank lending rate (the rate central banks charge for overnight or very short-dated loans to other banks). As Hawtrey showed in A Century of Bank Rate, the yield on consols was not closely correlated with Bank Rate. So not only is Scott looking at the wrong interest rate (for purposes of his argument), he is – and I don’t know how to phrase this delicately – reasoning from a price change. Ouch!
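To spell out the two channels in the simplest possible notation (a schematic of my own, not Barsky and Summers’s): under a gold standard the nominal price of gold is pegged at \(\bar{P}_g\), so the price level varies inversely with the real value of gold \(v_g\) (the quantity of goods exchanging for a unit of gold):

\[
P \;=\; \frac{\bar{P}_g}{v_g}.
\]

A rise in the long-term nominal rate raises the opportunity cost of holding gold, reducing the demand for gold, lowering \(v_g\), and raising \(P\); that is the Barsky-Summers channel. A central bank that raises Bank Rate in order to accumulate gold reserves, by contrast, adds to the demand for gold, raising \(v_g\) and lowering \(P\). The sign of the effect depends on which interest rate is moving and why it is moving, which is why reasoning from the rate change alone goes astray.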

Macroeconomic Science and Meaningful Theorems

Greg Hill has a terrific post on his blog, providing the coup de grace to Stephen Williamson’s attempt to show that the way to increase inflation is for the Fed to raise its Federal Funds rate target. Williamson’s problem, Hill points out, is that he attempts to derive his results from relationships that exist in equilibrium. But equilibrium relationships in and of themselves are sterile. What we care about is how a system responds to some change that disturbs a pre-existing equilibrium.

Williamson acknowledged that “the stories about convergence to competitive equilibrium – the Walrasian auctioneer, learning – are indeed just stories . . . [they] come from outside the model” (here).  And, finally, this: “Telling stories outside of the model we have written down opens up the possibility for cheating. If everything is up front – written down in terms of explicit mathematics – then we have to be honest. We’re not doing critical theory here – we’re doing economics, and we want to be treated seriously by other scientists.”

This self-conscious scientism on Williamson’s part is not just annoyingly self-congratulatory. “Hey, look at me! I can write down mathematical models, so I’m a scientist, just like Richard Feynman.” It’s wildly inaccurate, because the mere statement of equilibrium conditions is theoretically vacuous. Back to Greg:

The most disconcerting thing about Professor Williamson’s justification of “scientific economics” isn’t its uncritical “scientism,” nor is it his defense of mathematical modeling. On the contrary, the most troubling thing is Williamson’s acknowledgement-cum-proclamation that his models, like many others, assume that markets are always in equilibrium.

Why is this assumption a problem?  Because, as Arrow, Debreu, and others demonstrated a half-century ago, the conditions required for general equilibrium are unimaginably stringent.  And no one who’s not already ensconced within Williamson’s camp is likely to characterize real-world economies as always being in equilibrium or quickly converging upon it.  Thus, when Williamson responds to a question about this point with, “Much of economics is competitive equilibrium, so if this is a problem for me, it’s a problem for most of the profession,” I’m inclined to reply, “Yes, Professor, that’s precisely the point!”

Greg proceeds to explain that the Walrasian general equilibrium model involves the critical assumption (implemented by the convenient fiction of an auctioneer who announces prices and computes supply and demand at those prices before allowing trade to take place) that no trading takes place except at the equilibrium price vector (where the number of elements in the vector equals the number of prices in the economy). Without an auctioneer there is no way to ensure that the equilibrium price vector, even if it exists, will ever be found.
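Even granting the auctioneer, the textbook price-adjustment rule (raise the price of whatever is in excess demand, lower the price of whatever is in excess supply) need not ever arrive at the equilibrium. Here is a minimal sketch, using a discrete approximation of Scarf’s well-known three-good example and illustrative starting prices of my own choosing, in which the rule circles the equilibrium price vector indefinitely without finding it:

```python
# Sketch of Scarf's three-good example: consumer i owns one unit of good i
# and wants goods i and i+1 in fixed 1:1 proportions. An equilibrium price
# vector exists, p = (1, 1, 1) up to scale, but tatonnement never reaches it.
import numpy as np

def excess_demand(p):
    """Market excess demands implied by the three consumers' Leontief tastes."""
    z = np.empty(3)
    for j in range(3):
        z[j] = (p[j] / (p[j] + p[(j + 1) % 3])    # consumer j's demand for good j
                + p[j - 1] / (p[j - 1] + p[j])    # consumer j-1's demand for good j
                - 1.0)                            # less the unit endowment
    return z

p = np.array([1.5, 1.0, 0.5])              # start away from equilibrium
for step in range(1, 20001):
    p = p + 0.01 * excess_demand(p)        # the auctioneer's adjustment rule
    if step % 5000 == 0:
        # at equilibrium all relative prices are equal, so this ratio would be 1
        print(step, round(p.max() / p.min(), 3))
```

The printed ratio of the highest to the lowest price never approaches 1; prices orbit the equilibrium rather than converge to it.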

Franklin Fisher has shown that decisions made out of equilibrium will only converge to equilibrium under highly restrictive conditions (in particular, “no favorable surprises,” i.e., all “sudden changes in expectations are disappointing”).  And since Fisher has, in fact, written down “the explicit mathematics” leading to this conclusion, mustn’t we conclude that the economists who assume that markets are always in equilibrium are really the ones who are “cheating”?

An alternative general equilibrium story is that learning takes place, allowing the economy to converge gradually on a general equilibrium time path, but Greg easily disposes of that story as well.

[T]he learning narrative also harbors massive problems, which come out clearly when viewed against the background of the Arrow-Debreu idealized general equilibrium construction, which includes a complete set of intertemporal markets in contingent claims.  In the world of Arrow-Debreu, every price in every possible state of nature is known at the moment when everyone’s once-and-for-all commitments are made.  Nature then unfolds – her succession of states is revealed – and resources are exchanged in accordance with the (contractual) commitments undertaken “at the beginning.”

In real-world economies, these intertemporal markets are woefully incomplete, so there’s trading at every date, and a “sequence economy” takes the place of Arrow and Debreu’s timeless general equilibrium.  In a sequence economy, buyers and sellers must act on their expectations of future events and the prices that will prevail in light of these outcomes.  In the limiting case of rational expectations, all agents correctly forecast the equilibrium prices associated with every possible state of nature, and no one’s expectations are disappointed. 

Unfortunately, the notion that rational expectations about future prices can replace the complete menu of Arrow-Debreu prices is hard to swallow.  Frank Hahn, who co-authored “General Competitive Analysis” with Kenneth Arrow (1972), could not begin to swallow it, and, in his disgorgement, proceeded to describe in excruciating detail why the assumption of rational expectations isn’t up to the job (here).  And incomplete markets are, of course, but one departure from Arrow-Debreu.  In fact, there are so many more that Hahn came to ridicule the approach of sweeping them all aside, and “simply supposing the economy to be in equilibrium at every moment of time.”

Just to pile on, I would also point out that any general equilibrium model assumes that there is a given state of knowledge that is available to all traders collectively, but not necessarily to each trader. In this context, learning means that traders gradually learn what the pre-existing facts are. But in the real world, knowledge increases and evolves through time. As knowledge changes, capital — both human and physical — embodying that knowledge becomes obsolete and has to be replaced or upgraded, at unpredictable moments of time, because it is the nature of new knowledge that it cannot be predicted. The concept of learning incorporated in these sorts of general equilibrium constructs is a travesty of the kind of learning that characterizes the growth of knowledge in the real world. The implications for the existence of a general equilibrium model in a world in which knowledge grows in an unpredictable way are devastating.

Greg aptly sums up the absurdity of using general equilibrium theory (the description of a decentralized economy in which the component parts are in a state of perfect coordination) as the microfoundation for macroeconomics (the study of decentralized economies that are less than perfectly coordinated) as follows:

What’s the use of “general competitive equilibrium” if it can’t furnish a sturdy, albeit “external,” foundation for the kind of modeling done by Professor Williamson, et al?  Well, there are lots of other uses, but in the context of this discussion, perhaps the most important insight to be gleaned is this: Every aspect of a real economy that Keynes thought important is missing from Arrow and Debreu’s marvelous construction.  Perhaps this is why Axel Leijonhufvud, in reviewing a state-of-the-art New Keynesian DSGE model here, wrote, “It makes me feel transported into a Wonderland of long ago – to a time before macroeconomics was invented.”

To which I would just add that nearly 70 years ago, Paul Samuelson published his magnificent Foundations of Economic Analysis, a work undoubtedly read and mastered by Williamson. But the central contribution of the Foundations was the distinction between equilibrium conditions and what Samuelson (owing to the influence of the then still fashionable philosophical school called logical positivism) mislabeled meaningful theorems. A mere equilibrium condition is not the same as a meaningful theorem, but Samuelson showed how a meaningful theorem can be mathematically derived from an equilibrium condition. The link between equilibrium conditions and meaningful theorems was the foundation of economic analysis. Without a mathematical connection between equilibrium conditions and meaningful theorems analogous to the one provided by Samuelson in the Foundations, claims to have provided microfoundations for macroeconomics are, at best, premature.
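To recall the simplest textbook version of that derivation (my illustration, not an example taken from Samuelson): the equilibrium condition for a single market with a demand shifter \(\alpha\) is

\[
D(p, \alpha) - S(p) = 0,
\]

which by itself predicts nothing. Differentiating, and invoking the stability condition that excess demand be decreasing in price, \(D_p - S_p < 0\) (Samuelson’s correspondence principle), yields

\[
\frac{dp}{d\alpha} \;=\; -\frac{D_\alpha}{D_p - S_p} \;>\; 0 \quad \text{when } D_\alpha > 0,
\]

a (weakly) testable implication: whatever shifts demand outward raises the equilibrium price. It is the passage from the bare equilibrium condition to the signed comparative-statics result that Samuelson called a meaningful theorem, and it is precisely that passage that a mere statement of equilibrium conditions leaves out.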

James Grant on Irving Fisher and the Great Depression

In the past weekend edition (January 4-5, 2014) of the Wall Street Journal, James Grant, financial journalist, reviewed (“Great Minds, Failed Prophets”) Fortune Tellers by Walter A. Friedman, a new book about the first generation of economic forecasters, or business prophets. Friedman tells the stories of forecasters who became well-known and successful in the 1920s: Roger Babson, John Moody, the team of Charles J. Bullock and Warren Persons, Wesley Mitchell, and the great Irving Fisher. I haven’t read the book, but, judging from Grant’s review, I am guessing it’s a good read.

Grant is a gifted, erudite and insightful journalist, but unfortunately his judgment is often led astray by a dogmatic attachment to Austrian business cycle theory and the gold standard, which causes him to make an absurd identification of Fisher’s views on how to stop the Great Depression with the disastrous policies of Herbert Hoover after the stock market crash.

Though undoubtedly a genius, Fisher was not immune to bad ideas, and was easily carried away by his enthusiasms. He was often right, but sometimes he was tragically wrong. His forecasting record and his scholarship made him perhaps the best known American economist in the 1920s, and a good case could be made that he was the greatest economist who ever lived, but his reputation was destroyed when, on the eve of the stock market crash, he commented “stock prices have reached what looks like a permanently high plateau.” For a year, Fisher insisted that stock prices would rebound (which they did in early 1930, recovering most of their losses), but the recovery in stock prices was short-lived, and Fisher’s public reputation never recovered.

Certainly, Fisher should have been more alert to the danger of a depression than he was. Working with a monetary theory similar to Fisher’s, both Ralph Hawtrey and Gustav Cassel foresaw the deflationary dangers associated with the restoration of the gold standard and warned against the disastrous policies of the Bank of France and the Federal Reserve in 1928-29, which led to the downturn and the crash. What Fisher thought of the warnings of Hawtrey and Cassel I don’t know, but it would be interesting and worthwhile for some researcher to go back and look for Fisher’s comments on Hawtrey and Cassel before or after the 1929 crash.

So there is no denying that Fisher got something wrong in his forecasts, but we (or least I) still don’t know exactly what his mistake was. This is where Grant’s story starts to unravel. He discusses how, under the tutelage of Wesley Mitchell, Herbert Hoover responded to the crash by “[summoning] the captains of industry to the White House.”

So when stocks crashed in 1929, Hoover, as president, summoned the captains of industry to the White House. Profits should bear the brunt of the initial adjustment to the downturn, he said. Capital-spending plans should go forward, if not be accelerated. Wages must not be cut, as they had been in the bad old days of 1920-21. The executives shook hands on it.

In the wake of this unprecedented display of federal economic activism, Wesley Mitchell, the economist, said: “While a business cycle is passing over from a phase of expansion to the phase of contraction, the president of the United States is organizing the economic forces of the country to check the threatened decline at the start, if possible. A more significant experiment in the technique of balance could not be devised than the one which is being performed before our very eyes.”

The experiment in balance ended in monumental imbalance. . . . The laissez-faire depression of 1920-21 was over and done within 18 months. The federally doctored depression of 1929-33 spanned 43 months. Hoover failed for the same reason that Babson, Moody and Fisher fell short: America’s economy is too complex to predict, much less to direct from on high.

We can stipulate that Hoover’s attempt to keep prices and wages from falling in the face of a massive deflationary shock did not aid the recovery, but neither did it cause the Depression; the deflationary shock did. The deflationary shock was the result of the failed attempt to restore the gold standard and the insane policies of the Bank of France, which might have been counteracted, but were instead reinforced, by the Federal Reserve.

Before closing, Grant turns back to Fisher, recounting, with admiration, Fisher’s continuing scholarly achievements despite the loss of his personal fortune in the crash and the collapse of his public reputation.

Though sorely beset, Fisher produced one of his best known works in 1933, the essay called “The Debt-Deflation Theory of Great Depressions,” in which he observed that plunging prices made debts unsupportable. The way out? Price stabilization, the very policy that Hoover had championed.

Grant has it totally wrong. Hoover acquiesced in, even encouraged, the deflationary policies of the Fed, and never wavered in his commitment to the gold standard. His policy of stabilizing prices and wages was largely ineffectual, because you can’t control the price level by controlling individual prices. Fisher understood the difference between controlling individual prices and controlling the price level. It is Grant, not Fisher, who resembles Hoover in failing to grasp that essential distinction.

The Microfoundations Wars Continue

I see belatedly that the battle over microfoundations continues on the blogosphere, with Paul Krugman, Noah Smith, Adam Posen, and Nick Rowe all challenging the microfoundations position, while Tony Yates and Stephen Williamson defend it with Simon Wren-Lewis trying to serve as a peacemaker of sorts. I agree with most of the criticisms, but what I found most striking was the defense of microfoundations offered by Tony Yates, who expresses the mentality of the microfoundations school so well that I thought that some further commentary on his post would be worthwhile.

Yates’s post was prompted by a Twitter exchange between Yates and Adam Posen after Posen tweeted that microfoundations have no merit, an exaggeration no doubt, but not an unreasonable one. Noah Smith chimed in with a challenge to Yates to defend the proposition that microfoundations do have merit. Hence the title (“Why Microfoundations Have Merit”) of Yates’s post. What really caught my attention in Yates’s post is that, in trying to defend the proposition that microfounded models do have merit, Yates offers the following methodological, or perhaps aesthetic, pronouncement.

The merit in any economic thinking or knowledge must lie in it at some point producing an insight, a prediction, a prediction of the consequence of a policy action, that helps someone, or a government, or a society to make their lives better.

Microfounded models are models which tell an explicit story about what the people, firms, and large agents in a model do, and why.  What do they want to achieve, what constraints do they face in going about it?  My own position is that these are the ONLY models that have anything genuinely economic to say about anything.  It’s contestable whether they have any merit or not.

Paraphrasing, I would say that Yates defines merit as a useful insight or prediction into the way the world works. Fair enough. He then defines microfounded models as those models that tell an explicit story about what the agents populating the model are trying to do and the resulting outcomes of their efforts. This strikes me as a definition that includes more than just microfounded models, but let that pass, at least for the moment. Then comes the key point. These models “are the ONLY models that have anything genuinely economic to say about anything.” A breathtaking claim.

In other words, Yates believes that unless an insight, a proposition, or a conjecture, can be logically deduced from microfoundations, it is not economics. So whatever the merits of microfounded models, a non-microfounded model is not, as a matter of principle, an economic model. Talk about methodological authoritarianism.

Having established, to his own satisfaction at any rate, that only microfounded models have a legitimate claim to be considered economic, Yates defends the claim that microfounded models have merit by citing the Lucas critique as an early example of a meritorious insight derived from the “microfoundations project.” Now there is something a bit odd about this claim, because Yates neglects to mention that the Lucas critique, as Lucas himself acknowledged, had been anticipated by earlier economists, including both Keynes and Tinbergen. So if the microfoundations project does indeed have merit, the example chosen to illustrate that merit does nothing to show that the merit is in any way peculiar to the microfoundations project. It also bears repeating (see my earlier post on the Lucas critique) that the Lucas critique only tells us about steady states, so it provides no useful information, insight, prediction or guidance about using monetary policy to speed up the recovery to a new steady state. So we should be careful not to attribute more merit to the Lucas critique than it actually possesses.

To be sure, in his Twitter exchange with Adam Posen, Yates mentioned several other meritorious contributions from the microfoundations project, each of which Posen rejected because the merit of those contributions lies in the intuition behind the one-line idea. To which Yates responded:

This statement is highly perplexing to me.  Economic ideas are claims about what people and firms and governments do, and why, and what unfolds as a consequence.  The models are the ideas.  ‘Intuition’, the verbal counterpart to the models, are not separate things, the origins of the models.  They are utterances to ourselves that arise from us comprehending the logical object of the model, in the same way that our account to ourselves of an equation arises from the model.  One could make an argument for the separateness of ‘intuition’ at best, I think, as classifying it in some cases to be a conjecture about what a possible economic world [a microfounded model] would look like.  Intuition as story-telling to oneself can sometimes be a good check on whether what we have done is nonsense.  But not always.  Lots of results are not immediately intuitive.  That’s not a reason to dismiss it.  (Just like most of modern physics is not intuitive.)  Just a reason to have another think and read through your code carefully.

And Yates’s response is highly perplexing to me. An economic model is usually the product of some thought process intended to construct a coherent model from some mental raw materials (ideas) and resources (knowledge and techniques). The thought process is an attempt to embody some idea or ideas about a posited causal mechanism or about a posited mutual interdependency among variables of interest. The intuition is the idea or insight that some such causal mechanism or mutual interdependency exists. A model is one particular implementation (out of many other possible implementations) of the idea in a way that allows further implications of the idea to be deduced, thereby achieving a deeper understanding of the original insight. The “microfoundations project” does not directly determine what kinds of ideas can be modeled, but it does require that models have certain properties to be considered acceptable implementations of any idea. In particular, the model must incorporate a dynamic stochastic general equilibrium system with rational expectations and a unique equilibrium. Ideas not tractable under those modeling constraints are excluded. Posen’s point, it seems to me, is not that no worthwhile, meritorious ideas have been modeled within the modeling constraints imposed by the microfoundations project, but that the microfoundations project has done nothing to create or propagate those ideas; it has just forced those ideas to be implemented within the template of the microfoundations project.
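To make the template concrete, here is the canonical textbook building block (my own choice of example, not one cited in the exchange): an optimizing household whose consumption plan must satisfy an Euler equation under rational expectations,

\[
u'(c_t) = \beta\, E_t\bigl[(1 + r_{t+1})\, u'(c_{t+1})\bigr],
\]

where \(E_t\) denotes the expectation implied by the model itself, conditional on date-t information. Whatever the originating idea, it has to be expressed through optimality conditions of this kind before it counts as microfounded.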

None of the characteristic properties of the microfoundations project are assumptions for which there is compelling empirical or theoretical justification. We know how to prove the existence of a general equilibrium for economic models populated by agents satisfying certain rationality assumptions (assumptions for which there is no compelling a priori argument and whose primary justifications are tractability and the accuracy of the empirical implications deduced from them), but the conditions for a unique general equilibrium are far more stringent than the standard convexity assumptions required to prove existence. Moreover, even given the existence of a unique general equilibrium, there is no proof that an economy not in general equilibrium will reach the general equilibrium under the standard rules of price adjustment. Nor is there any empirical evidence to suggest that actual economies are in any sense in a general equilibrium, though one might reasonably suppose that actual economies are from time to time in the neighborhood of a general equilibrium. The rationality of expectations is in one sense an entirely ad hoc assumption, though an inconsistency between the predictions of a model under the assumption of rational expectations and the expectations attributed to the agents in the model is surely a sign that there is a problem in the structure of the model. But just because rational expectations can be used to check for latent design flaws in a model, it does not follow that assuming rational expectations leads to empirical implications that are generally, or even occasionally, empirically valid.
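Stated formally (a standard definition, nothing peculiar to this debate), rational expectations requires that the agents’ subjective forecasts coincide with the model’s own conditional expectations,

\[
x^{e}_{t+1} = E\bigl[x_{t+1} \mid \Omega_t\bigr],
\]

where the expectation on the right is computed using the structure of the model itself and the information set \(\Omega_t\). That is why the assumption functions as an internal consistency check on a model rather than as an empirical hypothesis about how actual agents form their forecasts.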

Thus, the key assumptions of microfounded models are not logically entailed by any deep axioms; they are imposed by methodological fiat, a philosophically and pragmatically unfounded insistence that certain modeling conventions be adhered to in order to count as “scientific.” Now it would be one thing if these modeling conventions were generating new, previously unknown, empirical relationships or generating more accurate predictions than those generated by non-microfounded models, but evidence that the predictions of microfounded models are better than the predictions of non-microfounded models is notably lacking. Indeed, Carlaw and Lipsey have shown that microfounded models generate predictions that are less accurate than those generated by non-microfounded models. If microfounded theories represent scientific progress, they ought to be producing an increase, not a decrease, in explanatory power.

The microfoundations project is predicated on a gigantic leap of faith that the existing economy has an underlying structure corresponding closely enough to the assumptions of the Arrow-Debreu model (suitably adjusted for stochastic elements and for whatever frictions, e.g., Calvo pricing, the modeler judges allowable) for models built on those assumptions to capture how actual economies work. This is classic question-begging with a vengeance: arriving at a conclusion by assuming what needs to be proved. Such question-begging is not necessarily illegitimate; every research program is based on some degree of faith or optimism that results not yet in hand will justify the effort required to generate those results. What is not legitimate is the claim that ONLY the models based on such question-begging assumptions are genuinely scientific.

This question-begging mentality masquerading as science is actually not unique to the microfoundations school. It is not uncommon among those with an exaggerated belief in the powers of science, a mentality that Hayek called scientism. It is akin to physicalism, the philosophical doctrine that all phenomena are physical. According to physicalism, there are no mental phenomena. What we perceive as mental phenomena, e.g., consciousness, is not real, but an illusion. Our mental states are really nothing but physical states. I do not say that physicalism is false, just that it is a philosophical position, not a proposition derived from science, and certainly not a fact that is, or can be, established by the currently available tools of science. It is a faith that some day — some day probably very, very far off into the future — science will demonstrate that our mental processes can be reduced to, and derived from, the laws of physics. Similarly, given the inability to account for observed fluctuations of output and employment in terms of microfoundations, the assertion that only microfounded models are scientific is simply an expression of faith in some, as yet unknown, future discovery, not a claim supported by any available scientific proof or evidence.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing has been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
