Archive for the 'rational expectations' Category

A Primer on Equilibrium

After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”

That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution.

Now Henry thinks that this reasoning is circular. My argument would be circular if I defined an equilibrium to be the same thing as correct expectations. But I am not so defining an equilibrium. I am saying that the correctness of expectations by all agents implies 1) that their expectations are mutually consistent, and 2) that, having made plans based on their expectations, which, by assumption, agents felt were the best set of choices available to them given those expectations, if the expectations of the agents are realized, then they would not regret the decisions and the choices that they made. Each agent would be as well off as he could have made himself, given his perceived opportunities when the decisions were made. That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making process, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist (and merely being able to write down an economic model does not mean that a solution to the model exists), then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.

Now the key to proving the existence of a general equilibrium is to show that the general equilibrium model implies the existence of what mathematicians call a fixed point. Consider a mapping – a rule or a function – that takes every point in a convex compact set of points and assigns that point to another point in the same set; a fixed point of the mapping is a point that gets assigned to itself. A convex, compact set has two important properties: 1) the line connecting any two points in the set is entirely contained within the boundaries of the set, and 2) the set is closed and bounded, with no gaps between any two points in the set. The set of points in a circle or a rectangle is a convex compact set; the set of points contained in the Star of David is not a convex set. Any two points in the circle will be connected by a line that lies completely within the circle; the points at adjacent tips of a Star of David will be connected by a line that lies at least partly outside the Star of David.

If you think of the set of all possible price vectors for an economy, those vectors – each containing a price for each good or service in the economy – could be mapped onto itself in the following way. Given all the equations describing the behavior of each agent in the economy, the quantity demanded and supplied of each good could be calculated, giving us the excess demand (the difference between the amounts demanded and supplied) for each good. Then the price of every good in excess demand would be raised, the price of every good in negative excess demand would be reduced, and the price of every good with zero excess demand would be held constant. To ensure that the mapping was taking a point from a given convex set onto itself, all prices could be normalized so that the sum of all the individual prices would always equal 1. The fixed-point theorem ensures that for a continuous mapping from a convex compact set into itself there must be at least one fixed point, i.e., at least one point in the set that gets mapped onto itself. The price vector corresponding to that point is an equilibrium, because, given how our mapping rule was defined, a point is mapped onto itself if and only if all excess demands are zero, so that no prices change. Every fixed point – and there may be one or more fixed points – corresponds to an equilibrium price vector, and every equilibrium price vector is associated with a fixed point.
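The mapping just described can be sketched in a few lines of code. The two-good exchange economy, the Cobb-Douglas demands, and all parameter values below are hypothetical illustrations chosen for the sketch, not part of the original argument:

```python
# Hypothetical two-good exchange economy with two Cobb-Douglas consumers.
# Prices are normalized to sum to 1, so price vectors live on the unit
# simplex, a convex compact set. The mapping raises the price of a good
# in excess demand and lowers the price of a good in excess supply; a
# fixed point of the mapping is an equilibrium price vector.

def excess_demand(p):
    # Consumer A is endowed with 1 unit of good 1, consumer B with 1 unit
    # of good 2. A spends a 0.6 share of wealth on good 1, B a 0.3 share.
    wealth_a, wealth_b = p[0], p[1]
    demand_1 = 0.6 * wealth_a / p[0] + 0.3 * wealth_b / p[0]
    demand_2 = 0.4 * wealth_a / p[1] + 0.7 * wealth_b / p[1]
    return [demand_1 - 1.0, demand_2 - 1.0]   # total supply is (1, 1)

def price_mapping(p, step=0.1):
    # Adjust each price in the direction of its excess demand, then
    # renormalize so the image stays on the simplex.
    z = excess_demand(p)
    raw = [max(p[i] + step * z[i], 1e-9) for i in range(2)]
    total = sum(raw)
    return [q / total for q in raw]

p = [0.5, 0.5]
for _ in range(500):
    p = price_mapping(p)

print(p)                 # near the fixed point [3/7, 4/7] for these shares
print(excess_demand(p))  # both excess demands are approximately zero
```

For this particular economy the iteration happens to converge, but, as noted below, that is a property of the example (Cobb-Douglas demands satisfy gross substitutability), not a general guarantee of the tatonnement rule.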

Before going on, I ought to make an important observation that is often ignored. The mathematical proof of the existence of an equilibrium doesn’t prove that the economy operates at an equilibrium, or even that the equilibrium could be identified under the mapping rule described (which is a kind of formalization of the Walrasian tatonnement process). The mapping rule doesn’t guarantee that you would ever discover a fixed point in any finite number of iterations. Walras thought the price-adjustment rule of raising the prices of goods in excess demand and reducing the prices of goods in excess supply would converge on the equilibrium price vector. But the conditions under which you can prove that this naïve price-adjustment rule converges to an equilibrium price vector turn out to be very restrictive. So even though we can prove that the competitive model has an equilibrium solution – in other words, that the behavioral, structural and technological assumptions of the model are coherent, so that the model has a solution – the model makes no assumptions about how prices are actually determined that would prove that the equilibrium is ever reached. In fact, the problem is even more daunting than the previous sentence suggests, because even Walrasian tatonnement imposes an incredibly powerful restriction, namely that no trading is allowed at non-equilibrium prices. In practice, there are almost never recontracting provisions allowing traders to revise the terms of their trades once it becomes clear that the prices at which trades were made were not equilibrium prices.

I now want to show how price expectations fit into all of this, because the original general equilibrium models were either one-period models or formal intertemporal models that were reduced to single-period models by assuming that all trading for future delivery was undertaken in the first period by long-lived agents who would eventually carry out the transactions that were contracted in period 1 for subsequent consumption and production. Time was preserved in a purely formal, technical way, but all economic decision-making was actually concluded in the first period. But even though the early general-equilibrium models did not encompass expectations, one of the extraordinary precursors of modern economics, Augustin Cournot, whose work was too advanced for his contemporaries even to comprehend, much less make use of, had incorporated the idea of expectations into the solution of his famous economic model of oligopolistic price setting.

The key to oligopolistic pricing is that each oligopolist must take into account not just consumer demand for his product and his own production costs; he must consider as well what actions will be taken by his rivals. This is not a problem for a competitive producer (a price-taker) or for a pure monopolist. The price-taker simply compares the price at which he can sell as much as he wants with his production costs, increasing output until his marginal cost rises to match that price. The pure monopolist, if he knows, as is assumed in such exercises, or thinks he knows, the shape of the customer demand curve, selects the price and quantity combination on the demand curve that maximizes total profit (corresponding to the equality of marginal revenue and marginal cost). In oligopolistic situations, each producer must take into account how much his rivals will sell, or what prices they will set.

It was by positing such a situation and finding an analytic solution that Cournot made a stunning intellectual breakthrough. In the simple duopoly case, Cournot posited that if the duopolists had identical costs, then each could find his optimal output conditional on the output chosen by the other. This is a simple profit-maximization problem for each duopolist, given a demand curve for the combined output of both (the products assumed to be identical, so that a single price must obtain for the output of both), a cost curve, and the output of the other duopolist. Thus, for each duopolist there is a reaction curve showing his optimal output given the output of the other. See the accompanying figure.

If one duopolist produces zero, the optimal output for the other is the monopoly output. Depending on the level of marginal cost, there is some output by either of the duopolists that is sufficient to make it unprofitable for the other duopolist to produce anything. That level of output corresponds to the competitive output at which price just equals marginal cost. So the slope of each reaction function corresponds to the ratio of the monopoly output to the competitive output, which, with linear demand and constant marginal cost, is 1:2. Given identical costs, the two reaction curves are symmetric, and the optimal output for each, given the expected output of the other, corresponds to the intersection of the two reaction curves, at which both duopolists produce the same quantity. The combined output of the two duopolists will be greater than the monopoly output, but less than the competitive output at which price equals marginal cost. With linear demand and constant marginal cost, it turns out that each duopolist produces one-third of the competitive output. In the general case with n oligopolists, the ratio of the combined output of all n firms to the competitive output equals n/(n+1).
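Cournot’s reaction-function logic is easy to verify numerically. The linear inverse demand curve P = a − bQ, the constant marginal cost c, and the parameter values in the sketch below are my own illustrative assumptions:

```python
# Illustrative Cournot model: linear inverse demand P = a - b*Q and
# constant marginal cost c (parameter values chosen arbitrarily).
a, b, c = 100.0, 1.0, 10.0
competitive_q = (a - c) / b      # output at which price equals marginal cost
monopoly_q = competitive_q / 2   # reaction curve's intercept on a firm's own axis

def best_response(rival_q):
    # Profit-maximizing output, given the total output the firm
    # expects its rival(s) to produce.
    return max((a - c - b * rival_q) / (2 * b), 0.0)

# Alternating best responses from an arbitrary starting expectation
# converge to the intersection of the two reaction curves.
q1, q2 = 0.0, 0.0
for _ in range(100):
    q1 = best_response(q2)
    q2 = best_response(q1)
print(q1 / competitive_q, q2 / competitive_q)  # each is 1/3 of competitive output

# General n-firm case: each firm produces (a - c) / ((n + 1) * b), so
# combined output is n/(n+1) of the competitive output.
for n in (1, 2, 3, 10):
    q_star = (a - c) / ((n + 1) * b)
    # Fixed-point check: the best response to the other n-1 firms'
    # equilibrium output reproduces the firm's own equilibrium output.
    assert abs(best_response((n - 1) * q_star) - q_star) < 1e-9
    print(n, n * q_star / competitive_q)       # equals n / (n + 1)
```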

Cournot’s solution corresponds to a fixed point at which the equilibrium of the model implies that both duopolists have correct expectations of the output of the other. Given the assumptions of the model, if the duopolists both expect the other to produce an output equal to one-third of the competitive output, their expectations will be consistent and will be realized. If either one expects the other to produce a different output, the outcome will not be an equilibrium, and each duopolist will regret his output decision, because the price at which he can sell his output will differ from the price that he had expected. In the Cournot case, you could define a mapping from the vector of quantities that each duopolist expects the other to produce to the vector of outputs planned by each duopolist. An equilibrium corresponds to a case in which each duopolist expects exactly the output planned by the other. If either duopolist expected a different output from what the other planned, the outcome would not be an equilibrium.
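The regret point can be checked directly. The sketch below reuses an assumed linear inverse demand P = a − bQ and constant marginal cost c, with illustrative numbers of my own choosing, and compares the price each duopolist anticipated with the price actually realized:

```python
# If a duopolist's expectation of its rival's output is wrong, the
# realized price differs from the price its output decision was
# premised on. All parameter values are assumptions for this sketch.
a, b, c = 100.0, 1.0, 10.0

def best_response(expected_rival_q):
    return max((a - c - b * expected_rival_q) / (2 * b), 0.0)

def realized_vs_expected_price(e1, e2):
    # e1 is firm 1's expectation of firm 2's output, and vice versa.
    # Each firm plans its output as the best response to its own
    # expectation and anticipates the price implied by that expectation.
    q1, q2 = best_response(e1), best_response(e2)
    realized = a - b * (q1 + q2)
    anticipated_1 = a - b * (q1 + e1)
    anticipated_2 = a - b * (q2 + e2)
    return realized, anticipated_1, anticipated_2

# Consistent expectations: each expects one-third of the competitive
# output (a - c) / b. The realized price matches both anticipations.
q_eq = (a - c) / (3 * b)
print(realized_vs_expected_price(q_eq, q_eq))

# Inconsistent expectations: firm 1 underestimates firm 2's output,
# produces too much, and both firms are surprised by the price.
print(realized_vs_expected_price(0.5 * q_eq, q_eq))
```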

We can now recognize that Cournot’s solution anticipated John Nash’s concept of an equilibrium strategy, in which each player chooses a strategy that is optimal given his expectation of what the other player’s strategy will be. A Nash equilibrium corresponds to a fixed point at which each player chooses an optimal strategy based on the correct expectation of what the other player’s strategy will be. There may be more than one Nash equilibrium in many games. For example, rather than base their decisions on an expectation of the quantity choice of the other duopolist, the two duopolists could base their decisions on an expectation of what price the other duopolist would set. In the constant-cost case, this choice of strategies would lead to the competitive output, because both duopolists would conclude that the optimal strategy of the other duopolist would be to charge a price just sufficient to cover his marginal cost. This was the alternative oligopoly model suggested by another French economist, J. L. F. Bertrand. Of course, there is a lot more to be said about how oligopolists strategize than these two models capture, and about the conditions under which one or the other model is the more appropriate. I just want to observe that assumptions about expectations are crucial to how we analyze market equilibrium, and that the importance of these assumptions for understanding market behavior has been recognized for a very long time.
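The Bertrand logic can also be sketched as an undercutting dynamic. The tick size and all numbers below are assumptions made for illustration:

```python
# Illustrative Bertrand dynamic: with identical constant marginal costs,
# each duopolist's best response to any rival price above marginal cost
# is to undercut it slightly, so prices are driven down to marginal cost.
c = 10.0     # common constant marginal cost
tick = 0.01  # smallest feasible price cut (an assumption of the sketch)

def undercut(rival_price):
    # Undercut the rival while its price exceeds marginal cost; never
    # price below cost, since that would guarantee losses.
    return max(rival_price - tick, c) if rival_price > c else c

p1, p2 = 45.0, 40.0
for _ in range(5000):
    p1 = undercut(p2)
    p2 = undercut(p1)

print(p1, p2)  # both prices end up at marginal cost: 10.0
```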

But from a macroeconomic perspective, the important point is that expected prices become the critical equilibrating variable in the theory of general equilibrium and in macroeconomics in general. Single-period models of equilibrium, including general-equilibrium models that are formally intertemporal, but in which all trades are executed in the initial period at known prices in a complete array of markets determining all future economic activity, are completely sterile and useless for macroeconomics except as a stepping stone to analyzing the implications of imperfect forecasts of future prices. If we want to think about general equilibrium in a useful macroeconomic context, we have to think about a general-equilibrium system in which agents make plans about consumption and production over time based on only the vaguest conjectures about what future conditions will be like when the various interconnected stages of their plans will be executed.

Unlike the full Arrow-Debreu system of complete markets, a general-equilibrium system with incomplete markets cannot be equilibrated, even in principle, by price adjustments in the incomplete set of present markets. Equilibration depends on the consistency of expected prices with equilibrium. If equilibrium is characterized by a fixed point, it must be the fixed point of a mapping that takes the set of vectors of current and expected prices onto itself. That means that expected future prices are as much equilibrating variables as current market prices. But expected future prices exist only in the minds of the agents; they are not directly subject to change by market forces in the way that prices in actual markets are. If the equilibrating tendencies of market prices in a system of incomplete markets are very far from completely effective, the equilibrating tendencies of expected future prices may not only be non-existent, but may even be disequilibrating rather than equilibrating.

The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, and the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently or whether an entire economy is operating close to its potential, to insist as a matter of principle that the rational-expectations assumption must be made, to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values, allowing an economy to operate at or near its optimal growth path, is simply, as I have often said, an exercise in circular reasoning and question begging.

Making Sense of Rational Expectations

Almost two months ago I wrote a provocatively titled post about rational expectations, in which I argued against the idea that it is useful to make the rational-expectations assumption in developing a theory of business cycles. The title of the post was probably what led to the start of a thread about my post on the econjobrumors blog, the tenor of which can be divined from the contribution of one commenter: “Who on earth is Glasner?” But, aside from the attention I received on econjobrumors, I also elicited a response from Scott Sumner:

David Glasner has a post criticizing the rational expectations modeling assumption in economics:

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — the expectations of both people with differing expectations cannot be simultaneously realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that the expectations of those people who were correct were also not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.

Beyond that little problem, Mason raises the further problem that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational.

I see two mistakes here. Not everyone must have identical expectations in a world of rational expectations. Now it’s true that there are ratex models where people are simply assumed to have identical expectations, such as representative agent models, but that modeling assumption has nothing to do with rational expectations, per se.

In fact, the rational expectations hypothesis suggests that people form optimal forecasts based on all publicly available information. One of the most famous rational expectations models was Robert Lucas’s model of monetary misperceptions, where people observed local conditions before national data was available. In that model, each agent sees different local prices, and thus forms different expectations about aggregate demand at the national level.

It is true that not all expectations must be identical in a world of rational expectations. The question is whether those expectations are compatible with the equilibrium of the model in which those expectations are embedded. If any of those expectations are incompatible with the equilibrium of the model, then, insofar as agents’ decisions are based on their expectations, the model will not arrive at an equilibrium solution. Lucas’s monetary-misperception model was a clever effort to tweak the rational-expectations assumption just enough to allow for a temporary disequilibrium. But the attempt was a failure, because Lucas could only generate a one-period deviation from equilibrium, which was too little for the model to pose as a plausible account of a business cycle. That gave Kydland and Prescott the idea to discard Lucas’s monetary-misperceptions idea and write their paper on real business cycles without adulterating the rational-expectations assumption.

Here’s what Muth said about the rational expectations assumption in the paper in which he introduced “rational expectations” as a modeling strategy.

In order to explain these phenomena, I should like to suggest that expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory. At the risk of confusing this purely descriptive hypothesis with a pronouncement as to what firms ought to do, we call such expectations “rational.”

The hypothesis can be rephrased a little more precisely as follows: that expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes).

The hypothesis asserts three things: (1) Information is scarce, and the economic system generally does not waste it. (2) The way expectations are formed depends specifically on the structure of the relevant system describing the economy. (3) A “public prediction,” in the sense of Grunberg and Modigliani, will have no substantial effect on the operation of the economic system (unless it is based on inside information).

It does not assert that the scratch work of entrepreneurs resembles the system of equations in any way; nor does it state that predictions of entrepreneurs are perfect or that their expectations are all the same. For purposes of analysis, we shall use a specialized form of the hypothesis. In particular, we assume: 1. The random disturbances are normally distributed. 2. Certainty equivalents exist for the variables to be predicted. 3. The equations of the system, including the expectations formulas, are linear. These assumptions are not quite so strong as may appear at first because any one of them virtually implies the other two.

It seems to me that Muth was confused about what the rational-expectations assumption entails. He asserts that the expectations of entrepreneurs — and presumably that applies to other economic agents as well insofar as their decisions are influenced by their expectations of the future – should be assumed to be exactly what the relevant economic model predicts the expected outcomes to be. If so, I don’t see how it can be maintained that expectations could diverge from each other. If what entrepreneurs produce next period depends on the price they expect next period, then how is it possible that the total supply produced next period is independent of the distribution of expectations as long as the errors are normally distributed and the mean of the distribution corresponds to the equilibrium of the model? This could only be true if the output produced by each entrepreneur was a linear function of the expected price and all entrepreneurs had identical marginal costs or if the distribution of marginal costs was uncorrelated with the distribution of expectations. The linearity assumption is hardly compelling unless you assume that the system is in equilibrium and all changes are small. But making that assumption is just another form of question begging.

It’s also wrong to say:

But if expectations are not realized, expectations were not rational.

Scott is right. What I said was wrong. What I ought to have said is: “But if expectations (being divergent) could not have been realized, those expectations were not rational.”

Suppose I am watching the game of roulette. I form the expectation that the ball will not land on one of the two green squares. Now suppose it does. Was my expectation rational? I’d say yes—there was only a 2/38 chance of the ball landing on a green square. It’s true that I lacked perfect foresight, but my expectation was rational, given what I knew at the time.

I don’t think that Scott’s response is compelling, because you can’t judge the rationality of an expectation in isolation; it has to be judged in a broader context. If you are forming your expectation about where the ball will fall in a game of roulette, the rationality of that expectation can only be evaluated in the context of how much you should be willing to bet that the ball will fall on one of the two green squares, and that requires knowledge of what the payoff would be if the ball did fall on one of those two squares. And that would mean that someone else is involved in the game and would be taking an opposite position. The rationality of expectations could only be judged in the context of what everyone participating in the game was expecting and what the payoffs and penalties were for each participant.

In 2006, it might have been rational to forecast that housing prices would not crash. If you lived in many countries, your forecast would have been correct. If you happened to live in Ireland or the US, your forecast would have been incorrect. But it might well have been a rational forecast in all countries.

The rationality of a forecast can’t be assessed in isolation. A forecast is rational if it is consistent with other forecasts, so that it, along with the other forecasts, could potentially be realized. As a commenter on Scott’s blog observed, a rational expectation is an expectation that, at the time the forecast is made, is consistent with the relevant model. The forecast of housing prices may turn out to be incorrect, but the forecast might still have been rational when it was made if the forecast of prices was consistent with what the relevant model would have predicted. The failure of the forecast to be realized could mean either that the forecast was not consistent with the model, or that between the time of the forecast and the time of its realization, new information, not available at the time of the forecast, came to light and changed the prediction of the relevant model.

The need for context in assessing the rationality of expectations was wonderfully described by Thomas Schelling in his classic analysis of cooperative games.

One may or may not agree with any particular hypothesis as to how a bargainer’s expectations are formed either in the bargaining process or before it and either by the bargaining itself or by other forces. But it does seem clear that the outcome of a bargaining process is to be described most immediately, most straightforwardly, and most empirically, in terms of some phenomenon of stable and convergent expectations. Whether one agrees explicitly to a bargain, or agrees tacitly, or accepts by default, he must if he has his wits about him, expect that he could do no better and recognize that the other party must reciprocate the feeling. Thus, the fact of an outcome, which is simply a coordinated choice, should be analytically characterized by the notion of convergent expectations.

The intuitive formulation, or even a careful formulation in psychological terms, of what it is that a rational player expects in relation to another rational player in the “pure” bargaining game, poses a problem in sheer scientific description. Both players, being rational, must recognize that the only kind of “rational” expectation they can have is a fully shared expectation of an outcome. It is not quite accurate – as a description of a psychological phenomenon – to say that one expects the second to concede something; the second’s readiness to concede or to accept is only an expression of what he expects the first to accept or to concede, which in turn is what he expects the first to expect the second to expect the first to expect, and so on. To avoid an “ad infinitum” in the description process, we have to say that both sense a shared expectation of an outcome; one’s expectation is a belief that both identify the outcome as being indicated by the situation, hence as virtually inevitable. Both players, in effect, accept a common authority – the power of the game to dictate its own solution through their intellectual capacity to perceive it – and what they “expect” is that they both perceive the same solution.

Viewed in this way, the intellectual process of arriving at “rational expectations” in the full-communication “pure” bargaining game is virtually identical with the intellectual process of arriving at a coordinated choice in the tacit game. The actual solutions might be different because the game contexts might be different, with different suggestive details; but the intellectual nature of the two solutions seems virtually identical since both depend on an agreement that is reached by tacit consent. This is true because the explicit agreement that is reached in the full communication game corresponds to the a priori expectations that were reached (or in theory could have been reached) jointly but independently by the two players before the bargaining started. And it is a tacit “agreement” in the sense that both can hold confident rational expectation only if both are aware that both accept the indicated solution in advance as the outcome that they both know they both expect.

So I agree that rational expectations can simply mean that agents are forming expectations about the future incorporating, as best they can, all the knowledge available to them. This is a weak common-sense interpretation of rational expectations that I think is what Scott Sumner has in mind when he uses the term “rational expectations.” But in the context of formal modelling, rational expectations has a more restrictive meaning, which is that, given all the information available, the expectations of all agents in the model must correspond to what the model itself predicts given that information. Even though Muth himself and others have tried to avoid the inference that all agents must have expectations that match the solution of the model, given the information underlying the model, the assumptions under which agents could hold divergent expectations are, in their own way, just as restrictive as the assumption that agents hold convergent expectations.

In a way, the disconnect between a common-sense understanding of what “rational expectations” means and what “rational expectations” means in the context of formal macroeconomic models is analogous to the disconnect between what “competition” means in normal discourse and what “competition” (and especially “perfect competition”) means in the context of formal microeconomic models. Much of the rivalrous behavior between competitors that we think of as being essential aspects of competition and the competitive process is simply ruled out by the formal assumption of perfect competition.

Rational Expectations, or, The Road to Incoherence

J. W. Mason left a very nice comment on my recent post about Paul Romer’s now-famous essay on macroeconomics, a comment now embedded in his interesting and insightful blog post on the Romer essay. As I wrote in my reply to Mason’s comment, I really liked the way he framed his point about rational expectations and intertemporal equilibrium. Sometimes when you see a familiar idea expressed in a particular way, the novelty of the expression, even though it’s not substantively different from other ways of expressing the idea, triggers a new insight. And that’s what I think happened in my own mind as I read Mason’s comment. Here’s what he wrote:

David Glasner’s interesting comment on Romer makes in passing a point that’s bugged me for years — that you can’t talk about transitions from one intertemporal equilibrium to another, there’s only the one. Or equivalently, you can’t have a model with rational expectations and then talk about what happens if there’s a “shock.” To say there is a shock in one period, is just to say that expectations in the previous period were wrong. Glasner:

the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

So the further point that I would make, after reading Mason’s comment, is just this. For an intertemporal equilibrium to exist, there must be a complete set of markets for all future periods and contingent states of the world, or, alternatively, there must be correct expectations shared by all agents about all future prices and the probability that each contingent future state of the world will be realized. By the way, if you think about it for a moment, the notion that probabilities can be assigned to every contingent future state of the world is mind-bogglingly unrealistic, because the number of contingent states must rapidly become uncountable, every single contingency itself giving rise to further potential contingencies, and so on and on and on. But forget about that little complication. What intertemporal equilibrium requires is that all expectations of all individuals be in agreement – or at least not be inconsistent, some agents possibly having an incomplete set of expectations about future prices and future states of the world. If individuals differ in their expectations, then, since their planned future purchases and sales are based on what they expect future prices to be when the time comes for those transactions to be carried out, individuals will not be able to execute their plans as intended when at least one of them finds that actual prices are different from what they had been expected to be.

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — the expectations of both people with differing expectations cannot be simultaneously realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that the expectations of those people who were correct were also not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.
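The fixed-point character of rational expectations can be illustrated with a minimal numerical sketch in the spirit of Muth’s original cobweb setting (all parameters here are hypothetical, chosen only for illustration):

```python
# A minimal sketch of rational expectations as a fixed point.
# Demand: q_d = a - b*p; supply depends on the expected price: q_s = c + d*E[p].
# Market clearing gives p = (a - c - d*E[p]) / b, so a rational expectation
# must satisfy E[p] = p, i.e. it is a fixed point of that mapping.

a, b, c, d = 10.0, 2.0, 1.0, 1.0   # hypothetical demand/supply parameters

def realized_price(expected_price):
    """Price that clears the market, given the price producers expected."""
    return (a - c - d * expected_price) / b

# Iterate expectations: agents revise toward the price their own forecast induces.
expectation = 0.0
for _ in range(50):
    expectation = realized_price(expectation)

fixed_point = (a - c) / (b + d)    # analytic rational-expectations price
print(expectation, fixed_point)    # both approximately 3.0: the iteration finds the fixed point
```

The iteration converges here only because the hypothetical parameters make the mapping a contraction; with a steeper supply response the same revision process would oscillate explosively, which is the numerical counterpart of the point in the text that the fixed point cannot be attained unless everyone shares the equilibrium expectations.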

Beyond that little problem, Mason raises the further problem that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational. So the whole New Classical modeling strategy of identifying shocks to a system in rational-expectations equilibrium, and “predicting” the responses to these shocks as if they had been anticipated, is self-contradictory and incoherent.

Price Stickiness Is a Symptom not a Cause

In my recent post about Nick Rowe and the law of reflux, I mentioned in passing that I might write a post soon about price stickiness. The reason that I thought it would be worthwhile writing again about price stickiness (which I have written about before here and here) is that Nick, following a broad consensus among economists, identifies price stickiness as a critical cause of fluctuations in employment and income. Here’s how Nick phrased it:

An excess demand for land is observed in the land market. An excess demand for bonds is observed in the bond market. An excess demand for equities is observed in the equity market. An excess demand for money is observed in any market. If some prices adjust quickly enough to clear their market, but other prices are sticky so their markets don’t always clear, we may observe an excess demand for money as an excess supply of goods in those sticky-price markets, but the prices in flexible-price markets will still be affected by the excess demand for money.

Then a bit later, Nick continues:

If individuals want to save in the form of money, they won’t collectively be able to if the stock of money does not increase. There will be an excess demand for money in all the money markets, except those where the price of the non-money thing in that market is flexible and adjusts to clear that market. In the sticky-price markets there will be nothing an individual can do if he wants to buy more money but nobody else wants to sell more. But in those same sticky-price markets any individual can always sell less money, regardless of what any other individual wants to do. Nobody can stop you selling less money, if that’s what you want to do.

Unable to increase the flow of money into their portfolios, each individual reduces the flow of money out of his portfolio. Demand falls in sticky-price markets, quantity traded is determined by the short side of the market (Q=min{Qd,Qs}), so trade falls, and some trades that would be mutually advantageous in a barter or Walrasian economy even at those sticky prices don’t get made, and there’s a recession. Since money is used for trade, the demand for money depends on the volume of trade. When trade falls the flow of money falls too, and the stock demand for money falls, until the representative individual chooses a flow of money out of his portfolio equal to the flow in. He wants to increase the flow in, but cannot, since other individuals don’t want to increase their flows out.
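The short-side rule Nick invokes, Q = min{Qd, Qs}, can be illustrated with a toy calculation (the quantities are hypothetical):

```python
# Toy illustration of the short-side rule: at a sticky (non-clearing) price,
# the quantity actually traded is set by whichever side of the market is smaller.

def quantity_traded(q_demanded, q_supplied):
    return min(q_demanded, q_supplied)

# Before the fall in money-constrained spending, demand and supply both equal 100:
print(quantity_traded(100, 100))  # 100 units trade

# Demand falls to 80 at the unchanged sticky price, while supply stays at 100:
print(quantity_traded(80, 100))   # only 80 trade; 20 mutually advantageous trades are lost
```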

The role of price stickiness or price rigidity in accounting for involuntary unemployment is an old and complicated story. If you go back and read what economists before Keynes had to say about the Great Depression, you will find that there was considerable agreement that, in principle, if workers were willing to accept a large enough cut in their wages, they could all get reemployed. That was a proposition accepted by Hawtrey and by Keynes. However, they did not believe that wage cutting was a good way of restoring full employment, because the process of wage cutting would be brutal economically and divisive – even self-destructive – politically. So they favored a policy of reflation that would facilitate and hasten the process of recovery. However, there were also those economists, e.g., Ludwig von Mises and the young Lionel Robbins in his book The Great Depression (which he had the good sense to disavow later in life), who attributed high unemployment to an unwillingness of workers and labor unions to accept wage cuts and to various other legal barriers preventing the price mechanism from operating to restore equilibrium in the normal way that prices adjust to equate the amount demanded with the amount supplied in each and every market.

But in the General Theory, Keynes argued that if you believed in the standard story told by microeconomics about how prices constantly adjust to equate demand and supply and maintain equilibrium, then maybe you should be consistent and follow the Mises/Robbins story and just wait for the price mechanism to perform its magic, rather than support counter-cyclical monetary and fiscal policies. So Keynes then argued that there is actually something wrong with the standard microeconomic story; price adjustments can’t ensure that overall economic equilibrium is restored, because the level of employment depends on aggregate demand, and if aggregate demand is insufficient, wage cutting won’t increase – and, more likely, would reduce — aggregate demand, so that no amount of wage-cutting would succeed in reducing unemployment.

To those upholding the idea that the price system is a stable self-regulating system or process for coordinating a decentralized market economy, in other words to those upholding microeconomic orthodoxy as developed in any of the various strands of the neoclassical paradigm, Keynes’s argument was deeply disturbing and subversive.

In one of the first of his many important publications, “Liquidity Preference and the Theory of Money and Interest,” Franco Modigliani argued that, despite Keynes’s attempt to prove that unemployment could persist even if prices and wages were perfectly flexible, the assumption of wage rigidity was in fact essential to arrive at Keynes’s result that there could be an equilibrium with involuntary unemployment. Modigliani did so by positing a model in which the supply of labor is a function of real wages. It was not hard for Modigliani to show that in such a model an equilibrium with unemployment required a rigid real wage.

Modigliani was not in favor of relying on price flexibility instead of counter-cyclical policy to solve the problem of involuntary unemployment; he just argued that the rationale for such policies had to be that prices and wages were not adjusting immediately to clear markets. But the inference that Modigliani drew from that analysis — that price flexibility would lead to an equilibrium with full employment — was not valid, there being no guarantee that price adjustments would necessarily lead to equilibrium, unless all prices and wages instantaneously adjusted to their new equilibrium in response to any deviation from a pre-existing equilibrium.

All the theory of general equilibrium tells us is that if all trading takes place at the equilibrium set of prices, the economy will be in equilibrium as long as the underlying “fundamentals” of the economy do not change. But in a decentralized economy, no one knows what the equilibrium prices are, and the equilibrium price in each market depends in principle on what the equilibrium prices are in every other market. So unless the price in every market is an equilibrium price, none of the markets is necessarily in equilibrium.

Now it may well be that if all prices are close to equilibrium, the small changes will keep moving the economy closer and closer to equilibrium, so that the adjustment process will converge. But that is just conjecture, there is no proof showing the conditions under which a simple rule that says raise the price in any market with an excess demand and decrease the price in any market with an excess supply will in fact lead to the convergence of the whole system to equilibrium. Even in a Walrasian tatonnement system, in which no trading at disequilibrium prices is allowed, there is no proof that the adjustment process will eventually lead to the discovery of the equilibrium price vector. If trading at disequilibrium prices is allowed, tatonnement is hopeless.
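The simple adjustment rule in question can be sketched as follows. The two-good excess-demand function below is hypothetical and deliberately well-behaved, so the rule converges; Scarf famously constructed exchange economies in which this very same rule cycles forever and never finds the equilibrium:

```python
# A minimal sketch of the Walrasian tatonnement rule described in the text:
# raise the price of a good in excess demand, lower it in excess supply.

def excess_demand(p):
    """Excess demand for good 1 at relative price p (good 2 is the numeraire).
    Hypothetical linear form with equilibrium at p* = 3."""
    return 6.0 - 2.0 * p

p, step = 1.0, 0.2
for _ in range(100):
    p += step * excess_demand(p)  # the auctioneer's adjustment rule

print(p)  # converges to 3.0 -- but only because this economy is well-behaved
```

Even in this convergent case, a larger step size would make the rule overshoot and diverge; and nothing in the rule itself tells the auctioneer, or real traders, whether they are in the well-behaved case or the Scarf case.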

So the real problem is not that prices are sticky but that trading takes place at disequilibrium prices and there is no mechanism by which to discover what the equilibrium prices are. Modern macroeconomics solves this problem, in its characteristic fashion, by assuming it away by insisting that expectations are “rational.”

Economists have allowed themselves to make this absurd assumption because they are in the habit of thinking that the simple rule of raising price when there is an excess demand and reducing the price when there is an excess supply inevitably causes convergence to equilibrium. This habitual way of thinking has been inculcated in economists by the intense, and largely beneficial, training they have been subjected to in Marshallian partial-equilibrium analysis, which is built on the assumption that every market can be analyzed in isolation from every other market. But that analytic approach can only be justified under a very restrictive set of assumptions. In particular it is assumed that any single market under consideration is small relative to the whole economy, so that its repercussions on other markets can be ignored, and that every other market is in equilibrium, so that there are no changes from other markets that are impinging on the equilibrium in the market under consideration.

Neither of these assumptions is strictly true in theory, so all partial equilibrium analysis involves a certain amount of hand-waving. Nor, even if we wanted to be careful and precise, could we actually dispense with the hand-waving; the hand-waving is built into the analysis, and can’t be avoided. I have often referred to these assumptions required for the partial-equilibrium analysis — the bread and butter microeconomic analysis of Econ 101 — to be valid as the macroeconomic foundations of microeconomics, by which I mean that the casual assumption that microeconomics somehow has a privileged and secure theoretical position compared to macroeconomics and that macroeconomic propositions are only valid insofar as they can be reduced to more basic microeconomic principles is entirely unjustified. That doesn’t mean that we shouldn’t care about reconciling macroeconomics with microeconomics; it just means that the validity of propositions in macroeconomics is not necessarily contingent on their being derived from microeconomics. Reducing macroeconomics to microeconomics should be an analytical challenge, not a methodological imperative.

So the assumption, derived from Modigliani’s 1944 paper, that “price stickiness” is what prevents an economic system from moving automatically to a new equilibrium after being subjected to some shock or disturbance, reflects either a misunderstanding or a semantic confusion. It is not price stickiness that prevents the system from moving toward equilibrium; it is the fact that individuals are engaging in transactions at disequilibrium prices. We simply do not know how to compare different sets of non-equilibrium prices to determine which set of non-equilibrium prices will move the economy further from or closer to equilibrium. Our experience and our intuition suggest that in some neighborhood of equilibrium, an economy can absorb moderate shocks without going into a cumulative contraction. But all we really know from theory is that any trading at any set of non-equilibrium prices can trigger an economic contraction, and, once it starts, a contraction may become cumulative.

It is also a mistake to assume that in a world of incomplete markets, the missing markets being markets for the delivery of goods and the provision of services in the future, any set of price adjustments, however large, could by themselves ensure that equilibrium is restored. With an incomplete set of markets, economic agents base their decisions not just on actual prices in the existing markets; they base their decisions on prices for future goods and services which can only be guessed at. And it is only when individual expectations of those future prices are mutually consistent that equilibrium obtains. With inconsistent expectations of future prices, the adjustments in current prices in the markets that exist for currently supplied goods and services that in some sense equate amounts demanded and supplied, lead to a (temporary) equilibrium that is not efficient, one that could be associated with high unemployment and unused capacity even though technically existing markets are clearing.

So that’s why I regard the term “sticky prices” and other similar terms as very unhelpful and misleading; they are a kind of mental crutch that economists are too ready to rely on as a substitute for thinking about what are the actual causes of economic breakdowns, crises, recessions, and depressions. Most of all, they represent an uncritical transfer of partial-equilibrium microeconomic thinking to a problem that requires a system-wide macroeconomic approach. That approach should not ignore microeconomic reasoning, but it has to transcend both partial-equilibrium supply-demand analysis and the mathematics of intertemporal optimization.

What’s Wrong with EMH?

Scott Sumner wrote a post commenting on my previous post about Paul Krugman’s column in the New York Times last Friday. I found Krugman’s column really interesting for the amount of real economic content he managed to pack into an 800-word column written to help non-economists understand recent fluctuations in the stock market. Part of what I was doing in my post was to offer my own criticism of the efficient market hypothesis (EMH), of which Krugman is probably not an enthusiastic adherent either. Nevertheless, both Krugman and I recognize that EMH serves as a useful way to discipline how we think about fluctuating stock prices.

Here is a passage of Krugman’s that I commented on:

But why are long-term interest rates so low? As I argued in my last column, the answer is basically weakness in investment spending, despite low short-term interest rates, which suggests that those rates will have to stay low for a long time.

My comment was:

Again, this seems inexactly worded. Weakness in investment spending is a symptom not a cause, so we are back to where we started from. At the margin, there are no attractive investment opportunities.

Scott had this to say about my comment:

David is certainly right that Krugman’s statement is “inexactly worded”, but I’m also a bit confused by his criticism. Certainly “weakness in investment spending” is not a “symptom” of low interest rates, which is how his comment reads in context. Rather I think David meant that the shift in the investment schedule is a symptom of a low level of AD, which is a very reasonable argument, and one he develops later in the post. But that’s just a quibble about wording. More substantively, I’m persuaded by Krugman’s argument that weak investment is about more than just AD; the modern information economy (with, I would add, a slow-growing working age population) just doesn’t generate as much investment spending as before, even at full employment.

Just to be clear, what I was trying to say was that investment spending is determined by “fundamentals,” i.e., expectations about future conditions (including what demand for firms’ output will be, what competing firms are planning to do, what cost conditions will be, and a whole range of other considerations). It is the combination of all those real and psychological factors that determines the projected returns from undertaking an investment, and those expected returns must be compared with the cost of capital to reach a final decision about which projects will be undertaken, thereby giving rise to actual investment spending. So I certainly did not mean to say that weakness in investment spending is a symptom of low interest rates. I meant that it is a symptom of the entire economic environment that, depending on the level of interest rates, makes specific investment projects seem attractive or unattractive. Actually, I don’t think that there is any real disagreement between Scott and me on this particular point; I just mention the point to avoid possible misunderstandings.

But the differences between Scott and me about the EMH seem to be substantive. Scott quotes this passage from my previous post:

The efficient market hypothesis (EMH) is at best misleading in positing that market prices are determined by solid fundamentals. What does it mean for fundamentals to be solid? It means that the fundamentals remain what they are independent of what people think they are. But if fundamentals themselves depend on opinions, the idea that values are determined by fundamentals is a snare and a delusion.

Scott responded as follows:

I don’t think it’s correct to say the EMH is based on “solid fundamentals”.  Rather, AFAIK, the EMH says that asset prices are based on rational expectations of future fundamentals, what David calls “opinions”.  Thus when David tries to replace the EMH view of fundamentals with something more reasonable, he ends up with the actual EMH, as envisioned by people like Eugene Fama.  Or am I missing something?

In fairness, David also rejects rational expectations, so he would not accept even my version of the EMH, but I think he’s too quick to dismiss the EMH as being obviously wrong. Lots of people who are much smarter than me believe in the EMH, and if there was an obvious flaw I think it would have been discovered by now.

I accept Scott’s correction that EMH is based on the rational expectation of future fundamentals, but I don’t think that the distinction is as meaningful as Scott does. The problem is that in a typical rational-expectations model, the fundamentals are given and don’t change, so that fundamentals are actually static. The seemingly non-static property of a rational-expectations model is achieved by introducing stochastic parameters with known means and variances, so that the ultimate realizations of stochastic variables within the model are not known in advance. However, the rational expectations of all stochastic variables are unbiased, and they are – in some sense — the best expectations possible given the underlying stochastic nature of the variables. But given that stochastic structure, current asset prices reflect the actual – and unchanging — fundamentals, the stochastic elements in the model being fully reflected in asset prices today. Prices may change ex post, but, conditional on the realizations of the stochastic variables (whose probability distributions are assumed to have been known in advance), those changes are fully anticipated. Thus, in a rational-expectations equilibrium, causation still runs from fundamentals to expectations.

The problem with rational expectations is not a flaw in logic. In fact, the importance of rational expectations is that it is a very important logical test for the coherence of a model. If a model cannot be solved for a rational-expectations equilibrium, it suffers from a basic lack of coherence. Something is basically wrong with a model in which the expectation of the equilibrium values predicted by the model does not lead to their realization. But a logical property of the model is not the same as a positive theory of how expectations are formed and how they evolve. In the real world, knowledge is constantly growing, and new knowledge implies that the fundamentals underlying the economy must be changing as knowledge grows. The future fundamentals that will determine the future prices of a future economy cannot be rationally expected in the present, because we have no way of specifying probability distributions corresponding to dynamic evolving systems.

If future fundamentals are logically unknowable in the present — even in a probabilistic sense — because we can’t predict what our future knowledge will be (if we could, future knowledge would already be known, making it present knowledge), then expectations of the future can’t possibly be rational, because we never have the knowledge that would be necessary to form rational expectations. And so I can’t accept Scott’s assertion that asset prices are based on rational expectations of future fundamentals. It seems to me that the causation goes in the other direction as well: future fundamentals will be based, at least in part, on current expectations.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typically charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph. D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.
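For readers unfamiliar with it, the consumption Euler equation Jason mentions is conventionally written, for a time-separable utility function u, discount factor β, and gross return 1 + r, as:

```latex
u'(c_t) = \beta \, \mathbb{E}_t\!\left[(1 + r_{t+1})\, u'(c_{t+1})\right]
```

At an optimum, the marginal utility sacrificed by consuming one unit less today just equals the expected discounted marginal utility of the extra future consumption that the saved unit finances — the indifference condition Jason describes.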

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity,
cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning (petitio principii).

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
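The comparative-statics logic can be made concrete with a toy example. This is my own illustration, not one from Samuelson or Lucas: a linear supply-and-demand market with a per-unit tax, where the “meaningful theorem” is just the sign of the derivative of the equilibrium price with respect to the tax – a sign that is only defined on the premise that the market clears both before and after the parameter change.

```python
import sympy as sp

# Toy comparative statics (my own illustration, not an example from the text):
# linear demand Qd = a - b*p and supply with a per-unit tax t, Qs = c + d*(p - t).
p, t = sp.symbols('p t')
a, b, c, d = sp.symbols('a b c d', positive=True)

demand = a - b * p
supply = c + d * (p - t)

# Step 1: assume the market clears, and solve for the equilibrium price.
p_star = sp.solve(sp.Eq(demand, supply), p)[0]   # (a - c + d*t)/(b + d)

# Step 2: the "meaningful theorem" is the sign of d(p*)/dt -- a statement that
# only makes sense if equilibrium holds both before and after the tax change.
dp_dt = sp.simplify(sp.diff(p_star, t))
print(dp_dt)   # -> d/(b + d): unambiguously positive, so the tax raises p*
```

Note that nothing in the calculation says how the market gets from the old equilibrium price to the new one; the derivative compares two equilibria and is silent about the path between them, which is exactly the limitation at issue.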

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which consumers can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.
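A back-of-the-envelope version of the $500k-versus-$100k example may help. The log-utility two-period setup, the 5% interest rate, and the discount factor are my own assumptions, added only to make the plan computable; the point is that the “optimal” plan is pinned down by conjectured, not actual, lifetime wealth.

```python
# A sketch of the $500k vs. $100k example. The log-utility two-period setup,
# the 5% interest rate and the discount factor beta are my own assumptions,
# added only to make the consumer's plan computable.
def planned_c1(m1, m2_expected, r, beta=0.95):
    """Optimal first-period consumption under log utility: the consumer spends
    a fixed share of *conjectured* lifetime wealth, W = m1 + m2_expected/(1+r)."""
    wealth = m1 + m2_expected / (1 + r)
    return wealth / (1 + beta)

r = 0.05
c1_optimistic = planned_c1(m1=100, m2_expected=500, r=r)  # expects $500k next year
c1_realistic = planned_c1(m1=100, m2_expected=100, r=r)   # expects $100k next year

# The two "optimal" plans differ by a factor of roughly three; if the optimist's
# income turns out to be $100k, the committed spending cannot be financed.
print(round(c1_optimistic, 1), round(c1_realistic, 1))  # 295.5 100.1
```

The same first-order conditions generate both plans; only the expectation differs. So calling either plan “optimal” is meaningful only relative to the guess about next year’s income, which is the sense in which the intertemporal budget constraint is conjectural.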

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – like now, for instance – that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had said before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review “Mathiness in the Theory of Economic Growth,” an attack on two papers, one by McGrattan and Prescott and the other by Lucas and Moll on aspects of growth theory. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what he calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the apparent revolution of the sun around the earth, and the fact that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which we needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomics time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve people’s lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In the 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.
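Arrow’s paradox is easy to see in a toy simulation. This is my own sketch, not Arrow’s model: the theory delivers the market-clearing price, but to reach it from a disequilibrium price one has to bolt on an adjustment rule – the Walrasian auctioneer’s tâtonnement – that no price-taking agent in the model actually executes.

```python
# A toy version of Arrow's paradox (my own sketch, not Arrow's model). With
# linear demand (10 - p) and supply (2 + p), the theory pins down the clearing
# price p* = 4, but says nothing about who moves the price toward it: every
# agent is a price taker. The classic patch is the Walrasian auctioneer's
# tatonnement rule, p_next = p + k * excess_demand(p), imposed from outside.
def excess_demand(p):
    return (10.0 - p) - (2.0 + p)   # demand minus supply

def tatonnement(p0, k=0.1, steps=200):
    p = p0
    for _ in range(steps):
        p += k * excess_demand(p)   # the auctioneer adjusts; no agent does
    return p

print(round(tatonnement(p0=1.0), 4))  # 4.0 -- converges to the clearing price
```

The adjustment coefficient k and the iteration count are free parameters with no counterpart inside the equilibrium theory, which is precisely Arrow’s complaint: the dynamics have to be supplied from outside the model.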

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Roger and Me

Last week Roger Farmer wrote a post elaborating on a comment that he had left to my post on Price Stickiness and Macroeconomics. Roger’s comment is aimed at this passage from my post:

[A]lthough price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

Here’s Roger’s comment:

I have a somewhat different take. I like Lucas’ insistence on equilibrium at every point in time as long as we recognize two facts. 1. There is a continuum of equilibria, both dynamic and steady state and 2. Almost all of them are Pareto suboptimal.

I made the following reply to Roger’s comment:

Roger, I think equilibrium at every point in time is ok if we distinguish between temporary and full equilibrium, but I don’t see how there can be a continuum of full equilibria when agents are making all kinds of long-term commitments by investing in specific capital. Having said that, I certainly agree with you that expectational shifts are very important in determining which equilibrium the economy winds up at.

To which Roger responded:

I am comfortable with temporary equilibrium as the guiding principle, as long as the equilibrium in each period is well defined. By that, I mean that, taking expectations as given in each period, each market clears according to some well defined principle. In classical models, that principle is the equality of demand and supply in a Walrasian auction. I do not think that is the right equilibrium concept.

Roger didn’t explain – at least not here, though he probably has elsewhere – exactly why he thinks that equality of demand and supply in a Walrasian auction is not the right equilibrium concept. I would be interested in hearing his reasons; perhaps he will clarify his thinking for me.

Hicks wanted to separate ‘fix price markets’ from ‘flex price markets’. I don’t think that is the right equilibrium concept either. I prefer to use competitive search equilibrium for the labor market. Search equilibrium leads to indeterminacy because there are not enough prices for the inputs to the search process. Classical search theory closes that gap with an arbitrary Nash bargaining weight. I prefer to close it by making expectations fundamental [a proposition I have advanced on this blog].

I agree that the Hicksian distinction between fix-price markets and flex-price markets doesn’t cut it. Nevertheless, it’s not clear to me that a Thompsonian temporary-equilibrium model in which expectations determine the reservation wage at which workers will accept employment (i.e., the labor-supply curve conditional on the expected wage) doesn’t work as well as a competitive search equilibrium in this context.

Once one treats expectations as fundamental, there is no longer a multiplicity of equilibria. People act in a well defined way and prices clear markets. Of course ‘market clearing’ in a search market may involve unemployment that is considerably higher than the unemployment rate that would be chosen by a social planner. And when there is steady state indeterminacy, as there is in my work, shocks to beliefs may lead the economy to one of a continuum of steady state equilibria.

There is an equilibrium for each set of expectations (with the understanding, I presume, that expectations are always uniform across agents). The problem that I see with this is that there doesn’t seem to be any interaction between outcomes and expectations. Expectations are always self-fulfilling, and changes in expectations are purely exogenous. But in a classic downturn, the process seems to be cumulative, the contraction seemingly feeding on itself, causing a spiral of falling prices, declining output, rising unemployment, and increasing pessimism.

That brings me to the second part of an equilibrium concept. Are expectations rational in the sense that subjective probability measures over future outcomes coincide with realized probability measures? That is not a property of the real world. It is a consistency property for a model.

Yes; I agree totally. Rational expectations is best understood as a property of a model: if agents expect an equilibrium price vector, the solution of the model is that same equilibrium price vector. It is not a substantive theory of expectation formation; to assert that agents in the real world actually foresee the equilibrium price vector would be an extreme and unrealistic assumption about how the world actually works, IMHO. The distinction is crucial, but it seems to me that it is largely ignored in practice.
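The consistency property can be made concrete with a toy sketch (a linear market of my own construction, purely illustrative, not any published model): supply depends on the expected price, and the rational-expectations solution is the one price that, when expected, reproduces itself.

```python
# Toy linear market (hypothetical parameters, for illustration only):
# demand: D(p) = a - c*p; supply depends on the *expected* price: S = b*p_e.
# Market clearing gives the realized price as a function of the expected price:
#   p = (a - b*p_e) / c
# A rational-expectations equilibrium is a fixed point: p_e = p.

a, b, c = 12.0, 1.0, 2.0

def realized_price(p_e):
    """Market-clearing price when suppliers act on expected price p_e."""
    return (a - b * p_e) / c

# Closed-form fixed point: p* = a / (b + c)
p_star = a / (b + c)

# Any other expectation is self-defeating: it yields a different realized price.
p_e = 1.0
for _ in range(50):
    p_e = realized_price(p_e)  # naively revise toward last realized price

print(p_star)         # 4.0
print(round(p_e, 6))  # 4.0 here, because the revision factor |b/c| < 1
```

The fixed point exists here by construction, so the point in the text stands: rational expectations is the consistency requirement that the expected price equal the model's solution, not a theory of how agents find it. Indeed, with |b/c| > 1, the naive revision process above diverges even though the fixed point exists (the cobweb case).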

And yes: if we plop our agents down into a stationary environment, their beliefs should eventually coincide with reality.

This seems to me a plausible-sounding assumption for which there is no theoretical proof and, in view of Roger’s recent discussion of unit roots, dubious empirical support.

If the environment changes in an unpredictable way, it is the belief function, a primitive of the model, that guides the economy to a new steady state. And I can envision models where expectations on the transition path are systematically wrong.

I need to read Roger’s papers about this, but I am left wondering by what mechanism the belief function guides the economy to a steady state? It seems to me that the result requires some pretty strong assumptions.

The recent ‘nonlinearity debate’ on the blogs confuses the existence of multiple steady states in a dynamic model with the existence of multiple rational expectations equilibria. Nonlinearity is neither necessary nor sufficient for the existence of multiplicity. A linear model can have a unique indeterminate steady state associated with an infinite dimensional continuum of locally stable rational expectations equilibria. A linear model can also have a continuum of attracting points, each of which is an equilibrium. These are not just curiosities. Both of these properties characterize modern dynamic equilibrium models of the real economy.

I’m afraid that I don’t quite get the distinction that is being made here. Does “multiple steady states in a dynamic model” mean multiple equilibria of the full Arrow-Debreu general equilibrium model? And does “multiple rational-expectations equilibria” mean multiple equilibria conditional on the expectations of the agents? And I also am not sure what the import of this distinction is supposed to be.

My further question is, how does all of this relate to Leijonhufvud’s idea of the corridor, which Roger has endorsed? My own understanding of what Axel means by the corridor is that the corridor has certain stability properties that keep the economy from careening out of control, i.e., becoming subject to a cumulative dynamic process that does not lead the economy back to the neighborhood of a stable equilibrium. But if there is a continuum of attracting points, each of which is an equilibrium, how could any of those points be understood to be outside the corridor?

Anyway, those are my questions. I am hoping that Roger can enlighten me.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable. Ball and Mankiw ironically cast themselves as defenders of traditional macroeconomics – Keynesians and Monetarists alike – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott. Lucas was evidently so offended by that ironic framing that he stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, that a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so that the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price-adjustment would suggest.
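To make the "Econ 1" half of the story concrete, here is a minimal sketch of the single-market price-adjustment rule (the linear curves and adjustment speed are illustrative choices of my own, not drawn from any source). The convergence it exhibits is precisely what fails to generalize to many interacting markets.

```python
# Single-market tatonnement sketch: raise price under excess demand,
# lower it under excess supply. Curves and speed k are hypothetical.
def demand(p):
    return 10.0 - p        # downward-sloping demand

def supply(p):
    return 2.0 * p - 2.0   # upward-sloping supply

def tatonnement(p0, k=0.1, steps=200):
    """Iterate p <- p + k * excess_demand(p)."""
    p = p0
    for _ in range(steps):
        p += k * (demand(p) - supply(p))
    return p

# Equilibrium: 10 - p = 2p - 2  =>  p* = 4, from any starting price
print(round(tatonnement(1.0), 6))  # 4.0
print(round(tatonnement(9.0), 6))  # 4.0
```

With one market and these slopes the rule converges for any sufficiently small positive k. But Scarf's well-known three-good example shows that the analogous rule applied simultaneously across several markets can cycle forever without approaching the equilibrium price vector, which is the point of the paragraph above.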

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Krugman on the Volcker Disinflation

Earlier in the week, Paul Krugman wrote about the Volcker disinflation of the 1980s. Krugman’s annoyance at Stephen Moore (whom Krugman flatters by calling him an economist) and John Cochrane (whom Krugman disflatters by comparing him to Stephen Moore) is understandable, but he has less excuse for letting himself get carried away in an outburst of Keynesian triumphalism.

Right-wing economists like Stephen Moore and John Cochrane — it’s becoming ever harder to tell the difference — have some curious beliefs about history. One of those beliefs is that the experience of disinflation in the 1980s was a huge shock to Keynesians, refuting everything they believed. What makes this belief curious is that it’s the exact opposite of the truth. Keynesians came into the Volcker disinflation — yes, it was mainly the Fed’s doing, not Reagan’s — with a standard, indeed textbook, model of what should happen. And events matched their expectations almost precisely.

I’ve been cleaning out my library, and just unearthed my copy of Dornbusch and Fischer’s Macroeconomics, first edition, copyright 1978. Quite a lot of that book was concerned with inflation and disinflation, using an adaptive-expectations Phillips curve — that is, an assumed relationship in which the current inflation rate depends on the unemployment rate and on lagged inflation. Using that approach, they laid out at some length various scenarios for a strategy of reducing the rate of money growth, and hence eventually reducing inflation. Here’s one of their charts, with the top half showing inflation and the bottom half showing unemployment:




Not the cleanest dynamics in the world, but the basic point should be clear: cutting inflation would require a temporary surge in unemployment. Eventually, however, unemployment could come back down to more or less its original level; this temporary surge in unemployment would deliver a permanent reduction in the inflation rate, because it would change expectations.

And here’s what the Volcker disinflation actually looked like:


A temporary but huge surge in unemployment, with inflation coming down to a sustained lower level.

So were Keynesian economists feeling amazed and dismayed by the events of the 1980s? On the contrary, they were feeling pretty smug: disinflation had played out exactly the way the models in their textbooks said it should.
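The adaptive-expectations disinflation scenario Krugman describes can be sketched numerically. In this minimal sketch (all parameter values are my own illustrative choices, not Dornbusch and Fischer's), current inflation equals lagged inflation minus a penalty on unemployment above its natural rate, so a temporary spell of high unemployment ratchets inflation down permanently:

```python
# Adaptive-expectations Phillips curve:
#   pi_t = pi_{t-1} - slope * (u_t - u_nat)
# (expected inflation = last period's inflation; parameters are illustrative)
slope, u_nat = 0.5, 6.0

def disinflation(pi0, u_path):
    """Trace inflation along a given unemployment path."""
    pi = pi0
    history = []
    for u in u_path:
        pi = pi - slope * (u - u_nat)
        history.append(pi)
    return history

# Temporary surge: unemployment at 8% for 6 periods, then back to 6%.
u_path = [8.0] * 6 + [6.0] * 6
path = disinflation(pi0=10.0, u_path=u_path)
print(path[5])   # 4.0 (inflation falls 1 point per high-unemployment period)
print(path[-1])  # 4.0 (stays at the lower level once u returns to u_nat)
```

The permanence comes entirely from the adaptive-expectations term: once actual inflation has been ground down, expected (i.e., lagged) inflation follows, and unemployment can return to its natural rate without inflation rebounding, which is the qualitative shape of both the textbook chart and the actual Volcker episode.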

Well, this is true, but only up to a point. What Krugman neglects to mention, which is why the Volcker disinflation is not widely viewed as having enhanced the Keynesian forecasting record, is that most Keynesians had opposed the Reagan tax cuts, and one of their main arguments was that the tax cuts would be inflationary. However, in the Reagan-Volcker combination of loose fiscal policy and tight money, it was tight money that dominated. Score one for the Monetarists. The rapid drop in inflation, though accompanied by high unemployment, was viewed as a vindication of the Monetarist view that inflation is always and everywhere a monetary phenomenon, a view which now seems pretty commonplace, but in the 1970s and 1980s was hotly contested, including by Keynesians.

However, the (Friedmanian) Monetarist view was only partially vindicated, because the Volcker disinflation was achieved by way of high interest rates, not by tightly controlling the money supply. As I have written before on this blog (here and here) and in chapter 10 of my book on free banking (especially pp. 214-21), Volcker actually tried very hard to slow down the rate of growth in the money supply, but the attempt to implement a k-percent rule induced perverse dynamics: whenever monetary growth overshot the target range, the anticipation of an imminent tightening created a precautionary demand for money, causing people, fearful that cash would soon be unavailable, to hoard cash by liquidating assets before the tightening. The scenario played itself out repeatedly in the 1981-82 period, when the most closely watched economic or financial statistic in the world was the Fed’s weekly report of growth in the money supply, growth rates over the target range being associated with falling stock and commodity prices. Finally, in the summer of 1982, Volcker announced that the Fed would stop trying to achieve its money-growth targets; the great stock market rally of the 1980s took off, and economic recovery quickly followed.

So neither the old-line Keynesian dismissal of monetary policy as irrelevant to the control of inflation, nor the Monetarist obsession with controlling the monetary aggregates fared very well in the aftermath of the Volcker disinflation. The result was the New Keynesian focus on monetary policy as the key tool for macroeconomic stabilization, except that monetary policy no longer meant controlling a targeted monetary aggregate, but controlling a targeted interest rate (as in the Taylor rule).

But Krugman doesn’t mention any of this, focusing instead on the conflicts among non-Keynesians.

Indeed, it was the other side of the macro divide that was left scrambling for answers. The models Chicago was promoting in the 1970s, based on the work of Robert Lucas and company, said that unemployment should have come down quickly, as soon as people realized that the Fed really was bringing down inflation.

Lucas came to Chicago in 1975, and he was the wave of the future at Chicago, but it’s not as if Friedman disappeared; after all, he did win the Nobel Prize in 1976. And although Friedman did not explicitly attack Lucas, it’s clear that, to his credit, Friedman never bought into the rational-expectations revolution. So although Friedman may have been surprised at the depth of the 1981-82 recession – in part attributable to the perverse effects of the money-supply targeting he had convinced the Fed to adopt – the adaptive-expectations model in the Dornbusch-Fischer macro textbook is as much Friedmanian as Keynesian. And by the way, Dornbusch and Fischer were both at Chicago in the mid-1970s when the first edition of their macro text was written.

By a few years into the 80s it was obvious that those models were unsustainable in the face of the data. But rather than admit that their dismissal of Keynes was premature, most of those guys went into real business cycle theory — basically, denying that the Fed had anything to do with recessions. And from there they just kept digging ever deeper into the rabbit hole.

But anyway, what you need to know is that the 80s were actually a decade of Keynesian analysis triumphant.

I am just as appalled as Krugman by the real-business-cycle episode, but it was as much a rejection of Friedman, and of all other non-Keynesian monetary theory, as of Keynes. So the inspiring morality tale spun by Krugman in which the hardy band of true-blue Keynesians prevail against those nasty new classical barbarians is a bit overdone and vastly oversimplified.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
