
Hayek and Rational Expectations

In this, my final, installment on Hayek and intertemporal equilibrium, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. In his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the meaning normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs held by other agents when observed prices differ from what they had expected prices to be. Agents attribute the differences between observed and expected prices to information held by agents better informed than themselves, and revise their own expectations accordingly in light of the information that would have justified the observed prices.

In the early 1950s, one very rational agent, Armen Alchian, was able to figure out what chemicals were being used in making the newly developed hydrogen bomb by identifying companies whose stock prices had risen too rapidly to be explained otherwise. Alchian, who spent almost his entire career at UCLA while also moonlighting at the nearby Rand Corporation, wrote a paper for Rand in which he listed the chemicals used in making the hydrogen bomb. When people at the Defense Department heard about the paper – the Rand Corporation was started as a think tank, largely funded by the Department of Defense, to do research in which the Defense Department was interested – they went to Alchian and confiscated and destroyed the paper. Joseph Newhard recently wrote a paper about this episode in the Journal of Corporate Finance. Here’s the abstract:

At RAND in 1954, Armen A. Alchian conducted the world’s first event study to infer the fuel material used in the manufacturing of the newly-developed hydrogen bomb. Successfully identifying lithium as the fusion fuel using only publicly available financial data, the paper was seen as a threat to national security and was immediately confiscated and destroyed. The bomb’s construction being secret at the time but having since been partially declassified, the nuclear tests of the early 1950s provide an opportunity to observe market efficiency through the dissemination of private information as it becomes public. I replicate Alchian’s event study of capital market reactions to the Operation Castle series of nuclear detonations in the Marshall Islands, beginning with the Bravo shot on March 1, 1954 at Bikini Atoll which remains the largest nuclear detonation in US history, confirming Alchian’s results. The Operation Castle tests pioneered the use of lithium deuteride dry fuel which paved the way for the development of high yield nuclear weapons deliverable by aircraft. I find significant upward movement in the price of Lithium Corp. relative to the other corporations and to DJIA in March 1954; within three weeks of Castle Bravo the stock was up 48% before settling down to a monthly return of 28% despite secrecy, scientific uncertainty, and public confusion surrounding the test; the company saw a return of 461% for the year.

Radner also showed that the ability of some agents to infer from observed prices the information, held by better-informed agents, that is causing prices to differ from the prices that had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of the future based on commonly shared knowledge.

So rather than pursue Radner’s conception of rational expectations, I will focus here on the conventional understanding of “rational expectations” in modern macroeconomics, which is that the price expectations formed by the agents in a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is a very important property that any model ought to have. It simply says that a model ought to have the property that if one assumes that the agents in a model expect the equilibrium predicted by the model, then, given those expectations, the solution of the model will turn out to be the equilibrium of the model. This property is a consistency and coherence property that any model, regardless of its substantive predictions, ought to have. If a model lacks this property, there is something wrong with the model.

But there is a huge difference between saying that a model should have the property that correct expectations are self-fulfilling and saying that agents are in fact capable of predicting the equilibrium of the model. Assuming the former does not entail the latter. What kind of crazy model would have the property that correct expectations are not self-fulfilling? I mean, think about it: a model in which correct expectations are not self-fulfilling is a nonsense model.

But demanding that a model not spout gibberish is very different from insisting that the agents in the model necessarily have the capacity to predict what the equilibrium of the model will be. Rational expectations in the former sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a matter of methodological fiat. But methodological fiat is what rational expectations has become in macroeconomics.

In his 1937 paper on intertemporal equilibrium, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most accurate description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that over time expectations do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth in the early 1960s, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the idea of a cobweb cycle, in which producers simply assume that the current price will remain at whatever level currently prevails. If there is a time lag in production, as in agricultural markets, between the initial application of inputs and the final yield of output, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the period after that, and so on.
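To see the mechanics, here is a minimal numerical sketch of the cobweb story. The linear demand and supply curves and the parameter values are my own illustrative assumptions, not Muth’s: under static expectations the simulated price alternates between boom and bust, while the expectation implied by the model itself – the rational expectation – is self-fulfilling.

```python
# Minimal cobweb sketch (illustrative assumptions, not Muth's own model):
# demand:  P_t = a - b * Q_t
# supply:  Q_t = d * E[P_t], where E[P_t] is the price producers expected
#          when they committed their inputs one period earlier.

a, b, d = 10.0, 1.0, 0.9            # illustrative parameters
p_star = a / (1.0 + b * d)          # equilibrium price solving P = a - b*d*P

# Static expectations: producers expect this period's price to equal last period's.
p = 8.0                             # arbitrary starting price
path = []
for _ in range(8):
    q = d * p                       # output planned on the basis of last period's price
    p = a - b * q                   # market-clearing price given that output
    path.append(round(p, 2))

# Rational expectations: producers expect the equilibrium price the model itself implies.
q_re = d * p_star
p_re = a - b * q_re                 # equals p_star, so the expectation is self-fulfilling

print("equilibrium price:", round(p_star, 2))
print("price path under static expectations:", path)   # alternating boom and bust
print("price under rational expectations:", round(p_re, 2))
```

With the supply response damped (b·d < 1, as in these illustrative numbers) the oscillation eventually dies out; with a stronger supply response it explodes.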

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would instead form more realistic expectations of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture about how real-world agents would actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.

So while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational expectations into the domain of macroeconomics has not been empirically progressive, rational-expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – a point that Hayek understood better than perhaps anyone else – is that there is a huge difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all future relative prices but one are known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists). Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there are but one or, at most, two variables about which agents have to form their rational expectations.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is and how difficult it is to explain any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having originally been borrowed from physics, the meaning originally attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if there is no exogenous change introduced into the system, e.g., in population, technology, tastes, etc., it would seem that there would be no reason for the prices paid and quantities produced to change in that system. But that view of an economic system was a very restrictive one, because such a large share of economic activity – savings and investment – is predicated on the assumption and expectation of change.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill, who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices based on some scale of preferences subject to limitations or constraints imposed by their capacities, endowments, technology and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The decisions of each agent must be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization, and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between the marginal rates of substitution of goods in consumption and in production and their corresponding price ratios, the conditions for an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required for the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but must instead be ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
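Stated compactly – in notation of my own choosing, not Hayek’s or Hicks’s – the familiar static condition for each agent h and each pair of goods i and j is

$$MRS^{h}_{ij} = \frac{p_i}{p_j},$$

while the corresponding intertemporal condition equates the marginal rate of substitution between consumption at date t and consumption at date t+1 with the ratio of the current price to the price the agent expects to prevail at t+1,

$$MRS^{h}_{t,\,t+1} = \frac{p_t}{\hat{p}^{\,h}_{t+1}},$$

so that mutual consistency of optimal plans requires $\hat{p}^{\,h}_{t+1} = \hat{p}^{\,h'}_{t+1}$ for every pair of agents h and h′.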

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article “Economics and Knowledge.” But it was through J. R. Hicks’s Value and Capital, published two years later in 1939, that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium, in which current markets clear but individual expectations of future prices are not the same, Hicks did not claim originality; but instead of crediting Hayek for the concept, or even mentioning Hayek’s 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek’s 1937 article, Hayek had already explained the concept in a 1928 article, “Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des ‘Geldwertes’” (English translation: “Intertemporal price equilibrium and movements in the value of money”).

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek’s early work – his 1928 and 1937 articles but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting – but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model – in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek’s contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets posited by the ADM model, intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can’t fully hedge against, or insure themselves against, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before “time” begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say’s Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources that were beyond those at Hayek’s, or probably anyone else’s, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of it, and for offering seriously inappropriate policy recommendations based on that incomplete understanding.

In Value and Capital, Hicks also expressed doubts whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations, which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term “clear”) while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified and possibly abandoned. In The Pure Theory of Capital, Hayek discussed Hicks’s temporary-equilibrium method as a possible way of achieving the modification of the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek’s rejection of the temporary-equilibrium method seems in retrospect to have been one of his worst theoretical – or perhaps meta-theoretical – blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both holding an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions – in a very general sense – under which the system of financial intermediaries breaks down. The key variables, of course, are the vectors of expected prices subject to which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss’s analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. In that case, price flexibility in current markets cannot, even in principle, result in a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn’t lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are in some sense “too far out of whack.”

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate to most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of the behavior of a single market in which the market was dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.

A Primer on Equilibrium

After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”

That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution.

Now Henry thinks that this reasoning is circular. My argument would be circular if I defined an equilibrium to be the same thing as correct expectations. But I am not so defining an equilibrium. I am saying that the correctness of expectations by all agents implies 1) that their expectations are mutually consistent, and 2) that, having made plans based on their expectations – plans that, by assumption, agents regarded as the best set of choices available to them given those expectations – agents would not regret the decisions and the choices that they made if their expectations were realized. Each agent would be as well off as he could have made himself, given his perceived opportunities when the decisions were made. That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making process, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist – and just because you can write down an economic model does not mean that a solution to the model exists – then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.

Now the key to proving the existence of a general equilibrium is to show that the general-equilibrium model implies the existence of what mathematicians call a fixed point. A fixed point is guaranteed to exist when there is a continuous mapping – a rule or a function – that takes every point in a convex, compact set of points and assigns to it another point in the same set. A convex, compact set has two important properties: 1) the line connecting any two points in the set is entirely contained within the boundaries of the set, and 2) the set is closed and bounded, with no gaps or missing boundary points. The set of points in a circle or a rectangle is a convex, compact set; the set of points contained in the Star of David is not a convex set. Any two points in the circle will be connected by a line that lies completely within the circle; the points at adjacent tips of a Star of David will be connected by a line that lies partly outside the Star of David.

If you think of the set of all possible price vectors for an economy, those vectors – each containing a price for each good or service in the economy – could be mapped onto itself in the following way. Given all the equations describing the behavior of each agent in the economy, the quantity demanded and the quantity supplied of each good could be calculated, giving us the excess demand (the difference between the amount demanded and the amount supplied) for each good. Then the price of every good in excess demand would be raised, the price of every good in negative excess demand (excess supply) would be reduced, and the price of every good with zero excess demand would be held constant. To ensure that the mapping takes points from a given convex, compact set into the same set, all prices could be normalized so that the sum of the individual prices always equals 1. The fixed-point theorem ensures that for a continuous mapping from a convex, compact set into itself there must be at least one fixed point, i.e., at least one point in the set that gets mapped onto itself. The price vector corresponding to that point is an equilibrium, because, given how our mapping rule was defined, a point would be mapped onto itself if and only if all excess demands are zero, so that no prices change. Every fixed point – and there may be one or more fixed points – corresponds to an equilibrium price vector, and every equilibrium price vector is associated with a fixed point.

Before going on, I ought to make an important observation that is often ignored. The mathematical proof of the existence of an equilibrium doesn’t prove that the economy operates at an equilibrium, or even that the equilibrium could be identified under the mapping rule described (which is a kind of formalization of the Walrasian tatonnement process). The mapping rule doesn’t guarantee that you would ever discover a fixed point in any finite number of iterations. Walras thought the price-adjustment rule of raising the prices of goods in excess demand and reducing the prices of goods in excess supply would converge on the equilibrium price vector. But the conditions under which you can prove that the naïve price-adjustment rule converges to an equilibrium price vector turn out to be very restrictive, so even though we can prove that the competitive model has an equilibrium solution – in other words, that the behavioral, structural and technological assumptions of the model are coherent, so that the model has a solution – the model makes no assumptions about how prices are actually determined that would prove that the equilibrium is ever reached. In fact, the problem is even more daunting than the previous sentence suggests, because even Walrasian tatonnement imposes an incredibly powerful restriction, namely that no trading is allowed at non-equilibrium prices. In practice there are almost never recontracting provisions allowing traders to revise the terms of their trades once it becomes clear that the prices at which trades were made were not equilibrium prices.
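To make the mapping and the convergence worry a little more concrete, here is a minimal sketch for a toy two-good, two-trader exchange economy with Cobb-Douglas preferences. The economy, the expenditure shares, and the particular functional form of the map are illustrative assumptions of mine, not part of any existence proof; the point is only that the equilibrium price vector is a fixed point of the map, and that iterating the map is a separate question – it happens to converge in this well-behaved example, but nothing guarantees that in general.

```python
# Toy two-good, two-trader exchange economy (illustrative assumptions only).
# Trader A owns one unit of good 1 and spends a share a of income on good 1;
# trader B owns one unit of good 2 and spends a share b of income on good 1.

A_SHARE, B_SHARE = 0.3, 0.6                     # illustrative expenditure shares

def excess_demand(p):
    """Excess demand for each good at prices p = (p1, p2)."""
    p1, p2 = p
    z1 = A_SHARE + B_SHARE * p2 / p1 - 1.0      # demand for good 1 minus its endowment
    z2 = (1 - A_SHARE) * p1 / p2 + (1 - B_SHARE) - 1.0
    return (z1, z2)

def price_map(p):
    """Raise the price of any good in excess demand, then renormalize so that
    prices again sum to 1 (a standard Brouwer-style mapping of the price set
    into itself); the renormalization lowers the relative price of goods in
    excess supply."""
    z = excess_demand(p)
    adjusted = [pi + max(0.0, zi) for pi, zi in zip(p, z)]
    total = sum(adjusted)
    return tuple(pi / total for pi in adjusted)

# The equilibrium relative price solves z1 = 0, i.e. p2/p1 = (1 - a)/b.
ratio = (1 - A_SHARE) / B_SHARE
p_star = (1.0 / (1.0 + ratio), ratio / (1.0 + ratio))
print("equilibrium prices:", [round(x, 4) for x in p_star])
print("map applied at p*: ", [round(x, 4) for x in price_map(p_star)])  # a fixed point

# Iterating the map from a non-equilibrium start happens to converge here, but
# convergence of such tatonnement-like processes is NOT guaranteed in general.
p = (0.5, 0.5)
for _ in range(200):
    p = price_map(p)
print("after 200 iterations:", [round(x, 4) for x in p])
```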

I now want to show how price expectations fit into all of this, because the original general equilibrium models were either one-period models or formal intertemporal models that were reduced to single-period models by assuming that all trading for future delivery was undertaken in the first period by long-lived agents who would eventually carry out the transactions that were contracted in period 1 for subsequent consumption and production. Time was preserved in a purely formal, technical way, but all economic decision-making was actually concluded in the first period. But even though the early general-equilibrium models did not encompass expectations, one of the extraordinary precursors of modern economics, Augustin Cournot, who was way too advanced for his contemporaries even to comprehend, much less make any use of, what he was saying, had incorporated the idea of expectations into the solution of his famous economic model of oligopolistic price setting.

The key to oligopolistic pricing is that each oligopolist must take into account not just consumer demand for his product and his own production costs; he must also consider what actions will be taken by his rivals. This is not a problem for a competitive producer (a price-taker) or a pure monopolist. The price-taker simply compares the price at which he can sell as much as he wants with his production costs and decides how much it is worthwhile to produce by comparing his marginal cost to the price, increasing output until the marginal cost rises to match the price at which he can sell. The pure monopolist, if he knows, as is assumed in such exercises, or thinks he knows the shape of the customer demand curve, selects the price and quantity combination on the demand curve that maximizes total profit (corresponding to the equality of marginal revenue and marginal cost). In oligopolistic situations, each producer must take into account how much his rivals will sell, or what prices they will set.

It was by positing such a situation and finding an analytic solution that Cournot made a stunning intellectual breakthrough. In the simple duopoly case, Cournot posited that if the duopolists had identical costs, then each could find his optimal output conditional on the output chosen by the other. This is a simple profit-maximization problem for each duopolist, given a demand curve for the combined output of both (assumed to be identical, so that a single price must obtain for the output of both), a cost curve, and the output of the other duopolist. Thus, for each duopolist there is a reaction curve showing his optimal output given the output of the other. See the accompanying figure.

If one duopolist produces zero, the optimal output for the other is the monopoly output. Depending on the level of marginal cost, there is some output by either of the duopolists that is sufficient to make it unprofitable for the other duopolist to produce anything. That level of output corresponds to the competitive output, at which price just equals marginal cost. So the slope of each reaction function corresponds to the ratio of the competitive output to the monopoly output, which, with constant marginal cost, is 2:1. Given identical costs, the two reaction curves are symmetric, and the optimal output for each, given the expected output of the other, corresponds to the intersection of the two reaction curves, at which both duopolists produce the same quantity. The combined output of the two duopolists will be greater than the monopoly output, but less than the competitive output at which price equals marginal cost. With constant marginal cost, it turns out that each duopolist produces one-third of the competitive output. In the general case with n oligopolists, the ratio of the combined output of all n firms to the competitive output equals n/(n+1).

Cournot’s solution corresponds to a fixed point at which the equilibrium of the model implies that both duopolists have correct expectations of the output of the other. Given the assumptions of the model, if the duopolists both expect the other to produce an output equal to one-third of the competitive output, their expectations will be consistent and will be realized. If either one expects the other to produce a different output, the outcome will not be an equilibrium, and each duopolist will regret his output decision, because the price at which he can sell his output will differ from the price that he had expected. In the Cournot case, you could define a mapping from the vector of outputs that each duopolist expects the other to produce to the vector of outputs that each duopolist, given those expectations, plans to produce. An equilibrium corresponds to a case in which both duopolists expect exactly the output planned by the other. If either duopolist expected a different output from what the other planned, the outcome would not be an equilibrium.
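A minimal numerical sketch confirms the arithmetic (the linear demand curve, constant marginal cost, and parameter values are my own illustrative assumptions, not Cournot’s): the Cournot equilibrium is a fixed point of the reaction (best-response) functions, each duopolist produces one-third of the competitive output, and the combined output of n identical firms is n/(n+1) of the competitive output.

```python
# Minimal Cournot sketch with linear demand P = a - b*Q and constant
# marginal cost c (illustrative parameter values of my own choosing).

a, b, c = 100.0, 1.0, 10.0
q_comp = (a - c) / b                 # competitive output: price = marginal cost
q_mono = (a - c) / (2 * b)           # monopoly output

def best_response(q_rival):
    """Profit-maximizing output given the output expected from the rival."""
    return max(0.0, (a - c - b * q_rival) / (2 * b))

# Iterate the best responses from an arbitrary conjecture about the rival's output.
q = 0.0
for _ in range(50):
    q = best_response(q)             # respond to the expected output of the other

print("each duopolist:", round(q, 2), "=", round(q / q_comp, 3), "of competitive output")  # 1/3
print("fixed-point check:", round(best_response(q), 2))  # best response to q is q itself

# The n-firm analogue: combined output / competitive output = n/(n+1).
for n in (1, 2, 3, 10):
    q_each = (a - c) / (b * (n + 1))
    print(n, "firm(s):", round(n * q_each / q_comp, 3), "vs n/(n+1) =", round(n / (n + 1), 3))
```

Iterating the best responses is used here only as a convenient way of locating the fixed point; it is not a claim about how real duopolists grope toward equilibrium.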

We can now recognize that Cournot’s solution anticipated John Nash’s concept of an equilibrium strategy, in which each player chooses a strategy that is optimal given his expectation of what the other player’s strategy will be. A Nash equilibrium corresponds to a fixed point in which each player chooses an optimal strategy based on the correct expectation of what the other player’s strategy will be. There may be more than one Nash equilibrium in many games. For example, rather than base their decisions on an expectation of the quantity choice of the other duopolist, the two duopolists could base their decisions on an expectation of what price the other duopolist would set. In the constant-cost case, this choice of strategies would lead to the competitive output, because both duopolists would conclude that the optimal strategy of the other duopolist would be to charge a price just sufficient to cover his marginal cost. This was the alternative oligopoly model suggested by another French economist, J. L. F. Bertrand. Of course there is a lot more to be said about how oligopolists strategize than just these two models, and about the conditions under which one or the other model is the more appropriate. I just want to observe that assumptions about expectations are crucial to how we analyze market equilibrium, and that the importance of these assumptions for understanding market behavior has been recognized for a very long time.

But from a macroeconomic perspective, the important point is that expected prices become the critical equilibrating variable in the theory of general equilibrium and in macroeconomics in general. Single-period models of equilibrium, including general-equilibrium models that are formally intertemporal, but in which all trades are executed in the initial period at known prices in a complete array of markets determining all future economic activity, are completely sterile and useless for macroeconomics except as a stepping stone to analyzing the implications of imperfect forecasts of future prices. If we want to think about general equilibrium in a useful macroeconomic context, we have to think about a general-equilibrium system in which agents make plans about consumption and production over time based on only the vaguest conjectures about what future conditions will be like when the various interconnected stages of their plans will be executed.

Unlike the full Arrow-Debreu system of complete markets, a general-equilibrium system with incomplete markets cannot be equilibrated, even in principle, by price adjustments in the incomplete set of present markets. Equilibration depends on the consistency of expected prices with equilibrium. If equilibrium is characterized by a fixed point, the fixed point must be a point of a mapping that takes the set of vectors of current and expected prices into itself. That means that expected future prices are as much equilibrating variables as current market prices. But expected future prices exist only in the minds of the agents; they are not directly subject to change by market forces in the way that prices in actual markets are. If the equilibrating tendencies of market prices in a system of complete markets are very far from completely effective, the equilibrating tendencies of expected future prices may not only be non-existent, but may even be potentially disequilibrating rather than equilibrating.

The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, and the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently or whether an entire economy is operating close to its potential, to insist on principle that the rational-expectations assumption must be made, to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values allowing an economy to operate at or near its optimal growth path, is simply, as I have often said, an exercise in circular reasoning and question begging.

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago I commented favorably about Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify Stigler as an inspiration and role model for Lucas’s transgressions. Now, just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism, and has its standard responses to criticism, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics, which serves as a kind of default excuse and self-justification for the rampant empirical failures of modern macroeconomics (documented by Lipsey and Carlaw as I showed in this post). When confronted by evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of the modern macroeconomists is: all models are false. By which the modern macroeconomists apparently mean something like: “And if they are all false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being micro-founded, are not subject to the Lucas Critique, and since all models other than ours are not micro-founded and are therefore subject to the Lucas Critique, they are simply unworthy of consideration.” This is what I have called methodological arrogance. That response is simply not true, because the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem more easily analyzed. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse of course is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case to use the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all theories are false” what I found was not Popper, but the British statistician G. E. P. Box, who wrote, in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is the constant self-congratulatory references to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read and take to heart Box’s lecture.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just-published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby preserving the core Keynesian policy message.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typically charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph. D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

“… quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense that Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
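
To see the point concretely, here is a minimal comparative-statics sketch (my notation, not Samuelson’s): consider a single market with demand D(p, α) and supply S(p), where α is a shift parameter. Differentiating the equilibrium condition gives

\[ D(p^*, \alpha) = S(p^*) \quad \Longrightarrow \quad \frac{dp^*}{d\alpha} = -\,\frac{\partial D/\partial \alpha}{\partial D/\partial p - \partial S/\partial p}. \]

The sign of dp*/dα is determinate only if we assume the denominator is negative, which is just the local-stability condition for the presumed price-adjustment process – Samuelson’s correspondence principle. The “meaningful theorem” is thus meaningful only in the neighborhood of a stable equilibrium.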

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and faces known prices at which he or she can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural, based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are (a) likely to be very different from each other and thus (b) likely to be different from their ultimate realizations, is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
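
To make the contrast explicit, here is a minimal two-period sketch (my notation): the “intertemporal budget constraint” the consumer actually operates with is something like

\[ c_1 + \frac{c_2}{1 + r} \;\le\; w_0 + y_1 + \frac{E_1[y_2]}{1 + r}, \]

where next period’s income y_2 (and possibly the interest rate r) is not a datum but an expectation. The right-hand side is a conjecture that shifts whenever expectations are revised, so the constraint – and the Euler equation derived from it – is conjectural in exactly the sense just described.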

If I expect that the demand for my services will be such that my disposable income next year will be $500k, my consumption choices will be very different from what they would be if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – like now, for instance – that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real, empirically relevant physics. He also expands on, and puts into a broader meta-theoretical context, my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had [said] before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet just because a general-equilibrium model can be formulated so that it can be solved as the solution of an optimizing agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence, not in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less, negligible; nor have I seen an argument that interactions between agents are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialist believers that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to those that other similar workers are being paid are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.

[image: homunculus]

But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without having to go on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.

Romer v. Lucas

A couple of months ago, Paul Romer created a stir by publishing a paper in the American Economic Review, “Mathiness in the Theory of Economic Growth,” an attack on two papers on aspects of growth theory, one by McGrattan and Prescott and the other by Lucas and Moll. He accused the authors of those papers of using mathematical modeling as a cover behind which to hide assumptions guaranteeing results by which the authors could promote their research agendas. In subsequent blog posts, Romer has sharpened his attack, focusing it more directly on Lucas, whom he accuses of a non-scientific attachment to ideological predispositions that have led him to violate what he calls Feynman integrity, a concept eloquently described by Feynman himself in a 1974 commencement address at Caltech.

It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty–a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid–not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked–to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can–if you know anything at all wrong, or possibly wrong–to explain it. If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it. There is also a more subtle problem. When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.

Romer contrasts this admirable statement of what scientific integrity means with another by George Stigler, seemingly justifying, or at least excusing, a kind of special pleading on behalf of one’s own theory. And the institutional and perhaps ideological association between Stigler and Lucas seems to suggest that Lucas is inclined to follow the permissive and flexible Stiglerian ethic rather than the rigorous Feynman standard of scientific integrity. Romer regards this as a breach of the scientific method and a step backward for economics as a science.

I am not going to comment on the specific infraction that Romer accuses Lucas of having committed; I am not familiar with the mathematical question in dispute. Certainly if Lucas was aware that his argument in the paper Romer criticizes depended on the particular mathematical assumption in question, Lucas should have acknowledged that to be the case. And even if, as Lucas asserted in responding to a direct question by Romer, he could have derived the result in a more roundabout way, then he should have pointed that out, too. However, I don’t regard the infraction alleged by Romer to be more than a misdemeanor, hardly a scandalous breach of the scientific method.

Why did Lucas, who as far as I can tell was originally guided by Feynman integrity, switch to the mode of Stigler conviction? Market clearing did not have to evolve from auxiliary hypothesis to dogma that could not be questioned.

My conjecture is economists let small accidents of intellectual history matter too much. If we had behaved like scientists, things could have turned out very differently. It is worth paying attention to these accidents because doing so might let us take more control over the process of scientific inquiry that we are engaged in. At the very least, we should try to reduce the odds that that personal frictions and simple misunderstandings could once again cause us to veer off on some damaging trajectory.

I suspect that it was personal friction and a misunderstanding that encouraged a turn toward isolation (or if you prefer, epistemic closure) by Lucas and colleagues. They circled the wagons because they thought that this was the only way to keep the rational expectations revolution alive. The misunderstanding is that Lucas and his colleagues interpreted the hostile reaction they received from such economists as Robert Solow to mean that they were facing implacable, unreasoning resistance from such departments as MIT. In fact, in a remarkably short period of time, rational expectations completely conquered the PhD program at MIT.

More recently Romer, having done graduate work both at MIT and Chicago in the late 1970s, has elaborated on the personal friction between Solow and Lucas and how that friction may have affected Lucas, causing him to disengage from the professional mainstream. Paul Krugman, who was at MIT when this nastiness was happening, is skeptical of Romer’s interpretation.

My own view is that being personally and emotionally attached to one’s own theories, whether for religious or ideological or other non-scientific reasons, is not necessarily a bad thing as long as there are social mechanisms allowing scientists with different scientific viewpoints an opportunity to make themselves heard. If there are such mechanisms, the need for Feynman integrity is minimized, because individual lapses of integrity will be exposed and remedied by criticism from other scientists; scientific progress is possible even if scientists don’t live up to the Feynman standards, and maintain their faith in their theories despite contradictory evidence. But, as I am going to suggest below, there are reasons to doubt that social mechanisms have been operating to discipline – not suppress, just discipline – dubious economic theorizing.

My favorite example of the importance of personal belief in, and commitment to the truth of, one’s own theories is Galileo. As discussed by T. S. Kuhn in The Structure of Scientific Revolutions, Galileo was arguing for a paradigm change in how to think about the universe, despite being confronted by empirical evidence that appeared to refute the Copernican worldview he believed in: the observations that the sun revolves around the earth, and that the earth, as we directly perceive it, is, apart from the occasional earthquake, totally stationary — good old terra firma. Despite that apparently contradictory evidence, Galileo had an alternative vision of the universe in which the obvious movement of the sun in the heavens was explained by the spinning of the earth on its axis, and the stationarity of the earth by the assumption that all our surroundings move along with the earth, rendering its motion imperceptible, our perception of motion being relative to a specific frame of reference.

At bottom, this was an almost metaphysical world view not directly refutable by any simple empirical test. But Galileo adopted this worldview or paradigm, because he deeply believed it to be true, and was therefore willing to defend it at great personal cost, refusing to recant his Copernican view when he could have easily appeased the Church by describing the Copernican theory as just a tool for predicting planetary motion rather than an actual representation of reality. Early empirical tests did not support heliocentrism over geocentrism, but Galileo had faith that theoretical advancements and improved measurements would eventually vindicate the Copernican theory. He was right of course, but strict empiricism would have led to a premature rejection of heliocentrism. Without a deep personal commitment to the Copernican worldview, Galileo might not have articulated the case for heliocentrism as persuasively as he did, and acceptance of heliocentrism might have been delayed for a long time.

Imre Lakatos called such deeply-held views underlying a scientific theory the hard core of the theory (aka scientific research program), a set of beliefs that are maintained despite apparent empirical refutation. The response to any empirical refutation is not to abandon or change the hard core but to adjust what Lakatos called the protective belt of the theory. Eventually, as refutations or empirical anomalies accumulate, the research program may undergo a crisis, leading to its abandonment, or it may simply degenerate if it fails to solve new problems or discover any new empirical facts or regularities. So Romer’s criticism of Lucas’s dogmatic attachment to market clearing – Lucas frequently makes use of ad hoc price stickiness assumptions; I don’t know why Romer identifies market-clearing as a Lucasian dogma — may be no more justified from a history of science perspective than would criticism of Galileo’s dogmatic attachment to heliocentrism.

So while I have many problems with Lucas, lack of Feynman integrity is not really one of them, certainly not in the top ten. What I find more disturbing is his narrow conception of what economics is. As he himself wrote in an autobiographical sketch for Lives of the Laureates, he was bewitched by the beauty and power of Samuelson’s Foundations of Economic Analysis when he read it the summer before starting his training as a graduate student at Chicago in 1960. Although it did not have the transformative effect on me that it had on Lucas, I greatly admire the Foundations, but regardless of whether Samuelson himself meant to suggest such an idea (which I doubt), it is absurd to draw this conclusion from it:

I loved the Foundations. Like so many others in my cohort, I internalized its view that if I couldn’t formulate a problem in economic theory mathematically, I didn’t know what I was doing. I came to the position that mathematical analysis is not one of many ways of doing economic theory: It is the only way. Economic theory is mathematical analysis. Everything else is just pictures and talk.

Oh, come on. Would anyone ever think that unless you can formulate the problem of whether the earth revolves around the sun or the sun around the earth mathematically, you don’t know what you are doing? And, yet, remarkably, on the page following that silly assertion, one finds a totally brilliant description of what it was like to take graduate price theory from Milton Friedman.

Friedman rarely lectured. His class discussions were often structured as debates, with student opinions or newspaper quotes serving to introduce a problem and some loosely stated opinions about it. Then Friedman would lead us into a clear statement of the problem, considering alternative formulations as thoroughly as anyone in the class wanted to. Once formulated, the problem was quickly analyzed—usually diagrammatically—on the board. So we learned how to formulate a model, to think about and decide which features of a problem we could safely abstract from and which he needed to put at the center of the analysis. Here “model” is my term: It was not a term that Friedman liked or used. I think that for him talking about modeling would have detracted from the substantive seriousness of the inquiry we were engaged in, would divert us away from the attempt to discover “what can be done” into a merely mathematical exercise. [my emphasis].

Despite his respect for Friedman, it’s clear that Lucas did not adopt and internalize Friedman’s approach to economic problem solving, but instead internalized the caricature he extracted from Samuelson’s Foundations: that mathematical analysis is the only legitimate way of doing economic theory, and that, in particular, the essence of macroeconomics consists in a combination of axiomatic formalism and philosophical reductionism (microfoundationalism). For Lucas, the only scientifically legitimate macroeconomic models are those that can be deduced from the axiomatized Arrow-Debreu-McKenzie general equilibrium model, with solutions that can be computed and simulated in such a way that the simulations can be matched up against the available macroeconomics time series on output, investment and consumption.

This was both bad methodology and bad science, restricting the formulation of economic problems to those for which mathematical techniques are available to be deployed in finding solutions. On the one hand, the rational-expectations assumption made finding solutions to certain intertemporal models tractable; on the other, the assumption was justified as being required by the rationality assumptions of neoclassical price theory.

In a recent review of Lucas’s Collected Papers on Monetary Theory, Thomas Sargent makes a fascinating reference to Kenneth Arrow’s 1967 review of the first two volumes of Paul Samuelson’s Collected Works in which Arrow referred to the problematic nature of the neoclassical synthesis of which Samuelson was a chief exponent.

Samuelson has not addressed himself to one of the major scandals of current price theory, the relation between microeconomics and macroeconomics. Neoclassical microeconomic equilibrium with fully flexible prices presents a beautiful picture of the mutual articulations of a complex structure, full employment being one of its major elements. What is the relation between this world and either the real world with its recurrent tendencies to unemployment of labor, and indeed of capital goods, or the Keynesian world of underemployment equilibrium? The most explicit statement of Samuelson’s position that I can find is the following: “Neoclassical analysis permits of fully stable underemployment equilibrium only on the assumption of either friction or a peculiar concatenation of wealth-liquidity-interest elasticities. . . . [The neoclassical analysis] goes far beyond the primitive notion that, by definition of a Walrasian system, equilibrium must be at full employment.” . . .

In view of the Phillips curve concept in which Samuelson has elsewhere shown such interest, I take the second sentence in the above quotation to mean that wages are stationary whenever unemployment is X percent, with X positive; thus stationary unemployment is possible. In general, one can have a neoclassical model modified by some elements of price rigidity which will yield Keynesian-type implications. But such a model has yet to be constructed in full detail, and the question of why certain prices remain rigid becomes of first importance. . . . Certainly, as Keynes emphasized the rigidity of prices has something to do with the properties of money; and the integration of the demand and supply of money with general competitive equilibrium theory remains incomplete despite attempts beginning with Walras himself.

If the neoclassical model with full price flexibility were sufficiently unrealistic that stable unemployment equilibrium be possible, then in all likelihood the bulk of the theorems derived by Samuelson, myself, and everyone else from the neoclassical assumptions are also contrafactual. The problem is not resolved by what Samuelson has called “the neoclassical synthesis,” in which it is held that the achievement of full employment requires Keynesian intervention but that neoclassical theory is valid when full employment is reached. . . .

Obviously, I believe firmly that the mutual adjustment of prices and quantities represented by the neoclassical model is an important aspect of economic reality worthy of the serious analysis that has been bestowed on it; and certain dramatic historical episodes – most recently the reconversion of the United States from World War II and the postwar European recovery – suggest that an economic mechanism exists which is capable of adaptation to radical shifts in demand and supply conditions. On the other hand, the Great Depression and the problems of developing countries remind us dramatically that something beyond, but including, neoclassical theory is needed.

Perhaps in a future post, I may discuss this passage, including a few sentences that I have omitted here, in greater detail. For now I will just say that Arrow’s reference to a “neoclassical microeconomic equilibrium with fully flexible prices” seems very strange inasmuch as price flexibility has absolutely no role in the proofs of the existence of a competitive general equilibrium for which Arrow and Debreu and McKenzie are justly famous. All the theorems Arrow et al. proved about the neoclassical equilibrium were related to existence, uniqueness and optimality of an equilibrium supported by an equilibrium set of prices. Price flexibility was not involved in those theorems, because the theorems had nothing to do with how prices adjust in response to a disequilibrium situation. What makes this juxtaposition of neoclassical microeconomic equilibrium with fully flexible prices even more remarkable is that about eight years earlier Arrow wrote a paper (“Toward a Theory of Price Adjustment”) whose main concern was the lack of any theory of price adjustment in competitive equilibrium, about which I will have more to say below.

Sargent also quotes from two lectures in which Lucas referred to Don Patinkin’s treatise Money, Interest and Prices which provided perhaps the definitive statement of the neoclassical synthesis Samuelson espoused. In one lecture (“My Keynesian Education” presented to the History of Economics Society in 2003) Lucas explains why he thinks Patinkin’s book did not succeed in its goal of integrating value theory and monetary theory:

I think Patinkin was absolutely right to try and use general equilibrium theory to think about macroeconomic problems. Patinkin and I are both Walrasians, whatever that means. I don’t see how anybody can not be. It’s pure hindsight, but now I think that Patinkin’s problem was that he was a student of Lange’s, and Lange’s version of the Walrasian model was already archaic by the end of the 1950s. Arrow and Debreu and McKenzie had redone the whole theory in a clearer, more rigorous, and more flexible way. Patinkin’s book was a reworking of his Chicago thesis from the middle 1940s and had not benefited from this more recent work.

In the other lecture, his 2003 Presidential address to the American Economic Association, Lucas commented further on why Patinkin fell short in his quest to unify monetary and value theory:

When Don Patinkin gave his Money, Interest, and Prices the subtitle “An Integration of Monetary and Value Theory,” value theory meant, to him, a purely static theory of general equilibrium. Fluctuations in production and employment, due to monetary disturbances or to shocks of any other kind, were viewed as inducing disequilibrium adjustments, unrelated to anyone’s purposeful behavior, modeled with vast numbers of free parameters. For us, today, value theory refers to models of dynamic economies subject to unpredictable shocks, populated by agents who are good at processing information and making choices over time. The macroeconomic research I have discussed today makes essential use of value theory in this modern sense: formulating explicit models, computing solutions, comparing their behavior quantitatively to observed time series and other data sets. As a result, we are able to form a much sharper quantitative view of the potential of changes in policy to improve peoples’ lives than was possible a generation ago.

So, as Sargent observes, Lucas recreated an updated neoclassical synthesis of his own based on the intertemporal Arrow-Debreu-McKenzie version of the Walrasian model, augmented by a rationale for the holding of money and perhaps some form of monetary policy, via the assumption of credit-market frictions and sticky prices. Despite the repudiation of the updated neoclassical synthesis by his friend Edward Prescott, for whom monetary policy is irrelevant, Lucas clings to neoclassical synthesis 2.0. Sargent quotes this passage from Lucas’s 1994 retrospective review of A Monetary History of the US by Friedman and Schwartz to show how tightly Lucas clings to neoclassical synthesis 2.0:

In Kydland and Prescott’s original model, and in many (though not all) of its descendants, the equilibrium allocation coincides with the optimal allocation: Fluctuations generated by the model represent an efficient response to unavoidable shocks to productivity. One may thus think of the model not as a positive theory suited to all historical time periods but as a normative benchmark providing a good approximation to events when monetary policy is conducted well and a bad approximation when it is not. Viewed in this way, the theory’s relative success in accounting for postwar experience can be interpreted as evidence that postwar monetary policy has resulted in near-efficient behavior, not as evidence that money doesn’t matter.

Indeed, the discipline of real business cycle theory has made it more difficult to defend real alternatives to a monetary account of the 1930s than it was 30 years ago. It would be a term-paper-size exercise, for example, to work out the possible effects of the 1930 Smoot-Hawley Tariff in a suitably adapted real business cycle model. By now, we have accumulated enough quantitative experience with such models to be sure that the aggregate effects of such a policy (in an economy with a 5% foreign trade sector before the Act and perhaps a percentage point less after) would be trivial.

Nevertheless, in the absence of some catastrophic error in monetary policy, Lucas evidently believes that the key features of the Arrow-Debreu-McKenzie model are closely approximated in the real world. That may well be true. But if it is, Lucas has no real theory to explain why.

In the 1959 paper (“Toward a Theory of Price Adjustment”) that I just mentioned, Arrow noted that the theory of competitive equilibrium has no explanation of how equilibrium prices are actually set. Indeed, the idea of competitive price adjustment is beset by a paradox: all agents in a general equilibrium being assumed to be price takers, how is it that a new equilibrium price is ever arrived at following any disturbance to an initial equilibrium? Arrow had no answer to the question, but offered the suggestion that, out of equilibrium, agents are not price takers, but price searchers, possessing some measure of market power to set price in the transition between the old and new equilibrium. But the upshot of Arrow’s discussion was that the problem and the paradox awaited solution. Almost sixty years on, some of us are still waiting, but for Lucas and the Lucasians, there is neither problem nor paradox, because the actual price is the equilibrium price, and the equilibrium price is always the (rationally) expected price.

If the social functions of science were being efficiently discharged, this rather obvious replacement of problem solving by question begging would not have escaped effective challenge and opposition. But Lucas was able to provide cover for this substitution by persuading the profession to embrace his microfoundational methodology, while offering irresistible opportunities for professional advancement to younger economists who could master the new analytical techniques that Lucas and others were rapidly introducing, thereby neutralizing or coopting many of the natural opponents to what became modern macroeconomics. So while Romer considers the conquest of MIT by the rational-expectations revolution, despite the opposition of Robert Solow, to be evidence for the advance of economic science, I regard it as a sign of the social failure of science to discipline a regressive development driven by the elevation of technique over substance.

Krugman’s Second Best

A couple of days ago Paul Krugman discussed “Second-best Macroeconomics” on his blog. I have no real quarrel with anything he said, but I would like to amplify his discussion of what is sometimes called the problem of second-best, because I think the problem of second best has some really important implications for macroeconomics beyond the limited application of the problem that Krugman addressed. The basic idea underlying the problem of second best is not that complicated, but it has many applications, and what made the 1956 paper (“The General Theory of Second Best”) by R. G. Lipsey and Kelvin Lancaster a classic was that it showed how a number of seemingly disparate problems were really all applications of a single unifying principle. Here’s how Krugman frames his application of the second-best problem.

[T]he whole western world has spent years suffering from a severe shortfall of aggregate demand; in Europe a severe misalignment of national costs and prices has been overlaid on this aggregate problem. These aren’t hard problems to diagnose, and simple macroeconomic models — which have worked very well, although nobody believes it — tell us how to solve them. Conventional monetary policy is unavailable thanks to the zero lower bound, but fiscal policy is still on tap, as is the possibility of raising the inflation target. As for misaligned costs, that’s where exchange rate adjustments come in. So no worries: just hit the big macroeconomic That Was Easy button, and soon the troubles will be over.

Except that all the natural answers to our problems have been ruled out politically. Austerians not only block the use of fiscal policy, they drive it in the wrong direction; a rise in the inflation target is impossible given both central-banker prejudices and the power of the goldbug right. Exchange rate adjustment is blocked by the disappearance of European national currencies, plus extreme fear over technical difficulties in reintroducing them.

As a result, we’re stuck with highly problematic second-best policies like quantitative easing and internal devaluation.

I might quibble with Krugman about the quality of the available macroeconomic models, by which I am less impressed than he is, but that’s really beside the point of this post, so I won’t even go there. But I can’t let the comment about the inflation target pass without observing that it’s not just “central-banker prejudices” and the “goldbug right” that are to blame for the failure to raise the inflation target; for reasons that I don’t claim to understand myself, the political consensus in both Europe and the US in favor of perpetually low or zero inflation has been supported with scarcely any less fervor by the left than by the right. It’s only some eccentric economists – from diverse positions on the political spectrum – who have been making the case for inflation as a recovery strategy. So the political failure has been uniform across the political spectrum.

OK, having registered my factual disagreement with Krugman about the source of our anti-inflationary intransigence, I can now get to the main point. Here’s Krugman:

“[S]econd best” is an economic term of art. It comes from a classic 1956 paper by Lipsey and Lancaster, which showed that policies which might seem to distort markets may nonetheless help the economy if markets are already distorted by other factors. For example, suppose that a developing country’s poorly functioning capital markets are failing to channel savings into manufacturing, even though it’s a highly profitable sector. Then tariffs that protect manufacturing from foreign competition, raise profits, and therefore make more investment possible can improve economic welfare.

The problems with second best as a policy rationale are familiar. For one thing, it’s always better to address existing distortions directly, if you can — second best policies generally have undesirable side effects (e.g., protecting manufacturing from foreign competition discourages consumption of industrial goods, may reduce effective domestic competition, and so on). . . .

But here we are, with anything resembling first-best macroeconomic policy ruled out by political prejudice, and the distortions we’re trying to correct are huge — one global depression can ruin your whole day. So we have quantitative easing, which is of uncertain effectiveness, probably distorts financial markets at least a bit, and gets trashed all the time by people stressing its real or presumed faults; someone like me is then put in the position of having to defend a policy I would never have chosen if there seemed to be a viable alternative.

In a deep sense, I think the same thing is involved in trying to come up with less terrible policies in the euro area. The deal that Greece and its creditors should have reached — large-scale debt relief, primary surpluses kept small and not ramped up over time — is a far cry from what Greece should and probably would have done if it still had the drachma: big devaluation now. The only way to defend the kind of thing that was actually on the table was as the least-worst option given that the right response was ruled out.

That’s one example of a second-best problem, but it’s only one of a variety of problems, and not, it seems to me, the most macroeconomically interesting. So here’s the second-best problem that I want to discuss: given one distortion (i.e., a departure from one of the conditions for Pareto-optimality), reaching a second-best sub-optimum requires violating other – likely all the other – conditions for reaching the first-best (Pareto) optimum. The strategy for getting to the second-best suboptimum cannot be to achieve as many of the conditions for reaching the first-best optimum as possible; the conditions for reaching the second-best optimum are in general totally different from the conditions for reaching the first-best optimum.
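
Stated a bit more formally (a minimal sketch in my notation, in the spirit of Lipsey and Lancaster): suppose welfare U(x_1, …, x_n) is maximized subject to a production constraint F(x_1, …, x_n) = 0. The first-best optimum satisfies

\[ \frac{U_i}{U_n} = \frac{F_i}{F_n}, \qquad i = 1, \dots, n-1. \]

If some unremovable distortion forces, say, U_1/U_n = k·F_1/F_n with k ≠ 1, then the constrained (second-best) optimum will in general require that U_i/U_n differ from F_i/F_n for the other goods as well. Satisfying the remaining first-best conditions while one of them is violated is not, in general, the best that can be done.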

So what’s the deeper macroeconomic significance of the second-best principle?

I would put it this way. Suppose there’s a pre-existing macroeconomic equilibrium, all necessary optimality conditions between marginal rates of substitution in production and consumption and relative prices being satisfied. Let the initial equilibrium be subjected to a macroeconomic disturbance. The disturbance will immediately affect a range — possibly all — of the individual markets, and all optimality conditions will change, so that no market will be unaffected when a new optimum is realized. But while optimality for the system as a whole requires that prices adjust in such a way that the optimality conditions are satisfied in all markets simultaneously, each price adjustment that actually occurs is a response to the conditions in a single market – the relationship between amounts demanded and supplied at the existing price. Each price adjustment being a response to a supply-demand imbalance in an individual market, there is no theory to explain how a process of price adjustment in real time will ever restore an equilibrium in which all optimality conditions are simultaneously satisfied.

Invoking a general Smithian invisible-hand theorem won’t work, because, in this context, the invisible-hand theorem tells us only that if an equilibrium price vector were reached, the system would be in an optimal state of rest with no tendency to change. The invisible-hand theorem provides no account of how the equilibrium price vector is discovered by any price-adjustment process in real time. (And even tatonnement, a non-real-time process, is not guaranteed to work, as shown by the Sonnenschein-Mantel-Debreu theorem.) With price adjustment in each market entirely governed by the demand-supply imbalance in that market, market prices determined in individual markets need not ensure that all markets clear simultaneously or satisfy the optimality conditions.

Now it’s true that we have a simple theory of price adjustment for single markets: prices rise if there’s an excess demand and fall if there’s an excess supply. If demand and supply curves have normal slopes, the simple price adjustment rule moves the price toward equilibrium. But that partial-equilibrium story is contingent on the implicit assumption that all other markets are in equilibrium. When all markets are in disequilibrium, moving toward equilibrium in one market will have repercussions on other markets, and the simple story of how price adjustment in response to a disequilibrium restores equilibrium breaks down, because market conditions in every market depend on market conditions in every other market. So unless all markets arrive at equilibrium simultaneously, there’s no guarantee that equilibrium will obtain in any of the markets. Disequilibrium in any market can mean disequilibrium in every market. And if a single market is out of kilter, the second-best, suboptimal solution for the system is totally different from the first-best solution for all markets.
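
Just to fix ideas, here is what that simple single-market adjustment rule looks like numerically. This is only a toy example of my own (linear demand and supply curves and an arbitrary adjustment speed, not anyone’s estimated model), and it illustrates the partial-equilibrium story only, holding every other market fixed.

# Toy Walrasian price adjustment in one isolated market:
# raise the price when demand exceeds supply, lower it when supply exceeds demand.

def demand(p):
    return 100 - 2 * p      # downward-sloping demand curve

def supply(p):
    return 10 + 4 * p       # upward-sloping supply curve

p = 5.0                     # start well below the market-clearing price (p* = 15)
for step in range(50):
    excess_demand = demand(p) - supply(p)
    p += 0.1 * excess_demand    # price rises with excess demand, falls with excess supply

print(round(p, 3))          # converges to the equilibrium price of 15

With normally sloped curves and an adjustment speed small enough to avoid overshooting, the rule takes the price straight to its market-clearing level; the trouble starts only when other markets are out of equilibrium too.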

In the standard microeconomics we are taught in Econ 1 and Econ 101, all these complications are assumed away by restricting the analysis of price adjustment to a single market. In other words, as I have pointed out in a number of previous posts (here and here), standard microeconomics is built on macroeconomic foundations, and the currently fashionable demand for macroeconomics to be microfounded turns out to be based on question-begging circular reasoning. Partial equilibrium is a wonderful pedagogical device, and it is an essential tool in applied microeconomics, but its limitations are often misunderstood or ignored.

An early macroeconomic application of the theory of second best is the observation of the quintessentially orthodox pre-Keynesian Cambridge economist Frederick Lavington, who wrote in his book The Trade Cycle that “the inactivity of all is the cause of the inactivity of each.” Each successive departure from the conditions for second-, third-, fourth-, and eventually nth-best sub-optima has additional negative feedback effects on the rest of the economy, moving it further and further away from a Pareto-optimal equilibrium with maximum output and full employment. The fewer people are employed, the more difficult it becomes for anyone to find employment.

This insight was actually admirably, if inexactly, expressed by Say’s Law: supply creates its own demand. The cause of the cumulative contraction of output in a depression is not, as was often suggested, that too much output had been produced, but a breakdown of coordination, in which disequilibrium spreads in epidemic fashion from market to market, leaving individual transactors unable to compensate by altering the terms on which they are prepared to supply goods and services. In diagnosing the cause of depressions, then, Say was not guilty of the fallacy of which Keynes and the Keynesians accused him. The real fallacy was the assumption that market adjustments, in particular a partial-equilibrium response like a fall in money wages, could by themselves remedy a general-disequilibrium disorder and automatically restore something resembling full-employment equilibrium.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he just stopped reading after the first few pages. Then, in a fit of righteous indignation, he wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science, as if that were the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — indeed, one of the most popular posts I have written was an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input prices and most output prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many of the explanations offered for it, including those of Ball and Mankiw. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so the “mechanical” forces tending to move an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
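
The possibility that the standard rule fails even under tatonnement is not just an abstract caveat. The best-known illustration is Herbert Scarf’s 1960 example of a three-good exchange economy in which each price responds only to its own excess demand and the process cycles around the equilibrium forever instead of converging to it. Here is a minimal numerical sketch of that example as I understand it (my own coding, with an arbitrary starting point and step size):

# Scarf's (1960) three-good exchange economy: three consumers, consumer i endowed
# with one unit of good i and wanting goods i and i+1 only in fixed one-to-one
# proportions. Each price is adjusted in response to its own excess demand alone,
# yet the price vector orbits the equal-price equilibrium instead of converging.

import numpy as np

def excess_demand(p):
    p1, p2, p3 = p
    z1 = p1 / (p1 + p2) + p3 / (p3 + p1) - 1.0   # excess demand for good 1
    z2 = p1 / (p1 + p2) + p2 / (p2 + p3) - 1.0   # excess demand for good 2
    z3 = p2 / (p2 + p3) + p3 / (p3 + p1) - 1.0   # excess demand for good 3
    return np.array([z1, z2, z3])

p = np.array([1.5, 1.0, 0.5])        # start away from the equal-price equilibrium
dt = 0.001
for _ in range(200_000):
    p = p + dt * excess_demand(p)    # raise each price where demand exceeds supply

# After a long run the relative prices are still far from equal:
print(np.round(p, 3), np.round(p / p.mean(), 3))

Each market obeys the textbook rule, an equilibrium exists, and yet the rule never finds it; the problem is not the rule in any single market but the interaction among markets.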

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the adjustment process must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Traffic Jams and Multipliers

Since my previous post, which I closed by quoting the abstract of Brian Arthur’s paper “Complexity Economics: A Different Framework for Economic Thought,” I have been reading his paper and some of the papers he cites, especially Magda Fontana’s paper “The Santa Fe Perspective on Economics: Emerging Patterns in the Science of Complexity,” and Mark Blaug’s paper “The Formalist Revolution of the 1950s.” The papers bring together a number of themes that I have been emphasizing in previous posts on what I consider the misguided focus of modern macroeconomics on rational-expectations equilibrium as the organizing principle of macroeconomic theory. Among these themes are the importance of coordination failures in explaining macroeconomic fluctuations; the inappropriateness of the full general-equilibrium paradigm in macroeconomics; and the mistaken transformation of microfoundations from a theoretical problem to be solved into an absolute methodological requirement to be insisted upon (almost exactly analogous to the absurd transformation of the mind-body problem into a dogmatic insistence that the mind is merely a figment of our own imagination), or, stated another way, a recognition that macrofoundations are just as necessary for economics as microfoundations.

Let me quote again from Arthur’s essay, this time a beautiful passage that captures the interdependence between the micro and macro perspectives:

To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents’ behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity in other words asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question; we are asking how a process is created from the purposed actions of multiple agents. And so economics early in its history took a simpler approach, one more amenable to mathematical analysis. It asked not how agents’ behaviors would react to the aggregate patterns these created, but what behaviors (actions, strategies, expectations) would be upheld by — would be consistent with — the aggregate patterns these caused. It asked in other words what patterns would call for no changes in microbehavior, and would therefore be in stasis, or equilibrium. (General equilibrium theory thus asked what prices and quantities of goods produced and consumed would be consistent with — would pose no incentives for change to — the overall pattern of prices and quantities in the economy’s markets. Classical game theory asked what strategies, moves, or allocations would be consistent with — would be the best course of action for an agent (under some criterion) — given the strategies, moves, allocations his rivals might choose. And rational expectations economics asked what expectations would be consistent with — would on average be validated by — the outcomes these expectations together created.)

This equilibrium shortcut was a natural way to examine patterns in the economy and render them open to mathematical analysis. It was an understandable — even proper — way to push economics forward. And it achieved a great deal. Its central construct, general equilibrium theory, is not just mathematically elegant; in modeling the economy it re-composes it in our minds, gives us a way to picture it, a way to comprehend the economy in its wholeness. This is extremely valuable, and the same can be said for other equilibrium modelings: of the theory of the firm, of international trade, of financial markets.

But there has been a price for this equilibrium finesse. Economists have objected to it — to the neoclassical construction it has brought about — on the grounds that it posits an idealized, rationalized world that distorts reality, one whose underlying assumptions are often chosen for analytical convenience. I share these objections. Like many economists, I admire the beauty of the neoclassical economy; but for me the construct is too pure, too brittle — too bled of reality. It lives in a Platonic world of order, stasis, knowableness, and perfection. Absent from it is the ambiguous, the messy, the real. (pp. 2-3)

Later in the essay, Arthur provides a simple example of a non-equilibrium complex process: traffic flow.

A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see. But in practice at high density, a nonequilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down. This immediately compresses the flow, which causes further slowing of the cars behind. The compression propagates backwards, traffic backs up, and a jam emerges. In due course the jam clears. But notice three things. The phenomenon’s onset is spontaneous; each instance of it is unique in time of appearance, length of propagation, and time of clearing. It is therefore not easily captured by closed-form solutions, but best studied by probabilistic or statistical methods. Second, the phenomenon is temporal, it emerges or happens within time, and cannot appear if we insist on equilibrium. And third, the phenomenon occurs neither at the micro-level (individual car level) nor at the macro-level (overall flow on the road) but at a level in between — the meso-level. (p. 9)
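
Arthur’s description can be turned into a toy simulation. What follows is not his model (his paper does not spell one out in code) but a minimal cellular-automaton traffic model in the spirit of Nagel and Schreckenberg, which exhibits the phenomenon he describes: at high densities an occasional random slowdown by one driver propagates backward as a phantom jam.

# A minimal Nagel-Schreckenberg-style traffic model on a circular road:
# cars accelerate toward a maximum speed, slow down to avoid the car ahead,
# and occasionally dawdle at random. At high density a single random slowdown
# propagates backward as a jam, even though no car "intends" to stop.

import random

ROAD_LENGTH = 200      # number of cells on the ring road
NUM_CARS = 60          # high density: 0.3 cars per cell
V_MAX = 5              # maximum speed (cells per tick)
P_SLOW = 0.2           # probability that a driver randomly slows down

random.seed(0)
positions = sorted(random.sample(range(ROAD_LENGTH), NUM_CARS))
speeds = [0] * NUM_CARS

def step(positions, speeds):
    n = len(positions)
    new_speeds = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % ROAD_LENGTH
        v = min(speeds[i] + 1, V_MAX)            # accelerate
        v = min(v, gap)                          # don't hit the car ahead
        if v > 0 and random.random() < P_SLOW:   # random dawdling
            v -= 1
        new_speeds.append(v)
    new_positions = [(positions[i] + new_speeds[i]) % ROAD_LENGTH for i in range(n)]
    return new_positions, new_speeds

for t in range(300):
    positions, speeds = step(positions, speeds)
    if t % 50 == 0:
        stopped = sum(1 for v in speeds if v == 0)
        print(f"t={t:3d}  average speed={sum(speeds) / NUM_CARS:.2f}  stopped cars={stopped}")

At low densities the random slowdowns simply dissipate and average speed stays near the maximum; at the high density used here they do not, and jams appear and dissolve spontaneously, each one different in timing and extent, which is just the meso-level phenomenon Arthur describes.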

This simple example provides an excellent insight into why macroeconomic reasoning can be led badly astray by focusing on the purely equilibrium relationships characterizing what we now think of as microfounded models. In arguing against the Keynesian multiplier analysis supposedly justifying increased government spending as a countercyclical tool, Robert Barro wrote the following in an unfortunate Wall Street Journal op-ed piece, which I have previously commented on here and here.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.
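
For what it is worth, the arithmetic behind a multiplier of roughly two is just the familiar textbook geometric series. With a marginal propensity to consume of one-half (my illustrative number, chosen only to make the multiplier come out at two, not a figure taken from Barro or Vilsack), a $1 billion injection implies

\[ \Delta Y = \Delta G\,(1 + c + c^2 + \cdots) = \frac{\Delta G}{1 - c} = \frac{\$1\ \text{billion}}{1 - 0.5} = \$2\ \text{billion}, \]

so the “extra” $1 billion to which Barro objects is just the induced consumption spending summed over successive rounds. Whether those rounds add to output or merely displace other spending is, of course, exactly what is at issue.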

In the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce, just as somehow withdrawing a few cars from a congested road could increase the average speed of all the remaining cars on the road, by a much greater amount than would withdrawing a few cars from an uncongested road. In other words, microresponses may be sensitive to macroconditions.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey, and I hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
