Archive for the 'comparative statics' Category

Hayek and Intertemporal Equilibrium

I am starting to write a paper on Hayek and intertemporal equilibrium, and as I write it over the next couple of weeks, I am going to post sections of it on this blog. Comments from readers will be even more welcome than usual, and I will do my utmost to reply to comments, a goal that, I am sorry to say, I have not been living up to in my recent posts.

The idea of equilibrium is an essential concept in economics. It is an essential concept in other sciences as well, but its meaning in economics is not the same as in other disciplines. The concept having originally been borrowed from physics, the meaning first attached to it by economists corresponded to the notion of a system at rest, and it took a long time for economists to see that viewing an economy as a system at rest was not the only, or even the most useful, way of applying the equilibrium concept to economic phenomena.

What would it mean for an economic system to be at rest? The obvious answer was to say that prices and quantities would not change. If supply equals demand in every market, and if no exogenous change is introduced into the system, e.g., in population, technology, tastes, etc., it would seem that there would be no reason for the prices paid and quantities produced to change in that system. But that view of an economic system was a very restrictive one, because such a large share of economic activity – savings and investment – is predicated on the assumption and expectation of change.

The model of a stationary economy at rest in which all economic activity simply repeats what has already happened before did not seem very satisfying or informative, but that was the view of equilibrium that originally took hold in economics. The idea of a stationary timeless equilibrium can be traced back to the classical economists, especially Ricardo and Mill, who wrote about the long-run tendency of an economic system toward a stationary state. But it was the introduction by Jevons, Menger, Walras and their followers of the idea of optimizing decisions by rational consumers and producers that provided the key insight for a more robust and fruitful version of the equilibrium concept.

If each economic agent (household or business firm) is viewed as making optimal choices based on some scale of preferences subject to limitations or constraints imposed by their capacities, endowments, technology and the legal system, then the equilibrium of an economy must describe a state in which each agent, given his own subjective ranking of the feasible alternatives, is making an optimal decision, and those optimal decisions are consistent with those of all other agents. The optimal decisions of each agent must simultaneously be optimal from the point of view of that agent while also being consistent, or compatible, with the optimal decisions of every other agent. In other words, the decisions of all buyers of how much to purchase must be consistent with the decisions of all sellers of how much to sell.

The idea of an equilibrium as a set of independently conceived, mutually consistent optimal plans was latent in the earlier notions of equilibrium, but it could not be articulated until a concept of optimality had been defined. That concept was utility maximization and it was further extended to include the ideas of cost minimization and profit maximization. Once the idea of an optimal plan was worked out, the necessary conditions for the mutual consistency of optimal plans could be articulated as the necessary conditions for a general economic equilibrium. Once equilibrium was defined as the consistency of optimal plans, the path was clear to define an intertemporal equilibrium as the consistency of optimal plans extending over time. Because current goods and services and otherwise identical goods and services in the future could be treated as economically distinct goods and services, defining the conditions for an intertemporal equilibrium was formally almost equivalent to defining the conditions for a static, stationary equilibrium. Just as the conditions for a static equilibrium could be stated in terms of equalities between the marginal rates of substitution of goods in consumption and in production and their corresponding price ratios, an intertemporal equilibrium could be stated in terms of equalities between the marginal rates of intertemporal substitution in consumption and in production and their corresponding intertemporal price ratios.

The only formal adjustment required in the necessary conditions for static equilibrium to be extended to intertemporal equilibrium was to recognize that, inasmuch as future prices (typically) are unobservable, and hence unknown to economic agents, the intertemporal price ratios cannot be ratios between actual current prices and actual future prices, but, instead, ratios between current prices and expected future prices. From this it followed that for optimal plans to be mutually consistent, all economic agents must have the same expectations of the future prices in terms of which their plans were optimized.
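
To illustrate (using my own shorthand, not Hayek’s notation), the static condition that the marginal rate of substitution between any two goods equal their relative price,

MRS_{x,y} = p_x / p_y,

carries over to the intertemporal case simply by dating the goods and replacing the unknown future price with an expected price:

MRS_{c_t, c_{t+1}} = p_t / p^e_{t+1}.

For the optimal plans of different agents to be mutually consistent, the expected price p^e_{t+1} entering each agent’s optimization must be the same for every agent.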

The concept of an intertemporal equilibrium was first presented in English by F. A. Hayek in his 1937 article “Economics and Knowledge.” But it was through J. R. Hicks’s Value and Capital published two years later in 1939 that the concept became more widely known and understood. In explaining and applying the concept of intertemporal equilibrium and introducing the derivative concept of a temporary equilibrium in which current markets clear, but individual expectations of future prices are not the same, Hicks did not claim originality, but instead of crediting Hayek for the concept, or even mentioning Hayek’s 1937 paper, Hicks credited the Swedish economist Erik Lindahl, who had published articles in the early 1930s in which he had articulated the concept. But although Lindahl had published his important work on intertemporal equilibrium before Hayek’s 1937 article, Hayek had already explained the concept in a 1928 article “Das intertemporale Gleichgewichtssystem der Preise und die Bewegungen des ‘Geldwertes.’” (English translation: “Intertemporal price equilibrium and movements in the value of money.”)

Having been a junior colleague of Hayek’s in the early 1930s when Hayek arrived at the London School of Economics, and having come very much under Hayek’s influence for a few years before moving in a different theoretical direction in the mid-1930s, Hicks was certainly aware of Hayek’s work on intertemporal equilibrium, so it has long been a puzzle to me why Hicks did not credit Hayek along with Lindahl for having developed the concept of intertemporal equilibrium. It might be worth pursuing that question, but I mention it now only as an aside, in the hope that someone else might find it interesting and worthwhile to try to find a solution to that puzzle. As a further aside, I will mention that Murray Milgate in a 1979 article “On the Origin of the Notion of ‘Intertemporal Equilibrium’” has previously tried to redress the failure to credit Hayek’s role in introducing the concept of intertemporal equilibrium into economic theory.

What I am going to discuss here and in future posts are three distinct ways in which the concept of intertemporal equilibrium has been developed since Hayek’s early work – his 1928 and 1937 articles but also his 1941 discussion of intertemporal equilibrium in The Pure Theory of Capital. Of course, the best known development of the concept of intertemporal equilibrium is the Arrow-Debreu-McKenzie (ADM) general-equilibrium model. But although it can be thought of as a model of intertemporal equilibrium, the ADM model is set up in such a way that all economic decisions are taken before the clock even starts ticking; the transactions that are executed once the clock does start simply follow a pre-determined script. In the ADM model, the passage of time is a triviality, merely a way of recording the sequential order of the predetermined production and consumption activities. This feat is accomplished by assuming that all agents are present at time zero with their property endowments in hand and capable of transacting – but conditional on the determination of an equilibrium price vector that allows all optimal plans to be simultaneously executed over the entire duration of the model – in a complete set of markets (including state-contingent markets covering the entire range of contingent events that will unfold in the course of time whose outcomes could affect the wealth or well-being of any agent, with the probabilities associated with every contingent event known in advance).

Just as identical goods in different physical locations or different time periods can be distinguished as different commodities that can be purchased at different prices for delivery at specific times and places, identical goods can be distinguished under different states of the world (ice cream on July 4, 2017 in Washington DC at 2pm only if the temperature is greater than 90 degrees). Given the complete set of state-contingent markets and the known probabilities of the contingent events, an equilibrium price vector for the complete set of markets would give rise to optimal trades reallocating the risks associated with future contingent events and to an optimal allocation of resources over time. Although the ADM model is an intertemporal model only in a limited sense, it does provide an ideal benchmark describing the characteristics of a set of mutually consistent optimal plans.
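
In formal terms (standard textbook notation, not anything peculiar to Arrow, Debreu, or McKenzie), a commodity in the ADM model is indexed by a triple (i, t, s) – good, date, and state of the world – and each agent faces a single budget constraint at time zero:

Σ_{i,t,s} p_{i,t,s} x_{i,t,s} ≤ Σ_{i,t,s} p_{i,t,s} ω_{i,t,s},

where the x’s are planned state-contingent purchases, the ω’s are endowments, and the p’s are prices, quoted at time zero, for delivery of good i at date t in state s. All optimization takes place against this single constraint before any trading occurs, which is what reduces the subsequent passage of time to a mere recording device.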

The seminal work of Roy Radner in relaxing some of the extreme assumptions of the ADM model puts Hayek’s contribution to the understanding of the necessary conditions for an intertemporal equilibrium into proper perspective. At an informal level, Hayek was addressing the same kinds of problems that Radner analyzed with far more powerful analytical tools than were available to Hayek. But they were both concerned with a common problem: under what conditions could an economy with an incomplete set of markets be said to be in a state of intertemporal equilibrium? In an economy lacking the full set of forward and state-contingent markets characterizing the ADM model, intertemporal equilibrium cannot be predetermined before trading even begins, but must, if such an equilibrium obtains, unfold through the passage of time. Outcomes might be expected, but they would not be predetermined in advance. Echoing Hayek, though to my knowledge he does not refer to Hayek in his work, Radner describes his intertemporal equilibrium under uncertainty as an equilibrium of plans, prices, and price expectations. Even if it exists, the Radner equilibrium is not the same as the ADM equilibrium, because without a full set of markets, agents can’t fully hedge against, or insure against, all the risks to which they are exposed. The distinction between ex ante and ex post is not eliminated in the Radner equilibrium, though it is eliminated in the ADM equilibrium.

Additionally, because all trades in the ADM model have been executed before “time” begins, it seems impossible to rationalize holding any asset whose only use is to serve as a medium of exchange. In his early writings on business cycles, e.g., Monetary Theory and the Trade Cycle, Hayek questioned whether it would be possible to rationalize the holding of money in the context of a model of full equilibrium, suggesting that monetary exchange, by severing the link between aggregate supply and aggregate demand characteristic of a barter economy as described by Say’s Law, was the source of systematic deviations from the intertemporal equilibrium corresponding to the solution of a system of Walrasian equations. Hayek suggested that progress in analyzing economic fluctuations would be possible only if the Walrasian equilibrium method could somehow be extended to accommodate the existence of money, uncertainty, and other characteristics of the real world while maintaining the analytical discipline imposed by the equilibrium method and the optimization principle. It proved to be a task requiring resources that were beyond those at Hayek’s, or probably anyone else’s, disposal at the time. But it would be wrong to fault Hayek for having had the insight to perceive and frame a problem that was beyond his capacity to solve. What he may be criticized for is mistakenly believing that he had in fact grasped the general outlines of a solution when he had only perceived some aspects of the solution, and for offering seriously inappropriate policy recommendations based on that incomplete understanding.

In Value and Capital, Hicks also expressed doubts about whether it would be possible to analyze the economic fluctuations characterizing the business cycle using a model of pure intertemporal equilibrium. He proposed an alternative approach for analyzing fluctuations which he called the method of temporary equilibrium. The essence of the temporary-equilibrium method is to analyze the behavior of an economy under the assumption that all markets for current delivery clear (in some not entirely clear sense of the term “clear”) while understanding that demand and supply in current markets depend not only on current prices but also upon expected future prices, and that the failure of current prices to equal what they had been expected to be is a potential cause for the plans that economic agents are trying to execute to be modified and possibly abandoned. In The Pure Theory of Capital, Hayek discussed Hicks’s temporary-equilibrium method as a possible way of achieving the modification of the Walrasian method that he himself had proposed in Monetary Theory and the Trade Cycle. But after a brief critical discussion of the method, he dismissed it for reasons that remain obscure. Hayek’s rejection of the temporary-equilibrium method seems in retrospect to have been one of his worst theoretical – or perhaps meta-theoretical – blunders.

Decades later, C. J. Bliss developed the concept of temporary equilibrium to show that the temporary-equilibrium method can rationalize both the holding of an asset purely for its services as a medium of exchange and the existence of financial intermediaries (private banks) that supply financial assets held exclusively to serve as a medium of exchange. In such a temporary-equilibrium model with financial intermediaries, it seems possible to model not only the existence of private suppliers of a medium of exchange, but also the conditions – in a very general sense – under which the system of financial intermediaries breaks down. The key variable, of course, is the vector of expected prices subject to which the plans of individual households, business firms, and financial intermediaries are optimized. The critical point that emerges from Bliss’s analysis is that there are sets of expected prices which, if held by agents, are inconsistent with the existence of even a temporary equilibrium. Thus, when expectations are of that kind, price flexibility in current markets cannot, in principle, result in even a temporary equilibrium, because there is no vector of current prices in markets for present delivery that solves the temporary-equilibrium system. Even perfect price flexibility doesn’t lead to equilibrium if the equilibrium does not exist. And the equilibrium cannot exist if price expectations are in some sense “too far out of whack.”
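
Schematically (a bare-bones sketch, not Bliss’s own formulation), a temporary equilibrium is a vector of current prices p satisfying

z(p, p^e) = 0,

where z is the vector of excess demands in markets for current delivery and p^e is the array of expected future prices subject to which the plans of households, firms, and financial intermediaries are optimized. Bliss’s point is that for some values of p^e there may be no non-negative p satisfying the condition at all, so that no amount of flexibility in current prices can deliver even a temporary equilibrium.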

Expected prices are thus, necessarily, equilibrating variables. But there is no economic mechanism that tends to cause the adjustment of expected prices so that they are consistent with the existence of even a temporary equilibrium, much less a full equilibrium.

Unfortunately, modern macroeconomics continues to neglect the temporary-equilibrium method; instead macroeconomists have for the most part insisted on the adoption of the rational-expectations hypothesis, a hypothesis that elevates question-begging to the status of a fundamental axiom of rationality. The crucial error in the rational-expectations hypothesis was to misunderstand the role of the comparative-statics method developed by Samuelson in Foundations of Economic Analysis. The role of the comparative-statics method is to isolate the pure theoretical effect of a parameter change under a ceteris-paribus assumption. Such an effect could be derived only by comparing two equilibria under the assumption of a locally unique and stable equilibrium before and after the parameter change. But the method of comparative statics is completely inappropriate for most macroeconomic problems, which are precisely concerned with the failure of the economy to achieve, or even to approximate, the unique and stable equilibrium state posited by the comparative-statics method.

Moreover, the original empirical application of the rational-expectations hypothesis by Muth was in the context of a single market dominated by well-informed specialists who could be presumed to have well-founded expectations of future prices conditional on a relatively stable economic environment. Under conditions of macroeconomic instability, there is good reason to doubt that the accumulated knowledge and experience of market participants would enable agents to form accurate expectations of the future course of prices even in those markets about which they have expert knowledge. Insofar as the rational-expectations hypothesis has any claim to empirical relevance, it is only in the context of stable market situations that can be assumed to be already operating in the neighborhood of an equilibrium. For the kinds of problems that macroeconomists are really trying to answer, that assumption is neither relevant nor appropriate.

Rules vs. Discretion Historically Contemplated

Here is a new concluding section which I have just written for my paper “Rules versus Discretion in Monetary Policy: Historically Contemplated” which I spoke about last September at the Mercatus Conference on Monetary Rules in a Post-Crisis World. I have been working a lot on the paper over the past month or so, and I hope to post a draft soon on SSRN; the paper is now under review for publication. I apologize for having written very little in the past month and for having failed to respond to any comments on my previous posts. I simply have been too busy with work and life to have any energy left for blogging. I look forward to being more involved in the blog over the next few months and expect to be posting some sections of a couple of papers I am going to be writing. But I’m offering no guarantees. It is gratifying to know that people are still visiting the blog and reading some of my old posts.

Although recognition of a need for some rule to govern the conduct of the monetary authority originated in the perceived incentive of the authority to opportunistically abuse its privileged position, the expectations of the public (including that small, but modestly influential, segment consisting of amateur and professional economists) about what monetary rules might actually accomplish have evolved and expanded over the course of the past two centuries. As Laidler (“Economic Ideas, the Monetary Order, and the Uneasy Case for Monetary Rules”) shows, that evolution has been driven by both the evolution of economic and monetary institutions and the evolution of economic and monetary doctrines about how those institutions work.

I distinguish between two types of rules: price rules and quantity rules. The simplest price rule involved setting the price of a commodity – usually gold or silver – in terms of a monetary unit whose supply was controlled by the monetary authority or defining a monetary unit as a specific quantity of a particular commodity. Under the classical gold standard, for example, the monetary authority stood ready to buy or sell gold on demand at a legally determined price of gold in terms of the monetary unit. Thus, the fixed price of gold under the gold standard was originally thought to serve as both the policy target of the rule and the operational instrument for implementing the rule.

However, as monetary institutions and theories evolved, it became apparent that there were policy objectives other than simply maintaining the convertibility of the monetary unit into the standard commodity that required the attention of the monetary authority. The first attempt to impose an additional policy goal on a monetary authority was the Bank Charter Act of 1844 which specified a quantity target – the aggregate of banknotes in circulation in Britain – which the monetary authority — the Bank of England – was required to reach by following a simple mechanical rule. By imposing a 100-percent marginal gold-reserve requirement on the notes issued by the Bank of England, the Bank Charter Act made the quantity of banknotes issued by the Bank of England both the target of the quantity rule and the instrument by which the rule was implemented.

Owing to deficiencies in the monetary theory on the basis of which the Act was designed and to the evolution of British monetary practices and institutions, the conceptual elegance of the Bank Charter Act was not matched by its efficacy in practice. But despite, or, more likely, because of, the ultimate failure of the Bank Charter Act, the gold standard, surviving recurring financial crises in Great Britain in the middle third of the nineteenth century, was eventually adopted by many other countries in the 1870s, becoming the de facto international monetary system from the late 1870s until the start of World War I. Operation of the gold standard was defined by, and depended on, the observance of a single price rule in which the value of a currency was defined by its legal gold content, so that corresponding to each gold-standard currency, there was an official gold price at which the monetary authority was obligated to buy or sell gold on demand.

The value – the purchasing power – of gold was relatively stable in the 35 or so years of the gold standard era, but that stability could not survive the upheavals associated with World War I, and so the problem in reconstructing the postwar monetary system was to decide what kind of monetary rule should govern the postwar economy. Was it enough merely to restore the old currency parities – perhaps adjusted for differences in the extent of wartime and postwar currency depreciation – that governed the classical gold standard, or was it necessary to take into account other factors, e.g., the purchasing power of gold, in restoring the gold standard? This basic conundrum was never satisfactorily answered, and the failure to do so undoubtedly was a contributing, and perhaps dominant, factor in the economic collapse that began at the end of 1929, ultimately leading to the abandonment of the gold standard.

Searching for a new monetary regime to replace the failed gold standard, but to some extent inspired by the Bank Charter Act of the previous century, Henry Simons and ten fellow University of Chicago economists devised a totally new monetary system based on 100-percent reserve banking. The original Chicago proposal for 100-percent reserve banking proposed a monetary rule for stabilizing the purchasing power of fiat money. The 100-percent banking proposal would give the monetary authority complete control over the quantity of money, thereby enhancing the power of the monetary authority to achieve its price-level target. The Chicago proposal was thus inspired by a desire to increase the likelihood that the monetary authority could successfully implement the desired price rule. The price level was the target, and the quantity of money was the instrument. But as long as private fractional-reserve banks remained in operation, the monetary authority would lack effective control over the instrument. That was the rationale for replacing fractional reserve banks with 100-percent reserve banks.

But Simons eventually decided in his paper (“Rules versus Authorities in Monetary Policy”) that a price-level target was undesirable in principle, because allowing the monetary authority to choose which price level to stabilize, thereby favoring some groups at the expense of others, would grant too much discretion to the monetary authority. Rejecting price-level stabilization as a monetary rule, Simons concluded that the exercise of discretion could be avoided only if the quantity of money was the target as well as the instrument of a monetary rule. Simons’s ideal monetary rule was therefore to keep the quantity of money in the economy constant — forever. But having found the ideal rule, Simons immediately rejected it, because he realized that the reforms in the financial and monetary systems necessary to make such a rule viable over the long run would never be adopted. And so he reluctantly and unhappily reverted to the price-level stabilization rule that he and his Chicago colleagues had proposed in 1933.

Simons’s student Milton Friedman continued to espouse his teacher’s opposition to discretion, and as late as 1959 (A Program for Monetary Stability) he continued to advocate 100-percent reserve banking. But in the early 1960s, he adopted his k-percent rule and gave up his support for 100-percent banking. But despite giving up on 100-percent banking, Friedman continued to argue that the k-percent rule was less discretionary than the gold standard or a price-level rule, because neither the gold standard nor a price-level rule eliminated the exercise of discretion by the monetary authority in its implementation of policy, failing to acknowledge that, under any of the definitions that he used (usually M1 and sometimes M2), the quantity of money was a target, not an instrument. Of course, Friedman did eventually abandon his k-percent rule, but that acknowledgment came at least a decade after almost everyone else had recognized its unsuitability as a guide for conducting monetary policy, let alone as a legally binding rule, and long after Friedman’s repeated predictions that rapid growth of the monetary aggregates in the 1980s presaged the return of near-double-digit inflation.

However, the work of Kydland and Prescott (“Rules Rather than Discretion: The Inconsistency of Optimal Plans”) on time inconsistency has provided an alternative basis on which to argue against discretion: that the lack of commitment to a long-run policy would lead to self-defeating short-term attempts to deviate from the optimal long-term policy.[1]

It is now, I think, generally understood that a monetary authority has four primary instruments available to it in conducting monetary policy: the quantity of base money, the lending rate it charges to banks, the deposit rate it pays banks on reserves, and an exchange rate against some other currency or some asset. A variety of goals remain available as well: nominal goals like inflation, the price level, or nominal income, or even an index of stock prices, as well as real goals like real GDP and employment.

Ever since Friedman and Phelps independently argued that the long-run Phillips Curve is vertical, a consensus has developed that countercyclical monetary policy is basically ineffectual, because the effects of countercyclical policy will be anticipated, so that the only long-run effect of countercyclical policy is to raise the average rate of inflation without affecting output and employment in the long run. Because the reasoning that generates this result is essentially that money is neutral in the long run, the reasoning is not as compelling as the professional consensus in its favor would suggest. The monetary-neutrality result applies only under the very special assumptions of a comparative-statics exercise comparing an initial equilibrium with a final equilibrium. But the whole point of countercyclical policy is to speed the adjustment from a disequilibrium with high unemployment back to a low-unemployment equilibrium. A comparative-statics exercise provides no theoretical, much less empirical, support for the proposition that anticipated monetary policy cannot have real effects.

So the range of possible targets and the range of possible instruments now provide considerable latitude to supporters of monetary rules to recommend alternative monetary rules incorporating many different combinations of alternative instruments and alternative targets. As of now, we have arrived at few solid theoretical conclusions about the relative effectiveness of alternative rules and even less empirical evidence about their effectiveness. But at least we know that, to be viable, a monetary rule will almost certainly have to be expressed in terms of one or more targets while allowing the monetary authority at least some discretion to adjust its control over its chosen instruments in order to effectively achieve its target (McCallum 1987, 1988). That does not seem like a great deal of progress to have made in the two centuries since economists began puzzling over how to construct an appropriate rule to govern the behavior of the monetary authority, but it is progress nonetheless. And, if we are so inclined, we can at least take some comfort in knowing that earlier generations have left us a lot of room for improvement.

Footnote:

[1] Friedman in fact recognized the point in his writings, but he emphasized the dangers of allowing discretion in the choice of instruments rather than the time-inconsistency problem, because it was only the former argument that provided a basis for preferring his quantity rule over price rules.

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article: the presumably pseudonymous Avon Barksdale and blogger Jason Smith about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This sort of methodological scolding is typical of the charming arrogance of the New Classicals. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity, cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.
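
One way to see what that extension buys – a sketch, not a full model – is through the Fisher relation. Writing the rate of inflation implied by the expected future price level as π^e, the real and nominal interest rates are connected, to a first approximation, by

r = i − π^e,

so that, once an expected future price level is in the model, spending decisions can be made to depend on the real rate while the money market clears in terms of the nominal rate.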

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense that Avon and New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, a.k.a. circular reasoning or petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
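
To make the mechanics concrete (this is the textbook version, not a quotation from Samuelson): if the equilibrium value x* of an endogenous variable is defined implicitly by an equilibrium condition F(x, α) = 0, where α is the parameter being changed, then, provided the equilibrium is locally unique and stable, the comparative-statics effect is

dx*/dα = − F_α / F_x,

and it is the stability condition that fixes the sign of F_x, and hence the sign of the “meaningful theorem.” Remove the presumption that the system starts at, and returns to, such an equilibrium, and the derivative no longer describes anything that would actually be observed.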

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which he can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.
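
In the static problem, that is, the consumer solves something like

max u(x) subject to p·x ≤ p·ω,

with the price vector p and the endowment ω both known and given, so that the budget constraint is an objective fact about the consumer’s situation rather than a guess.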

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural, based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be, and in which agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations, is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
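
Written out explicitly (in the generic form of the modern textbooks, not the property of any particular author), the intertemporal constraint and the associated Euler equation look something like

Σ_t p^e_t c_t / (1 + i)^t ≤ W_0 + Σ_t p^e_t y^e_t / (1 + i)^t and u′(c_t) = β (1 + r^e) u′(c_{t+1}),

where p^e_t and y^e_t are the prices and incomes the agent expects, W_0 is initial wealth, i is the nominal interest rate, and r^e is the expected real rate. Every magnitude carrying the superscript e is a conjecture of the individual agent, not a market datum, and when expectations change, both the constraint and the Euler condition change with them.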

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.


This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

The Near Irrelevance of the Vertical Long-Run Phillips Curve

From a discussion about how much credit Milton Friedman deserves for changing the way that economists thought about inflation, I want to nudge the conversation in a slightly different direction, to restate a point that I made some time ago in one of my favorite posts (The Lucas Critique Revisited). But if Friedman taught us anything it is that incessant repetition of the same already obvious point can do wonders for your reputation. That’s one lesson from Milton that I am willing to take to heart, though my tolerance for hearing myself say the same darn thing over and over again is probably not as great as Friedman’s was, which to be sure is not the only way in which I fall short of him by comparison. (I am almost a foot taller than he was by the way). Speaking of being a foot taller than Friedman, I don’t usually post pictures on this blog, but here is one that I have always found rather touching. And if you don’t know who the other guy is in the picture, you have no right to call yourself an economist.

At any rate, the expectations-augmented, long-run Phillips Curve, as we all know, was shown by Friedman to be vertical. But what exactly does it mean for the expectations-augmented, long-run Phillips Curve to be vertical? Discussions about whether the evidence supports the proposition that the expectations-augmented, long-run Phillips Curve is vertical (including some of the comments on my recent posts) suggest that people are not clear on what “long-run” means in the context of the expectations-augmented Phillips Curve and have not really thought carefully about what empirical content is contained in the proposition that the expectations-augmented, long-run Phillips Curve is vertical.

Just to frame the discussion of the Phillips Curve, let’s talk about what the term “long-run” means in economics. What it certainly does not mean is an amount of calendar time, though I won’t deny that there are frequent attempts to correlate long-run with varying durations of calendar time. But all such attempts either completely misunderstand what the long-run actually represents, or they merely aim to provide the untutored with some illusion of concreteness in what is otherwise a completely abstract discussion. In fact, what “long run” connotes is simply a full transition from one equilibrium state to another in the context of a comparative-statics exercise.

If a change in some exogenous parameter is imposed on a pre-existing equilibrium, then the long-run represents the full transition to a new equilibrium in which all endogenous variables have fully adjusted to the parameter change. The short-run, then, refers to some intermediate adjustment to the parameter change in which some endogenous variables have been arbitrarily held fixed (presumably because of some possibly reasonable assumption that some variables are able to adjust more speedily than other variables to the posited parameter change).

Now the Phillips Curve that was discovered by A. W. Phillips in his original paper was a strictly empirical relation between observed (wage) inflation and observed unemployment. But the expectations-augmented long-run Phillips Curve is a theoretical construct. And what it represents is certainly not an observable relationship between inflation and unemployment; it rather is a locus of points of equilibrium, each point representing full adjustment of the labor market to a particular rate of inflation, where full adjustment means that the rate of inflation is fully anticipated by all economic agents in the model. So what the expectations-augmented, long-run Phillips Curve is telling us is that if we perform a series of comparative-statics exercises in which, starting from full equilibrium with the given rate of inflation fully expected, we impose on the system a parameter change in which the exogenously imposed rate of inflation is changed and deduce a new equilibrium in which the fully and universally expected rate of inflation equals the alternative exogenously imposed inflation parameter, the equilibrium rate of unemployment corresponding to the new inflation parameter will not differ from the equilibrium rate of unemployment corresponding to the original inflation parameter.
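
In the usual shorthand (and it is only shorthand), the expectations-augmented Phillips Curve can be written

π = π^e + f(u* − u), with f(0) = 0 and f increasing,

where u* is the equilibrium (“natural”) rate of unemployment. The long-run version simply appends the equilibrium condition π = π^e, from which it follows immediately that u = u*, whatever the fully anticipated rate of inflation happens to be. That, and nothing more, is what the verticality of the long-run Phillips Curve asserts.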

Notice, as well, that the expectations-augmented, long-run Phillips Curve is not saying that imposing a new rate of inflation on an actual economic system would lead to a new equilibrium in which there was no change in unemployment; it is merely comparing alternative equilibria of the same system with different exogenously imposed rates of inflation. To make a statement about the effect of a change in the rate of inflation on unemployment, one has to be able to specify an adjustment path in moving from one equilibrium to another. The comparative-statics method says nothing about the adjustment path; it simply compares two alternative equilibrium states and specifies the change in the endogenous variables induced by the change in an exogenous parameter.

So the vertical shape of the expectations-augmented, long-run Phillips Curve tells us very little about how, in any given situation, a change in the rate of inflation would actually affect the rate of unemployment. Not only does the expectations-augmented, long-run Phillips Curve fail to tell us how a real system starting from equilibrium would be affected by a change in the rate of inflation, the underlying comparative-statics exercise being unable to specify the adjustment path taken by a system once it departs from its original equilibrium state, but the expectations-augmented, long-run Phillips Curve is even less equipped to tell us about the adjustment to a change in the rate of inflation when a system is not even in equilibrium to begin with.

The entire discourse of the expectations-augmented, long-run Phillips Curve is completely divorced from the kinds of questions that policy makers in the real world usually have to struggle with – questions like whether increasing the rate of inflation in an economy with abnormally high unemployment would facilitate or obstruct the adjustment process that takes the economy back to a more normal unemployment rate. The expectations-augmented, long-run Phillips Curve may not be completely irrelevant to the making of economic policy – it is good to know, for example, that if we are trying to figure out which time path of NGDP to aim for, there is no particular reason to think that a time path with a 10% rate of growth of NGDP would generate a significantly lower rate of unemployment than a time path with a 5% rate of growth – but its relationship to reality is sufficiently tenuous that it is irrelevant to any discussion of policy alternatives for economies unless those economies are already close to being in equilibrium.

Can We All Export Our Way out of Depression?

Tyler Cowen has a post chastising Keynesians for scolding Germany for advising its euro counterparts to adopt the virtuous German example of increasing their international competitiveness so that they can increase their exports, thereby increasing GDP and employment. The Keynesian response is that increasing exports is a zero-sum game, so that, far from being a recipe for recovery, the German advice is actually a recipe for continued stagnation.

Tyler doesn’t think much of the Keynesian response.

But that Keynesian counter is a mistake, perhaps brought on by the IS-LM model and its impoverished treatment of banking and credit.

Let’s say all nations could indeed increase their gross exports, although of course the sum of net exports could not go up.  The first effect is that small- and medium-sized enterprises would be more profitable in the currently troubled economies.  They would receive more credit and the broader monetary aggregates would go up in those countries, reflating their economies.  (Price level integration is not so tight in these cases, furthermore much of the reflation could operate through q’s rather than p’s.)  It sometimes feels like the IS-LM users have a mercantilist gold standard model, where the commodity base money can only be shuffled around in zero-sum fashion and not much more can happen in a positive direction.

The problem with Tyler’s rejoinder to the Keynesian response, which, I agree, provides an incomplete picture of what is going on, is that he assumes that which he wants to prove, thereby making his job just a bit too easy. That is, Tyler just assumes that “all nations could indeed increase their gross exports.” Obviously, if all nations increase their gross exports, they will very likely all increase their total output and employment. (It is, I suppose, theoretically possible that all the additional exports could be generated by shifting output from non-tradables to tradables, but that seems an extremely unlikely scenario.) The reaction of credit markets and monetary aggregates would be very much a second-order reaction. It’s the initial assumption – that all nations could increase gross exports simultaneously – that is doing all the heavy lifting.
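
The accounting is worth writing down, because it shows exactly where the burden of the argument lies. Summing over all countries, every export is some other country’s import, so

Σ_i (X_i − M_i) = 0,

and the sum of net exports is indeed zero. But nothing in that identity prevents every X_i (and every M_i) from rising together; whether they actually would rise together, starting from a depression rather than from an equilibrium, is precisely the question the identity cannot answer.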

Concerning Tyler’s characterization of the IS-LM model as a mercantilist gold-standard model, I agree that IS-LM has serious deficiencies, but that characterization strikes me as unfair. The simple IS-LM model is a closed-economy model with an exogenously determined price level. Such a model certainly bears some similarity to a mercantilist gold-standard model, but that doesn’t mean that the two models are essentially the same. There are many ways of augmenting the IS-LM model to turn it into an open-economy model, in which case it would not necessarily resemble a mercantilist gold-standard model.

Now I am guessing that Tyler would respond to my criticism by asking: “Well, why wouldn’t all countries increase their gross exports if they all followed the German advice?”

My response to that question would be that the conclusion that everybody’s exports would increase if everybody became more efficient logically follows only in a comparative-statics framework. But, for purposes of this exercise, we are not starting from an equilibrium, and we have no assurance that, in a disequilibrium environment, the interaction of the overall macro disequilibrium with the posited increase of efficiency would produce, as the comparative-statics exercise would lead us to believe, a straightforward increase in everyone’s exports. Indeed, even the comparative-statics exercise is making an unsubstantiated assumption that the initial equilibrium is locally unique and stable.

Of course, this response might be dismissed as a mere theoretical possibility, though the likelihood that widespread adoption of export-increasing policies in the midst of an international depression, unaccompanied by monetary expansion, would lead to increased output does not seem all that high to me. So let’s think about what might happen if all countries simultaneously adopted export-increasing policies. The first point to consider is that not all countries are the same, and not all are in a position to increase their exports by as much or as quickly as others. Inevitably, some countries would increase their exports faster than others. As a result, it is also inevitable that some countries would lose export markets as other countries penetrated those markets first. In addition, some countries would experience declines in domestic output as import-competing industries were forced by import competition to curtail output. In the absence of demand-increasing monetary policies, output and employment in some countries would very likely fall. This is the kernel of truth in the conventional IS-LM analysis that Tyler tries to dismiss. The IS-LM framework abstracts from the output-increasing tendency of export-led growth, but the comparative-statics approach abstracts from aggregate-demand effects that could easily overwhelm the comparative-statics effect.

Now, to be fair, I must acknowledge that Tyler reaches a pretty balanced conclusion:

This interpretation of the meaning of zero-sum net exports is one of the most common economic mistakes you will hear from serious economists in the blogosphere, and yet it is often presented dogmatically or dismissively in a single sentence, without much consideration of more complex or more realistic scenarios.

That is a reasonable conclusion, but I think it would be just as dogmatic, if not more so, to rely on the comparative-statics analysis that Tyler goes through in the first part of his post without consideration of more complex or more realistic scenarios.

Let me also offer a comment on Scott Sumner’s take on Tyler’s post. Scott tries to translate Tyler’s analysis into macroeconomic terms to support Tyler’s comparative-statics analysis. Scott considers three methods by which exports might be increased: 1) supply-side reforms, 2) monetary stimulus aimed at currency depreciation, and 3) increased government saving (fiscal austerity). The first two, Scott believes, lead to increased output and employment, while the third is a wash. I agree with Scott about monetary stimulus aimed at currency depreciation, but I disagree (at least in part) about the other two.

Supply-side reforms [to increase exports] boost output under either an inflation target, or a dual mandate.  If you want to use the Keynesian model, these reforms boost the Wicksellian equilibrium interest rate, which makes NGDP grow faster, even at the zero bound.

Scott makes a fair point, but I don’t think it is necessarily true for all inflation targets. Here is how I would put it. Supply-side reforms to increase exports could cause aggregate demand to fall in countries adversely affected by increased competition from other countries’ exports, and we have very little ability to predict how large those declines would be, so it is at least possible that worldwide aggregate demand would fall if such policies were generally adopted. You can’t tell how the Wicksellian natural rate would be affected until you’ve accounted for all the indirect feedback effects on aggregate demand. If the Wicksellian natural rate fell, an inflation target, even if met, might not prevent a slowdown in NGDP growth and a net reduction in output and employment. Preventing a slowdown in NGDP growth would then require raising the inflation target. Of course, under a real dual mandate (as opposed to the sham dual mandate now in place at the Fed) or an NGDP target, monetary policy would have to be loosened sufficiently to prevent output and employment from falling.
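The arithmetic behind the inflation-target point is straightforward (the numbers are purely illustrative): since the growth rate of NGDP is approximately the sum of inflation and real output growth,

\[ g_{NGDP} \approx \pi + g_Y, \]

a central bank hitting a 2% inflation target while real growth slipped from, say, 3% to 1% would see NGDP growth fall from about 5% to about 3%; holding NGDP growth at 5% in that case would require tolerating roughly 4% inflation.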

As far as government saving (fiscal austerity), I’d say it’s a net wash, for monetary offset reasons.

I am not sure what Scott means about monetary offset in this context. As I have argued in several earlier posts (here, here, here and here), attempting to increase employment via currency depreciation and increased saving involves tightening monetary policy, not loosening it. So I don’t see how fiscal policy can be used to depreciate a currency at the same time that monetary policy is being loosened. At any rate, if monetary policy is being used to depreciate the currency, then I see no difference between options 2) and 3).

But my general comment is that, like Tyler, Scott seems to be exaggerating the difference between his bottom line and the one that comes out of the IS-LM model, though I am certainly not saying that IS-LM is the last word on the subject.


