Archive Page 2

Bernanke’s Continuing Confusion about How Monetary Policy Works

TravisV recently posted a comment on this blog with a link to his comment on Scott Sumner’s blog flagging two apparently contradictory rationales for the Fed’s quantitative easing policy in chapter 19 of Ben Bernanke’s new book in which he demurely takes credit for saving Western Civilization. Here are the two quotes from Bernanke:

1. Our goal was to bring down longer-term interest rates, such as the rates on thirty-year mortgages and corporate bonds. If we could do that, we might stimulate spending—on housing and business capital investment, for example. . . . Similarly, when we bought longer-term Treasury securities, such as a note maturing in ten years, the yields on those securities tended to decline.

2. A new era of monetary policy activism had arrived, and our announcement had powerful effects. Between the day before the meeting and the end of the year, the Dow would rise more than 3,000 points—more than 40 percent—to 10,428. Longer-term interest rates fell on our announcement, with the yield on ten-year Treasury securities dropping from about 3 percent to about 2.5 percent in one day, a very large move. Over the summer, longer-term yields would reverse and rise to above 4 percent. We would see that increase as a sign of success. Higher yields suggested that investors were expecting both more growth and higher inflation, consistent with our goal of economic revival. Indeed, after four quarters of contraction, revised data would show that the economy would grow at a 1.3 percent rate in the third quarter and a 3.9 percent rate in the fourth.

Over my four years of blogging — especially the first two – I have written a number of posts pointing out that the Fed’s articulated rationale for its quantitative easing – the one expressed in quote number 1 above: that quantitative easing would reduce long-term interest rates and stimulate the economy by promoting investment – was largely irrelevant, because the magnitude of the effect would be far too small to have any noticeable macroeconomic effect.

In making this argument, Bernanke bought into one of the few propositions shared by both Keynes and the Austrians: that monetary policy is effective by operating on long-term interest rates, and that significant investments by business in plant and equipment are responsive to relatively small changes in long-term rates. Keynes, at any rate, had the good sense to realize that long-term investment in plant and equipment is not very responsive to changes in long-term interest rates – a view he had espoused in his Treatise on Money before emphasizing, in the General Theory, expectations about future prices and profitability as the key factor governing investment. Austrians, however, never gave up their theoretical preoccupation with the idea that the entire structural profile of a modern economy is dominated by small changes in the long-term rate of interest.

So for Bernanke’s theory of how QE would be effective to be internally consistent, he would have had to buy into a hyper-Austrian view of how the economy works, which he obviously doesn’t and never did. Sometimes internal inconsistency can be a sign that being misled by bad theory hasn’t overwhelmed a person’s good judgment. So I say: even though he botched the theory, give Bernanke credit for his good judgment. Unfortunately, Bernanke’s confusion made it impossible for him to communicate a coherent story about how monetary policy works, undermining, or at least compromising, his ability to build popular support for the policy.

Of course the problem was even deeper than expecting a marginal reduction in long-term interest rates to have any effect on the economy. The Fed’s refusal to budge from its two-percent inflation target drastically limited the potential stimulus that monetary policy could provide.

I might add that I just noticed that I had already drawn attention to Bernanke’s inconsistent rationale for adopting QE in my paper “The Fisher Effect Under Deflationary Expectations” written before I started this blog, which both Scott Sumner and Paul Krugman plugged after I posted it on SSRN.

Here’s what I said in my paper (p. 18):

If so, the expressed rationale for the Fed’s quantitative easing policy (Bernanke 2010), namely to reduce long term interest rates, thereby stimulating spending on investment and consumption, reflects a misapprehension of the mechanism by which the policy would be most likely to operate, increasing expectations of both inflation and future profitability and, hence, of the cash flows derived from real assets, causing asset values to rise in step with both inflation expectations and real interest rates. Rather than a policy to reduce interest rates, quantitative easing appears to be a policy for increasing interest rates, though only as a consequence of increasing expected future prices and cash flows.

I wrote that almost five years ago, and it still seems pretty much on the mark.
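To restate the mechanism in symbols (a standard textbook Fisher relation, not anything taken from the paper itself): the nominal yield on a bond is approximately

\[ i_t \approx r_t + E_t[\pi_{t+1}], \]

where \(r_t\) is the expected real rate and \(E_t[\pi_{t+1}]\) is expected inflation. If QE succeeds in raising expected inflation and expected future profitability (and hence the real rate consistent with those improved prospects), the nominal yield \(i_t\) rises even as asset values rise along with expected cash flows, which is just what quote number 2 describes.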

Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet just because a general-equilibrium model can be formulated so that it can be solved as the solution of an optimizing agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence that is not in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.
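To make the point concrete, here is a minimal sketch, in Python, of an interaction-based traffic model in the spirit of the well-known Nagel-Schreckenberg cellular automaton (my own illustrative simplification, not a calibrated model): each car accelerates, brakes to avoid the car ahead, and occasionally slows at random, and average speed collapses as the road fills up.

```python
import random

def step(road, v_max=5, p_slow=0.3):
    """One update of a circular road; road[i] is a car's speed, or None if cell i is empty."""
    n = len(road)
    cars = sorted(i for i, v in enumerate(road) if v is not None)
    new_road = [None] * n
    for k, i in enumerate(cars):
        gap = (cars[(k + 1) % len(cars)] - i - 1) % n    # empty cells to the car ahead
        v = min(road[i] + 1, v_max, gap)                 # accelerate, then brake for the car ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                                       # random slowdown (driver imperfection)
        new_road[(i + v) % n] = v
    return new_road

def average_speed(density, cells=1000, steps=200):
    road = [0 if random.random() < density else None for _ in range(cells)]
    for _ in range(steps):
        road = step(road)
    speeds = [v for v in road if v is not None]
    return sum(speeds) / len(speeds) if speeds else 0.0

for d in (0.05, 0.15, 0.35, 0.60):
    print(d, round(average_speed(d), 2))   # average speed collapses as density rises
```

A representative-vehicle version of the same exercise would amount to a single car optimizing its speed on an empty road; the congestion that the model is supposed to explain disappears by construction.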

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less negligible; nor have I seen an argument that interactions between agents are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialists who believe that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to those being paid to other similar workers are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.


But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without having to go on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.

The Neoclassical Synthesis and the Mind-Body Problem

The neoclassical synthesis that emerged in the early postwar period aimed at reconciling the macroeconomic (IS-LM) analysis derived from Keynes via Hicks and others with the neoclassical microeconomic analysis of general equilibrium derived from Walras. The macroeconomic analysis was focused on an equilibrium of income and expenditure flows while the Walrasian analysis was focused on the equilibrium between supply and demand in individual markets. The two types of analysis seemed to be incommensurate inasmuch as the conditions for equilibrium in the two analyses did not seem to match up against each other. How does an analysis focused on the equality of aggregate flows of income and expenditure get translated into an analysis focused on the equality of supply and demand in individual markets? The two languages seem to be different, so it is not obvious how a statement formulated in one language gets translated into the other. And even if a translation is possible, does the translation hold under all, or only under some, conditions? And if so, what are those conditions?
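To see why the two languages appear incommensurate, compare schematic statements of the two sets of equilibrium conditions (my own textbook-style rendering):

\[ \text{Income-expenditure (IS-LM):} \qquad Y = C(Y) + I(r) + G, \qquad \frac{M}{P} = L(Y, r); \]

\[ \text{Walrasian market clearing:} \qquad D_i(p_1,\dots,p_n) = S_i(p_1,\dots,p_n), \qquad i = 1,\dots,n. \]

The first system is written in terms of aggregate flows and a single interest rate; the second in terms of \(n\) prices clearing \(n\) individual markets. Nothing in either system, by itself, tells you how a solution of one maps into a solution of the other.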

The original neoclassical synthesis did not aim to provide a definitive answer to those questions, but it was understood to assert that if the equality of income and expenditure was assured at a level consistent with full employment, one could safely assume that market forces would take care of the allocation of resources, so that markets would be cleared and the conditions of microeconomic general equilibrium satisfied, at least as a first approximation. This version of the neoclassical synthesis was obviously ad hoc and an unsatisfactory resolution of the incommensurability of the two levels of analysis. Don Patinkin sought to provide a rigorous reconciliation of the two levels of analysis in his treatise Money, Interest and Prices. But for all its virtues – and they are numerous – Patinkin’s treatise failed to bridge the gap between the two levels of analysis.

As I mentioned recently in a post on Romer and Lucas, Kenneth Arrow in a 1967 review of Samuelson’s Collected Works commented disparagingly on the neoclassical synthesis of which Samuelson was a leading proponent. The widely shared dissatisfaction expressed by Arrow motivated much of the work that soon followed on the microfoundations of macroeconomics exemplified in the famous 1970 Phelps volume. But the motivation for the search for microfoundations was then (before the rational expectations revolution) to specify the crucial deviations from the assumptions underlying the standard Walrasian general-equilibrium model that would generate actual or seeming price rigidities, which a straightforward – some might say superficial — understanding of neoclassical microeconomic theory suggested were necessary to explain why, after a macro-disturbance, equilibrium was not rapidly restored by price adjustments. Two sorts of explanations emerged from the early microfoundations literature: a) search and matching theories assuming that workers and employers must expend time and resources to find appropriate matches; b) institutional theories of efficiency wages or implicit contracts that explain why employers and workers prefer layoffs to wage cuts in response to negative demand shocks.

Forty years on, the search and matching theories do not seem capable of accounting for the magnitude of observed fluctuations in employment or the cyclical variation in layoffs, and the institutional theories are still difficult to reconcile with the standard neoclassical assumptions, remaining an ad hoc appendage to New Keynesian models that otherwise adhere to the neoclassical paradigm. Thus, although the original neoclassical synthesis in which the Keynesian income-expenditure model was seen as a pre-condition for the validity of the neoclassical model was rejected within a decade of Arrow’s dismissive comment about the neoclassical synthesis, Tom Sargent has observed in a recent review of Robert Lucas’s Collected Papers on Monetary Theory that Lucas has implicitly adopted a new version of the neoclassical synthesis dominated by an intertemporal neoclassical general-equilibrium model, but with the proviso that substantial shocks to aggregate demand and the price level are prevented by monetary policy, thereby making the neoclassical model a reasonable approximation to reality.

Ok, so you are probably asking what does all this have to do with the mind-body problem? A lot, I think, in that both the neoclassical synthesis and the mind-body problem involve a disconnect between two kinds – two levels – of explanation. The neoclassical synthesis asserts some sort of connection – but a problematic one — between the explanatory apparatus – macroeconomics — used to understand the cyclical fluctuations of what we are used to thinking of as the aggregate economy and the explanatory apparatus – microeconomics — used to understand the constituent elements of the aggregate economy — households and firms — and how those elements are related to, and interact with, each other.

The mind-body problem concerns the relationship between the mental – our direct experience of a conscious inner life of thoughts, emotions, memories, decisions, hopes and regrets — and the physical – matter, atoms, neurons. A basic postulate of science is that all phenomena have material causes. So the existence of conscious states that seem to us, by way of our direct experience, to be independent of material causes is also highly problematic. There are a few strategies for handling the problem. One is to assert that the mind truly is independent of the body, which is to say that consciousness is not the result of physical causes. A second is to say that mind is not independent of the body; we just don’t understand the nature of the relationship. There are two possible versions of this strategy: a) that although the nature of the relationship is unknown to us now, advances in neuroscience could reveal to us the way in which consciousness is caused by the operation of the brain; b) although our minds are somehow related to the operation of our brains, the nature of this relationship is beyond the capacity of our minds or brains to comprehend owing to considerations analogous to Gödel’s incompleteness theorem (a view espoused by the philosopher Colin McGinn among others); in other words, the mind-body problem is inherently beyond human understanding. And the third strategy is to deny the existence of consciousness, because a conscious state is identical with the physical state of a brain, so that consciousness is just an epiphenomenon of a brain state; we in our naiveté may think that our conscious states have a separate existence, but those states are strictly identical with corresponding brain states, so that whatever conscious state we think we are experiencing has been entirely produced by the physical forces that determine the behavior of our brains and the configuration of their physical constituents.

The first, and probably the last, thing that one needs to understand about the third strategy is that, as explained by Colin McGinn (see e.g., here), its validity has not been demonstrated by neuroscience or by any other branch of science; it is, no less than any of the other strategies, strictly a metaphysical position. The mind-body problem is a problem precisely because science has not even come close to demonstrating how mental states are caused by, let alone that they are identical to, brain states, despite some spurious misinterpretations of research that purport to show such an identity.

Analogous to the scientific principle that all phenomena have material or physical causes, there is in economics and social science a principle called methodological individualism, which roughly states that explanations of social outcomes should be derived from theories about the conduct of individuals, not from theories about abstract social entities that exist independently of their constituent elements. The underlying motivation for methodological individualism (as opposed to political individualism with which it is related but from which it is distinct) was to counter certain ideas popular in the nineteenth and twentieth centuries asserting the existence of metaphysical social entities like “history” that are somehow distinct from yet impinge upon individual human beings, and that there are laws of history or social development from which future states of the world can be predicted, as Hegel, Marx and others tried to do. This notion gave rise to two famous books by Popper: The Open Society and its Enemies and The Poverty of Historicism. Methodological individualism as articulated by Popper was thus primarily an attack on the attribution of special powers to determine the course of future events to abstract metaphysical or mystical entities like history or society that are supposedly things or beings in themselves distinct from the individual human beings of which they are constituted. Methodological individualism does not deny the existence of collective entities like society; it simply denies that such collective entities exist as objective facts that can be observed as such. Our apprehension of these entities must be built up from more basic elements — individuals and their plans, beliefs and expectations — that we can apprehend directly.

However, methodological individualism is not the same as reductionism; methodological individualism teaches us to look for explanations of higher-level phenomena, e.g., a pattern of social relationships like the business cycle, in terms of the basic constituents forming the pattern: households, business firms, banks, central banks and governments. It does not assert identity between the pattern of relationships and the constituent elements; it says that the pattern can be understood in terms of interactions between the elements. Thus, a methodologically individualistic explanation of the business cycle in terms of the interactions between agents – households, businesses, etc. — would be analogous to an explanation of consciousness in terms of the brain if an explanation of consciousness existed. A methodologically individualistic explanation of the business cycle would not be analogous to an assertion that consciousness exists only as an epiphenomenon of brain states. The assertion that consciousness is nothing but the epiphenomenon of a corresponding brain state is reductionist; it asserts an identity between consciousness and brain states without explaining how consciousness is caused by brain states.

In business-cycle theory, the analogue of such a reductionist assertion of identity between higher-level and lower level phenomena is the assertion that the business cycle is not the product of the interaction of individual agents, but is simply the optimal plan of a representative agent. On this account, the business cycle becomes an epiphenomenon; apparent fluctuations being nothing more than the optimal choices of the representative agent. Of course, everyone knows that the representative agent is merely a convenient modeling device in terms of which a business-cycle theorist tries to account for the observed fluctuations. But that is precisely the point. The whole exercise is a sham; the representative agent is an as-if device that does not ground business-cycle fluctuations in the conduct of individual agents and their interactions, but simply asserts an identity between those interactions and the supposed decisions of the fictitious representative agent. The optimality conditions in terms of which the model is solved completely disregard the interactions between individuals that might cause an unintended pattern of relationships between those individuals. The distinctive feature of methodological individualism is precisely the idea that the interactions between individuals can lead to unintended consequences; it is by way of those unintended consequences that a higher-level pattern might emerge from interactions among individuals. And those individual interactions are exactly what is suppressed by representative-agent models.

So the notion that any analysis premised on a representative agent provides microfoundations for macroeconomic theory seems to be a travesty built on a total misunderstanding of the principle of methodological individualism that it purports to affirm.

All New Classical Models Are Subject to the Lucas Critique

Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution, when he provided a very compelling example of how the predictions of the then standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations in light of the policy change, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.
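A toy numerical illustration of the problem (my own sketch in Python, using a Friedman-Lucas style surprise-supply economy, not an example taken from Lucas): the deep parameter is the response of output to unexpected inflation, but the reduced-form regression of output on inflation picks up an intercept that depends on the policy rule, so an econometrician who estimated the reduced form under one regime and then used it to forecast the effects of a new regime would be systematically wrong.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0   # "deep" parameter: output response to inflation surprises

def simulate(mu, n=500):
    """Economy in which only unexpected inflation matters: y = theta*(pi - E[pi]) + eps.
    Under a known policy rule pi = mu + u, rational agents expect E[pi] = mu."""
    u = rng.normal(0.0, 1.0, n)
    eps = rng.normal(0.0, 0.5, n)
    pi = mu + u
    y = theta * (pi - mu) + eps
    return pi, y

def ols(pi, y):
    X = np.column_stack([np.ones_like(pi), pi])
    return np.linalg.lstsq(X, y, rcond=None)[0]   # [intercept, slope]

b_old = ols(*simulate(mu=2.0))   # reduced form estimated under a low-inflation rule
b_new = ols(*simulate(mu=8.0))   # same deep parameter, different policy rule
print(b_old, b_new)              # slopes stay near theta, but the intercept shifts with the rule,
                                 # so forecasts built on the old reduced form misfire under the new one
```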

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it did no more than cause economists to be more cautious in offering policy advice based on their econometric models and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas Critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
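In schematic terms, the exercise is something like the textbook stochastic-growth planner problem (a generic rendering, not any particular paper's model):

\[ \max_{\{c_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = z_t f(k_t) + (1-\delta) k_t, \]

with the familiar Euler condition

\[ u'(c_t) = \beta\, E_t\!\left[ u'(c_{t+1}) \bigl( z_{t+1} f'(k_{t+1}) + 1 - \delta \bigr) \right]. \]

The hand-waving consists in reading the solution of this single-agent problem, via the welfare theorems, as if it were the outcome of decentralized trading among many agents at market-clearing prices.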

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology and resource endowments. The first point to make – there are many others, but we needn’t exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse as a prelude to replacing those models by the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that optimal solution. If the equilibrium does not obtain – the optimal plans of the individual agents or the fantastical representative agent becoming incapable of execution — empirical estimates of the model’s parameters cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than the period in which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.

Scott Sumner Defends EMH

Last week I wrote about the sudden increase in stock market volatility as an illustration of why the efficient market hypothesis (EMH) is not entirely accurate. I focused on the empirical argument made by Robert Shiller that the observed volatility of stock prices is greater than the volatility implied by the proposition that stock prices reflect rational expectations of future dividends paid out by the listed corporations. I made two further points about EMH: a) empirical evidence cited in favor of EMH like the absence of simple trading rules that would generate excess profits and the lack of serial correlation in the returns earned by asset managers is also consistent with theories of asset pricing other than EMH such as Keynes’s casino (beauty contest) model, and b) the distinction between fundamentals and expectations that underlies the EMH model is not valid because expectations are themselves fundamental owing to the potential for expectations to be self-fulfilling.
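For readers who want the logic of Shiller’s test spelled out, here is a minimal sketch in Python, assuming a constant discount rate and an arbitrarily chosen terminal value (a simplification of Shiller’s actual procedure, which also detrends the data). If the market price is a rational forecast of the ex-post rational price p* (the discounted value of the dividends actually paid), the forecast cannot be more volatile than the thing being forecast, so Var(p) ≤ Var(p*); Shiller’s finding was that observed prices violate that bound.

```python
import numpy as np

def ex_post_rational_price(dividends, terminal_price, r=0.05):
    """p*_t: discounted value of the dividends actually paid after t,
    plus a discounted terminal value (hypothetical inputs, for illustration only)."""
    p_star = np.zeros(len(dividends))
    p_star[-1] = terminal_price
    for t in range(len(dividends) - 2, -1, -1):
        p_star[t] = (dividends[t + 1] + p_star[t + 1]) / (1 + r)
    return p_star

# Toy comparison: a smooth dividend stream implies a smooth p*, so any sizeable
# volatility in a corresponding observed-price series would violate Var(p) <= Var(p*).
d = np.full(100, 1.0)
p_star = ex_post_rational_price(d, terminal_price=20.0)
print(round(p_star.std(), 3))
```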

Scott responded to my criticism by referencing two of his earlier posts — one criticizing the Keynesian beauty contest model, and another criticizing the Keynesian argument that the market can stay irrational longer than any trader seeking to exploit such irrationality can stay solvent – and by writing a new post describing what he called the self-awareness of markets.

Let me begin with Scott’s criticism of the beauty-contest model. I do so by registering my agreement with Scott that the beauty contest model is not a good description of how stocks are typically priced. As I have said, I don’t view EMH as being radically wrong, and in much applied work (including some of my own) it is an extremely useful assumption to make. But EMH describes a kind of equilibrium condition, and not all economic processes can be characterized or approximated by equilibrium conditions.

Perhaps the chief contribution of recent Austrian economics has been to explain how all entrepreneurial activity aims at exploiting latent disequilibrium relationships in the price system. We have no theoretical or empirical basis for assuming that substantial deviations of prices — whether for assets or for services, and whether prices are determined in auction markets or in imperfectly competitive markets — from their equilibrium values are immediately or even quickly eliminated. (Let me note parenthetically that vulgar Austrians who deny that prices voluntarily agreed upon are ever different from equilibrium values thereby undermine the Austrian theory of entrepreneurship based on the equilibrating activity of entrepreneurs, which is the source of the profits they earn. The profits earned are ipso facto evidence of disequilibrium pricing. Austrians can’t have it both ways.)

So my disagreement with Scott about the beauty-contest theory of stock prices as an alternative to EMH is relatively small. My main reason for mentioning the beauty-contest theory was not to advocate it but to point out that the sort of empirical evidence that Scott cites in support of EMH is also consistent with the beauty-contest theory. As Scott emphasizes himself, it’s not easy to predict whom the judges will choose as the winner of the beauty contest. And Keynes also used a casino metaphor to describe stock pricing in the same chapter (12) of the General Theory in which he developed the beauty-contest analogy. However, there do seem to be times when prices are rising or falling for extended periods of time, and enough people, observing the trends and guessing that the trends will continue long enough so that they can rely on continuation of the trend in making investment decisions, keep the trend going despite underlying forces that eventually cause a price collapse.

Let’s turn to Scott’s post about the ability of the market to stay irrational longer than any individual trader can stay solvent.

The markets can stay irrational for longer than you can stay solvent.

Thus people who felt that tech stocks were overvalued in 1996, or American real estate was overvalued in 2003, and who shorted tech stocks or MBSs, might go bankrupt before their accurate predictions were finally vindicated.

There are lots of problems with this argument. First of all, it’s not clear that stocks were overvalued in 1996, or that real estate was overvalued in 2003. Lots of people who made those claims later claimed that subsequent events had proven them correct, but it’s not obvious why they were justified in making this claim. If you claim X is overvalued at time t, is it vindication if X later rises much higher, and then falls back to the levels of time t?

I agree with Scott that the argument is problematic; it is almost impossible to specify when a suspected bubble is really a bubble. However, I don’t think that Scott fully comes to terms with the argument. The argument doesn’t depend on the time lag between the beginning of the run-up and the peak; it depends on the unwillingness of most speculators to buck a trend when there is no clear terminal point to the run-up. Scott continues:

The first thing to note is that the term ‘bubble’ implies asset mis-pricing that is easily observable. A positive bubble is when asset prices are clearly irrationally high, and a negative bubble is when asset price are clearly irrationally low. If these bubbles existed, then investors could earn excess returns in a highly diversified contra-bubble fund. At any given time there are many assets that pundits think are overpriced, and many others that are seen as underpriced. These asset classes include stocks, bonds, foreign exchange, REITs, commodities, etc. And even within stocks there are many different sectors, biotech might be booming while oil is plunging. And then you have dozens of markets around the world that respond to local factors. So if you think QE has led Japanese equity prices to be overvalued, and tight money has led Swiss stocks to be undervalued, the fund could take appropriate short positions in Japanese stocks and long positions in Swiss stocks.

A highly diversified mutual fund that takes advantage of bubble mis-pricing should clearly outperform other investments, such as index funds. Or at least it should if the EMH is not true. I happen to think the EMH is true, or at least roughly true, and hence I don’t actually expect to see the average contra-bubble fund do well. (Of course individual funds may do better or worse than average.)

I think that Scott is conflating a couple of questions here: a) is EMH a valid theory of asset prices? b) are asset prices frequently characterized by bubble-like behavior? Even if the answer to b) is no, the answer to a) need not be yes. Investors may be able, by identifying mis-priced assets, to earn excess returns even if the mis-pricing doesn’t meet a threshold level required for identifying a bubble. But the main point that Scott is making is that if there are a lot of examples of mis-pricing out there, it should be possible for astute investors capable of identifying mis-priced assets to diversify their portfolios sufficiently to avoid the problem of staying solvent longer than the market is irrational.

That is a very good point, worth taking into account. But it’s not dispositive and certainly doesn’t dispose of the objection that investors are unlikely to try to bet against a bubble, at least not in sufficient numbers to keep it from expanding. The reason is that the absence of proof is not proof of absence. That of course is a legal, not a scientific, principle, but it expresses a valid common-sense notion: you can’t make an evidentiary inference that something is not the case simply because you have not found evidence that it is the case. So you can’t infer from the non-implementation of the plausible investment strategies listed by Scott that such strategies would not have generated excess returns if they were implemented. We simply don’t know whether they would be profitable or not.

In his new post Scott makes the following observation about what I had written in my post on excess volatility.

David Glasner seems to feel that it’s not rational for consumers to change their views on the economy after a stock crash. I will argue the reverse, that rationality requires them to do so. First, here’s David:

This seems an odd interpretation of what I had written because in the passage quoted by Scott I wrote the following:

I may hold a very optimistic view about the state of the economy today. But suppose that I wake up tomorrow and hear that the Shanghai stock market crashes, going down by 30% in one day. Will my expectations be completely independent of my observation of falling asset prices in China? Maybe, but what if I hear that S&P futures are down by 10%? If other people start revising their expectations, will it not become rational for me to change my own expectations at some point? How can it not be rational for me to change my expectations if I see that everyone else is changing theirs?

So, like Scott, I am saying that it is rational for people to revise their expectations based on new information that there has been a stock crash. I guess what Scott meant to say is that my argument, while valid, is not an argument against EMH, because the scenario I am describing is consistent with EMH. But that is not the case. Scott goes on to provide his own example.

All citizens are told there’s a jar with lots of jellybeans locked away in a room. That’s all they know. The average citizen guesstimates there are 453 jellybeans in this mysterious jar. Now 10,000 citizens are allowed in to look at the jar. They each guess the contents, and their average guess is 761 jellybeans. This information is reported to the other citizens. They revise their estimate accordingly.

But there’s a difference between my example and Scott’s. In my example, the future course of the economy depends on whether people are optimistic or pessimistic. In Scott’s example, the number of jellybeans in the jar is what it is regardless of what people expect it to be. The problem with EMH is that it presumes that there is some criterion of efficiency that is independent of expectations, just as in Scott’s example there is objective knowledge out there of the number of jellybeans in the jar. I claim that there is no criterion of market efficiency that is independent of expectations, even though some expectations may produce better outcomes than those produced by other expectations.
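The contrast can be put in a few lines of code (a deliberately stylized sketch; the functional forms are hypothetical): in the jellybean case the quantity being estimated is fixed independently of beliefs, while in the self-fulfilling case the realized outcome tracks whatever people expect, so there is no belief-independent benchmark against which to judge the “efficiency” of the expectation.

```python
import numpy as np

rng = np.random.default_rng(1)

# (a) Jellybean case: a fixed truth; beliefs do not affect it, and guesses average out to it.
true_count = 700
guesses = true_count + rng.normal(0, 100, 10_000)
print(round(guesses.mean()))                 # ~700 regardless of what anyone expected

# (b) Self-fulfilling case: realized spending validates expected spending (hypothetical form),
# so the long-run outcome depends on the belief people start with.
def realized_output(expected_output, shock):
    return expected_output + shock

for belief in (900.0, 1100.0):
    draws = [realized_output(belief, rng.normal(0, 10)) for _ in range(1000)]
    print(belief, round(np.mean(draws), 1))  # ~900 vs ~1100: the "fundamental" moves with the belief
```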

More Economic Prejudice and High-Minded Sloganeering

I wasn’t planning to post today, but I just saw (courtesy of the New York Times) a classic example of the economic prejudice wrapped in high-minded sloganeering that I talked about yesterday. David Rocker, founder and former managing general partner of the hedge fund Rocker Partners, proclaims that he is in favor of a free market.

The worldwide turbulence of recent days is a strong indication that government intervention alone cannot restore the economy and offers a glimpse of the risk of completely depending on it. It is time to give the free market a chance. Since the crash of 2008, governments have tried to stimulate their economies by a variety of means but have relied heavily on manipulating interest rates lower through one form or other of quantitative easing or simply printing money. The immediate rescue of the collapsing economy was necessary at the time, but the manipulation has now gone on for nearly seven years and has produced many unwanted consequences.

In what sense is the market less free than it was before the crash of 2008? It’s not as if the Fed before 2008 wasn’t doing the sorts of things that are so upsetting to Mr. Rocker now. The Fed was setting a target for short-term interest rates and it was conducting open market purchases (printing money) to ensure that its target was achieved. There are, to be sure, some people, like, say, Ron Paul, who regard such action by the Fed as an intolerable example of government intervention in the market, but it’s not something that, as Mr. Rocker suggests, the Fed just started to do after 2008. And at a deeper level, there is a very basic difference between the Fed targeting an interest rate by engaging in open-market operations (repeat open-market operations) and imposing price controls that prevent transactors from engaging in transactions on mutually agreeable terms. Aside from libertarian ideologues, most people are capable of understanding the difference between monetary policy and government interference with the free market.

So what really bothers Mr. Rocker is not the absence of a free market, but that he disagrees with the policy that the Fed is implementing. He has every right to disagree with the policy, but it is misleading to suggest that he is the one defending the free market against the Fed’s intervention into an otherwise free market.

When Mr. Rocker tries to explain what’s wrong with the Fed’s policy, his explanations continue to reflect prejudices expressed in high-minded sloganeering. First, he plays the income inequality card.

The Federal Reserve, waiting for signs of inflation to change its policies, seems to be looking at the wrong data. . . .

Low interest rates have hugely lifted assets largely owned by the very rich, and inflation in these areas is clearly apparent. Stocks have tripled and real estate prices in the major cities where the wealthy live have been soaring, as have the prices of artwork and the conspicuous consumption of luxury goods.

Now it may be true that certain assets like real estate in Manhattan and San Francisco, works of art, and yachts have been rising rapidly in price, but there is no meaningful price index in which these assets account for a large enough share of purchases to generate significant inflation. So this claim by Mr. Rocker is just an empty rhetorical gesture to show how good-hearted he is and how callous and unfeeling Janet Yellen and her ilk are. He goes on.

Cheap financing has led to a boom in speculative activity, and mergers and acquisitions. Most acquisitions are justified by “efficiencies” which is usually a euphemism for layoffs. Valeant Pharmaceuticals International, one of the nation’s most active acquirers, routinely fires “redundant” workers after each acquisition to enhance reported earnings. This elevates its stock, with which it makes the next acquisition. With money cheap, corporate executives have used cash flow to buy back stock, enhancing the value of their options, instead of investing for the future. This pattern, and the fear it engenders, has added to downward pressure on employment and wages.

Actually, according to data reported by the Institute for Mergers, Acquisitions and Alliances and displayed in the accompanying chart, the level of mergers and acquisitions since 2008 has been consistently below what it was in the late 1990s, when interest rates were over 5 percent, and in 2007, when interest rates were also above 5 percent.

(Chart: mergers and acquisitions activity, 1985–2015)

And if corporate executives are using cash flow to buy back stock to enhance the value of their stock options instead of making profitable investments that would enhance shareholder value, there is a serious problem in how corporate executives are discharging their responsibilities to shareholders. Violations of management responsibility to their shareholders should be disciplined and the legal environment that allows executives to disregard shareholder interests should be reformed. To blame the bad behavior of corporate executives on the Fed is a total distraction.

Having just attributed a supposed boom in speculative activity and mergers and acquisitions to the Fed’s low-interest-rate policy, Mr. Rocker, without batting an eye, flatly denies that an increase in interest rates would have any negative effect on investment.

The Fed should raise rates in September. The focus on a quarter-point change in short rates and its precise date of imposition is foolishness. Expected rates of return on new investments are typically well above 10 percent. No sensible businessman would defer a sound investment because short-term rates are slightly higher for a few months. They either have a sound investment or they don’t.

Let me repeat that. “Expected rates of return on new investments are typically well above 10 percent.” I wonder what Mr. Rocker thinks the expected rate of return on speculative activity and mergers and acquisitions is.

But, almost despite himself, Mr. Rocker is on to something. Some long-term investment surely is sensitive to the rate of interest, but – and I know that this will come as a rude shock to adherents of Austrian Business Cycle Theory – most investment by business in plant and equipment depends on expected future sales, not the rate of interest. So the way to increase investment is really not by manipulating the rate of interest; the way to increase investment is to increase aggregate demand, and the best way to do that would be to increase inflation and expected inflation (aka nominal GDP and expected nominal GDP).

Economic Prejudice and High-Minded Sloganeering

In a post yesterday commenting on Paul Krugman’s takedown of a silly and ignorant piece of writing about monetary policy by William Cohan, Scott Sumner expressed his annoyance at the level of ignorance displayed by people writing for supposedly elite publications like the New York Times, which published Cohan’s rant about how it’s time for the Fed to show some spine and stop manipulating interest rates. Scott, ever vigilant, noticed that another elite publication, the Financial Times, published an equally silly rant by Avinash Persaud exhorting the Fed to show steel and raise rates.

Scott focused on one particular example of silliness about the importance of raising interest rates ASAP notwithstanding the fact that the Fed has failed to meet its 2% inflation target for something like 39 consecutive months:

Yet monetary policy cannot confine itself to reacting to the latest inflation data if it is to promote the wider goals of financial stability and sustainable economic growth. An over-reliance on extremely accommodative monetary policy may be one of the reasons why the world has not escaped from the clutches of a financial crisis that began more than eight years ago.

Scott deftly skewers Persaud with the following comment:

I suppose that’s why the eurozone economy took off after 2011, while the US failed to grow.  The ECB avoided our foolish QE policies, and “showed steel” by raising interest rates twice in the spring of 2011.  If only we had done the same.

But Scott allowed the following bit of nonsense on Persaud’s part to escape unscathed (I don’t mean to be critical of Scott; there’s only so much nonsense that any single person can be expected to hold up to public derision):

The slowdown in the Chinese economy has its roots in decisions made far from Beijing. In the past five years, central banks in all the big advanced economies have embarked on huge quantitative easing programmes, buying financial assets with newly created cash. Because of the effect they have on exchange rates, these policies have a “beggar-thy-neighbour” quality. Growth has been shuffled from place to place — first the US, then Europe and Japan — with one country’s gains coming at the expense of another. This zero-sum game cannot launch a lasting global recovery. China is the latest loser. Last week’s renminbi devaluation brought into focus that since 2010, China’s export-driven economy has laboured under a 25 per cent appreciation of its real effective exchange rate.

The effect of quantitative easing on exchange rates is not the result of foreign-exchange-market intervention; it is the result of increasing the total quantity of base money. Expanding the monetary base reduces the value of the domestic currency unit relative to foreign currencies by raising prices in terms of the domestic currency relative to prices in terms of foreign currencies. There is no beggar-thy-neighbor effect from monetary expansion of this sort. And even if exchange-rate depreciation were achieved by direct intervention in the foreign-exchange markets, the beggar-thy-neighbor effect would be transitory as prices in terms of domestic and foreign currencies would adjust to reflect the altered exchange rate. As I have explained in a number of previous posts on currency manipulation (e.g., here, here, and here) relying on Max Corden’s contributions of 30 years ago on the concept of exchange-rate protection, a “beggar-thy-neighbor” effect is achieved only if there is simultaneous intervention in foreign-exchange markets to reduce the exchange rate of the domestic currency combined with offsetting open-market sales to contract – not expand – the monetary base (or, alternatively, increased reserve requirements to increase the domestic demand to hold the monetary base). So the allegation that quantitative easing has any substantial “beggar-thy-neighbor” effect is totally without foundation in economic theory. It is just the ignorant repetition of absurd economic prejudices dressed up in high-minded sloganeering about “zero-sum games” and “beggar-thy-neighbor” effects.
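The point can be stated in one line. Define the real exchange rate (my own notation) as

\[ q = \frac{e\,P^{*}}{P}, \]

where \(e\) is the domestic-currency price of foreign currency, \(P\) the domestic price level, and \(P^{*}\) the foreign price level. Pure monetary expansion raises \(e\) and \(P\) roughly in proportion, leaving \(q\), the relative price that actually matters for trade flows, more or less unchanged; only intervention that raises \(e\) while the monetary base is prevented from expanding (so that \(P\) does not rise) can depress \(q\) persistently, which is exactly Corden’s notion of exchange-rate protection.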

And while the real exchange rate of the Chinese yuan may have increased by 25% since 2010, the real exchange rate of the dollar over the same period in which the US was allegedly pursuing a beggar-thy-neighbor policy increased by about 12%. The appreciation of the dollar reflects the relative increase in the strength of the US economy over the past 5 years, precisely the opposite of a beggar-thy-neighbor strategy.

And at an intuitive level, it is just absurd to think that China would have been better off if the US, out of a tender solicitude for the welfare of Chinese workers, had foregone monetary expansion, and allowed its domestic economy to stagnate totally. To whom would the Chinese have exported in that case?


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
