Archive for the 'representative agent' Category

Representative Agents, Homunculi and Faith-Based Macroeconomics

After my previous post comparing the neoclassical synthesis in its various versions to the mind-body problem, there was an interesting Twitter exchange between Steve Randy Waldman and David Andolfatto in which Andolfatto queried whether Waldman and I are aware that there are representative-agent models in which the equilibrium is not Pareto-optimal. Andolfatto raised an interesting point, but what I found interesting about it might be different from what Andolfatto was trying to show, which, I am guessing, was that a representative-agent modeling strategy doesn’t necessarily commit the theorist to the conclusion that the world is optimal and that the solutions of the model can never be improved upon by a monetary/fiscal-policy intervention. I concede the point. It is well known, I think, that, given the appropriate assumptions, a general-equilibrium model can have a sub-optimal solution. Given those assumptions, the corresponding representative agent will also choose a sub-optimal solution. So I think I get that, but perhaps there’s a more subtle point that I’m missing. If so, please set me straight.

But what I was trying to argue was not that representative-agent models are necessarily optimal, but that representative-agent models suffer from an inherent, and, in my view, fatal, flaw: they can’t explain any real macroeconomic phenomenon, because a macroeconomic phenomenon has to encompass something more than the decision of a single agent, even an omniscient central planner. At best, the representative agent is just a device for solving an otherwise intractable general-equilibrium model, which is how I think Lucas originally justified the assumption.

Yet the fact that a general-equilibrium model can be formulated so that it can be solved as the optimization problem of a single agent does not explain the economic mechanism or process that generates the solution. The mathematical solution of a model does not necessarily provide any insight into the adjustment process or mechanism by which the solution actually is, or could be, achieved in the real world. Your ability to find a solution for a mathematical problem does not mean that you understand the real-world mechanism to which the solution of your model corresponds. The correspondence between your model and the real world may be a strictly mathematical correspondence, not in any way descriptive of how any real-world mechanism or process actually operates.

Here’s an example of what I am talking about. Consider a traffic-flow model explaining how congestion affects vehicle speed and the flow of traffic. It seems obvious that traffic congestion is caused by interactions between the different vehicles traversing a thoroughfare, just as it seems obvious that market exchange arises as the result of interactions between the different agents seeking to advance their own interests. OK, can you imagine building a useful traffic-flow model based on solving for the optimal plan of a representative vehicle?

I don’t think so. Once you frame the model in terms of a representative vehicle, you have abstracted from the phenomenon to be explained. The entire exercise would be pointless – unless, that is, you assumed that interactions between vehicles are so minimal that they can be ignored. But then why would you be interested in congestion effects? If you want to claim that your model has any relevance to the effect of congestion on traffic flow, you can’t base the claim on an assumption that there is no congestion.
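To make the point concrete, here is a minimal sketch of such a model (purely illustrative: it uses the well-known Nagel-Schreckenberg cellular automaton, and the function names and parameter values below are my own arbitrary choices, not anything drawn from the traffic-engineering literature). Cars on a circular road follow one interaction rule, namely that no car may run into the car ahead, plus occasional random slowdowns. Out of nothing more than that, stop-and-go waves form and average speed collapses as density rises, an outcome that a representative vehicle optimizing its own trip in isolation could never generate, let alone explain.

```python
import random

def simulate_traffic(density, road_length=200, v_max=5, p_slow=0.3, steps=500, seed=0):
    """Nagel-Schreckenberg cellular automaton on a circular road.

    Each cell holds at most one car; each car has an integer speed 0..v_max.
    Congestion emerges purely from the interaction rule "do not run into the
    car ahead" plus random slowdowns; no single vehicle plans it.
    """
    rng = random.Random(seed)
    n_cars = max(1, int(density * road_length))
    positions = sorted(rng.sample(range(road_length), n_cars))
    speeds = [0] * n_cars

    total_speed, samples = 0, 0
    for t in range(steps):
        # Gap to the car ahead (cars never overtake, so index order is preserved).
        gaps = [(positions[(i + 1) % n_cars] - positions[i] - 1) % road_length
                for i in range(n_cars)]
        for i in range(n_cars):
            speeds[i] = min(speeds[i] + 1, v_max)        # accelerate toward the speed limit
            speeds[i] = min(speeds[i], gaps[i])          # interaction: do not hit the car ahead
            if speeds[i] > 0 and rng.random() < p_slow:  # random slowdown
                speeds[i] -= 1
        positions = [(positions[i] + speeds[i]) % road_length for i in range(n_cars)]
        if t >= steps // 2:                              # average speeds after a burn-in period
            total_speed += sum(speeds)
            samples += n_cars
    return total_speed / samples

if __name__ == "__main__":
    for rho in (0.05, 0.15, 0.30, 0.50):
        print(f"density {rho:.2f}: average speed {simulate_traffic(rho):.2f}")
```

With these (arbitrary) parameters, average speed should drop steeply once the density of cars passes a modest level; the congestion is an emergent property of the interactions between vehicles, not a feature of any single driver's plan.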

Or to take another example, suppose you want to explain the phenomenon that, at sporting events, all, or almost all, the spectators sit in their seats but occasionally get up simultaneously from their seats to watch the play on the field or court. Would anyone ever think that an explanation in terms of a representative spectator could explain that phenomenon?
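Apparently not, and a toy model shows why. The sketch below (again purely illustrative: a Granovetter-style threshold model with a made-up distribution of individual thresholds) has each spectator stand up only once the fraction of the crowd already on its feet exceeds his own personal threshold, while an exciting play makes a small fraction stand spontaneously. Whether that initial flurry fizzles out or cascades into the whole stadium rising depends on the heterogeneity of the spectators and on their reactions to one another, which is exactly what a representative spectator abstracts from.

```python
import random

def crowd_standing(n_spectators=1000, excitement=0.05, steps=50, seed=1):
    """Granovetter-style threshold model of spectators standing up.

    Each spectator stands once the fraction of the crowd already standing
    exceeds his personal threshold; an exciting play makes a small fraction
    stand spontaneously.  Thresholds are drawn from an arbitrary, purely
    illustrative normal distribution.
    """
    rng = random.Random(seed)
    thresholds = [min(1.0, max(0.0, rng.gauss(0.5, 0.1))) for _ in range(n_spectators)]
    standing = [rng.random() < excitement for _ in range(n_spectators)]

    for _ in range(steps):
        frac = sum(standing) / n_spectators           # how much of the crowd is on its feet
        standing = [up or (theta < frac)              # stand if already up or threshold passed
                    for up, theta in zip(standing, thresholds)]
    return sum(standing) / n_spectators

if __name__ == "__main__":
    for exc in (0.05, 0.20, 0.35):
        frac = crowd_standing(excitement=exc)
        print(f"initial excitement {exc:.2f} -> fraction finally standing {frac:.2f}")
```

With these made-up numbers, a small burst of initial excitement should leave most of the crowd seated, while a somewhat larger burst should tip nearly everyone onto their feet. The all-or-nothing character of the outcome comes from the distribution of thresholds and the feedback among spectators, not from any single spectator's decision problem.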

In just the same way, a representative-agent macroeconomic model necessarily abstracts from the interactions between actual agents. Obviously, by abstracting from the interactions, the model can’t demonstrate that there are no interactions between agents in the real world or that their interactions are too insignificant to matter. I would be shocked if anyone really believed that the interactions between agents are unimportant, much less negligible; nor have I seen an argument that they are unimportant, the concept of network effects, to give just one example, being an important topic in microeconomics.

It’s no answer to say that all the interactions are accounted for within the general-equilibrium model. That is just a form of question-begging. The representative agent is being assumed because without him the problem of finding a general-equilibrium solution of the model is very difficult or intractable. Taking into account interactions makes the model too complicated to work with analytically, so it is much easier — but still hard enough to allow the theorist to perform some fancy mathematical techniques — to ignore those pesky interactions. On top of that, the process by which the real world arrives at outcomes to which a general-equilibrium model supposedly bears at least some vague resemblance can’t even be described by conventional modeling techniques.

The modeling approach seems like that of a neuroscientist saying that, because he could simulate the functions, electrical impulses, chemical reactions, and neural connections in the brain – which he can’t do and isn’t even close to doing, even though a neuroscientist’s understanding of the brain far surpasses any economist’s understanding of the economy – he can explain consciousness. Simulating the operation of a brain would not explain consciousness, because the computer on which the neuroscientist performed the simulation would not become conscious in the course of the simulation.

Many neuroscientists and other materialists like to claim that consciousness is not real, that it’s just an epiphenomenon. But we all have the subjective experience of consciousness, so whatever it is that someone wants to call it, consciousness — indeed the entire world of mental phenomena denoted by that term — remains an unexplained phenomenon, a phenomenon that can only be dismissed as unreal on the basis of a metaphysical dogma that denies the existence of anything that can’t be explained as the result of material and physical causes.

I call that metaphysical belief a dogma not because it’s false — I have no way of proving that it’s false — but because materialism is just as much a metaphysical belief as deism or monotheism. It graduates from belief to dogma when people assert not only that the belief is true but that there’s something wrong with you if you are unwilling to believe it as well. The most that I would say against the belief in materialism is that I can’t understand how it could possibly be true. But I admit that there are a lot of things that I just don’t understand, and I will even admit to believing in some of those things.

New Classical macroeconomists, like, say, Robert Lucas and, perhaps, Thomas Sargent, like to claim that unless a macroeconomic model is microfounded — by which they mean derived from an explicit intertemporal optimization exercise typically involving a representative agent or possibly a small number of different representative agents — it’s not an economic model, because the model, being vulnerable to the Lucas critique, is theoretically superficial and vacuous. But only models of intertemporal equilibrium — a set of one or more mutually consistent optimal plans — are immune to the Lucas critique, so insisting on immunity to the Lucas critique as a prerequisite for a macroeconomic model is a guarantee of failure if your aim is to explain anything other than an intertemporal equilibrium.

Unless, that is, you believe that the real world is in fact the realization of a general-equilibrium model, which is what real-business-cycle theorists, like Edward Prescott, at least claim to believe. Like materialist believers that all mental states are epiphenomenal, and that consciousness is an (unexplained) illusion, real-business-cycle theorists purport to deny that there is such a thing as a disequilibrium phenomenon, the so-called business cycle, in their view, being nothing but a manifestation of the intertemporal-equilibrium adjustment of an economy to random (unexplained) productivity shocks. According to real-business-cycle theorists, such characteristic phenomena of business cycles as surprise, regret, disappointed expectations, abandoned and failed plans, and the inability to find work at wages comparable to those that other similar workers are being paid are not real phenomena; they are (unexplained) illusions and misnomers. The real-business-cycle theorists don’t just fail to construct macroeconomic models; they deny the very existence of macroeconomics, just as strict materialists deny the existence of consciousness.

What is so preposterous about the New-Classical/real-business-cycle methodological position is not the belief that the business cycle can somehow be modeled as a purely equilibrium phenomenon, implausible as that idea seems, but the insistence that only micro-founded business-cycle models are methodologically acceptable. It is one thing to believe that ultimately macroeconomics and business-cycle theory will be reduced to the analysis of individual agents and their interactions. But current micro-founded models can’t provide explanations for what many of us think are basic features of macroeconomic and business-cycle phenomena. If non-micro-founded models can provide explanations for those phenomena, even if those explanations are not fully satisfactory, what basis is there for rejecting them just because of a methodological precept that disqualifies all non-micro-founded models?

According to Kevin Hoover, the basis for insisting that only micro-founded macroeconomic models are acceptable, even if the microfoundation consists in a single representative agent optimizing for an entire economy, is eschatological. In other words, because of a belief that economics will eventually develop analytical or computational techniques sufficiently advanced to model an entire economy in terms of individual interacting agents, an analysis based on a single representative agent, as the first step on this theoretical odyssey, is somehow methodologically privileged over alternative models that do not share that destiny. Hoover properly rejects the presumptuous notion that an avowed, but unrealized, theoretical destiny can provide a privileged methodological status to an explanatory strategy. The reductionist microfoundationalism of New-Classical macroeconomics and real-business-cycle theory, with which New Keynesian economists have formed an alliance of convenience, is truly a faith-based macroeconomics.

The remarkable similarity between the reductionist microfoundational methodology of New-Classical macroeconomics and the reductionist materialist approach to the concept of mind suggests to me that there is also a close analogy between the representative agent and what philosophers of mind call a homunculus. The Cartesian materialist theory of mind maintains that, at some place or places inside the brain, there resides information corresponding to our conscious experience. The question then arises: how does our conscious experience access the latent information inside the brain? And the answer is that there is a homunculus (or little man) that processes the information for us so that we can perceive it through him. For example, the homunculus (see the attached picture of the little guy) views the image cast by light on the retina as if he were watching a movie projected onto a screen.

[Image: homunculus]

But there is an obvious fallacy, because the follow-up question is: how does our little friend see anything? Well, the answer must be that there’s another, smaller, homunculus inside his brain. You can probably already tell that this argument is going to take us on an infinite regress. So what purports to be an explanation turns out to be just a form of question-begging. Sound familiar? The only difference between the representative agent and the homunculus is that the representative agent begs the question immediately without first taking us on an infinite regress.

PS I have been sidetracked by other responsibilities, so I have not been blogging much, if at all, for the last few weeks. I hope to post more frequently, but I am afraid that my posting and replies to comments are likely to remain infrequent for the next couple of months.

The Neoclassical Synthesis and the Mind-Body Problem

The neoclassical synthesis that emerged in the early postwar period aimed at reconciling the macroeconomic (IS-LM) analysis derived from Keynes via Hicks and others with the neoclassical microeconomic analysis of general equilibrium derived from Walras. The macroeconomic analysis was focused on an equilibrium of income and expenditure flows while the Walrasian analysis was focused on the equilibrium between supply and demand in individual markets. The two types of analysis seemed to be incommensurate inasmuch as the conditions for equilibrium in the two analyses did not seem to match up against each other. How does an analysis focused on the equality of aggregate flows of income and expenditure get translated into an analysis focused on the equality of supply and demand in individual markets? The two languages seem to be different, so it is not obvious how a statement formulated in one language gets translated into the other. And even if a translation is possible, does the translation hold under all, or only under some, conditions? And if so, what are those conditions?
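Purely for concreteness (these are standard textbook formulations, not anything peculiar to the synthesis under discussion), the two languages can be set side by side. The income-expenditure analysis requires a single aggregate equality, while the Walrasian analysis requires that every one of the n markets clear simultaneously:

$$Y = C(Y) + I(r) + G \qquad \text{(income-expenditure equilibrium)}$$

$$D_i(p_1, \ldots, p_n) = S_i(p_1, \ldots, p_n), \qquad i = 1, \ldots, n \qquad \text{(Walrasian market clearing)}$$

The first statement mentions no individual prices or markets at all, which is why it is not obvious under what conditions, if any, satisfying it guarantees, or even permits, satisfaction of the second.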

The original neoclassical synthesis did not aim to provide a definitive answer to those questions, but it was understood to assert that if the equality of income and expenditure was assured at a level consistent with full employment, one could safely assume that market forces would take care of the allocation of resources, so that markets would be cleared and the conditions of microeconomic general equilibrium satisfied, at least as a first approximation. This version of the neoclassical synthesis was obviously ad hoc and an unsatisfactory resolution of the incommensurability of the two levels of analysis. Don Patinkin sought to provide a rigorous reconciliation of the two levels of analysis in his treatise Money, Interest and Prices. But for all its virtues – and they are numerous – Patinkin’s treatise failed to bridge the gap between the two levels of analysis.

As I mentioned recently in a post on Romer and Lucas, Kenneth Arrow in a 1967 review of Samuelson’s Collected Works commented disparagingly on the neoclassical synthesis of which Samuelson was a leading proponent. The widely shared dissatisfaction expressed by Arrow motivated much of the work that soon followed on the microfoundations of macroeconomics exemplified in the famous 1970 Phelps volume. But the motivation for the search for microfoundations was then (before the rational expectations revolution) to specify the crucial deviations from the assumptions underlying the standard Walrasian general-equilibrium model that would generate actual or seeming price rigidities, which a straightforward – some might say superficial — understanding of neoclassical microeconomic theory suggested were necessary to explain why, after a macro-disturbance, equilibrium was not rapidly restored by price adjustments. Two sorts of explanations emerged from the early microfoundations literature: a) search and matching theories assuming that workers and employers must expend time and resources to find appropriate matches; b) institutional theories of efficiency wages or implicit contracts that explain why employers and workers prefer layoffs to wage cuts in response to negative demand shocks.

Forty years on, the search and matching theories do not seem capable of accounting for the magnitude of observed fluctuations in employment or the cyclical variation in layoffs, and the institutional theories are still difficult to reconcile with the standard neoclassical assumptions, remaining an ad hoc appendage to New Keynesian models that otherwise adhere to the neoclassical paradigm. Thus, although the original neoclassical synthesis, in which the Keynesian income-expenditure model was seen as a pre-condition for the validity of the neoclassical model, was rejected within a decade of Arrow’s dismissive comment about the neoclassical synthesis, Tom Sargent has observed in a recent review of Robert Lucas’s Collected Papers on Monetary Theory that Lucas has implicitly adopted a new version of the neoclassical synthesis dominated by an intertemporal neoclassical general-equilibrium model, but with the proviso that substantial shocks to aggregate demand and the price level are prevented by monetary policy, thereby making the neoclassical model a reasonable approximation to reality.

OK, so you are probably asking: what does all this have to do with the mind-body problem? A lot, I think, in that both the neoclassical synthesis and the mind-body problem involve a disconnect between two kinds – two levels – of explanation. The neoclassical synthesis asserts some sort of connection – but a problematic one — between the explanatory apparatus – macroeconomics — used to understand the cyclical fluctuations of what we are used to thinking of as the aggregate economy and the explanatory apparatus – microeconomics — used to understand the constituent elements of the aggregate economy — households and firms — and how those elements are related to, and interact with, each other.

The mind-body problem concerns the relationship between the mental – our direct experience of a conscious inner life of thoughts, emotions, memories, decisions, hopes and regrets — and the physical – matter, atoms, neurons. A basic postulate of science is that all phenomena have material causes. So the existence of conscious states that seem to us, by way of our direct experience, to be independent of material causes is also highly problematic. There are a few strategies for handling the problem. One is to assert that the mind truly is independent of the body, which is to say that consciousness is not the result of physical causes. A second is to say that mind is not independent of the body; we just don’t understand the nature of the relationship. There are two possible versions of this strategy: a) that although the nature of the relationship is unknown to us now, advances in neuroscience could reveal to us the way in which consciousness is caused by the operation of the brain; b) although our minds are somehow related to the operation of our brains, the nature of this relationship is beyond the capacity of our minds or brains to comprehend owing to considerations analogous to Gödel’s incompleteness theorem (a view espoused by the philosopher Colin McGinn among others); in other words, the mind-body problem is inherently beyond human understanding. And the third strategy is to deny the existence of consciousness, because a conscious state is identical with the physical state of a brain, so that consciousness is just an epiphenomenon of a brain state; we in our naivete may think that our conscious states have a separate existence, but those states are strictly identical with corresponding brain states, so that whatever conscious state we think we are experiencing has been entirely produced by the physical forces that determine the behavior of our brains and the configuration of their physical constituents.

The first, and probably the last, thing that one needs to understand about the third strategy is that, as explained by Colin McGinn (see e.g., here), its validity has not been demonstrated by neuroscience or by any other branch of science; it is, no less than any of the other strategies, strictly a metaphysical position. The mind-body problem is a problem precisely because science has not even come close to demonstrating how mental states are caused by, let alone that they are identical to, brain states, despite some spurious misinterpretations of research that purport to show such an identity.

Analogous to the scientific principle that all phenomena have material or physical causes, there is in economics and social science a principle called methodological individualism, which roughly states that explanations of social outcomes should be derived from theories about the conduct of individuals, not from theories about abstract social entities that exist independently of their constituent elements. The underlying motivation for methodological individualism (as opposed to political individualism with which it is related but from which it is distinct) was to counter certain ideas popular in the nineteenth and twentieth centuries asserting the existence of metaphysical social entities like “history” that are somehow distinct from yet impinge upon individual human beings, and that there are laws of history or social development from which future states of the world can be predicted, as Hegel, Marx and others tried to do. This notion gave rise to two famous books by Popper: The Open Society and Its Enemies and The Poverty of Historicism. Methodological individualism as articulated by Popper was thus primarily an attack on the attribution of special powers to determine the course of future events to abstract metaphysical or mystical entities like history or society that are supposedly things or beings in themselves distinct from the individual human beings of which they are constituted. Methodological individualism does not deny the existence of collective entities like society; it simply denies that such collective entities exist as objective facts that can be observed as such. Our apprehension of these entities must be built up from more basic elements — individuals and their plans, beliefs and expectations — that we can apprehend directly.

However, methodological individualism is not the same as reductionism; methodological individualism teaches us to look for explanations of higher-level phenomena, e.g., a pattern of social relationships like the business cycle, in terms of the basic constituents forming the pattern: households, business firms, banks, central banks and governments. It does not assert identity between the pattern of relationships and the constituent elements; it says that the pattern can be understood in terms of interactions between the elements. Thus, a methodologically individualistic explanation of the business cycle in terms of the interactions between agents – households, businesses, etc. — would be analogous to an explanation of consciousness in terms of the brain if an explanation of consciousness existed. A methodologically individualistic explanation of the business cycle would not be analogous to an assertion that consciousness exists only as an epiphenomenon of brain states. The assertion that consciousness is nothing but the epiphenomenon of a corresponding brain state is reductionist; it asserts an identity between consciousness and brain states without explaining how consciousness is caused by brain states.

In business-cycle theory, the analogue of such a reductionist assertion of identity between higher-level and lower-level phenomena is the assertion that the business cycle is not the product of the interaction of individual agents, but is simply the optimal plan of a representative agent. On this account, the business cycle becomes an epiphenomenon, apparent fluctuations being nothing more than the optimal choices of the representative agent. Of course, everyone knows that the representative agent is merely a convenient modeling device in terms of which a business-cycle theorist tries to account for the observed fluctuations. But that is precisely the point. The whole exercise is a sham; the representative agent is an as-if device that does not ground business-cycle fluctuations in the conduct of individual agents and their interactions, but simply asserts an identity between those interactions and the supposed decisions of the fictitious representative agent. The optimality conditions in terms of which the model is solved completely disregard the interactions between individuals that might cause an unintended pattern of relationships between those individuals. The distinctive feature of methodological individualism is precisely the idea that the interactions between individuals can lead to unintended consequences; it is by way of those unintended consequences that a higher-level pattern might emerge from interactions among individuals. And those individual interactions are exactly what is suppressed by representative-agent models.

So the notion that any analysis premised on a representative agent provides microfoundations for macroeconomic theory seems to be a travesty built on a total misunderstanding of the principle of methodological individualism that it purports to affirm.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
