Traffic Jams and Multipliers

Since my previous post which I closed by quoting the abstract of Brian Arthur’s paper “Complexity Economics: A Different Framework for Economic Thought,” I have been reading his paper and some of the papers he cites, especially Magda Fontana’s paper “The Santa Fe Perspective on Economics: Emerging Patterns in the Science of Complexity,” and Mark Blaug’s paper “The Formalist Revolution of the 1950s.” The papers bring together a number of themes that I have been emphasizing in previous posts on what I consider the misguided focus of modern macroeconomics on rational-expectations equilibrium as the organizing principle of macroeconomic theory. Among these themes are the importance of coordination failures in explaining macroeconomic fluctuations, the inappropriateness of the full general-equilibrium paradigm in macroeconomics, the mistaken transformation of microfoundations from a theoretical problem to be solved into an absolute methodological requirement to be insisted upon (almost exactly analogous to the absurd transformation of the mind-body problem into a dogmatic insistence that the mind is merely a figment of our own imagination), or, stated another way, a recognition that macrofoundations are just as necessary for economics as microfoundations.

Let me quote again from Arthur’s essay, this time a beautiful passage that captures the interdependence between the micro and macro perspectives:

To look at the economy, or areas within the economy, from a complexity viewpoint then would mean asking how it evolves, and this means examining in detail how individual agents’ behaviors together form some outcome and how this might in turn alter their behavior as a result. Complexity in other words asks how individual behaviors might react to the pattern they together create, and how that pattern would alter itself as a result. This is often a difficult question; we are asking how a process is created from the purposed actions of multiple agents. And so economics early in its history took a simpler approach, one more amenable to mathematical analysis. It asked not how agents’ behaviors would react to the aggregate patterns these created, but what behaviors (actions, strategies, expectations) would be upheld by — would be consistent with — the aggregate patterns these caused. It asked in other words what patterns would call for no changes in microbehavior, and would therefore be in stasis, or equilibrium. (General equilibrium theory thus asked what prices and quantities of goods produced and consumed would be consistent with — would pose no incentives for change to — the overall pattern of prices and quantities in the economy’s markets. Classical game theory asked what strategies, moves, or allocations would be consistent with — would be the best course of action for an agent (under some criterion) — given the strategies, moves, allocations his rivals might choose. And rational expectations economics asked what expectations would be consistent with — would on average be validated by — the outcomes these expectations together created.)

This equilibrium shortcut was a natural way to examine patterns in the economy and render them open to mathematical analysis. It was an understandable — even proper — way to push economics forward. And it achieved a great deal. Its central construct, general equilibrium theory, is not just mathematically elegant; in modeling the economy it re-composes it in our minds, gives us a way to picture it, a way to comprehend the economy in its wholeness. This is extremely valuable, and the same can be said for other equilibrium modelings: of the theory of the firm, of international trade, of financial markets.

But there has been a price for this equilibrium finesse. Economists have objected to it — to the neoclassical construction it has brought about — on the grounds that it posits an idealized, rationalized world that distorts reality, one whose underlying assumptions are often chosen for analytical convenience. I share these objections. Like many economists, I admire the beauty of the neoclassical economy; but for me the construct is too pure, too brittle — too bled of reality. It lives in a Platonic world of order, stasis, knowableness, and perfection. Absent from it is the ambiguous, the messy, the real. (pp. 2-3)

Later in the essay, Arthur provides a simple example of a non-equilibrium complex process: traffic flow.

A typical model would acknowledge that at close separation from cars in front, cars lower their speed, and at wide separation they raise it. A given high density of traffic of N cars per mile would imply a certain average separation, and cars would slow or accelerate to a speed that corresponds. Trivially, an equilibrium speed emerges, and if we were restricting solutions to equilibrium that is all we would see. But in practice at high density, a nonequilibrium phenomenon occurs. Some car may slow down — its driver may lose concentration or get distracted — and this might cause cars behind to slow down. This immediately compresses the flow, which causes further slowing of the cars behind. The compression propagates backwards, traffic backs up, and a jam emerges. In due course the jam clears. But notice three things. The phenomenon’s onset is spontaneous; each instance of it is unique in time of appearance, length of propagation, and time of clearing. It is therefore not easily captured by closed-form solutions, but best studied by probabilistic or statistical methods. Second, the phenomenon is temporal, it emerges or happens within time, and cannot appear if we insist on equilibrium. And third, the phenomenon occurs neither at the micro-level (individual car level) nor at the macro-level (overall flow on the road) but at a level in between — the meso-level. (p. 9)
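Arthur’s description is easy to reproduce in a toy simulation. The following sketch (my own illustration with made-up parameters, not a model taken from Arthur’s paper) puts cars on a circular road, lets each car adjust its speed toward a target that grows with the gap to the car ahead, and briefly stops one driver; the resulting slowdown propagates backward through the cars behind before dissipating:

```python
def simulate_traffic(n_cars=30, road_len=300.0, steps=200, v_max=3.0,
                     shock_step=50, shock_car=0):
    """Toy car-following model on a circular road.

    Each car steers its speed toward a target that rises with the gap
    to the car ahead; a single momentary stop by one driver seeds a
    slowdown that propagates backward, as in Arthur's example.
    Returns the slowest speed each car ever reaches.
    """
    spacing = road_len / n_cars
    pos = [i * spacing for i in range(n_cars)]   # evenly spaced: the equilibrium
    vel = [v_max] * n_cars
    min_vel = [v_max] * n_cars
    for t in range(steps):
        new_vel = []
        for i in range(n_cars):
            gap = (pos[(i + 1) % n_cars] - pos[i]) % road_len
            target = min(v_max, max(0.0, 0.5 * (gap - 2.0)))  # slow when close
            new_vel.append(vel[i] + 0.5 * (target - vel[i]))  # gradual adjustment
        if t == shock_step:
            new_vel[shock_car] = 0.0   # one driver momentarily loses concentration
        vel = new_vel
        pos = [(p + v) % road_len for p, v in zip(pos, vel)]
        min_vel = [min(m, v) for m, v in zip(min_vel, vel)]
    return min_vel
```

Until the shock, every car cruises at the equilibrium speed, which is all an equilibrium analysis would reveal; afterward the cars immediately behind the stopped driver slow in turn, and the compression wave attenuates as it travels backward. That is the temporal, meso-level phenomenon Arthur describes.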

This simple example provides an excellent insight into why macroeconomic reasoning can be led badly astray by focusing on the purely equilibrium relationships characterizing what we now think of as microfounded models. In arguing against the Keynesian multiplier analysis supposedly justifying increased government spending as a countercyclical tool, Robert Barro wrote the following in an unfortunate Wall Street Journal op-ed piece, which I have previously commented on here and here.

Keynesian economics argues that incentives and other forces in regular economics are overwhelmed, at least in recessions, by effects involving “aggregate demand.” Recipients of food stamps use their transfers to consume more. Compared to this urge, the negative effects on consumption and investment by taxpayers are viewed as weaker in magnitude, particularly when the transfers are deficit-financed.

Thus, the aggregate demand for goods rises, and businesses respond by selling more goods and then by raising production and employment. The additional wage and profit income leads to further expansions of demand and, hence, to more production and employment. As per Mr. Vilsack, the administration believes that the cumulative effect is a multiplier around two.

If valid, this result would be truly miraculous. The recipients of food stamps get, say, $1 billion but they are not the only ones who benefit. Another $1 billion appears that can make the rest of society better off. Unlike the trade-off in regular economics, that extra $1 billion is the ultimate free lunch.

How can it be right? Where was the market failure that allowed the government to improve things just by borrowing money and giving it to people? Keynes, in his “General Theory” (1936), was not so good at explaining why this worked, and subsequent generations of Keynesian economists (including my own youthful efforts) have not been more successful.

In the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce, just as somehow withdrawing a few cars from a congested road could increase the average speed of all the remaining cars on the road, by a much greater amount than would withdrawing a few cars from an uncongested road. In other words, microresponses may be sensitive to macroconditions.


15 Responses to “Traffic Jams and Multipliers”

  1. 1 Mike Sproul December 12, 2014 at 1:03 pm

    “in the disequilibrium environment of a recession, it is at least possible that injecting additional spending into the economy could produce effects that a similar injection of spending, under “normal” macro conditions, would not produce”

    Maybe it’s not the spending, but the money printing that does the trick. Certainly if you start in an economy that is cash-starved, due to bank runs, legal restrictions on banking, or misguided tight money policies from central banks, then issuing money can relieve the tight money condition and revive business.

    Maybe all of Keynesian economics is just a big misunderstanding, started by people who witnessed the effects of money printing, and thought they were seeing the effects of government spending.


  2. 2 Thornton Hall December 12, 2014 at 1:42 pm

    I don’t understand your phrasing. It seems to be desperately trying to hold onto equilibrium analysis. Let’s not be so focused on it, but keep it around.

    But complexity doesn’t apply just to recessions. There aren’t just occasions of disequilibrium. Just because we’re not feeling an earthquake as we speak doesn’t mean the forces shifting the plates around are on a coffee break.

    The orthodoxy doesn’t just comprise a set of theories. It also includes meta-ideas, such as the notion that it’s OK to have a quiver of (wrong) theories that one draws from based on… expertise? A Magic 8 Ball?


  3. 3 Jason December 12, 2014 at 1:58 pm

    Hi David,

    I don’t mean to be a bother with the information theory take on economics I’ve been researching, but I thought you might be interested in seeing how traffic would be described with the same model:

    In fact, since you discuss the traffic model, I thought that showing how such a traffic model could be built in the framework might win you over to this new way of thinking.

    The framework encompasses much of what you talk about in this post and what Brian Arthur discusses (e.g. equilibrium and coordination) — in fact, it was originally designed to address non-equilibrium complex systems!




  4. 4 Dan December 12, 2014 at 3:16 pm

    Keynes GDP multiplier
    I am surprised at the trust given to the Keynesian Multiplier. From my perspective it detracts from the economist’s efforts and places him in an awkward position.
    My article on the topic, Beware of Economic Textbooks, discusses several dubious economic concepts, including the Keynesian Multiplier. The link is:

    The explanation of the Keynesian “multiplier” is as follows:

    “The intuition comes from the fact that the marginal propensity to consume (MPC) is positive. MPC is the money people spend when they get an extra dollar of income. When MPC = 0.8, for example, when people get an extra dollar of income, they spend 80 cents of it. So the Keynesian multiplier works as follows, assuming for simplicity that MPC = 0.8. When the government increases expenditure by 1 dollar on a good produced by agent A, this dollar becomes A’s income. As MPC = 0.8, A will spend 80 cents of this extra income on something it wants to consume. Suppose A spends the 80 cents on a good produced by B; then B would have an extra income of 80 cents. B would then spend 0.8 of this 80 cents, i.e., 64 cents, on something else. This 64 cents becomes someone else’s income, and this someone will spend 0.8 of it. The process repeats itself. The GDP added to the economy is the sum of all the spending, 1 + 0.8 + 0.64 + 0.512 + …, which is larger than the 1 dollar that the government originally spent. In other words, the government spending is “multiplied.” Mathematically, the sum 1 + 0.8 + 0.64 + … is a geometric series. When you sum it up, it takes the form 1/(1 − MPC). For MPC = 0.8, the effect of the government spending is multiplied 5 times.”

    Very good, except for two considerations: (1) the exact multiplier value (5 in this instance) requires an infinite number of transactions, which would take an infinite number of years, and (2) spending on goods (services are not in the multiplier) does not generate more spending; the spending goes to the entrepreneur and generates new investment. Government deficit spending only adds to the money supply, going to purchase unsold goods, purchase new goods, or provide profit, or, if it enters the market when production capacity has been reached, to stimulate an increase in prices. During the last years of deficit spending the GDP has risen slowly and profits have risen greatly.

    If taken at face value, the Keynesian multiplier hits a theoretical inconsistency: when MPC = 1, the multiplier becomes infinite, and the era of abundance has been reached. Actually an MPC = 1 signifies that if the original investment is totally spent, and continues to be totally spent, then after an infinite number of rounds of spending the total contribution to GDP will be infinite.

    The contribution of exogenous investment to an economy in any one year depends upon the number of business cycles from the investment, which is a function of the velocity of money. Rather than being a “multiplier,” the formula is actually a “divider.” If the MPC is < 1, then in each succeeding business cycle the contribution to GDP will continually decline until it becomes nil. If there is only one business cycle in the first year, then the contribution to GDP cannot exceed the deficit spending; the “multiplier” cannot be more than one, and the contribution to GDP cannot be more than the exogenous investment during that year and succeeding years.
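The quoted textbook mechanism, and the point above that the full multiplier value requires an infinite number of spending rounds, can both be checked numerically. The following sketch (my own illustration, not from the comment or any textbook) accumulates the geometric series round by round:

```python
def multiplier_path(mpc, rounds):
    """Cumulative GDP contribution after each round of re-spending.

    Round 0 is the initial $1 of government spending; every later
    round re-spends the fraction `mpc` of the previous round's income.
    """
    total, spend, path = 0.0, 1.0, []
    for _ in range(rounds):
        total += spend
        path.append(total)
        spend *= mpc
    return path

path = multiplier_path(0.8, 50)
# After 10 rounds the cumulative effect is about 4.46, still short of
# the limiting value 1 / (1 - 0.8) = 5, which is approached only as the
# number of rounds grows without bound.
```

Each partial sum falls strictly below 1/(1 − MPC), so the headline multiplier of 5 is a limit, never an amount realized after finitely many transactions.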


  5. 5 David December 12, 2014 at 4:52 pm

    For the literal-minded, Jennifer La’O has a nice paper explicitly using traffic models of the type Arthur describes in that quote to generate a theory of coordination-failure-driven recessions, which can arise naturally from small deviations away from the constant-spacing equilibrium in the same way traffic jams can: see


  6. 6 Kevin Donoghue December 13, 2014 at 5:33 am

    Ah, Barro! Truth to tell, his “youthful efforts” were very valuable. It was the turn to freshwater theory that was the mistake. I can’t resist quoting this tweet by his son Josh:

    My dad bragged that he paid no initiation fee for his gym membership. I told him nobody ever pays the initiation fee. He looked crestfallen.

    There’s a moral there for High Theorists.


  7. 7 doncoffin64 December 13, 2014 at 12:05 pm

    “In other words, microresponses may be sensitive to macroconditions.”

    Exactly. The entire equilibrium analysis reminds me of the “stationary state” analysis that seems to underlie a lot of 19th-century theory…


  8. 8 jonasfeit December 13, 2014 at 12:51 pm

    I don’t know if the right dichotomy is “equilibrium versus disequilibrium.” Rather, it’s negative feedback versus positive feedback. The chief blind spot of neoclassical economics is positive feedback. Not all forces conjure countervailing forces. That is why we have bank runs, “lemon markets,” spontaneous traffic jams, and so forth.


  9. 9 Marcus Nunes December 13, 2014 at 2:10 pm

    David, just out of curiosity.
    In the late 1990s, Krugman was a bit peeved at the attention given to Brian Arthur’s initial thoughts on economics & complexity:


  10. 10 David Glasner December 13, 2014 at 6:04 pm

    Mike, If you look at my first post about Barro, you will see that I interpreted the multiplier as a shift of cash from people with a perfectly elastic demand for money in a liquidity trap to people who were cash constrained and would therefore spend any additional cash as soon as they got it.

    Thornton, I’m not sure what phrasing you’re referring to. I like equilibrium analysis, but there are many problems that can’t be addressed using equilibrium models (at least full equilibrium or ratex equilibrium models). I do think that temporary equilibrium models offer an alternative as I have discussed in some previous posts.

    Jason, No bother, and thanks for the link, which I will look at.

    Dan, The multiplier is a linear relationship. Obviously, the linear relationship cannot persist indefinitely, so if you try to extend the multiplier analysis very far, you will get an absurd result, because other variables will have to change. But it is conceivable that there could be situations (e.g., a deep depression) in which the multiplier relationship could be validated empirically.

    David, Thanks for the link, will have a look.

    Kevin, That’s a classic.

    Don, Well, the stationary state was conceived of as the final equilibrium towards which an economy was tending in the absence of any exogenous disturbances (though of course everyone realized that there would always be exogenous disturbances).

    Jonas, Negative feedback implies that equilibrium is stable, positive feedback implies equilibrium is unstable.

    Marcus, Thanks, very interesting article. I don’t think that increasing returns is the only source of complexity in Brian Arthur’s work. I was also surprised that Arthur seems to think that increasing returns was such a breakthrough; Marshall was already discussing the idea in his Principles of Economics at the end of the nineteenth century.


  11. 11 Thornton Hall December 16, 2014 at 7:47 am

    Thanks for the response. It clearly states your position on equilibrium analysis and confirms my reading of your post.

    A steady-state model of the Earth’s crust would sort of work for most purposes, but insisting on retaining it would no doubt retard our understanding of plate tectonics. Earthquakes and volcanoes, the phenomena that reveal the forces at work, would be treated as “exogenous shocks”. There is no way to get from there to a correct understanding.

    Holding on to equilibrium is exactly the same. Except, of course, for the fact that it makes winners and losers out of different classes of people.


  12. 13 David Glasner December 18, 2014 at 5:46 pm

    Thornton, I don’t insist on equilibrium models, and obviously sometimes we have to make do without them. Part of what makes a good economist is the ability to identify the appropriate model for the appropriate problem.

    Travis, Thanks for the links. I already saw Noah’s post and the link to Roger’s post. It’s a terrific post, but, taking Thornton’s side for a moment, I think that Roger is a bit too wedded to equilibrium modeling.


  1. 1 Mathematical Modelling in Economics that Meets the Lawson Critique | Decisions, Decisions, Decisions Trackback on January 18, 2015 at 3:31 am
  2. 2 Involuntary Unemployment, the Mind-Body Problem, and Rubbernecking | Uneasy Money Trackback on June 17, 2021 at 4:54 pm

