Archive for December, 2012

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA presidential address on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
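The structure of a comparative-statics exercise can be made concrete with a toy sketch. The linear demand and supply schedules below are entirely my own illustration (not drawn from any of the texts discussed here); the point is that the exercise compares equilibrium values before and after a parameter change, and so presupposes a unique equilibrium at each set of parameter values.

```python
# A toy comparative-statics exercise (hypothetical linear schedules):
# demand Qd = a - b*p, supply Qs = c + d*p. We compare equilibrium prices
# before and after a shift in the demand intercept a, ceteris paribus.

def equilibrium_price(a, b, c, d):
    """Unique equilibrium price where Qd = Qs, i.e. p* = (a - c) / (b + d)."""
    return (a - c) / (b + d)

p_before = equilibrium_price(a=10.0, b=1.0, c=2.0, d=1.0)  # p* = 4.0
p_after = equilibrium_price(a=11.0, b=1.0, c=2.0, d=1.0)   # p* = 4.5

# The comparative-statics proposition dp*/da = 1/(b + d) > 0 is meaningful
# only because each parameterization yields a unique equilibrium to compare.
assert p_after > p_before
```

If the system had multiple equilibria, or an unstable one, there would be no well-defined "before" and "after" values to compare, which is precisely the macrofoundational point.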

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.

Carlaw and Lipsey on Whether History Matters

About six months ago, I mentioned a forthcoming paper by Kenneth Carlaw and Richard Lipsey, “Does history matter? Empirical analysis of evolutionary versus stationary equilibrium views of the economy.” The paper was recently published in the Journal of Evolutionary Economics. The empirical analysis undertaken by Carlaw and Lipsey undermines many widely accepted propositions of modern macroeconomics, and is thus especially timely after the recent flurry of posts on the current state of macroeconomics by Krugman, Williamson, Smith, DeLong, Sumner, et al., a topic about which I may have a word or two to say anon. Here is the abstract of the Carlaw and Lipsey paper.

The evolutionary vision in which history matters is of an evolving economy driven by bursts of technological change initiated by agents facing uncertainty and producing long term, path-dependent growth and shorter-term, non-random investment cycles. The alternative vision in which history does not matter is of a stationary, ergodic process driven by rational agents facing risk and producing stable trend growth and shorter term cycles caused by random disturbances. We use Carlaw and Lipsey’s simulation model of non-stationary, sustained growth driven by endogenous, path-dependent technological change under uncertainty to generate artificial macro data. We match these data to the New Classical stylized growth facts. The raw simulation data pass standard tests for trend and difference stationarity, exhibiting unit roots and cointegrating processes of order one. Thus, contrary to current belief, these tests do not establish that the real data are generated by a stationary process. Real data are then used to estimate time-varying NAIRUs for six OECD countries. The estimates are shown to be highly sensitive to the time period over which they are made. They also fail to show any relation between the unemployment gap, actual unemployment minus estimated NAIRU, and the acceleration of inflation. Thus there is no tendency for inflation to behave as required by the New Keynesian and earlier New Classical theory. We conclude by rejecting the existence of a well-defined short-run, negatively sloped Phillips curve, a NAIRU, a unique general equilibrium, short- and long-run, a vertical long-run Phillips curve, and the long-run neutrality of money.
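The contrast between a stationary process and a path-dependent one can be illustrated with a deliberately simple sketch (my own, in no way Carlaw and Lipsey’s simulation model): a random walk, in which every shock permanently shifts the level so that history matters, versus a stationary AR(1), in which shocks die out so that history washes out. A naive lag regression recovers the difference.

```python
import random

def ar1_coefficient(x):
    """Naive OLS estimate of rho in x_t = rho * x_{t-1} + e_t (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

random.seed(42)
T = 5000
shocks = [random.gauss(0.0, 1.0) for _ in range(T)]

# Random walk: every shock permanently shifts the level -- history matters.
walk = [0.0]
for t in range(1, T):
    walk.append(walk[-1] + shocks[t])

# Stationary AR(1): shocks die out geometrically -- history washes out.
ar = [0.0]
for t in range(1, T):
    ar.append(0.5 * ar[-1] + shocks[t])

print(round(ar1_coefficient(walk), 2))  # close to 1.0 (unit root)
print(round(ar1_coefficient(ar), 2))    # close to 0.5 (mean-reverting)
```

Carlaw and Lipsey’s point is subtler than this sketch: data generated by their path-dependent growth model pass the standard stationarity tests, so passing such tests cannot establish that the real data come from a stationary process.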

UPDATE:  In addition to the abstract, I think it would be worthwhile to quote the three introductory paragraphs from Carlaw and Lipsey.

Economists face two conflicting visions of the market economy, visions that reflect two distinct paradigms, the Newtonian and the Darwinian. In the former, the behaviour of the economy is seen as the result of an equilibrium reached by the operation of opposing forces – such as market demanders and suppliers or competing oligopolists – that operate in markets characterised by negative feedback that returns the economy to its static equilibrium or its stationary equilibrium growth path. In the latter, the behaviour of the economy is seen as the result of many different forces – especially technological changes – that evolve endogenously over time, that are subject to many exogenous shocks, and that often operate in markets subject to positive feedback and in which agents operate under conditions of genuine uncertainty.
One major characteristic that distinguishes the two visions is stationarity for the Newtonian and non-stationarity for the Darwinian. In the stationary equilibrium of a static general equilibrium model and the equilibrium growth path of a Solow-type or endogenous growth model, the path by which the equilibrium is reached has no effect on the equilibrium values themselves. In short, history does not matter. In contrast, an important characteristic of the Darwinian vision is path dependency: what happens now has important implications for what will happen in the future. In short, history does matter.
In this paper, we consider, and cast doubts on, the stationarity properties of models in the Newtonian tradition. These doubts, if sustained, have important implications for understanding virtually all aspects of macroeconomics, including long term economic growth, shorter term business cycles, and stabilisation policy.
UPDATE (12/28/12):  I received an email from Richard Lipsey about this post.  He attached two footnotes (1 and 5) from his article with Carlaw, which he thinks are relevant to some of the issues raised in comments to this post.  Footnote 1 explains their use of “Darwinian” to describe their path-dependent approach to economic modeling; footnote 5 observes that the analysis of many microeconomic problems and short-run macro-policy analysis may be amenable to the static-equilibrium method.

1 The use of the terms Darwinian and Newtonian here is meant to highlight the significant difference in equilibrium concept employed in the two groups of theories that we contrast, the evolutionary and what we call equilibrium with deviations (EWD) theories. Not all evolutionary theories, including the one employed here, are strictly speaking Darwinian in the sense that they embody replication and selection. We use the term Darwinian to highlight the critical equilibrium concept of a path dependent, non-ergodic, historical process employed in Darwinian and evolutionary theories and to draw the contrast between that and the negative feedback, usually unique, ergodic equilibrium concept employed in Newtonian and EWD theories.

5 Most evolutionary economists accept that for many issues in microeconomics, comparative static equilibrium models are useful. Also, there is nothing incompatible between the evolutionary world view and the use of Keynesian models – of which IS-LM closed by an expectations-augmented Phillips curve is the prototype – to study such short run phenomena as stagflation and the impact effects of monetary and fiscal policy shocks. Problems arise, however, when such analyses are applied to situations in which technology is changing endogenously over time periods that are relevant to the issues being studied. Depending on the issue at hand, this might be as short as a few months.

Maybe Robert Waldmann Should Calm Down

Robert Waldmann is unhappy with Matthew Yglesias for being hopeful that Shinzo Abe, just elected prime minister of Japan, may be about to make an important contribution to the world economy, and to economic science, by prodding the Bank of Japan to increase its inflation target and by insisting that the BOJ actually hit the new target. Since I don’t regularly read Waldmann’s blog (not because it’s not worth reading — I usually enjoy reading it when I get to it – I just can’t keep up with that many blogs), I’m not sure why Waldmann finds Yglesias’s piece so annoying. OK, Waldmann’s a Keynesian and prefers fiscal to monetary policy, but so is Paul Krugman, and he thinks that monetary policy can be effective even at the zero lower bound. At any rate this is how Waldmann responds to Yglesias:

Ben Bernanke too has declared a policy of unlimited quantitative easing and increased inflation (new target only 2.5% but that’s higher than current inflation).  The declaration (which was a surprise) had essentially no effect on prices for medium term treasuries, TIPS or the breakeven.

I was wondering when you would comment, since you have confidently asserted again and again that if only the FOMC did what it just did, expected inflation would jump and then GDP growth would increase.

However, instead of noting the utter total failure of your past predictions (and the perfect confirmation of mine) you just boldly make new predictions.

Face fact,  like conventional monetary policy (in the US the Federal Funds rate) forward guidance is pedal to the metal.   It’s long past time for you to start climbing down.

I mention this because just yesterday I happened across another blog post about what Bernanke said after the FOMC meeting. This post by David Altig, executive VP and research director of the Atlanta Fed, was on the macroblog. Altig points out that, despite the increase in the Fed’s inflation threshold from 2 to 2.5%, the Fed increased neither its inflation target (still 2%) nor its inflation forecast (still under 2%). All that the Fed did was to say that it won’t immediately slam on the brakes if inflation rises above 2%, provided that unemployment is greater than 6.5% and inflation is less than 2.5%. That seems like a pretty marginal change in policy to me.
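On Altig’s reading, the announcement amounts to nothing more than a simple conditional rule. A boolean sketch (my own simplification, with the 6.5% and 2.5% thresholds taken from the FOMC statement as described above) makes clear how marginal the change is: the rule only says when the Fed will not yet tighten, and leaves the 2% target itself untouched.

```python
def keep_rates_near_zero(unemployment_pct, projected_inflation_pct):
    """Boolean sketch of the December 2012 FOMC threshold guidance as Altig
    describes it: maintain the near-zero rate stance while unemployment
    exceeds 6.5% and projected inflation stays below 2.5%. The 2% inflation
    target itself is unchanged; this is a tolerance band, not a new target."""
    return unemployment_pct > 6.5 and projected_inflation_pct < 2.5

assert keep_rates_near_zero(7.8, 2.2)      # above-target inflation tolerated
assert not keep_rates_near_zero(6.0, 1.8)  # unemployment threshold crossed
```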

Also have a look at this post from earlier today by Yglesias, showing that the Japanese stock market has risen about 5.5% in the last two weeks, and about 2% in the two days since Abe’s election. Here is Yglesias’s chart showing the rise of the Nikkei over the past two weeks.

[Chart: Nikkei 225 over the past two weeks]

In addition, here is a news story from Bloomberg about rising yields on Japanese government bonds, which are now the highest since April.

Japan’s bonds declined, sending 20-year yields to an eight-month high, as demand ebbed at a sale of the securities and domestic shares climbed.

The sale of 1.2 trillion yen ($14.3 billion) of 20-year bonds had the lowest demand in four months. Yields on the benchmark 10-year note rose to a one-month high as Japan’s Nikkei 225 Stock Average reached the most since April amid signs U.S. budget talks are progressing.

Finally, another item from Yglesias, a nice little graph showing the continuing close relationship between the S&P 500 and inflation expectations as approximated by the breakeven TIPS spread on 10-year Treasuries, a relationship for which I have provided (in a paper available here) a theoretical explanation as well as statistical evidence that the relationship did not begin to be observed until approximately the spring of 2008 as the US economy, even before the Lehman debacle, began its steep contraction. Here’s the graph.

[Chart: S&P 500 vs. the 10-year breakeven TIPS spread]
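For readers unfamiliar with the measure in that chart: the breakeven TIPS spread is simply the nominal Treasury yield minus the TIPS yield of the same maturity, a rough market-implied forecast of average inflation over that horizon. A one-line sketch (the yields below are made up for illustration, not actual late-2012 data):

```python
def breakeven_inflation(nominal_yield_pct, tips_yield_pct):
    """Breakeven inflation rate: nominal Treasury yield minus the TIPS yield
    of the same maturity (both in percent), a rough market-implied forecast
    of average inflation over that horizon."""
    return nominal_yield_pct - tips_yield_pct

# Hypothetical yields, for illustration only:
spread = breakeven_inflation(1.80, 0.30)  # roughly 1.5 percentage points
```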

HT: Mark Thoma

UPDATE:  Added a link above to the blog post by Altig about what Bernanke meant when he announced a 2.5% inflation threshold.

The Prodigal Son Returns: The Wall Street Journal Editorial Page Rediscovers Root Canal Economics

Perhaps the most interesting and influential financial journalist of the 1970s was a guy by the name of Jude Wanniski, who was an editorial writer for the Wall Street Journal from 1972 to 1978. In the wake of the Watergate scandal, and the devastating losses suffered by the Republicans in the 1974 Congressional elections, many people thought that the Republican party might not survive. The GOP certainly did not seem to offer much hope for free-market conservatives and libertarians, Richard Nixon having imposed wage-and-price controls in 1971, with the help of John Connally and Arthur Burns and enthusiastic backing from almost all Republicans. After Nixon resigned, leadership of the party was transferred to his successor, Gerald Ford, a very nice and decent fellow, whose lack of ideological conviction was symbolized by his choice of Nelson Rockefeller, an interventionist, big-government Republican if there ever was one, to serve as his Vice-President.

In this very dispirited environment for conservatives, Jude Wanniski’s editorial pieces in the Wall Street Journal and his remarkable 1978 book The Way the World Works, in which he advocated cuts in income tax rates as a cure for economic stagnation, proved to be an elixir of life for demoralized Republicans and conservatives. Wanniski was especially hard on old-fashioned Republicans and conservatives for whom balancing the federal budget had become the be-all and end-all of economic policy, a doctrine that the Wall Street Journal itself had once espoused. A gifted phrase maker, Wanniski dubbed traditional Republican balanced-budget policy “root-canal economics.” Instead, Wanniski adopted the across-the-board income tax cuts proposed by John Kennedy in 1963, a proposal that conservative icon Barry Goldwater had steadfastly opposed, as his model for economic policy.

Wanniski quickly won over a rising star in the Republican party, former NFL quarterback Jack Kemp, to his way of thinking. Another acolyte was an ambitious young Georgian by the name of Newt Gingrich. In 1978, Kemp and Senator Bill Roth from Delaware (after whom the Roth IRA is named), co-authored a bill, without support from the Republican Congressional leadership, to cut income taxes across the board by 25%. Many Republicans running for Congress and the Senate in 1978 pledged to support what became known as the Kemp-Roth bill. An unexpectedly strong showing by Republicans supporting the Kemp-Roth bill in the 1978 elections encouraged Jack Kemp to consider running for President in 1980 on a platform of across-the-board tax cuts. However, when Ronald Reagan, nearly 70 years old, and widely thought, after unsuccessfully challenging Gerald Ford for the GOP nomination in 1976, to be past his prime, signed on as a supporter of the Kemp-Roth bill, Kemp bowed out of contention, endorsing Reagan for the nomination, and uniting conservatives behind the Gipper.

After his landslide victory in the 1980 election, Reagan, riding a crest of popularity, enhanced by an unsuccessful assassination attempt in the first few months of his term, was able to push the Kemp-Roth bill through Congress, despite warnings from the Democrats that steep tax cuts would cause large budget deficits. To such warnings, Jack Kemp famously responded that Republicans no longer worshiped at the altar of a balanced budget. No one cheered louder for that heretical statement by Kemp than, you guessed it, the Wall Street Journal editorial page.

Fast forward to 2012: the Wall Street Journal, which never fails to invoke the memory of Ronald Reagan whenever an opportunity arises, nevertheless seems to have rediscovered the charms of root-canal economics. How else can one explain this piece of sophistry from Robert L. Pollock, a member of the group of sages otherwise known as the Editorial Board of the Wall Street Journal? Consider what Mr. Pollock had to say in an opinion piece on the Journal‘s website.

[T]o the extent that the United States finds itself in a precarious fiscal situation, Federal Reserve Chairman Ben Bernanke shares much of the blame. Simply put, there is no way that Washington could have run the deficits it has in recent years without the active assistance of a near-zero interest rate policy. . . .

European governments finally decided to take cost-cutting steps when their borrowing costs went up. But Democrats and liberal economists use Mr. Bernanke’s low rates and willingness to buy government bonds as evidence that there’s no pressing problem here to be addressed.

This is a strange argument for high interest rates, especially coming from a self-avowed conservative. Conservatives got all bent out of shape when Obama’s Energy Secretary, Steven Chu, opined that rising gasoline prices might actually serve a useful function by inducing consumers and businesses to be more economical in their use of gasoline. That comment was seized on by Republicans as proof that the Obama administration was seeking to increase gasoline prices as a way of reducing gasoline consumption. Now, Mr. Pollock provides us with a new argument for high interest rates: by raising the cost of borrowing, high interest rates will force the government to be more economical in its spending decisions. Evidently, it’s wrong to suggest that an increased price will reduce gasoline consumption, but it’s fine to say that an increased interest rate will cut government spending. Go figure.

Well, here’s what Wanniski had to say about the Republican obsession with reducing government spending for its own sake:

It isn’t that Republicans don’t enjoy cutting taxes. They love it. But there is something in the Republican chemistry that causes the GOP to become hypnotized by the prospect of an imbalanced budget. Static analysis tells them taxes can’t be cut or inflation will result. They either argue for a tax hike to dampen inflation when the economy is in a boom or demand spending cuts to balance the budget when the economy is in recession.

Either way, of course, they embrace the role of Scrooge, playing into the hands of the Democrats, who know the first rule of successful politics is Never Shoot Santa Claus. The political tension in the market place of ideas must be between tax reduction and spending increases, and as long as Republicans have insisted on balanced budgets, their influence as a party has shriveled, and budgets have been imbalanced.

How’s that old root-canal economics working out for ya?

Now back to Pollock. Here’s how he explains why low interest rates may not really be helping the economy.

It would be one thing if there were widespread agreement that low rates are the right medicine for the economy. But easy money on the Bernanke scale is a heretofore untested policy, one for which the past few years of meager growth haven’t provided convincing evidence.

Fair enough. Low rates haven’t been helping the economy all that much. But the question arises: why are rates so low? Is it really all the Fed’s doing, or could it possibly have something to do with pessimism on the part of businesses and consumers about whether they will be able to sell their products or their services in the future? If it is the latter, then low interest rates may not be a symptom of easy money, but of tight money.

Pollock, of course, has a different explanation for why low interest rates are not promoting a recovery.

Economists such as David Malpass argue that low rates are actually contractionary because they cause capital to be diverted from more productive uses to less productive ones.

Oh my. What can one say about an argument like that? I have encountered Mr. Malpass before and was less than impressed by his powers of economic reasoning; I remain unimpressed. How can a low interest rate divert capital from more productive uses to less productive ones unless capital rationing is taking place? If some potential borrowers were unable to secure funding for their productive projects while other borrowers with less productive projects were able to get funding for theirs, the disappointed borrowers could have offered to borrow at increased interest rates, thereby outbidding borrowers with unproductive projects, and driving up interest rates in the process. That is just elementary. That interest rates are now at such low levels is more reflective of the pessimism of most potential borrowers about the projects for which they are seeking funding, than of the supposed power of the Fed to determine interest rates.

So there you have it. The Wall Street Journal editorial page, transformed in the 1970s by the daring and unorthodox ideas of a single, charismatic economic journalist, Jude Wanniski, has now, almost four decades later, finally come back to its roots.  Welcome home where you belong.

What Kind of Equilibrium Is This?

In my previous post, I suggested that Stephen Williamson’s views about the incapacity of monetary policy to reduce unemployment, and his fears that monetary expansion would simply lead to higher inflation and a repeat of the bad old days of the 1970s when inflation and unemployment spun out of control, follow from a theoretical presumption that the US economy is now operating (as it almost always does) in the neighborhood of equilibrium. This does not seem right to me, but it is the sort of deep theoretical assumption (e.g., like the rationality of economic agents) that is not subject to direct empirical testing. It is part of what the philosopher Imre Lakatos called the hard core of a (in this case Williamson’s) scientific research program. Whatever happens, Williamson will process the observed facts in terms of a theoretical paradigm in which prices adjust and markets clear. No other way of viewing reality makes sense, because Williamson cannot make any sense of it in terms of the theoretical paradigm or world view to which he is committed. I actually have some sympathy with that way of looking at the world, but not because I think it’s really true; it’s just the best paradigm we have at the moment. But I don’t want to follow that line of thought too far now; who knows, maybe another time.

A good illustration of how Williamson understands his paradigm was provided by blogger J. P. Koning in his comment on my previous post, copying the following quotation from a post written by Williamson on his blog a couple of years ago.

In other cases, as in the link you mention, there are people concerned about disequilibrium phenomena. These approaches are or were popular in Europe – I looked up Benassy and he is still hard at work. However, most of the mainstream – and here I’m including New Keynesians – sticks to equilibrium economics. New Keynesian models may have some stuck prices and wages, but those models don’t have to depart much from standard competitive equilibrium (or, if you like, competitive equilibrium with monopolistic competition). In those models, you have to determine what a firm with a stuck price produces, and that is where the big leap is. However, in terms of determining everything mathematically, it’s not a big deal. Equilibrium economics is hard enough as it is, without having to deal with the lack of discipline associated with “disequilibrium.” In equilibrium economics, particularly monetary equilibrium economics, we have all the equilibria (and more) we can handle, thanks.

I actually agree that departing from the assumption of equilibrium can involve a lack of discipline. Market clearing is a very powerful analytical tool, and to give it up without replacing it with an equally powerful analytical tool leaves us theoretically impoverished. But Williamson seems to suggest (or at least leaves it ambiguous) that there is only one kind of equilibrium that can be handled theoretically, namely a fully optimal general equilibrium with perfect foresight (i.e., rational expectations) or at least with a learning process leading toward rational expectations. But there are other equilibrium concepts that preserve market clearing without imposing, what seems to me, the unreasonable condition of rational expectations and (near) optimality.

In particular, there is the Hicksian concept of a temporary equilibrium (inspired by Hayek’s discussion of intertemporal equilibrium) which allows for inconsistent expectations by economic agents, but assumes market clearing based on supply and demand schedules reflecting those inconsistent expectations. Nearly 40 years ago, Earl Thompson was able to deploy that equilibrium concept to derive a sub-optimal temporary equilibrium with Keynesian unemployment and a role for countercyclical monetary policy in minimizing inefficient unemployment. I have summarized and discussed Thompson’s model in some previous posts (here, here, here, and here), and I hope to do a few more in the future. The model is hardly the last word, but it might at least serve as a starting point for thinking seriously about the possibility that not every state of the economy is an optimal equilibrium state, but without abandoning market clearing as an analytical tool.

Too Little, Too Late?

The FOMC, after over four years of overly tight monetary policy, seems to be feeling its way toward an easier policy stance. But will it do any good? Unfortunately, there is reason to doubt that it will. The FOMC statement pledges to continue purchasing $85 billion a month of Treasuries and mortgage-backed securities and to keep interest rates at current low levels until the unemployment rate falls below 6.5% or the inflation rate rises above 2.5%. In other words, the Fed is saying that it will tolerate an inflation rate only marginally higher than the current target for inflation before it begins applying the brakes to the expansion. Here is how the New York Times reported on the Fed announcement.

The Federal Reserve said Wednesday it planned to hold short-term interest rates near zero so long as the unemployment rate remains above 6.5 percent, reinforcing its commitment to improve labor market conditions.

The Fed also said that it would continue in the new year its monthly purchases of $85 billion in Treasury bonds and mortgage-backed securities, the second prong of its effort to accelerate economic growth by reducing borrowing costs.

But Fed officials still do not expect the unemployment rate to fall below the new target for at least three more years, according to forecasts also published Wednesday, and they chose not to expand the Fed’s stimulus campaign.

In fairness to the FOMC, the Fed, although technically independent, must operate within an implicit consensus on what kind of decisions it can take, its freedom of action thereby being circumscribed in the absence of a clear signal of support from the administration for a substantial departure from the terms of the implicit consensus. For the Fed to substantially raise its inflation target would risk a political backlash against it, and perhaps precipitate a deep internal split within the Fed’s leadership. At the depth of the financial crisis and in its immediate aftermath, perhaps Chairman Bernanke, if he had been so inclined, might have been able to effect a drastic change in monetary policy, but that window of opportunity closed quickly once the economy stopped contracting and began its painfully slow pseudo recovery.

As I have observed a number of times (here, here, and here), the paradigm for the kind of aggressive monetary easing that is now necessary is FDR’s unilateral decision to take the US off the gold standard in 1933. But FDR was a newly elected President with a massive electoral mandate, and he was making decisions in the midst of the worst economic crisis in modern times. Could an unelected technocrat (or a collection of unelected technocrats) take such actions on his (or their) own? From the get-go, the Obama administration showed no inclination to provide any significant input to the formulation of monetary policy, either out of an excess of scruples about Fed independence or out of a misguided belief that monetary policy was powerless to affect the economy when interest rates were close to zero.

Stephen Williamson, on his blog, consistently gives articulate expression to the doctrine of Fed powerlessness. In a post yesterday, correctly anticipating that the Fed would continue its program of buying mortgage backed securities and Treasuries, and would tie its policy to numerical triggers relating to unemployment, Williamson disdainfully voiced his skepticism that the Fed’s actions would have any positive effect on the real performance of the economy, while registering his doubts that the Fed would be any more successful in preventing inflation from getting out of hand while attempting to reduce unemployment than it was in the 1970s.

It seems to me that Williamson reaches this conclusion based on the following premises. The Fed has little or no control over interest rates or inflation, and the US economy is not far removed from its equilibrium growth path. But Williamson also believes that the Fed might be able to increase inflation, and that that would be a bad thing if the Fed were actually to do so.  The Fed can’t do any good, but it could do harm.

Williamson is fairly explicit in saying that he doubts the ability of positive QE to stimulate, and negative QE (which, I guess, might be called QT) to dampen real or nominal economic activity.

Short of a theory of QE – or more generally a serious theory of the term structure of interest rates – no one has a clue what the effects are, if any. Until someone suggests something better, the best guess is that QE is irrelevant. Any effects you think you are seeing are either coming from somewhere else, or have to do with what QE signals for the future policy rate. The good news is that, if it’s irrelevant, it doesn’t do any harm. But if the FOMC thinks it works when it doesn’t, that could be a problem, in that negative QE does not tighten, just as positive QE does not ease.

But Williamson seems a bit uncertain about the effects of “forward guidance,” i.e., the Fed’s commitment to keep interest rates low for an extended period of time, or until a trigger is pulled, e.g., unemployment falling below a specified level. This is where Williamson sees a real potential for mischief.

(1)To be well-understood, the triggers need to be specified in a very simple form. As such it seems as likely that the Fed will make a policy error if it commits to a trigger as if it commits to a calendar date. The unemployment rate seems as good a variable as any to capture what is going on in the real economy, but as such it’s pretty bad. It’s hardly a sufficient statistic for everything the Fed should be concerned with.

(2)This is a bad precedent to set, for two reasons. First, the Fed should not be setting numerical targets for anything related to the real side of the dual mandate. As is well-known, the effect of monetary policy on real economic activity is transient, and the transmission process poorly understood. It would be foolish to pretend that we know what the level of aggregate economic activity should be, or that the Fed knows how to get there. Second, once you convince people that triggers are a good idea in this “unusual” circumstance, those same people will wonder what makes other circumstances “normal.” Why not just write down a Taylor rule for the Fed, and send the FOMC home? Again, our knowledge of how the economy works, and what future contingencies await us, is so bad that it seems optimal, at least to me, that the Fed make it up as it goes along.

I agree that a fixed trigger is a very blunt instrument, and it is hard to know what level to set it at. In principle, it would be preferable if the trigger were not pulled automatically, but only as a result of some exercise of discretionary judgment on the part of the monetary authority; except that the exercise of discretion may undermine the expectational effect of setting a trigger. Williamson’s second objection strikes me as less persuasive than the first. It is at least misleading, and perhaps flatly wrong, to say that the effect of monetary policy on real economic activity is transient. The standard argument for the ineffectiveness of monetary policy involves an exercise in which the economy starts off at equilibrium. If you take such an economy and apply a monetary stimulus to it, there is a plausible (but not necessarily unexceptionable) argument that the long-run effect of the stimulus will be nil, and any transitory gain in output and employment may be offset (or outweighed) by a subsequent transitory loss. But if the initial position is out of equilibrium, I am unaware of any plausible, let alone compelling, argument that monetary stimulus would not be effective in hastening the adjustment toward equilibrium. In a trivial sense, the effect of monetary policy is transient inasmuch as the economy would eventually reach an equilibrium even without monetary stimulus. However, unlike the case in which monetary stimulus is applied to an economy in equilibrium, applying monetary policy to an economy out of equilibrium can produce short-run gains that aren’t wiped out by subsequent losses. I am not sure how to interpret the rest of Williamson’s criticism.
One might almost interpret him as saying that he would favor a policy of targeting nominal GDP (which bears a certain family resemblance to the Taylor rule), a policy that would also address some of the other concerns Williamson has about the Fed’s choice of triggers, except that Williamson is already on record in opposition to NGDP targeting.

In reply to a comment on this post, Williamson made the following illuminating observation:

Read James Tobin’s paper, “How Dead is Keynes?” referenced in my previous post. He was writing in June 1977. The unemployment rate is 7.2%, the cpi inflation rate is 6.7%, and he’s complaining because he thinks the unemployment rate is disastrously high. He wants more accommodation. Today, I think we understand the reasons that the unemployment rate was high at the time, and we certainly don’t think that monetary policy was too tight in mid-1977, particularly as inflation was about to take off into the double-digit range. Today, I don’t think the labor market conditions we are looking at are the result of sticky price/wage inefficiencies, or any other problem that monetary policy can correct.

The unemployment rate in 1977 was 7.2%, at least half a percentage point less than the current rate, and the CPI inflation rate was 6.7%, nearly five percentage points higher than the current rate. Just because Tobin was overly disposed toward monetary expansion in 1977, when unemployment was lower and inflation higher than they are now, it does not follow that monetary expansion now would be as misguided as it was in 1977. Williamson is convinced that the labor market is now roughly in equilibrium, so that monetary expansion would lead us away from, not toward, equilibrium. Perhaps it would, but most informed observers simply don’t share Williamson’s intuition that the current state of the economy is not that far from equilibrium. Unless you buy that far-from-self-evident premise, the case for monetary expansion is hard to dispute. Nevertheless, despite his current unhappiness, I am not so sure that Williamson will be as upset with the policy that the Fed actually implements as he now seems to think he will be. The Fed is moving in the right direction, but is only taking baby steps.

PS I see that Williamson has now posted his reaction to the Fed’s statement.  Evidently, he is not pleased.  Perhaps I will have something more to say about that tomorrow.

Those Dreaded Cantillon Effects

Once again, I find myself slightly behind the curve, with Scott Sumner (and again, and again, and again, and again), Nick Rowe and Bill Woolsey out there trying to face down an onslaught of Austrians rallying under the dreaded banner (I won’t say what color) of Cantillon Effects. At this point, the best I can do is some mopping up by making a few general observations about the traditional role of Cantillon Effects in Austrian business cycle theory and how that role squares with the recent clamor about Cantillon Effects.

Scott got things started, as he usually does, with a post challenging an Austrian claim that the Federal Reserve favors the rich because its injections of newly printed money enter the economy at “specific points,” thereby conferring unearned advantages on those lucky or well-connected few into whose hands those crisp new dollar bills hot off the printing press first arrive. The fortunate ones who get to spend the newly created money before the fresh new greenbacks have started on their inflationary journey through the economy are able to buy stuff at pre-inflation prices, while the poor suckers further down the chain of transactions triggered by the cash infusion must pay higher prices before receiving any of the increased spending. Scott’s challenge provoked a fierce Austrian counterattack from commenters on his blog and from not-so-fierce bloggers like Bob Murphy. As is often the case, the discussion (or the shouting) produced no clear outcome, each side confidently claiming vindication. Scott and Nick argued that any benefits conferred on first recipients of cash would be attributable to the fiscal impact of the Fed’s actions (e.g., purchasing treasury bonds with new money rather than helicopter distribution), with Murphy et al. arguing that distinctions between the fiscal and monetary effects of Fed operations are a dodge. No one will be surprised when I say that Scott and Nick got the better of the argument.

But there are a couple of further points that I would like to bring up about Cantillon effects. It seems to me that the reason Cantillon effects were thought to be of import by the early Austrian theorists like Hayek was that they had a systematic theory of the distribution or the incidence of those effects. Merely to point out that such effects exist and redound to the benefit of some lucky individuals would have been considered a rather trivial and pointless exercise by Hayek. Hayek went to great lengths in the 1930s to spell out a theory of how the creation of new money resulting in an increase in total expenditure would be associated with a systematic and (to the theorist) predictable change in relative prices between consumption goods and capital goods: a cheapening of consumption goods relative to capital goods causing a shift in the composition of output in favor of capital goods. Hayek then argued that such a shift in the composition of output, having been induced by a monetary expansion that could not (for reasons I have discussed in previous posts, e.g., here) be continued indefinitely, would eventually have to be reversed. This reversal was identified by Hayek with the upper turning point of the business cycle, because it would trigger a collapse of the capital-goods industries and a disruption of all the production processes dependent on a continued supply of those capital goods.

Hayek’s was an interesting theory, because it identified a particular consequence of monetary expansion for an important sector of the economy, providing an explanation of the economic mechanism and a prediction about the direction of change along with an explanation of why the initial change would eventually turn out to be unsustainable. The theory could be right or wrong, but it involved a pretty clear-cut set of empirical implications. But the point to bear in mind is that this went well beyond merely saying that in principle there would be some gainers and some losers as the process of monetary expansion unfolds.

What accounts for the difference between the empirically rich theory of systematic Cantillon Effects articulated by Hayek over 80 years ago and the empirically trivial version on which so much energy was expended over the past few days on the blogosphere? I think that the key difference is that in Hayek’s cycle theory, it is the banks that are assumed somehow or other to set an interest rate at which they are willing to lend, and this interest rate may or may not be consistent with the constant volume of expenditure that Hayek thought (albeit with many qualifications) was the ideal criterion of the neutral monetary policy which he favored. A central bank might or might not be involved in the process of setting the bank rate, but the instrument of monetary policy was (depending on circumstances) the lending rate of the banks, or, alternatively, the rate at which the central bank was willing to lend to banks by rediscounting the assets acquired by banks in lending to their borrowers.

The way Hayek’s theory works is through an unobservable natural interest rate that would, if it were chosen by the banks, generate a constant rate of total spending. There is, however, no market mechanism guaranteeing that the lending rate selected by the banks (with or without the involvement of a central bank) coincides with the ideal but unobservable natural rate. Deviations of the banks’ lending rate from the natural rate cause Cantillon Effects involving relative-price distortions, thereby misdirecting resources from capital-goods industries to consumption-goods industries, or vice versa. But the specific Cantillon effect associated with Hayek’s theory presumes that the banking system has the power to determine the interest rates at which borrowing and lending take place for the entire economy. This presumption is nowhere to my knowledge justified, and it does not seem to me that the presumption is even remotely justifiable unless one accepts the very narrow theory of interest known as the loanable-funds theory. According to the loanable-funds theory, the rate of interest is that rate which equates the demand for funds to be borrowed with the supply of funds available to be lent. One can instead view the rate of interest (in the sense of the entire term structure of interest rates) as being determined in the process by which the entire existing stock of capital assets is valued (i.e., the price for each asset at which it would be willingly held by just one economic agent), those valuations being mutually consistent only when the expected net cash flows attached to each asset are discounted at the equilibrium term structure with the equilibrium risk premia. Given that comprehensive view of asset valuations and interest-rate determination, the notion that banks (with or without a central bank) have any substantial discretion in choosing interest rates is hard to take seriously.
And to the extent that banks have any discretion over lending rates, it is concentrated at the very short end of the term structure. I really can’t tell what she meant, but it is at least possible that Joan Robinson was alluding to this idea when, in her own uniquely charming way, she criticized Hayek’s argument in Prices and Production.

I very well remember Hayek’s visit to Cambridge on his way to the London School. He expounded his theory and covered a black board with his triangles. The whole argument, as we could see later, consisted in confusing the current rate of investment with the total stock of capital goods, but we could not make it out at the time. The general tendency seemed to be to show that the slump was caused by [excessive] consumption. R. F. Kahn, who was at that time involved in explaining that the multiplier guaranteed that saving equals investment, asked in a puzzled tone, “Is it your view that if I went out tomorrow and bought a new overcoat, that would increase unemployment?” “Yes,” said Hayek, “but,” pointing to his triangles on the board, “it would take a very long mathematical argument to explain why.”

At any rate, if interest rates are determined comprehensively in all the related markets for existing stocks of physical assets, not in flow markets for current borrowing and lending, Hayek’s notion that the banking system can cause significant Cantillon effects via its control over interest rates is hard to credit. There is perhaps some room to alter very short-term rates, but longer-term rates seem impervious to manipulation by the banking system except insofar as inflation expectations respond to the actions of the banking system. But how does one derive a Cantillon Effect from a change in expected inflation?  Cantillon Effects may or may not exist, but unless they are systematic, predictable, and unsustainable, they have little relevance to the study of business cycles.
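The valuation argument above can be illustrated with a toy discounting exercise (all numbers invented): if an asset’s price is its expected cash flows discounted at the term structure, then a change confined to the short end of the curve has only a marginal effect on the value of a long-lived asset.

```python
# Toy illustration of the asset-valuation view of interest rates.
# Assumption (hypothetical): a flat 5% term structure, a 10-year asset
# paying 100 per year, and a banking system able to move only the
# one-year rate.

def present_value(cash_flows, spot_rates):
    """Discount cash_flows[t] (paid at date t+1) at spot rate spot_rates[t]."""
    return sum(cf / (1 + r) ** (t + 1)
               for t, (cf, r) in enumerate(zip(cash_flows, spot_rates)))

flows = [100.0] * 10
base_curve = [0.05] * 10
base_price = present_value(flows, base_curve)

# Push the one-year rate down 200 basis points while the rest of the
# curve, pinned down by asset valuations, stays put.
shifted_curve = [0.03] + [0.05] * 9
shifted_price = present_value(flows, shifted_curve)

print(round(base_price, 2), round(shifted_price, 2))
# The price moves by only a couple of units out of several hundred:
# discretion over the short rate buys very little leverage over the
# valuation of long-lived assets.
```

Nothing here is a model of the economy; it is just the arithmetic behind the claim that control over the very short end of the term structure does not translate into control over interest rates in the comprehensive sense relevant to Hayek’s mechanism.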


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey’s unduly neglected contributions to the attention of a wider audience.

My new book Studies in the History of Monetary Theory: Controversies and Clarifications has been published by Palgrave Macmillan.

Follow me on Twitter @david_glasner
