Posts Tagged 'Noah Smith'

Graeber Against Economics

David Graeber’s vitriolic essay “Against Economics” in the New York Review of Books has generated responses from Noah Smith and Scott Sumner among others. I don’t disagree with much that Noah or Scott have to say, but I want to dig a little deeper than they did into some of Graeber’s arguments, because even though I think he is badly misinformed on many if not most of the subjects he writes about, I actually have some sympathy for his dissatisfaction with the current state of economics. Graeber wastes no time on pleasantries.

There is a growing feeling, among those who have the responsibility of managing large economies, that the discipline of economics is no longer fit for purpose. It is beginning to look like a science designed to solve problems that no longer exist.

A serious polemicist should avoid blatant mischaracterizations, exaggerations and cheap shots, and should be well-grounded in the object of his critique, thereby avoiding criticisms that undermine his own claims to expertise. I grant that Graeber has some valid criticisms to make, even agreeing with him, at least in part, on some of them. But his indiscriminate attacks on, and caricatures of, all neoclassical economics betray a superficial understanding of that discipline.

Graeber begins by attacking what he considers the misguided and obsessive focus on inflation by economists.

A good example is the obsession with inflation. Economists still teach their students that the primary economic role of government—many would insist, its only really proper economic role—is to guarantee price stability. We must be constantly vigilant over the dangers of inflation. For governments to simply print money is therefore inherently sinful.

Every currency unit, or banknote issued by a central bank, now in circulation, as Graeber must know, has been “printed.” So to say that economists consider it sinful for governments to print money is either a deliberate falsehood, or an emotional rhetorical outburst, as Graeber immediately, and apparently unwittingly, acknowledges!

If, however, inflation is kept at bay through the coordinated action of government and central bankers, the market should find its “natural rate of unemployment,” and investors, taking advantage of clear price signals, should be able to ensure healthy growth. These assumptions came with the monetarism of the 1980s, the idea that government should restrict itself to managing the money supply, and by the 1990s had come to be accepted as such elementary common sense that pretty much all political debate had to set out from a ritual acknowledgment of the perils of government spending. This continues to be the case, despite the fact that, since the 2008 recession, central banks have been printing money frantically [my emphasis] in an attempt to create inflation and compel the rich to do something useful with their money, and have been largely unsuccessful in both endeavors.

Graeber’s use of the ambiguous pronoun “this” beginning the last sentence of the paragraph betrays his own confusion about what he is saying. Central banks are printing money and attempting to “create” inflation while supposedly still believing that inflation is a menace and printing money is a sin. Go figure.

We now live in a different economic universe than we did before the crash. Falling unemployment no longer drives up wages. Printing money does not cause inflation. Yet the language of public debate, and the wisdom conveyed in economic textbooks, remain almost entirely unchanged.

Again showing an inadequate understanding of basic economic theory, Graeber suggests that, in theory if not practice, falling unemployment should cause wages to rise. The Phillips Curve, upon which Graeber’s suggestion relies, represents the empirically observed negative correlation between the rate of average wage increase and the rate of unemployment. But correlation does not imply causation, so there is no basis in economic theory to assert that falling unemployment causes the rate of increase in wages to accelerate. That the empirical correlation between unemployment and wage increases has not recently been in evidence provides no compelling reason for changing textbook theory.

From this largely unfounded attack on economic theory – a theory which I myself consider, in many respects, inadequate and unreliable – Graeber launches a bitter diatribe against the supposed hegemony of economists over policy-making.

Mainstream economists nowadays might not be particularly good at predicting financial crashes, facilitating general prosperity, or coming up with models for preventing climate change, but when it comes to establishing themselves in positions of intellectual authority, unaffected by such failings, their success is unparalleled. One would have to look at the history of religions to find anything like it.

The ability to predict financial crises would be desirable, but that cannot be the sole criterion for whether economics has advanced our understanding of how economic activity is organized or what effects policy changes have. (I note parenthetically that many economists defensively reject the notion that economic crises are predictable on the grounds that if economists could predict a future economic crisis, those predictions would be immediately self-fulfilling. This response, of course, effectively disproves the idea that economists could predict that an economic crisis would occur in the way that astronomers predict solar eclipses. But this response slays a strawman. The issue is not whether economists can predict future crises, but whether they can identify conditions indicating an increased likelihood of a crisis and suggest precautionary measures to reduce the likelihood that a potential crisis will occur. But Graeber seems uninterested in or incapable of engaging the question at even this moderate level of subtlety.)

In general, I doubt that economists can make more than a modest contribution to improved policy-making, and the best that one can hope for is probably that they steer us away from the worst potential decisions rather than identifying the best ones. But no one, as far as I know, has yet been burned at the stake by a tribunal of economists.

To this day, economics continues to be taught not as a story of arguments—not, like any other social science, as a welter of often warring theoretical perspectives—but rather as something more like physics, the gradual realization of universal, unimpeachable mathematical truths. “Heterodox” theories of economics do, of course, exist (institutionalist, Marxist, feminist, “Austrian,” post-Keynesian…), but their exponents have been almost completely locked out of what are considered “serious” departments, and even outright rebellions by economics students (from the post-autistic economics movement in France to post-crash economics in Britain) have largely failed to force them into the core curriculum.

I am now happy to register agreement with something that Graeber says. Economists in general have become overly attached to axiomatic and formalistic mathematical models that create a false and misleading impression of rigor and mathematical certainty. In saying this, I don’t dispute that mathematical modeling is an important part of much economic theorizing, but it should not exclude other approaches to economic analysis and discourse.

As a result, heterodox economists continue to be treated as just a step or two away from crackpots, despite the fact that they often have a much better record of predicting real-world economic events. What’s more, the basic psychological assumptions on which mainstream (neoclassical) economics is based—though they have long since been disproved by actual psychologists—have colonized the rest of the academy, and have had a profound impact on popular understandings of the world.

That heterodox economists have a better record of predicting economic events than mainstream economists is an assertion for which Graeber offers no evidence or examples. I would not be surprised if he could cite examples, but one would have to weigh the evidence surrounding those examples before concluding that predictions by heterodox economists were more accurate than those of their mainstream counterparts.

Graeber returns to the topic of monetary theory, which seems a particular bugaboo of his. Taking the extreme liberty of holding up Mrs. Theresa May as a spokesperson for orthodox economics, he focuses on her definitive 2017 statement that there is no magic money tree.

The truly extraordinary thing about May’s phrase is that it isn’t true. There are plenty of magic money trees in Britain, as there are in any developed economy. They are called “banks.” Since modern money is simply credit, banks can and do create money literally out of nothing, simply by making loans. Almost all of the money circulating in Britain at the moment is bank-created in this way.

What Graeber chooses to ignore is that banks do not operate magically; they make loans and create deposits in seeking to earn profits. Whether their decisions are good or bad is debatable, but the debate isn’t about a magical process; it’s a debate about theory and evidence. Graeber describes how he thinks economists think about the way banks create money, correctly observing that there is a debate about how that process works, but without understanding the differences between the competing views or their significance.

Economists, for obvious reasons, can’t be completely oblivious to the role of banks, but they have spent much of the twentieth century arguing about what actually happens when someone applies for a loan. One school insists that banks transfer existing funds from their reserves, another that they produce new money, but only on the basis of a multiplier effect . . . Only a minority—mostly heterodox economists, post-Keynesians, and modern money theorists—uphold what is called the “credit creation theory of banking”: that bankers simply wave a magic wand and make the money appear, secure in the confidence that even if they hand a client a credit for $1 million, ultimately the recipient will put it back in the bank again, so that, across the system as a whole, credits and debts will cancel out. Rather than loans being based in deposits, in this view, deposits themselves were the result of loans.

The one thing it never seemed to occur to anyone to do was to get a job at a bank, and find out what actually happens when someone asks to borrow money. In 2014 a German economist named Richard Werner did exactly that, and discovered that, in fact, loan officers do not check their existing funds, reserves, or anything else. They simply create money out of thin air, or, as he preferred to put it, “fairy dust.”

Graeber is right that economists differ in how they understand banking. But the simple transfer-of-funds view, a product of the eighteenth century, was gradually rejected over the course of the nineteenth century; the money-multiplier view that largely superseded it enjoyed a half-century or more of dominance as a theory of banking and still remains a popular way for introductory textbooks to explain how banking works, though it would be better if it were decently buried and forgotten. But since James Tobin’s classic essay “Commercial banks as creators of money” was published in 1963, most economists who have thought carefully about banking have concluded that the amount of deposits created by banks corresponds to the quantity of deposits that the public, given their expectations about the future course of the economy and the future course of prices, chooses to hold. The important point is that while a bank can create deposits without incurring more than the negligible cost of making a book-keeping, or an electronic, entry in a customer’s account, the creation of a deposit typically obliges the bank to hold either reserves in its account with the Fed or some amount of Treasury instruments convertible, on very short notice, into reserves at the Fed.
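For readers who have never encountered it, the multiplier arithmetic that I would like to see buried is easy to sketch (a stylized illustration only; the 10% reserve ratio and the $100 reserve injection are made-up numbers):

```python
# The textbook money-multiplier story: every bank lends out all of its
# excess reserves, and every loan is redeposited somewhere in the banking
# system. A sketch of the view criticized above, not an endorsement of it.

def multiplier_deposits(initial_reserves, reserve_ratio, rounds=1000):
    """Total deposits generated by mechanical re-lending of excess reserves."""
    total, new_deposit = 0.0, initial_reserves
    for _ in range(rounds):
        total += new_deposit
        new_deposit *= 1 - reserve_ratio  # the non-reserved share is re-lent and redeposited
    return total

# With $100 of new reserves and a 10% reserve ratio, the geometric series
# converges to deposits = reserves / reserve_ratio = $1,000.
print(round(multiplier_deposits(100.0, 0.10), 2))
```

Tobin’s objection, restated in these terms, is that the mechanical redeposit assumption does all the work: it is the public’s willingness to hold deposits, not the re-lending mechanics, that determines where the process settles.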

Graeber seems to think that there is something fundamental at stake for the whole of macroeconomics in the question whether deposits create loans or loans create deposits. I agree that it’s an important question, but it is not as significant as Graeber believes. But aside from that nuance, what’s remarkable is that Graeber actually acknowledges that the weight of professional opinion is on the side that says that loans create deposits. He thus triumphantly cites a report by Bank of England economists that correctly explained that banks create money, and do so in the normal course of business by making loans.

Before long, the Bank of England . . . rolled out an elaborate official report called “Money Creation in the Modern Economy,” replete with videos and animations, making the same point: existing economics textbooks, and particularly the reigning monetarist orthodoxy, are wrong. The heterodox economists are right. Private banks create money. Central banks like the Bank of England create money as well, but monetarists are entirely wrong to insist that their proper function is to control the money supply.

Graeber, I regret to say, is simply exposing the inadequacy of his knowledge of the history of economics. Adam Smith in The Wealth of Nations explained that banks create money and, in doing so, save the resources that would otherwise be wasted on acquiring additional gold and silver. Subsequent economists from Henry Thornton through David Ricardo, J. S. Mill and R. G. Hawtrey were perfectly aware that banks can supply money — either banknotes or deposits — at less than the cost of mining and minting new coins, as they extend their credit in making loans to borrowers. So what is at issue, Graeber to the contrary notwithstanding, is not a dispute between orthodoxy and heterodoxy.

In fact, central banks do not in any sense control the money supply; their main function is to set the interest rate—to determine how much private banks can charge for the money they create.

Central banks set a rental price for reserves, thereby controlling the quantity of reserves, into which bank deposits are convertible, that is available to the economy. One way to think about that quantity is that, together with the aggregate demand to hold reserves, it determines the exchange value of reserves and hence the price level; another is that the interest rate, or the implied policy stance of the central bank, helps to determine the public’s expectations about the future course of the price level, and it is those expectations that determine, within some margin of error, what the future course of the price level will turn out to be.

Almost all public debate on these subjects is therefore based on false premises. For example, if what the Bank of England was saying were true, government borrowing didn’t divert funds from the private sector; it created entirely new money that had not existed before.

This is just silly. Funds may or may not be diverted from the private sector, but the total resources available to society are finite. If the central bank creates additional money, it creates additional claims to those resources, and the creation of additional claims necessarily has an effect on the prices of inputs and of outputs.

One might have imagined that such an admission would create something of a splash, and in certain restricted circles, it did. Central banks in Norway, Switzerland, and Germany quickly put out similar papers. Back in the UK, the immediate media response was simply silence. The Bank of England report has never, to my knowledge, been so much as mentioned on the BBC or any other TV news outlet. Newspaper columnists continued to write as if monetarism was self-evidently correct. Politicians continued to be grilled about where they would find the cash for social programs. It was as if a kind of entente cordiale had been established, in which the technocrats would be allowed to live in one theoretical universe, while politicians and news commentators would continue to exist in an entirely different one.

Even if we stipulate that this characterization of what the BBC and newspaper columnists believe is correct, what we would have — at best — is a commentary on the ability of economists to communicate their understanding of how the economy works to the intelligentsia that communicates to ordinary citizens. It is not in and of itself a commentary on the state of economic knowledge, inasmuch as Graeber himself concedes that most economists don’t accept monetarism. And that has been the case, as Noah Smith pointed out in his Bloomberg column on Graeber, since the early 1980s when the Monetarist experiment in trying to conduct monetary policy by controlling the monetary aggregates proved entirely unworkable and had to be abandoned as it was on the verge of precipitating a financial crisis.

Only after this long warmup decrying the sorry state of contemporary economic theory does Graeber begin discussing the book under review, Robert Skidelsky’s Money and Government.

What [Skidelsky] reveals is an endless war between two broad theoretical perspectives. . . The crux of the argument always seems to turn on the nature of money. Is money best conceived of as a physical commodity, a precious substance used to facilitate exchange, or is it better to see money primarily as a credit, a bookkeeping method or circulating IOU—in any case, a social arrangement? This is an argument that has been going on in some form for thousands of years. What we call “money” is always a mixture of both, and, as I myself noted in Debt (2011), the center of gravity between the two tends to shift back and forth over time. . . . One important theoretical innovation that these new bullion-based theories of money allowed was, as Skidelsky notes, what has come to be called the quantity theory of money (usually referred to in textbooks—since economists take endless delight in abbreviations—as QTM).

But these two perspectives are not mutually exclusive, and, depending on time, place, circumstances, and the particular problem that is the focus of attention, either of the two may be the appropriate paradigm for analysis.

The QTM argument was first put forward by a French lawyer named Jean Bodin, during a debate over the cause of the sharp, destabilizing price inflation that immediately followed the Iberian conquest of the Americas. Bodin argued that the inflation was a simple matter of supply and demand: the enormous influx of gold and silver from the Spanish colonies was cheapening the value of money in Europe. The basic principle would no doubt have seemed a matter of common sense to anyone with experience of commerce at the time, but it turns out to have been based on a series of false assumptions. For one thing, most of the gold and silver extracted from Mexico and Peru did not end up in Europe at all, and certainly wasn’t coined into money. Most of it was transported directly to China and India (to buy spices, silks, calicoes, and other “oriental luxuries”), and insofar as it had inflationary effects back home, it was on the basis of speculative bonds of one sort or another. This almost always turns out to be true when QTM is applied: it seems self-evident, but only if you leave most of the critical factors out.

In the case of the sixteenth-century price inflation, for instance, once one takes account of credit, hoarding, and speculation—not to mention increased rates of economic activity, investment in new technology, and wage levels (which, in turn, have a lot to do with the relative power of workers and employers, creditors and debtors)—it becomes impossible to say for certain which is the deciding factor: whether the money supply drives prices, or prices drive the money supply.

As a matter of logic, if the value of money depends on the precious metals (gold or silver) from which coins are minted, the value of money is necessarily affected by a change in the value of the metals used to coin money. A large increase in the stock of gold and silver, as Graeber concedes, must reduce the value of those metals, so the subsequent inflation is attributable, at least in part, to the gold and silver discoveries, even if the newly mined gold and silver was shipped mainly to privately held Indian and Chinese hoards rather than minted into new coins. An exogenous increase in prices may well have caused the quantity of credit money to increase, but that is analytically distinct from the inflationary effect of a reduced value of gold or silver when, as was the case in the sixteenth century, money is legally defined as a specific weight of gold or silver.

Technically, this comes down to a choice between what are called exogenous and endogenous theories of money. Should money be treated as an outside factor, like all those Spanish dubloons supposedly sweeping into Antwerp, Dublin, and Genoa in the days of Philip II, or should it be imagined primarily as a product of economic activity itself, mined, minted, and put into circulation, or more often, created as credit instruments such as loans, in order to meet a demand—which would, of course, mean that the roots of inflation lie elsewhere?

There is no such choice, because any theory must posit certain initial conditions and definitions, which are given or exogenous to the analysis. How the theory is framed and which variables are treated as exogenous and which are treated as endogenous is a matter of judgment in light of the problem and the circumstances. Graeber is certainly correct that, in any realistic model, the quantity of money is endogenously, not exogenously, determined, but that doesn’t mean that the value of gold and silver may not usefully be treated as exogenous in a system in which money is defined as a weight of gold or silver.

To put it bluntly: QTM is obviously wrong. Doubling the amount of gold in a country will have no effect on the price of cheese if you give all the gold to rich people and they just bury it in their yards, or use it to make gold-plated submarines (this is, incidentally, why quantitative easing, the strategy of buying long-term government bonds to put money into circulation, did not work either). What actually matters is spending.

Graeber is talking in circles, failing to distinguish between the quantity theory of money – a theory about the value of a pure medium of exchange with no use except to be received in exchange – and a theory of the real value of gold and silver when money is defined as a weight of gold or silver. The value of gold (or silver) in monetary uses must be roughly equal to its value in non-monetary uses, which is determined by the total stock of gold and the demand to hold gold or to use it in coinage or for other purposes (e.g., jewelry and ornamentation). An increase in the stock of gold relative to demand must reduce its value. That relationship between price and quantity is not the same as the QTM. The quantity of a metallic money will increase as its value in non-monetary uses declines. If there were literally an unlimited demand for newly mined gold to be immediately sent unused into hoards, Graeber’s argument would be correct. But the fact that much of the newly mined gold initially went into hoards does not mean that all of the newly mined gold went into hoards.

In sum, Graeber is confused between the quantity theory of money and a theory of a commodity money used both as money and as a real commodity. The quantity theory of money of a pure medium of exchange posits that changes in the quantity of money cause proportionate changes in the price level. Changes in the quantity of a real commodity also used as money have nothing to do with the quantity theory of money.
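For reference, the quantity theory that Graeber is conflating with a theory of commodity money can be stated in one line via the familiar equation of exchange (standard textbook notation, supplied here for clarity):

```latex
MV = PT \qquad\Longrightarrow\qquad \frac{\Delta P}{P} = \frac{\Delta M}{M}
\quad \text{(holding velocity } V \text{ and transactions } T \text{ constant)}
```

Nothing in that proportionality between the quantity of money M and the price level P describes how the real value of a metal used both as money and as an ordinary commodity is determined.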

Relying on a dubious account by Skidelsky of the history of monetary theory, Graeber blames the obsession of economists with the quantity theory for repeated monetary disturbances, starting with the late-seventeenth-century deflation in Britain, when silver appreciated relative to gold, causing prices measured in silver to fall. Graeber thus fails to see that, under a metallic money, real disturbances do have repercussions on the level of prices, repercussions having nothing to do with an exogenous prior change in the quantity of money.

According to Skidelsky, the pattern was to repeat itself again and again, in 1797, the 1840s, the 1890s, and, ultimately, the late 1970s and early 1980s, with Thatcher and Reagan’s (in each case brief) adoption of monetarism. Always we see the same sequence of events:

(1) The government adopts hard-money policies as a matter of principle.

(2) Disaster ensues.

(3) The government quietly abandons hard-money policies.

(4) The economy recovers.

(5) Hard-money philosophy nonetheless becomes, or is reinforced as, simple universal common sense.

There is so much indiscriminate generalization here that it is hard to know what to make of it. But the conduct of monetary policy has always been fraught, and learning has been slow and painful. We can and must learn to do better, but blanket condemnations of economics are unlikely to lead to better outcomes.

How was it possible to justify such a remarkable string of failures? Here a lot of the blame, according to Skidelsky, can be laid at the feet of the Scottish philosopher David Hume. An early advocate of QTM, Hume was also the first to introduce the notion that short-term shocks—such as Locke produced—would create long-term benefits if they had the effect of unleashing the self-regulating powers of the market:

Actually, I agree that Hume, as great and insightful a philosopher and as sophisticated an economic observer as he was, was an unreliable monetary theorist. And one of the reasons he was led astray was his unwarranted attachment to the quantity theory of money, an attachment not shared by his close friend Adam Smith.

Ever since Hume, economists have distinguished between the short-run and the long-run effects of economic change, including the effects of policy interventions. The distinction has served to protect the theory of equilibrium, by enabling it to be stated in a form which took some account of reality. In economics, the short-run now typically stands for the period during which a market (or an economy of markets) temporarily deviates from its long-term equilibrium position under the impact of some “shock,” like a pendulum temporarily dislodged from a position of rest. This way of thinking suggests that governments should leave it to markets to discover their natural equilibrium positions. Government interventions to “correct” deviations will only add extra layers of delusion to the original one.

I also agree that focusing on long-run equilibrium without regard to short-run fluctuations can lead to terrible macroeconomic outcomes, but that doesn’t mean that long-run effects are never of concern and may be safely disregarded. But just as current suffering must not be disregarded when pursuing vague and uncertain long-term benefits, ephemeral transitory benefits shouldn’t obscure serious long-term consequences. Weighing such alternatives isn’t easy, but nothing is gained by denying that the alternatives exist. Making those difficult choices is inherent in policy-making, whether macroeconomic or climate policy-making.

Although Graeber takes a valid point – that a supposed tendency toward an optimal long-run equilibrium does not justify disregard of an acute short-term problem – to an extreme, his criticism of the New Classical approach to policy-making that replaced the flawed mainstream Keynesian macroeconomics of the late 1970s is worth listening to. The New Classical approach self-consciously rejected, owing to a supposed time-inconsistency paradox, any policy aimed at short-run considerations, and it was based almost entirely on the logic of general-equilibrium theory and on an illegitimate methodological argument rejecting, as unscientific and unworthy of serious consideration in the brave New Classical world of scientific macroeconomics, all macroeconomic theories not rigorously deduced from the unarguable axiom of optimizing behavior by rational agents (and therefore not, in the official jargon, microfounded).

It’s difficult for outsiders to see what was really at stake here, because the argument has come to be recounted as a technical dispute between the roles of micro- and macroeconomics. Keynesians insisted that the former is appropriate to studying the behavior of individual households or firms, trying to optimize their advantage in the marketplace, but that as soon as one begins to look at national economies, one is moving to an entirely different level of complexity, where different sorts of laws apply. Just as it is impossible to understand the mating habits of an aardvark by analyzing all the chemical reactions in their cells, so patterns of trade, investment, or the fluctuations of interest or employment rates were not simply the aggregate of all the microtransactions that seemed to make them up. The patterns had, as philosophers of science would put it, “emergent properties.” Obviously, it was necessary to understand the micro level (just as it was necessary to understand the chemicals that made up the aardvark) to have any chance of understanding the macro, but that was not, in itself, enough.

As an aside, it’s worth noting that the denial or disregard of the possibility of any emergent properties by New Classical economists (of which what came to be known as New Keynesian economics is really a mildly schismatic offshoot) is nicely illustrated by the un-self-conscious alacrity with which the representative-agent approach was adopted as a modeling strategy in the first few generations of New Classical models. That New Classical theorists now insist that representative agency is not essential to New Classical modeling is true, but the methodologically reductive nature of New Classical macroeconomics, in which all macroeconomic theories must be derived under the axiom of individually maximizing behavior except insofar as specific “frictions” are introduced by explicit assumption, is essential. (See here, here, and here)

The counterrevolutionaries, starting with Keynes’s old rival Friedrich Hayek . . . took aim directly at this notion that national economies are anything more than the sum of their parts. Politically, Skidelsky notes, this was due to a hostility to the very idea of statecraft (and, in a broader sense, of any collective good). National economies could indeed be reduced to the aggregate effect of millions of individual decisions, and, therefore, every element of macroeconomics had to be systematically “micro-founded.”

Hayek’s role in the microfoundations movement is important, but his position was more sophisticated and less methodologically doctrinaire than that of the New Classical macroeconomists, if for no other reason than that Hayek didn’t believe that macroeconomics should, or could, be derived from general-equilibrium theory. His criticism of Keynesian macroeconomics for being insufficiently grounded in microeconomic principles, like that of economists such as Clower and Leijonhufvud, was aimed at finding microeconomic arguments that could explain, embellish and modify the propositions of Keynesian macroeconomic theory. That is the sort of scientific – not methodological – reductivism that Hayek’s friend Karl Popper advocated: the theoretical and empirical challenge of reducing a higher-level theory to more fundamental foundations, as when physicists and chemists search for theoretical breakthroughs that allow the propositions of chemistry to be reduced to more fundamental propositions of physics. The attempt to reduce chemistry to underlying physical principles is very different from a methodological rejection of all chemistry that cannot be derived from underlying deep physical theories.

There is probably more than a grain of truth in Graeber’s belief that there was a political and ideological subtext in the demand for microfoundations by New Classical macroeconomists, but the success of the microfoundations program was also the result of philosophically unsophisticated methodological error. How to apportion the share of blame going to mistaken methodology, professional and academic opportunism, and a hidden political agenda is a question worthy of further investigation. The easy part is to identify the mistaken methodology, which Graeber does. As for the rest, Graeber simply asserts bad faith, but with little evidence.

In Graeber’s comprehensive condemnation of modern economics, the efficient market hypothesis, being closely related to the rational-expectations hypothesis so central to New Classical economics, is not spared either. Here again, though I share and sympathize with his disdain for EMH, Graeber can’t resist exaggeration.

In other words, we were obliged to pretend that markets could not, by definition, be wrong—if in the 1980s the land on which the Imperial compound in Tokyo was built, for example, was valued higher than that of all the land in New York City, then that would have to be because that was what it was actually worth. If there are deviations, they are purely random, “stochastic” and therefore unpredictable, temporary, and, ultimately, insignificant.

Of course, no one is obliged to pretend that markets could not be wrong — and certainly not by a definition. The EMH simply asserts that the price of an asset reflects all the publicly available information. But what EMH asserts is certainly not true in many or even most cases, because people with non-public information (or with superior capacity to process public information) may affect asset prices, and such people may profit at the expense of those less knowledgeable or less competent in anticipating price changes. Moreover, those advantages may result from (largely wasted) resources devoted to acquiring and processing information, and it is those people who make fortunes betting on the future course of asset prices.

Graeber then quotes Skidelsky approvingly:

There is a paradox here. On the one hand, the theory says that there is no point in trying to profit from speculation, because shares are always correctly priced and their movements cannot be predicted. But on the other hand, if investors did not try to profit, the market would not be efficient because there would be no self-correcting mechanism. . .

Secondly, if shares are always correctly priced, bubbles and crises cannot be generated by the market….

This attitude leached into policy: “government officials, starting with [Fed Chairman] Alan Greenspan, were unwilling to burst the bubble precisely because they were unwilling to even judge that it was a bubble.” The EMH made the identification of bubbles impossible because it ruled them out a priori.

So the apparent paradox that concerns Skidelsky and Graeber dissolves upon (only a modest amount of) further reflection: prices track available information only because informed traders profit by correcting deviations from it, so the tracking is approximate rather than exact. A properly understood and revised EMH thus makes it clear that bubbles can occur. But that doesn’t mean that bursting bubbles is a job that can be safely delegated to any agency, including the Fed.

Moreover, the housing bubble peaked in early 2006, two and a half years before the financial crisis in September 2008. The financial crisis was not unrelated to the housing bubble, which undoubtedly added to the fragility of the financial system and its vulnerability to macroeconomic shocks, but the main cause of the crisis was a Fed policy unnecessarily focused on a temporary blip in commodity prices, a focus that persuaded the Fed not to loosen policy in 2008 during a worsening recession. That scenario was similar to the one in 1929, when concern about an apparent stock-market bubble caused the Fed to repeatedly tighten money, raising interest rates, thereby causing a downturn and a crash of asset prices that triggered the Great Depression.

Graeber and Skidelsky correctly identify some of the problems besetting macroeconomics, but their indiscriminate attack on all economic theory is unlikely to improve the situation. A pity, because a more focused and sophisticated critique of economics than the one they have served up has never been more urgently needed than it is now to enable economists to perform the modest service to mankind of which they might be capable.

Neo-Fisherism and All That

A few weeks ago Michael Woodford and his Columbia colleague Mariana Garcia-Schmidt made an initial response to the Neo-Fisherian argument advanced by, among others, John Cochrane and Stephen Williamson that a central bank can achieve its inflation target by pegging its interest-rate instrument at a rate such that if the expected inflation rate is the inflation rate targeted by the central bank, the Fisher equation would be satisfied. In other words, if the central bank wants 2% inflation, it should set the interest rate instrument under its control at the Fisherian real rate of interest (aka the natural rate) plus 2% expected inflation. So if the Fisherian real rate is 2%, the central bank should set its interest-rate instrument (Fed Funds rate) at 4%, because, in equilibrium – and, under rational expectations, that is the only policy-relevant solution of the model – inflation expectations must satisfy the Fisher equation.
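In symbols, the Neo-Fisherian prescription is just the Fisher equation read as an instruction to the central bank (this restates the preceding paragraph, nothing more):

```latex
i = r + \pi^{e} \qquad\Longrightarrow\qquad \pi^{e} = i - r
```

With a Fisherian real rate of r = 2% and an inflation target of 2%, peg the nominal instrument at i = 2% + 2% = 4%, and equilibrium expected inflation must come out at 2%.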

The Neo-Fisherians believe that, by way of this insight, they have overturned at least two centuries of standard monetary theory, dating back at least to Henry Thornton, instructing the monetary authorities to raise interest rates to combat inflation and to reduce interest rates to counter deflation. According to the Neo-Fisherian Revolution, this was all wrong: the way to reduce inflation is for the monetary authority to reduce the setting on its interest-rate instrument and the way to counter deflation is to raise the setting on the instrument. That is supposedly why the Fed, by reducing its Fed Funds target practically to zero, has locked us into a low-inflation environment.

Unwilling to junk more than 200 years of received doctrine on the basis, not of a behavioral relationship, but of a reduced-form equilibrium condition containing no information about the direction of causality, few monetary economists and no policy makers have become devotees of the Neo-Fisherian Revolution. Nevertheless, the Neo-Fisherian argument has drawn enough attention to elicit a response from Michael Woodford, who is the go-to monetary theorist for monetary-policy makers. The Woodford-Garcia-Schmidt (hereinafter WGS) response (for now just a slide presentation) has already been discussed by Noah Smith, Nick Rowe, Scott Sumner, Brad DeLong, Roger Farmer and John Cochrane. Nick Rowe’s discussion, not surprisingly, is especially penetrating in distilling the WGS presentation into its intuitive essence.

Using Nick’s discussion as a starting point, I am going to offer some comments of my own on Neo-Fisherism and the WGS critique. Right off the bat, WGS concede that it is possible that, by increasing the setting of its interest-rate instrument, a central bank could move the economy from one rational-expectations equilibrium to another, the only difference between the two being that inflation in the second would differ from inflation in the first by an amount exactly equal to the difference in the corresponding settings of the interest-rate instrument. John Cochrane apparently feels pretty good about having extracted this concession from WGS, remarking:

My first reaction is relief — if Woodford says it is a prediction of the standard perfect foresight / rational expectations version, that means I didn’t screw up somewhere. And if one has to resort to learning and non-rational expectations to get rid of a result, the battle is half won.

And my first reaction to Cochrane’s first reaction is: why only half? What else is there to worry about besides a comparison of rational-expectations equilibria? Well, let Cochrane read Nick Rowe’s blogpost. If he did, he might realize that if you do no more than compare alternative steady-state equilibria, ignoring the path leading from one equilibrium to the other, you miss just about everything that makes macroeconomics worth studying (by the way, I do realize the question-begging nature of that remark). Of course that won’t necessarily bother Cochrane, because, like other practitioners of modern macroeconomics, he has convinced himself that it is precisely by excluding everything but rational-expectations equilibria from consideration that modern macroeconomics has made what its practitioners like to think of as progress, and what its critics regard as the opposite.

But Nick Rowe actually takes the trouble to show what might happen if you try to specify the path by which you could get from rational-expectations equilibrium A with the interest-rate instrument of the central bank set at i to rational-expectations equilibrium B with the interest-rate instrument of the central bank set at i + ε. If you try to specify a process of trial-and-error (tatonnement) that leads from A to B, you will almost certainly fail, your only chance being to get it right on your first try. And, as Nick further points out, the very notion of a tatonnement process leading from one equilibrium to another is a huge stretch, because, in the real world, there are no “backs” as there are in tatonnement. If you enter into an exchange, you can’t nullify it, as is the case under tatonnement, just because the price you agreed on turns out not to have been an equilibrium price. For there to be a tatonnement path from the first equilibrium that converges on the second, the monetary authority must set its interest-rate instrument in the conventional, not the Neo-Fisherian, manner, using variations in the real interest rate as a lever by which to nudge the economy onto a path leading toward the new equilibrium rather than away from it.
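Here is a minimal numerical sketch of that knife-edge property (my own stylized adaptive-expectations model, not anything in WGS or in Nick’s post; the parameter values are made up):

```python
# Peg the nominal rate and let inflation expectations adapt. The Fisherian
# steady state pi* = i - r exists, but any small expectational error
# compounds instead of dying out. A stylized illustration with assumed
# parameters, not a calibrated model.

R_NATURAL = 2.0   # Fisherian real ("natural") rate, percent
I_PEG = 4.0       # pegged nominal-rate instrument, percent
ALPHA = 0.5       # assumed sensitivity of inflation to the real-rate gap

def simulate(pi0, periods=12):
    """Adaptive expectations: expected inflation = last period's inflation.
    Inflation rises when the ex ante real rate (i - expected inflation)
    is below the natural rate, and falls when it is above."""
    path = [pi0]
    for _ in range(periods):
        pi = path[-1]
        real_rate = I_PEG - pi                      # ex ante real rate under the peg
        path.append(pi + ALPHA * (R_NATURAL - real_rate))
    return path

print(simulate(2.0)[-1])   # start exactly at pi* = i - r = 2%: stays at 2.0
print(simulate(2.1)[-1])   # start 0.1 above pi*: inflation explodes upward
print(simulate(1.9)[-1])   # start 0.1 below pi*: a deflationary spiral
```

Start exactly at the Fisherian steady state and you stay there forever; start a tenth of a point away and the gap compounds by half every period. That is the sense in which your only chance is to get it right on the first try.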

The very notion that you don’t have to worry about the path by which you get from one equilibrium to another is so bizarre that it would be merely laughable if it were not so dangerous. Kenneth Boulding used to tell a story about a physicist, a chemist and an economist stranded on a desert island with nothing to eat except a can of food, but nothing to open the can with. The physicist and the chemist tried to figure out a way to open the can, but the economist just said: “assume a can opener.” But I wonder if even Boulding could have imagined the disconnect from reality embodied in the Neo-Fisherian argument.

Having registered my disapproval of Neo-Fisherism, let me now reverse field and make some critical comments about the current state of non-Neo-Fisherian monetary theory, and about what makes it vulnerable to off-the-wall ideas like Neo-Fisherism. The important fact to consider about the past two centuries of monetary theory that I referred to above is that for at least three-quarters of that time there was a basic default assumption that the value of money was ultimately governed by the value of some real commodity, usually either silver or gold (or even both). There could be temporary deviations between the value of money and the value of the monetary standard, but because there was a standard, the value of gold or silver provided a benchmark against which the value of money could always be reckoned. I am not saying that this was either a good way of thinking about the value of money or a bad way; I am just pointing out that this was the metatheoretical background governing how people thought about money.

Even after the final collapse of the gold standard in the mid-1930s, a residue of metallism remained, with people still calculating values in terms of gold equivalents and the value of currency in terms of its gold price. Once the gold standard collapsed, it was inevitable that these inherited habits of thinking about money would eventually give way to new ways of thinking, and it took another 40 years or so until the official way of thinking about the value of money finally eliminated any vestige of the gold mentality. In our age of enlightenment, no sane person any longer thinks about the value of money in terms of gold or silver equivalents.

But the problem for monetary theory is that, without a real-value equivalent to assign to money, the value of money in our macroeconomic models became theoretically indeterminate. If the value of money is theoretically indeterminate, so, too, is the rate of inflation. The value of money and the rate of inflation are simply, as Fischer Black understood, whatever people in the aggregate expect them to be. Nevertheless, our basic mental processes for understanding how central banks can use an interest-rate instrument to control the value of money are carryovers from an earlier epoch when the value of money was determined, most of the time and in most places, by convertibility, either actual or expected, into gold or silver. The interest-rate instrument of central banks was not primarily designed as a method for controlling the value of money; it was the mechanism by which the central bank could control the amount of reserves on its balance sheet and the amount of gold or silver in its vaults. There was only an indirect connection – at least until the 1920s — between a central bank setting its interest-rate instrument to control its balance sheet and the effect on prices and inflation. The rules of monetary policy developed under a gold standard are not necessarily applicable to an economic system in which the value of money is fundamentally indeterminate.

Viewed from this perspective, the Neo-Fisherian Revolution appears as a kind of reductio ad absurdum of the present confused state of monetary theory in which the price level and the rate of inflation are entirely subjective and determined totally by expectations.

Price Stickiness and Macroeconomics

Noah Smith has a classically snide rejoinder to Stephen Williamson’s outrage at Noah’s Bloomberg paean to price stickiness and to the classic Ball and Mankiw article on the subject, an article that provoked an embarrassingly outraged response from Robert Lucas when it was published over 20 years ago. I don’t know if Lucas ever got over it, but evidently Williamson hasn’t.

Now to be fair, Lucas’s outrage, though misplaced, was understandable, at least if one understands that Lucas was so offended by the ironic tone in which Ball and Mankiw cast themselves as defenders of traditional macroeconomics – including both Keynesians and Monetarists – against the onslaught of “heretics” like Lucas, Sargent, Kydland and Prescott that he just stopped reading after the first few pages and then, in a fit of righteous indignation, wrote a diatribe attacking Ball and Mankiw as religious fanatics trying to halt the progress of science as if that was the real message of the paper – not, to say the least, a very sophisticated reading of what Ball and Mankiw wrote.

While I am not hostile to the idea of price stickiness — one of the most popular posts I have written being an attempt to provide a rationale for the stylized (though controversial) fact that wages are stickier than other input, and most output, prices — it does seem to me that there is something ad hoc and superficial about the idea of price stickiness and about many explanations, including those offered by Ball and Mankiw, for price stickiness. I think that the negative reactions that price stickiness elicits from a lot of economists — and not only from Lucas and Williamson — reflect a feeling that price stickiness is not well grounded in any economic theory.

Let me offer a slightly different criticism of price stickiness as a feature of macroeconomic models, which is simply that although price stickiness is a sufficient condition for inefficient macroeconomic fluctuations, it is not a necessary condition. It is entirely possible that even with highly flexible prices, there would still be inefficient macroeconomic fluctuations. And the reason why price flexibility, by itself, is no guarantee against macroeconomic contractions is that macroeconomic contractions are caused by disequilibrium prices, and disequilibrium prices can prevail regardless of how flexible prices are.

The usual argument is that if prices are free to adjust in response to market forces, they will adjust to balance supply and demand, and an equilibrium will be restored by the automatic adjustment of prices. That is what students are taught in Econ 1. And it is an important lesson, but it is also a “partial” lesson. It is partial, because it applies to a single market that is out of equilibrium. The implicit assumption in that exercise is that nothing else is changing, which means that all other markets — well, not quite all other markets, but I will ignore that nuance – are in equilibrium. That’s what I mean when I say (as I have done before) that just as macroeconomics needs microfoundations, microeconomics needs macrofoundations.

Now it’s pretty easy to show that in a single market with an upward-sloping supply curve and a downward-sloping demand curve, a price-adjustment rule that raises price when there’s an excess demand and reduces price when there’s an excess supply will lead to an equilibrium market price. But that simple price-adjustment rule is hard to generalize when many markets — not just one — are in disequilibrium, because reducing disequilibrium in one market may actually exacerbate disequilibrium, or create a disequilibrium that wasn’t there before, in another market. Thus, even if there is an equilibrium price vector out there, which, if it were announced to all economic agents, would sustain a general equilibrium in all markets, there is no guarantee that following the standard price-adjustment rule of raising price in markets with an excess demand and reducing price in markets with an excess supply will ultimately lead to the equilibrium price vector. Even more disturbing, the standard price-adjustment rule may not, even under a tatonnement process in which no trading is allowed at disequilibrium prices, lead to the discovery of the equilibrium price vector. Of course, in the real world trading occurs routinely at disequilibrium prices, so the “mechanical” forces tending an economy toward equilibrium are even weaker than the standard analysis of price adjustment would suggest.
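The canonical demonstration that the tatonnement rule can fail is Scarf’s famous 1960 three-good exchange economy, which is worth sketching here (my illustration, not something in the original post; the starting prices are made up):

```python
# Scarf's example: consumer i is endowed with one unit of good i and wants
# goods i and i+1 in fixed (Leontief) proportions. The equilibrium has all
# three prices equal, but the rule "raise price where there is excess
# demand, cut it where there is excess supply" orbits around that
# equilibrium without ever approaching it.

def excess_demand(p):
    """Market excess demands implied by the three Leontief consumers."""
    p1, p2, p3 = p
    z1 = p1 / (p1 + p2) + p3 / (p3 + p1) - 1.0
    z2 = p1 / (p1 + p2) + p2 / (p2 + p3) - 1.0
    z3 = p2 / (p2 + p3) + p3 / (p3 + p1) - 1.0
    return (z1, z2, z3)

def tatonnement(p, step=0.01, iterations=50_000):
    """The standard rule: move each price in the direction of its excess demand."""
    for _ in range(iterations):
        z = excess_demand(p)
        p = tuple(max(pi + step * zi, 1e-9) for pi, zi in zip(p, z))
    return p

p = tatonnement((1.0, 1.2, 0.9))   # start away from the equal-price equilibrium
print(p, max(p) - min(p))          # the spread between prices never dies out
```

The continuous-time version of this rule traces closed orbits around the equal-price equilibrium, so the auctioneer can keep adjusting forever without ever discovering the equilibrium price vector.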

This doesn’t mean that an economy out of equilibrium has no stabilizing tendencies; it does mean that those stabilizing tendencies are not very well understood, and we have almost no formal theory with which to describe how such an adjustment process leading from disequilibrium to equilibrium actually works. We just assume that such a process exists. Franklin Fisher made this point 30 years ago in an important, but insufficiently appreciated, volume Disequilibrium Foundations of Equilibrium Economics. But the idea goes back even further: to Hayek’s important work on intertemporal equilibrium, especially his classic paper “Economics and Knowledge,” formalized by Hicks in the temporary-equilibrium model described in Value and Capital.

The key point made by Hayek in this context is that there can be an intertemporal equilibrium if and only if all agents formulate their individual plans on the basis of the same expectations of future prices. If their expectations for future prices are not the same, then any plans based on incorrect price expectations will have to be revised, or abandoned altogether, as price expectations are disappointed over time. For price adjustment to lead an economy back to equilibrium, the price adjustment must converge on an equilibrium price vector and on correct price expectations. But, as Hayek understood in 1937, and as Fisher explained in a dense treatise 30 years ago, we have no economic theory that explains how such a price vector, even if it exists, is arrived at, even under a tatonnement process, much less under decentralized price setting. Pinning the blame on this vague thing called price stickiness doesn’t address the deeper underlying theoretical issue.

Of course for Lucas et al. to scoff at price stickiness on these grounds is a bit rich, because Lucas and his followers seem entirely comfortable with assuming that the equilibrium price vector is rationally expected. Indeed, rational expectation of the equilibrium price vector is held up by Lucas as precisely the microfoundation that transformed the unruly field of macroeconomics into a real science.

Is John Cochrane Really an (Irving) Fisherian?

I’m pretty late getting to this Wall Street Journal op-ed by John Cochrane (here’s an ungated version), and Noah Smith has already given it an admirable working over, but, even after Noah Smith, there’s an assertion or two by Cochrane that could use a bit of elucidation. Like this one:

Keynesians told us that once interest rates got stuck at or near zero, economies would fall into a deflationary spiral. Deflation would lower demand, causing more deflation, and so on.

Noah seems to think this is a good point, but I guess that I am less easily impressed than Noah. Feeling no need to provide citations for the views he attributes to Keynesians, Cochrane does not bother to tell us which Keynesian has asserted that the zero lower bound creates the danger of a deflationary spiral, though in a previous blog post Cochrane does provide a number of statements by Paul Krugman (who I guess qualifies as the default representative of all Keynesians) about the danger of a deflationary spiral. Interestingly, all but one of those quotations are from 2009, when, in the wake of the fall 2008 financial crisis, with a nasty little relapse in early 2009 having driven the stock market to a 12-year low and the Fed only then launching its first round of quantitative easing, the threat of a deflationary spiral did not seem at all remote.

Now an internet search shows that Krugman does have a model showing that a downward deflationary spiral is possible at the zero lower bound. I would just note, for the record, that Earl Thompson, in an unpublished 1976 paper, derived a similar result from an aggregate model based on a neo-classical aggregate production function with the Keynesian expenditure functions (through application of Walras’s Law) excluded. So what’s Keynes got to do with it?

But even more remarkable is that the most famous model of a deflationary downward spiral was constructed not by a Keynesian, but by the grandfather of modern Monetarism, Irving Fisher, in his famous 1933 paper on debt deflation, “The Debt-Deflation Theory of Great Depressions.” So the suggestion that there is something uniquely Keynesian about a downward deflationary spiral at the zero lower bound is simply not credible.

Cochrane also believes that because inflation has stabilized at very low levels, slow growth cannot be blamed on insufficient aggregate demand.

Zero interest rates and low inflation turn out to be quite a stable state, even in Japan. Yes, Japan is growing more slowly than one might wish, but with 3.5% unemployment and no deflationary spiral, it’s hard to blame slow growth on lack of “demand.”

Except that, since 2009 when the threat of a downward deflationary spiral seemed more visibly on the horizon than it does now, Krugman has consistently argued that, at the zero lower bound, chronic stagnation and underemployment are perfectly capable of coexisting with a positive rate of inflation. So it’s not clear why Cochrane thinks the coincidence of low inflation and sluggish economic growth for five years since the end of the 2008-09 downturn somehow refutes Krugman’s diagnosis of what has been ailing the economy in recent years.

And, again, what’s even more interesting is that the proposition that there can be insufficient aggregate demand, even with positive inflation, follows directly from the Fisher equation, of which Cochrane claims to be a fervent devotee. After all, if the real rate of interest is negative, then the Fisher equation tells us that, at the zero lower bound, the equilibrium expected rate of inflation cannot be less than the absolute value of the real rate of interest. So if, at the zero lower bound, the real rate of interest is minus 1%, then the equilibrium expected rate of inflation is 1%, and if the actual rate of inflation equals the equilibrium expected rate, then the economy, even if it is operating at less than full employment and less than its potential output, may be in a state of macroeconomic equilibrium. And it may not be possible to escape from that low-level equilibrium and increase output and employment without a burst of unexpected inflation providing a self-sustaining stimulus to economic growth, thereby moving the economy to a higher-level equilibrium with a higher real rate of interest than the rate corresponding to the lower-level equilibrium. If I am not mistaken, Roger Farmer has been making an argument along these lines.
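Spelling out that arithmetic (nothing here beyond the Fisher equation and the zero-lower-bound constraint):

```latex
i = r + \pi^{e} \ \ge\ 0 \qquad\Longrightarrow\qquad \pi^{e} \ \ge\ -r = \lvert r \rvert \quad \text{when } r < 0
```

With r = −1% and i = 0, equilibrium expected inflation is exactly 1%: low but positive inflation coexisting with a depressed equilibrium, which is the Fisherian counterpart of the Keynesian diagnosis.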

Given the close correspondence between the Keynesian and Fisherian analyses of what happens in the neighborhood of the zero lower bound, I am really curious to know what part of the Fisherian analysis Cochrane finds difficult to comprehend.

John Cochrane on the Failure of Macroeconomics

The state of modern macroeconomics is not good; John Cochrane, professor of finance at the University of Chicago, senior fellow of the Hoover Institution, and adjunct scholar of the Cato Institute, writing in Thursday’s Wall Street Journal, thinks macroeconomics is a failure. Perhaps so, but he has trouble explaining why.

The problem that Cochrane is chiefly focused on is slow growth.

Output per capita fell almost 10 percentage points below trend in the 2008 recession. It has since grown at less than 1.5%, and lost more ground relative to trend. Cumulative losses are many trillions of dollars, and growing. And the latest GDP report disappoints again, declining in the first quarter.

Sclerotic growth trumps every other economic problem. Without strong growth, our children and grandchildren will not see the great rise in health and living standards that we enjoy relative to our parents and grandparents. Without growth, our government’s already questionable ability to pay for health care, retirement and its debt evaporate. Without growth, the lot of the unfortunate will not improve. Without growth, U.S. military strength and our influence abroad must fade.

Macroeconomists offer two possible explanations for slow growth: a) too little demand — correctable through monetary or fiscal stimulus — and b) structural rigidities and impediments to growth, for which stimulus is no remedy. Cochrane is not a fan of the demand explanation.

The “demand” side initially cited New Keynesian macroeconomic models. In this view, the economy requires a sharply negative real (after inflation) rate of interest. But inflation is only 2%, and the Federal Reserve cannot lower interest rates below zero. Thus the current negative 2% real rate is too high, inducing people to save too much and spend too little.

New Keynesian models have also produced attractively magical policy predictions. Government spending, even if financed by taxes, and even if completely wasted, raises GDP. Larry Summers and Berkeley’s Brad DeLong write of a multiplier so large that spending generates enough taxes to pay for itself. Paul Krugman writes that even the “broken windows fallacy ceases to be a fallacy,” because replacing windows “can stimulate spending and raise employment.”

If you look hard at New-Keynesian models, however, this diagnosis and these policy predictions are fragile. There are many ways to generate the models’ predictions for GDP, employment and inflation from their underlying assumptions about how people behave. Some predict outsize multipliers and revive the broken-window fallacy. Others generate normal policy predictions—small multipliers and costly broken windows. None produces our steady low-inflation slump as a “demand” failure.

Cochrane’s characterization of what’s wrong with New Keynesian models is remarkably superficial. Slow growth, according to the New Keynesian model, is caused by the real interest rate being insufficiently negative, with the nominal rate at zero and inflation at (less than) 2%. So what is the problem? True, the nominal rate can’t go below zero, but where is it written that the upper bound on inflation is (or must be) 2%? Cochrane doesn’t say. Not only doesn’t he say, he doesn’t even seem interested. It might be that something really terrible would happen if the rate of inflation rose above 2%, but if so, Cochrane or somebody needs to explain why terrible calamities did not befall us during all those comparatively glorious bygone years when the rate of inflation consistently exceeded 2% while real economic growth was at least a percentage point higher than it is now. Perhaps, like Fischer Black, Cochrane believes that the rate of inflation has nothing to do with monetary or fiscal policy. But that is certainly not the standard interpretation of the New Keynesian model that he is using as the archetype for modern demand-management macroeconomic theories. And if Cochrane does believe that the rate of inflation is not determined by either monetary policy or fiscal policy, he ought to come out and say so.

Cochrane thinks that persistent low inflation and low growth together pose a problem for New Keynesian theories. Indeed they do, but it doesn’t seem that a radical revision of New Keynesian theory would be required to cope with that state of affairs. Cochrane thinks otherwise.

These problems [i.e., a steady low-inflation slump, aka “secular stagnation”] are recognized, and now academics such as Brown University’s Gauti Eggertsson and Neil Mehrotra are busy tweaking the models to address them. Good. But models that someone might get to work in the future are not ready to drive trillions of dollars of public expenditure.

In other words, unless the economic model has already been worked out before a particular economic problem arises, no economic policy conclusions may be deduced from that economic model. May I call this Cochrane’s rule?

Cochrane then proceeds to accuse those who look to traditional Keynesian ideas of rejecting science.

The reaction in policy circles to these problems is instead a full-on retreat, not just from the admirable rigor of New Keynesian modeling, but from the attempt to make economics scientific at all.

Messrs. DeLong and Summers and Johns Hopkins’s Laurence Ball capture this feeling well, writing in a recent paper that “the appropriate new thinking is largely old thinking: traditional Keynesian ideas of the 1930s to 1960s.” That is, from before the 1960s when Keynesian thinking was quantified, fed into computers and checked against data; and before the 1970s, when that check failed, and other economists built new and more coherent models. Paul Krugman likewise rails against “generations of economists” who are “viewing the world through a haze of equations.”

Well, maybe they’re right. Social sciences can go off the rails for 50 years. I think Keynesian economics did just that. But if economics is as ephemeral as philosophy or literature, then it cannot don the mantle of scientific expertise to demand trillions of public expenditure.

This is political rhetoric wrapped in a cloak of scientific objectivity. We don’t have the luxury of knowing in advance what the consequences of our actions will be. The United States has spent trillions of dollars on all kinds of stuff over the past dozen years or so. A lot of it has not worked out well at all. So it is altogether fitting and proper for us to be skeptical about whether we will get our money’s worth for whatever the government proposes to spend on our behalf. But Cochrane’s implicit demand that money be spent only if there is some sort of scientific certainty that it will be well spent can never be met. However, as Larry Summers has pointed out, there are certainly many worthwhile infrastructure projects that could be undertaken, so the risk of committing the “broken windows fallacy” is small. With the government able to borrow at negative real interest rates, the present value of funding such projects is almost certainly positive. So one wonders: what is the scientific basis for not funding those projects?

Cochrane compares macroeconomics to climate science:

The climate policy establishment also wants to spend trillions of dollars, and cites scientific literature, imperfect and contentious as that literature may be. Imagine how much less persuasive they would be if they instead denied published climate science since 1975 and bemoaned climate models’ “haze of equations”; if they told us to go back to the complex writings of a weather guru from the 1930s Dustbowl, as they interpret his writings. That’s the current argument for fiscal stimulus.

Cochrane writes as if there were some important scientific breakthrough made by modern macroeconomics — “the new and more coherent models,” either the New Keynesian version of New Classical macroeconomics or Real Business Cycle Theory — that rendered traditional Keynesian economics obsolete or outdated. I have never been a devotee of Keynesian economics, but the fact is that modern macroeconomics has achieved its ascendancy in academic circles almost entirely by way of a misguided methodological preference for axiomatized intertemporal optimization models for which a unique equilibrium solution can be found by imposing the empirically risible assumption of rational expectations. These models, whether in their New Keynesian or Real Business Cycle versions, do not generate better empirical predictions than the old-fashioned Keynesian models, and, as Noah Smith has usefully pointed out, these models have been consistently rejected by private forecasters in favor of the traditional Keynesian models. It is only the dominant clique of ivory-tower intellectuals that cultivates and nurtures these models. The notion that such models are entitled to any special authority or scientific status is based on nothing but the exaggerated self-esteem that is characteristic of almost every intellectual clique, particularly dominant ones.

Having rejected inadequate demand as a cause of slow growth, Cochrane, relying on no model and no evidence, makes a pitch for uncertainty as the source of slow growth.

Where, instead, are the problems? John Taylor, Stanford’s Nick Bloom and Chicago Booth’s Steve Davis see the uncertainty induced by seat-of-the-pants policy at fault. Who wants to hire, lend or invest when the next stroke of the presidential pen or Justice Department witch hunt can undo all the hard work? Ed Prescott emphasizes large distorting taxes and intrusive regulations. The University of Chicago’s Casey Mulligan deconstructs the unintended disincentives of social programs. And so forth. These problems did not cause the recession. But they are worse now, and they can impede recovery and retard growth.

Where, one wonders, is the science on which this sort of seat-of-the-pants speculation is based? Is there any evidence, for example, that the tax burden on businesses or individuals is greater now than it was let us say in 1983-85 when, under President Reagan, the economy, despite annual tax increases partially reversing the 1981 cuts enacted in Reagan’s first year, began recovering rapidly from the 1981-82 recession?

Monetary Theory on the Neo-Fisherite Edge

The week before last, Noah Smith wrote a post “The Neo-Fisherite Rebellion” discussing, rather sympathetically I thought, the contrarian school of monetary thought emerging from the Great American Heartland, according to which, notwithstanding everything monetary economists since Henry Thornton have taught, high interest rates are inflationary and low interest rates deflationary. This view of the relationship between interest rates and inflation was advanced (but later retracted) by Narayana Kocherlakota, President of the Minneapolis Fed in a 2010 lecture, and was embraced and expounded with increased steadfastness by Stephen Williamson of Washington University in St. Louis and the St. Louis Fed in at least one working paper and in a series of posts over the past five or six months (e.g. here, here and here). And John Cochrane of the University of Chicago has picked up on the idea as well in two recent blog posts (here and here). Others seem to be joining the upstart school as well.

The new argument seems simple: given the Fisher equation, in which the nominal interest rate equals the real interest rate plus the (expected) rate of inflation, a central bank can meet its inflation target by setting a fixed nominal interest rate target consistent with its inflation target and keeping it there. Once the central bank sets its target, the long-run neutrality of money, implying that the real interest rate is independent of the nominal targets set by the central bank, ensures that inflation expectations must converge on rates consistent with the nominal interest rate target and the independently determined real interest rate (i.e., the real yield curve), so that the actual and expected rates of inflation adjust to ensure that the Fisher equation is satisfied. If the promise of the central bank to maintain a particular nominal rate over time is believed, the promise will induce a rate of inflation consistent with the nominal interest-rate target and the exogenous real rate.

The novelty of this way of thinking about monetary policy is that monetary theorists have generally assumed that the actual adjustment of the price level or inflation rate depends on whether the target interest rate is greater or less than the real rate plus the expected rate of inflation. When the target rate is greater than the real rate plus expected inflation, inflation goes down, and when it is less than the real rate plus expected inflation, inflation goes up. In the conventional treatment, the expected rate of inflation is momentarily fixed, and the (expected) real rate is variable. In the Neo-Fisherite school, the (expected) real rate is fixed, and the expected inflation rate is variable. (Just as an aside, I would observe that the idea that expectations about the real rate of interest and the inflation rate cannot both adjust simultaneously in the short run is not derived from the limited cognitive capacity of economic agents; it can only be derived from the limited intellectual capacity of economic theorists.)
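
To see the difference starkly, here is a deliberately crude numerical sketch, entirely my own construction rather than anything from Williamson’s or Cochrane’s papers, with an invented interest-rate peg, real rate, and adjustment speed. Holding the real rate fixed makes expected inflation converge on the pegged rate minus the real rate; holding expected inflation momentarily fixed makes the gap between the peg and the neutral rate feed on itself:

```python
# A stylized contrast of the two adjustment stories under a permanently
# pegged nominal rate. All parameter values are invented for illustration.
# Fisher equation: i = r + pi_e.

I_PEG = 0.04       # pegged nominal interest rate (hypothetical)
R_NATURAL = 0.02   # exogenous long-run real rate (hypothetical)
SPEED = 0.5        # arbitrary adjustment speed per period

def neo_fisherite(pi_e):
    """Real rate fixed; expected inflation adjusts toward i - r."""
    return pi_e + SPEED * ((I_PEG - R_NATURAL) - pi_e)

def conventional(pi_e):
    """Expected inflation momentarily fixed; a peg above r + pi_e is
    contractionary, pushing actual (and then expected) inflation down."""
    return pi_e - SPEED * (I_PEG - (R_NATURAL + pi_e))

pi_nf = pi_cv = 0.0   # start from zero expected inflation
for _ in range(12):
    pi_nf, pi_cv = neo_fisherite(pi_nf), conventional(pi_cv)

print(f"Neo-Fisherite: pi_e -> {pi_nf:+.3f} (converges to {I_PEG - R_NATURAL:.2f})")
print(f"Conventional:  pi_e -> {pi_cv:+.3f} (spirals away from the peg)")
```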

The heretical views expressed by Williamson and Cochrane and earlier by Kocherlakota have understandably elicited scorn and derision from conventional monetary theorists, whether Keynesian, New Keynesian, Monetarist or Market Monetarist. (Williamson having appropriated for himself the New Monetarist label, I regrettably could not preserve an appropriate symmetry in my list of labels for monetary theorists.) As a matter of fact, I wrote a post last December challenging Williamson’s reasoning in arguing that QE had caused a decline in inflation, though in his initial foray into uncharted territory, Williamson was actually making a narrower argument than the more general thesis that he has more recently expounded.

Although, deep down, I have no great sympathy for Williamson’s argument, the counterarguments I have seen leave me feeling a bit, shall we say, underwhelmed. That’s not to say that I am becoming a convert to New Monetarism, but I am feeling that we have reached a point at which certain underlying gaps in monetary theory can’t be concealed any longer. To explain what I mean by that remark, let me start by reviewing the historical context in which the ruling doctrine governing central-bank operations via adjustments in the central-bank lending rate evolved. The primary (though historically not the first) source of the doctrine is Henry Thornton in his classic volume An Enquiry into the Nature and Effects of the Paper Credit of Great Britain.

Even though Thornton focused on the policy of the Bank of England during the Napoleonic Wars, when Bank of England notes, not gold, were legal tender, his discussion was still in the context of a monetary system in which paper money was generally convertible into either gold or silver. Inconvertible banknotes – aka fiat money — were the exception not the rule. Gold and silver were what Nick Rowe would call alpha money. All other moneys were evaluated in terms of gold and silver, not in terms of a general price level (not yet a widely accepted concept). Even though Bank of England notes became an alternative alpha money during the restriction period of inconvertibility, that situation was generally viewed as temporary, the restoration of convertibility being expected after the war. The value of the paper pound was tracked by the sterling price of gold on the Hamburg exchange. Thus, Ricardo’s first published work was entitled The High Price of Bullion, in which he blamed the high sterling price of bullion at Hamburg on an overissue of banknotes by the Bank of England.

But to get back to Thornton, who was far more concerned with the mechanics of monetary policy than Ricardo, his great contribution was to show that the Bank of England could control the amount of lending (and money creation) by adjusting the interest rate charged to borrowers. If banknotes were depreciating relative to gold, the Bank of England could increase the value of its notes by raising the rate of interest charged on loans.

The point is that if you are a central banker and are trying to target the exchange rate of your currency with respect to an alpha currency, you can do so by adjusting the interest rate that you charge borrowers. Raising the interest rate will cause the exchange value of your currency to rise and reducing the interest rate will cause the exchange value to fall. And if you are operating under strict convertibility, so that you are committed to keep the exchange rate between your currency and an alpha currency at a specified par value, raising that interest rate will cause you to accumulate reserves payable in terms of the alpha currency, and reducing that interest rate will cause you to emit reserves payable in terms of the alpha currency.

So the idea that an increase in the central-bank interest rate tends to increase the exchange value of its currency, or, under a fixed-exchange rate regime, an increase in the foreign exchange reserves of the bank, has a history at least two centuries old, though the doctrine has not exactly been free of misunderstanding or confusion in the course of those two centuries. One of those misunderstandings was about the effect of a change in the central-bank interest rate, under a fixed-exchange rate regime. In fact, as long as the central bank is maintaining a fixed exchange rate between its currency and an alpha currency, changes in the central-bank interest rate don’t affect (at least as a first approximation) either the domestic money supply or the domestic price level; all that changes in the central-bank interest rate can accomplish is to change the bank’s holdings of alpha-currency reserves.

It seems to me that this long and well-documented historical association between changes in central-bank interest rates and both the exchange value of currencies and the level of private spending is the basis for the widespread theoretical presumption that raising the central-bank interest-rate target is deflationary and reducing it is inflationary. However, the old central-bank doctrine of the Bank Rate was conceived in a world in which gold and silver were the alpha moneys, and central banks – even central banks operating with inconvertible currencies – were beta banks, because the value of a central-bank currency was still reckoned, like the value of inconvertible Bank of England notes in the Napoleonic Wars, in terms of gold and silver.

In the Neo-Fisherite world, central banks rarely peg exchange rates against each other, and there is no longer any outside standard of value to which central banks even nominally commit themselves. In a world without the metallic standard of value in which the conventional theory of central banking developed, do the propositions about the effects of central-bank interest-rate setting still obtain? I am not so sure that they do, not with the analytical tools that we normally deploy when thinking about the effects of central-bank policies. Why not? Because, in a Neo-Fisherite world in which all central banks are alpha banks, I am not so sure that we really know what determines the value of this thing called fiat money. And if we don’t really know what determines the value of a fiat money, how can we really be sure that interest-rate policy works the same way in a Neo-Fisherite world that it used to work when the value of money was determined in relation to a metallic standard? (Just to avoid misunderstanding, I am not – repeat NOT — arguing for restoring the gold standard.)

Why do I say that we don’t know what determines the value of fiat money in a Neo-Fisherite world? Well, consider this. Almost three weeks ago I wrote a post in which I suggested that Bitcoins could be a massive bubble. My explanation for why Bitcoins could be a bubble is that they provide no real (i.e., non-monetary) service, so that their value is totally contingent on, and derived from (or so it seems to me, though I admit that my understanding of Bitcoins is partial and imperfect), the expectation of a positive future resale value. However, it seems certain that the resale value of Bitcoins must eventually fall to zero, so that backward induction implies that Bitcoins, inasmuch as they provide no real service, cannot retain a positive value in the present. On this reasoning, any observed value of a Bitcoin seems inexplicable except as an irrational bubble phenomenon.

Most of the comments I received about that post challenged the relevance of the backward-induction argument. The challenges were mainly of two types: a) the end state, when everyone will certainly stop accepting a Bitcoin in exchange, is very, very far into the future and its date is unknown, and b) the backward-induction argument applies equally to every fiat currency, so my own reasoning, according to my critics, implies that the value of every fiat currency is just as much a bubble phenomenon as the value of a Bitcoin.

My response to the first objection is that even if the strict logic of the backward-induction argument is inconclusive, because of the long and uncertain time that must elapse between now and the end state, the argument nevertheless suggests that the value of a Bitcoin is potentially very unsteady and vulnerable to sudden collapse. Those are not generally thought to be desirable attributes in a medium of exchange.

My response to the second objection is that fiat currencies are actually quite different from Bitcoins, because fiat currencies are accepted by governments in discharging the tax liabilities due to them. The discharge of a tax liability is a real (i.e. non-monetary) service, creating a distinct non-monetary demand for fiat currencies, thereby ensuring that fiat currencies retain value, even apart from being accepted as a medium of exchange.

That, at any rate, is my view, which I first heard from Earl Thompson (see his unpublished paper, “A Reformulation of Macroeconomic Theory,” pp. 23-25, for a derivation of the value of fiat money when tax liability is a fixed proportion of income). Some other pretty good economists have also held that view, like Abba Lerner, P. H. Wicksteed, and Adam Smith. Georg Friedrich Knapp also held that view, and, in his day, he was certainly well known, but I am unable to pass judgment on whether he was or wasn’t a good economist. But I do know that his views about money were famously misrepresented and caricatured by Ludwig von Mises. However, there are other good economists (Hal Varian for one), apparently unaware of, or untroubled by, the backward-induction argument, who don’t think that acceptability in discharging tax liability is required to explain the value of fiat money.
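
The contrast between the Bitcoin case and the fiat-money case can be put in bare backward-induction terms. The sketch below is my own toy valuation, not anyone’s formal model: an asset with no service flow and a certainly-zero resale value at some terminal date T unravels to a zero value today, while even a small per-period real service flow, such as the discharge of tax liabilities, supports a positive present value:

```python
# Toy backward-induction valuation. Assumptions (mine, for illustration):
# a known terminal date T at which resale value is certainly zero, a constant
# discount factor, and a constant per-period real service flow.

def value(t, T, discount=0.97, service_flow=0.0):
    """Value at date t of an asset worth zero at T that yields
    service_flow each period and is otherwise held only for resale."""
    if t == T:
        return 0.0
    return service_flow + discount * value(t + 1, T, discount, service_flow)

print(value(0, T=50))                               # 0.0: the Bitcoin case
print(round(value(0, T=50, service_flow=0.02), 3))  # > 0: the fiat-money case
```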

Nor do I think that Thompson’s tax-acceptability theory of the value of money can stand entirely on its own, because it implies a kind of saw-tooth time profile of the price level, so that a fiat currency, earning no liquidity premium, would actually be appreciating between peak tax collection dates, and depreciating immediately following those dates, a pattern not obviously consistent with observed price data, though I do recall that Thompson used to claim that there is a lot of evidence that prices fall just before peak tax-collection dates. I don’t think that anyone has ever tried to combine the tax-acceptability theory with the empirical premise that currency (or base money) does in fact provide significant liquidity services. That, it seems to me, would be a worthwhile endeavor for any eager young researcher to undertake.
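
What such a saw-tooth path would look like can be shown with a toy simulation, again my own construction with invented numbers rather than Thompson’s derivation: with no liquidity premium, money must appreciate at the real rate between annual tax-collection dates (the price level drifting down), then depreciate in a jump immediately after each date:

```python
# Toy saw-tooth price-level path under a pure tax-acceptability theory.
# All parameters are invented for illustration.

R = 0.04      # annual real rate of interest (hypothetical)
YEARS = 3     # number of annual tax-collection dates simulated
STEPS = 12    # monthly steps between dates

p = 1.0
path = []
for _ in range(YEARS):
    for _ in range(STEPS):
        p /= (1 + R) ** (1 / STEPS)   # money appreciates: price level falls
        path.append(round(p, 4))
    p *= (1 + R)                      # jump depreciation just after the tax date
    path.append(round(p, 4))

print(path[:STEPS + 1])   # one year's gradual decline, then the upward jump
```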

What does all of this have to do with the Neo-Fisherite Rebellion? Well, suppose that, as another very smart economist, Fischer Black (who, to my knowledge, never mentioned the tax-liability theory), believed, we don’t have a satisfactory theory of the value of fiat money at hand. Then the only explanation of the value of fiat money is that, like the value of a Bitcoin, it is whatever people expect it to be. And the rate of inflation is equally inexplicable, being just whatever it is expected to be. So in a Neo-Fisherite world, if the central bank announces that it is reducing its interest-rate target, the effect of the announcement depends entirely on what “the market” reads into the announcement. And that is exactly what Fischer Black believed. See his paper “Active and Passive Monetary Policy in a Neoclassical Model.”

I don’t say that Williamson and his Neo-Fisherite colleagues are correct. Nor have they, to my knowledge, related their arguments to Fischer Black’s work. What I do say (indeed this is a problem I raised almost three years ago in one of my first posts on this blog) is that existing monetary theories of the price level are unable to rule out his result, because the behavior of the price level and inflation seems to depend, more than anything else, on expectations. And it is far from clear to me that there are any fundamentals in which these expectations can be grounded. If you impose the rational expectations assumption, which is almost certainly wrong empirically, maybe you can argue that the central bank provides a focal point for expectations to converge on. The problem, of course, is that in the real world, expectations are all over the place, there being no fundamentals to force the convergence of expectations to a stable equilibrium value.

In other words, it’s just a mess, a bloody mess, and I do not like it, not one little bit.

The Social Cost of Finance

Noah Smith has a great post that bears on the topic that I have been discussing of late (here and here): whether the growth of the US financial sector over the past three decades had anything to do with the decline in the real rate of interest that seems to have occurred over the same period. I have been suggesting that there may be reason to believe that the growth in the financial sector (from about 5% of GDP in 1980 to 8% in 2007) has reduced the productivity of the rest of the economy, because a not insubstantial part of the earnings of the financial sector has been extracted from relatively unsophisticated, informationally disadvantaged traders and customers. Much of what financial firms do is aimed at obtaining an information advantage from which profit can be extracted, just as athletes devote resources to gaining a competitive advantage. The resources devoted to gaining informational advantage are mostly wasted, being used to transfer, not create, wealth. This seems to be true as a matter of theory; what is less clear is whether enough resources have been wasted to cause a non-negligible deterioration in economic performance.

Noah underscores the paucity of our knowledge by referring to two papers, one by Robin Greenwood and David Scharfstein (recently published in the Journal of Economic Perspectives) and the other, a response by John Cochrane posted on his blog (see here for the PDF). The Greenwood and Scharfstein paper provides theoretical arguments and evidence that tend to support the proposition that the US financial sector is too large. Here is how they sum up their findings.

First, a large part of the growth of finance is in asset management, which has brought many benefits including, most notably, increased diversification and household participation in the stock market. This has likely lowered required rates of return on risky securities, increased valuations, and lowered the cost of capital to corporations. The biggest beneficiaries were likely young firms, which stand to gain the most when discount rates fall. On the other hand, the enormous growth of asset management after 1997 was driven by high fee alternative investments, with little direct evidence of much social benefit, and potentially large distortions in the allocation of talent. On net, society is likely better off because of active asset management but, on the margin, society would be better off if the cost of asset management could be reduced.

Second, changes in the process of credit delivery facilitated the expansion of household credit, mainly in residential mortgage credit. This led to higher fee income to the financial sector. While there may be benefits of expanding access to mortgage credit and lowering its cost, we point out that the U.S. tax code already biases households to overinvest in residential real estate. Moreover, the shadow banking system that facilitated this expansion made the financial system more fragile.

In his response, Cochrane offers a number of reasons why Greenwood and Scharfstein are understating the benefits generated by active asset management. Here is a passage from Cochrane’s paper (quoted also by Noah) that I would like to focus on.

I conclude that information trading of this sort sits at the conflict of two externalities / public goods. On the one hand, as French points out, “price impact” means that traders are not able to appropriate the full value of the information they bring, so there can be too few resources devoted to information production (and digestion, which strikes me as far more important). On the other hand, as Greenwood and Scharfstein point out, information is a non-rival good, and its exploitation in financial markets is a tournament (first to use it gets all the benefit) so the theorem that profits you make equal the social benefit of its production is false. It is indeed a waste of resources to bring information to the market a few minutes early, when that information will be revealed for free a few minutes later. Whether we have “too much” trading, too many resources devoted to finding information that somebody already has or will be revealed in a few minutes, or “too little” trading, markets where prices go for long times not reflecting important information, as many argued during the financial crisis, seems like a topic which neither theory nor empirical work has answered with any sort of clarity.

Cochrane’s characterization of information trading as a public good is not wrong, inasmuch as we all benefit from the existence of markets for goods and assets, even those of us who don’t participate routinely (or ever) in those markets: first, because the existence of those markets provides us with opportunities to trade that may, at some unknown future time, become very valuable to us, and second, because the existence of markets contributes to the efficient utilization of resources, thereby increasing the total value of output. Because the existence of markets is a kind of public good, it may be true that even more market trading than now occurs would be socially beneficial. Suppose that every trade involves a transaction cost of 5 cents, and that the transaction cost prevents at least one trade from taking place, because the expected gain to the traders from that trade would be only 4 cents. But since that unconsummated trade would also confer a benefit on third parties, by improving the allocation of resources ever so slightly, causing total output to rise by, say, 3 cents, it would be worth it to the rest of us to subsidize the parties to that unconsummated trade by rebating some part of the transaction cost associated with that trade.
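
Put as bare arithmetic (the figures are, of course, just the illustrative ones from the example above):

```python
# The unconsummated-trade example in numbers.
TRANSACTION_COST = 0.05   # cost per trade
PRIVATE_GAIN = 0.04       # expected gain to the two traders
EXTERNAL_GAIN = 0.03      # third-party benefit via a better allocation

print(PRIVATE_GAIN > TRANSACTION_COST)                  # False: trade fails
print(PRIVATE_GAIN + EXTERNAL_GAIN > TRANSACTION_COST)  # True: socially worth it

min_rebate = TRANSACTION_COST - PRIVATE_GAIN            # a 1-cent rebate suffices
print(round(EXTERNAL_GAIN - min_rebate, 2))             # 0.02: society still nets 2 cents
```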

But here’s my problem with Cochrane’s argument. Let us imagine that there is some unique social optimum, or at least a defined set of Pareto-optimal allocations, which we are trying to attain, or to come as close as possible to. The existence of functioning markets certainly helps us come closer to the set of Pareto-optimal allocations than if markets did not exist. Cochrane is suggesting that, by devoting more resources to the production of information (which in a basically free-market, private-property economy involves the creation of private informational advantages), we get more trading, and with more trading we come closer to the set of Pareto-optimal allocations than with less trading. However, it seems plausible that the production of additional information and the increase in trading activity are subject to diminishing returns, in the sense that eventually obtaining additional information and engaging in additional trades reduce the distance between the actual allocation and the set of Pareto-optimal allocations by successively smaller amounts. Otherwise, we would in fact reach Pareto optimality. So, as we devote more and more resources to producing information and to trading, the amount of public-good co-generation must diminish. But this means that the negative externality associated with using increasing amounts of resources to produce private informational advantages must at some point — and probably fairly quickly — overwhelm the public good co-generated by increased trading.

So although Cochrane has a theoretical point that, without more evidence than we have now, we can’t necessarily be sure that the increase in resources devoted to finance has been associated with a net social loss, I am still inclined to doubt strongly that, at the margin, there are net positive social benefits from adding resources to finance. In this regard, the paper (cited by Greenwood and Scharfstein) “The Allocation of Talent: Implications for Growth” by Kevin Murphy, Andrei Shleifer and Robert Vishny is well worth consulting.

The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables. And cyclical fluctuations caused by nominal shocks seem sub-optimal. Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.

