Archive for the 'Friedman' Category



The State We’re In

Last week, Paul Krugman, set off by this blog post, complained about the current state of macroeconomics. Apparently, Krugman feels that if saltwater economists like himself were willing to accommodate the intertemporal-maximization paradigm developed by the freshwater economists, the freshwater economists ought to have reciprocated by acknowledging some role for countercyclical policy. Seeing little evidence of accommodation on the part of the freshwater economists, Krugman, evidently feeling betrayed, came to this rather harsh conclusion:

The state of macro is, in fact, rotten, and will remain so until the cult that has taken over half the field is somehow dislodged.

Besides engaging in a pretty personal attack on his fellow economists, Krugman did not present a very flattering picture of economics as a scientific discipline. What Krugman describes seems less like a search for truth than a cynical bargaining game, in which Krugman feels that his (saltwater) side, after making good faith offers of cooperation and accommodation that were seemingly accepted by the other (freshwater) side, was somehow misled into making concessions that undermined his side’s strategic position. What I found interesting was that Krugman seemed unaware that his account of the interaction between saltwater and freshwater economists was not much more flattering to the former than the latter.

Krugman’s diatribe gave Stephen Williamson an opportunity to scorn and scold Krugman for a crass misunderstanding of the progress of science. According to Williamson, modern macroeconomics has passed by out-of-touch old-timers like Krugman. Among modern macroeconomists, Williamson observes, the freshwater-saltwater distinction is no longer meaningful or relevant. Everyone is now, more or less, on the same page; differences are worked out collegially in seminars, workshops, conferences and in the top academic journals without the rancor and disrespect in which Krugman indulges himself. If you are lucky (and hard-working) enough to be part of it, macroeconomics is a great place to be. One can almost visualize the condescension and the pity oozing from Williamson’s pores for those not part of the charmed circle.

Commenting on this exchange, Noah Smith generally agreed with Williamson that modern macroeconomics is not a discipline divided against itself; the intertemporal maximizers are clearly dominant. But Noah allows himself to wonder whether this is really any cause for celebration – celebration, at any rate, by those not in the charmed circle.

So macro has not yet discovered what causes recessions, nor come anywhere close to reaching a consensus on how (or even if) we should fight them. . . .

Given this state of affairs, can we conclude that the state of macro is good? Is a field successful as long as its members aren’t divided into warring camps? Or should we require a science to give us actual answers? And if we conclude that a science isn’t giving us actual answers, what do we, the people outside the field, do? Do we demand that the people currently working in the field start producing results pronto, threatening to replace them with people who are currently relegated to the fringe? Do we keep supporting the field with money and acclaim, in the hope that we’re currently only in an interim stage, and that real answers will emerge soon enough? Do we simply conclude that the field isn’t as fruitful an area of inquiry as we thought, and quietly defund it?

All of this seems to me to be a side issue. Who cares if macroeconomists like each other or hate each other? Whether they get along or not, whether they treat each other nicely or not, is really of no great import. For example, it was largely at Milton Friedman’s urging that Harry Johnson was hired to be the resident Keynesian at Chicago. But almost as soon as Johnson arrived, he and Friedman were getting into rather unpleasant personal exchanges and arguments. And even though Johnson underwent a metamorphosis from mildly left-wing Keynesianism to moderately conservative monetarism during his nearly two decades at Chicago, his personal and professional relationship with Friedman got progressively worse. And all of that nastiness was happening while both Friedman and Johnson were becoming dominant figures in the economics profession. So what does the level of collegiality and absence of personal discord have to do with the state of a scientific or academic discipline? Not all that much, I would venture to say.

So when Scott Sumner says:

while Krugman might seem pessimistic about the state of macro, he’s a Pollyanna compared to me. I see the field of macro as being completely adrift

I agree totally. But I diagnose the problem with macro a bit differently from how Scott does. He is chiefly concerned with getting policy right, which is certainly important, inasmuch as policy, since early 2008, has, for the most part, been disastrously wrong. One did not need a theoretically sophisticated model to see that the FOMC, out of misplaced concern that inflation expectations were becoming unanchored, kept money way too tight in 2008 in the face of rising food and energy prices, even as the economy was rapidly contracting in the second and third quarters. And in the wake of the contraction in the second and third quarters and a frightening collapse and panic in the fourth quarter, it did not take a sophisticated model to understand that rapid monetary expansion was called for. That’s why Scott writes the following:

All we really know is what Milton Friedman knew, with his partial equilibrium approach. Monetary policy drives nominal variables.  And cyclical fluctuations caused by nominal shocks seem sub-optimal.  Beyond that it’s all conjecture.

Ahem, and Marshall and Wicksell and Cassel and Fisher and Keynes and Hawtrey and Robertson and Hayek and at least 25 others that I could easily name. But it’s interesting to note that, despite his Marshallian (anti-Walrasian) proclivities, it was Friedman himself who started modern macroeconomics down the fruitless path it has been following for the last 40 years when he introduced the concept of the natural rate of unemployment in his famous 1968 AEA Presidential lecture on the role of monetary policy. Friedman defined the natural rate of unemployment as:

the level [of unemployment] that would be ground out by the Walrasian system of general equilibrium equations, provided there is embedded in them the actual structural characteristics of the labor and commodity markets, including market imperfections, stochastic variability in demands and supplies, the costs of gathering information about job vacancies, and labor availabilities, the costs of mobility, and so on.

Aside from the peculiar verb choice in describing the solution of an unknown variable contained in a system of equations, what is noteworthy about his definition is that Friedman was explicitly adopting a conception of an intertemporal general equilibrium as the unique and stable solution of that system of equations, and, whether he intended to or not, appeared to be suggesting that such a concept was operationally useful as a policy benchmark. Thus, despite Friedman’s own deep skepticism about the usefulness and relevance of general-equilibrium analysis, Friedman, for whatever reasons, chose to present his natural-rate argument in the language (however stilted on his part) of the Walrasian general-equilibrium theory for which he had little use and even less sympathy.

Inspired by the powerful policy conclusions that followed from the natural-rate hypothesis, Friedman’s direct and indirect followers, most notably Robert Lucas, used that analysis to transform macroeconomics, reducing macroeconomics to the manipulation of a simplified intertemporal general-equilibrium system. Under the assumption that all economic agents could correctly forecast all future prices (aka rational expectations), all agents could be viewed as intertemporal optimizers, any observed unemployment reflecting the optimizing choices of individuals to consume leisure or to engage in non-market production. I find it inconceivable that Friedman could have been pleased with the direction taken by the economics profession at large, and especially by his own department when he departed Chicago in 1977. This is pure conjecture on my part, but Friedman’s departure upon reaching retirement age might have had something to do with his own lack of sympathy with the direction that his own department had, under Lucas’s leadership, already taken. The problem was not so much with policy, but with the whole conception of what constitutes macroeconomic analysis.

The paper by Carlaw and Lipsey, which I referenced in my previous post, provides just one of many possible lines of attack against what modern macroeconomics has become. Without in any way suggesting that their criticisms are not weighty and serious, I would just point out that there really is no basis at all for assuming that the economy can be appropriately modeled as being in a continuous, or nearly continuous, state of general equilibrium. In the absence of a complete set of markets, the Arrow-Debreu conditions for the existence of a full intertemporal equilibrium are not satisfied, and there is no market mechanism that leads, even in principle, to a general equilibrium. The rational-expectations assumption is simply a deus-ex-machina method by which to solve a simplified model, a method with no real-world counterpart. And the suggestion that rational expectations is no more than the extension, let alone a logical consequence, of the standard rationality assumptions of basic economic theory is transparently bogus. Nor is there any basis for assuming that, if a general equilibrium does exist, it is unique, and that if it is unique, it is necessarily stable. In particular, in an economy with an incomplete (in the Arrow-Debreu sense) set of markets, an equilibrium may very much depend on the expectations of agents, expectations potentially even being self-fulfilling. We actually know that in many markets, especially those characterized by network effects, equilibria are expectation-dependent. Self-fulfilling expectations may thus be a characteristic property of modern economies, but they do not necessarily produce equilibrium.

An especially pretentious conceit of the modern macroeconomics of the last 40 years is that the extreme assumptions on which it rests are the essential microfoundations without which macroeconomics lacks any scientific standing. That’s preposterous. Perfect foresight and rational expectations are assumptions required for finding the solution to a system of equations describing a general equilibrium. They are not essential properties of a system consistent with the basic rationality propositions of microeconomics. To insist that a macroeconomic theory must correspond to the extreme assumptions necessary to prove the existence of a unique stable general equilibrium is to guarantee in advance the sterility and uselessness of that theory, because the entire field of study called macroeconomics is the result of long historical experience strongly suggesting that persistent, even cumulative, deviations from general equilibrium have been routine features of economic life since at least the early 19th century. That modern macroeconomics can tell a story in which apparently large deviations from general equilibrium are not really what they seem is not evidence that such deviations don’t exist; it merely shows that modern macroeconomics has constructed a language that allows the observed data to be classified in terms consistent with a theoretical paradigm that does not allow for lapses from equilibrium. That modern macroeconomics has constructed such a language is no reason why anyone not already committed to its underlying assumptions should feel compelled to accept its validity.

In fact, the standard comparative-statics propositions of microeconomics are also based on the assumption of the existence of a unique stable general equilibrium. Those comparative-statics propositions about the signs of the derivatives of various endogenous variables (price, quantity demanded, quantity supplied, etc.) with respect to various parameters of a microeconomic model involve comparisons between equilibrium values of the relevant variables before and after the posited parametric changes. All such comparative-statics results involve a ceteris-paribus assumption, conditional on the existence of a unique stable general equilibrium which serves as the starting and ending point (after adjustment to the parameter change) of the exercise, thereby isolating the purely hypothetical effect of a parameter change. Thus, as much as macroeconomics may require microfoundations, microeconomics is no less in need of macrofoundations, i.e., the existence of a unique stable general equilibrium, absent which a comparative-statics exercise would be meaningless, because the ceteris-paribus assumption could not otherwise be maintained. To assert that macroeconomics is impossible without microfoundations is therefore to reason in a circle, the empirically relevant propositions of microeconomics being predicated on the existence of a unique stable general equilibrium. But it is precisely the putative failure of a unique stable intertemporal general equilibrium to be attained, or to serve as a powerful attractor to economic variables, that provides the rationale for the existence of a field called macroeconomics.
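To see what is at stake in miniature, consider the simplest textbook exercise (a single market, so strictly a partial-equilibrium rather than a general-equilibrium example): with linear demand Q^d = a - bP and linear supply Q^s = c + dP, where b, d > 0, the equilibrium price is

P^* = \frac{a - c}{b + d}, \qquad \frac{\partial P^*}{\partial a} = \frac{1}{b + d} > 0.

The sign of that derivative is an empirically relevant prediction only on the presumption that, after the demand parameter a shifts, the market actually settles at the new equilibrium; without existence, uniqueness, and stability, the comparison of equilibrium values before and after the shift tells us nothing about what will actually be observed.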

So I certainly agree with Krugman that the present state of macroeconomics is pretty dismal. However, his own admitted willingness (and that of his New Keynesian colleagues) to adopt a theoretical paradigm that assumes the perpetual, or near-perpetual, existence of a unique stable intertemporal equilibrium, or at most admits the possibility of a very small set of deviations from such an equilibrium, means that, by his own admission, Krugman and his saltwater colleagues also bear a share of the responsibility for the very state of macroeconomics that Krugman now deplores.


The Road to Serfdom: Good Hayek or Bad Hayek?

Robert Solow, in the latest New Republic, has written an interesting and in many ways insightful review of a new book by Angus Burgin about the role of F. A. Hayek, Milton Friedman, and the Mont Pelerin Society (an organization of free-market economists, plus some scholars in other disciplines, founded by Hayek and later headed by Friedman) in resuscitating free-market capitalism as a political ideal after its nineteenth-century version had been discredited by the twin catastrophes of the Great War and the Great Depression. Despite some unfortunate memory lapses and apologetics concerning his own errors and those of his good friend and colleague Paul Samuelson in their assessments of the efficiency of central planning, thereby minimizing the analytical contributions of Hayek and Friedman, Solow does a good job of highlighting the complexity and nuances of Hayek’s thought — a complexity often ignored not only by Hayek’s critics but by many of his most vocal admirers — and of contrasting that complexity and nuance with Friedman’s rhetorically and strategically compelling, but intellectually dubious, penchant for simplification.

First, let’s get the apologetics out of the way. Tyler Cowen pounced on this comment by Solow:

The MPS [Mont Pelerin Society] was no more influential inside the economics profession. There were no publications to be discussed. The American membership was apparently limited to economists of the Chicago School and its scattered university outposts, plus a few transplanted Europeans. “Some of my best friends” belonged. There was, of course, continuing research and debate among economists on the good and bad properties of competitive and noncompetitive markets, and the capacities and limitations of corrective regulation. But these would have gone on in the same way had the MPS not existed. It has to be remembered that academic economists were never optimistic about central planning. Even discussion about the economics of some conceivable socialism usually took the form of devising institutions and rules of behavior that would make a socialist economy function like a competitive market economy (perhaps more like one than any real-world market economy does). Maybe the main function of the MPS was to maintain the morale of the free-market fellowship.

And one of Tyler’s commenters unearthed this gem from Samuelson’s legendary textbook:

The Soviet economy is proof that, contrary to what many skeptics had earlier believed, a socialist command economy can function and even thrive.

Tyler also dug up this nugget from the classic paper by Samuelson and Solow on the Phillips Curve (but see this paper by James Forder for some revisionist history about the Samuelson-Solow paper):

We have not here entered upon the important question of what feasible institutional reforms might be introduced to lessen the degree of disharmony between full employment and price stability. These could of course involve such wide-ranging issues as direct price and wage controls, antiunion and antitrust legislation, and a host of other measures hopefully designed to move the American Phillips’ curves downward and to the left.

But actually, Solow was undoubtedly right that the main function of the MPS was morale-building! Plus networking. Nothing to be sneered at, and nothing to apologize for. The real heavy lifting was done in the 51 weeks of the year when the MPS was not in session.

Anyway, enough score settling, because Solow does show a qualified, but respectful, appreciation for Hayek’s virtues as an economist, scholar, and social philosopher, suggesting that there was a Good Hayek, who struggled to reformulate a version of liberalism that transcended the inadequacies (practical and theoretical) that doomed the laissez-faire liberalism of the nineteenth century, and a Bad Hayek, who engaged in a black versus white polemical struggle with “socialists of all parties.” The trope strikes me as a bit unfair, but Hayek could sometimes be injudicious in his policy pronouncements, or in his off-the-cuff observations and remarks. Despite his natural reserve, Hayek sometimes indulged in polemical exaggeration. The appetite for rhetorical overkill was especially hard for Hayek to resist when the topic of discussion was J. M. Keynes, the object of both Hayek’s admiration and his disdain. Hayek seemingly could not help but caricature Keynes in a way calculated to make him seem both ridiculous and irresistible.  Have a look.

So I would not dispute that Hayek occasionally committed rhetorical excesses when wearing his policy-advocate hat. And there were some other egregious lapses on Hayek’s part like his unqualified support for General Pinochet, reflecting perhaps a Quixotic hope that somewhere there was a benevolent despot waiting to be persuaded to implement Hayek’s ideas for a new liberal political constitution in which the principle of the separation of powers would be extended to separate the law-making powers of the legislative body from the governing powers of the representative assembly.

But Solow exaggerates by characterizing the Road to Serfdom as an example of the Bad Hayek, despite acknowledging that the Road to Serfdom was very far from advocating a return to nineteenth-century laissez-faire. What Solow finds troubling is the thesis that

the standard regulatory interventions in the economy have any inherent tendency to snowball into “serfdom.” The correlations often run the other way. Sixty-five years later, Hayek’s implicit prediction is a failure, rather like Marx’s forecast of the coming “immiserization of the working class.”

This is a common interpretation of Hayek’s thesis in the Road to Serfdom.   And it is true that Hayek did intimate that piecemeal social engineering (to borrow a phrase coined by Hayek’s friend Karl Popper) created tendencies, which, if not held in check by strict adherence to liberal principles, could lead to comprehensive central planning. But that argument is a different one from the main argument of the Road to Serfdom that comprehensive central planning could be carried out effectively only by a government exercising unlimited power over individuals. And there is no empirical evidence that refutes Hayek’s main thesis.

A few years ago, in perhaps his last published article, Paul Samuelson wrote a brief historical assessment of Hayek, including personal recollections of their mostly friendly interactions and of one not so pleasant exchange they had in Hayek’s old age, when Hayek wrote to Samuelson demanding that Samuelson retract the statement in his textbook (essentially the same as the one made by Solow) that the empirical evidence, showing little or no correlation between economic and political freedom, refutes the thesis of the Road to Serfdom that intervention leads to totalitarianism. Hayek complained that this charge misrepresented what he had argued in the Road to Serfdom. Observing that Hayek, with whom he had long been acquainted, never previously complained about the passage, Samuelson explained that he tried to placate Hayek with an empty promise to revise the passage, attributing Hayek’s belated objection to the irritability of old age and a bad heart. Whether Samuelson’s evasive response to Hayek was an appropriate one is left as an exercise for the reader.

Defenders of Hayek expressed varying degrees of outrage at the condescending tone taken by Samuelson in his assessment of Hayek. I think that they were overreacting. Samuelson, an academic enfant terrible if there ever was one, may have treated his elders and peers with condescension, but, speaking from experience, I can testify that he treated his inferiors with the utmost courtesy. Samuelson was not dismissing Hayek, he was just being who he was.

The question remains: what was Hayek trying to say in the Road to Serfdom, and in subsequent works? Well, believe it or not, he was trying to say many things, but the main thesis of the Road to Serfdom was clearly what he always said it was: comprehensive central planning is, and always will be, incompatible with individual and political liberty. Samuelson and Solow were not testing Hayek’s main thesis. None of the examples of interventionist governments that they cite, mostly European social democracies, adopted comprehensive central planning, so Hayek’s thesis was not refuted by those counterexamples. Samuelson once acknowledged “considerable validity . . . for the nonnovel part [my emphasis] of Hayek’s warning” in the Road to Serfdom: “controlled socialist societies are rarely efficient and virtually never freely democratic.” Presumably Samuelson assumed that Hayek must have been saying something more than what had previously been said by other liberal economists. After all, if Hayek were saying no more than that liberty and democracy are incompatible with comprehensive central planning, what claim to originality could Hayek have been making? None.

Yep, that’s exactly right; Hayek was not making any claim to originality in the Road to Serfdom. But sometimes old truths have to be restated in a new and more persuasive form than that in which they were originally stated. That was especially the case in the early 1940s when collectivism and planning were widely viewed as the wave of the future, and even so thoroughly conservative and so eminent an economic theorist as Joseph Schumpeter could argue without embarrassment that there was no practical or theoretical reason why socialist central planning could not be implemented. And besides, the argument that every intervention leads to another one until the market system becomes paralyzed was not invented by Hayek either, having been made by Ludwig von Mises some twenty years earlier, and quite possibly by other writers before that.  So even the argument that Samuelson tried to pin on Hayek was not really novel either.

To be sure, Hayek’s warning that central planning would inevitably lead to totalitarianism was not the only warning he made in the Road to Serfdom, but conceptually distinct arguments should not be conflated. Hayek clearly wanted to make the argument that an unprincipled policy of economic interventions was dangerous, because interventions introduce distortions that beget further interventions, producing a cumulative process of ever-more intrusive interventions, thereby smothering market forces and eventually sapping the productive capacity of the free enterprise system. That is an argument about how it is possible to stumble into central planning without really intending to do so.  Hayek clearly believed in that argument, often invoking it in tandem with, or as a supplement to, his main argument about the incompatibility of central planning with liberty and democracy. Despite the undeniable tendency for interventions to create pressure (for both political and economic reasons) to adopt additional interventions, Hayek clearly overestimated the power of that tendency, failing to understand, or at least to take sufficient account of, the countervailing political forces resisting further interventions. So although Hayek was right that no intellectual principle enables one to say “so much intervention and not a drop more,” there could still be a kind of (messy) democratic political equilibrium that effectively limits the extent to which new interventions can be piled on top of old ones. That surely was a significant gap in Hayek’s too narrow, and overly critical, view of how the democratic political process operates.

That said, I think that Solow came close to getting it right in this paragraph:

THE GOOD HAYEK was not happy with the reception of The Road to Serfdom. He had not meant to provide a manifesto for the far right. Careless readers ignored his rejection of unqualified laissez-faire, and the fact that he reserved a useful, limited economic role for government. He had not actually claimed that the descent into serfdom was inevitable. There is no reason to doubt Hayek’s sincerity in this (although the Bad Hayek occasionally made other appearances). Perhaps he would be appalled at the thought of a Congress full of Tea Party Hayekians. But it was his book, after all. The fact that natural allies such as Knight and moderates such as Viner thought that he had overreached suggests that the Bad Hayek really was there in the text.

But not exactly right. Hayek was not totally good. Who is? Hayek made mistakes. Let he who is without sin cast the first stone. Frank Knight didn’t like the Road to Serfdom. But as Solow, himself, observed earlier in his review, Knight was a curmudgeon, and had previously crossed swords with Hayek over arcane issues of capital theory.  So any inference from Knight’s reaction to the Road to Serfdom must be taken with a large grain of salt. And one might also want to consider what Schumpeter said about Hayek in his review of the Road to Serfdom, criticizing Hayek for “politeness to a fault,” because Hayek would “hardly ever attribute to opponents anything beyond intellectual error.”  Was the Bad Hayek really there in the text? Was it really “not a good book?” The verdict has to be: unproven.

PS  In his review, Solow expressed a wish for a full list of the original attendees at the founding meeting of the Mont Pelerin Society.  Hayek included the list as a footnote to his “Opening Address to a  Conference at Mont Pelerin” published in his Studies in Philosophy, Politics and Economics.  There is a slightly different list of original members in Wikipedia.

Maurice Allais, Paris

Carlo Antoni, Rome

Hans Barth, Zurich

Karl Brandt, Stanford, Calif.

John Davenport, New York

Stanley R. Dennison, Cambridge

Walter Eucken, Freiburg i. B.

Erich Eyck, Oxford

Milton Friedman, Chicago

H. D. Gideonse, Brooklyn

F. D. Graham, Princeton

F. A. Harper, Irvington-on-Hudson, NY

Henry Hazlitt, New York

T. J. B. Hoff, Oslo

Albert Hunold, Zurich

Bertrand de Jouvenel, Chexbres, Vaud

Carl Iversen, Copenhagen

John Jewkes, Manchester

F. H. Knight, Chicago

Fritz Machlup, Buffalo

L. B. Miller, Detroit

Ludwig von Mises, New York

Felix Morley, Washington, DC

Michael Polanyi, Manchester

Karl R. Popper, London

William E. Rappard, Geneva

L. E. Read, Irvington-on-Hudson, NY

Lionel Robbins, London

Wilhelm Roepke, Geneva

George J. Stigler, Providence, RI

Herbert Tingsten, Stockholm

Francois Trevoux, Lyon

V. O. Watts, Irvington-on-Hudson, NY

C. V. Wedgwood, London

In addition, Hayek included the names of others who were invited but unable to attend and who joined the MPS as original members:

Costantino Bresciani-Turroni, Rome

William H. Chamberlin, New York

Rene Courtin, Paris

Max Eastman, New York

Luigi Einaudi, Rome

Howard Ellis, Berkeley, Calif.

A. G. B. Fisher, London

Eli Heckscher, Stockholm

Hans Kohn, Northampton, Mass

Walter Lippmann, New York

Friedrich Lutz, Princeton

Salvador de Madariaga, Oxford

Charles Morgan, London

W. A. Orton, Northampton, Mass.

Arnold Plant, London

Charles Rist, Paris

Michael Roberts, London

Jacques Rueff, Paris

Alexander Rustow, Istanbul

F. Schnabel, Heidelberg

W. J. H. Sprott, Nottingham

Roger Truptil, Paris

D. Villey, Poitiers

E. L. Woodward, Oxford

H. M. Wriston, Providence, RI

G. M. Young, London

The Wisdom of David Laidler

Michael Woodford’s paper for the Jackson Hole Symposium on Monetary Policy wasn’t the only important paper on monetary economics to be posted on the internet last month. David Laidler, perhaps the world’s greatest expert on the history of monetary theory and macroeconomics since the time of Adam Smith, has written an important paper with the somewhat cryptic title, “Two Crises, Two Ideas, and One Question.” Most people will figure out pretty quickly which two crises Laidler is referring to, but you will have to read the paper to figure out which two ideas and which question Laidler has in mind. Actually, you won’t have to read the paper if you keep reading this post, because I am about to tell you. The two ideas are what Laidler calls the “Fisher relation” between real and nominal interest rates, and the idea of a lender of last resort. The question is whether a market economy is inherently stable or unstable.

How does one weave these threads into a coherent narrative? Well, to really understand that you really will just have to read Laidler’s paper, but this snippet from the introduction will give you some sense of what he is up to.

These two particular ideas are especially interesting, because in the 1960s and ’70s, between our two crises, they feature prominently in the Monetarist reassessment of the Great Depression, which helped to establish the dominance in macroeconomic thought of the view that, far from being a manifestation of deep flaws in the very structure of the market economy, as it had at first been taken to be, this crisis was the consequence of serious policy errors visited upon an otherwise robustly self-stabilizing system. The crisis that began in 2007 has re-opened this question.

The Monetarist counterargument to the Keynesian view that the market economy is inherently subject to wide fluctuations and has no strong tendency toward full employment was that the Great Depression was caused primarily by a policy shock, the failure of the Fed to fulfill its duty to act as a lender of last resort during the US financial crisis of 1930-31. Originally, the Fisher relation did not figure prominently in this argument, but it eventually came to dominate Monetarism and the post-Monetarist/New Keynesian orthodoxy in which the job of monetary policy was viewed as setting a nominal interest rate (via a Taylor rule) that would be consistent with expectations of an almost negligible rate of inflation of about 2%.

This comfortable state of affairs – Monetarism without money is how Laidler describes it — in which an inherently stable economy would glide along its long-run growth path with low inflation, only rarely interrupted by short, shallow recessions, was unpleasantly overturned by the housing bubble and the subsequent financial crisis, producing the steepest downturn since 1937-38. That downturn has posed a challenge to Monetarist orthodoxy inasmuch as the sudden collapse, more or less out of nowhere in 2008, seemed to suggest that the market economy is indeed subject to a profound instability, as the Keynesians of old used to maintain. In the Great Depression, Monetarists could argue, it was all, or almost all, the fault of the Federal Reserve for not taking prompt action to save failing banks and for not expanding the money supply sufficiently to avoid deflation. But in 2008, the Fed provided massive support to banks, and even to non-banks like AIG, to prevent a financial meltdown, and then embarked on an aggressive program of open-market purchases that prevented an incipient deflation from taking hold.

As a result, self-identifying Monetarists have split into two camps. One camp I will call the Market Monetarists, with whom I identify even though I am much less of a fan of Milton Friedman, the father of Monetarism, than most Market Monetarists are. Borrowing terminology adopted in the last twenty years or so by political conservatives in the US to distinguish old-fashioned conservatives from neoconservatives, I will call the old-style Monetarists paleo-Monetarists. The paleo-Monetarists are those like Allan Meltzer, the late Anna Schwartz, Thomas Humphrey, and John Taylor (a latecomer to Monetarism who has learned quite well how to talk the Monetarist talk). For the paleo-Monetarists, in the absence of deflation, the extension of Fed support to non-banking institutions and the massive expansion of the Fed’s balance sheet cannot be justified. But this poses a dilemma for them. If there is no deflation, why is an inherently stable economy not recovering? It seems to me that it is this conundrum which has led paleo-Monetarists to take the dubious position that the extreme weakness of the economic recovery is a consequence of fiscal and monetary-policy uncertainty, the passage of interventionist legislation like the Affordable Care Act and the Dodd-Frank Bill, and the imposition of various other forms of interventionist regulation by the Obama administration.

Market Monetarists, on the other hand, have all along looked to monetary policy as the ultimate cause of both the downturn in 2008 and the lack of a recovery since. So, on this interpretation, what separates paleo-Monetarists from Market Monetarists is whether outright deflation is needed to precipitate a serious malfunction in a market economy, or whether something less drastic can suffice. Paleo-Monetarists agree that Japan in the 1990s, and even early in the 2000s, was suffering from a deflationary monetary policy, a policy requiring extraordinary measures to counteract. But the annual rate of deflation in Japan was never more than about 1% a year, a far cry from the 10% annual rate of deflation in the US between late 1929 and early 1933. Paleo-Monetarists must therefore explain why there is a radical difference between 1% inflation and 1% deflation. Market Monetarists, for their part, have to explain why a positive rate of inflation, albeit less than the generally preferred 2% rate, has not been adequate to get a real recovery started more than four years after the original downturn. Or, if you prefer, the question could be restated as why a 3 to 4% rate of increase in NGDP is not adequate to sustain a real recovery, especially given the assumption, shared by paleo-Monetarists and Market Monetarists, that a market economy is generally stable and tends to move toward a full-employment equilibrium.

Here is where I think Laidler’s focus on the Fisher relation is critically important, though Laidler doesn’t explicitly address the argument that I am about to make. This argument, which I originally made in my paper “The Fisher Effect under Deflationary Expectations,” and have repeated in several subsequent blog posts (e.g., here) is that there is no specific rate of deflation that necessarily results in a contracting economy. There is plenty of historical experience, as George Selgin and others have demonstrated, that deflation is consistent with strong economic growth and full employment. In a certain sense, deflation can be a healthy manifestation of growth, allowing that growth, i.e., increasing productivity of some or all factors of production, to be translated into falling output prices. However, deflation is only healthy in an economy that is growing because of productivity gains. If productivity is flagging, there is no space for healthy (productivity-driven) deflation.

The Fisher relation between the nominal interest rate, the real interest rate, and the expected rate of deflation basically tells us how much room there is for healthy deflation. If we take the real interest rate as given, that rate constitutes the upper bound on healthy deflation. Why? Because deflation greater than the real rate of interest implies a nominal rate of interest less than zero, and the nominal rate of interest has a lower bound at zero. So what happens if the expected rate of deflation is greater than the real rate of interest? Fisher doesn’t tell us, because in equilibrium it isn’t possible for the rate of deflation to exceed the real rate of interest. But that doesn’t mean that there can’t be a disequilibrium in which the expected rate of deflation is greater than the real rate of interest. We (or I) can’t exactly model that disequilibrium process, but whatever it is, it’s ugly. Really ugly. Most investment stops, the rate of return on cash (i.e., the expected rate of deflation) being greater than the rate of return on real capital. Because the expected yield on holding cash exceeds the expected yield on holding real capital, holders of real capital try to sell their assets for cash. The only problem is that no one wants to buy real capital with cash. The result is a collapse of asset values. At some point, asset values having fallen, and the stock of real capital having worn out without being replaced, a new equilibrium may be reached at which the real rate will again exceed the expected rate of deflation. But that is an optimistic scenario, because the adjustment process of falling asset values and a declining stock of real capital may itself feed pessimistic expectations about the future value of real capital, so that there literally might not be a floor to the downward spiral, at least not unless some exogenous force can reverse the downward spiral, e.g., by changing price-level expectations. Given the riskiness of allowing the rate of deflation to come too close to the real interest rate, it seems prudent to keep deflation below the real rate of interest by a couple of points, so that the nominal interest rate doesn’t fall below 2%.
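In symbols, with i the nominal rate, r the real rate, and d^e the expected rate of deflation, the bound just described is simply

i = r + \pi^e = r - d^e \ge 0 \qquad\Longrightarrow\qquad d^e \le r,

and keeping expected deflation a couple of percentage points below the real rate, d^e \le r - 2, is what keeps the nominal rate from falling below 2%.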

But notice that this cumulative downward process doesn’t really require actual deflation. The same process could take place even if the expected rate of inflation were positive in an economy with a negative real interest rate. Real interest rates have been steadily falling for over a year, and are now negative even at maturities up to 10 years. What that suggests is that the ceiling on tolerable deflation is negative. Negative deflation is just inflation, which means that there is a lower bound to tolerable inflation. When the economy is operating in an environment of very low or negative real rates of interest, the economy can’t recover unless the rate of inflation is above the lower bound of tolerable inflation. We are not in the critical situation that we were in four years ago, when the expected yield on cash was greater than the expected yield on real capital, but it is a close call. Why are businesses, despite high earnings, holding so much cash rather than using it to purchase real capital assets? My interpretation is that with real interest rates negative, businesses do not see a sufficient number of profitable investment projects to invest in. Raising the expected price level would increase the number of investment projects that appear profitable, thereby inducing additional investment spending, and finally inducing businesses to draw down, rather than add to, their cash holdings.

So it seems to me that paleo-Monetarists have been misled by a false criterion, one not implied by the Fisher relation that has become central to Monetarist and Post-Monetarist policy orthodoxy. The mere fact that we have not had deflation since 2009 does not mean that monetary policy has not been contractionary, or, at any rate, insufficiently expansionary. So someone committed to the proposition that a market economy is inherently stable is not obliged, as the paleo-Monetarists seem to think, to take the position that monetary policy could not have been responsible for the failure of the feeble recovery since 2009 to bring us back to full employment. Whether it even makes sense to think about an economy as being inherently stable or unstable is a whole other question that I will leave for another day.

HT:  Lars Christensen

Stephen Moore Turns F. A. Hayek into Chopped Liver

Tuesday, July 31, 2012, is the 100th anniversary of Milton Friedman’s birth. There is plenty to celebrate. Milton Friedman, by almost anyone’s reckoning, was one of the great figures of twentieth-century economics. And I say this as someone who is very far from being an uncritical admirer of Friedman. But he was brilliant, industrious, had a superb understanding of microeconomic theory, and could apply microeconomic theory very creatively to derive interesting and testable implications that informed his historical and empirical studies on a broad range of topics. He put his exceptional skills as an economist, polemicist, and debater to effective use as an advocate for his conception of the classical liberal ideals of limited government, free trade, and personal liberty, achieving astonishing success as a popularizer of libertarian doctrines, becoming a familiar and sought-after television figure, a best-selling author, and an adviser first to Barry Goldwater, then to Richard Nixon (until Nixon treacherously imposed wage-and-price controls in 1971), and, most famously, to Ronald Reagan. The arc of his influence was closely correlated with the success of those three politicians.

So it is altogether fitting and proper that the Wall Street Journal would commemorate this auspicious anniversary with an appropriate tribute to Friedman’s career and his influence. But amazingly, the Journal was unable to find anyone more qualified to write about Friedman than one of its own editorial writers, Stephen Moore, whose dubious contributions to the spread of economic understanding and enlightenment I have had occasion to write about in the past. Friedman has many students and colleagues who are still alive and active. One would think that more than a handful of them could have been asked to write about Friedman on this occasion, but apparently the powers that be at the Journal felt that none of them could do the job as well as Mr. Moore.

How did Mr. Moore do? Well, he recites many of Friedman’s accomplishments as a scholar and as an advocate of less government intervention in the economy. But in his enthusiasm, Mr. Moore was unable to control his penchant for making stuff up without regard to the facts. Writing about the award of the Nobel Prize to Friedman in 1976, Moore makes the following statement:

Friedman was awarded the Nobel Prize in economics for 1976—at a time when almost all the previous prizes had gone to socialists. This marked the first sign of the intellectual comeback of free-market economics since the 1930s, when John Maynard Keynes hijacked the profession.

Here is a list of Nobel Prize winners before Friedman:

1969: Ragnar Frisch, Jan Tinbergen

1970: Paul Samuelson

1971: Simon Kuznets

1972: Kenneth Arrow, J. R. Hicks

1973: Wassily Leontief

1974: F. A. Hayek, Gunnar Myrdal

1975: Leonid Kantorovich, T. C. Koopmans

So there were eleven recipients of the Nobel Prize before Friedman. Of these Gunnar Myrdal was a prominent Social Democratic politician in Sweden as well as an academic economist, so perhaps he qualifies as a socialist.

Wassily Leontief was a Russian expatriate; he developed mathematical and empirical techniques for describing the production process of an economy in terms of input-output tables. This was an empirical technique that had no particular ideological significance, but Leontief did seem to think that the technique could be adapted to provide a basis for economic planning. So perhaps he might also be classified as a socialist.

Paul Samuelson was a prominent adviser to Democratic politicians, an advocate of Keynesian countercyclical policies, but never supported socialism. Kenneth Arrow has been less involved in politics than Samuelson, but has also been a supporter of Democrats.  Apparently that is enough to make someone a socialist in Mr. Moore’s estimation.

Ragnar Frisch and Jan Tinbergen were pioneers in the development of econometrics and other mathematical tools used by economists. They also tried to make those tools serviceable to policy makers. Frisch and Tinbergen were classic technocrats who, as far as I can tell, carried very little ideological baggage. But perhaps Mr. Moore has subjected the baggage to his socialism detector and heard the alarm bells go off.

J. R. Hicks was a prominent English economic theorist who was not identified strongly with any political party. Although he helped create the standard Keynesian IS-LM model, he was theoretically eclectic and as far as I know never wrote a word advocating socialism.

Simon Kuznets was an archetypical empirical technocratic economist who was one of the fathers of national income accounting. He actually was the coauthor of Milton Friedman’s first book, Income from Independent Professional Practice, hardly evidence of a socialistic mindset.

T. C. Koopmans, an early pioneer of econometric techniques, was awarded the Nobel Prize largely for his work in developing the mathematical techniques of linear programming, which is a method of finding solutions to a class of problems in which resources must be allocated efficiently to achieve a desired result, such as maximizing the nutritional content obtainable from a given expenditure on food or minimizing the cost of obtaining a given level of nutrients. Leonid Kantorovich, a Soviet mathematician, developed the mathematical techniques of linear programming even before Koopmans. His results were actually subversive of Marxian theory, but their deeper implications were not understood in the Soviet Union. Again, the Nobel Prize was awarded for technical contributions, not for any particular economic policy or economic ideology.
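Purely to illustrate the kind of problem linear programming solves, here is a minimal sketch of a diet-style program in Python using scipy.optimize.linprog; the foods, prices, and nutrient figures are invented numbers for illustration, not anything from Koopmans or Kantorovich.

```python
# Minimal diet-style linear program: choose quantities of foods to minimize
# total cost subject to meeting minimum nutrient requirements.
# All numbers below are made up purely for illustration.
import numpy as np
from scipy.optimize import linprog

cost = np.array([2.0, 1.5, 3.0])            # price per unit: bread, milk, beans

nutrients = np.array([                       # nutrient content per unit of each food
    [250.0, 150.0, 200.0],                   # calories
    [  8.0,   8.0,  15.0],                   # grams of protein
])
requirements = np.array([2000.0, 55.0])      # daily minimums: calories, protein

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the
# "at least the requirement" constraints are entered with a sign flip.
result = linprog(c=cost,
                 A_ub=-nutrients,
                 b_ub=-requirements,
                 bounds=[(0, None)] * len(cost),
                 method="highs")

print("quantities:", result.x)               # cost-minimizing amounts of each food
print("total cost:", result.fun)
```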

But the most amazing thing about Mr. Moore’s statement about the bias of Nobel Prize Committee in favor of socialists is that he effectively re-writes history as if F. A. Hayek had not already won the Nobel Prize two years before Friedman. What is one supposed to make of Moore’s statement that the award of the Nobel Prize to Friedman in 1976 “was the FIRST sign of the intellectual comeback of free-market economics since the 1930s?” What was Hayek, Mr. Moore? Chopped Liver?

Williamson v. Sumner

Stephen Williamson weighed in on nominal GDP targeting in a blog post on Monday. Scott Sumner and Marcus Nunes have already responded, and Williamson has already responded to Scott, so I will just offer a few semi-random comments about Williamson’s post, the responses and counter-response.

Let’s start with Williamson’s first post. He interprets Fed policy, since the Volcker era, as an implementation of the Taylor rule:

The Taylor rule takes as given the operating procedure of the Fed, under which the FOMC determines a target for the overnight federal funds rate, and the job of the New York Fed people who manage the System Open Market Account (SOMA) is to hit that target. The Taylor rule, if the FOMC follows it, simply dictates how the fed funds rate target should be set every six weeks, given new information.

So, from the mid-1980s until 2008, everything seemed to be going swimmingly. Just as the inflation targeters envisioned, inflation was not only low, but we had a Great Moderation in the United States. Ben Bernanke, who had long been a supporter of inflation targeting, became Fed Chair in 2006, and I think it was widely anticipated that he would push for inflation targeting with the US Congress.
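For reference, the rule Williamson has in mind is some variant of Taylor’s original (1993) formula, in which the equilibrium real rate r^* and the inflation target \pi^* are both set at 2% and the inflation and output gaps receive equal weights of 0.5:

i_t = \pi_t + r^* + 0.5\,(\pi_t - \pi^*) + 0.5\,(y_t - \bar{y}_t).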

Thus, under the Taylor rule, as implemented, ever more systematically, by the FOMC, the federal funds rate (FFR) was set with a view to achieving an implicit inflation target, presumably in the neighborhood of 2%. However, as a result of the Little Depression beginning in 2008, Scott Sumner et al. have proposed targeting NGDP instead of inflation. Williamson has problems with NGDP targeting that I will come back to, but he makes a positive case for inflation targeting in terms of Friedman’s optimal-supply-of-money rule, under which the nominal rate of interest is held at zero via a rate of inflation that is the negative of the real interest rate (i.e., deflation whenever the real rate of interest is positive). Back to Williamson:

The Friedman rule . . . dictates that monetary policy be conducted so that the nominal interest rate is always zero. Of course we know that no central bank does that, and we have good reasons to think that there are other frictions in the economy which imply that we should depart from the Friedman rule. However, the lesson from the Friedman rule argument is that the nominal interest rate reflects a distortion and that, once we take account of other frictions, we should arrive at an optimal policy rule that will imply that the nominal interest rate should be smooth. One of the frictions some macroeconomists like to think about is price stickiness. In New Keynesian models, price stickiness leads to relative price distortions that monetary policy can correct.

If monetary policy is about managing price distortions, what does that have to do with targeting some nominal quantity? Any model I know about, if subjected to a NGDP targeting rule, would yield a suboptimal allocation of resources.

I really don’t understand this. Williamson is apparently defending current Fed practice (i.e., targeting a rate of inflation) by presenting it as a practical implementation of Friedman’s proposal to set the nominal interest rate at zero. But setting the nominal interest rate at zero is analogous to inflation targeting only if the real rate of interest doesn’t change. Friedman’s rule implies that the rate of deflation changes by as much as the real rate of interest changes. Or does Williamson believe that the real rate of interest never changes? Those of us now calling for monetary stimulus believe that we are stuck in a trap of widespread entrepreneurial pessimism, reflected in very low nominal and negative real interest rates. To get out of such a self-reinforcing network of pessimistic expectations, the economy needs a jolt of inflationary shock therapy like the one administered by FDR in 1933 when he devalued the dollar by 40%.
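To restate that objection in symbols: the Friedman rule pins the nominal rate at zero, so the Fisher relation implies

i_t = r_t + \pi^e_t = 0 \qquad\Longrightarrow\qquad \pi^e_t = -r_t,

i.e., expected deflation must track every movement in the real rate; a fixed inflation (or deflation) target mimics the rule only if the real rate itself never changes.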

As I said a moment ago, even apart from Friedman’s optimality argument for a zero nominal interest rate, Williamson thinks that NGDP targeting is a bad idea, but the reasons that he offers for thinking it a bad idea strike me as a bit odd. Consider this one. The Fed would never adopt NGDP targeting, because it would be inconsistent with the Fed’s own past practice. I kid you not; that’s just what he said:

It will be a cold day in hell when the Fed adopts NGDP targeting. Just as the Fed likes the Taylor rule, as it confirms the Fed’s belief in the wisdom of its own actions, the Fed will not buy into a policy rule that makes its previous actions look stupid.

So is Williamson saying that the Fed will not adopt any policy that is inconsistent with its actions in, say, the Great Depression? That will surely do a lot to enhance the Fed’s institutional credibility, about which Williamson is so solicitous.

Then Williamson makes another curious argument based on a comparison of Hodrick-Prescott-filtered NGDP and RGDP data from 1947 to 2011. Williamson plotted the two series on the accompanying graph. Observing that while NGDP was less variable than RGDP in the 1970s, the two series tracked each other closely in the Great-Moderation period (1983-2007), Williamson suggests that, inasmuch as the 1970s are now considered to have been a period of bad monetary policy, low variability of NGDP does not seem to matter that much.

Marcus Nunes, I think properly, concludes that Williamson’s graph is wrong, because Williamson ignores the fact that there was a rising trend of NGDP growth during the 1970s, while during the Great Moderation, NGDP growth was stationary. Marcus corrects Williamson’s error with two graphs of his own (which I attach), showing that the shift to NGDP targeting was associated with diminished volatility in RGDP during the Great Moderation.

Furthermore, Scott Sumner questions whether the application of the Hodrick-Prescott filter to the entire 1947-2011 period was appropriate, given the collapse of NGDP after 2008, thereby distorting estimates of the trend.

There may be further issues associated with the appropriateness of the Hodrick-Prescott filter, issues which I am certainly not competent to assess, but I will just quote from Andrew Harvey’s article on filters for Business Cycles and Depressions: An Encyclopedia, to which I referred recently in my post about Anna Schwartz. Here is what Harvey said about the HP filter.

Thus for quarterly data, applying the [Hodrick-Prescott] filter to a random walk is likely to create a spurious cycle with a period of about seven or eight years which could easily be identified as a business cycle . . . Of course, the application of the Hodrick-Prescott filter yields quite sensible results in some cases, but everything depends on the properties of the series in question.
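As a minimal illustration of Harvey’s caveat (my own sketch, not anything Harvey or Williamson actually ran), one can apply the HP filter, via the hpfilter routine in statsmodels, to an artificial pure random walk and inspect the “cycle” it extracts; the series length, the random seed, and the conventional quarterly smoothing parameter of 1600 are all just illustrative choices.

```python
# Illustrative only: HP-filter a pure random walk (which has no true cycle
# by construction) and look at the "cyclical" component the filter extracts.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(42)
random_walk = np.cumsum(rng.normal(size=256))   # 64 "years" of artificial quarterly data

# lamb=1600 is the conventional smoothing parameter for quarterly series
cycle, trend = hpfilter(random_walk, lamb=1600)

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(random_walk, label="random walk")
ax1.plot(trend, label="HP trend")
ax1.legend()
ax2.plot(cycle, label="HP 'cycle' extracted from a series with no cycle")
ax2.legend()
plt.show()
```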

Williamson then wonders, if stabilizing NGDP is such a good idea, why not stabilize raw NGDP rather than seasonally adjusted NGDP, as just about all advocates of NGDP targeting implicitly or explicitly recommend? In a comment on Williamson’s blog, Nick Rowe raised the following point:

The seasonality question is interesting. We could push it further. Should we want the same level of NGDP on weekends as during the week? What about nighttime?

But then I think the same question could be asked for inflation targeting, or price level path targeting, because there is a seasonal pattern to CPI too. And (my guess is) the CPI is higher on weekends. Not sure if the CPI is lower or higher at night.

In a subsequent comment, Nick made the following, quite telling, observation:

Actually, thinking about seasonality as a regular repeated shock reminds me of something Lucas once said about rational expectations equilibria. I don’t remember his precise words, but it was something to the effect that we should be very wary of assuming the economy will hit the RE equilibrium after a shock that is genuinely new, but if the shock is regular and repeated agents will have figured out the RE equilibrium. Seasonality, and day of the week effects, will presumably be like that.

So, I think the point about eliminating seasonal fluctuations has been pretty much laid to rest. But perhaps Williamson will try to resurrect it (see below).

In his reply to Scott, Williamson reiterates his long-held position that the Fed is powerless to affect the economy except by altering the interest rate, now 0.25%, paid to banks on their reserves held at the Fed. Since the Fed could do no more than cut the rate to zero, and a negative interest rate would be deemed an illegal tax, Williamson sees no scope for monetary policy to be effective. Lars Christensen, however, points out that the Fed could aim at a lower foreign-exchange value of the dollar and conduct its monetary policy via unsterilized sales of dollars in the foreign-exchange markets in support of an explicit price-level or NGDP target.

Williamson defends his comments about stabilizing seasonal fluctuations as follows:

My point in looking at seasonally adjusted nominal GDP was to point out that fluctuations in nominal GDP can’t be intrinsically bad. I think we all recognize that seasonal variation in NGDP is something that policy need not be doing anything to eliminate. So how do we know that we want to eliminate this variation at business cycle frequencies? In contrast to what Sumner states, it is widely recognized that some of the business cycle variability in RGDP we observe is in fact not suboptimal. Most of what we spend our time discussing (or fighting about) is the nature and quantitative significance of the suboptimalities. Sumner seems to think (like old-fashioned quantity theorists), that there is a sufficient statistic for suboptimality – in this case NGDP. I don’t see it.

So, apparently, Williamson does accept the comment from Nick Rowe (quoted above) on his first post. He now suggests that Scott Sumner and other NGDP targeters are too quick to assume that observed business-cycle fluctuations are non-optimal, because some business-cycle fluctuations may actually be no less optimal than the sort of responses to seasonal fluctuations that are generally conceded to be unproblematic. The difference, of course, is that seasonal fluctuations are generally predictable and predicted, which is not the case for business-cycle fluctuations. Why, then, is there any theoretical presumption that unpredictable business-cycle fluctuations that falsify widely held expectations result in optimal responses? The rationale for counter-cyclical policy is to minimize incorrect expectations that lead to inefficient search (unemployment) and speculative withholding of resources from their most valuable uses. The first-best policy for doing this, as I explained in the last chapter of my book Free Banking and Monetary Reform, would be to stabilize a comprehensive index of wage rates. Practical considerations may dictate choosing a less esoteric policy target than a wage index, say, stabilizing the growth path of NGDP.

I think I’ve said more than enough for one post, so I’ll pass on Williamson’s further comments on the Friedman rule and on why he chooses to call himself a Monetarist.

PS Yesterday was the first anniversary of this blog. Happy birthday and many happy returns to all my readers.

Anna Schwartz, RIP

Last Thursday night, I was in Niagara Falls en route to the History of Economics Society Conference at Brock University in St. Catharines, Ontario, to present a paper on the Sraffa-Hayek debate (co-authored with my FTC colleague Paul Zimmerman) when I saw the news that Anna Schwartz had passed away a few hours earlier. The news brought back memories of how I first got to know Anna in 1985, thanks to our mutual friend Harvey Segal, formerly chief economist at Citibank, who had recently joined the Manhattan Institute, where I was a Senior Fellow and had just started writing my book Free Banking and Monetary Reform. When Harvey suggested that it would be a good idea for me to meet and get to know Anna, I was not so sure that it was such a good idea, because I knew that I was going to be writing critically about Friedman and Monetarism, and about the explanation of the Great Depression given by Friedman and Schwartz in their Monetary History of the US. Nevertheless, Harvey was insistent, dismissing my misgivings and assuring me that Anna was not only a great scholar, but a wonderful and kind-hearted person, and that she would not take offense at a sincerely held difference of opinion. Taking Harvey’s word, I went to visit Anna at her office at the NBER on the NYU campus at Washington Square, but not without some residual trepidation at what was in store for me. But when I arrived at her office, I was immediately put at ease by her genuine warmth and interest in my work, based on what Harvey had told her about me and what I was doing. About a year later, when my first draft was complete and submitted to Cambridge University Press, I was truly gratified when I received the report that Anna had written for the editors at Cambridge about my manuscript, praising the book as an important contribution to monetary economics even while registering her own disagreement with certain positions I had taken that were at odds with what she and Friedman had written.

Over the next couple of years Anna and I became even closer when, after finishing Free Banking and Monetary Reform, I accepted an offer to edit a proposed encyclopedia of business cycles and depressions, an assignment that I later bitterly regretted accepting when the enormity of the project that I had undertaken became all too clear to me. After taking the assignment, I think that Anna was probably the first person I contacted; she agreed to serve as a consulting editor and immediately put me in touch with two of her colleagues at the National Bureau, Victor Zarnowitz and Geoffrey Moore. During my decade-long struggle to plan, execute, and see this project through to completion, I depended in no small part on the generous and unstinting assistance of my three original consulting editors, Anna, Victor Zarnowitz, and Geof Moore. They were soon joined by other distinguished economists (Tom Cooley, Barry Eichengreen, Harald Hagemann, Phil Klein, Roger Kormendi, David Laidler, Phil Mirowski, Ed Nell, Lionello Punzo and Alessandro Vercelli) whose interest in and enthusiasm for the project kept me going when I wanted nothing more than to rid myself of this troublesome undertaking. But without the help I received at the very start from Anna, and from Victor Zarnowitz and Geof Moore, the project would never have gotten off the ground. Sadly, with Anna gone, none of my original three consulting editors is still with us. Nor is another dear friend, Harvey Segal. I shall miss, but will not forget, them.

In a small tribute to Anna’s memory, I reproduce below (in part) the entry, written by Michael Bordo, on Anna Jacobson Schwartz (1915 – 2012), from Business Cycles and Depressions: An Encyclopedia.

Anna Schwartz has contributed significantly to our understanding of the role of money in propagating and exacerbating business-cycle disturbances. Schwartz’s collaboration with Milton Friedman in the highly acclaimed money and business-cycle project of the National Bureau of Economic Research (NBER) helped establish the modern quantity theory of money (or Monetarism) as a dominant explanation for macroeconomic instability. Her contributions lie in the four related areas of monetary statistics, monetary history, monetary theory and policy, and international arrangements.

Born in New York City, she received a B. A. from Barnard College in 1934, an M.A. from Columbia in 1936, and a Ph.D. from Columbia in 1964. Most of Schwartz’s career has been spent in active research. After a year at the United States Department of Agriculture in 1936, she spent five years at Columbia University’s Social Science Research Council. She joined the NBER in 1941, where she has remained ever since. In 1981-82, Schwartz served as staff director of the United States Gold Commission and was responsible for writing the Gold Commission Report.

Schwartz’s early research was focused mainly on economic history and statistics. A collaboration with A. D. Gayer and W. W. Rostow from 1936 to 1941 produced a massive and important study of cycles and trends in the British economy during the Industrial Revolution, The Growth and Fluctuation of the British Economy, 1790-1850. The authors adopted NBER techniques to isolate cycles and trends in key time series of economic performance. Historical analysis was then interwoven with descriptive statistics to present an anatomy of the development of the British economy in this important period.

Schwartz collaborated with Milton Friedman on the NBER’s money and business-cycle project over a period of thirty years. This research resulted in three volumes: A Monetary History of the United States, 1867-1960, Monetary Statistics of the United States, and Monetary Trends in the United States and the United Kingdom, 1875-1975. . . .

The overwhelming historical evidence gathered by Schwartz linking economic instability to erratic monetary behavior, in turn a product of discretionary monetary policy, has convinced her of the desirability of stable money brought about through a constant money-growth rule. The evidence of particular interest to the student of cyclical phenomena is the banking panics in the United States between 1873 and 1933, especially from 1930 to 1933. Banking panics were a key ingredient in virtually every severe cyclical downturn and were critical in converting a serious, but not unusual, downturn beginning in 1929 into the “Great Contraction.” According to Schwartz’s research, each of the panics could have been allayed by timely and appropriate lender-of-last-resort intervention by the monetary authorities. Moreover, the likelihood of panics ever occurring would be remote in a stable monetary environment.

How Did We Get into a 2-Percent Inflation Trap?

It is now over 50 years since Paul Samuelson and Robert Solow published their famous paper “Analytical Aspects of Anti-Inflation Policy,” now remembered mainly for offering the Phillips Curve as a menu of possible combinations of unemployment and inflation, a trade-off from which policy-makers could choose. By accepting a bit more inflation, policy-makers could bring down the rate of unemployment, and vice versa. This view of the world enjoyed a brief heyday in the early 1960s, but, thanks to a succession of bad, and sometimes disastrous, policy choices, and more than a little bad luck, we seemed, by the late 1970s and early 1980s, to be stuck with the worst of both worlds: high inflation and high unemployment. In the meantime, Milton Friedman (and, less famously, Edmund Phelps) countered Samuelson and Solow with a reinterpretation of the Phillips Curve in which the trade-off between inflation and unemployment is only temporary, inflation bringing down unemployment only when it is unexpected. But once people begin to expect inflation, it is incorporated into wage demands, so that the stimulative effect of inflation wears off, unemployment reverting to its “natural” level, determined by “real” forces. Except in the short run, monetary policy is useless as a means of reducing unemployment. That theoretical argument, combined with the unpleasant experience of the 1970s and early 1980s and with the fairly rapid fall in unemployment after inflation was brought down from about 12 percent to 4 percent in the 1982 recession, created an enduring consensus that inflation is a bad thing and should not be resorted to as a method of reducing unemployment.
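
In textbook notation (my gloss, not a formulation taken from Samuelson and Solow or from Friedman’s own papers), the expectations-augmented Phillips curve behind the Friedman/Phelps point can be written as

$$u_t = u^* - \alpha\,(\pi_t - \pi_t^e), \qquad \alpha > 0,$$

where $u^*$ is the natural rate of unemployment and $\pi_t^e$ is expected inflation. Unemployment falls below $u^*$ only while actual inflation runs ahead of expected inflation; once expectations catch up, so that $\pi_t = \pi_t^e$, unemployment returns to $u^*$ at whatever rate of inflation has come to be expected.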

I generally accept the Friedman/Phelps argument (which was actually anticipated by others, including Mises, Hayek, and Hawtrey, before Friedman and Phelps made it), though it is subject to many qualifying conditions. For example, workers acquire skills by working, so a temporary increase in employment can have residual positive effects by increasing the skills and employability of the work force; part of the increase in employment resulting from inflation may therefore turn out to be permanent even after the inflation is fully anticipated. But even if one accepts Friedman’s natural-rate hypothesis in its most categorical form, the argument does not imply that inflation is never an appropriate counter-cyclical tool. Indeed, the logic of Friedman’s argument, properly understood and applied, implies that inflation ought to be increased when the actual rate of unemployment exceeds the natural rate of unemployment.

But first let’s understand why Friedman’s argument implies that it is a bad bargain to reduce the unemployment rate temporarily by raising the rate of inflation. After all, one could ask, why not pocket a temporary increase in output and employment and accept a permanently higher rate of inflation? The cost of the higher rate of inflation is not zero, but it is not necessarily greater than the increased output and employment achieved in the transition. To this there could be two responses. One is that inflation produces distortions of its own that are not sustainable, so that once the inflation is expected, output and employment will not remain at the old natural level, but will, at least temporarily, fall below the original level, so the increase in output and employment will be offset by a future decrease in output and employment. This is, in a very general sense, an Austrian type of argument about the distorting effects of inflation requiring some sort of correction before the economy can revert to its equilibrium path, even at a new higher rate of inflation, though it doesn’t have to be formulated in the familiar terms of Austrian business-cycle theory.

But that is not the argument against inflation that Friedman made. His argument against inflation was that using inflation to increase output and employment does not really generate an increase in output, income, and employment, properly measured. The measured increase in output and employment is achieved only because individuals and businesses are misled into increasing output and employment, workers mistakenly accepting job offers at nominal wages that they would not have accepted had they realized that prices and wages generally would be rising. Had they correctly foreseen the increase in prices and wages, workers would not have accepted job offers as quickly as they did, and if they had searched longer, they would have found that even better job offers were available. More workers are employed, but the increase in employment comes at the expense of mismatches between workers and the jobs that they have accepted. Since the apparent increase in output is illusory, there is little or no benefit from inflation to outweigh the costs of inflation. The implied policy prescription is therefore not to resort to inflation in the first place.

Even if we accept it as valid, this argument works only when the economy is starting from a position of full employment. But if output and employment are below their natural or potential levels, the argument doesn’t work. The reason it doesn’t work is that when an economy starts from a position of less than full employment, increases in output and employment are self-reinforcing and cumulative. There is a multiplier effect, because, as the great Cambridge economist Frederick Lavington put it so well, “the inactivity of all is the cause of the inactivity of each.” Thus, the social gain from increasing employment is greater than the private gain, so, in a situation of less-than-full employment, tricking workers into accepting employment turns out to be socially desirable, because by becoming employed they increase the prospects for others to become employed. When the rate of unemployment is above the natural level, a short-run increase in inflation generates an increase in output and employment that is permanent, and therefore greater than the cost associated with a temporary increase in inflation. As the unemployment rate drops toward the natural level, the optimal rate of inflation drops, so there is no reason why the public should anticipate a permanent increase in the rate of inflation. When actual unemployment exceeds the natural rate, inflation, under a strict Friedmanian analysis, clearly pays its own way.

But we are now trapped in a monetary regime in which even a temporary increase in inflation above 2-percent apparently will not be tolerated even though it means perpetuating an unemployment rate of 8 percent that not so long ago would have been considered intolerable. What is utterly amazing is that the intellectual foundation for our new 2-percent-inflation-targeting regime is Friedman’s natural-rate hypothesis, and a straightforward application of Friedman’s hypothesis implies that the inflation rate should be increased whenever the actual unemployment rate exceeds the natural rate. What a holy mess.

OMG! John Taylor REALLY Misunderstands Hayek

Since Friday’s post about John Taylor’s misunderstanding of Hayek, I watched the 57-minute video of John Taylor’s Hayek Prize Lecture. I will not offer an extended critique of the lecture, which was little more than a collection of talking points based on little empirical evidence and no serious analysis or argument. If that description sounds like a critique, so be it, but the lecture was more in the way of a ritual invocation of shared beliefs and values than an attempt to make a substantive case for a definite policy or set of policies. Whether those present at the lecture were appropriately reinforced in their shared beliefs by Taylor’s low-key remarks and placid delivery, I have no idea, but he obviously was not trying to break any new intellectual ground.

Though I found Taylor’s remarks generally boring, I did perk up about 33-34 minutes into the lecture when Taylor observed that Hayek had himself, on occasion, deviated from his own principles. How does Taylor know this? He knows this (or thinks he does, at any rate) because, as a fellow of the Hoover Institution at Stanford University, he has access to Hayek’s correspondence, which contains Keynes’s famous letter to Hayek praising The Road to Serfdom, a letter Taylor quotes from just before he gets to his point about Hayek’s “deviation,” and a letter that Milton Friedman wrote to Hayek complaining about Hayek’s criticism of his 3-percent rule for growth in the stock of money. Hayek made the criticism in a 1975 lecture entitled “Inflation, the Misdirection of Labour, and Unemployment,” which was published in a 52-page pamphlet called Full Employment at any Price? (of which I own a copy), along with Hayek’s Nobel Lecture and some additional pieces Hayek had written about inflation and unemployment.

Here is what Hayek said about Friedman’s rule:

I wish I could share the confidence of my friend Milton Friedman who thinks that one could deprive the monetary authorities, in order to prevent the abuse of their powers for political purposes, of all discretionary powers by prescribing the amount of money they may and should add to circulation in any one year. It seems to me that he regards this as practicable because he has become used for statistical purposes to draw a sharp distinction between what is to be regarded as money and what is not. This distinction does not exist in the real world. I believe that, to ensure the convertibility of all kinds of near-money into real money, which is necessary if we are to avoid severe liquidity crises or panics, the monetary authorities must be given some discretion. But I agree with Friedman that we will have to try and get back to a more or less automatic system for regulating the quantity of money in ordinary times. The necessity of “suspending” Sir Robert Peel’s Bank Act of 1844 three times within 25 years after it was passed ought to have taught us this once and for all.

A polite, but stern, rebuke to Friedman. Friedman, not well disposed to being rebuked, even by his elders and betters, wrote back an outraged response to Hayek accusing him of condoning the discretionary behavior of central bankers, as if unaware that Hayek had already explained 15 years earlier in chapter 21 of The Constitution of Liberty why central bank discretion was not a violation of the rule of law.

Somehow or other, Professor Taylor must have come across Friedman’s letter to Hayek, and thought that it would be edifying to mention it in his Hayek Prize lecture. Bad idea!

The following is my rough transcription of Taylor’s remarks, starting at about 33:50 of the Manhattan Institute video, just after Taylor quoted from Keynes’s letter to Hayek about The Road to Serfdom and Friedman’s comment about the letter that Keynes had obviously not read the chapter of The Road to Serfdom entitled “Why the Worst Get on Top.”

Now there’s always pressure for even the best-intentioned people to move away from the principles of economic freedom. And just to show you how this can happen, Hayek, himself, deviated, at least in his writings. There’s a book he wrote called Full Employment at Any Price [no intonation indicating the question mark in the title], written in the middle of the 1970s mess of high inflation, rising unemployment. So people, you know, just really said, we gotta get – he wanted, of course, to get back to the rule of law and rules-based policy, but what about – well, we gotta do something else in the meantime. Well, once again, Milton Friedman, his compatriot in his cause — and it’s good to have compatriots by the way, very good to have friends in his cause. He wrote in another letter to Hayek – Hoover Archives – “I hate to see you come out, as you do here, for what I believe to be one of the most fundamental violations of the rule of law that we have, namely, discretionary activities of central bankers.”

So, hopefully, that was enough to get everybody back on track. Actually, this episode – I certainly, obviously, don’t mean to suggest, as some people might, that Hayek changed his message, which, of course, he was consistent on everywhere else.

Well, this is embarrassing. Obviously not well-versed in Hayek’s writings, Taylor mistakes the Institute of Economic Affairs, Occasional Paper 45, Full Employment at any Price? for a book, while also overlooking the question mark in the title. That would be bad enough, but Taylor apparently infers that the title (without the question mark) represented Hayek’s position in the pamphlet, i.e., that Hayek was arguing that the chief goal of policy in the 1970s ought to be full employment, in other words, exactly the opposite of the position for which Hayek was arguing in the pamphlet that Taylor was misidentifying and in everything else Hayek ever wrote about inflation and unemployment policy.  Hayek was trying to explain that the single-minded pursuit of full employment by monetary policy-makers, regardless of the consequences, would be self-defeating and self-destructive. But, ignorant of Hayek’s writings, Taylor could not figure out from reading Friedman’s letter that all Friedman was responding to was Hayek’s devastating criticism of Friedman’s 3-percent rule, a rule that Taylor, for some inexplicable reason, still seems to find attractive, even though just about everyone else realized long ago that it was at best unworkable, and, in the unfortunate event that it could be made to work, would be disastrous. As a result, Taylor thoughtlessly decided to show that even the great Hayek wasn’t totally consistent and needed the guidance of (the presumably even greater) Milton Friedman to keep him on the straight and narrow. And this from the winner of the Hayek Prize in his Hayek Prize Lecture, no less.

Just by way of sequel, here is how well Hayek learned from Friedman to stay on the straight and narrow. In Denationalization of Money, first published in 1976, with a revised edition in 1978, Hayek again commented (p. 81) on Friedman’s 3-percent rule.

As regards Professor Friedman’s proposal of a legal limit on the rate at which a monopolistic issuer of money was to be allowed to increase the quantity in circulation, I can only say that I would not like to see what would happen if it ever became known that the amount of cash in circulation was approaching the upper limit and that therefore a need for increased liquidity could not be met.

And then in a footnote, Hayek added the following:

To such a situation the classic account of Walter Bagehot . . . would apply: “In a sensitive state of the English money market the near approach to the legal limit of reserve would be a sure incentive to panic; if one-third were fixed by law, the moment the banks were close to one-third, alarm would begin and would run like magic.”

So much for Friedman getting Hayek back on track.  The idea!

Krugman v. Friedman

Regular readers of this blog will not be surprised to learn that I am not one of Milton Friedman’s greatest fans. He was really, really smart, and a brilliant debater; he had a great intuitive grasp of price theory (aka microeconomics), which helped him derive interesting, and often testable, implications from his analysis, a skill he put to effective use in his empirical work in many areas, especially in monetary economics. But he was intolerant of views he didn’t agree with and, when it suited him, he could, despite his libertarianism, be a bit of a bully. Of course, there are lots of academics like that, including Karl Popper, the quintessential anti-totalitarian, whose most famous book, The Open Society and Its Enemies, was retitled “The Open Society and its Enemy Karl Popper” by one of Popper’s abused and exasperated students. Friedman was also sloppy in his scholarship, completely mischaracterizing the state of pre-Keynesian monetary economics, more or less inventing a non-existent Chicago oral tradition as carrier of the torch of non-Keynesian monetary economics during the dark days of the Keynesian Revolution, while re-packaging a diluted version of the Keynesian IS-LM model as a restatement of that oral tradition. Invoking a largely invented monetary tradition to provide a respectable non-Keynesian pedigree for the ideas that he was promoting, Friedman simply ignored, largely I think out of ignorance, the important work of non-Keynesian monetary theorists like R. G. Hawtrey and Gustav Cassel, making no mention of their monetary explanation of the Great Depression in any of his works, not even in the epochal Monetary History of the United States.

It would be one thing if Friedman had provided a better explanation of the Great Depression than Hawtrey and Cassel did, but in every important respect his explanation was inferior to theirs (see my paper with Ron Batchelder on Hawtrey and Cassel). Friedman’s explanation was partial, providing little if any insight into the causes of the 1929 downturn, treating it as a severe, but otherwise typical, business-cycle downturn. It was also misleading, because Friedman almost entirely ignored the international dimensions and causes of the downturn, causes that followed directly from the manner in which the international community attempted to recreate the international gold standard after its collapse during World War I. Instead, Friedman argued that the source of the Great Depression, whatever it was, lay in the US, the trigger for its degeneration into a worldwide catastrophe being the failure of the Federal Reserve Board to prevent the collapse of the unfortunately named Bank of United States at the end of 1930, thereby setting off a contagion of bank failures and a contraction of the US money supply. In doing so, Friedman mistook a symptom for the cause. As Hawtrey and Cassel understood, the contraction of the US money supply was the result of a deflation associated with a rising value of gold, an appreciation resulting mainly from the policy of the insane Bank of France in 1928-29 and from an incompetent Fed stupidly trying to curb stock-market speculation by raising interest rates. Bank failures exacerbated this deflationary dynamic, but were not its cause. Once it started, the increase in the monetary demand for gold became self-reinforcing, fueling a downward deflationary spiral; bank failures were merely one of the ways in which the increase in the monetary demand for gold fed on itself.

So if Paul Krugman had asked me (an obviously fanciful hypothesis) whether to criticize Friedman’s work on the Great Depression, I certainly would not have discouraged him from doing so. But his criticism of Friedman on his blog yesterday was misguided, largely accepting the historical validity of Friedman’s account of the Great Depression, and criticizing Friedman for tendentiously drawing political conclusions that did not follow from his analysis.

When wearing his professional economist hat, what Friedman really argued was that the Fed could easily have prevented the Great Depression with policy activism; if only it had acted to prevent a big fall in broad monetary aggregates all would have been well. Since the big decline in M2 took place despite rising monetary base, however, this would have required that the Fed “print” lots of money.

This claim now looks wrong. Even big expansions in the monetary base, whether in Japan after 2000 or here after 2008, do little if the economy is up against the zero lower bound. The Fed could and should do more — but it’s a much harder job than Friedman and Schwartz suggested.

Krugman is mischaracterizing Friedman’s argument. Friedman said that the money supply contracted because the Fed didn’t act as a lender of last resort to save the Bank of United States from insolvency, thereby setting off a contagion of bank runs. So Friedman would have said that the Fed could have prevented M2 from falling in the first place if it had acted aggressively as a lender of last resort, precisely what the Fed was created to do in the wake of the panic of 1907. The problem with Friedman’s argument is that he ignored the worldwide deflationary spiral that, independently of the bank failures, was already under way. The bank failures added to the increase in the demand for gold, but were not its source. To have stopped the Depression, the Fed would have had to flood the rest of the world with gold out of the massive hoards that had been accumulated in World War I and which, perversely, were still growing in 1928-31. Moreover, leaving the gold standard or devaluing was clearly effective in stopping deflation and promoting recovery, so monetary policy, even at the zero lower bound, was certainly not ineffective when the right instrument was chosen.

Krugman then makes a further charge against Friedman:

Beyond that, however, Friedman in his role as political advocate committed a serious sin; he consistently misrepresented his own economic work. What he had really shown, or thought he had shown, was that the Fed could have prevented the Depression; but he transmuted this into a claim that the Fed caused the Depression.

Not so fast. Friedman claimed that the Fed converted a serious recession in 1929-30 into the Great Depression by not faithfully discharging its lender of last resort responsibility. I don’t say that Friedman never applied any spin to the results of his positive analysis when engaging in political advocacy. But in Friedman’s discussions of the Great Depression, the real problem was not the political spin that he put on his historical analysis; it was that his historical analysis was faulty on some basic issues. The correct historical analysis of the Great Depression – the one provided by Hawtrey and Cassel – would have been at least as supportive of Friedman’s political views as the partial and inadequate account presented in the Monetary History.

PS  Judging from some of the reactions that I have seen to this post, I suspect that my comments about Friedman came across somewhat more harshly than I intended.  My feelings about Friedman are indeed ambivalent, so I now want to emphasize that there is a great deal to admire in his work.  And even though he may have been intolerant of opposing views when he encountered them from those he regarded as his inferiors, he was often willing to rethink his ideas in the face of criticism.  My main criticism of his work on monetary theory in general and the Great Depression in particular is that he was not well enough versed in the history of thought on the subject, and, as a result, did not properly characterize earlier work that he referred to or simply ignored earlier work that was relevant.   I am very critical of Friedman for having completely ignored the work of Hawtrey and Cassel on the Great Depression, work that I regard as superior to Friedman’s on the Great Depression, but that doesn’t mean that what Friedman had to say on the subject is invalid.

Friedman and Schwartz on James Tobin

Nick Rowe and I, with some valuable commentary from Bill Woolsey, Mike Sproul and Scott Sumner, and perhaps others whom I am not now remembering, have been having an intermittent and (I hope) friendly argument for the past six months or so about the “hot potato” theory of money to which Nick subscribes, and which I deny, at least when it comes to privately produced bank money as opposed to government issued “fiat” money. Our differences were put on display once again in the discussion following my previous post on endogenous money. As I have mentioned many times, my view of how banks operate is derived from one of the best papers I have ever read, James Tobin’s classic 1963 paper “Commercial Banks as Creators of Money,” a paper that in my estimation would, on its own, have amply entitled Tobin to be awarded the Nobel Prize. If you haven’t read the paper, you should not deny yourself that pleasure and profit any longer.

A few months ago, I stumbled across the PDF version of one of the relatively obscure follow-up volumes to the Monetary History of the US that Friedman and Schwartz wrote: Monetary Statistics of the US: Estimates, Sources, Methods. Part one of the book is an extended discussion of the definition of money, presenting various historical definitions of money and approaches to defining it. I think that I read parts of it when I was in graduate school, perhaps when I took Ben Klein’s graduate class in monetary theory. As one might expect, Friedman and Schwartz spent a lot of time discussing a priori versus pragmatic or empirical definitions of money, arguing that definitions based on concepts like “the essential properties of the medium of exchange” (the title of a paper written by Leland Yeager) inevitably lead to dead ends, preferring instead definitions, like M2, that turn out to be empirically useful, even if only for a certain period of time, under a certain set of monetary institutions and practices. In rereading a number of sections of part one, I was repeatedly struck by how good and insightful an economist Friedman was. Since I am far from being an unqualified admirer of Friedman’s, it was good to be reminded again that, despite his faults, he was a true master of the subject.

At any rate, on pp. 123-24, there is a discussion of definitions based on a concept of “market equilibrium.”

Gramley and Chase, in a highly formal analysis of monetary adjustments in the shortest of short periods (Marshall’s market equilibrium contrasted with his short-run and long-run equilibria), discuss the definition of money only incidentally. Yet their analysis qualifies for consideration along with the analyses of Pesek and Saving, Newlyn, and Yeager because, like the others, Gramley and Chase believe that far-reaching substantive conclusions about monetary analysis can be derived from rather simple abstract considerations and, like Newlyn and Yeager, they put great stress on whether the decisions of the public can or do affect monetary totals. That “the stock of money” is “an exogenous variable set by central bank policy,” they regard as one of the “time-honored doctrines of traditional monetary analysis.” They contrast this “more conventional view” with the “new view” that “open market operations alter the stock of money balances if, and only if, they alter the quantity of money demanded by the public.”

In a footnote to this passage, Friedman and Schwartz add the following comment (p. 124).

In this respect [Gramley and Chase] follow James Tobin, “Commercial Banks as Creators of ‘Money.'” . . . Tobin presents a lucid exposition of commercial banks as financial intermediaries with which we agree fully and which we find most illuminating. His analysis, like that of Pesek and Saving, Newlyn, and Yeager, and as we shall note, Gramley and Chase, demonstrates that emphasis on supply considerations leads to a distinction between high-powered money and other assets but not between any broader total and other assets. Unlike Gramley and Chase, Tobin explicitly eschews drawing any far-reaching conclusions for policy and analysis from his quantitative analysis.

Then on p. 135, Friedman and Schwartz in a critical discussion of the New View, of which Tobin’s paper was a key contribution, observed:

This approach is an appropriate theoretical counterpart to an analysis of changes in income and expenditure along Keynesian lines. That analysis takes the price level as an institutional datum and therefore minimizes the distinction between nominal and real magnitudes. It takes interest rates as essentially the only market variable that reconciles the structure of assets supplied with the structure demanded.

In a footnote to this passage, Friedman and Schwartz add this comment.

It is instructive that economists who adopt this general view write as if the monetary authorities could determine the real and not merely the nominal quantity of high-powered money. For example, William C. Brainard and James Tobin, in setting up a financial model to illustrate pitfalls in the building of such models, use “the replacement value of . . . physical assets . . . as the numeraire of the system,” yet regard “the supply of reserves” as “one of the quantities the central bank directly controls” (“Pitfalls in Financial Model-Building,” AER, May 1968, pp. 101-02). If the nominal level of prices is regarded as an endogenous variable, this is clearly wrong. Hence the writers must be assuming the nominal level of prices to be fixed outside their system. Keynes’ “wage unit” serves the same role in his analysis and leads him and his followers also to treat the monetary authorities as directly controlling real and not nominal variables.

But there is no logical necessity that requires the New View so elegantly formulated by Tobin to be deployed within a Keynesian framework rather than in a non-Keynesian framework in which some monetary aggregate, like the stock of currency or the monetary base, rather than M1 or M2, is what determines the price level. The stock of currency (or the monetary base) can function as the hot potato that determines (in conjunction with all the other variables affecting the demand for currency or the monetary base) the price level. Denying that bank money is a hot potato doesn’t require you to treat the price level “as an institutional datum.” Friedman, almost, but not quite, figured that one out.


