Archive for the 'Hayek' Category

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year's, in contrast to the more recent practice of holding the convention in the week after New Year's. That's why the fiftieth anniversary of Friedman's 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemoration of that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president Olivier Blanchard. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Emi Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three "golden" anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I've seen about Friedman and the Phillips Curve. The key point to understand about Friedman's address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved and the economy would proceed exactly – and optimally – as before, as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, but so would at least one, and potentially more than one, other market. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.
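The neutrality thought experiment can be illustrated with a toy calculation (the goods and prices here are, of course, hypothetical):

```python
# Hypothetical money prices for a three-good economy
prices = {"bread": 2.0, "wine": 10.0, "labor": 20.0}

# Double every money price; relative prices are unaffected
doubled = {good: 2 * p for good, p in prices.items()}

def relative_to(numeraire, price_dict):
    """Express every price relative to a chosen numeraire good."""
    return {g: p / price_dict[numeraire] for g, p in price_dict.items()}

# The structure of relative prices is identical before and after doubling,
# so the real equilibrium is undisturbed
assert relative_to("bread", prices) == relative_to("bread", doubled)
```

Doubling only a single price instead – say, the price of wine – would change the relative-price structure and so disturb the real equilibrium.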

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but only because actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn't mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.

Two quick parenthetical remarks. Friedman's argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. The earliest version of the argument of which I am aware is Hayek's 1934 reply in Econometrica to Alvin Hansen and Herbert Tout, whose 1933 Econometrica article reviewing the recent business-cycle literature had criticized Hayek's assertion in Prices and Production that a monetary expansion financing investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek's reply was that a permanent constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that in real terms the constant flow of monetary expansion would correspond to the same amount of investment that had been undertaken prior to the first and unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate of monetary expansion, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman's 1967 Presidential address, was exactly the same as Friedman's.

A further aside. But what Hayek failed to see in making this argument was that, in so doing, he was refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintenance of a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium with saving equal to investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. Usually the Keynesian understanding of the causality is that it runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines (with expected inflation treated as a parameter) at least directionally the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what will the new equilibrium of the economy look like? The answer is: it will look exactly like the prior equilibrium, except that the price level will be twice as high with twice as much money as previously, but with relative prices unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy might not nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let us say that (Y – Y*) is the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) – β((Y – Y*)/Y*).

This equation says that the inflation target in any period would be the default inflation target of 2%, multiplied by an adjustment coefficient α designed to keep the succession of short-term inflation targets from deviating from the long-term price-level path corresponding to 2% annual inflation, minus some fraction β of the output gap expressed as a percentage of potential GDP. (The gap term enters with a negative sign, so that output below potential raises the target.) Thus, for example, if the output gap were –5% and β were 0.5, the short-term inflation target would be raised to 4.5% if α were 1.

However, if on average output gaps are expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert back to a target price-level corresponding to a 2% annual rate.
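As a minimal sketch, the rule just described can be written out in a few lines of code. The parameter values, and the convention that the output gap enters with a negative sign (so that output below potential raises the short-run target), are illustrative assumptions rather than features of any actual central-bank rule:

```python
def inflation_target(y, y_star, alpha=1.0, beta=0.5, base=0.02):
    """Illustrative Taylor-style rule for the inflation target (IT).

    y, y_star -- current and potential real GDP (same units).
    The output gap enters with a negative sign, so output below
    potential raises the short-run inflation target above `base`.
    """
    gap = (y - y_star) / y_star          # output gap as a fraction of potential
    return alpha * base - beta * gap

# Output 5% below potential with alpha = 1: 2% + 0.5 * 5% = 4.5%
print(round(inflation_target(95.0, 100.0), 4))   # prints 0.045
```

Choosing α below 1 when output gaps are on average expected to be negative, as suggested above, is what would keep the implied price-level path on its 2% trend.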

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would correspond to and promote the goal of maximum employment by raising the inflation target when unemployment was high as a countercyclical policy for promoting recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively higher inflation in periods of high unemployment would be compensated for by periods of relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.
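Such a nominal-GDP level target can be sketched as follows, assuming (purely for illustration) 2% trend real growth alongside the 2% average inflation target:

```python
def ngdp_target_path(base, periods, real_growth=0.02, inflation=0.02):
    """Illustrative nominal-GDP level-target path.

    Nominal GDP is targeted to grow at the assumed trend real growth
    rate plus the 2% average inflation target. Because the target is a
    path for the *level* of nominal GDP, a shortfall in one year implies
    faster catch-up growth later, preserving 2% average inflation.
    """
    g = (1 + real_growth) * (1 + inflation) - 1   # trend nominal growth rate
    return [base * (1 + g) ** t for t in range(periods + 1)]

path = ngdp_target_path(100.0, 2)   # approximately [100.0, 104.04, 108.24]
```

The level-path feature is what distinguishes this from a pure growth-rate target: misses are made up rather than treated as bygones.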

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious – and a disturbing – question, because it suggests that the free-market ideology which is a powerful – though not necessarily the most powerful – force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s – and, in the 1960s, at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman at Chicago and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. Advances in economic theory on a number of fronts – economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history – reinforced by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both leaders to lose popular support. That prestige provided some degree of intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a "depression," not a mere "recession," and dismissing supply-side economics as "a slogan for packaging certain economic ideas rather than an orthodox economic category." That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close, as the Nobel Prize winner was quickly ushered out of shouting range of White House reporters. On the whole, however, Republican politicians have not been lacking for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan's free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products – carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, which, following OPEC's quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the idea that freely determined market prices optimally guide the decentralized decisions of economic agents rests on an implicit assumption: that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy and sell and produce closely correspond to the social costs and benefits that an omniscient central planner – if such a being actually did exist – would take into account in making his plans. In the real world, the private costs and benefits considered by individual agents when making their plans and decisions often don't reflect all relevant costs and benefits, so the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad. We – i.e., those of us who are not philosophical anarchists – all acknowledge as much, in practice and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those decisions that are subject to, and heavily – even mainly – influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be, even in well-ordered economies, until early in my blogging career. It was then that it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue – implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat, because insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek the kinds of knowledge that can be exploited in trades with less-well informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society from the acquisition of such knowledge, inasmuch as gains accruing to the exploiter are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted on various forms of financial research aimed at gaining, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on individuals’ tendency to underestimate the probability of adverse events (e.g., late-repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research may be pushed too far by the potential to patent discoveries, which enables patent-protected monopolies to earn monopoly rents from discoveries that would have been made eventually even without those rents accruing to the patent holders.

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, his counterparty could easily be multiplied. Because many, if not most, of the highest incomes earned are associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have led resources to be shifted from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which the private benefits and costs are very different from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Despite having a Harvard Ph.D. when he arrived back at UCLA as an assistant professor in the early 1960s, where he had been an undergraduate student of Armen Alchian, he too started out as a pro-free-market Friedman acolyte. But gradually adopting the Buchanan public-choice paradigm – Nancy Maclean, please take note — of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents, a view that seems increasingly on target as our political process becomes simultaneously increasingly ideological and increasingly dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy – mostly based on national-defense grounds – as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causal assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that starts from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causal assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

Hayek’s Rapid Rise to Stardom

For a month or so, I have been working on a paper about Hayek’s early pro-deflationary policy recommendations, which seem to be at odds with his own idea of neutral money – an idea he articulated in a way that implied, or at least suggested, that the ideal monetary policy would aim to keep nominal spending or nominal income constant. In the Great Depression, prices and real output were both falling, so that nominal spending and income were falling at a rate equal to the rate of decline in real output plus the rate of decline in the price level. So in a depression, the monetary policy implied by Hayek’s neutral-money criterion would have been to print money like crazy, generating enough inflation to keep nominal spending and nominal income constant. But Hayek denounced any monetary policy that aimed to raise prices during the depression, arguing that such a policy would treat the disease of depression with the drug that had caused the disease in the first place. Decades later, Hayek acknowledged his mistake and made clear that he favored a policy that would prevent the flow of nominal spending from ever shrinking. In this post, I am excerpting the introductory section of the current draft of my paper.

Few economists, if any, ever experienced as rapid a rise to stardom as F. A. Hayek did upon arriving in London in January 1931, at the invitation of Lionel Robbins, to deliver a series of four lectures on the theory of industrial fluctuations. The Great Depression having started about 15 months earlier, British economists were desperately seeking new insights into the unfolding and deteriorating economic catastrophe. The subject on which Hayek was to expound was of more than academic interest; it was of the most urgent economic, political, and social import.

Only 31 years old, Hayek, director of the Austrian Institute of Business Cycle Research founded by his mentor Ludwig von Mises, had never held an academic position. Upon completing his doctorate at the University of Vienna, writing his doctoral thesis under Friedrich von Wieser, one of the eminent figures of the Austrian School of Economics, Hayek, through financial assistance secured by Mises, spent over a year in the United States doing research on business cycles, and meeting such leading American experts on business cycles as W. C. Mitchell. While in the US, Hayek also exhaustively studied the English-language literature on the monetary history of the eighteenth and nineteenth centuries and the mostly British monetary doctrines of that era.

Even without an academic position, Hayek’s productivity upon returning to Vienna was impressive. Aside from writing a monthly digest of statistical reports, financial news, and analysis of business conditions for the Institute, Hayek published several important theoretical papers, gaining a reputation as a young economist of considerable promise. Moreover, Hayek’s immersion in the English monetary literature and his sojourn in the United States gave him an excellent command of English, so that when Robbins, newly installed as head of the economics department at LSE, and having fallen under the influence of the Austrian school of economics, was seeking to replace Edwin Cannan, who before his retirement had been the leading monetary economist at LSE, Robbins thought of Hayek as a candidate for Cannan’s position.

Hoping that Hayek’s performance would be sufficiently impressive to justify the offer of a position at LSE, Robbins undoubtedly made clear to Hayek that if his lectures were well received, his chances of receiving an offer to replace Cannan were quite good. A secure academic position for a young economist, even one as talented as Hayek, was then hard to come by in Austria or Germany. Realizing how much depended on the impression he would make, Hayek, despite having undertaken to write a textbook on monetary theory for which he had already written several chapters, dropped everything else to compose the four lectures that he would present at LSE.

When he arrived in England in January 1931, Hayek actually went first to Cambridge to give a lecture, a condensed version of the four LSE lectures. Hayek was not feeling well when he came to Cambridge to face an unsympathetic, if not hostile, audience, and the lecture was not a success. However, either despite, or because of, his inauspicious debut at Cambridge, Hayek’s performance at LSE turned out to be an immediate sensation. In his History of Economic Analysis, Joseph Schumpeter, who, although an Austrian with a background in economics similar to Hayek’s, was neither a personal friend nor an ideological ally of Hayek’s, wrote that Hayek’s theory

on being presented to the Anglo-American community of economists, met with a sweeping success that has never been equaled by any strictly theoretical book that failed to make amends for its rigors by including plans and policy recommendations or to make contact in other ways with its readers’ loves or hates. A strong critical reaction followed that, at first, but served to underline the success, and then the profession turned away to other leaders and interests.

The four lectures provided a masterful survey of business-cycle theory and the role of monetary analysis in business-cycle theory, including a lucid summary of the Austrian capital-theoretic approach to business-cycle theory and of the equilibrium price relationships that are conducive to economic stability, an explanation of how those equilibrium price relationships are disturbed by monetary disturbances giving rise to cyclical effects, and some comments on the appropriate policies for avoiding or minimizing such disturbances. In this framework, the goal of monetary policy would be to set the money interest rate equal to the hypothetical equilibrium interest rate determined by strictly real factors. The only concrete policy implication that Hayek could extract from this rarified analysis was that monetary policy should aim not to stabilize the price level, as recommended by such distinguished monetary theorists as Alfred Marshall and Knut Wicksell, but to stabilize total spending or total money income.

This objective would be achieved, Hayek argued, only if injections of new money preserved the equilibrium relationship between savings and investment, investments being financed entirely by voluntary savings, not by money newly created for that purpose. Insofar as new investment projects were financed by newly created money, the additional expenditure thereby financed would entail a deviation from the real equilibrium that would obtain in a hypothetical barter economy or in an economy in which money had no distortionary effect. The rate of interest at which investment is financed entirely by voluntary saving was called by Hayek, following Wicksell, the natural (or equilibrium) rate of interest.

But according to Hayek, Wicksell failed to see that, in a progressive economy with real investment financed by voluntary saving, the increasing output of goods and services over time implies generally falling prices as the increasing productivity of factors of production progressively reduces costs of production. A stable price level would therefore require ongoing increases in the quantity of money, the new money being used to finance additional investment over and above voluntary saving, thereby causing the economy to deviate from its equilibrium time path by inducing investment that would not otherwise have been undertaken.

As Paul Zimmerman and I have pointed out in our paper on Hayek’s response to Piero Sraffa’s devastating, but flawed, review of Prices and Production (the published version of Hayek’s LSE lectures), Hayek’s argument that only an economy in which no money is created to finance investment is consistent with the real equilibrium of a pure barter economy depends on the assumption that money is non-interest-bearing and that the rate of inflation is not correctly foreseen. If money bears competitive interest and inflation is correctly foreseen, the economy can attain its real equilibrium regardless of the rate of inflation – provided, at least, that the rate of deflation is not greater than the real rate of interest. The reason is that the real equilibrium is defined by a system of n-1 relative prices per time period, which can be multiplied by any scalar representing the expected price level or the expected rate of inflation between time periods.
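The scalar-invariance point can be verified with a trivial numeric sketch; the goods and prices are, of course, made up:

```python
from fractions import Fraction

# A real equilibrium pins down only relative prices: with n goods there are
# n - 1 independent price ratios, and multiplying every nominal price by the
# same scalar (a change in the price level) leaves all of them unchanged.

def relative_prices(prices):
    """Express all prices relative to the first good (the numeraire)."""
    numeraire = prices[0]
    return [Fraction(p, numeraire) for p in prices[1:]]

p = [4, 6, 10, 14]            # nominal prices of 4 goods
scaled = [2 * x for x in p]   # price level doubles; relative prices don't move

assert relative_prices(p) == relative_prices(scaled)
print(relative_prices(p))     # the n - 1 = 3 ratios that define the equilibrium
```

The same invariance holds for any positive scalar, which is why the real equilibrium is compatible with any correctly foreseen path of the price level.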

So Hayek’s assumption that the real equilibrium requires a rate of deflation equal to the rate of increase in factor productivity is an arbitrary and unfounded assumption reflecting his failure to see that the real equilibrium of the economy is independent of the price levels in different time periods and the rates of inflation between time periods, when price levels and rates of inflation are correctly anticipated. If inflation is correctly foreseen, nominal wages will rise commensurately with inflation and real wages with productivity increases, so that the increase in nominal money supplied by banks will not induce or finance investment beyond voluntary savings. Hayek’s argument was based on a failure to work through the full implications of his equilibrium method. As Hayek would later come to recognize, disequilibrium is the result not of money creation by banks but of mistaken expectations about the future.

Thus, Hayek’s argument mistakenly identified monetary expansion of any sort that moderated or reversed what Hayek considered the natural tendency of prices to fall in a progressively expanding economy as the disturbing and distorting impulse responsible for business-cycle fluctuations. Although he did not offer a detailed account of the origins of the Great Depression, Hayek’s diagnosis of its causes, made explicit in various other writings, was clear: monetary expansion by the Federal Reserve during the 1920s — especially in 1927 — to keep the US price level from falling and to moderate deflationary pressure on Britain (sterling having been overvalued at the prewar dollar-sterling parity when Britain restored gold convertibility in March 1925) distorted relative prices and the capital structure. When the distortions eventually became unsustainable, unprofitable investment projects would be liquidated, supposedly freeing those resources to be re-employed in more productive activities. Why the Depression continued to deepen, rather than recover, more than a year after the downturn had started was another question.

Despite warning of the dangers of a policy of price-level stabilization, Hayek was reluctant to advance an alternative policy goal or criterion beyond the general maxim that policy should avoid any disturbing or distorting effect — in particular monetary expansion — on the economic system. But Hayek was unable, or unwilling, to translate this abstract precept into a definite policy norm.

The simplest implementation of Hayek’s objective would be to hold the quantity of money constant. But that policy, as Hayek acknowledged, was beset with both practical and conceptual difficulties. Under a gold standard, which Hayek, at least in the early 1930s, still favored, the relevant area within which to keep the quantity of money constant would be the entire world (or, more precisely, the set of countries linked to the gold standard). But national differences between the currencies on the gold standard would make it virtually impossible to coordinate those national currencies to keep some aggregate measure of the quantity of money convertible into gold constant. And Hayek also recognized that fluctuations in the demand to hold money (the reciprocal of the velocity of circulation) produce monetary disturbances analogous to variations in the quantity of money, so that the relevant policy objective was not to hold the quantity of money constant, but to change the quantity of money proportionately (inversely) with the demand to hold money (the velocity of circulation).
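In equation-of-exchange terms (nominal spending = MV), the point about money demand is that holding M constant stabilizes spending only if velocity is constant; the offsetting rule Hayek recognized would vary M inversely with the demand to hold money. A toy illustration, with made-up numbers:

```python
# Equation of exchange: nominal spending = M * V, where velocity V is the
# reciprocal of the demand to hold money relative to spending.

def spending(money, velocity):
    return money * velocity

M0, V0 = 100.0, 4.0
target = spending(M0, V0)        # 400.0 of nominal spending initially

V1 = 2.0                         # money demand doubles, so velocity halves
print(spending(M0, V1))          # constant-M policy: spending falls to 200.0

M1 = target / V1                 # offsetting rule: M moves in proportion to 1/V
print(M1, spending(M1, V1))      # M doubles to 200.0; spending back at 400.0
```

This is why a constant quantity of money is not equivalent to constant total spending, and why Hayek’s neutrality criterion pointed toward stabilizing MV rather than M.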

Hayek therefore suggested that the appropriate criterion for the neutrality of money might be to hold total spending (or alternatively total factor income) constant. With constant total spending, neither an increase nor a decrease in the amount of money the public desired to hold would lead to disequilibrium. This was a compelling argument for constant total spending as the goal of policy, but Hayek was unwilling to adopt it as a practical guide for monetary policy.

In the final paragraph of his final LSE lecture, Hayek made his most explicit, though still equivocal, policy recommendation:

[T]he only practical maxim for monetary policy to be derived from our considerations is probably . . . that the simple fact of an increase of production and trade forms no justification for an expansion of credit, and that—save in an acute crisis—bankers need not be afraid to harm production by overcaution. . . . It is probably an illusion to suppose that we shall ever be able entirely to eliminate industrial fluctuations by means of monetary policy. The most we may hope for is that the growing information of the public may make it easier for central banks both to follow a cautious policy during the upward swing of the cycle, and so to mitigate the following depression, and to resist the well-meaning but dangerous proposals to fight depression by “a little inflation”.

Thus, Hayek concluded his series of lectures by implicitly rejecting his own idea of neutral money as a policy criterion, warning instead against the “well-meaning but dangerous proposals to fight depression by ‘a little inflation.’” The only sensible interpretation of Hayek’s counsel of “resistance” is as an icy expression of indifference to falling nominal spending in a deep depression.

Larry White has defended Hayek against the charge that his policy advice in the depression was liquidationist, encouraging policy makers to take a “hands-off” approach to the unfolding economic catastrophe. In making this argument, White relies on Hayek’s neutral-money concept as well as Hayek’s disavowals decades later of his early pro-deflation policy advice. However, White omitted any mention of Hayek’s explicit rejection of neutral money as a policy norm at the conclusion of his LSE lectures. White also disputes that Hayek was a liquidationist, arguing that Hayek supported liquidation not for its own sake but only as a means to reallocate resources from lower- to higher-valued uses. Although that is certainly true, White does not establish that any of the other liquidationists he mentions favored liquidation as an end and not, like Hayek, as a means.

Hayek’s policy stance in the early 1930s was characterized by David Laidler as a skepticism bordering on nihilism in opposing any monetary- or fiscal-policy responses to mitigate the suffering of the general public caused by the Depression. White’s efforts at rehabilitation notwithstanding, Laidler’s characterization seems to be on the mark. The perplexing and disturbing question raised by Hayek’s policy stance in the early 1930s is why, given the availability of his neutral-money criterion as a justification for favoring at least a mildly inflationary (or reflationary) policy to promote economic recovery from the Depression, Hayek remained, during the 1930s at any rate, implacably opposed to expansionary monetary policies. Hayek’s later disavowals of his early position actually provide some insight into his reasoning in the early 1930s, but to understand the reasons for his advocacy of a policy inconsistent with his own theoretical understanding of the situation for which he was offering policy advice, it is necessary to understand the intellectual and doctrinal background that set the boundaries on what kinds of policies Hayek was prepared to entertain. The source of that intellectual and doctrinal background was David Hume, and the intermediary through which it was transmitted was none other than Hayek’s mentor Ludwig von Mises.

The Understanding and Misunderstanding of Imperfect Information

Last Friday on his blog, Timothy Taylor, editor of the Journal of Economic Perspectives, wrote about whether imperfect information strengthens or weakens the case for free markets and for deregulation. Taylor frames his discussion by comparing and contrasting two recent papers. One paper, “Friedrich Hayek and the Market Algorithm,” by Samuel Bowles, Alan Kirman and Rajiv Sethi, appeared in the Journal of Economic Perspectives; the other, “The Revolution of Information Economics: the Past and the Future,” by Joseph Stiglitz, is an NBER working paper. Although I agree with much of what Taylor has to say, I think he, like many others, misses some important distinctions and nuances in Hayek’s thought. Although Hayek’s instincts were indeed very much opposed to any form of government intervention, that did not prevent him from acknowledging that there is a very wide range of government action that is not inconsistent with his understanding of liberal principles. He was, in fact, very far from being the dogmatic libertarian anti-interventionist for which he is often mistaken. So I am going to try to put things in a clearer perspective.

Taylor begins by referencing Hayek and the paper by Bowles, Kirman and Sethi.

Friedrich von Hayek (Nobel 1974) is among the most prominent of those who have made the case that imperfect information strengthens the case for free markets. . . .

In one much-quoted example, Hayek offers a discussion of what happens in the market for some raw material, like tin, when “somewhere in the world a new opportunity for the use” arises, or “one of the sources of supply of tin has been eliminated.” Either of these changes (rise in demand, or a fall in supply) will lead to a higher market price. But as Hayek points out, no company that uses tin, nor any consumer who uses products made with tin as an ingredient, needs to know any details about what happened. No commission of government officials needs to meet to discuss how every firm and consumer should be required to react to this change in the price of tin. No government quota system for allocation of tin supplies needs to be established. No special government program for research and development into cheaper substitutes for tin, and no government-subsidized producers for potential-but-still-costly substitutes needs to be created. Instead, the shifts in demand or supply, and the corresponding changes in price, work themselves out with a larger number of small-scale shifts in the market.

A government agency might collect information on who currently produces and uses tin. But that government lacks the granular information about all the different alternatives that might possibly be used for tin, and any sense of when a user of tin would be willing to pay twice as much, or when a user of tin would shift to a substitute if the price rose even a little. Indeed, this granular information about the tin market is not even theoretically available to a government planner or regulator! Many users of tin, or potential suppliers of additional tin, or potential suppliers of substitutes, don’t actually know just how they would react to the higher price until after it happens. Their reactions emerge through a process of trial and error.

Hayek’s point becomes even more acute if one considers not just existing basic products, like tin, but the potential for innovative new products or services. One can make a guess about whether a certain type of new smartphone, headache remedy, spicy sauce, alternative energy source, or water-in-a-bottle will be popular and desired. But government planners–especially given that they are operating under political constraints–won’t have the knowledge to make these decisions. Hayek’s point is not only that government planners lack perfect information, but that it is not even theoretically possible for them to have perfect information–because much of the information about production, consumption, and prices does not exist. Thus, Hayek wrote:

[The market is] a system of the utilization of knowledge which nobody can possess as a whole, which. . . leads people to aim at the needs of people whom they do not know, make use of facilities about which they have no direct information; all this condensed in abstract signals. . . [T]hat our whole modern wealth and production could arise only thanks to this mechanism is, I believe, the basis not only of my economics but also much of my political views. . .

Taylor, channeling Bowles, Kirman and Sethi, is here quoting from a passage in Hayek’s classic paper, “The Use of Knowledge in Society” in which he explained how markets accomplish automatically the task of transmitting and processing dispersed knowledge held by disparate agents who otherwise would have no way to communicate with each other to coordinate and reconcile their distinct plans into a coherent set of mutually consistent and interdependent actions, thereby achieving coincidentally a coherence and consistency that all decision-makers take for granted, but which none deliberately sought. The key point that Hayek was making is not so much that this “market order” is optimal in any static sense, but that if a central planner tried to replicate it, he would have to collect, process, and constantly update an impossibly huge quantity of information.

After describing Hayek’s explanation of why imperfect information – a term that for Hayek involved both the dispersal of existing knowledge and the discovery of new knowledge – implies that markets are a better mechanism than central planning for coordinating a complex network of interrelated activities, Taylor turns to Stiglitz’s paper on imperfect information.

Joseph Stiglitz (Nobel, 2001) is among the best-known of those who have explained how imperfect information can hinder the functioning of a market, and thus offer a justification for government intervention or regulation. Stiglitz offers a readable overview of his perspective in “The Revolution of Information Economics: The Past and the Future” (September 2017, National Bureau of Economic Research Working Paper 23780). The paper isn’t freely available online, although readers may have access through a library subscription, but a set of slides from when he presented a talk on this topic at the World Bank in 2016 are available here. Stiglitz emphasizes two particular aspects of imperfect information: it leads to a lack of competition and especially to problems in the financial sector. He writes:

The imperfections of competition and the absence of risk markets with which they are marked matter a great deal. . . . And in those sectors where information and its imperfections play a particularly important role, there is an even greater presumption of the need for public policy. The financial sector is, above all else, about gathering and processing information, on the basis of which capital resources can be efficiently allocated. Information is central. And that is at least part of the reason that financial sector regulation is so important. Markets where information is imperfect are also typically far from perfectly competitive. . . In markets with some, but imperfect competition, firms strive to increase their market power and to increase the extraction of rents from existing market power, giving rise to widespread distortions. In such circumstances, institutions and the rules of the game matter. Public policy is critical in setting the rules of the game.

There’s a lot going on here, and I think it’s a mistake to set up Hayek and Stiglitz as polar opposites. Although they surely are not in total agreement, Hayek did agree that the perfect-competition model is not descriptive of most actual markets. Hayek may have had a more benign view of the operation of “imperfect” competition than Stiglitz, but he certainly did not view perfect competition as a normative ideal in terms of which the performance of actual economies should be assessed. It is certainly true that imperfectly competitive firms attempt to increase their market power, either by colluding or by tacit understandings to refrain from “ruinous” competition, but perfectly competitive firms also seek to collude on their own or try to enlist the government to help restrain competition that drives profits down to – or even below – zero.

And it would be hard to think of a statement with which Hayek would have been less likely to disagree than this one: “public policy is critical in setting the rules of the game.” To suggest that Hayek conceived of a market economy as a system operating independently of the constraints of an evolving and increasingly sophisticated system of rules is to completely misunderstand Hayek’s conception of a market order and the legal underpinnings without which no such order could come into existence. The ideal of a free market is not for businesses and entrepreneurs to be able to do whatever they want, but for all agents to be subject to a system of general rules that lays out the acceptable means by which every individual may pursue his interests and try to achieve goals of his own choosing. Taylor continues:

Stiglitz also argues that in a modern economy, concerns over information are likely to become more acute.

Looking forward, changes in structure of demand (that is, as a country gets richer, the mix of goods purchased changes) and in technology may lead to an increased role of information and increased consequences of information imperfections, decreased competition, and increasing inequality. Many key battles will be about information and knowledge (implicitly or explicitly)—and the governance of information. Already, there are big debates going on about privacy (the rights of individuals to keep their own information) and transparency (requirements that government and corporations, for instance, reveal critical information about what they are doing). In many sectors, most especially, the financial sector, there are ongoing debates about disclosure—obligations on the part of individuals or firms to reveal certain things about their products.

Taylor misses an opportunity here to dig deeper into Stiglitz’s analysis of what makes imperfect information so problematic. The most serious problems arise when substantial information asymmetries exist, allowing better-informed agents to make trades that exploit the ignorance or gullibility of their counterparties. Though not confined to the financial sector – the health sector being another area in which information asymmetries are especially acute and potentially disastrous to the relatively uninformed party – existing information asymmetries create opportunities and incentives for reprehensible behavior by financial institutions, while encouraging them to devote valuable resources to tireless efforts to find or create additional information asymmetries.

In many previous posts, I have discussed how the financial sector, when seeking to profit from transitory informational advantages by anticipating short-term price movements, or by creating new financial products that counterparties do not understand as well as their creators do, wastes resources on a massive scale. The net social product of such activity is far less than the private gains reaped from those fleeting informational advantages. But Wall Street banks and other financial institutions pay huge salaries to the very bright people who help create these momentary informational advantages and these new financial products. The actual and potential harms created by the existence – and, even worse, the pursuit – of such information asymmetries call for serious analysis and creative thinking about how to correct, or at least mitigate, the malincentives that lead to such socially wasteful activity. And I can’t think of any reason why Hayek would have opposed changing “the rules of the game” to correct those malincentives. So the idea – which seems to underlie much of what Taylor and Stiglitz are saying – that reforming the legal framework within which markets operate to eliminate inefficient malincentives is somehow indicative of hostility to, or skepticism about, free markets is entirely misplaced.

Which is not to say that it is easy to change the rules to fix every malincentive besetting the market economy; some malincentives may be truly intractable. And when malincentives truly are intractable – a state of affairs that, unfortunately, is closer to being the rule than the exception – it is usually not obvious what the appropriate policy response is. The problem is compounded many times over, because the theory of second best teaches us that, as soon as there is a single unavoidable departure from optimality in one market, satisfying the optimality conditions in all the other markets will not, in general, achieve the next-best outcome; the second-best optimum typically requires offsetting departures from optimality in the related markets as well.

In the end Taylor tries to suggest an awkward reconciliation between the supposedly opposing visions of Hayek and Stiglitz.

Both Hayek and Stiglitz use a similar “straw man” argumentative tactic: that is, set up a weak position as the opposing view, and then set it on fire. Hayek’s preferred straw man is government economic planners who seek to dictate every economic decision. He was writing in part with economic systems like the Communist Soviet Union in mind. But arguing that a market is better than wildly intrusive and weirdly over-precise old-time Soviet-style economic planning doesn’t make a case against more restrained and better-aimed forms of economic regulation. Indeed, Hayek occasionally expressed support for a universal basic income and for certain kinds of bank regulation.

I get what Taylor is trying to say, but I’m afraid he has phrased it rather badly. As Taylor actually seems to recognize, Hayek wasn’t just arguing against a straw man – an opposing position, concocted for easy refutation, that no one really holds. Comprehensive central planning was hardly a fictitious position in the 1930s and 1940s, when Hayek was first making his systematic arguments against central planning by thinking carefully about what knowledge we actually are assuming that individual agents possess in standard economic models, and what knowledge a central planner would need in order to replicate the optimal state of affairs that is associated with the equilibrium of the standard economic model. And in the post-neoliberal political environment in which we now find ourselves, it is not clear that what not so long ago seemed like a straw man has not come back to life.

However, Taylor’s assessment of Stiglitz seems to me to be pretty much on target.

Stiglitz’s straw man is a free market that operates essentially without government intervention or regulation. He likes to emphasize that in the real world of imperfect information, there is no conceptual reason to presume that markets are efficient. But arguing that imperfect information can offer a potential justification for government regulation doesn’t make a case that all or most government regulation is justified, especially given that the real-world government regulators labor with their own problems of political constraints and limited information. And indeed, while Stiglitz tends to favor an increase in US economic regulations in a number of specific areas, his vision of the economy always leaves a substantial role for private sector ownership, decision-making, and innovation.

Taylor sums up this confused state of affairs with two quotations. The first is from F. Scott Fitzgerald: “The true test of a first-rate mind is the ability to hold two contradictory ideas at the same time.” Taylor adds:

In this case, the contradictory ideas are that markets can often be a substantial improvement on government regulators, and government regulators can often be a substantial improvement on unconstrained market outcomes.

Taylor then quotes Joan Robinson: “[E]conomic theory, in itself, preaches no doctrines and cannot establish any universally valid laws. It is a method of ordering ideas and formulating questions.” And, if we are lucky, coming up with some conjectures that might answer those questions.

But before closing, I would add another quote from the paper by Bowles, Kirman and Sethi, which seems to me to penetrate to the core of the problem of imperfect information:

[W]e wish to call into question Hayek’s belief that his advocacy of free market policies follows as a matter of logic from his economic vision. The very usefulness of prices (and other economic variables) as informative messages—which is the centerpiece of Hayek’s economics—creates incentives to extract information from signals in ways that can be destabilizing. Markets can promote prosperity but can also generate crises. We will argue, accordingly, that a Hayekian understanding of the economy as an information-processing system does not support the type of policy positions that he favored. Thus, we find considerable lasting value in Hayek’s economic analysis while nonetheless questioning the connection of this analysis to his political philosophy.

My only quibble with their insightful comment is that Hayek’s political philosophy did not necessarily exclude a role for government intervention and regulation, provided that interventions and regulations satisfied appropriate procedural standards of generality and non-arbitrariness. Hayek’s main concern was not to make government small, but to subject all laws and regulations enacted by government to procedural conditions ensuring that the substantive content of legislation and regulation does not aim at achieving specific concrete objectives, e.g., a particular distribution of income or the advancement of a particular special interest, but at making markets function more smoothly and more predictably, e.g., by prohibiting anticompetitive or collusive agreements between business firms. In principle, measures such as guaranteeing a minimum income, or providing medical care, to all citizens, or prohibiting or taxing pollution by manufacturers or unduly risky behavior by financial institutions, are not incompatible with that philosophy. The advisability of any specific law or regulation would of course depend on an appropriate weighing of the expected costs and benefits of imposing such a law or regulation.

Hayek, Deflation and Nihilism: A Popperian Postscript

In my previous post about Hayek’s support for deflationary monetary policy in the early 1930s, I wrote that his support for deflation, in the hope that it would break the rigidities that (he thought) were blocking the relative-price adjustments whereby self-correcting market forces would induce a spontaneous recovery from the Great Depression, reminded me of the epigram attributed to Lenin: “you can’t make an omelet without breaking eggs.” I actually believed that that was a line that I had seen Karl Popper use somewhere. But in searching unsuccessfully for that quotation in Popper, I did find the following passage in Popper’s autobiography (Unended Quest), which seems to me to be worth reproducing. Popper describes the circumstances that led him, while still a teenager, to renounce his youthful Marxism.

The incident that turned me against communism, and that soon led me away from Marxism altogether, was one of the most important incidents in my life. It happened shortly before my seventeenth birthday. In Vienna, shooting broke out during a demonstration by unarmed young socialists who, instigated by the communists, tried to help some communists to escape who were under arrest in the central police station in Vienna. Several young socialist and communist workers were killed. I was horrified and shocked by the brutality of the police, but also by myself. For I felt that as a Marxist I bore part of the responsibility for the tragedy – at least in principle. Marxist theory demands that the class struggle be intensified, in order to speed up the coming of socialism. Its thesis is that although the revolution may claim some victims, capitalism is claiming more victims than the whole socialist revolution.

That was the Marxist theory – part of so-called “scientific socialism”. I now asked myself whether such a calculation could ever be supported by “science”. The whole experience, and especially this question, produced in me a life-long revulsion of feeling.

Communism is a creed which promises to bring about a better world. It claims to be based on knowledge: knowledge of the laws of historical development. I still hoped for a better world, a less violent and more just world, but I questioned whether I really knew – whether what I thought was knowledge was perhaps not more than mere pretence. I had, of course, read some Marx and Engels – but had I really understood it? Had I examined it critically, as anybody should do before he accepts a creed which justifies its means by a somewhat distant end?

I was shocked to have to admit to myself that not only had I accepted a complex theory somewhat uncritically, but that I had also actually noticed quite a bit of what was wrong, in the theory as well as in the practice of communism. But I had repressed this – partly out of loyalty to my friends, partly out of loyalty to “the cause”, and partly because there is a mechanism of getting oneself more and more deeply involved: once one has sacrificed one’s intellectual conscience over a minor point one does not wish to give in too easily; one wishes to justify the self-sacrifice by convincing oneself of the fundamental goodness of the cause, which is seen to outweigh any little moral or intellectual compromise that may be required. With every such moral or intellectual sacrifice one gets more deeply involved. One becomes ready to back one’s moral or intellectual investments in the cause with further investments. It is like being eager to throw good money after bad.

I saw how this mechanism had been working in my case, and I was horrified. I also saw it at work in others, especially my communist friends. And the experience enabled me to understand later many things which otherwise I would not have understood.

I had accepted a dangerous creed uncritically, dogmatically. The reaction made me first a sceptic; then it led me, though only for a very short time, to react against all rationalism. (As I found later, this is a typical reaction of a disappointed Marxist.)

By the time I was seventeen I had become an anti-Marxist. I realized the dogmatic character of the creed, and its incredible intellectual arrogance. It was a terrible thing to arrogate to oneself a kind of knowledge which made it a duty to risk  the lives of other people for an uncritically accepted dogma or for a dream which might turn out not to be realizable. (pp. 32-34)

Popper’s description of the process whereby emotional investment in a futile, but seemingly noble, cause leads to moral self-corruption is both chilling and frighteningly familiar to anyone paying attention to the news.

Hayek, Deflation and Nihilism

In the discussion about my paper on Hayek and intertemporal equilibrium at the HES meeting last month, Harald Hagemann suggested looking at Hansjorg Klausinger’s introductions to the two recently published volumes of Hayek’s Collected Works containing his writings (mostly from the 1920s and 1930s) about business-cycle theory in which he explores how Hayek’s attitude toward equilibrium analysis changed over time. But what I found most interesting in Klausinger’s introduction was his account of Hayek’s tolerant, if not supportive, attitude toward deflation — even toward what Hayek and other Austrians at the time referred to as “secondary deflation.” Some Austrians, notably Gottfried Haberler and Wilhelm Roepke, favored activist “reflationary” policies to counteract, and even reverse, secondary deflation. What did Hayek mean by secondary deflation? Here is how Klausinger (“Introduction” in Collected Works of F. A. Hayek: Business Cycles, Part II, pp. 5-6) explains the difference between primary and secondary deflation:

[A]ccording to Hayek’s theory the crisis is caused by a maladjustment in the structure of production typically initiated by a credit boom, such that the period of production (representing the capitalistic structure of production) is lengthened beyond what can be sustained by the rate of voluntary savings. The necessary reallocation of resources and its consequences give rise to crisis and depression. Thus, the “primary” cause of the crisis is a kind of “capital scarcity” while the depression represents an adjustment process by which the capital structure is adapted.

The Hayekian crisis or upper-turning point of the cycle occurs when banks are no longer willing or able to supply the funds investors need to finance their projects, causing business failures and layoffs of workers. The turning point is associated with distress sales of assets and goods, initiating a deflationary spiral. The collapse of asset prices and sell-off of inventories is the primary deflation, but at some point, the contraction may begin to feed on itself, and the contraction takes on a different character. That is the secondary deflation phase. But it is difficult to identify a specific temporal or analytic criterion by which to distinguish the primary from the secondary deflation.

Roepke and Haberler used the distinction – often referring to “depression” and “deflation” interchangeably – to denote two phases of the cycle. The primary depression is characterized by the reactions to the disproportionalities of the boom, and accordingly an important cleansing function is ascribed to it; thus it is necessary to allow the primary depression to run its course. In contrast, the secondary depression refers to a self-feeding, cumulative process, not causally connected with the disproportionality that the primary depression is designed to correct. Thus the existence of the secondary depression opens up the possibility of a phase of depression dysfunctional to the economic system, where an expansionist policy might be called for. (Id. p. 6)

Despite conceding that there is a meaningful distinction between a primary and secondary deflation that might justify monetary expansion to counteract the latter, Hayek consistently opposed monetary expansion during the 1930s. The puzzle of Hayek’s opposition to monetary expansion, even at the bottom of the Great Depression, is compounded if we consider his idea of neutral money as a criterion for a monetary policy with no distorting effect on the price system. That idea can be understood in terms of the simple MV=PQ equation. Hayek argued that the proper criterion for neutral money was neither, as some had suggested, a constant quantity of money (M), nor, as others had suggested, a constant price level (P), but constant total spending (MV). But for MV to be constant, M must increase or decrease just enough to offset any change in V, the velocity of circulation, which is the inverse of the fraction of income that the public holds in the form of money. Thus, if MV is constant, the quantity of money is increasing or decreasing by just as much as the amount of money the public wants to hold is increasing or decreasing.
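A toy numerical illustration (my own construction, not drawn from Hayek or Klausinger) may make the neutral-money criterion concrete: if the public’s demand to hold money rises, velocity falls, and keeping MV constant requires an offsetting expansion of the money stock.

```python
# Hedged sketch of Hayek's neutral-money criterion MV = PQ:
# hold total spending (MV) constant by offsetting any change in velocity V.

def neutral_money_stock(mv_target, velocity):
    """Quantity of money M that keeps total spending M*V equal to mv_target."""
    return mv_target / velocity

# Initial position: M = 100, V = 4, so total spending MV = 400.
mv_target = 100 * 4

# Suppose the public wants to hold more money, so velocity falls to 3.2.
# Neutrality then requires the money stock to expand, not stay constant.
m_new = neutral_money_stock(mv_target, 3.2)
print(m_new)  # 125.0 -- a 25% expansion of M is needed to keep MV constant
```

On this criterion, a collapse of spending like the 50% decline of 1929-33 would have called for aggressive monetary expansion, which is precisely the tension with Hayek’s policy stance discussed below.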

The neutral-money criterion led Hayek to denounce the US Federal Reserve for a policy that kept the average level of prices essentially stable from 1922 to 1929, arguing that rapid economic growth should have been accompanied by falling, not stable, prices, in line with his neutral-money criterion. The monetary expansion necessary to keep prices stable had, in Hayek’s view, led to a distortion of relative prices, causing an overextension of the capital structure of production, which was the ultimate cause of the 1929 downturn that triggered the Great Depression. But once the downturn started to accelerate, causing aggregate spending to decline by 50% between 1929 and 1933, Hayek uttered not a single word of protest against a monetary policy that was in flagrant violation of his own neutral-money criterion. On the contrary, Hayek wrote an impassioned defense of the insane gold-accumulation policy of the Bank of France, which, along with the US Federal Reserve, was chiefly responsible for the decline in aggregate spending.

In an excellent paper, Larry White has recently discussed Hayek’s pro-deflationary stance in the 1930s, absolving Hayek from responsibility for the policy errors of the 1930s on the grounds that the Federal Reserve Board and the Hoover Administration had been influenced not by Hayek, but by a different strand of pro-deflationary thinking, while pointing out that Hayek’s own theory of monetary policy, had he followed it consistently, would have led him to support monetary expansion during the 1930s to prevent any decline in aggregate spending. White may be correct in saying that policy makers paid little if any attention to Hayek’s pro-deflation policy advice. But Hayek’s policy advice was what it was: relentlessly pro-deflation.

Why did Hayek offer policy advice so blatantly contradicted by his own neutral-money criterion? White suggests that the reason was that Hayek viewed deflation as potentially beneficial if it would break the rigidities obstructing adjustments in relative prices. It was the lack of relative-price adjustments that, in Hayek’s view, caused the depression. Here is how Hayek (“The Present State and Immediate Prospects of the Study of Industrial Fluctuations” in Collected Works of F. A. Hayek: Business Cycles, Part II, pp. 171-79) put it:

The analysis of the crisis shows that, once an excessive increase of the capital structure has proved insupportable and has led to a crisis, profitability of production can be restored only by considerable changes in relative prices, reductions of certain stocks, and transfers of the means of production to other uses. In connection with these changes, liquidations of firms in a purely financial sense of the word may be inevitable, and their postponement may possibly delay the process of liquidation in the first, more general sense; but this is a separate and special phenomenon which in recent discussions has been stressed rather excessively at the expense of the more fundamental changes in prices, stocks, etc. (Id. pp. 175-76)

Hayek thus draws a distinction between two possible interpretations of liquidation, noting that widespread financial bankruptcy is not necessary for liquidation in the economic sense, an important distinction. Continuing with the following argument about rigidities, Hayek writes:

A theoretical problem of great importance which needs to be elucidated in this connection is the significance, for this process of liquidation, of the rigidity of prices and wages, which since the great war has undoubtedly become very considerable. There can be little question that these rigidities tend to delay the process of adaptation and that this will cause a “secondary” deflation which at first will intensify the depression but ultimately will help to overcome those rigidities. (Id. p. 176)

It is worth noting that Hayek’s assertion that the intensification of the depression would help to overcome the rigidities is an unfounded and unsupported supposition. Moreover, the notion that increased price flexibility in a depression would actually promote recovery has a flimsy theoretical basis, because, even if an equilibrium does exist in an economy dislocated by severe maladjustments — the premise of Austrian cycle theory — the notion that price adjustments are all that’s required for recovery can’t be proven even under the assumption of Walrasian tatonnement, much less under the assumption of incomplete markets with trading at non-equilibrium prices. The intuitively appealing notion that markets self-adjust is an extrapolation from Marshallian partial-equilibrium analysis in which the disequilibrium of a single market is analyzed under the assumption that all other markets remain in equilibrium. The assumption of approximate macroeconomic equilibrium is a necessary precondition for the partial-equilibrium analysis to show that a single (relatively small) market reverts to equilibrium after a disturbance. In the general case in which multiple markets are simultaneously disturbed from an initial equilibrium, it can’t be shown that price adjustments based on excess demands in individual markets lead to the restoration of equilibrium.

The main problem in this connection, on which opinions are still diametrically opposed, are, firstly, whether this process of deflation is merely an evil which has to be combated, or whether it does not serve a necessary function in breaking these rigidities, and, secondly, whether the persistence of these deflationary tendencies proves that the fundamental maladjustment of prices still exists, or whether, once that process of deflation has gathered momentum, it may not continue long after it has served its initial function. (Id.)

Unable to demonstrate that deflation was not exacerbating economic conditions, Hayek justified tolerating further deflation, as White acknowledged, with the hope that it would break the “rigidities” preventing the relative-price adjustments that he felt were necessary for recovery. Lacking a solid basis in economic theory, Hayek’s support for deflation to break rigidities in relative-price adjustment invites evaluation in ideological terms. Conceding that monetary expansion might increase employment, Hayek may have been disturbed by the prospect that an expansionary monetary policy would be credited for having led to a positive outcome, thereby increasing the chances that inflationary policies would be adopted under less extreme conditions. Hayek therefore appears to have supported deflation as a means to accomplish a political objective – breaking politically imposed and supported rigidities in prices – he did not believe could otherwise be accomplished.

Such a rationale, I am sorry to say, reminds me of Lenin’s famous saying that you can’t make an omelet without breaking eggs. Which is to say, that in order to achieve a desired political outcome, Hayek was prepared to support policies that he had good reason to believe would increase the misery and suffering of a great many people. I don’t accuse Hayek of malevolence, but I do question the judgment that led him to such a conclusion. In Fabricating the Keynesian Revolution, David Laidler described Hayek’s policy stance in the 1930s as extreme pessimism verging on nihilism. But in supporting deflation as a means to accomplish a political end, Hayek clearly seems to have crossed over the line separating pessimism from nihilism.

In fairness to Hayek, it should be noted that he eventually acknowledged and explicitly disavowed his early pro-deflation stance.

I am the last to deny – or rather, I am today the last to deny – that, in these circumstances, monetary counteractions, deliberate attempts to maintain the money stream, are appropriate.

I probably ought to add a word of explanation: I have to admit that I took a different attitude forty years ago, at the beginning of the Great Depression. At that time I believed that a process of deflation of some short duration might break the rigidity of wages which I thought was incompatible with a functioning economy. Perhaps I should have even then understood that this possibility no longer existed. . . . I would no longer maintain, as I did in the early ‘30s, that for this reason, and for this reason only, a short period of deflation might be desirable. Today I believe that deflation has no recognizable function whatever, and that there is no justification for supporting or permitting a process of deflation. (A Discussion with Friedrich A. Von Hayek: Held at the American Enterprise Institute on April 9, 1975, p. 5)

Responding to a question about “secondary deflation” from his old colleague and friend, Gottfried Haberler, Hayek went on to elaborate:

The moment there is any sign that the total income stream may actually shrink, I should certainly not only try everything in my power to prevent it from dwindling, but I should announce beforehand that I would do so in the event the problem arose. . .

You ask whether I have changed my opinion about combating secondary deflation. I do not have to change my theoretical views. As I explained before, I have always thought that deflation had no economic function; but I did once believe, and no longer do, that it was desirable because it could break the growing rigidity of wage rates. Even at that time I regarded this view as a political consideration; I did not think that deflation improved the adjustment mechanism of the market. (Id. pp. 12-13)

I am not sure that Hayek’s characterization of his early views is totally accurate. Although he may indeed have believed that a short period of deflation would be enough to break the rigidities that he found so troublesome, he never spoke out against deflation, even as late as 1932, more than two years after the start of deflation at the end of 1929. But on the key point Hayek was perfectly candid: “I regarded this view as a political consideration.”

This harrowing episode seems worth recalling now, as the U.S. Senate is about to make decisions about the future of the highly imperfect American health care system, and many are explicitly advocating taking steps calculated to make the system (or substantial parts of it) implode or enter a “death spiral” for the express purpose of achieving a political/ideological objective. Policy-making and nihilism are a toxic mix, as we learned in the 1930s with such catastrophic results. Do we really need to be taught that lesson again?

What’s Wrong with the Price-Specie-Flow Mechanism? Part I

The tortured intellectual history of the price-specie-flow mechanism (PSFM), which received its classic exposition in an essay (“Of the Balance of Trade”) by David Hume about 275 years ago is not a history that, properly understood, provides solid grounds for optimism about the chances for progress in what we, somewhat credulously, call economic science. In brief, the price-specie-flow mechanism asserts that, under a gold or commodity standard, deviations between the price levels of those countries on the gold standard induce gold to be shipped from countries where prices are relatively high to countries where prices are relatively low, the gold flows continuing until price levels are equalized. Hence, the compound adjective “price-specie-flow,” signifying that the mechanism is set in motion by price-level differences that induce gold (specie) flows.

The PSFM is thus premised on a version of the quantity theory of money in which price levels in each country on the gold standard are determined by the quantity of money circulating in that country. In his account, Hume assumed that money consists entirely of gold, so that he could present a scenario of disturbance and re-equilibration strictly in terms of changes in the amount of gold circulating in each country. Inasmuch as Hume held a deeply hostile attitude toward banks, believing them to be essentially inflationary engines of financial disorder, subsequent interpretations of the PSFM had to struggle to formulate a more general theoretical account of international monetary adjustment to accommodate the presence of the fractional-reserve banking so detested by Hume and to devise an institutional framework that would facilitate operation of the adjustment mechanism under a fractional-reserve-banking system.

In previous posts on this blog (e.g., here, here and here) and in a recent article on the history of the (misconceived) distinction between rules and discretion, I’ve discussed the role played by the PSFM in one not very successful attempt at monetary reform, the English Bank Charter Act of 1844. The Bank Charter Act was intended to ensure the maintenance of monetary equilibrium by reforming the English banking system so that it would operate the way Hume described it in his account of the PSFM. However, despite the failings of the Bank Charter Act, the general confusion about monetary theory and policy that has beset economic theory for over two centuries has allowed PSFM to retain an almost canonical status, so that it continues to be widely regarded as the basic positive and normative model of how the classical gold standard operated. Using the PSFM as their normative model, monetary “experts” came up with the idea that, in countries with gold inflows, monetary authorities should reduce interest rates (i.e., lending rates to the banking system) causing monetary expansion through the banking system, and, in countries losing gold, the monetary authorities should do the opposite. These vague maxims, described as the “rules of the game,” gave only directional guidance about how to respond to an increase or decrease in gold reserves, thereby avoiding the strict numerical rules, and resulting financial malfunctions, prescribed by the Bank Charter Act.

In his 1932 defense of the insane gold-accumulation policy of the Bank of France, Hayek posited an interpretation of what the rules of the game required that oddly mirrored the strict numerical rules of the Bank Charter Act, insisting that, having increased the quantity of banknotes by about as much as its gold reserves had increased after restoration of the gold convertibility of the franc, the Bank of France had done all that the “rules of the game” required it to do. In fairness to Hayek, I should note that decades after his misguided defense of the Bank of France, he was sharply critical of the Bank Charter Act. At any rate, the episode indicates how indefinite the “rules of the game” actually were as a guide to policy. And, for that reason alone, it is not surprising that evidence that the rules of the game were followed during the heyday of the gold standard (roughly 1880 to 1914) is so meager. But the main reason for the lack of evidence that the rules of the game were actually followed is that the PSFM, whose implementation the rules of the game were supposed to guarantee, was a theoretically flawed misrepresentation of the international-adjustment mechanism under the gold standard.

Until my second year of graduate school (1971-72), I had accepted the PSFM as a straightforward implication of the quantity theory of money, endorsed by such luminaries as Hayek, Friedman and Jacob Viner. I had taken Axel Leijonhufvud’s graduate macro class in my first year, so in my second year I audited Earl Thompson’s graduate macro class in which he expounded his own unique approach to macroeconomics. One of the first eye-opening arguments that Thompson made was to deny that the quantity theory of money is relevant to an economy on the gold standard, the kind of economy (allowing for silver and bimetallic standards as well) that classical economics, for the most part, dealt with. It was only after the Great Depression that fiat money was widely accepted as a viable long-term monetary system rather than a mere temporary wartime expedient.

What determines the price level for a gold-standard economy? Thompson’s argument was simple. The value of gold is determined relative to every other good in the economy by exactly the same forces of supply and demand that determine relative prices for every other real good. If gold is the standard, or numeraire, in terms of which all prices are quoted, then the nominal price of gold is one (the relative price of gold in terms of itself). A unit of currency is specified as a certain quantity of gold, so the price level measured in terms of the currency unit varies inversely with the value of gold. The amount of money in such an economy will correspond to the amount of gold, or, more precisely, to the amount of gold that people want to devote to monetary, as opposed to real (non-monetary), uses. But financial intermediaries (banks) will offer to exchange IOUs convertible on demand into gold for IOUs of individual agents. The IOUs of banks have the property that they are accepted in exchange, unlike the IOUs of individual agents which are not accepted in exchange (not strictly true as bills of exchange have in the past been widely accepted in exchange). Thus, the amount of money (IOUs payable on demand) issued by the banking system depends on how much money, given the value of gold, the public wants to hold; whenever people want to hold more money than they have on hand, they obtain additional money by exchanging their own IOUs – not accepted in payment — with a bank for a corresponding amount of the bank’s IOUs – which are accepted in payment.

Thus, the simple monetary theory that corresponds to a gold standard starts with a value of gold determined by real factors. Given the public’s demand to hold money, the banking system supplies whatever quantity of money is demanded by the public at a price level corresponding to the real value of gold. This monetary theory is a theory of an ideal banking system producing a competitive supply of money. It is the basic monetary paradigm of Adam Smith and a significant group of subsequent monetary theorists who formed the Banking School (and also the Free Banking School) that opposed the Currency School doctrine that provided the rationale for the Bank Charter Act. The model is highly simplified and based on assumptions that aren’t necessarily fulfilled always or even at all in the real world. The same qualification applies to all economic models, but the realism of the monetary model is certainly open to question.

So under the ideal gold-standard model described by Thompson, what was the mechanism of international monetary adjustment? All countries on the gold standard shared a common price level, because, under competitive conditions, prices for any tradable good at any two points in space can deviate by no more than the cost of transporting that product from one point to the other. If geographic price differences are constrained by transportation costs, then the price effects of an increased quantity of gold at any location cannot be confined to prices at that location; arbitrage spreads the price effect at one location across the whole world. So the basic premise underlying the PSFM — that price differences across space resulting from any disturbance to the equilibrium distribution of gold would trigger equilibrating gold shipments to equalize prices — is untenable; price differences between any two points are always constrained by the cost of transportation between those points, whatever the geographic distribution of gold happens to be.
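The arbitrage constraint at the heart of this argument can be sketched numerically (my own toy example, not Thompson’s): a tradable good’s local price is clamped into a band around the price at any other location, with the band’s width set by transport costs, regardless of where gold happens to be.

```python
# Hedged sketch of the arbitrage constraint on local gold-standard prices:
# a tradable good's price at two locations can differ by no more than the
# cost of shipping the good between them.

def arbitraged_price(local_price, other_price, transport_cost):
    """Clamp a local price into the band set by arbitrage with another market."""
    lower = other_price - transport_cost   # below this, exporters bid price up
    upper = other_price + transport_cost   # above this, importers bid price down
    return max(lower, min(local_price, upper))

# A local gold inflow cannot push the local price of wheat above the world
# price plus shipping: traders would simply import wheat until it fell back.
world_price = 100.0
shipping = 5.0
print(arbitraged_price(112.0, world_price, shipping))  # 105.0
print(arbitraged_price(92.0, world_price, shipping))   # 95.0
print(arbitraged_price(101.0, world_price, shipping))  # 101.0 (inside the band)
```

This is why a local increase in the quantity of gold spreads its price effect worldwide rather than producing the persistent local price divergence that the PSFM presupposes.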

Aside from the theoretical point that there is a single world price level – actually it’s more correct to call it a price band reflecting the range of local price differences consistent with arbitrage — that exists under the gold standard, so that the idea that local prices vary in proportion to the local money stock is inconsistent with standard price theory, Thompson also provided an empirical refutation of the PSFM. According to the PSFM, when gold is flowing into one country and out of another, the price levels in the two countries should move in opposite directions. But the evidence shows that price-level changes in gold-standard countries were highly correlated even when gold flows were in the opposite direction. Similarly, if PSFM were correct, cyclical changes in output and employment should have been correlated with gold flows, but no such correlation between cyclical movements and gold flows is observed in the data. It was on this theoretical foundation that Thompson built a novel — except that Hawtrey and Cassel had anticipated him by about 50 years — interpretation of the Great Depression as a deflationary episode caused by a massive increase in the demand for gold between 1929 and 1933, in contrast to Milton Friedman’s narrative that explained the Great Depression in terms of massive contraction in the US money stock between 1929 and 1933.

Thompson’s ideas about the gold standard, which he had been working on for years before I encountered them, were in the air, and it wasn’t long before I encountered them in the work of Harry Johnson, Bob Mundell, Jacob Frenkel and others at the University of Chicago who were then developing what came to be known as the monetary approach to the balance of payments. Not long after leaving UCLA in 1976 for my first teaching job, I picked up a volume edited by Johnson and Frenkel with the catchy title The Monetary Approach to the Balance of Payments. I studied many of the papers in the volume, but only two made a lasting impression, the first by Johnson and Frenkel “The Monetary Approach to the Balance of Payments: Essential Concepts and Historical Origins,” and the last by McCloskey and Zecher, “How the Gold Standard Really Worked.” Reinforcing what I had learned from Thompson, the papers provided a deeper understanding of the relevant history of thought on the international-monetary-adjustment  mechanism, and the important empirical and historical evidence that contradicts the PSFM. I also owe my interest in Hawtrey to the Johnson and Frenkel paper which cites Hawtrey repeatedly for many of the basic concepts of the monetary approach, especially the existence of a single arbitrage-constrained international price level under the gold standard.

When I attended the History of Economics Society Meeting in Toronto a couple of weeks ago, I had the  pleasure of meeting Deirdre McCloskey for the first time. Anticipating that we would have a chance to chat, I reread the 1976 paper in the Johnson and Frenkel volume and a follow-up paper by McCloskey and Zecher (“The Success of Purchasing Power Parity: Historical Evidence and Its Implications for Macroeconomics“) that appeared in a volume edited by Michael Bordo and Anna Schwartz, A Retrospective on the Classical Gold Standard. We did have a chance to chat and she did attend the session at which I talked about Friedman and the gold standard, but regrettably the chat was not a long one, so I am going to try to keep the conversation going with this post, and the next one in which I will discuss the two McCloskey and Zecher papers and especially the printed comment to the later paper that Milton Friedman presented at the conference for which the paper was written. So stay tuned.

PS Here are links to Thompson’s essential papers on monetary theory, “The Theory of Money and Income Consistent with Orthodox Value Theory” and “A Reformulation of Macroeconomic Theory” about which I have written several posts in the past. And here is a link to my paper “A Reinterpretation of Classical Monetary Theory” showing that Earl’s ideas actually captured much of what classical monetary theory was all about.

The 2017 History of Economics Society Conference in Toronto

I arrived in Toronto last Thursday for the History of Economics Society Meeting at the University of Toronto (Trinity College to be exact) to give talks on Friday about two papers, one of which (“Hayek and Three Equilibrium Concepts: Sequential, Temporary and Rational Expectations”) I have been posting over the past few weeks on this blog (here, here, here, here, and here). I want to thank those of you who have posted your comments, which have been very helpful, and apologize for not responding to the more recent comments. The other paper about which I gave a talk was based, in part, on a post from three years ago (“Real and Pseudo Gold Standards: Did Friedman Know the Difference?”).

Here I am talking about Friedman.

Here are the abstracts of the two papers:

“Hayek and Three Equilibrium Concepts: Sequential, Temporary, and Rational Expectations”

Almost 40 years ago, Murray Milgate (1979) drew attention to the neglected contribution of F. A. Hayek to the concept of intertemporal equilibrium, which had previously been associated with Erik Lindahl and J. R. Hicks. Milgate showed that although Lindahl had developed the concept of intertemporal equilibrium independently, Hayek’s original 1928 contribution was published before Lindahl’s and that, curiously, Hicks in Value and Capital had credited Lindahl with having developed the concept despite having been Hayek’s colleague at LSE in the early 1930s and having previously credited Hayek for the idea of intertemporal equilibrium. Aside from Milgate’s contribution, few developments of the idea of intertemporal equilibrium have adequately credited Hayek’s contribution. This paper attempts to compare three important subsequent developments of that idea with Hayek’s 1937 refinement of the key idea of his 1928 paper. In non-chronological order, the three developments of interest are: 1) Radner’s model of sequential equilibrium with incomplete markets as an alternative to the Arrow-Debreu-McKenzie model of full equilibrium with complete markets; 2) Hicks’s temporary equilibrium model; and 3) the Muth-Lucas rational expectations model. While Hayek’s 1937 treatment most closely resembles Radner’s sequential equilibrium model, which Radner, echoing Hayek, describes as an equilibrium of plans, prices, and price expectations, Hicks’s temporary equilibrium model seems to be the natural development of Hayek’s approach. The Muth-Lucas rational-expectations model, however, develops the concept of intertemporal equilibrium in a way that runs counter to the fundamental Hayekian insight about the nature of intertemporal equilibrium.

“Milton Friedman and the Gold Standard”

Milton Friedman discussed the gold standard in a number of works. His two main discussions of the gold standard appear in a 1951 paper on commodity-reserve currencies and in a 1961 paper on real and pseudo gold standards. In the 1951 paper, he distinguished between a gold standard in which only gold or warehouse certificates to equivalent amounts of gold circulated as a medium of exchange and one in which mere fiduciary claims to gold also circulated as media of exchange. Friedman called the former a strict gold standard and the latter a partial gold standard. In the later paper, he distinguished between a gold standard in which gold is used as money, and a gold standard in which the government merely fixes the price of gold, dismissing the latter as a “pseudo” gold standard. In this paper, I first discuss the origin of the strict/partial distinction, an analytical error, derived from David Hume via the nineteenth-century Currency School, about the incentives of banks to overissue convertible claims to base money, which inspired the Chicago plan for 100-percent reserve banking. I then discuss the real/pseudo distinction and argue that it was primarily motivated by the ideological objective of persuading libertarian and classical-liberal supporters of the gold standard to support a fiat standard supplemented by the k-percent quantity rule that Friedman was about to propose.

And here is my concluding section from the Friedman paper:

Milton Friedman’s view of the gold standard was derived from his mentors at the University of Chicago, an inheritance that, in a different context, he misleadingly described as the Chicago oral tradition. The Chicago view of the gold standard was, in turn, derived from the English Currency School of the mid-nineteenth century, which successfully promoted the enactment of the Bank Charter Act of 1844, imposing a 100-percent marginal reserve requirement on the banknotes issued by the Bank of England, and served as a model for the Chicago Plan for 100-percent-reserve banking. The Currency School, in turn, based its proposals for reform on the price-specie-flow analysis of David Hume (1752).

The pure quantity-theoretic lineage of Friedman’s views of the gold standard and the intellectual debt that he owed to the Currency School and the Bank Charter Act disposed him to view the gold standard as nothing more than a mechanism for limiting the quantity of money. If the really compelling purpose and justification of the gold standard was to provide a limitation on the capacity of a government or a monetary authority to increase the quantity of money, then there was nothing special or exceptional about the gold standard.

I have no interest in exploring the reasons why supporters of, and true believers in, the gold standard feel a strong ideological or emotional attachment to that institution, and even if I had such an interest, this would not be the place to enter into such an exploration, but I conjecture that the sources of that attachment to the gold standard go deeper than merely to provide a constraint on the power of the government to increase the quantity of money.

But from Friedman’s quantity-theoretical perspective, if the primary virtue of the gold standard was that it served to limit the ability of the government to increase the quantity of money, then any other institution capable of performing that service would serve just as well as the gold standard. The lesson that Friedman took from the efforts of the Currency School to enact the Bank Charter Act was that the gold standard, on its own, did not provide a sufficient constraint on the ability of private banks to increase the quantity of money. Otherwise, the 100-percent marginal reserve requirement of the Bank Charter Act would have been unnecessary.

Now if the gold standard could not function well without additional constraints on the quantity of money, then obviously the constraint on the quantity of money that really matters is not the gold standard itself, but the 100-percent marginal reserve requirement imposed on the banking system. But if the relevant constraint on the quantity of money is the 100 percent marginal reserve requirement, then the gold standard is really just excess baggage.

That was the view of Henry Simons and the other authors of the Chicago Plan. For a long time, Friedman accepted the Chicago Plan as the best prescription for monetary stability, but at about the time that he was writing his paper on real and pseudo gold standards, Friedman was coming to the position that a k-percent rule would be a superior alternative to the old Chicago Plan. His paper on pseudo gold standards for the Mont Pelerin Society was his initial attempt to persuade his libertarian and classical-liberal friends and colleagues to reconsider their support for the gold standard and prepare the ground for the k-percent rule that he was about to offer. But in his ideological enthusiasm he, in effect, denied the reality of the historical gold standard.

Aside from getting to talk about my papers, the other highlights of the HES meeting for me included the opportunity to renew a very old acquaintance with the eminent Samuel Hollander, whom I met about 35 years ago at the first History of Economics Society meeting I ever attended; making the acquaintance for the first time of the eminent Deirdre McCloskey, who was at both of my sessions; and meeting the eminent E. Roy Weintraub, who has been doing important research on my illustrious cousin Abraham Wald, the first to prove the existence of a competitive equilibrium, almost 20 years before Arrow, Debreu and McKenzie came up with their proofs. Doing impressive and painstaking historical research, Weintraub found a paper, long thought to have been lost, in which Wald, using the fixed-point theorem that Arrow, Debreu and McKenzie had independently used in their proofs, gave a more general existence proof than he had provided in his published existence proofs, clearly establishing Wald’s priority over Arrow, Debreu and McKenzie in proving the existence of general equilibrium.

HT: Rebeca Betancourt

 

Hayek and Rational Expectations

In this, my final, installment on Hayek and intertemporal equilibrium, I want to focus on a particular kind of intertemporal equilibrium: rational-expectations equilibrium. In his discussions of intertemporal equilibrium, Roy Radner assigns a meaning to the term “rational-expectations equilibrium” very different from the meaning normally associated with that term. Radner describes a rational-expectations equilibrium as the equilibrium that results when some agents are able to make inferences about the beliefs held by other agents when observed prices differ from what they had expected prices to be. Agents attribute the differences between observed and expected prices to information held by agents better informed than themselves, and revise their own expectations accordingly in light of the information that would have justified the observed prices.

In the early 1950s, one very rational agent, Armen Alchian, was able to figure out what chemicals were being used in making the newly developed hydrogen bomb by identifying companies whose stock prices had risen too rapidly to be explained otherwise. Alchian, who spent almost his entire career at UCLA while also moonlighting at the nearby Rand Corporation, wrote a paper for Rand in which he listed the chemicals used in making the hydrogen bomb. When people at the Defense Department heard about the paper – the Rand Corporation was started as a think tank largely funded by the Department of Defense to do research that the Defense Department was interested in – they went to Alchian and confiscated and destroyed the paper. Joseph Newhard recently wrote a paper about this episode in the Journal of Corporate Finance. Here’s the abstract:

At RAND in 1954, Armen A. Alchian conducted the world’s first event study to infer the fuel material used in the manufacturing of the newly-developed hydrogen bomb. Successfully identifying lithium as the fusion fuel using only publicly available financial data, the paper was seen as a threat to national security and was immediately confiscated and destroyed. The bomb’s construction being secret at the time but having since been partially declassified, the nuclear tests of the early 1950s provide an opportunity to observe market efficiency through the dissemination of private information as it becomes public. I replicate Alchian’s event study of capital market reactions to the Operation Castle series of nuclear detonations in the Marshall Islands, beginning with the Bravo shot on March 1, 1954 at Bikini Atoll which remains the largest nuclear detonation in US history, confirming Alchian’s results. The Operation Castle tests pioneered the use of lithium deuteride dry fuel which paved the way for the development of high yield nuclear weapons deliverable by aircraft. I find significant upward movement in the price of Lithium Corp. relative to the other corporations and to DJIA in March 1954; within three weeks of Castle Bravo the stock was up 48% before settling down to a monthly return of 28% despite secrecy, scientific uncertainty, and public confusion surrounding the test; the company saw a return of 461% for the year.

Radner also showed that the ability of some agents to infer the information that is leading other agents to bid prices away from the prices that had been expected does not necessarily lead to an equilibrium. The process of revising expectations in light of observed prices may not converge on a shared set of expectations of the future based on commonly shared knowledge.

So rather than pursue Radner’s conception of rational expectations, I will focus here on the conventional understanding of “rational expectations” in modern macroeconomics, which is that the price expectations formed by the agents in a model should be consistent with what the model itself predicts that those future prices will be. In this very restricted sense, I believe rational expectations is a very important property that any model ought to have. It simply says that a model ought to have the property that if one assumes that the agents in a model expect the equilibrium predicted by the model, then, given those expectations, the solution of the model will turn out to be the equilibrium of the model. This property is a consistency and coherence property that any model, regardless of its substantive predictions, ought to have. If a model lacks this property, there is something wrong with the model.

But there is a huge difference between saying that a model should have the property that correct expectations are self-fulfilling and saying that agents are in fact capable of predicting the equilibrium of the model. Assuming the former does not entail the latter. What kind of crazy model would have the property that correct expectations are not self-fulfilling? I mean, think about it: a model in which correct expectations are not self-fulfilling is a nonsense model.

But demanding that a model not spout gibberish is very different from insisting that the agents in the model necessarily have the capacity to predict what the equilibrium of the model will be. Rational expectations in the first sense is a minimal consistency property of an economic model; rational expectations in the latter sense is an empirical assertion about the real world. You can make such an assumption if you want, but you can’t claim that it is a property of the real world. Whether it is a property of the real world is a matter of fact, not a matter of methodological fiat. But methodological fiat is what rational expectations has become in macroeconomics.

In his 1937 paper on intertemporal equilibrium, Hayek was very clear that correct expectations are logically implied by the concept of an equilibrium of plans extending through time. But correct expectations are not a necessary, or even descriptively valid, characteristic of reality. Hayek also conceded that we don’t even have an explanation in theory of how correct expectations come into existence. He merely alluded to the empirical observation – perhaps not the most accurate description of empirical reality in 1937 – that there is an observed general tendency for markets to move toward equilibrium, implying that over time expectations do tend to become more accurate.

It is worth pointing out that when the idea of rational expectations was introduced by John Muth in the early 1960s, he did so in the context of partial-equilibrium models in which the rational expectation in the model was the rational expectation of the equilibrium price in a particular market. The motivation for Muth to introduce the idea of a rational expectation was the idea of a cobweb cycle in which producers simply assume that the current price will remain at whatever level currently prevails. If there is a time lag in production, as in agricultural markets between the initial application of inputs and the final yield of output, it is easy to generate an alternating sequence of boom and bust, with current high prices inducing increased output in the following period, driving prices down, thereby inducing low output and high prices in the next period, and so on.

Muth argued that rational producers would not respond to price signals in a way that led to consistently mistaken expectations, but would instead base their price expectations on a realistic assessment of what future prices would turn out to be. In his microeconomic work on rational expectations, Muth showed that the rational-expectations assumption was a better predictor of observed prices than the assumption of static expectations underlying the traditional cobweb-cycle model. So Muth’s rational-expectations assumption was based on a realistic conjecture about how real-world agents actually form expectations. In that sense, Muth’s assumption was consistent with Hayek’s conjecture that there is an empirical tendency for markets to move toward equilibrium.
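The contrast between static and rational expectations can be made concrete with a minimal sketch of the cobweb model. The Python below uses made-up linear demand and supply schedules and parameter values of my own; nothing here is drawn from Muth’s paper.

```python
# Cobweb model: demand P_t = a - b*Q_t, supply Q_t = c + d*E[P_t].
# All parameter values below are illustrative assumptions, not estimates.

def cobweb_path(a, b, c, d, p0, periods):
    """Price path under static expectations: producers expect last period's price."""
    prices = [p0]
    for _ in range(periods):
        q = c + d * prices[-1]       # supply chosen on the basis of last period's price
        prices.append(a - b * q)     # market-clearing price given that supply
    return prices

def rational_expectation_price(a, b, c, d):
    """The self-fulfilling price: if producers expect it, it is exactly realized."""
    return (a - b * c) / (1 + b * d)

a, b, c, d = 10.0, 0.5, 2.0, 0.8     # b*d < 1, so the oscillations damp out
path = cobweb_path(a, b, c, d, p0=8.0, periods=20)
p_star = rational_expectation_price(a, b, c, d)

# Static expectations generate the boom-bust alternation described above:
# prices overshoot and undershoot p_star in successive periods, while the
# rational expectation is the fixed point of the price dynamics.
print(round(p_star, 2))              # ~6.43
print([round(p, 2) for p in path[:4]])
```

With b*d > 1 the same recursion produces exploding rather than damping oscillations, which is one way of seeing why static expectations struck Muth as an implausible description of rational producers.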

So while Muth’s introduction of the rational-expectations hypothesis was an empirically progressive theoretical innovation, extending rational-expectations into the domain of macroeconomics has not been empirically progressive, rational expectations models having consistently failed to generate better predictions than macro-models using other expectational assumptions. Instead, a rational-expectations axiom has been imposed as part of a spurious methodological demand that all macroeconomic models be “micro-founded.” But the deeper point – a point that Hayek understood better than perhaps anyone else — is that there is a huge difference in kind between forming rational expectations about a single market price and forming rational expectations about the vector of n prices on the basis of which agents are choosing or revising their optimal intertemporal consumption and production plans.

It is one thing to assume that agents have some expert knowledge about the course of future prices in the particular markets in which they participate regularly; it is another thing entirely to assume that they have knowledge sufficient to forecast the course of all future prices and in particular to understand the subtle interactions between prices in one market and the apparently unrelated prices in another market. The former kind of knowledge is knowledge that expert traders might be expected to have; the latter kind of knowledge is knowledge that would be possessed by no one but a nearly omniscient central planner, whose existence was shown by Hayek to be a practical impossibility.

Standard macroeconomic models are typically so highly aggregated that the extreme nature of the rational-expectations assumption is effectively suppressed. To treat all output as a single good (which involves treating the single output as both a consumption good and a productive asset generating a flow of productive services) effectively imposes the assumption that the only relative price that can ever change is the wage, so that all but one future relative price is known in advance. That assumption effectively assumes away the problem of incorrect expectations except for two variables: the future price level and the future productivity of labor (owing to the productivity shocks so beloved of Real Business Cycle theorists). Having eliminated all complexity from their models, modern macroeconomists, purporting to solve micro-founded macromodels, simply assume that there is but one, or at most two, variables about which agents have to form their rational expectations.

Four score years after Hayek explained how challenging the notion of intertemporal equilibrium really is, and how difficult it is to explain any empirical tendency toward intertemporal equilibrium, modern macroeconomics has succeeded in assuming all those difficulties out of existence. Many macroeconomists feel rather proud of what modern macroeconomics has achieved. I am not quite as impressed as they are.

Hayek and Temporary Equilibrium

In my three previous posts (here, here, and here) about intertemporal equilibrium, I have been emphasizing that the defining characteristic of an intertemporal equilibrium is that agents all share the same expectations of future prices – or at least the same expectations of those future prices on which they are basing their optimizing plans – over their planning horizons. At a given moment at which agents share the same expectations of future prices, the optimizing plans of the agents are consistent, because none of the agents would have any reason to change his optimal plan as long as price expectations do not change, or are not disappointed as a result of prices turning out to be different from what they had been expected to be.

The failure of expected prices to be fulfilled would therefore signify that the information available to agents in forming their expectations and choosing optimal plans conditional on their expectations had been superseded by newly obtained information. The arrival of new information can thus be viewed as a cause of disequilibrium, as can any difference in information among agents. The relationship between information and equilibrium can be expressed as follows: differences in information, or differences in how agents interpret information, lead to disequilibrium, because those differences lead agents to form differing expectations of future prices.

Now the natural way to generalize the intertemporal equilibrium model is to allow agents to have different expectations of future prices, reflecting differences in how they acquire, or in how they process, information. But if agents have different information, so that their expectations of future prices are not the same, the expectations on which agents construct their subjectively optimal plans will be inconsistent, and the plans themselves incapable of implementation without at least some revisions. But this generalization seems incompatible with the equilibrium of optimal plans, prices and price expectations described by Roy Radner, which I have identified as an updated version of Hayek’s concept of intertemporal equilibrium.

The question that I want to explore in this post is how to reconcile the absence of equilibrium of optimal plans, prices, and price expectations, with the intuitive notion of market clearing that we use to analyze asset markets and markets for current delivery. If markets for current delivery and for existing assets are in equilibrium in the sense that prices are adjusting in those markets to equate demand and supply in those markets, how can we understand the idea that the optimizing plans that agents are seeking to implement are mutually inconsistent?

The classic attempt to explain this intermediate situation, which partially is and partially is not an equilibrium, was made by J. R. Hicks in 1939 in Value and Capital, when he coined the term “temporary equilibrium” to describe a situation in which current prices are adjusting to equilibrate supply and demand in current markets even though agents are basing their choices of optimal plans to implement over time on different expectations of what prices will be in the future. The divergence of the price expectations on the basis of which agents choose their optimal plans makes it inevitable that some or all of those expectations won’t be realized, and that some, or all, of those agents won’t be able to implement the optimal plans that they have chosen, without at least some revisions.

In Hayek’s early works on business-cycle theory, he argued that business cycles must be analyzed as deviations by the economy from its equilibrium path. The problem that he acknowledged with this approach was that the tools of equilibrium analysis could be used to analyze the nature of the equilibrium path of an economy, but could not easily be deployed to analyze how an economy performs once it deviates from its equilibrium path. Moreover, cyclical deviations from an equilibrium path tend not to be immediately self-correcting, but rather seem to be cumulative. Hayek attributed the tendency toward cumulative deviations from equilibrium to the lagged effects of monetary expansion, which cause cumulative distortions in the capital structure of the economy that lead at first to an investment-driven expansion of output, income and employment and then later to cumulative contractions in output, income, and employment. But Hayek’s monetary analysis was never really integrated with the equilibrium analysis that he regarded as the essential foundation for a theory of business cycles, so the monetary analysis of the cycle remained largely distinct from, if not inconsistent with, the equilibrium analysis.

I would suggest that for Hayek the Hicksian temporary-equilibrium construct would have been the appropriate theoretical framework within which to formulate a monetary analysis consistent with equilibrium analysis. Although there are hints in the last part of The Pure Theory of Capital that Hayek was thinking along these lines, I don’t believe that he got very far, and he certainly gave no indication that he saw in the Hicksian method the analytical tool with which to weave the two threads of his analysis.

I will now try to explain how the temporary-equilibrium method makes it possible to understand the conditions for a cumulative monetary disequilibrium. I make no attempt to outline a specifically Austrian or Hayekian theory of monetary disequilibrium, but perhaps others will find it worthwhile to do so.

As I mentioned in my previous post, agents understand that their price expectations may not be realized, and that their plans may have to be revised. Agents also recognize that, given the uncertainty underlying all expectations and plans, not all debt instruments (IOUs) are equally reliable. The general understanding that debt – promises to make future payments – must be evaluated and assessed makes it profitable for some agents to specialize in debt assessment. Such specialists are known as financial intermediaries. And, as I also mentioned previously, the existence of financial intermediaries cannot be rationalized in the ADM model, because, all contracts being made in period zero, there can be no doubt that the equilibrium exchanges planned in period zero will be executed whenever and exactly as scheduled, so that everyone’s promise to pay in time zero is equally good and reliable.

For our purposes, a particular kind of financial intermediary – banks – is of primary interest. The role of a bank is to assess the quality of the IOUs offered by non-banks, and to select from the IOUs offered to it those that are sufficiently reliable to be accepted. Once a prospective borrower’s IOU is accepted, the bank exchanges its own IOU for the non-bank’s IOU. No non-bank would accept another non-bank’s IOU, at least not on terms as favorable as those offered by the bank. In return for the non-bank IOU, the bank credits the borrower with a corresponding amount of its own IOUs, which, because the bank promises to redeem its IOUs for the numeraire commodity on demand, are generally accepted at face value.

Thus, bank debt functions as a medium of exchange, enabling non-bank agents to make current expenditures they could not otherwise have made, provided they can demonstrate to the bank that they are sufficiently likely to repay the loan on the agreed terms. Such borrowing and repayments are presumably similar to the borrowing and repayments that would occur in the ADM model unmediated by any financial intermediary. In assessing whether a prospective borrower will repay a loan, the bank makes two kinds of assessments. First, does the borrower have sufficient income-earning capacity to generate enough future income to make the promised repayments that the borrower would be committing himself to make? Second, should the borrower’s future income, for whatever reason, turn out to be insufficient to finance the promised repayments, does the borrower have collateral that would allow the bank to secure repayment from the collateral offered as security? In making both kinds of assessments the bank has to form an expectation about the future – the future income of the borrower and the future value of the collateral.

In a temporary-equilibrium context, the expectations of future prices held by agents are not the same, so the expectations of at least some agents will not be accurate, and some agents won’t be able to execute their plans as intended. Agents that can’t execute their plans as intended are vulnerable if they have incurred future obligations, based on their expectations of future prices, that exceed their repayment capacity given the prices that are actually realized. If they have sufficient wealth – i.e., if they have asset holdings of sufficient value – they may still be able to repay their obligations. However, in the process they may have to sell assets or reduce their own purchases, thereby reducing the income earned by other agents. Selling assets under the pressure of obligations coming due almost always means selling those assets at a significant loss, which is precisely why it is usually preferable to finance current expenditure by borrowing funds and making repayments on a fixed schedule than to finance the expenditure by the sale of assets.

Now, in adjusting their plans when they observe that their price expectations are disappointed, agents may respond in two different ways. One type of adjustment is to increase sales or decrease purchases of particular goods and services that they had previously been planning to purchase or sell; such marginal adjustments do not fundamentally alter what agents are doing and are unlikely to seriously affect other agents. But it is also possible that disappointed expectations will cause some agents to conclude that their previous plans are no longer sustainable under the conditions in which they unexpectedly find themselves, so that they must scrap their old plans and replace them with completely new ones. In the latter case, the abandonment of plans that are no longer viable given disappointed expectations may cause other agents to conclude that the plans that they had expected to implement are no longer profitable and must be scrapped.

When agents whose price expectations have been disappointed respond with marginal adjustments in their existing plans rather than scrapping them and replacing them with new ones, a temporary equilibrium with disappointed expectations may still exist and that equilibrium may be reached through appropriate price adjustments in the markets for current delivery despite the divergent expectations of future prices held by agents. Operation of the price mechanism may still be able to achieve a reconciliation of revised but sub-optimal plans. The sub-optimal temporary equilibrium will be inferior to the allocation that would have resulted had agents all held correct expectations of future prices. Nevertheless, given a history of incorrect price expectations and misallocations of capital assets, labor, and other factors of production, a sub-optimal temporary equilibrium may be the best feasible outcome.

But here’s the problem. There is no guarantee that, when prices turn out to be very different from what they were expected to be, the excess demands of agents will adjust smoothly to changes in current prices. A plan that was optimal based on the expectation that the price of widgets would be $500 a unit may well be untenable at a price of $120 a unit. When realized prices are very different from what they had been expected to be, those price changes can lead to discontinuous adjustments, violating a basic assumption — the continuity of excess demand functions — necessary to prove the existence of an equilibrium. Once output prices reach some minimum threshold, the best response for some firms may be to shut down, the excess demand for the product produced by the firm becoming discontinuous at that threshold price. The firms shutting down operations may be unable to repay loans they had obligated themselves to repay based on their disappointed price expectations. If ownership shares in firms forced to cease production are held by households that have predicated their consumption plans on prior borrowing and current repayment obligations, the ability of those households to fulfill their obligations may be compromised once those firms stop paying out the expected profit streams. Banks holding debts that firms or households cannot service may find that their own net worth is reduced sufficiently to make the banks’ own debt unreliable, potentially causing a breakdown in the payment system. Such effects are entirely consistent with a temporary-equilibrium model if actual prices turn out to be very different from what agents had expected and upon which they had constructed their future consumption and production plans.
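The shutdown discontinuity can be illustrated with a stylized numerical sketch. The functional forms and numbers in the Python below are invented for the purpose and are not taken from any particular model: a firm supplies along a linear marginal-cost schedule while price covers its shutdown threshold, and supplies nothing below it.

```python
# A firm supplies along a linear marginal-cost schedule while price covers
# average variable cost; below that shutdown threshold it supplies nothing.
# All functional forms and numbers are illustrative assumptions.

SHUTDOWN_PRICE = 3.0

def firm_supply(p):
    """Supply q = 2*p at or above the shutdown price, zero below it."""
    return 2.0 * p if p >= SHUTDOWN_PRICE else 0.0

def excess_demand(p):
    demand = 20.0 - 2.0 * p          # assumed linear market demand
    return demand - firm_supply(p)

# Just above the threshold the firm supplies 6 units; just below, none.
# Excess demand therefore jumps by about 6 units at the threshold, so the
# continuity assumption behind standard existence proofs fails at that point.
eps = 1e-9
jump = excess_demand(SHUTDOWN_PRICE - eps) - excess_demand(SHUTDOWN_PRICE + eps)
print(round(jump))                   # ~6
```

A fixed-point existence argument needs excess demand to vary continuously with price; the jump at the shutdown price is exactly the kind of discontinuity the paragraph above describes.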

Sufficiently large differences between expected and actual prices in a given period may produce discontinuities in excess demand functions once prices reach critical thresholds, violating the continuity assumptions on which the fixed-point theorems that are the linchpin of modern existence proofs depend. C. J. Bliss made such an argument in a 1983 paper (“Consistent Temporary Equilibrium,” in the volume Modern Macroeconomic Theory edited by J. P. Fitoussi), in which he also suggested, as I did above, that the divergence of individual expectations implies that agents will not typically regard the debt issued by other agents as homogeneous. Bliss therefore posited the existence of a “Financier” who would subject the borrowing plans of prospective borrowers to an evaluation process to determine whether the plan underlying a prospective loan was likely to generate sufficient cash flow to enable the borrower to repay it. The role of the Financier is to ensure that the plans firms choose are based on roughly similar expectations of future prices, so that firms do not wind up acting on price expectations that must inevitably be disappointed.

I am unsure how to understand the function that Bliss’s Financier is supposed to perform. Presumably the Financier is meant as an idealized companion to the Walrasian auctioneer rather than as a representation of an actual institution. But the resemblance between what the Financier is supposed to do and what bankers actually do is close enough that it is unclear to me why Bliss chose an obviously fictitious character to weed out business plans based on implausible price expectations, rather than assigning that role to more realistic characters doing what their real-world counterparts are supposed to do. Perhaps Bliss’s implicit assumption is that real-world bankers do not constrain the expectations of prospective borrowers tightly enough for their screening of borrowers to increase the likelihood that a temporary equilibrium actually exists, so that only an idealized central authority could impose enough consistency on price expectations to make the existence of a temporary equilibrium likely.

But from the perspective of positive macroeconomic and business-cycle theory, explicitly introducing banks that simultaneously provide an economy with a medium of exchange – either based on convertibility into a real commodity or into a fiat base money issued by the monetary authority – while intermediating between ultimate borrowers and ultimate lenders seems to be a promising way of modeling a dynamic economy that sometimes may — and sometimes may not — function at or near a temporary equilibrium.

We observe economies operating in the real world that sometimes appear to be functioning, from a macroeconomic perspective, reasonably well, with reasonably high employment, increasing per capita output and income, and reasonable price stability. At other times, these economies do not function well at all, with high unemployment and negative growth, sometimes with high rates of inflation or with deflation. Sometimes, these economies are beset with financial crises in which there is a general crisis of solvency, and even apparently solvent firms are unable to borrow. A macroeconomic model should be able to account in some way for this diversity of observed macroeconomic experience. The temporary-equilibrium paradigm seems to offer a theoretical framework capable of accounting for this diversity of experience and of explaining, at least in a very general way, what accounts for the difference in outcomes: the degree of congruence between the price expectations of agents. When expectations are reasonably consistent, the economy is able to function at or near a temporary equilibrium, which is likely to exist. When expectations are highly divergent, a temporary equilibrium may not exist, and even if it does, the economy may not be able to find its way toward the equilibrium. Price adjustments in current markets may be incapable of restoring equilibrium inasmuch as expectations of future prices must also adjust to equilibrate the economy, there being no market mechanism by which equilibrium price expectations can be restored.

This, I think, is the insight underlying Axel Leijonhufvud’s idea of a corridor within which an economy tends to stay close to an equilibrium path. However, if the economy drifts or is shocked away from its equilibrium time path, the stabilizing forces that tend to keep an economy within the corridor cease to operate, or operate only weakly, so that the tendency for the economy to revert to its equilibrium time path is either absent or disappointingly weak.

The temporary-equilibrium method, it seems to me, might have been a path that Hayek could have successfully taken in pursuing the goal he had set for himself early in his career: to reconcile equilibrium analysis with a theory of business cycles. Why he ultimately chose not to take this path is a question that, for now at least, I will leave to others to try to answer.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
