
More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

In that post, I made a few preliminary points about wage stickiness before getting to my main point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal-wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place while workers are still employed, even though search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are making and turn down offers that they later regret not having accepted, which is a very skewed way of interpreting what happens in recessions; it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive for price cutting in response to reduced demand, which operates in non-labor markets, is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And because the employer requires a workforce of many workers, it generally cannot avoid the morale-damaging effects of a wage cut by replacing its current workers with a different set of workers at a lower wage than the old workers were getting.
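The incentive facing the type-one firm reduces to a simple expected-value comparison, which can be sketched in a few lines of code. The probabilities, prices, and cost below are illustrative assumptions of my own, not anything from the original post:

```python
# Expected profit of a one-unit-capacity firm at a given price: the chance
# of landing the single order times the margin on that order.
def expected_profit(prob_order: float, price: float, unit_cost: float) -> float:
    return prob_order * (price - unit_cost)

cost = 60.0
hold_price = expected_profit(0.70, 100.0, cost)  # 0.70 * 40 = 28
cut_price = expected_profit(0.80, 90.0, cost)    # 0.80 * 30 = 24
# Here a 10% price cut buys too little extra probability of an order to
# compensate for the thinner margin, so this firm holds its price.
assert cut_price < hold_price
```

With these (hypothetical) numbers, the cut is not worth making; it would be worth making only if it raised the probability of an order from 0.70 to above 0.93, which is the sense in which the firm's incentive to cut depends entirely on how responsive its chance of selling is to its price.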

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, one whose significance has since become increasingly clear to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision about whether to offer to accept a wage cut, and other workers are making their own independent decisions, would it be rational for you or any of them to accept a wage cut? The answer might depend on what each worker was expecting other workers to do. But given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer it? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would offering a wage cut that no one else was offering make the worker willing to work for less appear more desirable to the employer than the workers who wouldn’t accept one? One reaction by the employer might be: what’s this guy’s problem?
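The coordination problem just described can be sketched as a toy payoff calculation. All the numbers below (ten workers, a 20% cut, the threshold at which the employer maintains output, the layoff odds) are illustrative assumptions of mine, not anything estimated or taken from the post:

```python
def worker_payoff(offers_cut, n_others_cutting, n_workers=10,
                  wage=100.0, cut=0.2, threshold=0.8, keep_prob=0.6):
    """Expected pay of one worker, given how many co-workers also offer a cut.

    The employer maintains output (and every job) only if the share of
    workers offering the cut reaches `threshold`; otherwise output is cut
    and each worker keeps a job only with probability `keep_prob`,
    regardless of whether that worker offered the cut.
    """
    n_cutting = n_others_cutting + (1 if offers_cut else 0)
    own_wage = wage * (1 - cut) if offers_cut else wage
    if n_cutting / n_workers >= threshold:
        return own_wage              # output maintained, job kept for sure
    return keep_prob * own_wage      # layoffs hit cutters and non-cutters alike

# Unilateral wage-cutting is dominated: it lowers your pay without saving your job.
assert worker_payoff(True, 0) < worker_payoff(False, 0)
# But if everyone cuts, each worker does better than under the no-cut layoffs.
assert worker_payoff(True, 9) > worker_payoff(False, 0)
```

The two assertions capture the argument in the text: coordinated wage-cutting could leave every worker better off than layoffs would, yet no individual worker, deciding independently, has any incentive to be the one who offers the cut.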

Combining this way of looking at workers’ incentives to offer wage reductions to keep their jobs with the argument of my post of four years ago, I am now inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. In product markets, price cutting in periods of excess supply is often driven by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because an overhang of unsold labor is flooding the market; it’s because something is changing the expectations of workers and employers about the wage that will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire; either way, the situation will not change until wage expectations change. And wage expectations do not change because excess demand for workers creates any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers immediately offered to accept sufficiently large wage cuts, some employers might forgo plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would require a level of coordination between workers and employers, and a convergence of expectations about future outcomes, that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money-wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money-wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities . . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money-wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money-wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct. But I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly – though too often without recognizing the limiting implications – assumes that prices and incomes in other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am assuming here that those problems can be set aside in many applications. And a good empirical economist will have an instinctive sense for when it is, and is not, OK to make the assumption.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand curves, cost curves, and supply curves stay where they started: a high price in one market may have effects on other markets, and those effects will have further repercussions that shift the very demand, cost and supply curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired, if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, they can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space corresponding to intersections of stable demand curves with stable supply curves.
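The open-borders point can be made concrete with a toy two-market model of my own construction (nothing here comes from Keynes or from any particular textbook): demand in each market depends on the price in the other, and the ceteris paribus answer for one market is harmless only when that cross-effect is small.

```python
# Two markets with linear demand and supply. Demand in market i is
# q_i = A_i - B*p_i + C*p_j, supply is q_i = S*p_i, so market clearing
# requires (B + S)*p_i - C*p_j = A_i in each market.

def clear_both(A1, A2, B=1.0, S=1.0, C=0.0):
    """Solve both market-clearing conditions simultaneously."""
    k = B + S
    det = k * k - C * C
    return (A1 * k + C * A2) / det, (A2 * k + C * A1) / det

def clear_market1_alone(A1, p2_fixed, B=1.0, S=1.0, C=0.0):
    """Partial equilibrium: clear market 1 with p2 'held constant'."""
    return (A1 + C * p2_fixed) / (B + S)

# A demand shock hits market 1 (A1 falls from 100 to 60).
for C in (0.05, 0.9):                                    # small vs. large spillover
    _, p2_old = clear_both(100, 100, C=C)
    p1_partial = clear_market1_alone(60, p2_old, C=C)    # ceteris paribus answer
    p1_true, _ = clear_both(60, 100, C=C)                # simultaneous answer
    print(f"C={C}: partial {p1_partial:.2f}, simultaneous {p1_true:.2f}")
```

With C = 0.05 the two answers nearly coincide, which is the legitimate domain of the partial-equilibrium shortcut. With C = 0.9 the shock to market 1 moves the "held constant" price in market 2 substantially, and the partial-equilibrium prediction for market 1 is off by several units: the closed-border assumption, not the algebra, is what fails.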

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another point on the curve is simply untenable, because not only is the assumption that other variables are held constant untenable and self-contradictory, but the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy always is, and starts from, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all markets but one are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.


Hu McCulloch Figures out (More or Less) the Great Depression

Last week Huston McCulloch, one of the leading monetary economists of my generation, posted an insightful and thoughtful discussion of the causes of the Great Depression with which I largely, though not entirely, agree. Although Scott Sumner has already commented on Hu’s discussion, I also wanted to weigh in with some comments of my own. Here is how McCulloch sets up his discussion.

Understanding what caused the Great Depression of 1929-39 and why it persisted so long has been fairly characterized by Ben Bernanke as the “Holy Grail of Macroeconomics.” The fear that the financial crisis of 2008 would lead to a similar Depression induced the Fed to use its emergency powers to bail out failing firms and to more than quadruple the monetary base, while Congress authorized additional bailouts and doubled the national debt. Could the Great Recession have taken a similar turn had these extreme measures not been taken?

Economists have often blamed the Depression on U.S. monetary policy or financial institutions.  Friedman and Schwartz (1963) famously argued that a spontaneous wave of runs against fragile fractional reserve banks led to a rise in the currency/deposit ratio. The Fed failed to offset the resulting fall in the money multiplier with base expansion, leading to a disastrous 24% deflation from 1929 to 1933. Through the short-run Phillips curve effect (Friedman 1968), this in turn led to a surge in unemployment to 22.5% by 1932.

The Debt-Deflation theory of Irving Fisher, and later Ben Bernanke (1995), takes the deflation as given, and blames the severity of the disruption on the massive bankruptcies that were caused by the increased burden of nominal indebtedness.  Murray Rothbard (1963) uses the “Austrian” business cycle theory of Ludwig von Mises and F.A. Hayek to blame the downturn on excessive domestic credit expansion by the Fed during the 1920s that disturbed the intertemporal structure of production (cf. McCulloch 2014).

My own view, after pondering the problem for many decades, is that indeed the Depression was monetary in origin, but that the ultimate blame lies not with U.S. domestic monetary and financial policy during the 1920s and 30s. Rather, the massive deflation was an inevitable consequence of Europe’s departure from the gold standard during World War I —  and its bungled and abrupt attempt to return to gold in the late 1920s.

I agree with every word of this introductory passage, so let’s continue.

In brief, the departure of the European belligerents from gold in 1914 massively reduced the global demand for gold, leading to the inflation of prices in terms of gold — and, therefore, in terms of currencies like the U.S. dollar which were convertible to gold at a fixed parity. After the war, Europe initially postponed its return to gold, leading to a plateau of high prices during the 1920s that came to be perceived as the new normal. In the late 1920s, there was a scramble to return to the pre-war gold standard, with the inevitable consequence that commodity prices — in terms of gold, and therefore in terms of the dollar — had to return to something approaching their 1914 level.

The deflation was thus inevitable, but was made much more harmful by its postponement and then abruptness. In retrospect, the UK could have returned to its pre-war parity with far less pain by emulating the U.S. post-Civil War policy of freezing the monetary base until the price level gradually fell to its pre-war level. France should not have over-devalued the franc, and then should have monetized its gold influx rather than acting as a global gold sink. Gold reserve ratios were unnecessarily high, especially in France.

Here is where I start to quibble a bit with Hu, mainly about the importance of Britain’s 1925 resumption of convertibility at the prewar parity with the dollar. Largely owing to Keynes’s essay “The Economic Consequences of Mr. Churchill,” much greater significance has been attributed to the resumption of convertibility than it actually had on the subsequent course of events. In that essay, Keynes berated Churchill for yielding to the demands of the City and to the advice of his Treasury advisers (including Ralph Hawtrey), on whom he relied despite Keynes’s attempt to convince him otherwise, to quickly resume gold convertibility at the prewar dollar parity, warning of the devastating effects of the subsequent deflation on British industry and employment. Keynes’s analysis of the deflationary effect of the resumption was largely correct, but the effect turned out to be milder than he anticipated. Largely as a result of the American deflation, Britain had already undergone nearly five years of continuous deflation by 1925, which brought the foreign-exchange value of sterling to within 10 percent of the prewar dollar parity. The further deflation required to enable sterling to appreciate the remaining 10 percent was not trivial, but by 1925 most of the deflationary pain had already been absorbed. Britain was able to sustain further mild deflation for the next four years until mid-1929, even as the British economy grew and unemployment gradually declined. The myth that Britain was mired in a continuous depression after the resumption of convertibility in 1925 has no basis in the evidence. Certainly, a faster recovery would have been desirable, and Hawtrey consistently criticized the Bank of England for keeping Bank Rate at 5% even with deflation in the 1-2% range.

The US Federal Reserve was somewhat accommodative in the 1925-28 period, but could easily have been even more accommodative. As McCulloch correctly notes, it was really France, which undervalued the franc when it restored convertibility in 1928 and then began accumulating gold in record quantities, that became the primary destabilizing force in the world economy. Britain was largely an innocent bystander.

However, given that the U.S. had a fixed exchange rate relative to gold and no control over Europe’s misguided policies, it was stuck with importing the global gold deflation — regardless of its own domestic monetary policies. The debt/deflation problem undoubtedly aggravated the Depression and led to bank failures, which in turn increased the currency/deposit ratio and compounded the situation. However, a substantial portion of the fall in the U.S. nominal money stock was to be expected as a result of the inevitable deflation — and therefore was the product, rather than the primary cause, of the deflation. The anti-competitive policies of the Hoover years and FDR’s New Deal (Rothbard 1963, Ohanian 2009) surely aggravated and prolonged the Depression, but were not the ultimate cause.

Actually, the Fed, holding 40% of the world’s gold reserves in 1929, could have eased pressure on the world gold market by allowing an efflux of gold to accommodate the French demand for gold. However, instead of taking an accommodative stance, the Fed, seized by dread of stock-market speculation, kept increasing short-term interest rates, thereby attracting gold into the United States instead of allowing gold to flow out, increasing pressure on the world gold market and triggering the latent deflationary forces that until mid-1929 had been kept at bay. Anti-competitive policies under Hoover and under FDR were certainly not helpful, but those policies, as McCulloch recognizes, did not cause the collapse of output between 1929 and 1933.

Contemporary economists Ralph Hawtrey, Charles Rist, and Gustav Cassel warned throughout the 1920s that substantial deflation, in terms of gold and therefore the dollar, would be required to sustain a return to anything like the 1914 gold standard.[1]  In 1928, Cassel actually predicted that a global depression was imminent:

The post-War superfluity of gold is, however, of an entirely temporary character, and the great problem is how to meet the growing scarcity of gold which threatens the world both from increased demand and from diminished supply. We must solve this problem by a systematic restriction of the monetary demand for gold. Only if we succeed in doing this can we hope to prevent a permanent fall in the general price level and a prolonged and world-wide depression which would inevitably be connected with such a fall in prices [as quoted by Johnson (1997, p. 55)].

As early as 1919 both Hawtrey and Cassel had warned that a global depression would follow an attempt to restore the gold standard as it existed before World War I. To avoid such a deflation it was necessary to limit the increase in the monetary demand for gold. Hawtrey and Cassel therefore proposed shifting to a gold exchange standard in which gold coinage would not be restored and central banks would hold non-gold foreign exchange reserves rather than gold bullion. The 1922 Genoa Resolutions were largely inspired by the analysis of Hawtrey and Cassel, and those resolutions were largely complied with until France began its insane gold accumulation policy in 1928 just as the Fed began tightening monetary policy to suppress stock-market speculation, thereby triggering, more or less inadvertently, an almost equally massive inflow of gold into the US. (On Hawtrey and Cassel, see my paper with Ron Batchelder.)

McCulloch has a very interesting discussion of the role of the gold standard as a tool of war finance, which reminds me of Earl Thompson’s take on the gold standard (“Gold Standard: Causes and Consequences”), which Earl contributed to a volume I edited, Business Cycles and Depressions: An Encyclopedia. To keep this post from growing inordinately long, and because it’s somewhat tangential to McCulloch’s larger theme, I won’t comment on that part of McCulloch’s discussion.

The Gold Exchange Standard

After the war, in order to stay on gold at $20.67 an ounce, with Europe off gold, the U.S. had to undo its post-1917 inflation. The Fed achieved this by raising the discount rate on War Bonds, beginning in late 1919, inducing banks to repay their corresponding loans. The result was a sharp 16% fall in the price level from 1920 to 1922. Unemployment rose from 3.0% in 1919 to 8.7% in 1921. However, nominal wages fell quickly, and unemployment was back to 4.8% by 1923, where it remained until 1929.[3]

The 1920-22 deflation thus brought the U.S. price level into equilibrium, but only in a world with Europe still off gold. Restoring the full 1914 gold standard would have required going back to approximately the 1914 value of gold in terms of commodities, and therefore the 1914 U.S. price level, after perhaps extrapolating for a continuation of the 1900-1914 “gold inflation.”

This is basically right, except that I don’t think it makes sense to refer to the US price level as being in equilibrium in 1922. Holding 40% of the world’s monetary gold reserves, the US was in a position to determine the value of gold at whatever level it wanted. Calling the particular level at which the US decided to stabilize the value of gold in 1922 an equilibrium does not correspond to any clear definition of equilibrium that I can identify.

However, the European countries did not seriously try to get back on gold until the second half of the 1920s. The Genoa Conference of 1922 recognized that prices were too high for a full gold standard, but instead tried to put off the necessary deflation with an unrealistic “Gold Exchange Standard.” Under that system, only the “gold center” countries, the U.S. and UK, would hold actual gold reserves, while other central banks would be encouraged to hold dollar or sterling reserves, which in turn would only be fractionally backed by gold. The Gold Exchange Standard sounded good on paper, but unrealistically assumed that the rest of the world would permanently kowtow to the financial supremacy of New York and London.

There is an argument to be made that the Genoa Resolutions were unrealistic in assuming that countries returning to the gold standard would be willing to forgo holding gold reserves to a greater extent than they actually were. But to a large extent, this was the result of systematically incorrect ideas about how the gold standard had worked before World War I and how the system could work after World War I, not of any inherent or necessary properties of the gold standard itself. Nor was the assumption that the rest of the world would permanently kowtow to the financial supremacy of New York and London all that unrealistic, considered in the light of how readily, before World War I, the rest of the world had kowtowed to the financial supremacy of London.

In 1926, under Raymond Poincaré, France stabilized the franc after a 5:1 devaluation. However, it overdid the devaluation, leaving the franc undervalued by about 25%, according to The Economist (Johnson 1997, p. 131). Normally, under the specie flow mechanism, this would have led to a rapid accumulation of international reserves accompanied by monetary expansion and inflation, until the price level caught up with purchasing power parity. But instead, the Banque de France sterilized the reserve influx by reducing its holdings of government and commercial credit, so that inflation did not automatically stop the reserve inflow. Furthermore, it often cashed dollar and sterling reserves for gold, again contrary to the Gold Exchange Standard.  The Banking Law of 1928 made the new exchange rate, as well as the gold-only policy, official. By 1932, French gold reserves were 80% of currency and sight deposits (Irwin 2012), and France had acquired 28.4% of world gold reserves — even though it accounted for only 6.6% of world manufacturing output (Johnson 1997, p. 194). This “French Gold Sink” created even more deflationary pressure on gold, and therefore dollar prices, than would otherwise have been expected.

Here McCulloch is unintentionally displaying some of the systematically incorrect ideas about how the gold standard worked that I referred to above. McCulloch is correct that the franc was substantially undervalued when France restored convertibility in 1928. But under the gold standard, the French price level would have adjusted automatically to the world price level regardless of what happened to the French money supply. The Bank of France, partly because it was cashing in its rapidly accumulating foreign-exchange reserves for gold as French exports rose and imports fell at the low internal French price level, and partly because it was legally barred from increasing the supply of banknotes by open-market operations, was accumulating gold both actively and passively. With no domestic mechanism for increasing the quantity of banknotes in France, a balance-of-payments surplus was the only mechanism by which an excess demand for money could be accommodated. It was not an inflow of gold that was being sterilized ("sterilization" being a misnomer reflecting a confusion about the direction of causality); it was the lack of any domestic mechanism for increasing the quantity of banknotes that caused the inflow of gold. Importing gold was the only means by which an excess domestic demand for banknotes could be satisfied.

The Second Post-War Deflation

By 1931, French gold withdrawals forced Germany to adopt exchange controls, and Britain to give up convertibility altogether. However, these countries did not then disgorge their remaining gold, but held onto it in the hopes of one day restoring free convertibility. Meanwhile, after having been burned by the Bank of England’s suspension, the “Gold Bloc” countries — Belgium, Netherlands and Switzerland — also began amassing gold reserves in earnest, raising their share of world gold reserves from 4.2% in June 1930 to 11.1% two years later (Johnson 1997, p. 194). Despite the dollar’s relatively strong position, the Fed also contributed to the problem by raising its gold coverage ratio to over 75% by 1930, well in excess of the 40% required by law (Irwin 2012, Fig. 5).

The result was a second post-war deflation as the value of gold, and therefore of the dollar, in terms of commodities, abruptly caught up with the greatly increased global demand for gold. The U.S. price level fell 24.0% between 1929 and 1933, with deflation averaging 6.6% per year for 4 years in a row. Unemployment shot up to 22.5% by 1932.

By 1933, the U.S. price level was still well above its 1914 level. However, if the “gold inflation” of 1900-1914 is extrapolated to 1933, as in Figure 3, the trend comes out to almost the 1933 price level. It therefore appears that the U.S. price level, if not its unemployment rate, was finally near its equilibrium under a global gold standard with the dollar at $20.67 per ounce, and that further deflation was probably unnecessary.[4]

Once again McCulloch posits an equilibrium price level under a global gold standard. The mistake lies in assuming a fixed monetary demand for gold, an assumption completely without foundation. The greater the monetary demand for gold, the higher the equilibrium real value of gold and the lower the price level in terms of gold. The equilibrium price level is thus a function of the monetary demand for gold.

The 1929-33 deflation was much more destructive than the 1920-22 deflation, in large part because it followed a 7-year “plateau” of relatively stable prices that lulled the credit and labor markets into thinking that the higher price level was the new norm — and that gave borrowers time to accumulate substantial nominal debt. In 1919-1920, on the other hand, the newly elevated price level seemed abnormally high and likely to come back down in the near future, as it had after 1812 and 1865.

This is correct, and I fully agree.

How It Could Have been Different

In retrospect, the UK could have successfully gotten itself back on gold with far less disruption simply by emulating the U.S. post-Civil War policy of freezing the monetary base at its war-end level, and then letting the economy grow into the money supply with a gradual deflation. This might have taken 14 years, as in the U.S. between 1865-79, or even longer, but it would have been superior to the economic, social, and political turmoil that the UK experienced. After the pound rose to its pre-war parity of $4.86, the BOE could have begun gradually buying gold reserves with new liabilities and even redeeming those liabilities on demand for gold, subject to reserve availability. Once reserves reached say 20% of its liabilities, it could have started to extend domestic credit to the government and the private sector through the banks, while still maintaining convertibility. Gold coins could even have been circulated, as demanded.

I reiterate that, although the UK resumption was more painful for Britain than it need have been, the resumption had little destabilizing effect on the international economy. The UK did not have a destabilizing effect on the world economy until the September 1931 crisis that forced Britain off the gold standard.

If the UK and other countries had all simply devalued in proportion to their domestic price levels at the end of the war, they could have returned to gold quicker, and with less deflation. However, given that a country’s real demand for money — and therefore its demand for real gold reserves — depends only on the real size of the economy and its net gold reserve ratio, such policies would not have reduced the ultimate global demand for gold or lessened the postwar deflation in countries that remained on gold at a fixed parity.

This is an important point. Devaluation was a way of avoiding an overvalued currency and the relative deflation that a single country would otherwise have had to undergo. But the main problem facing the world in restoring the gold standard was not the relative deflation of countries with overvalued currencies but the absolute deflation associated with an increased world demand for gold across all countries. Indeed, it was France, a country that did devalue, that became the greatest source of increased demand for gold, owing to internal monetary policies based on a perverse gold-standard ideology.

In fact, the problem was not that gold was “undervalued” (as Johnson puts it) or that there was a “shortage” of gold (as per Cassel and others), but that the price level in terms of gold, and therefore dollars, was simply unsustainably high given Europe’s determination to return to gold. In any event, it was inconceivable that the U.S. would have devalued in 1922, since it had plenty of gold, had already corrected its price level to the world situation with the 1920-22 deflation, and did not have the excuse of a banking crisis as in 1933.

I think that this is totally right. It was not the undervaluation of individual currencies that was the problem; it was the increase in the demand for gold associated with the simultaneous return of many countries to the gold standard. It is also a mistake to confuse Cassel's discussion of a gold shortage, which he viewed as a long-term problem, with the increase in gold demand associated with returning to the gold standard. It was the latter, driven above all by the huge increase in the Bank of France's demand for gold, that caused the sudden deflation starting in 1929, a phenomenon that McCulloch mentions early on in his discussion but does not refer to again.

Keynes and the Fisher Equation

The History of Economics Society is holding its annual meeting in Chicago from Friday, June 15 to Sunday, June 17. Bringing together material from a number of posts over the past five years or so about Keynes and the Fisher equation and Fisher effect, I will be presenting a new paper called "Keynes and the Fisher Equation." Here is the abstract of my paper.

One of the most puzzling passages in the General Theory is the attack (GT p. 142) on Fisher’s distinction between the money rate of interest and the real rate of interest “where the latter is equal to the former after correction for changes in the value of money.” Keynes’s attack on the real/nominal distinction is puzzling on its own terms, inasmuch as the distinction is straightforward and widely accepted, was hardly unique to Fisher, and had been advanced as a fairly obvious proposition by many earlier economists, including Marshall. What makes Keynes’s criticism even more problematic is that Keynes’s own celebrated theorem in the Tract on Monetary Reform about covered interest arbitrage is merely an application of Fisher’s reasoning in Appreciation and Interest. Moreover, Keynes endorsed Fisher’s distinction in the Treatise on Money. But even more puzzling is that Keynes’s own analysis in Chapter 17 of the General Theory demonstrates that, in equilibrium, the returns on alternative assets must reflect differences in their expected rates of appreciation. Thus Keynes himself, in the General Theory, endorsed the essential reasoning underlying the distinction between the real and money rates of interest. The solution to the puzzle lies in distinguishing between the relationship between the real and nominal rates of interest at a moment in time and the effects of a change in expected rates of appreciation that displaces an existing equilibrium and leads to a new one. Keynes’s criticism of the Fisher effect must be understood in the context of his criticism of the idea of a unique natural rate of interest, a criticism that implicitly identifies the Fisherian real rate with a unique natural rate.

And here is the concluding section of my paper.

Keynes’s criticisms of the Fisher effect, especially of the facile assumption that changes in inflation expectations are reflected mostly, if not entirely, in nominal interest rates – an assumption for which neither Fisher himself nor subsequent researchers have found much empirical support – were grounded in well-founded skepticism about the claim that changes in expected inflation leave the real interest rate unaffected. A Fisherian analysis of an increase in expected deflation at the zero lower bound shows that the burden of the adjustment must be borne by an increase in the real interest rate. Such a scenario might be dismissed as a special case, which it certainly is, but I very much doubt that it is the only set of assumptions under which a change in expected inflation or deflation affects the real as well as the nominal interest rate.

Although Keynes’s criticism of the Fisher equation (or, more precisely, of its conventional simplistic interpretation) was not well argued, his intuition was sound. And in his contribution to the Fisher festschrift, Keynes (1937b) correctly identified the two key assumptions leading to the conclusion that changes in inflation expectations are reflected entirely in nominal interest rates: (1) a unique real equilibrium and (2) the neutrality (actually superneutrality) of money. Keynes’s intuition was confirmed by Hirshleifer (1970, 135-38), who derived the Fisher equation as a theorem by performing a comparative-statics exercise in a two-period general-equilibrium model with money balances, in which the money stock in the second period is increased by an exogenous shift factor k. The price level in the second period increases by a factor of k, and the nominal interest factor increases by a factor of k as well, with no change in the real interest rate.
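The comparative-statics result can be sketched numerically. This is my own stylized rendering of the exercise described above, not Hirshleifer's model itself; the rates and price levels are illustrative assumptions:

```python
def nominal_factor(r, p1, p2):
    """Exact Fisher relation: (1 + i) = (1 + r) * (p2 / p1)."""
    return (1.0 + r) * (p2 / p1)

r, p1, p2, k = 0.03, 100.0, 100.0, 1.10   # illustrative values
base = nominal_factor(r, p1, p2)          # interest factor before the monetary shift
shifted = nominal_factor(r, p1, k * p2)   # second-period price level scaled by k
# The nominal interest factor scales by exactly k (up to float rounding),
# while the real rate r is untouched -- the superneutrality result.
print(shifted / base)
```

The sketch makes visible why the theorem needs both of Keynes's identified assumptions: r is held fixed by construction (a unique real equilibrium), and the shift factor k touches nothing but money and prices (superneutrality).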

But typical Keynesian and New Keynesian macromodels based on the assumption of no capital or a single capital good drastically oversimplify the analysis, because those highly aggregated models assume that the determination of the real interest rate takes place in a single market. The market-clearing assumption invites the conclusion that the rate of interest, like any other price, is determined by the equality of supply and demand – both of which are functions of that price – in that market.

The equilibrium rate of interest, as C. J. Bliss (1975) explains in the context of an intertemporal general-equilibrium analysis, is not a price; it is an intertemporal rate of exchange characterizing the relationships between all equilibrium prices and expected equilibrium prices in the current and future time periods. To say that the interest rate is determined in any single market, e.g., a market for loanable funds or a market for cash balances, is, at best, a gross oversimplification, verging on fallaciousness. The interest rate or term structure of interest rates is a reflection of the entire intertemporal structure of prices, so a market for something like loanable funds cannot set the rate of interest at a level inconsistent with that intertemporal structure of prices without disrupting and misaligning that structure of intertemporal price relationships. The interest rates quoted in the market for loanable funds are determined and constrained by those intertemporal price relationships, not the other way around.

In the real world, in which current prices, future prices, and expected future prices are almost certainly never in an equilibrium relationship with each other, there is always some scope for second-order variations in the interest rates transacted in markets for loanable funds, but those variations are still tightly constrained by the existing intertemporal relationships between current, future, and expected future prices. Because the conditions under which Hirshleifer derived his theorem demonstrating that changes in expected inflation are fully reflected in nominal interest rates are not satisfied, there is no basis for assuming that a change in expected inflation affects only nominal interest rates with no effect on real rates.

There is probably a huge range of possible scenarios for how changes in expected inflation could affect nominal and real interest rates. While one should not disregard the Fisher equation as one possibility, it seems completely unwarranted to assume that it is the most plausible scenario in any actual situation. If we read Keynes at the end of his marvelous Chapter 17 of the General Theory, in which he remarks that he has abandoned the belief he once held in the existence of a unique natural rate of interest and has come to believe that there are really different natural rates corresponding to different levels of unemployment, we see that he was indeed, notwithstanding his detour toward a pure liquidity-preference theory of interest, groping his way toward a proper understanding of the Fisher equation.

In my Treatise on Money I defined what purported to be a unique rate of interest, which I called the natural rate of interest – namely, the rate of interest which, in the terminology of my Treatise, preserved equality between the rate of saving (as there defined) and the rate of investment. I believed this to be a development and clarification of Wicksell’s “natural rate of interest,” which was, according to him, the rate which would preserve the stability of some, not quite clearly specified, price-level.

I had, however, overlooked the fact that in any given society there is, on this definition, a different natural rate for each hypothetical level of employment. And, similarly, for every rate of interest there is a level of employment for which that rate is the “natural” rate, in the sense that the system will be in equilibrium with that rate of interest and that level of employment. Thus, it was a mistake to speak of the natural rate of interest or to suggest that the above definition would yield a unique value for the rate of interest irrespective of the level of employment. . . .

If there is any such rate of interest, which is unique and significant, it must be the rate which we might term the neutral rate of interest, namely, the natural rate in the above sense which is consistent with full employment, given the other parameters of the system; though this rate might be better described, perhaps, as the optimum rate. (pp. 242-43)

Because Keynes believed that an increase in the expected future price level implies an increase in the marginal efficiency of capital, it follows that an increase in expected inflation under conditions of less than full employment would increase investment spending and employment, thereby raising the real rate of interest as well as the nominal rate. Cottrell (1994) has attempted to make an argument along such lines within a traditional IS-LM framework. I believe that, in a Fisherian framework, my argument points in a similar direction.

 

What Hath Merkel Wrought?

In my fifth month of blogging in November 2011, I wrote a post which I called “The Economic Consequences of Mrs. Merkel.” The title, as I explained, was inspired by J. M. Keynes’s famous essay “The Economic Consequences of Mr. Churchill,” which eloquently warned that Britain was courting disaster by restoring the convertibility of sterling into gold at the prewar parity of $4.86 to the pound, the dollar then being the only major currency convertible into gold. The title of Keynes’s essay, in turn, had been inspired by Keynes’s celebrated book The Economic Consequences of the Peace about the disastrous Treaty of Versailles, which accurately foretold the futility of imposing punishing war reparations on Germany.

In his essay, Keynes warned that by restoring the prewar parity, Churchill would force Britain into an untenable deflation at a time when more than 10% of the British labor force was unemployed (i.e., looking for, but unable to find, a job at prevailing wages). Keynes argued that the deflation necessitated by restoration of the prewar parity would impose an intolerable burden of continued and increased unemployment on British workers.

But Churchill’s decision turned out to be less disastrous than Keynes had feared. The resulting deflation was quite mild, wages in nominal terms were roughly stable, and real output and employment grew steadily, with unemployment gradually falling below 10% by 1928. The deflationary shock that Keynes had warned against was milder than he had feared because the U.S. Federal Reserve, under the leadership of Benjamin Strong, President of the New York Fed and the de facto monetary authority of the US and the world, followed a policy that allowed a slight increase in the world price level in terms of dollars, thereby moderating the deflationary effect on Britain of restoring the prewar sterling/dollar exchange rate.

Thanks to Strong’s enlightened policy, the world economy continued to expand through 1928. I won’t discuss the sequence of events in 1928 and 1929 that led to the 1929 stock market crash, but those events had little, if anything, to do with Churchill’s 1925 decision. I’ve discussed the causes of the 1929 crash and the Great Depression in many other places including my 2011 post about Mrs. Merkel, so I will skip the 1929 story in this post.

The point that I want to make is that even though Keynes’s criticism of Churchill’s decision to restore the prewar dollar/sterling parity was well-taken, the dire consequences that Keynes foretold, although they did arrive a few years thereafter, were not actually caused by Churchill’s decision, but by decisions made in Paris and New York, over which Britain may have had some influence, but little, if any, control.

What I want to discuss in this post is how my warnings about potential disaster almost six and a half years ago have turned out. Here’s how I described the situation in November 2011:

Fast forward some four score years to today’s tragic re-enactment of the deflationary dynamics that nearly destroyed European civilization in the 1930s. But what a role reversal! In 1930 it was Germany that was desperately seeking to avoid defaulting on its obligations by engaging in round after round of futile austerity measures and deflationary wage cuts, causing the collapse of one major European financial institution after another in the annus horribilis of 1931, finally (at least a year after too late) forcing Britain off the gold standard in September 1931. Eighty years ago it was France, accumulating huge quantities of gold, in Midas-like self-satisfaction despite the economic wreckage it was inflicting on the rest of Europe and ultimately itself, whose monetary policy was decisive for the international value of gold and the downward course of the international economy. Now, it is Germany, the economic powerhouse of Europe dominating the European Central Bank, which effectively controls the value of the euro. And just as deflation under the gold standard made it impossible for Germany (and its state and local governments) not to default on its obligations in 1931, the policy of the European Central Bank, self-righteously dictated by Germany, has made default by Greece and now Italy and at least three other members of the Eurozone inevitable. . . .

If the European central bank does not soon – and I mean really soon – grasp that there is no exit from the debt crisis without a reversal of monetary policy sufficient to enable nominal incomes in all the economies in the Eurozone to grow more rapidly than does their indebtedness, the downward spiral will overtake even the stronger European economies. (I pointed out three months ago that the European crisis is a NGDP crisis not a debt crisis.) As the weakest countries choose to ditch the euro and revert back to their own national currencies, the euro is likely to start to appreciate as it comes to resemble ever more closely the old deutschmark. At some point the deflationary pressures of a rising euro will cause even the Germans, like the French in 1935, to relent. But one shudders at the economic damage that will be inflicted until the Germans come to their senses. Only then will we be able to assess the full economic consequences of Mrs. Merkel.

Greece did default, but the European Community succeeded in imposing draconian austerity measures on Greece, while Italy, Spain, France, and Portugal, which had all been in some danger, managed to avoid default. That they did so is due, first, to the enormous cost that would have had to be borne by a Eurozone country to extricate itself from the Eurozone and reinstitute its own national currency and, second, to the actions taken by Mario Draghi, who succeeded Jean-Claude Trichet as President of the European Central Bank in November 2011. If monetary secession from the eurozone were less fraught, surely Greece, and perhaps other countries, would have chosen that course rather than absorb the continuing pain of remaining in the eurozone.

But were it not for a decisive change in policy by Draghi, Greece and perhaps other countries would have been compelled to follow that uncharted and potentially catastrophic path. After assuming leadership of the ECB, Draghi immediately reversed the perverse interest-rate hikes imposed by his predecessor and, even more crucially, announced in July 2012 that the ECB “is ready to do whatever it takes to preserve the Euro. And believe me, it will be enough.” Draghi’s reassurance that monetary easing would be sufficient to avoid default calmed markets, alleviating the pressure that had been driving up interest rates on the debt of the vulnerable countries.

But although Draghi’s courageous actions to ease monetary policy in the face of German disapproval averted a complete collapse, Mrs. Merkel’s ferocious anti-inflation policy did irreparable damage, not only to Greece, but, by deepening the European downturn and delaying and suppressing the recovery, to the rest of the European community. That damage inflamed anti-EU, populist nationalism in much of Europe, helped fuel the campaign for Brexit in the UK, inspired similar anti-EU movements elsewhere in Europe, and almost prevented Mrs. Merkel from forming a government after the election a few months ago.

Mrs. Merkel is perhaps the most impressive political leader of our time, and her willingness to follow a humanitarian policy toward refugees fleeing the horrors of war and persecution showed an extraordinary degree of political courage and personal decency that ought to serve as a model for other politicians to emulate. But that admirable legacy will be forever tarnished by the damage she inflicted on her own country and the rest of the EU by her misguided battle against the phantom threat of inflation.

In the General Theory Keynes First Trashed and then Restated the Fisher Equation

I am sorry to have gone on a rather extended hiatus from posting, but I have been struggling to come up with a new draft of a working paper (“The Fisher Effect under Deflationary Expectations“) I wrote with the encouragement of Scott Sumner in 2010 and posted on SSRN in 2011 not too long before I started blogging. Aside from a generous mention of the paper by Scott on his blog, Paul Krugman picked up on it and wrote about it on his blog as well. Because the empirical work was too cursory, I have been trying to update the results and upgrade the techniques. In working on a new draft of my paper, I also hit upon a simple proof of a point that I believe I discovered several years ago: that in the General Theory Keynes criticized Fisher’s distinction between the real and nominal rates of interest even though he used exactly analogous reasoning in his famous theorem on covered interest parity in the forward exchange market and in his discussion of liquidity preference in chapter 17 of the General Theory. So I included a section making that point in the new draft of my paper, which I am reproducing here. Eventually, I hope to write a paper exploring more deeply Keynes’s apparently contradictory thinking on the Fisher equation. Herewith is an excerpt from my paper.

One of the puzzles of Keynes’s General Theory is his criticism of the Fisher equation.

This is the truth which lies behind Professor Irving Fisher’s theory of what he originally called “Appreciation and Interest” – the distinction between the money rate of interest and the real rate of interest where the latter is equal to the former after correction for changes in the value of money. It is difficult to make sense of this theory as stated, because it is not clear whether the change in the value of money is or is not assumed to be foreseen. There is no escape from the dilemma that, if it is not foreseen, there will be no effect on current affairs; whilst, if it is foreseen, the prices of existing goods will be forthwith so adjusted that the advantages of holding money and of holding goods are again equalized, and it will be too late for holders of money to gain or to suffer a change in the rate of interest which will offset the prospective change during the period of the loan in the value of money lent. . . .

The mistake lies in supposing that it is the rate of interest on which prospective changes in the value of money will directly react, instead of the marginal efficiency of a given stock of capital. The prices of existing assets will always adjust themselves to changes in expectation concerning the prospective value of money. The significance of such changes in expectation lies in their effect on the readiness to produce new assets through their reaction on the marginal efficiency of capital. The stimulating effect of the expectation of higher prices is due, not to its raising the rate of interest (that would be a paradoxical way of stimulating output – in so far as the rate of interest rises, the stimulating effect is to that extent offset), but to its raising the marginal efficiency of a given stock of capital. (pp. 142-43)

As if the problem of understanding that criticism were not enough, it is further compounded by the fact that one of Keynes’s most important pre-General Theory contributions, his theorem about covered interest parity in the Tract on Monetary Reform, seems like a straightforward application of the Fisher equation. According to his covered-interest-parity theorem, in equilibrium, the difference between interest rates quoted in terms of two different currencies will be just enough to equalize borrowing costs in either currency, given the anticipated change in the exchange rate between the two currencies, reflected in the market for forward exchange, over the duration of the loan.

The most fundamental cause is to be found in the interest rates obtainable on “short” money – that is to say, on money lent or deposited for short periods of time in the money markets of the two centres under comparison. If by lending dollars in New York for one month the lender could earn interest at the rate of 5-1/2 per cent per annum, whereas by lending sterling in London for one month he could only earn interest at the rate of 4 per cent, then the preference observed above for holding funds in New York rather than in London is wholly explained. That is to say, forward quotations for the purchase of the currency of the dearer money market tend to be cheaper than the spot quotations by a percentage per month equal to the excess of the interest which can be earned in a month in the dearer market over what can be earned in the cheaper. (p. 125)
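Keynes's arithmetic in the quoted passage can be worked out explicitly. Here is a small Python sketch using the figures he gives (5-1/2 per cent per annum in New York against 4 per cent in London, one-month money); the forward discount is just the interest differential prorated to one month:

```python
ny_rate, london_rate = 0.055, 0.04             # annual short-money rates in the two centres
monthly_excess = (ny_rate - london_rate) / 12  # New York's interest advantage per month
print(f"{monthly_excess:.5%} per month")       # -> 0.12500% per month (1.5% per annum)
# Covered interest parity: the one-month forward dollar should stand at
# a discount to the spot rate of just this percentage, exactly wiping
# out the advantage of lending in the dearer market.
```

This is the sense in which the Tract theorem applies Fisher's reasoning: an anticipated change in the value of one currency relative to another, locked in by the forward market, must be offset by the interest differential, just as anticipated appreciation or depreciation of money against goods is offset in Appreciation and Interest.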

And as if that self-contradiction were not enough, Keynes’s own exposition of liquidity preference in chapter 17 of the General Theory extends the basic idea underlying the Fisher equation: that the expected rates of return from holding different assets must adjust so that the expected overall return from holding any asset is equalized. At least formally, it can be shown that the own-interest-rate analysis in chapter 17 of the General Theory, explaining how the liquidity premium affects the relative yields of money and alternative assets, can be translated into a form that is equivalent to the Fisher equation.

In explaining the factors affecting the expected yields from alternative assets held into the future, Keynes lists three components of the return from holding an asset: (1) the expected physical real yield (q) (i.e., the ex ante real rate of interest, or Fisher’s real rate), including a flow of physical services, real output, or real appreciation; (2) the liquidity services, or liquidity premium (l), generated by holding an easily marketable asset; and (3) wastage in the asset, or carrying cost (c). Keynes then specifies the following equilibrium condition for asset holding: if assets are held into the future, the expected overall return from holding every asset, including all service flows, carrying costs, and expected appreciation or depreciation, must be equalized.

[T]he total return expected from the ownership of an asset over a period is equal to its yield minus its carrying cost plus its liquidity premium, i.e., to q − c + l. That is to say, q − c + l is the own rate of interest of any commodity, where q, c, and l are measured in terms of itself as the standard. (Keynes 1936, p. 226)

Thus, every asset that is held, including money, must generate a return including the liquidity premium l, after subtracting the carrying cost c. Thus, a standard real asset with zero carrying cost will be expected to generate a return equal to q (= r). For money to be held, at the margin, it must also generate a return equal to q net of its carrying cost, c. In other words, q = l − c.

But in equilibrium, the nominal rate of interest must equal the liquidity premium, because if the liquidity premium (at the margin) generated by money exceeds the nominal interest rate, holders of debt instruments returning the nominal rate will convert those instruments into cash, thereby deriving liquidity services in excess of the foregone interest from the debt instruments. Similarly, the carrying cost of holding money is the expected depreciation in the value of money incurred by holding money, which corresponds to expected inflation. Thus, substituting the nominal interest rate for the liquidity premium, and expected inflation for the carrying cost of money, we can rewrite the Keynes equilibrium condition for money to be held in equilibrium as q = r = i − pe. But this equation is identical to the Fisher equation: i = r + pe.
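The substitution just described — reading the liquidity premium l as the nominal rate i and the carrying cost c as expected inflation pe — can be checked with a minimal numeric sketch. The particular values of r and pe below are purely illustrative.

```python
# Minimal numeric check (illustrative values) that Keynes's own-rate
# equilibrium condition for money, q = l - c, is the Fisher equation
# i = r + pe, once l is read as the nominal rate i and c as expected
# inflation pe.

r = 0.02   # real yield q on a standard asset with zero carrying cost
pe = 0.03  # expected inflation, i.e., the carrying cost c of money
i = r + pe # nominal rate, i.e., the liquidity premium l at the margin

# Keynes's condition: the net return on money, l - c, must equal q.
assert abs((i - pe) - r) < 1e-12
print(f"nominal rate i = {i:.2%}")  # 5.00%
```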

Keynes’s version of the Fisher equation makes it obvious that the disequilibrium dynamics that are associated with changes in expected inflation can be triggered not only by decreased inflation expectations but by an increase in the liquidity premium generated by money, and especially if expected inflation falls and the liquidity premium rises simultaneously, as was likely the case during the 2008 financial crisis.

I will not offer a detailed explanation here of the basis on which Keynes criticized the Fisher equation in the General Theory despite having applied the same idea in the Tract on Monetary Reform and restating the same underlying idea some 80 pages later in the General Theory itself. But the basic point is simply this: the seeming contradiction can be rationalized by distinguishing between the Fisher equation as a proposition about a static equilibrium relationship and the Fisher equation as a proposition about the actual adjustment process occasioned by a parametric expectational change. While Keynes clearly did accept the Fisher equation in an equilibrium setting, he did not believe the real interest rate to be uniquely determined by real forces, so he did not accept the invariance of the real interest rate with respect to changes in expected inflation that the Fisher equation presumes. Nevertheless it is stunning that Keynes could have committed such a blatant, if only superficial, self-contradiction without remarking upon it.

Price Stickiness Is a Symptom not a Cause

In my recent post about Nick Rowe and the law of reflux, I mentioned in passing that I might write a post soon about price stickiness. I thought it would be worthwhile writing again about price stickiness (which I have written about before here and here) because Nick, following a broad consensus among economists, identifies price stickiness as a critical cause of fluctuations in employment and income. Here’s how Nick phrased it:

An excess demand for land is observed in the land market. An excess demand for bonds is observed in the bond market. An excess demand for equities is observed in the equity market. An excess demand for money is observed in any market. If some prices adjust quickly enough to clear their market, but other prices are sticky so their markets don’t always clear, we may observe an excess demand for money as an excess supply of goods in those sticky-price markets, but the prices in flexible-price markets will still be affected by the excess demand for money.

Then a bit later, Nick continues:

If individuals want to save in the form of money, they won’t collectively be able to if the stock of money does not increase. There will be an excess demand for money in all the money markets, except those where the price of the non-money thing in that market is flexible and adjusts to clear that market. In the sticky-price markets there will be nothing an individual can do if he wants to buy more money but nobody else wants to sell more. But in those same sticky-price markets any individual can always sell less money, regardless of what any other individual wants to do. Nobody can stop you selling less money, if that’s what you want to do.

Unable to increase the flow of money into their portfolios, each individual reduces the flow of money out of his portfolio. Demand falls in sticky-price markets, quantity traded is determined by the short side of the market (Q=min{Qd,Qs}), so trade falls, and some trades that would be mutually advantageous in a barter or Walrasian economy even at those sticky prices don’t get made, and there’s a recession. Since money is used for trade, the demand for money depends on the volume of trade. When trade falls the flow of money falls too, and the stock demand for money falls, until the representative individual chooses a flow of money out of his portfolio equal to the flow in. He wants to increase the flow in, but cannot, since other individuals don’t want to increase their flows out.
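The short-side rule Nick invokes, Q = min{Qd, Qs}, can be illustrated with a minimal sketch. The linear demand and supply curves and the particular prices below are my own illustrative choices, not Nick's.

```python
# A minimal sketch of the short-side rule Q = min{Qd, Qs} at a sticky
# (non-clearing) price. Curves and numbers are purely illustrative.

def quantity_traded(price, demand, supply):
    """Quantity traded at a possibly non-clearing price: the short side."""
    return min(demand(price), supply(price))

demand = lambda p: max(0.0, 100 - 10 * p)  # quantity demanded falls in p
supply = lambda p: max(0.0, 10 * p)        # quantity supplied rises in p

p_clearing = 5.0  # demand = supply = 50 at this price
p_sticky = 6.0    # price stuck above the market-clearing level

# At the sticky price, demand (40) is the short side, so trade falls
# from 50 to 40 even though suppliers would willingly sell 60.
print(quantity_traded(p_clearing, demand, supply))  # 50.0
print(quantity_traded(p_sticky, demand, supply))    # 40.0
```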

The role of price stickiness or price rigidity in accounting for involuntary unemployment is an old and complicated story. If you go back and read what economists before Keynes had to say about the Great Depression, you will find that there was considerable agreement that, in principle, if workers were willing to accept a large enough cut in their wages, they could all get reemployed. That was a proposition accepted by Hawtrey and by Keynes. However, they did not believe that wage cutting was a good way of restoring full employment, because the process of wage cutting would be brutal economically and divisive – even self-destructive – politically. So they favored a policy of reflation that would facilitate and hasten the process of recovery. However, there were also those economists, e.g., Ludwig von Mises and the young Lionel Robbins in his book The Great Depression (which he had the good sense to disavow later in life), who attributed high unemployment to an unwillingness of workers and labor unions to accept wage cuts and to various other legal barriers preventing the price mechanism from operating to restore equilibrium in the normal way that prices adjust to equate the amount demanded with the amount supplied in each and every market.

But in the General Theory, Keynes argued that if you believed in the standard story told by microeconomics about how prices constantly adjust to equate demand and supply and maintain equilibrium, then maybe you should be consistent and follow the Mises/Robbins story and just wait for the price mechanism to perform its magic, rather than support counter-cyclical monetary and fiscal policies. So Keynes then argued that there is actually something wrong with the standard microeconomic story; price adjustments can’t ensure that overall economic equilibrium is restored, because the level of employment depends on aggregate demand, and if aggregate demand is insufficient, wage cutting won’t increase – and, more likely, would reduce — aggregate demand, so that no amount of wage-cutting would succeed in reducing unemployment.

To those upholding the idea that the price system is a stable self-regulating system or process for coordinating a decentralized market economy, in other words to those upholding microeconomic orthodoxy as developed in any of the various strands of the neoclassical paradigm, Keynes’s argument was deeply disturbing and subversive.

In one of the first of his many important publications, “Liquidity Preference and the Theory of Money and Interest,” Franco Modigliani argued that, despite Keynes’s attempt to prove that unemployment could persist even if prices and wages were perfectly flexible, the assumption of wage rigidity was in fact essential to arrive at Keynes’s result that there could be an equilibrium with involuntary unemployment. Modigliani did so by positing a model in which the supply of labor is a function of real wages. It was not hard for Modigliani to show that in such a model an equilibrium with unemployment required a rigid real wage.

Modigliani was not in favor of relying on price flexibility instead of counter-cyclical policy to solve the problem of involuntary unemployment; he just argued that the rationale for such policies had to be that prices and wages were not adjusting immediately to clear markets. But the inference that Modigliani drew from that analysis — that price flexibility would lead to an equilibrium with full employment — was not valid, there being no guarantee that price adjustments would necessarily lead to equilibrium, unless all prices and wages instantaneously adjusted to their new equilibrium in response to any deviation from a pre-existing equilibrium.

All the theory of general equilibrium tells us is that if all trading takes place at the equilibrium set of prices, the economy will be in equilibrium as long as the underlying “fundamentals” of the economy do not change. But in a decentralized economy, no one knows what the equilibrium prices are, and the equilibrium price in each market depends in principle on what the equilibrium prices are in every other market. So unless the price in every market is an equilibrium price, none of the markets is necessarily in equilibrium.

Now it may well be that if all prices are close to equilibrium, the small changes will keep moving the economy closer and closer to equilibrium, so that the adjustment process will converge. But that is just conjecture; there is no proof showing the conditions under which a simple rule that says raise the price in any market with an excess demand and decrease the price in any market with an excess supply will in fact lead to the convergence of the whole system to equilibrium. Even in a Walrasian tatonnement system, in which no trading at disequilibrium prices is allowed, there is no proof that the adjustment process will eventually lead to the discovery of the equilibrium price vector. If trading at disequilibrium prices is allowed, tatonnement is hopeless.
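The naive adjustment rule just described — raise the price where there is excess demand, lower it where there is excess supply — can be sketched in a few lines. The excess-demand function below is a toy example of my own; in this particular example the rule happens to converge, which is exactly the point: convergence is a property of the example, not a general theorem.

```python
# A sketch of tatonnement: adjust the price in proportion to excess
# demand, with no trading at disequilibrium prices. The functional
# form is illustrative; convergence here does NOT generalize.

def excess_demand(p):
    # Toy excess demand for one good as a function of its relative
    # price p; it is zero at the equilibrium price p = 2.
    return 10.0 / p - 5.0

p = 1.0
for _ in range(1000):
    p += 0.01 * excess_demand(p)  # tatonnement step: prices move, no trade

print(round(p, 3))  # in this example, settles near the equilibrium price 2
```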

So the real problem is not that prices are sticky but that trading takes place at disequilibrium prices and there is no mechanism by which to discover what the equilibrium prices are. Modern macroeconomics solves this problem, in its characteristic fashion, by assuming it away by insisting that expectations are “rational.”

Economists have allowed themselves to make this absurd assumption because they are in the habit of thinking that the simple rule of raising price when there is an excess demand and reducing the price when there is an excess supply inevitably causes convergence to equilibrium. This habitual way of thinking has been inculcated in economists by the intense, and largely beneficial, training they have been subjected to in Marshallian partial-equilibrium analysis, which is built on the assumption that every market can be analyzed in isolation from every other market. But that analytic approach can only be justified under a very restrictive set of assumptions. In particular it is assumed that any single market under consideration is small relative to the whole economy, so that its repercussions on other markets can be ignored, and that every other market is in equilibrium, so that there are no changes from other markets that are impinging on the equilibrium in the market under consideration.

Neither of these assumptions is strictly true in theory, so all partial equilibrium analysis involves a certain amount of hand-waving. Nor, even if we wanted to be careful and precise, could we actually dispense with the hand-waving; the hand-waving is built into the analysis, and can’t be avoided. I have often referred to these assumptions required for the partial-equilibrium analysis — the bread and butter microeconomic analysis of Econ 101 — to be valid as the macroeconomic foundations of microeconomics, by which I mean that the casual assumption that microeconomics somehow has a privileged and secure theoretical position compared to macroeconomics and that macroeconomic propositions are only valid insofar as they can be reduced to more basic microeconomic principles is entirely unjustified. That doesn’t mean that we shouldn’t care about reconciling macroeconomics with microeconomics; it just means that the validity of propositions in macroeconomics is not necessarily contingent on their being derived from microeconomics. Reducing macroeconomics to microeconomics should be an analytical challenge, not a methodological imperative.

So the assumption, derived from Modigliani’s 1944 paper, that “price stickiness” is what prevents an economic system from moving automatically to a new equilibrium after being subjected to some shock or disturbance, reflects either a misunderstanding or a semantic confusion. It is not price stickiness that prevents the system from moving toward equilibrium, it is the fact that individuals are engaging in transactions at disequilibrium prices. We simply do not know how to compare different sets of non-equilibrium prices to determine which set of non-equilibrium prices will move the economy further from or closer to equilibrium. Our experience and our intuition suggest that in some neighborhood of equilibrium, an economy can absorb moderate shocks without going into a cumulative contraction. But all we really know from theory is that any trading at any set of non-equilibrium prices can trigger an economic contraction, and once it starts to occur, a contraction may become cumulative.

It is also a mistake to assume that in a world of incomplete markets, the missing markets being markets for the delivery of goods and the provision of services in the future, any set of price adjustments, however large, could by themselves ensure that equilibrium is restored. With an incomplete set of markets, economic agents base their decisions not just on actual prices in the existing markets; they base their decisions on prices for future goods and services which can only be guessed at. And it is only when individual expectations of those future prices are mutually consistent that equilibrium obtains. With inconsistent expectations of future prices, the adjustments in current prices in the markets that exist for currently supplied goods and services that in some sense equate amounts demanded and supplied, lead to a (temporary) equilibrium that is not efficient, one that could be associated with high unemployment and unused capacity even though technically existing markets are clearing.

So that’s why I regard the term “sticky prices” and other similar terms as very unhelpful and misleading; they are a kind of mental crutch that economists are too ready to rely on as a substitute for thinking about what are the actual causes of economic breakdowns, crises, recessions, and depressions. Most of all, they represent an uncritical transfer of partial-equilibrium microeconomic thinking to a problem that requires a system-wide macroeconomic approach. That approach should not ignore microeconomic reasoning, but it has to transcend both partial-equilibrium supply-demand analysis and the mathematics of intertemporal optimization.

What’s Wrong with Econ 101?

Hendrickson responded recently to criticisms of Econ 101 made by Noah Smith and Mark Thoma. Mark Thoma thinks that Econ 101 has a conservative bias, presumably because Econ 101 teaches students that markets equilibrate supply and demand and allocate resources to their highest valued use and that sort of thing. If markets are so wonderful, then shouldn’t we keep hands off the market and let things take care of themselves? Noah Smith is especially upset that Econ 101, slighting the ambiguous evidence that minimum-wage laws actually do increase unemployment, is too focused on theory and pays too little attention to empirical techniques.

I sympathize with Josh’s defense of Econ 101, and I think he makes a good point that there is nothing in Econ 101 that quantifies the effect on unemployment of minimum-wage legislation, so that the disconnect between theory and evidence isn’t as stark as Noah suggests. Josh also emphasizes, properly, that whatever the effect of an increase in the minimum wage implied by economic theory, that implication by itself can’t tell us whether the minimum wage should be raised. An ought statement can’t be derived from an is statement. Philosophers are not as uniformly in agreement about the positive-normative distinction as they used to be, but I am old-fashioned enough to think that it’s still valid. If there is a conservative bias in Econ 101, the problem is not Econ 101; the problem is bad teaching.

Having said all that, however, I don’t think that Josh’s defense addresses the real problems with Econ 101. Noah Smith’s complaints about the implied opposition of Econ 101 to minimum-wage legislation and Mark Thoma’s about the conservative bias of Econ 101 are symptoms of a deeper problem with Econ 101, a problem inherent in the current state of economic theory, and unlikely to go away any time soon.

The deeper problem that I think underlies much of the criticism of Econ 101 is the fragility of its essential propositions. These propositions, what Paul Samuelson misguidedly called “meaningful theorems,” are deducible from the basic postulates of utility maximization and wealth maximization by applying the method of comparative statics. Not only are the propositions based on questionable psychological assumptions, the comparative-statics method imposes further restrictive assumptions designed to isolate a single purely theoretical relationship. The assumptions aren’t just the kind of simplifications necessary for the theoretical models of any empirical science to be applicable to the real world; they undercut the powerful logic used to derive the implications. It’s not just that the assumptions may not be fully consistent with the conditions actually observed: the meaningful theorems themselves are highly sensitive to those assumptions.

The bread and butter of Econ 101 is the microeconomic theory of market adjustment in which price and quantity adjust to equilibrate what consumers demand with what suppliers produce. This is the partial-equilibrium analysis derived from Alfred Marshall, and gradually perfected in the 1920s and 1930s after Marshall’s death with the development of the theories of the firm, and perfect and imperfect competition. As I have pointed out before in a number of posts just as macroeconomics depends on microfoundations, microeconomics depends on macrofoundations (e.g. here and here). All partial-equilibrium analysis relies on the – usually implicit — assumption that all markets but the single market under analysis are in equilibrium. Without that assumption, it is logically impossible to derive any of Samuelson’s meaningful theorems, and the logical necessity of microeconomics is severely compromised.

The underlying idea is very simple. Samuelson’s meaningful theorems are meant to isolate the effect of a change in a single parameter on a particular endogenous variable in an economic system. The only way to isolate the effect of the parameter on the variable is to start from an equilibrium state in which the system is, as it were, at rest. A small (aka infinitesimal) change in the parameter induces an adjustment in the equilibrium, and a comparison of the small change in the variable of interest between the new equilibrium and the old equilibrium relative to the parameter change identifies the underlying relationship between the variable and the parameter, all else being held constant. If the analysis did not start from equilibrium, then the effect of the parameter change on the variable could not be isolated, because the variable would be changing for reasons having nothing to do with the parameter change, making it impossible to isolate the pure effect of the parameter change on the variable of interest.
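The comparative-statics exercise just described can be made concrete with a toy model of my own devising: linear demand and supply with a per-unit tax t as the parameter. Perturbing t away from an initial equilibrium and comparing the new equilibrium price with the old one recovers the "meaningful theorem" dp*/dt = d/(b + d), i.e., partial pass-through of the tax to the market price. All functional forms and numbers are illustrative.

```python
# A minimal comparative-statics sketch: start from equilibrium, perturb
# one parameter (a per-unit tax t), compare old and new equilibria.
# Linear demand Qd = a - b*p and supply Qs = c + d*(p - t), illustrative.

def equilibrium_price(a, b, c, d, t):
    """Solves Qd = Qs for the equilibrium price p*."""
    return (a - c + d * t) / (b + d)

a, b, c, d = 100.0, 2.0, 10.0, 3.0

# Finite-difference approximation to dp*/dt around t = 0 ...
h = 1e-6
dp_dt = (equilibrium_price(a, b, c, d, h)
         - equilibrium_price(a, b, c, d, 0.0)) / h

# ... matches the analytic comparative-statics result d / (b + d):
# part, but not all, of the tax is passed through to the price.
print(round(dp_dt, 6), d / (b + d))
```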

Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium — the usual assumption of a unique equilibrium being an assumption to ensure tractability rather than a deduction from any plausible assumptions – or simply veer off on some explosive or indeterminate path.

Even aside from all these restrictive assumptions, the standard partial-equilibrium analysis is restricted to markets that can be assumed to be very small relative to the entire system. For small markets, it is safe to assume that the small changes in the single market under analysis will have sufficiently small effects on all the other markets in the economy that the induced effects on all the other markets from the change in the market of interest have a negligible feedback effect on the market of interest.

But the partial-equilibrium method surely breaks down when the market under analysis is a market that is large relative to the entire economy, like, shall we say, the market for labor. The feedback effects are simply too strong for the small-market assumptions underlying the partial-equilibrium analysis to be satisfied by the labor market. But even aside from the size issue, the essence of the partial-equilibrium method is the assumption that all markets other than the market under analysis are in equilibrium. But the very assumption that the labor market is not in equilibrium renders the partial-equilibrium assumption that all other markets are in equilibrium untenable. I would suggest that the proper way to think about what Keynes was trying, not necessarily successfully, to do in the General Theory when discussing nominal wage cuts as a way to reduce unemployment is to view that discussion as a critique of using the partial-equilibrium method to analyze a state of general unemployment, as opposed to a situation in which unemployment is confined to a particular occupation or a particular geographic area.

So the question naturally arises: If the logical basis of Econ 101 is as flimsy as I have been suggesting, should we stop teaching Econ 101? My answer is an emphatic, but qualified, no. Econ 101 is the distillation of almost a century and a half of rigorous thought about how to analyze human behavior. What we have come up with so far is very imperfect, but it is still the most effective tool we have for systematically thinking about human conduct and its consequences, especially its unintended consequences. But we should be more forthright about its limitations and the nature of the assumptions that underlie the analysis. We should also be more aware of the logical gaps between the theory – Samuelson’s meaningful theorems — and the applications of the theory.

In fact, many meaningful theorems are consistently corroborated by statistical tests, presumably because observations by and large occur when the economy operates in the neighborhood of a general equilibrium and feedback effects are small, so that the extraneous forces – other than those derived from theory – impinge on actual observations more or less randomly, and thus don’t significantly distort the predicted relationship. And undoubtedly there are also cases in which the random effects overwhelm the theoretically identified relationships, preventing the relationships from being identified statistically, at least when the number of observations is relatively small as is usually the case with economic data. But we should also acknowledge that the theoretically predicted relationships may simply not hold in the real world, because the extreme conditions required for the predicted partial-equilibrium relationships to hold – near-equilibrium conditions and the absence of feedback effects – may often not be satisfied.

What’s Wrong with Monetarism?

UPDATE: (05/06): In an email Richard Lipsey has chided me for seeming to endorse the notion that 1970s stagflation refuted Keynesian economics. Lipsey rightly points out that by introducing inflation expectations into the Phillips Curve or the Aggregate Supply Curve, a standard Keynesian model is perfectly capable of explaining stagflation, so that it is simply wrong to suggest that 1970s stagflation constituted an empirical refutation of Keynesian theory. So my statement in the penultimate paragraph that the k-percent rule

was empirically demolished in the 1980s in a failure even more embarrassing than the stagflation failure of Keynesian economics.

should be amended to read “the supposed stagflation failure of Keynesian economics.”

Brad DeLong recently did a post (“The Disappearance of Monetarism”) referencing an old (apparently unpublished) paper of his following up his 2000 article (“The Triumph of Monetarism”) in the Journal of Economic Perspectives. Paul Krugman added his own gloss on DeLong on Friedman in a post called “Why Monetarism Failed.” In the JEP paper, DeLong argued that the New Keynesian policy consensus of the 1990s was built on the foundation of what DeLong called “classic monetarism,” the analytical core of the doctrine developed by Friedman in the 1950s and 1960s, a core that survived the demise of what he called “political monetarism,” the set of factual assumptions and policy preferences required to justify Friedman’s k-percent rule as the holy grail of monetary policy.

In his follow-up paper, DeLong balanced his enthusiasm for Friedman with a bow toward Keynes, noting the influence of Keynes on both classic and political monetarism, arguing that, unlike earlier adherents of the quantity theory, Friedman believed that a passive monetary policy was not the appropriate policy stance during the Great Depression; Friedman famously held the Fed responsible for the depth and duration of what he called the Great Contraction, because it had allowed the US money supply to drop by a third between 1929 and 1933. This was in sharp contrast to hard-core laissez-faire opponents of Fed policy, who regarded even the mild and largely ineffectual steps taken by the Fed – increasing the monetary base by 15% – as illegitimate interventionism to obstruct the salutary liquidation of bad investments, thereby postponing the necessary reallocation of real resources to more valuable uses. So, according to DeLong, Friedman, no less than Keynes, was battling against the hard-core laissez-faire opponents of any positive action to speed recovery from the Depression. While Keynes believed that in a deep depression only fiscal policy would be effective, Friedman believed that, even in a deep depression, monetary policy would be effective. But both agreed that there was no structural reason why stimulus would necessarily be counterproductive; both rejected the idea that only if the increased output generated during the recovery was of a particular composition would recovery be sustainable.

Indeed, that’s why Friedman has always been regarded with suspicion by laissez-faire dogmatists who correctly judged him to be soft in his criticism of Keynesian doctrines, never having disputed the possibility that “artificially” increasing demand – either by government spending or by money creation — in a deep depression could lead to sustainable economic growth. From the point of view of laissez-faire dogmatists that concession to Keynesianism constituted a total sellout of fundamental free-market principles.

Friedman parried such attacks on the purity of his free-market dogmatism with a counterattack against his free-market dogmatist opponents, arguing that the gold standard to which they were attached so fervently was itself inconsistent with free-market principles, because, in virtually all historical instances of the gold standard, the monetary authorities charged with overseeing or administering the gold standard retained discretionary authority allowing them to set interest rates and exercise control over the quantity of money. Because monetary authorities retained substantial discretionary latitude under the gold standard, Friedman argued that a gold standard was institutionally inadequate and incapable of constraining the behavior of the monetary authorities responsible for its operation.

The point of a gold standard, in Friedman’s view, was that it makes it costly to increase the quantity of money. That might once have been true, but advances in banking technology eventually made it easy for banks to increase the quantity of money without any increase in the quantity of gold, making inflation possible even under a gold standard. True, eventually the inflation would have to be reversed to maintain the gold standard, but that simply made alternative periods of boom and bust inevitable. Thus, the gold standard, i.e., a mere obligation to convert banknotes or deposits into gold, was an inadequate constraint on the quantity of money, and an inadequate systemic assurance of stability.

In other words, if the point of a gold standard is to prevent the quantity of money from growing excessively, then why not just eliminate the middleman and simply establish a monetary rule constraining the growth in the quantity of money? That was why Friedman believed that his k-percent rule – please pardon the expression – trumped the gold standard, accomplishing directly what the gold standard could not accomplish, even indirectly: a gradual steady increase in the quantity of money that would prevent monetary-induced booms and busts.

Moreover, the k-percent rule made the monetary authority responsible for one thing, and one thing alone, imposing a rule on the monetary authority prescribing the time path of the one targeted instrument over which the monetary authority was presumed to have direct control: the quantity of money. The belief that the monetary authority in a modern banking system has direct control over the quantity of money was, of course, an obvious mistake. That the mistake could have persisted as long as it did was the result of the analytical distraction of the money multiplier: one of the leading fallacies of twentieth-century monetary thought, a fallacy that introductory textbooks unfortunately continue even now to foist upon unsuspecting students.

The money multiplier is not a structural supply-side variable, it is a reduced-form variable incorporating both supply-side and demand-side parameters, but Friedman and other Monetarists insisted on treating it as if it were a structural — and a deep structural variable at that – supply variable, so that it is no less vulnerable to the Lucas Critique than, say, the Phillips Curve. Nevertheless, for at least a decade and a half after his refutation of the structural Phillips Curve, demonstrating its dangers as a guide to policy making, Friedman continued treating the money multiplier as if it were a deep structural variable, leading to the Monetarist forecasting debacle of the 1980s when Friedman and his acolytes were confidently predicting – over and over again — the return of double-digit inflation because the quantity of money was increasing for most of the 1980s at double-digit rates.
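The point that the multiplier is a reduced form can be made with the textbook formula m = (1 + c)/(c + r), in which c, the public's currency/deposit ratio, is a demand-side choice and r, the banks' reserve/deposit ratio, a supply-side one. A sketch with illustrative numbers shows how a shift in the public's portfolio behavior alone changes the multiplier, so that treating m as a structural supply-side constant is untenable:

```python
# The textbook money multiplier m = (1 + c) / (c + r): c is the
# public's currency/deposit ratio (demand side), r the banks'
# reserve/deposit ratio (supply side). Numbers are illustrative.

def money_multiplier(c, r):
    """M/B when currency C = c*D and reserves R = r*D: (C + D)/(C + R)."""
    return (1 + c) / (c + r)

base = 100.0
m_normal = money_multiplier(c=0.1, r=0.1)  # 5.5
m_panic = money_multiplier(c=0.4, r=0.1)   # public shifts into currency: 2.8

# With the monetary base unchanged, the money stock contracts purely
# because of a demand-side change in the public's behavior.
print(f"M normal: {base * m_normal:.0f}, after shift to currency: {base * m_panic:.0f}")
```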

So once the k-percent rule collapsed under an avalanche of contradictory evidence, the Monetarist alternative to the gold standard that Friedman had persuasively, though fallaciously, argued was, on strictly libertarian grounds, preferable to the gold standard, the gold standard once again became the default position of laissez-faire dogmatists. There was to be sure some consideration given to free banking as an alternative to the gold standard. In his old age, after winning the Nobel Prize, F. A. Hayek introduced a proposal for direct currency competition — the elimination of legal tender laws and the like – which he later developed into a proposal for the denationalization of money. Hayek’s proposals suggested that convertibility into a real commodity was not necessary for a non-legal tender currency to have value – a proposition which I have argued is fallacious. So Hayek can be regarded as the grandfather of crypto currencies like the bitcoin. On the other hand, advocates of free banking, with a few exceptions like Earl Thompson and me, have generally gravitated back to the gold standard.

So while I agree with DeLong and Krugman (and for that matter with Friedman’s many laissez-faire dogmatist critics) that Friedman had Keynesian inclinations which, depending on his audience, he sometimes emphasized and sometimes suppressed, the most important reason that he was unable to retain his hold on right-wing monetary-economics thinking is that his key monetary-policy proposal – the k-percent rule – was empirically demolished in a failure even more embarrassing than the stagflation failure of Keynesian economics. With the k-percent rule no longer available as an alternative, what’s a right-wing ideologue to do?

Anyone for nominal gross domestic product level targeting (or NGDPLT for short)?

There Is No Intertemporal Budget Constraint

Last week Nick Rowe posted a link to a just published article in a special issue of the Review of Keynesian Economics commemorating the 80th anniversary of the General Theory. Nick’s article discusses the confusion in the General Theory between saving and hoarding, and Nick invited readers to weigh in with comments about his article. The ROKE issue also features an article by Simon Wren-Lewis explaining the eclipse of Keynesian theory as a result of the New Classical Counter-Revolution, correctly identified by Wren-Lewis as a revolution inspired not by empirical success but by a methodological obsession with reductive micro-foundationalism. While deploring the New Classical methodological authoritarianism, Wren-Lewis takes solace from the ability of New Keynesians to survive under the New Classical methodological regime, salvaging a role for activist counter-cyclical policy by, in effect, negotiating a safe haven for the sticky-price assumption despite its shaky methodological credentials. The methodological fiction that sticky prices qualify as micro-founded allowed New Keynesianism to survive despite the ascendancy of micro-foundationalist methodology, thereby enabling the core Keynesian policy message to survive.

I mention the Wren-Lewis article in this context because of an exchange between two of the commenters on Nick’s article, the presumably pseudonymous Avon Barksdale and blogger Jason Smith, about microfoundations and Keynesian economics. Avon began by chastising Nick for wasting time discussing Keynes’s 80-year-old ideas, something Avon thinks would never happen in a discussion about a true science like physics, the 100-year-old ideas of Einstein being of no interest except insofar as they have been incorporated into the theoretical corpus of modern physics. Of course, this is simply vulgar scientism, as if the only legitimate way to do economics is to mimic how physicists do physics. This methodological scolding is typical of the charming New Classical arrogance. Sort of reminds one of how Friedrich Engels described Marxian theory as scientific socialism. I mean who, other than a religious fanatic, would be stupid enough to argue with the assertions of science?

Avon continues with a quotation from David Levine, a fine economist who has done a lot of good work, but who is also enthralled by the New Classical methodology. Avon’s scientism provoked the following comment from Jason Smith, a Ph.D. in physics with a deep interest in and understanding of economics.

You quote from Levine: “Keynesianism as argued by people such as Paul Krugman and Brad DeLong is a theory without people either rational or irrational”

This is false. The L in ISLM means liquidity preference and e.g. here …

http://krugman.blogs.nytimes.com/2013/11/18/the-new-keynesian-case-for-fiscal-policy-wonkish/

… Krugman mentions an Euler equation. The Euler equation essentially says that an agent must be indifferent between consuming one more unit today on the one hand and saving that unit and consuming in the future on the other if utility is maximized.

So there are agents in both formulations preferring one state of the world relative to others.

Avon replied:

Jason,

“This is false. The L in ISLM means liquidity preference and e.g. here”

I know what ISLM is. It’s not recursive so it really doesn’t have people in it. The dynamics are not set by any micro-foundation. If you’d like to see models with people in them, try Ljungqvist and Sargent, Recursive Macroeconomic Theory.

To which Jason retorted:

Avon,

So the definition of “people” is restricted to agents making multi-period optimizations over time, solving a dynamic programming problem?

Well then any such theory is obviously wrong because people don’t behave that way. For example, humans don’t optimize the dictator game. How can you add up optimizing agents and get a result that is true for non-optimizing agents … coincident with the details of the optimizing agents mattering.

Your microfoundation requirement is like saying the ideal gas law doesn’t have any atoms in it. And it doesn’t! It is an aggregate property of individual “agents” that don’t have properties like temperature or pressure (or even volume in a meaningful sense). Atoms optimize entropy, but not out of any preferences.

So how do you know for a fact that macro properties like inflation or interest rates are directly related to agent optimizations? Maybe inflation is like temperature — it doesn’t exist for individuals and is only a property of economics in aggregate.

These questions are not answered definitively, and they’d have to be to enforce a requirement for microfoundations … or a particular way of solving the problem.

Are quarks important to nuclear physics? Not really — it’s all pions and nucleons. Emergent degrees of freedom. Sure, you can calculate pion scattering from QCD lattice calculations (quark and gluon DoF), but it doesn’t give an empirically better result than chiral perturbation theory (pion DoF) that ignores the microfoundations (QCD).

Assuming quarks are required to solve nuclear physics problems would have been a giant step backwards.

To which Avon rejoined:

Jason

The microfoundation of nuclear physics and quarks is quantum mechanics and quantum field theory. How the degrees of freedom reorganize under the renormalization group flow, what effective field theory results is an empirical question. Keynesian economics is worse tha[n] useless. It’s wrong empirically, it has no theoretical foundation, it has no laws. It has no microfoundation. No serious grad school has taught Keynesian economics in nearly 40 years.

To which Jason answered:

Avon,

RG flow is irrelevant to chiral perturbation theory which is based on the approximate chiral symmetry of QCD. And chiral perturbation theory could exist without QCD as the “microfoundation”.

Quantum field theory is not a ‘microfoundation’, but rather a framework for building theories that may or may not have microfoundations. As Weinberg (1979) said:

” … quantum field theory itself has no content beyond analyticity, unitarity,
cluster decomposition, and symmetry.”

If I put together an NJL model, there is no requirement that the scalar field condensate be composed of quark-antiquark pairs. In fact, the basic idea was used for Cooper pairs as a model of superconductivity. Same macro theory; different microfoundations. And that is a general problem with microfoundations — different microfoundations can lead to the same macro theory, so which one is right?

And the IS-LM model is actually pretty empirically accurate (for economics):

http://informationtransfereconomics.blogspot.com/2014/03/the-islm-model-again.html

To which Avon responded:

First, ISLM analysis does not hold empirically. It just doesn’t work. That’s why we ended up with the macro revolution of the 70s and 80s. Keynesian economics ignores intertemporal budget constraints, it violates Ricardian equivalence. It’s just not the way the world works. People might not solve dynamic programs to set their consumption path, but at least these models include a future which people plan over. These models work far better than Keynesian ISLM reasoning.

As for chiral perturbation theory and the approximate chiral symmetries of QCD, I am not making the case that NJL models requires QCD. NJL is an effective field theory so it comes from something else. That something else happens to be QCD. It could have been something else, that’s an empirical question. The microfoundation I’m talking about with theories like NJL is QFT and the symmetries of the vacuum, not the short distance physics that might be responsible for it. The microfoundation here is about the basic laws, the principles.

ISLM and Keynesian economics has none of this. There is no principle. The microfoundation of modern macro is not about increasing the degrees of freedom to model every person in the economy on some short distance scale, it is about building the basic principles from consistent economic laws that we find in microeconomics.

Well, I totally agree that IS-LM is a flawed macroeconomic model, and, in its original form, it was borderline-incoherent, being a single-period model with an interest rate, a concept without meaning except as an intertemporal price relationship. These deficiencies of IS-LM became obvious in the 1970s, so the model was extended to include a future period, with an expected future price level, making it possible to speak meaningfully about real and nominal interest rates, inflation and an equilibrium rate of spending. So the failure of IS-LM to explain stagflation, cited by Avon as the justification for rejecting IS-LM in favor of New Classical macro, was not that hard to fix, at least enough to make it serviceable. And comparisons of the empirical success of augmented IS-LM and the New Classical models have shown that IS-LM models consistently outperform New Classical models.

What Avon fails to see is that the microfoundations that he considers essential for macroeconomics are themselves derived from the assumption that the economy is operating in macroeconomic equilibrium. Thus, insisting on microfoundations – at least in the formalist sense in which Avon and the New Classical macroeconomists understand the term – does not provide a foundation for macroeconomics; it is just question begging, aka circular reasoning, aka petitio principii.

The circularity is obvious from even a cursory reading of Samuelson’s Foundations of Economic Analysis, Robert Lucas’s model for doing economics. What Samuelson called meaningful theorems – thereby betraying his misguided acceptance of the now discredited logical positivist dogma that only potentially empirically verifiable statements have meaning – are derived using the comparative-statics method, which involves finding the sign of the derivative of an endogenous economic variable with respect to a change in some parameter. But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.
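The comparative-statics logic described above can be stated compactly. The following is a generic textbook sketch (the symbols are mine, not Samuelson’s): if an equilibrium condition F(x, α) = 0 implicitly defines the equilibrium value x*(α) of an endogenous variable x as a function of a parameter α, the implicit function theorem gives the sign of the response:

```latex
% Comparative statics via the implicit function theorem:
% F(x, \alpha) = 0 implicitly defines x^{*}(\alpha), and
\frac{dx^{*}}{d\alpha} \;=\; -\,\frac{\partial F/\partial \alpha}{\partial F/\partial x}
% The formula is meaningful only if \partial F/\partial x \neq 0 and the
% equilibrium is locally stable, so that after a small change in \alpha
% the system actually settles at the nearby equilibrium x^{*}(\alpha).
```

The stability caveat in the last line is exactly the restriction at issue: the derivative says nothing about what happens if the displacement carries the system outside the neighborhood of a stable equilibrium.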

Avon dismisses Keynesian economics because it ignores intertemporal budget constraints. But the intertemporal budget constraint doesn’t exist in any objective sense. Certainly macroeconomics has to take into account intertemporal choice, but the idea of an intertemporal budget constraint analogous to the microeconomic budget constraint underlying the basic theory of consumer choice is totally misguided. In the static theory of consumer choice, the consumer has a given resource endowment and known prices at which consumers can transact at will, so the utility-maximizing vector of purchases and sales can be determined as the solution of a constrained-maximization problem.

In the intertemporal context, consumers have a given resource endowment, but prices are not known. So consumers have to make current transactions based on their expectations about future prices and a variety of other circumstances about which consumers can only guess. Their budget constraints are thus not real but totally conjectural based on their expectations of future prices. The optimizing Euler equations are therefore entirely conjectural as well, and subject to continual revision in response to changing expectations. The idea that the microeconomic theory of consumer choice is straightforwardly applicable to the intertemporal choice problem in a setting in which consumers don’t know what future prices will be and agents’ expectations of future prices are a) likely to be very different from each other and thus b) likely to be different from their ultimate realizations is a huge stretch. The intertemporal budget constraint has a completely different role in macroeconomics from the role it has in microeconomics.
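For concreteness, the consumption Euler equation at issue in the exchange above is conventionally written as follows (standard textbook notation, assumed here rather than quoted from either commenter):

```latex
% Consumption Euler equation: at an optimum the agent is indifferent
% between consuming a marginal unit today and saving it at rate r_{t+1}:
u'(c_t) \;=\; \beta \,(1 + r_{t+1})\, E_t\!\left[u'(c_{t+1})\right]
```

The expectation operator E_t is conditioned on each agent’s conjectures about future prices and incomes, so the equation, like the budget constraint it is derived from, is revised whenever those conjectures change.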

If I expect that the demand for my services will be such that my disposable income next year would be $500k, my consumption choices would be very different from what they would have been if I were expecting a disposable income of $100k next year. If I expect a disposable income of $500k next year, and it turns out that next year’s income is only $100k, I may find myself in considerable difficulty, because my planned expenditure and the future payments I have obligated myself to make may exceed my disposable income or my capacity to borrow. So if there are a lot of people who overestimate their future incomes, the repercussions of their over-optimism may reverberate throughout the economy, leading to bankruptcies and unemployment and other bad stuff.

A large enough initial shock of mistaken expectations can become self-amplifying, at least for a time, possibly resembling the way a large initial displacement of water can generate a tsunami. A financial crisis, which is hard to model as an equilibrium phenomenon, may rather be an emergent phenomenon with microeconomic sources, but whose propagation can’t be described in microeconomic terms. New Classical macroeconomics simply excludes such possibilities on methodological grounds by imposing a rational-expectations general-equilibrium structure on all macroeconomic models.

This is not to say that the rational expectations assumption does not have a useful analytical role in macroeconomics. But the most interesting and most important problems in macroeconomics arise when the rational expectations assumption does not hold, because it is when individual expectations are very different and very unstable – say, like now, for instance — that macroeconomies become vulnerable to really scary instability.

Simon Wren-Lewis makes a similar point in his paper in the Review of Keynesian Economics.

Much discussion of current divisions within macroeconomics focuses on the ‘saltwater/freshwater’ divide. This understates the importance of the New Classical Counter Revolution (hereafter NCCR). It may be more helpful to think about the NCCR as involving two strands. The one most commonly talked about involves Keynesian monetary and fiscal policy. That is of course very important, and plays a role in the policy reaction to the recent Great Recession. However I want to suggest that in some ways the second strand, which was methodological, is more important. The NCCR helped completely change the way academic macroeconomics is done.

Before the NCCR, macroeconomics was an intensely empirical discipline: something made possible by the developments in statistics and econometrics inspired by The General Theory. After the NCCR and its emphasis on microfoundations, it became much more deductive. As Hoover (2001, p. 72) writes, ‘[t]he conviction that macroeconomics must possess microfoundations has changed the face of the discipline in the last quarter century’. In terms of this second strand, the NCCR was triumphant and remains largely unchallenged within mainstream academic macroeconomics.

Perhaps I will have some more to say about Wren-Lewis’s article in a future post. And perhaps also about Nick Rowe’s article.

HT: Tom Brown

Update (02/11/16):

On his blog Jason Smith provides some further commentary on his exchange with Avon on Nick Rowe’s blog, explaining at greater length how irrelevant microfoundations are to doing real empirically relevant physics. He also expands on and puts into a broader meta-theoretical context my point about the extremely narrow range of applicability of the rational-expectations equilibrium assumptions of New Classical macroeconomics.

David Glasner found a back-and-forth between me and a commenter (with the pseudonym “Avon Barksdale” after [a] character on The Wire who [didn’t end] up taking an economics class [per Tom below]) on Nick Rowe’s blog who expressed the (widely held) view that the only scientific way to proceed in economics is with rigorous microfoundations. “Avon” held physics up as a purported shining example of this approach.
I couldn’t let it go: even physics isn’t that reductionist. I gave several examples of cases where the microfoundations were actually known, but not used to figure things out: thermodynamics, nuclear physics. Even modern physics is supposedly built on string theory. However physicists do not require every pion scattering amplitude be calculated from QCD. Some people do do so-called lattice calculations. But many resort to the “effective” chiral perturbation theory. In a sense, that was what my thesis was about — an effective theory that bridges the gap between lattice QCD and chiral perturbation theory. That effective theory even gave up on one of the basic principles of QCD — confinement. It would be like an economist giving up opportunity cost (a basic principle of the micro theory). But no physicist ever said to me “your model is flawed because it doesn’t have true microfoundations”. That’s because the kind of hard core reductionism that surrounds the microfoundations paradigm doesn’t exist in physics — the most hard core reductionist natural science!
In his post, Glasner repeated something that he had [said] before and — probably because it was in the context of a bunch of quotes about physics — I thought of another analogy.

Glasner says:

But the comparative-statics method is premised on the assumption that before and after the parameter change the system is in full equilibrium or at an optimum, and that the equilibrium, if not unique, is at least locally stable and the parameter change is sufficiently small not to displace the system so far that it does not revert back to a new equilibrium close to the original one. So the microeconomic laws invoked by Avon are valid only in the neighborhood of a stable equilibrium, and the macroeconomics that Avon’s New Classical mentors have imposed on the economics profession is a macroeconomics that, by methodological fiat, is operative only in the neighborhood of a locally stable equilibrium.

This hits on a basic principle of physics: any theory radically simplifies near an equilibrium.

Go to Jason’s blog to read the rest of his important and insightful post.

Sumner on the Demand for Money, Interest Rates and Barsky and Summers

Scott Sumner had two outstanding posts a couple of weeks ago (here and here) discussing the relationship between interest rates and NGDP, making a number of important points, which I largely agree with, even though I have some (mostly semantic) quibbles about the details. I especially liked how in the second post he applied the analysis of Robert Barsky and Larry Summers in their article about Gibson’s Paradox under the gold standard to recent monetary experience. The two posts are so good and cover such a wide range of topics that the best way for me to address them is by cutting and pasting relevant passages and commenting on them.

Scott begins with the equation of exchange MV = PY. I personally prefer the Cambridge version (M = kPY) where k stands for the fraction of income that people hold as cash, thereby making it clear that the relevant concept is how much money people want to hold, not that mysterious metaphysical concept called the velocity of circulation V (= 1/k). With attention focused on the decision about how much money to hold, it is natural to think of the rate of interest as the opportunity cost of holding non-interest-bearing cash balances. When the rate of interest rises, the desired holdings of non-interest-bearing cash tend to fall; in other words k falls (and V rises). With unchanged M, the equation is satisfied only if PY increases. So the notion that a reduction in interest rates, in and of itself, is expansionary is based on a misunderstanding. An increase in the amount of money demanded is always contractionary. A reduction in interest rates increases the amount of money demanded (if money is non-interest-bearing). A reduction in interest rates is therefore contractionary (all else equal).
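As a purely numerical sketch of the Cambridge equation (all figures hypothetical), holding M fixed while k rises, as it would when a lower interest rate reduces the opportunity cost of holding cash, forces nominal income PY to fall:

```python
# Cambridge cash-balance equation: M = k * P * Y, with k = 1/V.
# Toy illustration: with M fixed, a rise in desired cash holdings k
# (more money demanded) forces nominal income PY down.

def nominal_income(M, k):
    """Solve M = k * PY for nominal income PY."""
    return M / k

M = 1000.0       # money stock, held fixed
k_low = 0.20     # desired cash holdings when interest rates are high
k_high = 0.25    # desired cash holdings after a fall in interest rates

py_before = nominal_income(M, k_low)
py_after = nominal_income(M, k_high)

# Lower interest rates -> higher k -> lower PY, holding M constant:
print(py_before, py_after)  # 5000.0 4000.0
```

The arithmetic is trivial, but it makes the direction of the effect unmistakable: an increase in the demand to hold money is contractionary unless M rises to accommodate it.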

Scott suggests some reasons why this basic relationship seems paradoxical.

Sometimes, not always, reductions in interest rates are caused by an increase in the monetary base. (This was not the case in late 2007 and early 2008, but it is the case on some occasions.) When there is an expansionary monetary policy, specifically an exogenous increase in M, then when interest rates fall, V tends to fall by less than M rises. So the policy as a whole causes NGDP to rise, even as the specific impact of lower interest rates is to cause NGDP to fall.

To this I would add that, as discussed in my recent posts about Keynes and Fisher, Keynes in the General Theory seemed to be advancing a purely monetary theory of the rate of interest. If Keynes meant that the rate of interest is determined exclusively by monetary factors, then a falling rate of interest is a sure sign of an excess supply of money. Of course in the Hicksian world of IS-LM, the rate of interest is simultaneously determined by both equilibrium in the money market and an equilibrium rate of total spending, but Keynes seems to have had trouble with the notion that the rate of interest could be simultaneously determined by not one, but two, equilibrium conditions.

Another problem is the Keynesian model, which hopelessly confuses the transmission mechanism. Any Keynesian model with currency that says low interest rates are expansionary is flat out wrong.

But if Keynes believed that the rate of interest is exclusively determined by money demand and money supply, then the only possible cause of a low or falling interest rate is the state of the money market, the supply side of which is always under the control of the monetary authority. Or stated differently, in the Keynesian model, the money-supply function is perfectly elastic at the target rate of interest, so that the monetary authority supplies whatever amount of money is demanded at that rate of interest. I disagree with the underlying view of what determines the rate of interest, but given that theory of the rate of interest, the model is not incoherent and doesn’t confuse the transmission mechanism.

That’s probably why economists were so confused by 2008. Many people confuse aggregate demand with consumption. Thus they think low rates encourage people to “spend” and that this somehow boosts AD and NGDP. But it doesn’t, at least not in the way they assume. If by “spend” you mean higher velocity, then yes, spending more boosts NGDP. But we’ve already seen that lower interest rates don’t boost velocity, rather they lower velocity.

But, remember that Keynes believed that the interest rate can be reduced only by increasing the quantity of money, which nullifies the contractionary effect of a reduced interest rate.

Even worse, some assume that “spending” is the same as consumption, hence if low rates encourage people to save less and consume more, then AD will rise. This is reasoning from a price change on steroids! When you don’t spend you save, and saving goes into investment, which is also part of GDP.

But this is reasoning from an accounting identity. The question is what happens if people try to save more. The Keynesian argument is that the attempt to save will be self-defeating; instead of increased saving, there is reduced income. Both scenarios are consistent with the accounting identity. The question is which causal mechanism is operating: does an attempt to increase saving cause investment to increase, or does it cause income to go down? Seemingly aware of the alternative scenario, Scott continues:

Now here’s where amateur Keynesians get hopelessly confused. They recall reading something about the paradox of thrift, about planned vs. actual saving, about the fact that an attempt to save more might depress NGDP, and that in the end people may fail to save more, and instead NGDP will fall. This is possible, but even if true it has no bearing on my claim that low rates are contractionary.

Just so. But there is not necessarily any confusion; the issue may be just a difference in how monetary policy is implemented. You can think of the monetary authority as having a choice in setting its policy in terms of the quantity of the monetary base, or in terms of an interest-rate target. Scott characterizes monetary policy in terms of the base, allowing the interest rate to adjust; Keynesians characterize monetary policy in terms of an interest-rate target, allowing the monetary base to adjust. The underlying analysis should not depend on how policy is characterized. I think that this is borne out by Scott’s next paragraph, which is consistent with a policy choice on the part of the Keynesian monetary authority to raise interest rates as needed to curb aggregate demand when aggregate demand is excessive.

To see the problem with this analysis, consider the Keynesian explanations for increases in AD. One theory is that animal spirits propel businesses to invest more. Another is that consumer optimism propels consumers to spend more. Another is that fiscal policy becomes more expansionary, boosting the budget deficit. What do all three of these shocks have in common? In all three cases the shock leads to higher interest rates. (Use the S&I diagram to show this.) Yes, in all three cases the higher interest rates boost velocity, and hence ceteris paribus (i.e. fixed monetary base) the higher V leads to more NGDP. But that’s not an example of low rates boosting AD, it’s an example of some factor boosting AD, and also raising interest rates.

In the Keynesian terminology, the shocks do lead to higher rates, but only because excessive aggregate demand, caused by animal spirits, consumer optimism, or government budget deficits, has to be curbed by interest-rate increases. The ceteris paribus assumption is ambiguous; it can be interpreted to mean holding the monetary base constant or holding the interest-rate target constant. I don’t often cite Milton Friedman as an authority, but one of his early classic papers was “The Marshallian Demand Curve” in which he pointed out that there is an ambiguity in what is held constant along the demand curve: prices of other goods or real income. You can hold only one of the two constant, not both, and you get a different demand curve depending on which ceteris paribus assumption you make. So the upshot of my commentary here is that, although Scott is right to point out that the standard reasoning about how a change in interest rates affects NGDP implicitly assumes that the quantity of money is changing, that valid point doesn’t refute the standard reasoning. There is an inherent ambiguity in specifying what is actually held constant in any ceteris paribus exercise. It’s good to make these ambiguities explicit, and there might be good reasons to prefer one ceteris paribus assumption over another, but a ceteris paribus assumption isn’t a sufficient basis for rejecting a model.

Now just to be clear, I agree with Scott that, as a matter of positive economics, the interest rate is not fully under the control of the monetary authority. And one reason that it’s not is that the rate of interest is embedded in the entire price system, not just a particular short-term rate that the central bank may be able to control. So I don’t accept the basic Keynesian premise that the monetary authority can always make the rate of interest whatever it wants it to be, though the monetary authority probably does have some control over short-term rates.

Scott also provides an analysis of the effects of interest on reserves, and he is absolutely correct to point out that paying interest on reserves is deflationary.

I will just note that near the end of his post, Scott makes a comment about living “in a Ratex world.” WADR, I don’t think that ratex is at all descriptive of reality, but I will save that discussion for another time.

Scott followed up the post about the contractionary effects of low interest rates with a post about the 1988 Barsky and Summers paper.

Barsky and Summers . . . claim that the “Gibson Paradox” is caused by the fact that low interest rates are deflationary under the gold standard, and that causation runs from falling interest rates to deflation. Note that there was no NGDP data for this period, so they use the price level rather than NGDP as their nominal indicator. But their basic argument is identical to mine.

The Gibson Paradox referred to the tendency of prices and interest rates to be highly correlated under the gold standard. Initially some people thought this was due to the Fisher effect, but it turns out that prices were roughly a random walk under the gold standard, and hence the expected rate of inflation was close to zero. So the actual correlation was between prices and both real and nominal interest rates. Nonetheless, the nominal interest rate is the key causal variable in their model, even though changes in that variable are mostly due to changes in the real interest rate.

Since gold is a durable good with a fixed price, the nominal interest rate is the opportunity cost of holding that good. A lower nominal rate tends to increase the demand for gold, for both monetary and non-monetary purposes.  And an increased demand for gold is deflationary (and also reduces NGDP.)

Very insightful on Scott’s part to see the connection between the Barsky and Summers analysis and the standard theory of the demand for money. I had previously thought about the Barsky and Summers discussion simply as a present-value problem. The present value of any durable asset, generating a given expected flow of future services, must vary inversely with the interest rate at which those future services are discounted. Since the future price level under the gold standard was expected to be roughly stable, any change in nominal interest rates implied a corresponding change in real interest rates. The value of gold, like that of other durable assets, varied inversely with the nominal interest rate. But with the nominal value of gold fixed by the gold standard, changes in the value of gold implied a change in the price level, an increased value of gold being deflationary and a decreased value of gold inflationary. Scott rightly observes that the same idea can be expressed in the language of monetary theory by thinking of the nominal interest rate as the cost of holding any asset: a reduction in the nominal interest rate reduces the cost of holding an asset, increasing the demand to own it and thereby raising its value in exchange, provided that current output of the asset is small relative to the total stock.
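The inverse relationship between the discount rate and the value of a durable asset is easy to verify numerically; the figures below are illustrative only, not an estimate of anything:

```python
# Present value of a durable asset yielding a constant annual service
# flow, discounted at rate r over a finite horizon. Toy numbers.

def present_value(flow, r, years):
    """Sum of flow / (1 + r)**t for t = 1..years."""
    return sum(flow / (1 + r) ** t for t in range(1, years + 1))

flow = 10.0
pv_at_2pct = present_value(flow, 0.02, 50)
pv_at_5pct = present_value(flow, 0.05, 50)

# A fall in the discount rate raises the asset's value. Under the gold
# standard, a higher value of gold meant a lower price level (deflation).
print(pv_at_2pct > pv_at_5pct)  # True
```

With gold’s nominal price fixed, the rise in value produced by a lower interest rate could express itself only as a fall in the prices of everything else, which is the Barsky-Summers mechanism.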

However, the present-value approach does have an advantage over the opportunity-cost approach, because the present-value approach relates the value of gold or money to the entire term structure of interest rates, while the opportunity-cost approach can handle only a single interest rate – presumably the short-term rate – that is relevant to the decision to hold money at any given moment in time. In simple models of the IS-LM ilk, the only interest rate under consideration is the short-term rate, or the term structure is assumed to have a fixed shape, so that all interest rates move together with any change in the short-term rate. The latter assumption is of course clearly unrealistic, though Keynes made it without a second thought. However, in his A Century of Bank Rate, Hawtrey showed that between 1844 and 1938, when the gold standard was in effect in Britain (except 1914-25 and 1931-38), short-term rates and long-term rates often moved by significantly different magnitudes and even in opposite directions.

Scott makes a further interesting observation:

The puzzle of why the economy does poorly when interest rates fall (such as during 2007-09) is in principle just as interesting as the one Barsky and Summers looked at. Just as gold was the medium of account during the gold standard, base money is currently the medium of account. And just as causation went from falling interest rates to higher demand for gold to deflation under the gold standard, causation went from falling interest rates to higher demand for base money to recession in 2007-08.

There is something to this point, but I think Scott may be making too much of it. Falling interest rates in 2007 may have caused the demand for money to increase, but other factors were also important in causing the contraction. The problem in 2008 was that the real rate of interest was falling, while the Fed, fixated on commodity (especially energy) prices, kept interest rates too high given the rapidly deteriorating economy. With expected yields from holding real assets falling, the Fed, by not cutting interest rates any further between April and October of 2008, precipitated a financial crisis once inflationary expectations started collapsing in August 2008. The expected yield from holding money then dominated the expected yield from holding real assets, bringing about a pathological Fisher effect in which asset values had to collapse for the yields from holding money and from holding real assets to be equalized.

Under the gold standard, the value of gold was actually sensitive to two separate interest-rate effects: one reflected in the short-term rate and one reflected in the long-term rate. The latter effect is the one focused on by Barsky and Summers, though they also performed some tests on the short-term rate. However, it was through the short-term rate that the central bank, in particular the Bank of England, the dominant central bank in the pre-World War I era, manifested its demand for gold reserves, raising the short-term rate when it was trying to accumulate gold and reducing it when it was willing to reduce its reserve holdings. Barsky and Summers found the long-term rate to be more highly correlated with the price level than the short-term rate. I conjecture that the reason for that result is that the long-term rate is what captures the theoretical inverse relationship between the interest rate and the value of a durable asset, while the short-term rate would be negatively correlated with the value of gold when (as is usually the case) it moves together with the long-term rate, but may sometimes be positively correlated with the value of gold (when the central bank is trying to accumulate gold, thereby tightening the world market for gold). I don’t know whether Barsky and Summers ran regressions using both long-term and short-term rates, but including both rates in the same regression might have allowed them to find evidence of both effects in the data.
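The kind of regression I have in mind can be sketched as follows. The data here are synthetic and the coefficients are invented for illustration; this is not Barsky and Summers’ specification or data, only a demonstration that ordinary least squares with both rates as regressors can recover two separate effects when the rates are correlated but not perfectly so:

```python
import numpy as np

# Hypothetical sketch: (log) price level regressed on both the long-term
# and the short-term rate. All data below are synthetic; the coefficients
# 8.0 and -2.0 are invented for illustration only.
rng = np.random.default_rng(0)
n = 120
long_rate = 0.03 + 0.01 * rng.standard_normal(n)
short_rate = long_rate + 0.005 * rng.standard_normal(n)   # rates co-move
# By construction, the price level rises with the long rate (a lower value
# of gold) and falls with the short rate (central-bank gold accumulation).
log_price = (1.0 + 8.0 * long_rate - 2.0 * short_rate
             + 0.01 * rng.standard_normal(n))

# OLS with an intercept and both rates in the same regression.
X = np.column_stack([np.ones(n), long_rate, short_rate])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
# beta[1] and beta[2] estimate the separate long- and short-rate effects.
```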

PS I have been too busy and too distracted of late to keep up with comments on earlier posts. Sorry for not responding promptly. In case anyone is still interested, I hope to respond to comments over the next few days, and to post and respond more regularly than I have been doing for the past few weeks.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
