
More on Sticky Wages

It’s been over four and a half years since I wrote my second most popular post on this blog (“Why are Wages Sticky?”). Although the post was linked to and discussed by Paul Krugman (which is almost always a guarantee of getting a lot of traffic) and by other econoblogosphere standbys like Mark Thoma and Barry Ritholtz, unlike most of my other popular posts, it has continued ever since to attract a steady stream of readers. It’s the posts that keep attracting readers long after their original expiration date that I am generally most proud of.

I made a few preliminary points about wage stickiness before getting to my point. First, although Keynes is often supposed to have used sticky wages as the basis for his claim that market forces, unaided by stimulus to aggregate demand, cannot automatically eliminate cyclical unemployment within the short or even medium term, he actually devoted a lot of effort and space in the General Theory to arguing that nominal wage reductions would not increase employment, and to criticizing economists who blamed unemployment on nominal wages fixed by collective bargaining at levels too high to allow all workers to be employed. So, the idea that wage stickiness is a Keynesian explanation for unemployment doesn’t seem to me to be historically accurate.

I also discussed the search theories of unemployment that in some ways have improved our understanding of why some level of unemployment is a normal phenomenon even when people are able to find jobs fairly easily and why search and unemployment can actually be productive, enabling workers and employers to improve the matches between the skills and aptitudes that workers have and the skills and aptitudes that employers are looking for. But search theories also have trouble accounting for some basic facts about unemployment.

First, a lot of job search takes place while workers have jobs, whereas search theories assume that workers can’t or don’t search while they are employed. Second, when unemployment rises in recessions, it’s not because workers mistakenly expect more favorable wage offers than employers are making and turn down job offers that they later regret not having accepted (a very skewed way of interpreting what happens in recessions); it’s because workers are laid off by employers who are cutting back output and idling production lines.

I then suggested the following alternative explanation for wage stickiness:

Consider the incentive to cut price of a firm that can’t sell as much as it wants [to sell] at the current price. The firm is off its supply curve. The firm is a price taker in the sense that, if it charges a higher price than its competitors, it won’t sell anything, losing all its sales to competitors. Would the firm have any incentive to cut its price? Presumably, yes. But let’s think about that incentive. Suppose the firm has a maximum output capacity of one unit, and can produce either zero or one units in any time period. Suppose that demand has gone down, so that the firm is not sure if it will be able to sell the unit of output that it produces (assume also that the firm only produces if it has an order in hand). Would such a firm have an incentive to cut price? Only if it felt that, by doing so, it would increase the probability of getting an order sufficiently to compensate for the reduced profit margin at the lower price. Of course, the firm does not want to set a price higher than its competitors, so it will set a price no higher than the price that it expects its competitors to set.

Now consider a different sort of firm, a firm that can easily expand its output. Faced with the prospect of losing its current sales, this type of firm, unlike the first type, could offer to sell an increased amount at a reduced price. How could it sell an increased amount when demand is falling? By undercutting its competitors. A firm willing to cut its price could, by taking share away from its competitors, actually expand its output despite overall falling demand. That is the essence of competitive rivalry. Obviously, not every firm could succeed in such a strategy, but some firms, presumably those with a cost advantage, or a willingness to accept a reduced profit margin, could expand, thereby forcing marginal firms out of the market.
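The contrast between the two types of firms reduces to a simple expected-profit calculation. The sketch below uses invented numbers throughout (the cost, prices, probabilities of sale, and output quantities are illustrative assumptions, not anything taken from the post): the capacity-constrained firm gains too little from a price cut to bother, while the firm that can expand output by undercutting rivals gains a lot.

```python
# Hypothetical numbers throughout: cost, prices, and sale probabilities
# are illustrative assumptions chosen only to make the incentives visible.

cost = 60.0  # unit cost of production, assumed identical for both firms

def expected_profit(price, prob_of_sale, units=1):
    """Expected profit on `units` of output, each sold with probability prob_of_sale."""
    return units * prob_of_sale * (price - cost)

# Type-one firm: capacity fixed at one unit; a price cut can only raise
# the probability of landing the single order.
keep_price = expected_profit(price=100, prob_of_sale=0.70)   # 0.70 * 40 = 28.0
cut_price  = expected_profit(price=90,  prob_of_sale=0.85)   # 0.85 * 30 = 25.5
# Cutting pays only if the probability gain outweighs the thinner margin;
# with these numbers it does not, so the capacity-constrained firm holds its price.

# Type-two firm: cutting price wins share from rivals, so output can expand
# even though overall demand is falling.
keep_share = expected_profit(price=100, prob_of_sale=0.70, units=1)
cut_share  = expected_profit(price=90,  prob_of_sale=0.95, units=2)  # 2 * 0.95 * 30 = 57.0
# Undercutting roughly doubles expected profit, so the expandable firm cuts.
```

The asymmetry is entirely in the `units` parameter: the worker, like the type-one firm, cannot expand “output” by taking over co-workers’ jobs, so the powerful price-cutting incentive never comes into play.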

Workers seem to me to have the characteristics of type-one firms, while most actual businesses seem to resemble type-two firms. So what I am suggesting is that the inability of workers to take over the jobs of co-workers (the analog of output expansion by a firm) when faced with the prospect of a layoff means that a powerful incentive operating in non-labor markets for price cutting in response to reduced demand is not present in labor markets. A firm faced with the prospect of being terminated by a customer whose demand for the firm’s product has fallen may offer significant concessions to retain the customer’s business, especially if it can, in the process, gain an increased share of the customer’s business. A worker facing the prospect of a layoff cannot offer his employer a similar deal. And because the employer requires a workforce of many workers, it generally cannot avoid the morale-damaging effects of a wage cut on its workforce by replacing current workers with another set of workers at a lower wage than the old workers were getting.

I think that what I wrote four years ago is clearly right, identifying an important reason for wage stickiness. But there’s also another reason that I didn’t mention then, but whose significance has since become increasingly apparent to me, especially as a result of writing and rewriting my paper “Hayek, Hicks, Radner and three concepts of intertemporal equilibrium.”

If you are unemployed because the demand for your employer’s product has gone down, and your employer, planning to reduce output, is laying off workers no longer needed, how could you, as an individual worker, unconstrained by a union collective-bargaining agreement or by a minimum-wage law, persuade your employer not to lay you off? Could you really keep your job by offering to accept a wage cut — no matter how big? If you are being laid off because your employer is reducing output, would your offer to work at a lower wage cause your employer to keep output unchanged, despite a reduction in demand? If not, how would your offer to take a pay cut help you keep your job? Unless enough workers are willing to accept a big enough wage cut for your employer to find it profitable to maintain current output instead of cutting output, how would your own willingness to accept a wage cut enable you to keep your job?

Now, if all workers were to accept a sufficiently large wage cut, it might make sense for an employer not to carry out a planned reduction in output, but the offer by any single worker to accept a wage cut certainly would not cause the employer to change its output plans. So, if you are making an independent decision whether to offer to accept a wage cut, and other workers are making their own independent decisions about whether to accept a wage cut, would it be rational for you or any of them to accept a wage cut? Whether it would or wouldn’t might depend on what each worker was expecting other workers to do. But certainly given the expectation that other workers are not offering to accept a wage cut, why would it make any sense for any worker to be the one to offer to accept a wage cut? Would offering to accept a wage cut increase the likelihood that a worker would be one of the lucky ones chosen not to be laid off? Why would offering to accept a wage cut that no one else was offering to accept make the worker willing to work for less appear more desirable to the employer than the workers who wouldn’t accept a wage cut? One reaction by the employer might be: what’s this guy’s problem?
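The logic of the preceding paragraph is a coordination problem, and a toy numerical example makes it concrete. All the numbers below (workforce size, number of layoffs, the acceptance threshold) are hypothetical assumptions chosen only to illustrate the argument: a lone worker’s offer to accept a wage cut leaves the employer’s layoff decision unchanged, while a coordinated offer by enough workers would change it.

```python
# A minimal sketch of the coordination problem, with made-up numbers:
# the employer keeps output (and all jobs) only if enough workers accept
# a wage cut large enough to make current output profitable.

N_WORKERS = 10
LAYOFFS_IF_OUTPUT_CUT = 4   # workers dismissed when the employer reduces output
THRESHOLD = 8               # acceptances needed before maintaining output pays

def jobs_kept(n_accepting):
    """Number of workers retained, given how many offered to accept the cut."""
    if n_accepting >= THRESHOLD:
        return N_WORKERS                       # employer maintains output, keeps everyone
    return N_WORKERS - LAYOFFS_IF_OUTPUT_CUT   # employer cuts output regardless

# Expecting no one else to accept, one worker's lone offer changes nothing:
assert jobs_kept(0) == jobs_kept(1) == 6
# Only a coordinated acceptance by enough workers changes the employer's plan:
assert jobs_kept(8) == 10
```

Below the threshold, the individual offer buys nothing, which is why no market signal pushes any single worker toward making it.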

Combining this way of looking at the incentives workers have to offer to accept wage reductions to keep their jobs with my argument in my post of four years ago, I now am inclined to suggest that unemployment as such provides very little incentive for workers and employers to cut wages. Price cutting in periods of excess supply is often driven by aggressive price cutting by suppliers with large unsold inventories. There may be lots of unemployment, but no one is holding a large stock of unemployed workers, and no one is in a position to offer low wages to undercut the position of those currently employed at nominal wages that, arguably, are too high.

That’s not how labor markets operate. Labor markets involve matching individual workers and individual employers more or less one at a time. If nominal wages fall, it’s not because of an overhang of unsold labor flooding the market; it’s because something is changing the expectations of workers and employers about what wage will be offered by employers, and accepted by workers, for a particular kind of work. If the expected wage is too high, not all workers willing to work at that wage will find employment; if it’s too low, employers will not be able to find as many workers as they would like to hire, but the situation will not change until wage expectations change. And when wage expectations do change, it is not because the excess demand for workers exerts any immediate pressure for nominal wages to rise.

The further point I would make is that the optimal responses of workers and the optimal responses of their employers to a recessionary reduction in demand, in which the employers, given current input and output prices, are planning to cut output and lay off workers, are mutually interdependent. While it is, I suppose, theoretically possible that if enough workers decided to immediately offer to accept sufficiently large wage cuts, some employers might forego plans to lay off their workers, there are no obvious market signals that would lead to such a response, because such a response would be contingent on a level of coordination between workers and employers and a convergence of expectations about future outcomes that is almost unimaginable.

One can’t simply assume that it is in the independent self-interest of every worker to accept a wage cut as soon as an employer perceives a reduced demand for its product, making the current level of output unprofitable. But unless all, or enough, workers decide to accept a wage cut, the optimal response of the employer is still likely to be to cut output and lay off workers. There is no automatic mechanism by which the market adjusts to demand shocks to achieve the set of mutually consistent optimal decisions that characterizes a full-employment market-clearing equilibrium. Market-clearing equilibrium requires not merely isolated price and wage cuts by individual suppliers of inputs and final outputs, but a convergence of expectations about the prices of inputs and outputs that will be consistent with market clearing. And there is no market mechanism that achieves that convergence of expectations.

So, this brings me back to Keynes and the idea of sticky wages as the key to explaining cyclical fluctuations in output and employment. Keynes writes at the beginning of chapter 19 of the General Theory:

For the classical theory has been accustomed to rest the supposedly self-adjusting character of the economic system on an assumed fluidity of money-wages; and, when there is rigidity, to lay on this rigidity the blame of maladjustment.

A reduction in money-wages is quite capable in certain circumstances of affording a stimulus to output, as the classical theory supposes. My difference from this theory is primarily a difference of analysis. . . .

The generally accepted explanation is . . . quite a simple one. It does not depend on roundabout repercussions, such as we shall discuss below. The argument simply is that a reduction in money wages will, cet. par., stimulate demand by diminishing the price of the finished product, and will therefore increase output and employment up to the point where the reduction which labour has agreed to accept in its money wages is just offset by the diminishing marginal efficiency of labour as output . . . is increased. . . .

It is from this type of analysis that I fundamentally differ.

[T]his way of thinking is probably reached as follows. In any given industry we have a demand schedule for the product relating the quantities which can be sold to the prices asked; we have a series of supply schedules relating the prices which will be asked for the sale of different quantities. . . and these schedules between them lead up to a further schedule which, on the assumption that other costs are unchanged . . . gives us the demand schedule for labour in the industry relating the quantity of employment to different levels of wages . . . This conception is then transferred . . . to industry as a whole; and it is supposed, by a parity of reasoning, that we have a demand schedule for labour in industry as a whole relating the quantity of employment to different levels of wages. It is held that it makes no material difference to this argument whether it is in terms of money-wages or of real wages. If we are thinking of real wages, we must, of course, correct for changes in the value of money; but this leaves the general tendency of the argument unchanged, since prices certainly do not change in exact proportion to changes in money wages.

If this is the groundwork of the argument . . ., surely it is fallacious. For the demand schedules for particular industries can only be constructed on some fixed assumption as to the nature of the demand and supply schedules of other industries and as to the amount of aggregate effective demand. It is invalid, therefore, to transfer the argument to industry as a whole unless we also transfer our assumption that the aggregate effective demand is fixed. Yet this assumption amounts to an ignoratio elenchi. For whilst no one would wish to deny the proposition that a reduction in money-wages accompanied by the same aggregate demand as before will be associated with an increase in employment, the precise question at issue is whether the reduction in money wages will or will not be accompanied by the same aggregate effective demand as before measured in money, or, at any rate, measured by an aggregate effective demand which is not reduced in full proportion to the reduction in money-wages. . . . But if the classical theory is not allowed to extend by analogy its conclusions in respect of a particular industry to industry as a whole, it is wholly unable to answer the question what effect on employment a reduction in money-wages will have. For it has no method of analysis wherewith to tackle the problem. (General Theory, pp. 257-60)

Keynes’s criticism here is entirely correct. But I would restate it slightly differently. Standard microeconomic reasoning about preferences, demand, cost and supply is partial-equilibrium analysis. The focus is on how equilibrium in a single market is achieved by the adjustment of the price in that market to equate the amount demanded with the amount supplied.

Supply and demand is a wonderful analytical tool that can illuminate and clarify many economic problems, providing the key to important empirical insights and knowledge. But supply-demand analysis explicitly – though too often without realizing its limiting implications – assumes that other prices and incomes in other markets are held constant. That assumption essentially means that the market – i.e., the demand, cost and supply curves used to represent the behavioral characteristics of the market being analyzed – is small relative to the rest of the economy, so that changes in that single market can be assumed to have a de minimis effect on the equilibrium of all other markets. (The conditions under which such an assumption could be justified are themselves not unproblematic, but I am now assuming that those problems can in fact be assumed away, at least in many applications. And a good empirical economist will have an instinctual sense for when it’s OK to make the assumption and when it’s not.)

So, the underlying assumption of microeconomics is that the individual markets under analysis are very small relative to the whole economy. Why? Because if those markets are not small, we can’t assume that the demand curves, cost curves, and supply curves end up where they started. A high price in one market may have effects on other markets, and those effects will have further repercussions that shift the very demand, cost and supply curves that were drawn to represent the market of interest. If the curves themselves are unstable, the ability to predict the final outcome is greatly impaired if not completely compromised.

The working assumption of the bread-and-butter partial-equilibrium analysis that constitutes econ 101 is that markets have closed borders. And that assumption is not always valid. If markets have open borders, so that there is a lot of spillover between and across markets, the markets can only be analyzed in terms of broader systems of simultaneous equations, not the simplified solutions that we like to draw in two-dimensional space corresponding to intersections of stable demand curves with stable supply curves.
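The point about open borders and simultaneous equations can be illustrated with a toy linear model of two markets linked by a cross-price spillover; all the curves and coefficients below are invented for illustration, not drawn from any actual market. When the spillover is large, the partial-equilibrium answer (which holds the other market’s price fixed) misses the simultaneous solution; as the spillover shrinks, i.e., as the market becomes “small relative to the rest of the economy,” the two answers coincide.

```python
# Two markets with invented linear demand and supply curves, linked by a
# cross-price spillover term. Contrast the partial-equilibrium answer
# (other market's price held fixed) with the simultaneous solution.

def solve_two_markets(spill):
    """Clear both markets at once (a 2x2 linear system, solved by Cramer's rule).
    Market 1: demand 100 - 2*p1 + spill*p2 = supply 3*p1
    Market 2: demand  80 - 1*p2 + spill*p1 = supply 2*p2
    Rearranged: 5*p1 - spill*p2 = 100 ; -spill*p1 + 3*p2 = 80
    """
    det = 5 * 3 - spill * spill
    p1 = (100 * 3 + spill * 80) / det
    p2 = (5 * 80 + spill * 100) / det
    return p1, p2

def partial_equilibrium_p1(spill, p2_assumed):
    """Clear market 1 alone, treating the other market's price as a fixed parameter."""
    return (100 + spill * p2_assumed) / 5

# With a large spillover, the partial answer misses the simultaneous one...
p1_ge, p2_ge = solve_two_markets(spill=0.5)
p1_pe = partial_equilibrium_p1(spill=0.5, p2_assumed=20.0)   # 22.0 vs. about 23.05

# ...but as the spillover shrinks toward zero, the two nearly coincide,
# even though the assumed p2 is still wrong.
p1_small, _ = solve_two_markets(spill=0.01)
p1_pe_small = partial_equilibrium_p1(spill=0.01, p2_assumed=20.0)
```

The analytical shortcut of econ 101 is exactly `partial_equilibrium_p1`: legitimate when `spill` is negligible, and systematically wrong when it is not, which is Keynes’s objection to drawing a demand curve for labor for the economy as a whole.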

What Keynes was saying is that it makes no sense to draw a curve representing the demand of an entire economy for labor or a curve representing the supply of labor of an entire economy, because the underlying assumption of such curves that all other prices are constant cannot possibly be satisfied when you are drawing a demand curve and a supply curve for an input that generates more than half the income earned in an economy.

But the problem is even deeper than just the inability to draw a curve that meaningfully represents the demand of an entire economy for labor. The assumption that you can model a transition from one point on the curve to another point on the curve is simply untenable, because not only is the assumption that other variables are being held constant untenable and self-contradictory, the underlying assumption that you are starting from an equilibrium state is never satisfied when you are trying to analyze a situation of unemployment – at least if you have enough sense not to assume that the economy is always starting from, and always remains in, a state of general equilibrium.

So, Keynes was certainly correct to reject the naïve transfer of partial equilibrium theorizing from its legitimate field of applicability in analyzing the effects of small parameter changes on outcomes in individual markets – what later came to be known as comparative statics – to macroeconomic theorizing about economy-wide disturbances in which the assumptions underlying the comparative-statics analysis used in microeconomics are clearly not satisfied. That illegitimate transfer of one kind of theorizing to another has come to be known as the demand for microfoundations in macroeconomic models that is the foundational methodological principle of modern macroeconomics.

The principle, as I have been arguing for some time, is illegitimate for a variety of reasons. And one of those reasons is that microeconomics itself is based on the macroeconomic foundational assumption of a pre-existing general equilibrium, in which all plans in the entire economy are, and will remain, perfectly coordinated throughout the analysis of a particular parameter change in a single market. Once you relax the assumption that all markets but one are in equilibrium, the discipline imposed by the assumption of the rationality of general equilibrium and comparative statics is shattered, and a different kind of theorizing must be adopted to replace it.

The search for that different kind of theorizing is the challenge that has always faced macroeconomics. Despite heroic attempts to avoid facing that challenge and pretend that macroeconomics can be built as if it were microeconomics, the search for a different kind of theorizing will continue; it must continue. But it would certainly help if more smart and creative people would join in that search.


Only Idiots Think that Judges Are Umpires and Only Cads Say that They Think So

It now seems besides the point, but I want to go back and consider something Judge Kavanaugh said in his initial testimony three weeks ago before the Senate Judiciary Committee, now largely, and deservedly, forgotten.

In his earlier testimony, Judge Kavanaugh made the following ludicrous statement, echoing a similar statement by (God help us) Chief Justice Roberts at his confirmation hearing before the Senate Judiciary Committee:

A good judge must be an umpire, a neutral and impartial arbiter who favors no litigant or policy. As Justice Kennedy explained in Texas versus Johnson, one of his greatest opinions, judges do not make decisions to reach a preferred result. Judges make decisions because “the law and the Constitution, as we see them, compel the result.”

I don’t decide cases based on personal or policy preferences.

Kavanaugh’s former law professor Akhil Amar, in a touching gesture of loyalty to a former student, offered an embarrassingly feeble defense of Kavanaugh’s laughable comparison, putting the most generous possible gloss on an indefensible trivialization of what judging is all about.

According to the Chief Justice and to Judge Kavanaugh, judges, like umpires, are there to call balls and strikes. An umpire calls balls and strikes with no concern for the consequences of calling a ball or a strike on the outcome of the game. Think about it: do judges reach decisions about cases, make their rulings, write their opinions, with no concern for the consequences of their decisions?

Umpires make their calls based on split-second responses to their visual perceptions of what happens in front of their eyes, with no reflection on what implications their decisions have for anyone else, or the expectations held by the players whom they are watching. Think about it: would you want a judge to decide a case without considering the effects of his decision on the litigants and on the society at large?

Umpires make their decisions without hearing arguments from the players before rendering their decisions. Players, coaches, managers, or their spokesmen do not submit written briefs, or make oral arguments, to umpires in an effort to explain to umpires why justice requires that a decision be rendered in their favor. Umpires don’t study briefs or do research on decisions rendered by earlier umpires in previous contests. Think about it: would you want a judge to decide a case within the time that an umpire takes to call balls and strikes and do so with no input from the litigants?

Umpires never write opinions in which they explain (or at least try to explain) why their decisions are right and just after having taken into account on all the arguments advanced by the opposing sides and any other relevant considerations that might properly be taken into account in reaching a decision. Think about it: would you want a judge to decide a case without having to write an opinion explaining why his or her decision is the right and just one?

Umpires call balls and strikes instinctively, unreflectively, and without hesitation. But to judge means to think, to reflect, to consider both (or all) sides, to consider the consequences of the decision for the litigants and for society, and for future judges in future cases who will be guided by the decision being rendered in the case at hand. Judging — especially appellate judging — is a deeply intellectual and reflective vocation requiring knowledge, erudition, insight, wisdom, temperament, and, quite often, empathy and creativity.

To reduce this venerable vocation to the mere calling of balls and strikes is deeply dishonorable, and, coming from a judge who presumes to be worthy of sitting on the highest court in the land, supremely offensive.

What could possibly possess a judge — and a judge, presumably, is neither an idiot nor so lacking in self-awareness as not to understand what he is actually doing — to engage in such obvious sophistry? The answer, I think, is that it has come to be in the obvious political and ideological self-interest of many lawyers and judges to deliberately adopt a pretense that judging is — or should be — a mechanical activity that can be reduced to simply looking up and following already existing rules that have already been written down somewhere, and that to apply those rules requires nothing more than knowing how to read them properly. That idea can be summed up in two eight-letter words, one of which is nonsense, and those who knowingly propagate it are just, well, dare I say it, deplorable.

Why Judge Kavanaugh Shamefully Refused to Reject Chae Chan Ping v. United States (AKA Chinese Exclusion Case) as Precedent

Senator Kamala Harris asked Judge Kavanaugh if he considered the infamous Supreme Court decision in Chae Chan Ping v. United States (AKA Chinese Exclusion Case) a valid precedent. Judge Kavanaugh disgraced himself by refusing to say that the case was in error from the moment it was rendered, no less, and arguably even more, than Plessy v. Ferguson, which the Supreme Court overturned in Brown v. Board of Education.

The question is why would he not want to distance himself from a racist abomination of a decision that remains a stain on the Supreme Court to this day? After all, Judge Kavanaugh, in his fastidiousness, kept explaining to Senators that he wouldn’t want to get within three zipcodes of a political controversy. But, although obviously uncomfortable in his refusal to do so, he could not bring himself to say that Chae Chan Ping belongs in the garbage can along with Dred Scott and Plessy.

Here’s the reason. Chae Chan Ping is still an important precedent that has been and continues to be relied on by the government and the Supreme Court to uphold the power of the President to keep out foreigners whenever he wants to.

In a post in March 2017, I quoted from Justice Marshall’s magnificent dissent in Kleindienst v. Mandel, a horrible decision in which the Court upheld the exclusion of a Marxist scholar from the United States based on, among other precedents, the execrable Chae Chan Ping decision. Here is a brief excerpt from Justice Marshall’s opinion, which I discuss at greater length in my 2017 post.

The heart of appellants’ position in this case . . . is that the Government’s power is distinctively broad and unreviewable because “the regulation in question is directed at the admission of aliens.” Brief for Appellants 33. Thus, in the appellants’ view, this case is no different from a long line of cases holding that the power to exclude aliens is left exclusively to the “political” branches of Government, Congress, and the Executive.

These cases are not the strongest precedents in the United States Reports, and the majority’s baroque approach reveals its reluctance to rely on them completely. They include such milestones as The Chinese Exclusion Case, 130 U.S. 581 (1889), and Fong Yue Ting v. United States, 149 U.S. 698 (1893), in which this Court upheld the Government’s power to exclude and expel Chinese aliens from our midst.

Kleindienst has become the main modern precedent affirming the nearly unchecked power of the government to arbitrarily exclude foreigners from entering the United States on whatever whim the government chooses to act upon, so long as it can come up with an excuse, however pretextual, that the exclusion has a national security rationale.

And because Judge Kavanaugh will be a solid vote in favor of affirming the kind of monumentally dishonest decision made by Chief Justice Roberts in the Muslim Travel Ban case, he can’t disavow Chae Chan Ping without undermining Kleindienst, which, in turn, would undermine the Muslim Travel Ban.

Aside from being a great coach of his daughter’s basketball team and a superb carpool driver, Judge Kavanaugh, I’m sure, appreciates and understands how I feel.

Whatta guy.

Hayek v. Rawls on Social Justice: Correcting the False Narrative

Matt Yglesias, citing an article (“John Rawls, Socialist?“) by Ed Quish in the Jacobin arguing that Rawls, in his later years, drifted from his welfare-state liberalism to democratic socialism, tweeted a little while ago

I’m an admirer of, but no expert on, Rawls, so I won’t weigh in on where to pigeon-hole Rawls on the ideological spectrum. In general, I think such pigeon-holing is as likely to mislead as to clarify, because it tends to obscure the individuality of the thinker being pigeon-holed. Rawls was above all a Rawlsian, and to reduce his complex and nuanced philosophy to a simple catch-phrase like “socialism” or even “welfare-state liberalism” cannot possibly do his rich philosophical contributions justice (no pun intended).

A good way to illustrate both the complexity of Rawls’s philosophy and that of someone like F. A. Hayek, often regarded as standing on the opposite end of the philosophical spectrum from Rawls, is to quote from two passages of volume 2 of Law, Legislation and Liberty. Hayek entitled this volume The Mirage of Social Justice, and the main thesis of that volume is that the term “justice” is meaningful only in the context of the foreseen or foreseeable consequences of deliberate decisions taken by responsible individual agents. Social justice, because it refers to the outcomes of complex social processes that no one is deliberately aiming at, is not a meaningful concept.

Because Rawls argued in favor of the difference principle, which says that unequal outcomes are only justifiable insofar as they promote the absolute (though not the relative) well-being of the least well-off individuals in society, most libertarians, including famously Robert Nozick whose book Anarchy, State and Utopia was a kind of rejoinder to Rawls’s book A Theory of Justice, viewed Rawls as an ideological opponent.

Hayek, however, had a very different take on Rawls. At the end of his preface to volume 2, explaining why he had not discussed various recent philosophical contributions on the subject of social justice, Hayek wrote:

[A]fter careful consideration I have come to the conclusion that what I might have to say about John Rawls’ A theory of Justice would not assist in the pursuit of my immediate object because the differences between us seemed more verbal than substantial. Though the first impression of readers may be different, Rawls’ statement which I quote later in this volume (p. 100) seems to me to show that we agree on what is to me the essential point. Indeed, as I indicate in a note to that passage, it appears to me that Rawls has been widely misunderstood on this central issue. (pp. xii-xiii)

Here is what Hayek says about Rawls in the cited passage.

Before leaving this subject I want to point out once more that the recognition that in such combinations as “social”, “economic”, “distributive”, or “retributive” justice the term “justice” is wholly empty should not lead us to throw the baby out with the bath water. Not only as the basis of the legal rules of just conduct is the justice which the courts of justice administer exceedingly important; there unquestionably also exists a genuine problem of justice in connection with the deliberate design of political institutions, the problem to which Professor John Rawls has recently devoted an important book. The fact which I regret and regard as confusing is merely that in this connection he employs the term “social justice”. But I have no basic quarrel with an author who, before he proceeds to that problem, acknowledges that the task of selecting specific systems or distributions of desired things as just must be “abandoned as mistaken in principle and it is, in any case, not capable of a definite answer. Rather, the principles of justice define the crucial constraints which institutions and joint activities must satisfy if persons engaging in them are to have no complaints against them. If these constraints are satisfied, the resulting distribution, whatever it is, may be accepted as just (or at least not unjust).” This is more or less what I have been trying to argue in this chapter.

In the footnote at the end of the quotation, Hayek cites the source from which he takes the quotation and then continues:

John Rawls, “Constitutional Liberty and the Concept of Justice,” Nomos IV, Justice (New York, 1963), p. 102, where the passage quoted is preceded by the statement that “It is the system of institutions which has to be judged and judged from a general point of view.” I am not aware that Professor Rawls’ later more widely read work A Theory of Justice contains a comparatively clear statement of the main point, which may explain why this work seems often, but as it appears to me wrongly, to have been interpreted as lending support to socialist demands, e.g., by Daniel Bell, “On Meritocracy and Equality”, Public Interest, Autumn 1972, p. 72, who describes Rawls’ theory as “the most comprehensive effort in modern philosophy to justify a socialist ethic.”

My Paper (with Sean Sullivan) on Defining Relevant Antitrust Markets Now Available on SSRN

Antitrust aficionados may want to have a look at this new paper (“The Logic of Market Definition”) that I have co-authored with Sean Sullivan of the University of Iowa School of Law about defining relevant antitrust markets. The paper is now posted on SSRN.

Here is the abstract:

Despite the voluminous commentary that the topic has attracted in recent years, much confusion still surrounds the proper definition of antitrust markets. This paper seeks to clarify market definition, partly by explaining what should not factor into the exercise. Specifically, we identify and describe three common errors in how courts and advocates approach market definition. The first error is what we call the natural market fallacy: the mistake of treating market boundaries as preexisting features of competition, rather than the purely conceptual abstractions of a particular analytical process. The second is the independent market fallacy: the failure to recognize that antitrust markets must always be defined to reflect a theory of harm, and do not exist independent of a theory of harm. The third is the single market fallacy: the tendency of courts and advocates to seek some single, best relevant market, when in reality there will typically be many relevant markets, all of which could be appropriately drawn to aid in competitive effects analysis. In the process of dispelling these common fallacies, this paper offers a clarifying framework for understanding the fundamental logic of market definition.

Martin Wolf Reviews Adam Tooze on the 2008 Financial Crisis

The eminent Martin Wolf, a fine economist and the foremost financial journalist of his generation, has written an admiring review of a new book (Crashed: How a Decade of Financial Crises Changed the World) about the financial crisis of 2008 and the ensuing decade of aftershocks, turmoil, and upheaval by the distinguished historian Adam Tooze. This is not the first time I have written a post commenting on a review of a book by Tooze; in 2015, I wrote a post about David Frum’s review of Tooze’s book on World War I and its aftermath (Deluge: The Great War, America and the Remaking of the World Order 1916-1931). No need to dwell on the obvious similarities between these two impressive volumes.

Let me admit at the outset that I haven’t read either book. Unquestionably my loss, but I hope at some point to redeem myself by reading both of them. But in this post I don’t intend to comment at length about Tooze’s argument. Judging from Martin Wolf’s review, I fully expect that I will agree with most of what Tooze has to say about the crisis.

My criticism – and I hesitate even to use that word – will be directed toward what, judging from Wolf’s review, Tooze seems to have left out of his book. I am referring to the role of tight monetary policy, motivated by an excessive concern with inflation, when what was causing inflation was a persistent rise in energy and commodity prices that had little to do with monetary policy. Certainly, the failure to fully understand the role of monetary policy during the 2006 to 2008 period in the run-up to the financial crisis doesn’t negate all the excellent qualities that the book undoubtedly has; nevertheless, leaving out that essential part of the story is like watching Hamlet without the prince.

Let me just offer a few examples from Wolf’s review. Early in the review, Wolf provides a clear overview of the nature of the crisis, its scope and the response.

As Tooze explains, the book examines “the struggle to contain the crisis in three interlocking zones of deep private financial integration: the transatlantic dollar-based financial system, the eurozone and the post-Soviet sphere of eastern Europe”. This implosion “entangled both public and private finances in a doom loop”. The failures of banks forced “scandalous government intervention to rescue private oligopolists”. The Federal Reserve even acted to provide liquidity to banks in other countries.

Such a huge crisis, Tooze points out, has inevitably deeply affected international affairs: relations between Germany and Greece, the UK and the eurozone, the US and the EU and the west and Russia were all affected. In all, he adds, the challenges were “mind-bogglingly technical and complex. They were vast in scale. They were fast moving. Between 2007 and 2012, the pressure was relentless.”

Tooze concludes this description of events with the judgment that “In its own terms, . . . the response patched together by the US Treasury and the Fed was remarkably successful.” Yet the success of these technocrats, first with support from the Democratic Congress at the end of the administration of George W Bush, and then under a Democratic president, brought the Democrats no political benefits.

This is all very insightful and I have no quarrel with any of it. But it mentions not a word about the role of monetary policy. Last month I wrote a post about the implications of a flat or inverted yield curve. The yield curve usually has an upward slope because short-term interest rates tend to be lower than long-term rates. Over the past year the yield curve has been steadily flattening as short-term rates have been increasing while long-term rates have risen only slightly if at all. Many analysts are voicing concern that the yield curve may go flat or become inverted once again. And one reason that they worry is that the last time the yield curve became flat was in 2006. Here’s how I described what happened to the yield curve in 2006 after the Fed started mechanically raising its Fed Funds target interest rate by 25 basis points every 6 weeks starting in June 2004.

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Despite the weakening economy, the Fed remained focused primarily on inflation. The Fed did begin cutting its Fed Funds target from 5.25% in late 2007 once the downturn began, but it was reluctant to move aggressively to counter a recession that worsened rapidly in the spring and summer of 2008, because it remained fixated on headline inflation, which was consistently higher than the Fed’s 2% target. But inflation was staying above the 2% target simply because of an ongoing supply shock that began in early 2006, when the price of oil was just over $50 a barrel; the price rose steadily, with a short dip in late 2006 and early 2007, climbing above $100 a barrel in the summer of 2007 and peaking at over $140 a barrel in July 2008.

The mistake of tightening monetary policy in response to a supply shock in the midst of a recession would have been egregious under any circumstances, but in the context of a seriously weakened and fragile financial system, the mistake was simply calamitous. And, indeed, the calamitous consequences of that decision are plain. But somehow the connection between the Fed’s focus on inflation, at a time when the economy was contracting and the financial system was in peril, and the calamity that followed has never been fully recognized by most observers, and certainly not by the Federal Reserve officials who made those decisions. A few paragraphs later, Wolf observes:

Furthermore, because the banking systems had become so huge and intertwined, this became, in the words of Ben Bernanke — Fed chairman throughout the worst days of the crisis and a noted academic expert — the “worst financial crisis in global history, including the Great Depression”. The fact that the people who had been running the system had so little notion of these risks inevitably destroyed their claim to competence and, for some, even probity.

I will not agree or disagree with Bernanke that the 2008 crisis was worse than the crises of 1929-30, 1931, or 1933, but it appears that Fed officials still have not fully understood their own role in precipitating the crisis. That is a story that remains to be told. I hope we don’t have to wait too much longer.

Who’s Afraid of a Flattening Yield Curve?

Last week the Fed again raised its benchmark Federal Funds rate target, now at 2%, up from the 0.25% rate that had been maintained steadily from late 2008 until late 2015, when the Fed, after a few false starts, finally worked up the courage — or caved to the pressure of the banks and the financial community — to start raising rates. The Fed also signaled its intention last week to continue raising rates – presumably at 0.25% increments – at least twice more this calendar year.

Some commentators have worried that rising short-term interest rates are outpacing increases at the longer end, so that the normally positively-sloped yield curve is flattening. They point out that historically flat or inverted yield curves have often presaged an economic downturn or recession within a year.

What accounts for the normally positive slope of the yield curve? It’s usually attributed to the increased risk associated with a lengthening of the duration of a financial instrument, even if default risk is zero. The longer the duration of a financial instrument, the more sensitive the (resale) value of the instrument to changes in the rate of interest. Because risk falls as the duration of the instrument is shortened, risk-averse asset-holders are willing to accept a lower return on short-dated claims than on riskier long-dated claims.
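The duration point can be made concrete with a toy calculation. The sketch below prices zero-coupon claims (the instruments and yields are hypothetical, chosen only for illustration) at a 5% yield and again at 6%, showing that the long-dated claim loses far more resale value from the same rate rise:

```python
# Price a zero-coupon claim per $100 of face value at annual yield y,
# for a maturity measured in years (annual compounding for simplicity).
def zero_coupon_price(face, y, maturity):
    return face / (1 + y) ** maturity

for maturity in (0.25, 2, 10):  # 3-month, 2-year, and 10-year claims
    p_before = zero_coupon_price(100, 0.05, maturity)  # priced at a 5% yield
    p_after = zero_coupon_price(100, 0.06, maturity)   # repriced after yields rise to 6%
    drop = 100 * (p_before - p_after) / p_before
    print(f"{maturity:>5}-year claim: resale value falls {drop:.2f}%")
```

On these assumed numbers, the 3-month claim loses only about a quarter of a percent of its value when yields rise a point, while the 10-year claim loses roughly 9%, which is why risk-averse holders demand a premium on long-dated claims.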

If the Fed continues on its current course, it’s likely that the yield curve will flatten or become inverted – sloping downward instead of upward – a phenomenon that has frequently presaged recessions within about a year. So the question I want to think through in this post is whether there is anything inherently recessionary about a flat or inverted yield curve, or is the correlation between recessions and inverted yield curves merely coincidental?

The beginning of wisdom in this discussion is the advice of Scott Sumner: never reason from a price change. A change in the slope of the yield curve reflects a change in price relationships. Any given change in price relationships can reflect a variety of possible causes, and the ultimate effects of those various underlying causes need not be the same, even when they all produce, say, an inverted yield curve. So, we can’t take it for granted that all yield-curve inversions are created equal; just because yield-curve inversions have sometimes, or usually, or always, preceded recessions doesn’t mean that recessions must necessarily follow once the yield curve becomes inverted.

Let’s try to sort out some of the possible causes of an inverted yield curve, and see whether those causes are likely to result in a recession if the yield curve remains flat or inverted for a substantial period of time. But it’s also important to realize that the shape of the yield curve reflects a myriad of possible causes in a complex economic system. The yield curve summarizes expectations about the future that are deeply intertwined in the intertemporal structure of an economic system. Interest rates aren’t simply prices determined in specific markets for debt instruments of various durations; interest rates reflect the opportunities to exchange current goods for future goods or to transform current output into future output. Interest rates are actually distillations of relationships between current prices and expected future prices that govern the prices and implied yields at which debt instruments are bought and sold. If the interest rates on debt instruments are out of line with the intricate web of intertemporal price relationships that exist in any complex economy, those discrepancies imply profitable opportunities for exchange and production that tend to eliminate those discrepancies. Interest rates are not set in a vacuum; they are a reflection of innumerable asset valuations and investment opportunities. So there are potentially innumerable possible causes that could lead to the flattening or inversion of the yield curve.

For purposes of this discussion, however, I will focus on just two factors that, in an ultra-simplified partial-equilibrium setting, seem most likely to cause a normally upward-sloping yield curve to become relatively flat or even inverted. These two factors affecting the slope of the yield curve are the demand for liquidity and the supply of liquidity.

An increase in the demand for liquidity manifests itself in reduced current spending to conserve liquidity and by an increase in the demands of the public on the banking system for credit. But even as reduced spending improves the liquidity position of those trying to conserve liquidity, it correspondingly worsens the liquidity position of those whose revenues are reduced, the reduced spending of some necessarily reducing the revenues of others. So, ultimately, an increase in the demand for liquidity can be met only by (a) the banking system, which is uniquely positioned to create liquidity by accepting the illiquid IOUs of the private sector in exchange for the highly liquid IOUs (cash or deposits) that the banking system can create, or (b) by the discretionary action of a monetary authority that can issue additional units of fiat currency.

Let’s consider first what would happen in case of an increased demand for liquidity by the public. Such an increased demand could have two possible causes. (There might be others, of course, but these two seem fairly commonplace.)

First, the price expectations on the basis of which one or more significant sectors of the economy have made investments have turned out to be overly optimistic (or, alternatively, investments were made on overly optimistic expectations of low input prices). Given the commitments made on the basis of optimistic expectations, it then turns out that realized sales or revenues fall short of what those firms required to service their debt obligations. Thus, to service their debt obligations, firms may seek short-term loans to cover the shortfall in earnings relative to expectations. Potential lenders, including the banking system, who may already be holding the debt of such firms, must then decide whether to continue extending credit to these firms in hopes that prices will rebound to what they had been expected to be (or that borrowers will be able to cut costs sufficiently to survive if prices don’t recover), or to cut their losses by ceasing to lend further.

The short-run demand for credit will tend to raise short-term rates relative to long-term rates, causing the yield curve to flatten. And the more serious the short-term need for liquidity, the flatter or more inverted the yield curve becomes. In such a period of financial stress, the potential for significant failures of firms that can’t service their financial obligations is an indication that an economic downturn or a recession is likely, so that the extent to which the yield curve flattens or becomes inverted is a measure of the likelihood that a downturn is in the offing.

Aside from sectoral problems affecting particular industries or groups of industries, the demand for liquidity might increase owing to a generalized increase in uncertainty that causes entrepreneurs to hold back from making investments (dampens animal spirits). This is often a response during and immediately following a recession, when the current state of economic activity and uncertainty about its future state discourages entrepreneurs from making investments whose profitability depends on the magnitude and scope of the future recovery. In that case, an increasing demand for liquidity causes firms to hoard their profits as cash rather than undertake new investments, because expected demand is not sufficient to justify commitments that would be remunerative only if future demand exceeds some threshold. Such a flattening of the yield curve can be mitigated if the monetary authority makes liquidity cheaply available by cutting short-term rates to very low levels or even to zero, as the Fed did when it adopted its quantitative easing policies after the 2008-09 downturn, thereby supporting a recovery, a modest one to be sure, but still a stronger recovery than occurred in Europe after the European Central Bank prematurely raised short-term interest rates.

Such an episode occurred in 2002-03, after the 9-11 attack on the US. The American economy had entered a recession in early 2001, partly as a result of the bursting of the dotcom bubble of the late 1990s. The recession was short and mild, and the large tax cut enacted by Congress at the behest of the Bush administration in June 2001 was expected to provide significant economic stimulus to promote recovery. However, it soon became clear that, besides the limited US attack on Afghanistan to unseat the Taliban regime and to kill or capture the Al Qaeda leadership in Afghanistan, the Bush Administration was planning for a much more ambitious military operation to effect regime change in Iraq and perhaps even in other neighboring countries in hopes of radically transforming the political landscape of the Middle East. The grandiose ambitions of the Bush administration and the likelihood that a major war of unknown scope and duration with unpredictable consequences might well begin sometime in early 2003 created a general feeling of apprehension and uncertainty that discouraged businesses from making significant new commitments until the war plans of the Administration were clarified and executed and their consequences assessed.

Gauging the unusual increase in the demand for liquidity in 2002 and 2003, the Fed reduced short-term rates to accommodate increasing demands for liquidity, even as the economy entered into a weak expansion and recovery. Given the unusual increase in the demand for liquidity, the accommodative stance of the Fed and the reduction in the Fed Funds target to an unusually low level of 1% had no inflationary effect, but merely cushioned the economy against a relapse into recession. The weakness of the recovery is reflected in the modest rate of increase in nominal spending, averaging about 3.9%, and not exceeding 5.1% in any of the seven quarters from 2001-IV when the recession ended until 2003-II when the Saddam Hussein regime was toppled.

Quarter     % change in NGDP
2001-IV     2.34%
2002-I      5.07%
2002-II     3.76%
2002-III    3.80%
2002-IV     2.44%
2003-I      4.63%
2003-II     5.10%
2003-III    9.26%
2003-IV     6.76%
2004-I      5.94%
2004-II     6.60%
2004-III    6.26%
2004-IV     6.44%
2005-I      8.25%
2005-II     5.10%
2005-III    7.33%
2005-IV     5.44%
2006-I      8.23%
2006-II     4.50%
2006-III    3.19%
2006-IV     4.62%
2007-I      4.83%
2007-II     5.42%
2007-III    4.15%
2007-IV     3.21%
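As a quick arithmetic check on the averages discussed around this table, here is a minimal sketch that takes simple means of the annualized quarterly rates (note: a simple mean is not the same as a compound annual rate, so small discrepancies with the figures in the text are to be expected):

```python
# Quarterly annualized NGDP growth rates from the table above.
ngdp = {
    "2001-IV": 2.34, "2002-I": 5.07, "2002-II": 3.76, "2002-III": 3.80,
    "2002-IV": 2.44, "2003-I": 4.63, "2003-II": 5.10, "2003-III": 9.26,
    "2003-IV": 6.76, "2004-I": 5.94, "2004-II": 6.60, "2004-III": 6.26,
    "2004-IV": 6.44, "2005-I": 8.25, "2005-II": 5.10, "2005-III": 7.33,
    "2005-IV": 5.44, "2006-I": 8.23,
}

def mean_growth(quarters):
    """Arithmetic mean of the annualized quarterly growth rates."""
    return sum(ngdp[q] for q in quarters) / len(quarters)

# Weak recovery: the seven quarters from 2001-IV through 2003-II.
recovery = ["2001-IV", "2002-I", "2002-II", "2002-III",
            "2002-IV", "2003-I", "2003-II"]
# Quickening expansion: the eleven quarters from 2003-III through 2006-I.
expansion = ["2003-III", "2003-IV", "2004-I", "2004-II", "2004-III", "2004-IV",
             "2005-I", "2005-II", "2005-III", "2005-IV", "2006-I"]

print(f"2001-IV to 2003-II: {mean_growth(recovery):.2f}%")
print(f"2003-III to 2006-I: {mean_growth(expansion):.2f}%")
```

The first mean comes out to about 3.9%, matching the figure cited above for the recovery; the second comes out near 6.9%, slightly above the 6.8% annual rate cited below, presumably because that figure reflects compounding rather than a simple average.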

The apparent success of the American invasion in the second quarter of 2003 was matched by a quickening expansion from 2003-III through 2006-I, nominal GDP increasing at a 6.8% annual rate over those 11 quarters. As the economy recovered, and spending began increasing rapidly, the Fed gradually raised its Fed Funds target by 25 basis points about every six weeks starting at the end of June 2004, so that in early 2006, the Fed Funds target rate reached 4.25%, peaking at 5.25% in July 2006, where it remained till September 2007. By February 2006, the yield on 3-month Treasury bills reached the yield on 10-year Treasuries, so that the yield curve had become essentially flat, remaining so until October 2008, soon after the start of the financial crisis. Indeed, for most of 2006 and 2007, the Fed Funds target was above the yield on three-month Treasury bills, implying a slight inversion at the short-end of the yield curve, suggesting that the Fed was exacting a slight liquidity surcharge on overnight reserves and that there was a market expectation that the Fed Funds target would be reduced from its 5.25% peak.

The Fed was probably tardy in waiting until June 2004 to begin increasing its Fed Funds target, nominal spending having increased in 2003-III at an annual rate above 9%, and in the next three quarters at an average annual rate of about 6.5%. In 2005, while the Fed was in auto-pilot mode, automatically raising its Fed Funds target 25 basis points every six weeks, nominal spending continued to increase at a roughly 6% annual rate, the increases becoming slightly more erratic, fluctuating between 5.1% and 8.3%. But by the second quarter of 2006, when the Fed Funds target rose to 5%, the rate of increase in spending slowed to an average of just over 4%, and to just under 5% in the first three quarters of 2007.

While the rate of increase in spending slowed to less than 5% in the second quarter of 2006, as the yield curve flattened and the Fed Funds target peaked at 5.25%, housing prices also peaked, and concerns about financial stability started to be voiced. The chart below shows the yields on 10-year constant maturity Treasuries and the yield on 3-month Treasury bills, the two key market rates at opposite ends of the yield curve.

The yields on the two instruments became nearly equal in early 2006, and, with slight variations, remained so till the onset of the financial crisis in September 2008. In retrospect, at least, the continued increases in the Fed Funds rate target seem to have been extremely ill-advised, perhaps triggering the downturn that started at the end of 2007, and leading nine months later to the financial crisis of 2008.
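The flat-curve condition being described is easy to operationalize: compute the 10-year minus 3-month spread and flag periods where it falls below some small threshold. A minimal sketch, with made-up illustrative yields (not the actual series shown in the chart) and an assumed 0.25-point threshold for "flat":

```python
# Classify the slope of the yield curve from the 10-year/3-month spread.
def classify(ten_year, three_month, flat_threshold=0.25):
    spread = ten_year - three_month
    if spread < 0:
        return "inverted"
    if spread < flat_threshold:
        return "flat"
    return "upward-sloping"

observations = [          # (period, 10-year yield %, 3-month yield %) -- illustrative only
    ("2005-Q4", 4.4, 3.8),
    ("2006-Q1", 4.6, 4.5),
    ("2006-Q4", 4.6, 4.7),
    ("2007-Q2", 5.0, 4.8),
]

for period, ten_year, three_month in observations:
    spread = ten_year - three_month
    print(f"{period}: spread {spread:+.2f} -> {classify(ten_year, three_month)}")
```

With a threshold like this, the slight short-end inversion described above (the 3-month bill yielding a bit more than the 10-year note) shows up as "inverted" even though the spread is only a tenth of a point.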

The Fed having put itself on autopilot, the yield curve became flat or even slightly inverted in early 2006, implying that a substantial liquidity premium had to be absorbed in order to keep cash on hand to meet debt obligations. By the second quarter of 2006, insufficient liquidity caused the growth in total spending to slow, just when housing prices were peaking, a development that intensified the stresses on the financial system, further increasing the demand for liquidity. Despite the high liquidity premium and flat yield curve, total spending continued to increase modestly through 2006 and most of 2007. But after stock prices dropped in August 2007 and home prices continued to slide, growth in total spending slowed further at the end of 2007, and the downturn began.

Responding to signs of economic weakness and falling long-term rates, the Fed did lower its Fed Funds target late in 2007, cutting the Fed Funds target several more times in early 2008. In May 2008, the Fed reduced the target to 2%, but the yield curve remained flat, because the Fed, consistently underestimating the severity of the downturn, kept signaling its concern with inflation, thereby suggesting that an increase in the target might be in the offing. So, even as it reduced its Fed Funds target, the Fed kept the yield curve nearly flat until, and even after, the start of the financial crisis in September 2008, thereby maintaining an excessive liquidity premium while the demand for liquidity was intensifying as total spending contracted rapidly in the third quarter of 2008.

To summarize this discussion of the liquidity premium and the yield curve during the 2001-08 period, the Fed appropriately steepened the yield curve right after the 2001 recession and the 9/11 attacks, but was slow to normalize the slope of the yield curve after the US invasion of Iraq in the second quarter of 2003. When it did begin to normalize the yield curve in a series of automatic 25-basis-point increases in its Fed Funds target rate, the Fed was again slow to reassess the effects of the policy as the yield curve flattened in 2006. Thus, by 2006, the Fed had effectively implemented a tight monetary policy in the face of rising demands for liquidity just as the bursting of the housing bubble in mid-2006 began to subject the financial system to steadily increasing stress. The implications of a flat or slightly inverted yield curve were ignored or dismissed by the Fed for at least two years, until after the financial panic and crisis in September 2008.

At the beginning of the 2001-08 period, the Fed seemed to be aware that an unusual demand for liquidity justified a policy response to increase the supply of liquidity by reducing the Fed Funds target and steepening the yield curve. But, at the end of the period, the Fed was unwilling to respond to increasing demands for liquidity and instead allowed a flat yield curve to remain in place even when the increasing demand for liquidity was causing a slowdown in aggregate spending growth. One possible reason for the asymmetric response of the Fed to increasing liquidity demands in 2002 and 2006 is that the Fed was sensitive to criticism that, by holding short-term rates too low for too long, it had promoted and prolonged the housing bubble. Even if the criticism contained some element of truth, the Fed’s refusal to respond to increasing demands for liquidity in 2006 was tragically misguided.

The current Fed’s tentative plan to keep increasing the Fed Funds target seems less unreflective than the nearly mindless schedule followed by the Fed from mid-2004 to mid-2006. However, the Fed is playing a weaker hand now than it did in 2004. Nominal GDP has been increasing at a very lackluster annual rate of about 4-4.5% for the past two years. Certainly, further increases in the Fed Funds target would not be warranted if the rate of growth in nominal GDP is any less than 4% or if the yield curve should flatten for some other reason like a decline in interest rates at the longer end of the yield curve. Caution, possible inversion ahead.

Milton Friedman and the Phillips Curve

In December 1967, Milton Friedman delivered his Presidential Address to the American Economic Association in Washington DC. In those days the AEA met in the week between Christmas and New Year’s, in contrast to the more recent practice of holding the convention in the week after New Year’s. That’s why the anniversary of Friedman’s 1967 address was celebrated at the 2018 AEA convention. A special session was dedicated to commemoration of that famous address, published in the March 1968 American Economic Review, and fittingly one of the papers at the session was presented by the outgoing AEA president Olivier Blanchard. Other papers were written by Thomas Sargent and Robert Hall, and by Greg Mankiw and Ricardo Reis. The papers were discussed by Lawrence Summers, Eric Nakamura, and Stanley Fischer. An all-star cast.

Maybe in a future post, I will comment on the papers presented in the Friedman session, but in this post I want to discuss a point that has been generally overlooked, not only in the three “golden” anniversary papers on Friedman and the Phillips Curve, but, as best as I can recall, in all the commentaries I’ve seen about Friedman and the Phillips Curve. The key point to understand about Friedman’s address is that his argument was basically an extension of the idea of monetary neutrality, which says that the real equilibrium of an economy corresponds to a set of relative prices that allows all agents simultaneously to execute their optimal desired purchases and sales conditioned on those relative prices. So it is only relative prices, not absolute prices, that matter. Taking an economy in equilibrium, if you were suddenly to double all prices, relative prices remaining unchanged, the equilibrium would be preserved and the economy would proceed exactly – and optimally – as before as if nothing had changed. (There are some complications about what is happening to the quantity of money in this thought experiment that I am skipping over.) On the other hand, if you change just a single price, not only would the market in which that price is determined be disequilibrated, at least one, and potentially more than one, other market would be disequilibrated. The point here is that the real economy rules, and equilibrium in the real economy depends on relative, not absolute, prices.

What Friedman did was to argue that if money is neutral with respect to changes in the price level, it should also be neutral with respect to changes in the rate of inflation. The idea that you can wring some extra output and employment out of the economy just by choosing to increase the rate of inflation goes against the grain of two basic principles: (1) monetary neutrality (i.e., the real equilibrium of the economy is determined solely by real factors) and (2) Friedman’s famous non-existence (of a free lunch) theorem. In other words, you can’t make the economy as a whole better off just by printing money.

Or can you?

Actually you can, and Friedman himself understood that you can, but he argued that the possibility of making the economy as a whole better off (in the sense of increasing total output and employment) depends crucially on whether inflation is expected or unexpected. Only if inflation is not expected does it serve to increase output and employment. If inflation is correctly expected, the neutrality principle reasserts itself, so that output and employment are no different from what they would have been had prices not changed.

What that means is that policy makers (monetary authorities) can cause output and employment to increase by inflating the currency, as implied by the downward-sloping Phillips Curve, but that simply reflects that actual inflation exceeds expected inflation. And, sure, the monetary authorities can always surprise the public by raising the rate of inflation above the rate expected by the public, but that doesn’t mean that the public can be perpetually fooled by a monetary authority determined to keep inflation higher than expected. If that is the strategy of the monetary authorities, it will lead, sooner or later, to a very unpleasant outcome.

So, in any time period – the length of the time period corresponding to the time during which expectations are given – the short-run Phillips Curve for that time period is downward-sloping. But given the futility of perpetually delivering higher than expected inflation, the long-run Phillips Curve from the point of view of the monetary authorities trying to devise a sustainable policy must be essentially vertical.
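The short-run/long-run distinction can be sketched in a toy simulation with adaptive expectations. Everything here is an illustrative assumption, not something from Friedman's address: the surprise coefficient, the adjustment speed, and the 4% constant-inflation policy are all made up to show the mechanism.

```python
# Toy expectations-augmented Phillips curve (all parameters assumed):
#   output_gap = k * (actual inflation - expected inflation)
# with adaptive expectations:
#   expected += a * (actual - expected)
k, a = 2.0, 0.5      # assumed surprise coefficient and expectation-adjustment speed
expected = 0.0       # the public initially expects zero inflation
inflation = 4.0      # the authority holds inflation at a constant 4%

gaps = []
for t in range(20):
    gaps.append(k * (inflation - expected))  # only the inflation surprise raises output
    expected += a * (inflation - expected)   # expectations catch up with experience

print(f"gap in period 0:  {gaps[0]:.2f}")    # large: the inflation is unexpected
print(f"gap in period 19: {gaps[-1]:.6f}")   # nearly zero: inflation fully expected
```

In this sketch the output gap starts large and shrinks toward zero as expectations catch up: a downward-sloping short-run trade-off for each given state of expectations, but a vertical long-run curve once a constant inflation rate is fully anticipated, which is exactly the logic of the vertical long-run Phillips Curve.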

Two quick parenthetical remarks. First, Friedman’s argument was far from original. Many critics of Keynesian policies had made similar arguments; the names Hayek, Haberler, Mises and Viner come immediately to mind, but the list could easily be lengthened. The earliest version of the argument of which I am aware is Hayek’s 1934 reply in Econometrica to Alvin Hansen and Herbert Tout, whose 1933 review of recent business-cycle literature in that journal had criticized Hayek’s assertion that a monetary expansion financing investment spending in excess of voluntary savings would be unsustainable. They pointed out that there was nothing to prevent the monetary authority from continuing to create money, thereby continually financing investment in excess of voluntary savings. Hayek’s reply was that a constant rate of monetary expansion would not suffice to permanently finance investment in excess of savings, because once that monetary expansion was expected, prices would adjust so that, in real terms, the constant flow of new money would finance no more investment than had been undertaken before the first, unexpected round of monetary expansion. To maintain a rate of investment permanently in excess of voluntary savings would require progressively increasing rates of monetary expansion over and above the expected rate, which would sooner or later prove unsustainable. The gist of the argument, more than three decades before Friedman’s 1967 Presidential address, was exactly the same as Friedman’s.

A further aside: what Hayek failed to see in making this argument was that he was thereby refuting his own argument in Prices and Production that only a constant rate of total expenditure and total income is consistent with maintaining a real equilibrium in which voluntary saving and planned investment are equal. Obviously, any rate of monetary expansion, if correctly foreseen, would be consistent with a real equilibrium in which saving equals investment.

My second remark is to note the ambiguous meaning of the short-run Phillips Curve relationship. The underlying causal relationship reflected in the negative correlation between inflation and unemployment can be understood either as increases in inflation causing unemployment to go down, or as increases in unemployment causing inflation to go down. Undoubtedly the causality runs in both directions, but subtle differences in the understanding of the causal mechanism can lead to very different policy implications. The Keynesian understanding is usually that the causality runs from unemployment to inflation, while a more monetarist understanding treats inflation as a policy instrument that determines, at least directionally (with expected inflation treated as a parameter), the short-run change in the rate of unemployment.

Now here is the main point that I want to make in this post. The standard interpretation of the Friedman argument is that since attempts to increase output and employment by monetary expansion are futile, the best policy for a monetary authority to pursue is a stable and predictable one that keeps the economy at or near the optimal long-run growth path that is determined by real – not monetary – factors. Thus, the best policy is to find a clear and predictable rule for how the monetary authority will behave, so that monetary mismanagement doesn’t inadvertently become a destabilizing force causing the economy to deviate from its optimal growth path. In the 50 years since Friedman’s address, this message has been taken to heart by monetary economists and monetary authorities, leading to a broad consensus in favor of inflation targeting with the target now almost always set at 2% annual inflation. (I leave aside for now the tricky question of what a clear and predictable monetary rule would look like.)

But this interpretation, clearly the one that Friedman himself drew from his argument, doesn’t actually follow from the argument that monetary expansion can’t affect the long-run equilibrium growth path of an economy. The monetary neutrality argument, being a pure comparative-statics exercise, assumes that an economy, starting from a position of equilibrium, is subjected to a parametric change (either in the quantity of money or in the price level) and then asks what the new equilibrium of the economy will look like. The answer is: it will look exactly like the prior equilibrium, except that, with twice as much money as previously, the price level will be twice as high, relative prices remaining unchanged. The same sort of reasoning, with appropriate adjustments, can show that changing the expected rate of inflation will have no effect on the real equilibrium of the economy, with only the rate of inflation and the rate of monetary expansion affected.

This comparative-statics exercise teaches us something, but not as much as Friedman and his followers thought. True, you can’t get more out of the economy – at least not for very long – than its real equilibrium will generate. But what if the economy is not operating at its real equilibrium? Even Friedman didn’t believe that the economy always operates at its real equilibrium. Just read his Monetary History of the United States. Real-business cycle theorists do believe that the economy always operates at its real equilibrium, but they, unlike Friedman, think monetary policy is useless, so we can forget about them — at least for purposes of this discussion. So if we have reason to think that the economy is falling short of its real equilibrium, as almost all of us believe that it sometimes does, why should we assume that monetary policy cannot nudge the economy in the direction of its real equilibrium?

The answer to that question is not so obvious, but one answer might be that if you use monetary policy to move the economy toward its real equilibrium, you might make mistakes sometimes and overshoot the real equilibrium and then bad stuff would happen and inflation would run out of control, and confidence in the currency would be shattered, and you would find yourself in a re-run of the horrible 1970s. I get that argument, and it is not totally without merit, but I wouldn’t characterize it as overly compelling. On a list of compelling arguments, I would put it just above, or possibly just below, the domino theory on the basis of which the US fought the Vietnam War.

But even if the argument is not overly compelling, it should not be dismissed entirely, so here is a way of taking it into account. Just for fun, I will call it a Taylor Rule for the Inflation Target (IT). Let us assume that the long-run inflation target is 2% and let (Y − Y*) be the output gap between current real GDP and potential GDP (i.e., the GDP corresponding to the real equilibrium of the economy). We could then define the following Taylor Rule for the inflation target:

IT = α(2%) − β((Y − Y*)/Y*).

This equation says that the inflation target in any period is a linear combination of the default Inflation Target of 2%, multiplied by an adjustment coefficient α designed to keep successively chosen Inflation Targets from deviating from the long-term price-level path corresponding to 2% annual inflation, and some fraction β of the output gap expressed as a percentage of potential GDP, with a negative gap raising the target. Thus, for example, if the output gap were −5% and β were 0.5, the short-term Inflation Target would be raised to 4.5% if α were 1.

However, if output gaps are on average expected to be negative, then α would have to be chosen to be less than 1 in order for the actual time path of the price level to revert to a target price-level path corresponding to a 2% annual rate.
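A minimal sketch of this rule in Python; the function name and the illustrative numbers are my own, and the sign convention assumed is that a negative output gap (output below potential) raises the target above the 2% default:

```python
def inflation_target(output_gap, alpha=1.0, beta=0.5, base=0.02):
    """Taylor-style rule for the short-term inflation target (IT):
    IT = alpha * base - beta * output_gap,
    where output_gap = (Y - Y*) / Y* is negative when output is below
    potential, so a shortfall of output raises the target."""
    return alpha * base - beta * output_gap

# A -5% output gap with alpha = 1 and beta = 0.5 raises the target to 4.5%;
# a zero gap leaves it at the 2% default.
print(round(inflation_target(-0.05), 4))   # 0.045
print(round(inflation_target(0.0), 4))     # 0.02
```

With α below 1, repeated negative gaps pull the average chosen target back toward the 2% price-level path, as the text describes.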

Such a procedure would fit well with the current dual inflation and employment mandate of the Federal Reserve. The long-term price level path would correspond to the price-stability mandate, while the adjustable short-term choice of the IT would correspond to and promote the goal of maximum employment by raising the inflation target when unemployment was high as a countercyclical policy for promoting recovery. But short-term changes in the IT would not be allowed to cause a long-term deviation of the price level from its target path. The dual mandate would ensure that relatively higher inflation in periods of high unemployment would be compensated for by periods of relatively low inflation in periods of low unemployment.

Alternatively, you could just target nominal GDP at a rate consistent with a long-run average 2% inflation target for the price level, with the target for nominal GDP adjusted over time as needed to ensure that the 2% average inflation target for the price level was also maintained.

Does Economic Theory Entail or Support Free-Market Ideology?

A few weeks ago, via Twitter, Beatrice Cherrier solicited responses to this query from Dina Pomeranz

It is a serious — and a disturbing — question, because it suggests that free-market ideology, which is a powerful — though not necessarily the most powerful — force in American right-wing politics, and probably more powerful in American politics than in the politics of any other country, is the result of how economics was taught in the 1970s and 1980s, and in the 1960s at UCLA, where I was an undergrad (AB 1970) and a graduate student (PhD 1977), and at Chicago.

In the 1950s, 1960s and early 1970s, free-market economics had been largely marginalized; Keynes and his successors were ascendant. But thanks to Milton Friedman and his compatriots at a few other institutions of higher learning, especially UCLA, the power of microeconomics (aka price theory) to explain a very broad range of economic and even non-economic phenomena was becoming increasingly appreciated by economists. A series of advances in economic theory on a number of fronts — economics of information, industrial organization and antitrust, law and economics, public choice, monetary economics and economic history — supported by the award of the Nobel Prize to Hayek in 1974 and Friedman in 1976, greatly elevated the status of free-market economics just as Margaret Thatcher and Ronald Reagan were coming into office in 1979 and 1981.

The growing prestige of free-market economics was used by Thatcher and Reagan to bolster the credibility of their policies, especially when the recessions caused by their determination to bring double-digit inflation down to about 4% annually – a reduction below 4% a year then being considered too extreme even for Thatcher and Reagan – were causing both to lose popular support. That prestige provided some intellectual credibility and weight to counter the barrage of criticism from their opponents, enabling both Thatcher and Reagan to use Friedman and Hayek, Nobel Prize winners with a popular fan base, as props and ornamentation under whose reflected intellectual glory they could take cover.

And so after George Stigler won the Nobel Prize in 1982, he was invited to the White House in hopes that, just in time, he would provide some additional intellectual star power for a beleaguered administration about to face the 1982 midterm elections with an unemployment rate over 10%. Famously sharp-tongued, and far less a team player than his colleague and friend Milton Friedman, Stigler refused to play his role as a prop and a spokesman for the administration when asked to meet reporters following his celebratory visit with the President, calling the 1981-82 downturn a “depression,” not a mere “recession,” and dismissing supply-side economics as “a slogan for packaging certain economic ideas rather than an orthodox economic category.” That Stiglerian outburst of candor brought the press conference to an unexpectedly rapid close, as the Nobel Prize winner was quickly ushered out of shouting range of White House reporters. On the whole, however, Republican politicians have not lacked for economists willing to lend authority and intellectual credibility to Republican policies and to proclaim allegiance to the proposition that the market is endowed with magical properties for creating wealth for the masses.

Free-market economics in the 1960s and 1970s made a difference by bringing to light the many ways in which letting markets operate freely, allowing output and consumption decisions to be guided by market prices, could improve outcomes for all people. A notable success of Reagan’s free-market agenda was lifting, within days of his inauguration, all controls on the prices of domestically produced crude oil and refined products, carryovers of the disastrous wage-and-price controls imposed by Nixon in 1971, but which, following OPEC’s quadrupling of oil prices in 1973, neither Nixon, Ford, nor Carter had dared to scrap. Despite a political consensus against lifting controls, a consensus endorsed, or at least not strongly opposed, by a surprisingly large number of economists, Reagan, following the advice of Friedman and other hard-core free-market advisers, lifted the controls anyway. The Iran-Iraq war having started just a few months earlier, the Saudi oil minister was predicting that the price of oil would soon rise from $40 to at least $50 a barrel, and there were few who questioned his prediction. One opponent described decontrol as writing a blank check to the oil companies and asking OPEC to fill in the amount. So the decision to decontrol oil prices was truly an act of some political courage, though it was then characterized as an act of blind ideological faith, or a craven sellout to Big Oil. But predictions of another round of skyrocketing oil prices, similar to the 1973-74 and 1978-79 episodes, were refuted almost immediately, international crude-oil prices falling steadily from $40/barrel in January to about $33/barrel in June.

Having only a marginal effect on domestic gasoline prices, via an implicit subsidy to imported crude oil, controls on domestic crude-oil prices were primarily a mechanism by which domestic refiners could extract a share of the rents that otherwise would have accrued to domestic crude-oil producers. Because additional crude-oil imports increased a domestic refiner’s allocation of “entitlements” to cheap domestic crude oil, thereby reducing the net cost of foreign crude oil below the price paid by the refiner, one overall effect of the controls was to subsidize the importation of crude oil, notwithstanding the goal loudly proclaimed by all the Presidents overseeing the controls: to achieve US “energy independence.” In addition to increasing the demand for imported crude oil, the controls reduced the elasticity of refiners’ demand for imported crude, controls and “entitlements” transforming a given change in the international price of crude into a reduced change in the net cost to domestic refiners of imported crude, thereby raising OPEC’s profit-maximizing price for crude oil. Once domestic crude oil prices were decontrolled, market forces led almost immediately to reductions in the international price of crude oil, so the coincidence of a fall in oil prices with Reagan’s decision to lift all price controls on crude oil was hardly accidental.

The decontrol of domestic petroleum prices was surely as pure a victory for, and vindication of, free-market economics as one could have ever hoped for [personal disclosure: I wrote a book for The Independent Institute, a free-market think tank, Politics, Prices and Petroleum, explaining in rather tedious detail many of the harmful effects of price controls on crude oil and refined products]. Unfortunately, the coincidence of free-market ideology with good policy is not necessarily as comprehensive as Friedman and his many acolytes, myself included, had assumed.

To be sure, price-fixing is almost always a bad idea, and attempts at price-fixing almost always turn out badly, providing lots of ammunition for critics of government intervention of all kinds. But the implicit assumption underlying the idea that freely determined market prices optimally guide the decentralized decisions of economic agents is that the private costs and benefits taken into account by economic agents in making and executing their plans about how much to buy and sell and produce closely correspond to the social costs and benefits that an omniscient central planner — if such a being actually did exist — would take into account in making his plans. In the real world, however, the private costs and benefits considered by individual agents often don’t reflect all relevant costs and benefits, so the presumption that market prices determined by the elemental forces of supply and demand always lead to the best possible outcomes is hardly ironclad. All of us — i.e., those of us who are not philosophical anarchists — acknowledge as much, in practice and in theory, when we affirm that competing private armies, competing private police forces, and competing judicial systems would not provide for the common defense and domestic tranquility more effectively than our national, state, and local governments, however imperfectly, provide those essential services. The only question is where and how to draw the ever-shifting lines between those decisions that are left mostly or entirely to the voluntary decisions and plans of private economic agents and those decisions that are subject to, and heavily — even mainly — influenced by, government rule-making, oversight, or intervention.

I didn’t fully appreciate how widespread and substantial these deviations of private costs and benefits from social costs and benefits can be even in well-ordered economies until early in my blogging career, when it occurred to me that the presumption underlying that central pillar of modern right-wing, free-market ideology – that reducing marginal income tax rates increases economic efficiency and promotes economic growth with little or no loss in tax revenue — implicitly assumes that all taxable private income corresponds to the output of goods and services whose private values and costs equal their social values and costs.

But one of my eminent UCLA professors, Jack Hirshleifer, showed that this presumption is subject to a huge caveat, because insofar as some people can earn income by exploiting their knowledge advantages over the counterparties with whom they trade, incentives are created to seek the kinds of knowledge that can be exploited in trades with less-well informed counterparties. The incentive to search for, and exploit, knowledge advantages implies excessive investment in the acquisition of exploitable knowledge, the private gain from acquiring such knowledge greatly exceeding the net gain to society from the acquisition of such knowledge, inasmuch as gains accruing to the exploiter are largely achieved at the expense of the knowledge-disadvantaged counterparties with whom they trade.

For example, substantial resources are now almost certainly wasted on various forms of financial research aimed at gaining, slightly sooner than others, information that would have been revealed in due course anyway, so that the better-informed traders can profit by trading with less knowledgeable counterparties. Similarly, the incentive to exploit knowledge advantages encourages the creation of financial products, and the structuring of other kinds of transactions, designed mainly to capitalize on individuals’ tendency to underestimate the probability of adverse events (e.g., late-repayment penalties, gambling losses when the house knows the odds better than most gamblers do). Even technical and inventive research encouraged by the potential to patent discoveries may induce too much research activity, by enabling patent-protected monopolies to exploit discoveries that would have been made eventually even without the monopoly rents accruing to the patent holders.

The list of examples of transactions that are profitable for one side only because the other side is less well-informed than, or even misled by, his counterparty could easily be multiplied. Because many, if not most, of the highest incomes earned are associated with activities whose private benefits are at least partially derived from losses to less well-informed counterparties, it is not a stretch to suspect that reducing marginal income tax rates may have shifted resources from activities in which private benefits and costs approximately equal social benefits and costs to more lucrative activities in which private benefits and costs differ greatly from social benefits and costs, the benefits being derived largely at the expense of losses to others.

Reducing marginal tax rates may therefore have simultaneously reduced economic efficiency, slowed economic growth and increased the inequality of income. I don’t deny that this hypothesis is largely speculative, but the speculative part is strictly about the magnitude, not the existence, of the effect. The underlying theory is completely straightforward.

So there is no logical necessity requiring that right-wing free-market ideological policy implications be inferred from orthodox economic theory. Economic theory is a flexible set of conceptual tools and models, and the policy implications following from those models are sensitive to the basic assumptions and initial conditions specified in those models, as well as the value judgments informing an evaluation of policy alternatives. Free-market policy implications require factual assumptions about low transactions costs and about the existence of a low-cost process of creating and assigning property rights — including what we now call intellectual property rights — that imply that private agents perceive costs and benefits that closely correspond to social costs and benefits. Altering those assumptions can radically change the policy implications of the theory.

The best example I can find to illustrate that point is another one of my UCLA professors, the late Earl Thompson, who was certainly the most relentless economic reductionist whom I ever met, perhaps the most relentless whom I can even think of. Although he arrived back at UCLA — where he had been an undergraduate student of Armen Alchian — as an assistant professor in the early 1960s with a Harvard Ph.D., he too started out as a pro-free-market Friedman acolyte. But gradually adopting the Buchanan public-choice paradigm — Nancy MacLean, please take note — of viewing democratic politics as a vehicle for advancing the self-interest of agents participating in the political process (marketplace), he arrived at increasingly unorthodox policy conclusions, to the consternation and dismay of many of his free-market friends and colleagues. Unlike most public-choice theorists, Earl viewed the political marketplace as a largely efficient mechanism for achieving collective policy goals. The main force tending to make the political process inefficient, Earl believed, was ideologically driven politicians pursuing ideological aims rather than the interests of their constituents — a view that seems increasingly on target as our political process becomes simultaneously more ideological and more dysfunctional.

Until Earl’s untimely passing in 2010, I regarded his support of a slew of interventions in the free-market economy — mostly based on national-defense grounds — as curiously eccentric, and I am still inclined to disagree with many of them. But my point here is not to argue whether Earl was right or wrong on specific policies. What matters in the context of the question posed by Dina Pomeranz is the economic logic that gets you from a set of facts and a set of behavioral and causal assumptions to a set of policy conclusions. What is important to us as economists has to be the process, not the conclusion. There is simply no presumption that the economic logic that takes you from a set of reasonably accurate factual assumptions and a set of plausible behavioral and causal assumptions has to take you to the policy conclusions advocated by right-wing, free-market ideologues, or, need I add, to the policy conclusions advocated by anti-free-market ideologues of either left or right.

Certainly we are all within our rights to advocate for policy conclusions that are congenial to our own political preferences, but our obligation as economists is to acknowledge the extent to which a policy conclusion follows from a policy preference rather than from strict economic logic.

Has the S&P 500 Risen by 25% since November 8, 2016 Thanks to Economic Nationalist America First Policies?

Many people – I don’t think that I need to mention names — are saying that the roughly 25% rise in US stock prices in the 13 months since the last Presidential election shows that the economic nationalist America First policies adopted since then have been a roaring success.

Responding to those claims, some people have pointed out that the increase in the S&P 500 since November 8, 2016 or since January 20, 2017 has been very close to the average yearly rate of increase in the S&P 500 since January 20, 2009, when Barack Obama took office. Here is a comparison of the year-on-year rates of increase in the S&P 500 since January 20, 2010, one year after Obama took office.

Year    % year-over-year change in S&P 500
2010    41.3
2011    12.5
2012     2.7
2013    13.5
2014    23.5
2015     9.7
2016    -8.1
2017    22.2
2018    15.8

Now the percent change in the S&P 500 for 2018 is just the change over the ten and a half months between January 20, 2017 and December 5, 2017, so if the current rate of increase in the S&P 500 since January 20 is maintained, the annual increase would be about 18%, which would still be less than the year-on-year increase in the last year of the Obama administration. Over the entire 8 years of the Obama administration, the S&P 500 increased by about 220%, an annual rate of increase of a little over 12% a year. So the S&P 500 in the first year since the adoption of the current economic nationalist America First policies has done better — but only slightly better — than it did on average during the eight years of the Obama administration.
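The annualization here is just compound-growth arithmetic; a quick check (assuming the 10.5-month pace simply continues):

```python
def annualized(cumulative_growth, months):
    """Compound a cumulative growth rate observed over `months` into an annual rate."""
    return (1.0 + cumulative_growth) ** (12.0 / months) - 1.0

# 15.8% over the ~10.5 months from January 20 to December 5
# compounds to roughly 18% at an annual rate.
print(round(annualized(0.158, 10.5) * 100, 1))
```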

But if we are trying to gauge the success of the economic nationalist America First policies of the current administration, it seems appropriate to consider not just the performance of the S&P 500, which disproportionately represents US companies, but also the performance of stocks in other countries. One such index is the MSCI EAFE index, which is designed to measure the equity-market performance of developed markets outside the U.S. and Canada. It is maintained by MSCI Inc.; the EAFE acronym stands for Europe, Australasia and Far East.

The accompanying chart shows the performance of the S&P 500 and the MSCI EAFE index since January 20, 2009. I have normalized both indices to equal 100 on November 8, 2016.
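Normalizing an index to 100 at a base date is just a rescaling; here is a sketch with illustrative (not actual) closing values:

```python
def normalize_to_100(series, base_pos):
    """Rescale a price series so that it equals 100 at position `base_pos`."""
    base = series[base_pos]
    return [100.0 * x / base for x in series]

# Illustrative closes on 2016-11-08, 2017-01-20, 2017-12-05 (not actual data):
sp500 = [2140.0, 2271.0, 2630.0]
print([round(v, 1) for v in normalize_to_100(sp500, 0)])  # [100.0, 106.1, 122.9]
```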

The two vertical lines are drawn at November 8, 2016 and January 20, 2017, the two dates of especial interest for comparison purposes. In the period between the election and the inauguration, the S&P 500 actually performed slightly better than did the MSCI EAFE. But the opposite has obviously been the case since the new administration actually came into power. Since the inauguration, the economic nationalist America First policies adopted by the administration have resulted in proportionately much greater increases in stock prices in Europe, Australia and the Far East than in the US (as reflected in the S&P 500).

Here are the year-over-year comparisons:

Year    % y/y change in S&P 500    % y/y change in MSCI EAFE
2010    41.3                       96.4
2011    12.5                        9.5
2012     2.7                       -7.5
2013    13.5                        1.3
2014    23.5                       35.6
2015     9.7                       20.9
2016    -8.1                       23.4
2017    22.2                       26.5
2018    15.8                       59

In fact, the MSCI EAFE has outperformed the S&P 500 in every year since 2013. But the gap between the rates of increase in the two indices has skyrocketed since last January 20. I have no doubt that inquiring minds will want to know why the economic nationalist America First policies of the new administration have been allowing the rest of the world to outperform the US by an increasingly wide margin. Is that really what winning looks like? Sad!

PS I also can’t help but observe that during the Obama administration, rising stock prices were routinely dismissed by the geniuses at places like the Wall Street Journal editorial page, the Heritage Foundation, and Freedomworks as evidence that Quantitative Easing was an elitist regressive policy aimed at enriching Wall Street and the one-percent at the expense of retirees living on fixed incomes, workers with stagnating wages, and all the others being left behind by the callous and elitist policies of the Fed and the previous administration. Under the current administration, it seems that rising stock prices are no longer evidence that the elites are exploiting the common people as used to be the case before the economic nationalist America First policies now being followed were adopted.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
