
The Trump Rally

David Beckworth has a recent post about the Trump stock-market rally. Just before the election I had a post in which I pointed out that the stock market seemed to be dreading the prospect of a Trump victory, based on the strong positive correlation between movements in the dollar value of the Mexican peso and the S&P 500, though, in response to a comment by one of my readers, I did partially walk back my argument. As the initial returns and exit polls briefly seemed to be pointing toward a Clinton victory, the correlation between the peso and S&P 500 futures seemed to be very strong and getting stronger, and after the returns started to point increasingly toward a Trump victory, the strong correlation between the peso and the S&P 500 remained all too evident, with a massive decline in both. But what seemed like a Trump panic was suddenly broken when Mrs. Clinton phoned Trump to concede and Trump appeared to claim victory with a relatively restrained and conciliatory statement that calmed the worst fears about a messy transition and the potential for serious political instability. The survival of a Republican majority in the Senate was perhaps viewed as a further positive sign, strengthening hopes for business-friendly changes to US corporate and personal taxes. The earlier losses in S&P 500 futures were reversed even without any recovery in the peso.
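For concreteness, here is a minimal sketch of how the peso/S&P correlation I was tracking might be computed. The file name, column names, and 30-day window are assumptions for illustration; the posts relied on watching the futures markets, not on this particular script.

```python
# A sketch of a rolling peso/S&P correlation. The CSV, its columns, and
# the window length are hypothetical; substitute your own data source.

import pandas as pd

# Hypothetical file with columns: date, mxn_usd (dollar value of the peso),
# spx (S&P 500 futures level)
prices = pd.read_csv("peso_spx.csv", parse_dates=["date"], index_col="date")

returns = prices[["mxn_usd", "spx"]].pct_change().dropna()
rolling_corr = returns["mxn_usd"].rolling(window=30).corr(returns["spx"])

print(rolling_corr.tail())  # strongly positive readings in the run-up to the election
```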

So what explains the turnaround in the reaction of the stock market to Trump’s victory? Here’s David Beckworth:

I have a new piece in The Hill where I argue markets are increasingly seeing the Trump shock as an inflection point for the U.S. economy:

It seems the U.S. economy is finally poised for robust economic growth, something that has been missing for the past eight years. Such strong economic growth is expected to cause the demand for credit to increase and the supply of savings to decline.

Though this is not the main point, I will just register my disagreement with David’s version of how interest rates are determined, which essentially restates the “loanable-funds” theory of interest determination, often described as the orthodox alternative to the Keynesian liquidity-preference theory of interest rates. I disagree that it is the alternative to the Keynesian theory. I think that is a very basic misconception perpetrated by macroeconomists with either a regrettable memory lapse about, or an insufficient understanding of, the Fisherian theory of interest rates. In the Fisherian theory, interest rates are implicit in the intertemporal structure of all prices; they are therefore not determined in any single market, as asserted by the loanable-funds theory, any more than the price level is determined in any single market. The way to think about interest-rate determination is to ask the following question: at what structure of interest rates would holders of long-lived assets be content to continue holding the existing stock of assets? Current savings and current demand for credit are an epiphenomenon of interest-rate determination, not a determinant of interest rates — with the caveat that every factor that influences the intertemporal structure of prices is one of the myriad determinants of interest rates.
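To make the Fisherian point concrete, here is one standard way (my notation, not David’s) of expressing how an own-rate of interest is implicit in the intertemporal structure of prices. Given the money rate of interest \(i\), the spot price \(p_t\) of any good, and its forward price \(p_{t+1}\) for delivery one period hence, the own rate of interest \(r\) on that good satisfies

\[
1 + r = (1 + i)\,\frac{p_t}{p_{t+1}},
\]

so that interest rates emerge jointly from the entire array of spot and forward prices rather than being set in any single market for loanable funds.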

Together, these forces are naturally pushing interest rates higher. The Fed’s interest rate hike today is simply piggybacking on this new reality.

If “these forces” is interpreted in the way I have suggested in my above comment on David’s previous sentence, then I would agree with this sentence.

Here are some charts that document this upbeat economic outlook as seen from the treasury market. The first one shows the treasury market’s implicit inflation forecast (or “breakeven inflation”) and real interest rate at the 10-year horizon. These come from TIPs and have their flaws, but they provide a good first approximation to knowing what the bond market is thinking. In this case, both the real interest rate and expected inflation rate are rising. This implies the market expects both higher real economic growth and higher inflation. The two may be related – the higher expected inflation may be a reflection of higher expected nominal demand growth causing real growth. The higher real growth expectations are also probably being fueled by Trump’s supply-side reforms.

[Figure beckworth_interest_rates: 10-year breakeven inflation and real interest rate]
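For readers who want to see where a breakeven series comes from, a minimal sketch of the calculation is below; the yield numbers are invented placeholders, not values from David’s chart.

```python
# Breakeven inflation is (approximately) the nominal Treasury yield minus
# the TIPS real yield at the same horizon. The numbers are placeholders.

nominal_10y = 0.0250   # hypothetical 10-year nominal Treasury yield
tips_10y = 0.0050      # hypothetical 10-year TIPS (real) yield

breakeven = nominal_10y - tips_10y               # simple-difference approximation
exact = (1 + nominal_10y) / (1 + tips_10y) - 1   # compounding-consistent version
print(f"breakeven: {breakeven:.2%}, exact: {exact:.2%}")
```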

I agree that the rise in real interest rates may reflect improved prospects for economic growth, and that the rising TIPS spread may reflect expectations of at least a small rise in inflation towards the Fed’s largely rhetorical 2-percent target. And I concur that a higher inflation rate could be one of the causes of improving implicit forecasts of economic growth. However, I am not so sure that expectations of rising inflation and supply-side reforms are the only explanations for rising real interest rates.

What “reforms” is Trump promising? I’m not sure actually, but here is a list of possibilities: 1) reducing and simplifying corporate tax rates, 2) reducing and simplifying personal tax rates, 3) deregulation, 4) tougher enforcement of immigration laws, 5) deportation of an undetermined number of illegal immigrants, 6) aggressively protectionist international trade policies.

I think that there is a broad consensus in favor of reducing corporate tax rates. Not only is the 35% marginal rate on corporate profits very high compared to the top corporate rates set by other countries, but the interest deduction is a perverse incentive favoring debt rather than equity financing. As I pointed out in a post five years ago, Hyman Minsky, one of the favorite economists of the left, was an outspoken opponent of corporate income taxation in general, precisely because it encourages debt rather than equity financing. I think that the Obama administration would have been happy to propose reducing the corporate tax rate as part of a broader budget deal, but no broader deal with the Republican majority in Congress was possible, and a simple reduction of the corporate tax rate would have been difficult for Obama to sell to his own political base without offering them something that could be described as reducing inequality. So cutting the top corporate tax rate would almost certainly be a good thing (but subject to qualification by the arguments in the next paragraph), and expectations of a reduction in the top corporate rate would tend to raise stock prices, though the effect on stock prices would be moderated by increased issues of new corporate stock.

Reducing and simplifying corporate and personal tax rates seems like a good thing, but there’s at least one problem. Not all earning of taxable income is socially productive. Lots of earned income is generated by completely, or partially, unproductive activities associated with private gains that exceed social gains. I have written in the past about how unproductive many types of information gathering and knowledge production are (e.g., here, here, here, and here). Much of this activity enables the person who acquires knowledge or information to gain an information advantage over people with whom he transacts, so the private return to the acquisition of such knowledge is greater than the social gain, because the gain to one party to the trade comes not from an increase in output but by way of a transfer from the other, less-informed party to the transaction.

The same is true — to a somewhat lesser extent, but the basic tendency is the same — of activity aimed at the discovery of new knowledge over which an intellectual property right can be exercised for a substantial length of time. The ability to extract monopoly rents from newly discovered knowledge is likely to confer a private gain on the discoverer greater than the social gain accruing from the discovery, because the first discoverer to acquire exclusive rights can extract the full value of the discovery even though the marginal benefit of the discovery is only the value of the new knowledge over the time elapsed between the moment of the discovery and the moment when the discovery would have been made, perhaps soon afterwards, by someone else. In general, there is a whole range of income accruing to a variety of winner-takes-all activities in which the private gain to the winner greatly exceeds the social gain. A low marginal rate of income taxation increases the incentive to engage in such socially wasteful winner-takes-all activities.
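A back-of-the-envelope version of the argument, ignoring discounting and using invented numbers: if the new knowledge is worth \(V\) per year, the discoverer can extract rents for \(T\) years, and a rival would have made the same discovery \(t\) years later, then

\[
\underbrace{V\,T}_{\text{private gain}} \;>\; \underbrace{V\,t}_{\text{social gain}} \qquad \text{whenever } t < T.
\]

With, say, \(V = \$1\) million per year, \(T = 20\) years, and \(t = 2\) years, the discoverer captures \$20 million while the social contribution of being first is only about \$2 million.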

Deregulation can be a good thing when it undermines monopolistic price-fixing and legally imposed entry barriers entrenching incumbent suppliers. A lot of regulation has historically been of this type. But although it is convenient for libertarian ideologues to claim that monopoly enhancement or entrenchment characterizes all government regulation, I doubt that most current regulations are for this purpose. A lot of regulation is aimed at preventing dishonest or misleading business practices or environmental pollution or damage to third parties. So as an empirical matter, I don’t think we can say whether a reduction in regulation will have a net positive or a net negative effect on society. Nevertheless, regulation probably does reduce the overall earnings of corporations, so that a reduction in regulation will tend to raise stock prices. If it becomes easier for corporations to emit harmful pollution into the atmosphere and into our rivers, lakes and oceans, the reductions in private costs enjoyed by the corporations will be capitalized into their stock prices while the increase in social costs will be borne in a variety of ways by all individuals in the country or the world. Insofar as stock prices have risen since Trump’s election because of expectations of a rollback in regulation, it is not clear to me, at least, whether that reflects an increase in net social welfare or a capitalization of the value of enhanced rights to engage in socially harmful conduct.

The possible effects of changes in immigration laws, in the enforcement of immigration laws and in trade policies seem to me far too murky at this point even to speculate upon. I would just observe that insofar as the stock market has capitalized the effects of Trump’s supposed supply-side reforms, those reforms would have tended to reduce, not increase, inflation expectations. So it does not seem likely to me that whatever increase in stock prices we have seen so far reflects a pure supply-side effect.

I am more inclined to believe that the recent increases in stock prices and inflation expectations reflect expectations that Trump will fulfill his commitments to conduct irresponsible fiscal policies generating increased budget deficits, which the Republican majorities in Congress will now meekly accept and dutifully applaud, and that Trump will be able either to cajole or intimidate enough officials at the Federal Reserve to accommodate those policies or will appoint enough willing accomplices to the Fed to overcome the opposition of the current FOMC.

Larry White on the Gold Standard and Me

A little over three months ago on a brutally hot day in Washington DC, I gave a talk about a not yet completed paper at the Mercatus Center Conference on Monetary Rules for a Post-Crisis World. The title of my paper was (and still is) “Rules versus Discretion Historically Contemplated.” I hope to post a draft of the paper soon on SSRN.

One of the attendees at the conference was Larry White, who started his graduate training at UCLA just after I had left. When I wrote a post about my talk, Larry responded with a post of his own in which he took issue with some of what I had to say about the gold standard, which I described as the first formal attempt at a legislated monetary rule. Actually, in my talk and my paper, my intention was not so much to criticize the gold standard as to criticize the idea, which originated after the gold standard had already been adopted in England, of imposing a fixed numerical rule, in addition to the gold standard, to control the quantity of banknotes or the total stock of money. The fixed mechanical rule was imposed by an act of Parliament, the Bank Charter Act of 1844. The rule, intended to avoid financial crises such as those experienced in 1825 and 1836, actually led to further crises in 1847, 1857 and 1866, and the latter crises were quelled only after the British government suspended those provisions of the Act preventing the Bank of England from increasing the quantity of banknotes in circulation. So my first point was that the fixed quantitative rule made the gold standard less stable than it would otherwise have been.

My second point was that, in the depths of the Great Depression, a fixed rule freezing the nominal quantity of money was proposed as an alternative rule to the gold standard. It was this rule that one of its originators, Henry Simons, had in mind when he introduced his famous distinction between rules and discretion. Simons had many other reasons for opposing the gold standard, but he introduced the famous rules-discretion dichotomy as a way of convincing those supporters of the gold standard who considered it a necessary bulwark against comprehensive government control over the economy to recognize that his fixed quantity rule would be a far more effective barrier than the gold standard against arbitrary government meddling and intervention in the private sector, because the gold standard, far from constraining the conduct of central banks, granted them broad discretionary authority. The gold standard was an ineffective rule, because it specified only the target pursued by the monetary authority, not the means of achieving the target. In Simons’s view, allowing the monetary authority to exercise discretion over the instruments used to achieve its target left it far too much room for independent, unconstrained decision making.

My third point was that Henry Simons himself recognized that the strict quantity rule that he would have liked to introduce could only be made operational effectively if the entire financial system were radically restructured, an outcome that he reluctantly concluded was unattainable. However, his student Milton Friedman convinced himself that a variant of the Simons rule could actually be implemented quite easily, and he therefore argued over the course of almost his entire career that opponents of discretion ought to favor the quantity rule that he favored instead of continuing to support a restoration of the gold standard. However, Friedman was badly mistaken in assuming that his modified quantity rule eliminated discretion in the manner that Simons had wanted, because his quantity rule was defined in terms of a magnitude, the total money stock in the hands of the public, which was a target, not, as he insisted, an instrument, the quantity of money held by the public being dependent on choices made by the public, not just on choices made by the monetary authority.

So my criticism of quantity rules can be read as at least a partial defense of the gold standard against the attacks of those who criticized the gold standard for being insufficiently rigorous in controlling the conduct of central banks.

Let me now respond to some of Larry’s specific comments and criticisms of my post.

[Glasner] suggests that perhaps the earliest monetary rule, in the general sense of a binding pre-commitment for a money issuer, can be seen in the redemption obligations attached to banknotes. The obligation was contractual: A typical banknote pledged that the bank “will pay the bearer on demand” in specie. . . .  He rightly remarks that “convertibility was not originally undertaken as a policy rule; it was undertaken simply as a business expedient” without which the public would not have accepted demand deposits or banknotes.

I wouldn’t characterize the contract in quite the way Glasner does, however, as a “monetary rule to govern the operation of a monetary system.” In a system with many banks of issue, the redemption contract on any one bank’s notes was a commitment from that bank to the holders of those notes only, without anyone intending it as a device to govern the operation of the entire system. The commitment that governs a single bank ipso facto governs an entire monetary system only when that single bank is a central bank, the only bank allowed to issue currency and the repository of the gold reserves of ordinary commercial banks.

It’s hard to write a short description of a system that covers all its possible permutations. While I think Larry is correct in noting the difference between the commitment made by any single bank to convert – on demand – its obligations into gold and the legal commitment imposed on an entire system to maintain convertibility into gold, the historical process was rather complicated, because both silver and gold coins were circulating in Britain. So the historical fact that British banks were making their obligations convertible into gold was the result of prior decisions that had been made about the legal exchange rate between gold and silver coins, decisions which overvalued gold and undervalued silver, causing full-bodied silver coins to disappear from circulation. Given a monetary framework shaped by the legal gold/silver parity established by the British mint, it was inevitable that British banks operating within that framework would make their banknotes convertible into gold, not silver.

Under a gold standard with competitive plural note-issuers (a free banking system) holding their own reserves, by contrast, the operation of the monetary system is governed by impersonal market forces rather than by any single agent. This is an important distinction between the properties of a gold standard with free banking and the properties of a gold standard managed by a central bank. The distinction is especially important when it comes to judging whether historical monetary crises and depressions can be accurately described as instances where “the gold standard failed” or instead where “central bank management of the monetary system failed.”

I agree that introducing a central bank into the picture creates the possibility that the actions of the central bank will have a destabilizing effect. But that does not necessarily mean that the actions of the central bank could not also have a stabilizing effect compared to how a pure free-banking system would operate under a gold standard.

As the author of Free Banking and Monetary Reform, Glasner of course knows the distinction well. So I am not here telling him anything he doesn’t know. I am only alerting readers to keep the distinction in mind when they hear or read “the gold standard” being blamed for financial instability. I wish that Glasner had made it more explicit that he is talking about a system run by the Bank of England, not the more automatic type of gold standard with free banking.

But in my book, I did acknowledge that there are inherent instabilities associated with a gold standard. That’s why I proposed a system that would aim at stabilizing the average wage level. Almost thirty years on, I have to admit to having my doubts whether that would be the right target to aim for. And those doubts make me more skeptical than I once was about adopting any rigid monetary rule. When it comes to monetary rules, I fear that the best is the enemy of the good.

Glasner highlights the British Parliament’s legislative decision “to restore the convertibility of banknotes issued by the Bank of England into a fixed weight of gold” after a decades-long suspension that began during the Napoleonic wars. He comments:

However, the widely held expectations that the restoration of convertibility of banknotes issued by the Bank of England into gold would produce a stable monetary regime and a stable economy were quickly disappointed, financial crises and depressions occurring in 1825 and again in 1836.

Left unexplained is why the expectations were disappointed, why the monetary regime remained unstable. A reader who hasn’t read Glasner’s other blog entries on the gold standard might think that he is blaming the gold standard as such.

Actually I didn’t mean to blame anyone for the crises of 1825 and 1836. All I meant to do was a) blame the Currency School for agitating for a strict quantitative rule, governing the total quantity of banknotes in circulation, to be imposed on top of the gold standard, b) point out that the rule that was enacted when Parliament passed the Bank Charter Act of 1844 failed to prevent subsequent crises in 1847, 1857 and 1866, and c) observe that the crises ended only after the provisions of the Bank Charter Act limiting the issue of banknotes by the Bank of England had been suspended.

My own view is that, because the Bank of England’s monopoly was not broken up, even with convertibility acting as a long-run constraint, the Bank had the power to create cyclical monetary instability and occasionally did so by (unintentionally) over-issuing and then having to contract suddenly as gold flowed out of its vault — as happened in 1825 and again in 1836. Because the London note-issue was not decentralized, the Bank of England did not experience prompt loss of reserves to rival banks (adverse clearings) as soon as it over-issued. Regulation via the price-specie-flow mechanism (external drain) allowed over-issue to persist longer and grow larger. Correction came only with a delay, and came more harshly than continuous intra-London correction through adverse clearings would have. Bank of England mistakes hobbled the entire financial system. It was central bank errors and not the gold standard that disrupted monetary stability after 1821.

Here, I think, we do arrive at a basic theoretical disagreement, because I don’t accept that the price-specie-flow mechanism played any significant role in the international adjustment process. National price levels under the gold standard were positively correlated to a high degree, not negatively correlated, as implied by the price-specie-flow mechanism. Moreover, the Bank Charter Act imposed a fixed quantitative limit on the note issue of all British banks and the Bank of England in particular, so the overissue of banknotes by the Bank of England could not have been the cause of the post-1844 financial crises. If there was excessive credit expansion, it was happening through deposit creation by a great number of competing deposit-creating banks, not the overissue of banknotes by the Bank of England.

This hypothesis about the source of England’s cyclical instability is far from original with me. It was offered during the 1821-1850 period by a number of writers. Some, like Robert Torrens, were members of the Currency School and offered the Currency Principle as a remedy. Others, like James William Gilbart, are better classified as members of the Free Banking School because they argued that competition and adverse clearings would effectively constrain the Bank of England once rival note issuers were allowed in London. Although they offered different remedies, these writers shared the judgment that the Bank of England had over-issued, stimulating an unsustainable boom, then was eventually forced by gold reserve losses to reverse course, instituting a credit crunch. Because Glasner elides the distinction between free banking and central banking in his talk and blog post, he naturally omits the third side in the Currency School-Banking School-Free Banking School debate.

And my view is that Free Bankers like Larry White overestimate the importance of note issue in a banking system in which deposits were rapidly overtaking banknotes as the primary means by which banks extended credit. As Henry Simons himself recognized, this shift from banknotes to bank deposits was itself stimulated, at least in part, by the Bank Charter Act, which made the extension of credit via banknotes prohibitively costly relative to expansion by deposit creation.

Later in his blog post, Glasner fairly summarizes how a gold standard works when a central bank does not subvert or over-ride its automatic operation:

Given the convertibility commitment, the actual quantity of the monetary instrument that is issued is whatever quantity the public wishes to hold.

But he then immediately remarks:

That, at any rate, was the theory of the gold standard. There were — and are — at least two basic problems with that theory. First, making the value of money equal to the value of gold does not imply that the value of money will be stable unless the value of gold is stable, and there is no necessary reason why the value of gold should be stable. Second, the behavior of a banking system may be such that the banking system will itself destabilize the value of gold, e.g., in periods of distress when the public loses confidence in the solvency of banks and banks simultaneously increase their demands for gold. The resulting increase in the monetary demand for gold drives up the value of gold, triggering a vicious cycle in which the attempt by each to increase his own liquidity impairs the solvency of all.

These two purported “basic problems” prompt me to make two sets of comments:

1. While it is true that the purchasing power of gold was not perfectly stable under the classical gold standard, perfection is not the relevant benchmark. The purchasing power of money was more stable under the classical gold standard than it has been under fiat money standards since the Second World War. Average inflation rates were closer to zero, and the price level was more predictable at medium to long horizons. Whatever Glasner may have meant by “necessary reason,” there certainly is a theoretical reason for this performance: the economics of gold mining make the purchasing power of gold (ppg) mean-reverting in the face of monetary demand and supply shocks. An unusually high ppg encourages additional gold mining, until the ppg declines to the normal long-run value determined by the flow supply and demand for gold. An unusually low ppg discourages mining, until the normal long-run ppg is restored. It is true that permanent changes in the gold mining cost conditions can have a permanent impact on the long-run level of the ppg, but empirically such shocks were smaller than the money supply variations that central banks have produced.

2. The behavior of the banking system is indeed critically important for short-run stability. Instability wasn’t a problem in all countries, so we need to ask why some banking systems were unstable or panic-prone, while others were stable. The US banking system was panic prone in the late 19th century while the Canadian system was not. The English system was panic-prone while the Scottish system was not. The behavioral differences were not random or mere facts of nature, but grew directly from differences in the legal restrictions constraining the banks. The Canadian and Scottish systems, unlike the US and English systems, allowed their banks to adequately diversify, and to respond to peak currency demands, thus allowed banks to be more solvent and more liquid, and thus avoided loss of confidence in the banks. The problem in the US and England was not the gold standard, or a flaw in “the theory of the gold standard,” but ill-conceived legal restrictions that weakened the banking systems.

Larry makes two good points, but I doubt that they are very important in practice. The problem with the value of gold is that there is a very long time lag before the adjustment in the rate of output of new gold causes the value of gold to revert to its normal level. The annual output of gold is only about 3 percent of the total stock of gold. If the monetary demand for gold is large relative to the total stock and that demand is unstable, the swing in the overall demand for gold can easily dominate the small resulting change in the annual rate of output. So I do not have much confidence that the mean-reversion characteristic of the purchasing power of gold will be of much help in the short or even the medium term. I also agree with Larry that the Canadian and Scottish banking systems exhibited a lot more stability than the neighboring US and English banking systems. That is an important point, but I don’t think it is decisive. It’s true that there were no bank failures in Canada in the Great Depression. But the absence of bank failures, while certainly a great benefit, did not prevent Canada from suffering a downturn of about the same depth and duration as the US did between 1929 and 1933. The main cause of the Great Depression was the deflation caused by the appreciation of the value of gold. The deflation caused bank failures when banks were small and unstable and did not cause bank failures when banks were large and diversified. But the deflation was still wreaking havoc on the rest of the economy even though banks weren’t failing.
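A simple calculation, using the 3-percent flow/stock figure just mentioned and otherwise invented numbers, shows why the supply response operates so slowly:

```python
# Back-of-the-envelope stock-flow arithmetic for gold. The 3% flow/stock
# ratio comes from the text; the shock size and supply response are
# invented for illustration.

stock = 100.0                        # total gold stock (normalized units)
annual_output = 0.03 * stock         # new output is ~3% of the stock per year

demand_shock = 0.10 * stock          # a 10% jump in the monetary demand for gold
extra_output = 0.5 * annual_output   # suppose mining output rises 50% (generous)

years_to_offset = demand_shock / extra_output
print(years_to_offset)               # ~6.7 years for new supply to offset the shock
```

Even with a generous supply response, the cumulative flow of new gold takes years to offset a modest shift in the monetary demand for gold, which is why the mean reversion operates only at long horizons.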

A Primer on Equilibrium

After my latest post about rational expectations, Henry from Australia, one of my most prolific commenters, has been engaging me in a conversation about what assumptions are made – or need to be made – for an economic model to have a solution and for that solution to be characterized as an equilibrium, and in particular, a general equilibrium. Equilibrium in economics is not always a clearly defined concept, and it can have a number of different meanings depending on the properties of a given model. But the usual understanding is that the agents in the model (as consumers or producers) are trying to do as well for themselves as they can, given the endowments of resources, skills and technology at their disposal and given their preferences. The conversation was triggered by my assertion that rational expectations must be “compatible with the equilibrium of the model in which those expectations are embedded.”

That was the key insight of John Muth in his paper introducing the rational-expectations assumption into economic modelling. So in any model in which the current and future actions of individuals depend on their expectations of the future, the model cannot arrive at an equilibrium unless those expectations are consistent with the equilibrium of the model. If the expectations of agents are incompatible or inconsistent with the equilibrium of the model, then, since the actions taken or plans made by agents are based on those expectations, the model cannot have an equilibrium solution.

Now Henry thinks that this reasoning is circular. My argument would be circular if I defined an equilibrium to be the same thing as correct expectations. But I am not so defining an equilibrium. I am saying that the correctness of expectations by all agents implies 1) that their expectations are mutually consistent, and 2) that, having made plans based on their expectations (plans which, by assumption, agents regarded as the best set of choices available to them given those expectations), agents will not, if their expectations are realized, regret the decisions and the choices that they made. Each agent would be as well off as he could have made himself, given his perceived opportunities when the decisions were made. That the correctness of expectations implies equilibrium is the consequence of assuming that agents are trying to optimize their decision-making process, given their available and expected opportunities. If all expected opportunities are correctly foreseen, then all decisions will have been the optimal decisions under the circumstances. But nothing has been said that requires all expectations to be correct, or even that it is possible for all expectations to be correct. If an equilibrium does not exist (and the mere fact that you can write down an economic model does not mean that a solution to the model exists), then the sweet spot where all expectations are consistent and compatible is just a blissful fantasy. So a logical precondition to showing that rational expectations are even possible is to prove that an equilibrium exists. There is nothing circular about the argument.

Now the key to proving the existence of a general equilibrium is to show that the general-equilibrium model implies the existence of what mathematicians call a fixed point. A fixed point is guaranteed to exist when there is a continuous mapping – a rule or a function – that takes every point in a convex, compact set of points and assigns to it another point in the same set. A convex, compact set has two important properties: 1) the line connecting any two points in the set is entirely contained within the boundaries of the set, and 2) there are no gaps between any two points in the set. The set of points in a circle or a rectangle is a convex compact set; the set of points contained in the Star of David is not a convex set. Any two points in the circle will be connected by a line that lies completely within the circle; the points at adjacent tips of a Star of David will be connected by a line that lies partly outside the Star of David.
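For reference, the theorem being invoked here is Brouwer’s fixed-point theorem, one standard statement of which is:

\[
\text{If } S \subset \mathbb{R}^n \text{ is nonempty, compact, and convex, and } f\colon S \to S \text{ is continuous, then there exists } x^{*} \in S \text{ such that } f(x^{*}) = x^{*}.
\]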

If you think of the set of all possible price vectors for an economy, those vectors – each containing a price for each good or service in the economy – could be mapped onto itself in the following way. Given all the equations describing the behavior of each agent in the economy, the quantity demanded and supplied of each good could be calculated, giving us the excess demand (the difference between the amounts demanded and supplied) for each good. Then the price of every good in excess demand would be raised, the price of every good in negative excess demand would be reduced, and the price of every good with zero excess demand would be held constant. To ensure that the mapping takes a point from a given convex set onto itself, all prices could be normalized so that the sum of all the individual prices would always equal 1. The fixed-point theorem ensures that for a continuous mapping from a convex compact set onto itself there must be at least one fixed point, i.e., at least one point in the set that gets mapped onto itself. The price vector corresponding to that point is an equilibrium, because, given how our mapping rule was defined, a point is mapped onto itself if and only if all excess demands are zero, so that no prices change. Every fixed point – and there may be one or more fixed points – corresponds to an equilibrium price vector, and every equilibrium price vector is associated with a fixed point.
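Here is a minimal sketch of the mapping just described, applied to a hypothetical two-good exchange economy. The demand functions and the adjustment step are invented for illustration; as the next paragraph explains, convergence of this naive rule is not guaranteed in general.

```python
# A toy tatonnement: raise prices of goods in excess demand, lower prices
# of goods in excess supply, renormalize so prices sum to 1. The economy
# (two Cobb-Douglas agents with unit endowments) is invented for this sketch.

import numpy as np

def excess_demand(p):
    """Aggregate excess demand at normalized prices p (p.sum() == 1)."""
    endowments = np.array([[1.0, 0.0],     # agent 1 owns one unit of good 1
                           [0.0, 1.0]])    # agent 2 owns one unit of good 2
    shares = np.array([[0.6, 0.4],         # Cobb-Douglas expenditure shares
                       [0.3, 0.7]])
    wealth = endowments @ p                # each agent's wealth at prices p
    demand = shares * wealth[:, None] / p  # Cobb-Douglas demands
    return demand.sum(axis=0) - endowments.sum(axis=0)

p = np.array([0.5, 0.5])                   # start from equal prices
for _ in range(1000):
    z = excess_demand(p)
    if np.abs(z).max() < 1e-10:            # all excess demands (nearly) zero
        break
    p = np.maximum(p + 0.1 * z, 1e-9)      # adjust each price toward its excess demand
    p = p / p.sum()                        # renormalize onto the price simplex

print(p, excess_demand(p))                 # fixed point: p ~ (3/7, 4/7), z ~ 0
```

In this particular economy the rule happens to converge (Cobb-Douglas demands satisfy gross substitutability), but that is a property of the example, not of the mapping in general.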

Before going on, I ought to make an important observation that is often ignored. The mathematical proof of the existence of an equilibrium doesn’t prove that the economy operates at an equilibrium, or even that the equilibrium could be identified under the mapping rule described (which is a kind of formalization of the Walrasian tatonnement process). The mapping rule doesn’t guarantee that you would ever discover a fixed point in any finite number of iterations. Walras thought the price-adjustment rule of raising the prices of goods in excess demand and reducing prices of goods in excess supply would converge on the equilibrium price vector. But the conditions under which you can prove that the naïve price-adjustment rule converges to an equilibrium price vector turn out to be very restrictive, so even though we can prove that the competitive model has an equilibrium solution – in other words, that the behavioral, structural and technological assumptions of the model are coherent, meaning that the model has a solution – the model contains no assumptions about how prices are actually determined that would prove that the equilibrium is ever reached. In fact, the problem is even more daunting than the previous sentence suggests, because even Walrasian tatonnement imposes an incredibly powerful restriction, namely that no trading is allowed at non-equilibrium prices. In practice there are almost never recontracting provisions allowing traders to revise the terms of their trades once it becomes clear that the prices at which trades were made were not equilibrium prices.

I now want to show how price expectations fit into all of this, because the original general equilibrium models were either one-period models or formal intertemporal models that were reduced to single-period models by assuming that all trading for future delivery was undertaken in the first period by long-lived agents who would eventually carry out the transactions that were contracted in period 1 for subsequent consumption and production. Time was preserved in a purely formal, technical way, but all economic decision-making was actually concluded in the first period. But even though the early general-equilibrium models did not encompass expectations, one of the extraordinary precursors of modern economics, Augustin Cournot, who was way too advanced for his contemporaries even to comprehend, much less make any use of, what he was saying, had incorporated the idea of expectations into the solution of his famous economic model of oligopolistic price setting.

The key to oligopolistic pricing is that each oligopolist must take into account not just consumer demand for his product and his own production costs; he must also consider what actions will be taken by his rivals. This is not a problem for a competitive producer (a price-taker) or a pure monopolist. The price-taker simply compares the price at which he can sell as much as he wants with his production costs and decides how much it is worthwhile to produce, increasing output until his marginal cost rises to match the price at which he can sell. The pure monopolist, if he knows, as is assumed in such exercises, or thinks he knows the shape of the customer demand curve, selects the price and quantity combination on the demand curve that maximizes total profit (corresponding to the equality of marginal revenue and marginal cost). In oligopolistic situations, each producer must take into account how much his rivals will sell, or what prices they will set.

It was by positing such a situation and finding an analytic solution that Cournot made a stunning intellectual breakthrough. In the simple duopoly case, Cournot posited that if the duopolists had identical costs, then each could find his optimal output conditional on the output chosen by the other. This is a simple profit-maximization problem for each duopolist, given a demand curve for the combined output of both (assumed to be identical, so that a single price must obtain for the output of both), a cost curve, and the output of the other duopolist. Thus, for each duopolist there is a reaction curve showing his optimal output given the output of the other. See the accompanying figure.

[Figure: Cournot duopoly reaction curves]

If one duopolist produces zero, the optimal output for the other is the monopoly output. Depending on what the level of marginal cost is, there is some output by either of the duopolists that is sufficient to make it unprofitable for the other duopolist to produce anything. That level of output corresponds to the competitive output at which price just equals marginal cost. So the slope of the two reaction functions corresponds to the ratio of the monopoly output to the competitive output, which, with constant marginal cost, is 1:2. Given identical costs, the two reaction curves are symmetric, and the optimal output for each, given the expected output of the other, corresponds to the intersection of the two reaction curves, at which both duopolists produce the same quantity. The combined output of the two duopolists will be greater than the monopoly output, but less than the competitive output at which price equals marginal cost. With constant marginal cost, it turns out that each duopolist produces one-third of the competitive output. In the general case with n oligopolists, the ratio of the combined output of all n firms to the competitive output equals n/(n+1).
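A small numerical sketch of the Cournot logic just described, with invented demand and cost parameters:

```python
# Cournot duopoly with linear inverse demand P = a - b*Q and constant,
# identical marginal cost c. The numbers a, b, c are invented for this sketch.

a, b, c = 10.0, 1.0, 1.0

def best_response(q_other):
    """Profit-maximizing output given the rival's expected output."""
    return max((a - c) / (2 * b) - q_other / 2, 0.0)

q1 = q2 = 0.0
for _ in range(100):                       # iterate expectations to the fixed point
    q1, q2 = best_response(q2), best_response(q1)

competitive_q = (a - c) / b                # output at which price = marginal cost
print(q1, q2, (q1 + q2) / competitive_q)   # each -> 3.0; combined -> 2/3 of competitive
```

Iterating best responses is one way of locating the fixed point: with these assumptions each duopolist converges to one-third of the competitive output, and the combined output is two-thirds of the competitive output, consistent with the n/(n+1) formula for n = 2.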

Cournot’s solution corresponds to a fixed point at which the equilibrium of the model implies that both duopolists have correct expectations of the output of the other. Given the assumptions of the model, if the duopolists both expect the other to produce an output equal to one-third of the competitive output, their expectations will be consistent and will be realized. If either one expects the other to produce a different output, the outcome will not be an equilibrium, and each duopolist will regret his output decision, because the price at which he can sell his output will differ from the price that he had expected. In the Cournot case, you could define a mapping from the vector of outputs that each duopolist expects the other to produce onto the vector of outputs that each duopolist correspondingly plans. An equilibrium corresponds to a case in which both duopolists expect the output planned by the other. If either duopolist expects a different output from what the other plans, the outcome will not be an equilibrium.

We can now recognize that Cournot’s solution anticipated John Nash’s concept of an equilibrium strategy, in which each player chooses a strategy that is optimal given his expectation of what the other player’s strategy will be. A Nash equilibrium corresponds to a fixed point in which each player chooses an optimal strategy based on the correct expectation of what the other player’s strategy will be. There may be more than one Nash equilibrium in many games. For example, rather than base their decisions on an expectation of the quantity choice of the other duopolist, the two duopolists could base their decisions on an expectation of what price the other duopolist would set. In the constant-cost case, this choice of strategies would lead to the competitive output, because both duopolists would conclude that the optimal strategy of the other duopolist would be to charge a price just sufficient to cover his marginal cost. This was the alternative oligopoly model suggested by another French economist, J. L. F. Bertrand. Of course there is a lot more to be said about how oligopolists strategize than these two models convey, and about the conditions under which one or the other model is the more appropriate. I just want to observe that assumptions about expectations are crucial to how we analyze market equilibrium, and that the importance of these assumptions for understanding market behavior has been recognized for a very long time.

But from a macroeconomic perspective, the important point is that expected prices become the critical equilibrating variable in the theory of general equilibrium and in macroeconomics in general. Single-period models of equilibrium, including general-equilibrium models that are formally intertemporal, but in which all trades are executed in the initial period at known prices in a complete array of markets determining all future economic activity, are completely sterile and useless for macroeconomics except as a stepping stone to analyzing the implications of imperfect forecasts of future prices. If we want to think about general equilibrium in a useful macroeconomic context, we have to think about a general-equilibrium system in which agents make plans about consumption and production over time based on only the vaguest conjectures about what future conditions will be like when the various interconnected stages of their plans will be executed.

Unlike the full Arrow-Debreu system of complete markets, a general-equilibrium system with incomplete markets cannot be equilibrated, even in principle, by price adjustments in the incomplete set of present markets. Equilibration depends on the consistency of expected prices with equilibrium. If equilibrium is characterized by a fixed point, the fixed point must be a mapping of a set of vectors of current and expected prices onto itself. That means that expected future prices are as much equilibrating variables as current market prices. But expected future prices exist only in the minds of the agents; they are not directly subject to change by market forces in the way that prices in actual markets are. If the equilibrating tendencies of market prices are far from completely effective even in a system of complete markets, the equilibrating tendencies of expected future prices may be not only non-existent but potentially disequilibrating.

The problem of price expectations in an intertemporal general-equilibrium system is central to the understanding of macroeconomics. Hayek, who was the father of intertemporal equilibrium theory, which he was the first to outline in a 1928 paper in German, and who explained the problem with unsurpassed clarity in his 1937 paper “Economics and Knowledge,” unfortunately did not seem to acknowledge its radical consequences for macroeconomic theory, and the potential ineffectiveness of self-equilibrating market forces. My quarrel with rational expectations as a strategy of macroeconomic analysis is its implicit assumption, lacking any analytical support, that prices and price expectations somehow always adjust to equilibrium values. In certain contexts, when there is no apparent basis to question whether a particular market is functioning efficiently, rational expectations may be a reasonable working assumption for modelling observed behavior. However, when there is reason to question whether a given market is operating efficiently or whether an entire economy is operating close to its potential, to insist on principle that the rational-expectations assumption must be made, to assume, in other words, that actual and expected prices adjust rapidly to their equilibrium values allowing an economy to operate at or near its optimal growth path, is simply, as I have often said, an exercise in circular reasoning and question begging.

Making Sense of Rational Expectations

Almost two months ago I wrote a provocatively titled post about rational expectations, in which I argued against the idea that it is useful to make the rational-expectations assumption in developing a theory of business cycles. The title of the post was probably what led to the start of a thread about my post on the econjobrumors blog, the tenor of which can be divined from the contribution of one commenter: “Who on earth is Glasner?” But, aside from the attention I received on econjobrumors, I also elicited a response from Scott Sumner:

David Glasner has a post criticizing the rational expectations modeling assumption in economics:

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — the expectations of both people with differing expectations cannot be simultaneously realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that the expectations of those people who were correct were also not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.

Beyond that little problem, Mason raises the further problem that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational.

I see two mistakes here. Not everyone must have identical expectations in a world of rational expectations. Now it’s true that there are ratex models where people are simply assumed to have identical expectations, such as representative agent models, but that modeling assumption has nothing to do with rational expectations, per se.

In fact, the rational expectations hypothesis suggests that people form optimal forecasts based on all publicly available information. One of the most famous rational expectations models was Robert Lucas’s model of monetary misperceptions, where people observed local conditions before national data was available. In that model, each agent sees different local prices, and thus forms different expectations about aggregate demand at the national level.

It is true that not all expectations must be identical in a world of rational expectations. The question is whether those expectations are compatible with the equilibrium of the model in which those expectations are embedded. If any of those expectations are incompatible with the equilibrium of the model, then, if agents’ decisions are based on their expectations, the model will not arrive at an equilibrium solution. Lucas’s monetary-misperception model was a clever effort to tweak the rational-expectations assumption just enough to allow for a temporary disequilibrium. But the attempt was a failure, because Lucas could generate only a one-period deviation from equilibrium, which was too little for the model to provide a plausible account of a business cycle. That gave Kydland and Prescott the idea to discard Lucas’s monetary-misperceptions mechanism and write their paper on real business cycles without adulterating the rational-expectations assumption.

Here’s what Muth said about the rational expectations assumption in the paper in which he introduced “rational expectations” as a modeling strategy.

In order to explain these phenomena, I should like to suggest that expectations, since they are informed predictions of future events, are essentially the same as the predictions of the relevant economic theory. At the risk of confusing this purely descriptive hypothesis with a pronouncement as to what firms ought to do, we call such expectations “rational.”

The hypothesis can be rephrased a little more precisely as follows: that expectations of firms (or, more generally, the subjective probability distribution of outcomes) tend to be distributed, for the same information set, about the prediction of the theory (or the “objective” probability distributions of outcomes).

The hypothesis asserts three things: (1) Information is scarce, and the economic system generally does not waste it. (2) The way expectations are formed depends specifically on the structure of the relevant system describing the economy. (3) A “public prediction,” in the sense of Grunberg and Modigliani, will have no substantial effect on the operation of the economic system (unless it is based on inside information).

It does not assert that the scratch work of entrepreneurs resembles the system of equations in any way; nor does it state that predictions of entrepreneurs are perfect or that their expectations are all the same. For purposes of analysis, we shall use a specialized form of the hypothesis. In particular, we assume: 1. The random disturbances are normally distributed. 2. Certainty equivalents exist for the variables to be predicted. 3. The equations of the system, including the expectations formulas, are linear. These assumptions are not quite so strong as may appear at first because any one of them virtually implies the other two.

It seems to me that Muth was confused about what the rational-expectations assumption entails. He asserts that the expectations of entrepreneurs — and presumably that applies to other economic agents as well insofar as their decisions are influenced by their expectations of the future — should be assumed to be exactly what the relevant economic model predicts the expected outcomes to be. If so, I don’t see how it can be maintained that expectations could diverge from each other. If what entrepreneurs produce next period depends on the price they expect next period, then how is it possible that the total supply produced next period is independent of the distribution of expectations as long as the errors are normally distributed and the mean of the distribution corresponds to the equilibrium of the model? This could only be true if the output produced by each entrepreneur was a linear function of the expected price and all entrepreneurs had identical marginal costs or if the distribution of marginal costs was uncorrelated with the distribution of expectations. The linearity assumption is hardly compelling unless you assume that the system is in equilibrium and all changes are small. But making that assumption is just another form of question begging.
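The role of the linearity assumption can be seen in a toy simulation (all numbers invented): with supply linear in the expected price, only the mean of the distribution of expectations matters for aggregate output; with a convex supply schedule, the dispersion of expectations matters too.

```python
# Divergent price expectations across 100,000 producers. With linear supply,
# dispersion washes out of the aggregate; with convex supply it does not.

import numpy as np

rng = np.random.default_rng(0)
expected_prices = rng.normal(loc=10.0, scale=2.0, size=100_000)
expected_prices = np.clip(expected_prices, 0.0, None)  # no negative prices

linear_supply = 3.0 * expected_prices        # linear in the expected price
convex_supply = expected_prices ** 1.5       # e.g., rising marginal cost

print(linear_supply.mean(), 3.0 * 10.0)      # ~30 vs 30: only the mean matters
print(convex_supply.mean(), 10.0 ** 1.5)     # ~32.1 vs ~31.6: dispersion matters
```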

It’s also wrong to say:

But if expectations are not realized, expectations were not rational.

Scott is right. What I said was wrong. What I ought to have said is: “But if expectations (being divergent) could not have been realized, those expectations were not rational.”

Suppose I am watching the game of roulette. I form the expectation that the ball will not land on one of the two green squares. Now suppose it does. Was my expectation rational? I’d say yes—there was only a 2/38 chance of the ball landing on a green square. It’s true that I lacked perfect foresight, but my expectation was rational, given what I knew at the time.

I don’t think that Scott’s response is compelling, because you can’t judge the rationality of an expectation in isolation; it has to be judged in a broader context. If you are forming your expectation about where the ball will fall in a game of roulette, the rationality of that expectation can only be evaluated in the context of how much you should be willing to bet that the ball will fall on one of the two green squares, and that requires knowledge of what the payoff would be if the ball did fall on one of those two squares. And that would mean that someone else is involved in the game and would be taking an opposite position. The rationality of expectations could only be judged in the context of what everyone participating in the game was expecting and what the payoffs and penalties were for each participant.
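A trivial computation illustrates the point; the even-money payoff below is the standard structure of a red/black roulette bet, and the stake is an arbitrary example.

```python
# Whether an expectation justifies a bet depends on payoffs, not just
# probabilities. American roulette: 38 pockets, 2 of them green.

p_not_green = 36 / 38     # the forecast "not green" is right ~94.7% of the time
p_win_red = 18 / 38       # but an even-money bet on red wins only ~47.4% of the time

stake = 1.00
expected_value = p_win_red * stake - (1 - p_win_red) * stake
print(f"EV of a ${stake:.2f} even-money bet: ${expected_value:+.4f}")
# ~ -$0.0526: a correct-most-of-the-time forecast, yet the payoffs on offer
# still make the bet a losing one on average.
```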

In 2006, it might have been rational to forecast that housing prices would not crash. If you lived in many countries, your forecast would have been correct. If you happened to live in Ireland or the US, your forecast would have been incorrect. But it might well have been a rational forecast in all countries.

The rationality of a forecast can’t be assessed in isolation. A forecast is rational if it is consistent with other forecasts, so that it, along with the other forecasts, could potentially be realized. As a commenter on Scott’s blog observed, a rational expectation is an expectation that, at the time the forecast is made, is consistent with the relevant model. The forecast of housing prices may turn out to be incorrect, but the forecast might still have been rational when it was made if the forecast of prices was consistent with what the relevant model would have predicted. The failure of the forecast to be realized could mean either that the forecast was not consistent with the model, or that between the time of the forecast and the time of its realization, new information, not available at the time of the forecast, came to light and changed the prediction of the relevant model.

The need for context in assessing the rationality of expectations was wonderfully described by Thomas Schelling in his classic analysis of cooperative games.

One may or may not agree with any particular hypothesis as to how a bargainer’s expectations are formed either in the bargaining process or before it and either by the bargaining itself or by other forces. But it does seem clear that the outcome of a bargaining process is to be described most immediately, most straightforwardly, and most empirically, in terms of some phenomenon of stable and convergent expectations. Whether one agrees explicitly to a bargain, or agrees tacitly, or accepts by default, he must if he has his wits about him, expect that he could do no better and recognize that the other party must reciprocate the feeling. Thus, the fact of an outcome, which is simply a coordinated choice, should be analytically characterized by the notion of convergent expectations.

The intuitive formulation, or even a careful formulation in psychological terms, of what it is that a rational player expects in relation to another rational player in the “pure” bargaining game, poses a problem in sheer scientific description. Both players, being rational, must recognize that the only kind of “rational” expectation they can have is a fully shared expectation of an outcome. It is not quite accurate – as a description of a psychological phenomenon – to say that one expects the second to concede something; the second’s readiness to concede or to accept is only an expression of what he expects the first to accept or to concede, which in turn is what he expects the first to expect the second to expect the first to expect, and so on. To avoid an “ad infinitum” in the description process, we have to say that both sense a shared expectation of an outcome; one’s expectation is a belief that both identify the outcome as being indicated by the situation, hence as virtually inevitable. Both players, in effect, accept a common authority – the power of the game to dictate its own solution through their intellectual capacity to perceive it – and what they “expect” is that they both perceive the same solution.

Viewed in this way, the intellectual process of arriving at “rational expectations” in the full-communication “pure” bargaining game is virtually identical with the intellectual process of arriving at a coordinated choice in the tacit game. The actual solutions might be different because the game contexts might be different, with different suggestive details; but the intellectual nature of the two solutions seems virtually identical since both depend on an agreement that is reached by tacit consent. This is true because the explicit agreement that is reached in the full communication game corresponds to the a priori expectations that were reached (or in theory could have been reached) jointly but independently by the two players before the bargaining started. And it is a tacit “agreement” in the sense that both can hold confident rational expectation only if both are aware that both accept the indicated solution in advance as the outcome that they both know they both expect.

So I agree that rational expectations can simply mean that agents are forming expectations about the future incorporating, as best they can, all the knowledge available to them. This is a weak, common-sense interpretation of rational expectations that I think is what Scott Sumner has in mind when he uses the term “rational expectations.” But in the context of formal modeling, rational expectations has a more restrictive meaning: given all the information available, the expectations of all agents in the model must correspond to what the model itself predicts given that information. Even though Muth himself and others have tried to avoid the inference that all agents must have expectations that match the solution of the model, given the information underlying the model, the assumptions under which agents could hold divergent expectations are, in their own way, just as restrictive as the assumption that agents hold convergent expectations.

In a way, the disconnect between a common-sense understanding of what “rational expectations” means and what “rational expectations” means in the context of formal macroeconomic models is analogous to the disconnect between what “competition” means in normal discourse and what “competition” (and especially “perfect competition”) means in the context of formal microeconomic models. Much of the rivalrous behavior between competitors that we think of as being essential aspects of competition and the competitive process is simply ruled out by the formal assumption of perfect competition.

OMG! The Age of Trump Is upon Us

UPDATE (11/11, 10:47 am EST): Clinton’s lead in the popular vote is now about 400,000 and according to David Leonhardt of the New York Times, the lead is likely to increase to as much as 2 million votes by the time all the votes are counted.

Here’s a little thought experiment for you to ponder. Suppose that the outcome of yesterday’s election had been reversed and Hillary Clinton emerged with 270+ electoral votes but trailed Donald Trump by 200,000 popular votes. What would the world be like today? What would we be hearing from Trump and his entourage about the outcome of the election? I daresay we would be hearing about “second amendment remedies” from many of the Trumpsters. I wonder how that would have played out.

(As I write this, I am hearing news reports about rowdy demonstrations in a number of locations against Trump’s election. Insofar as these demonstrations become violent, they are certainly deplorable, but nothing we have heard from Clinton and her campaign or from leaders of the Democratic Party would provide any encouragement for violent protests against the outcome of a free election.)

But enough of fantasies about an alternative universe; in the one that we happen to inhabit, the one in which Donald Trump is going to be sworn in as President of the United States in about ten weeks, we are faced with this stark reality. The American voters, in their wisdom, have elected a mountebank (OED: “A false pretender to skill or knowledge, a charlatan: a person incurring contempt or ridicule through efforts to acquire something, esp. social distinction or glamour.”), a narcissistic sociopath, as their chief executive and head of state. The success of Trump’s demagogic campaign – a campaign repackaging the repugnant themes of such successful 20th century American demagogues as Huey Long, Father Coughlin and George Wallace (not to mention not so successful ones like the deplorable Pat Buchanan) – is now being celebrated by Trump apologists and Banana Republican sycophants as evidence of his political genius in sensing and tapping into the anger and frustrations of the forgotten white working class, as if the anger and frustration of the white working class has not been the trump card that every two-bit demagogue and would-be despot of the last 150 years has tried to play. Some genius.

I recently overheard a conversation between a close friend of mine, who is a Trump supporter, and a non-Trump supporter. My friend is white, but he is not one of the poorly educated of whom Trump is so fond; he holds a Ph.D. in physics and is well read and knowledgeable about many subjects. Although he doesn’t like Trump, he is very conservative and can’t stand Clinton, so he decided to vote for Trump without any apparent internal struggle or second thoughts. One of his reasons for favoring Trump is his opposition to Obamacare, which he blames for the very large increase in premiums he has to pay for the medical insurance he gets through his employer. When it was pointed out to him that it is unlikely that the increase in his insurance premiums was caused by Obamacare, his response was that Obamacare has added to the regulations that insurance companies must comply with, so that the cost of those regulations is ultimately borne by those buying insurance, which means that his insurance premiums must have gone up because of Obamacare.

Since I wasn’t part of the conversation, I didn’t interrupt to point out that the standard arguments about the costs of regulation being ultimately borne by consumers of the regulated product don’t necessarily apply to markets like health care, in which customers don’t have good information about whether suppliers are providing them with the services that they need or are instead providing unnecessary services to enrich themselves. In such markets, third parties (i.e., insurance companies), supposedly better informed than patients about whether the services provided by their doctors really serve the patients’ interests and are worth the cost of providing them, can help protect the interests of patients. Of course, the interests of insurance companies aren’t necessarily aligned very well with the interests of their policyholders either, because insurance companies may prefer not to pay for treatments that it would be in the interests of patients to receive.

So in health markets there are doctors treating ill-informed patients whose bills are being paid by insurance companies that try to monitor doctors to make sure that doctors do not provide unnecessary services and treatments to patients. But since the interests of insurance companies may be not to pay doctors to provide services that would be beneficial to patients, who is going to protect policyholders from the insurance companies? Well, um, maybe the government should be involved. Yes, but how do we know if the government is doing a good job or bad job of looking out for the interests of patients? I don’t think that we know the answer to that question. But Obamacare, aside from making medical insurance more widely available to people who need it, is an attempt to try to make insurance companies more responsive to the interests of their policyholders. Perhaps not the smartest attempt, by any means, but given the system of health care delivery that has evolved in the United States over the past three quarters of a century, it is not obviously a step in the wrong direction.

But even if Obamacare is not working well, and I have no well thought out opinion about whether it is or isn’t, the kind of simple-minded critique that my friend was making seemed to me to be genuinely cringe-worthy. Here is a Ph.D. in physics making an argument that sounded as if it were coming straight out of the mouth of Sean Hannity. OMG! The dumbing down of America is being expertly engineered by Fox News, and, boy, are they succeeding. Geniuses, that’s what they are. Geniuses!

When I took my first economics course almost a half century ago and read the greatest economics textbook ever written, University Economics by Armen Alchian and William Allen, I was blown away by their ability to show how much sloppy and muddled thinking there was about how markets work, and how controls that prevent prices from allocating resources don’t eliminate destructive or wasteful competition, but rather shift competition from relatively cheap modes, like offering to pay a higher price or to accept a lower price, to relatively costly ones, like waiting in line or lobbying a regulator to gain access to a politically determined allocation system.

I have been a fan of free markets ever since. I oppose government intervention in the economy as a default position. But the lazy thinking that once led people to assume that government regulation is the cure for all problems now leads people to assume that government regulation is the cause of all problems. What a difference half a century makes.

So You Don’t Think the Stock Market Cares Who Wins the Election — Think Again UPDATE

UPDATE (October 30, 9:22pm EDT): Commenter BJH correctly finds a basic flaw in my little attempt to infer some causation between Trump’s effect on the peso, the peso’s correlation with the S&P 500, and Trump’s effect on the stock market. The correlation cannot bear the weight I put on it. See my reply to BJH below.

The little swoon in the stock markets on Friday afternoon after FBI Director James Comey announced that the FBI was again investigating Hillary Clinton’s emails coincided with a sharp drop in the Mexican peso, whose value is widely assumed to be a market barometer of the likelihood of Trump’s victory. A lot of people have wondered why the stock market has not evidenced much concern about the prospect of a Trump presidency, notwithstanding his surprising success at, and in, the polls. After all, the market recovered from a rough start at the beginning of 2016 even as Trump was racking up victory after victory over his competitors for the Republican presidential nomination. And even after Trump’s capture of the Republican nomination was seen as inevitable, and even though many people did start to panic, the stock markets have been behaving as if they were under heavy sedation.

So I thought that I would do a little checking on how the market has been behaving since April, when it had become clear that, barring a miracle, Trump was going to be the Republican nominee for President. Here is a chart showing the movements in the S&P 500 and in the dollar value of the Mexican peso since April 1 (normalized at their April 1 values). The stability of the two indexes is evident. The difference between the high and low values of the S&P 500 has been less than 7 percent; the peso has fluctuated more than the S&P 500, presumably because of Mexico’s extreme vulnerability to Trumpian policies, but the difference between the high and low values of the peso has been only about 12 percent.

trump_1

But what happens when you look at the daily changes in the S&P 500 and in the peso? Looking at the changes, rather than the levels, can help identify what is actually moving the markets. Taking the logarithms of the S&P 500 and of the peso (measured in cents) and calculating the daily changes in the logarithms gives the daily percentage changes in the two series. The next chart plots the daily percentage changes in the S&P 500 and the peso since April 4. The chart looks pretty remarkable to me; the correlation between changes in the peso and changes in the S&P 500 is striking.

trump_2
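For readers who want to replicate the two transformations described above, here is a minimal sketch in Python (I did the actual calculations in Excel; the series names and the few placeholder values below are hypothetical, so substitute real daily closes to reproduce the charts):

```python
import numpy as np
import pandas as pd

# sp500 and peso are assumed to be pandas Series of daily closes,
# indexed by date: the S&P 500 level and the peso price in U.S. cents.
# (Placeholder values below, for illustration only.)
sp500 = pd.Series([2072.8, 2066.1, 2059.7, 2066.7, 2041.9])
peso = pd.Series([5.81, 5.78, 5.74, 5.76, 5.66])

# First chart: each series normalized at its initial (April 1) value.
sp500_norm = sp500 / sp500.iloc[0]
peso_norm = peso / peso.iloc[0]

# Second chart: daily percentage changes as first differences of logs.
d_sp500 = np.log(sp500).diff().dropna()
d_peso = np.log(peso).diff().dropna()

print(d_sp500.corr(d_peso))  # correlation of the daily changes
```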

A quick regression analysis in Excel produces the following result:

∆S&P = 0.0002 + 0.5∆peso, R² = 0.557,

where ∆S&P is the daily percentage change in the S&P 500 and ∆peso is the daily percentage change in the dollar value of the peso. The t-value on the peso coefficient is 13.5, which, in a regression with only 147 observations, is an unusually high level of statistical significance.
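The same regression is easy to run outside of Excel. Here is a sketch using statsmodels, assuming the d_sp500 and d_peso series from the sketch above (computed over the full April–October sample rather than the placeholder rows):

```python
import statsmodels.api as sm

# OLS of daily S&P 500 log changes on daily peso log changes.
# d_sp500 and d_peso are the series constructed in the previous sketch.
X = sm.add_constant(d_peso)          # adds the intercept column
results = sm.OLS(d_sp500, X).fit()

print(results.params)    # the post reports an intercept ≈ 0.0002, slope ≈ 0.5
print(results.rsquared)  # the post reports R² ≈ 0.557
print(results.tvalues)   # the post reports t ≈ 13.5 on the peso coefficient
```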

This result says that almost 56% of the observed daily variation in the S&P 500 between April 4 and October 28 of 2016 is accounted for by the observed daily variation in the peso. To be clear, the result doesn’t mean that there is any causal relationship between changes in the value of the peso and changes in the S&P 500. Correlation does not establish causation. It could be that the regression is simply reflecting causal factors that are common to both the Mexican peso and the S&P 500 and affect both of them at the same time. Now it seems pretty obvious who or what has been the main causal factor affecting the value of the peso, so I leave it as an exercise for readers to identify what factor has been affecting the S&P 500 these past few months, and in which direction.

Rational Expectations, or, The Road to Incoherence

J. W. Mason left a very nice comment on my recent post about Paul Romer’s now-famous essay on macroeconomics, a comment now embedded in his interesting and insightful blog post on the Romer essay. As I wrote in my reply to Mason’s comment, I really liked the way he framed his point about rational expectations and intertemporal equilibrium. Sometimes when you see a familiar idea expressed in a particular way, the novelty of the expression, even though it’s not substantively different from other ways of expressing the idea, triggers a new insight. And that’s what I think happened in my own mind as I read Mason’s comment. Here’s what he wrote:

David Glasner’s interesting comment on Romer makes in passing a point that’s bugged me for years — that you can’t talk about transitions from one intertemporal equilibrium to another, there’s only the one. Or equivalently, you can’t have a model with rational expectations and then talk about what happens if there’s a “shock.” To say there is a shock in one period, is just to say that expectations in the previous period were wrong. Glasner:

the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

So the further point that I would make, after reading Mason’s comment, is just this. For an intertemporal equilibrium to exist, there must be a complete set of markets for all future periods and contingent states of the world, or, alternatively, there must be correct expectations shared by all agents about all future prices and the probability that each contingent future state of the world will be realized. By the way, if you think about it for a moment, the notion that probabilities can be assigned to every contingent future state of the world is mind-bogglingly unrealistic, because the number of contingent states must rapidly become uncountable, every single contingency itself giving rise to further potential contingencies, and so on and on and on. But forget about that little complication. What intertemporal equilibrium requires is that all expectations of all individuals be in agreement – or at least not be inconsistent, some agents possibly having an incomplete set of expectations about future prices and future states of the world. If individuals differ in their expectations, so that their planned future purchases and sales are based on differing beliefs about what prices will be when the time comes for those transactions to be carried out, then individuals will not be able to execute their plans as intended, because at least one of them will find that actual prices differ from those that had been expected.

What this means is that expectations can be rational only when everyone has identical expectations. If people have divergent expectations, then the expectations of at least some people will necessarily be disappointed — two people with differing expectations cannot both have their expectations realized — and those individuals whose expectations have been disappointed will have to revise their plans. But that means that even the expectations of those people who were correct were not rational, because the prices that they expected were not equilibrium prices. So unless all agents have the same expectations about the future, the expectations of no one are rational. Rational expectations are a fixed point, and that fixed point cannot be attained unless everyone shares those expectations.
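The fixed-point character of rational expectations can be illustrated with a deliberately tiny toy model (my own sketch, not any model in Mason’s post or mine): suppose the realized price depends on the shared expectation, p = a + b*E[p]. The only expectation that is not systematically disappointed is the one that reproduces itself, E[p] = p* = a/(1 - b).

```python
# Toy model: the realized price is p = a + b * expected_p.
# A rational expectation must be a fixed point: expecting p* yields p*.
a, b = 2.0, 0.5                  # made-up parameters, with |b| < 1
p_star = a / (1 - b)             # the unique self-fulfilling expectation

expectation = 10.0               # some initial, non-rational shared belief
for _ in range(30):
    realized = a + b * expectation   # price the shared expectation produces
    expectation = realized           # agents revise toward what they observed

print(p_star, round(expectation, 6))  # both 4.0: only p* survives revision
```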

Beyond that little problem, Mason raises the further point that, in a rational-expectations equilibrium, it makes no sense to speak of a shock, because the only possible meaning of “shock” in the context of a full intertemporal (aka rational-expectations) equilibrium is a failure of expectations to be realized. But if expectations are not realized, expectations were not rational. So the whole New Classical modeling strategy of identifying shocks to a system in rational-expectations equilibrium, and “predicting” the responses to those shocks as if they had been anticipated, is self-contradictory and incoherent.

Price Stickiness Is a Symptom not a Cause

In my recent post about Nick Rowe and the law of reflux, I mentioned in passing that I might write a post soon about price stickiness. The reason that I thought it would be worthwhile to write again about price stickiness (which I have written about before here and here) is that Nick, following a broad consensus among economists, identifies price stickiness as a critical cause of fluctuations in employment and income. Here’s how Nick phrased it:

An excess demand for land is observed in the land market. An excess demand for bonds is observed in the bond market. An excess demand for equities is observed in the equity market. An excess demand for money is observed in any market. If some prices adjust quickly enough to clear their market, but other prices are sticky so their markets don’t always clear, we may observe an excess demand for money as an excess supply of goods in those sticky-price markets, but the prices in flexible-price markets will still be affected by the excess demand for money.

Then a bit later, Nick continues:

If individuals want to save in the form of money, they won’t collectively be able to if the stock of money does not increase. There will be an excess demand for money in all the money markets, except those where the price of the non-money thing in that market is flexible and adjusts to clear that market. In the sticky-price markets there will be nothing an individual can do if he wants to buy more money but nobody else wants to sell more. But in those same sticky-price markets any individual can always sell less money, regardless of what any other individual wants to do. Nobody can stop you selling less money, if that’s what you want to do.

Unable to increase the flow of money into their portfolios, each individual reduces the flow of money out of his portfolio. Demand falls in sticky-price markets, quantity traded is determined by the short side of the market (Q=min{Qd,Qs}), so trade falls, and some trades that would be mutually advantageous in a barter or Walrasian economy even at those sticky prices don’t get made, and there’s a recession. Since money is used for trade, the demand for money depends on the volume of trade. When trade falls the flow of money falls too, and the stock demand for money falls, until the representative individual chooses a flow of money out of his portfolio equal to the flow in. He wants to increase the flow in, but cannot, since other individuals don’t want to increase their flows out.
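The short-side rule that Nick invokes, Q = min{Qd, Qs}, is simple enough to state in a few lines of code. Here is a minimal sketch (the linear demand and supply curves are my own made-up example, chosen only to show how a sticky, non-clearing price caps trade at the short side of the market):

```python
def quantity_traded(price, demand, supply):
    """Short-side rule: at a non-clearing price, realized trade is the
    lesser of quantity demanded and quantity supplied."""
    return min(demand(price), supply(price))

# Made-up linear curves; this market clears at p = 5 with Q = 50.
demand = lambda p: max(0.0, 100.0 - 10.0 * p)
supply = lambda p: max(0.0, 10.0 * p)

print(quantity_traded(5, demand, supply))  # 50.0: the market-clearing quantity
print(quantity_traded(7, demand, supply))  # 30.0: demand is the short side
print(quantity_traded(3, demand, supply))  # 30.0: supply is the short side
```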

The role of price stickiness or price rigidity in accounting for involuntary unemployment is an old and complicated story. If you go back and read what economists before Keynes had to say about the Great Depression, you will find that there was considerable agreement that, in principle, if workers were willing to accept a large enough cut in their wages, they could all get reemployed. That was a proposition accepted by Hawtrey and by Keynes. However, they did not believe that wage cutting was a good way of restoring full employment, because the process of wage cutting would be brutal economically and divisive – even self-destructive – politically. So they favored a policy of reflation that would facilitate and hasten the process of recovery. However, there were also those economists, e.g., Ludwig von Mises and the young Lionel Robbins in his book The Great Depression (which he had the good sense to disavow later in life), who attributed high unemployment to an unwillingness of workers and labor unions to accept wage cuts and to various other legal barriers preventing the price mechanism from operating to restore equilibrium in the normal way that prices adjust to equate the amount demanded with the amount supplied in each and every market.

But in the General Theory, Keynes argued that if you believed in the standard story told by microeconomics about how prices constantly adjust to equate demand and supply and maintain equilibrium, then maybe you should be consistent and follow the Mises/Robbins story and just wait for the price mechanism to perform its magic, rather than support counter-cyclical monetary and fiscal policies. So Keynes then argued that there is actually something wrong with the standard microeconomic story; price adjustments can’t ensure that overall economic equilibrium is restored, because the level of employment depends on aggregate demand, and if aggregate demand is insufficient, wage cutting won’t increase – and, more likely, would reduce — aggregate demand, so that no amount of wage-cutting would succeed in reducing unemployment.

To those upholding the idea that the price system is a stable self-regulating system or process for coordinating a decentralized market economy, in other words to those upholding microeconomic orthodoxy as developed in any of the various strands of the neoclassical paradigm, Keynes’s argument was deeply disturbing and subversive.

In one of the first of his many important publications, “Liquidity Preference and the Theory of Money and Interest,” Franco Modigliani argued that, despite Keynes’s attempt to prove that unemployment could persist even if prices and wages were perfectly flexible, the assumption of wage rigidity was in fact essential to arrive at Keynes’s result that there could be an equilibrium with involuntary unemployment. Modigliani did so by positing a model in which the supply of labor is a function of real wages. It was not hard for Modigliani to show that in such a model an equilibrium with unemployment required a rigid real wage.

Modigliani was not in favor of relying on price flexibility instead of counter-cyclical policy to solve the problem of involuntary unemployment; he just argued that the rationale for such policies had to be that prices and wages were not adjusting immediately to clear markets. But the inference that Modigliani drew from that analysis — that price flexibility would lead to an equilibrium with full employment — was not valid, there being no guarantee that price adjustments would necessarily lead to equilibrium, unless all prices and wages instantaneously adjusted to their new equilibrium in response to any deviation from a pre-existing equilibrium.

All the theory of general equilibrium tells us is that if all trading takes place at the equilibrium set of prices, the economy will be in equilibrium as long as the underlying “fundamentals” of the economy do not change. But in a decentralized economy, no one knows what the equilibrium prices are, and the equilibrium price in each market depends in principle on what the equilibrium prices are in every other market. So unless the price in every market is an equilibrium price, none of the markets is necessarily in equilibrium.

Now it may well be that if all prices are close to equilibrium, small changes will keep moving the economy closer and closer to equilibrium, so that the adjustment process will converge. But that is just conjecture; there is no proof showing the conditions under which a simple rule that says to raise the price in any market with an excess demand and to reduce the price in any market with an excess supply will in fact lead to the convergence of the whole system to equilibrium. Even in a Walrasian tatonnement system, in which no trading at disequilibrium prices is allowed, there is no proof that the adjustment process will eventually lead to the discovery of the equilibrium price vector. If trading at disequilibrium prices is allowed, tatonnement is hopeless.
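To see what such an adjustment rule looks like, and why its convergence is a property of the particular example rather than a theorem, here is a minimal single-market tatonnement sketch (the linear curves and step sizes are made up; in a many-market general-equilibrium setting even well-behaved examples of this rule can cycle, as Scarf famously showed):

```python
def tatonnement(price, step, rounds=60):
    """Raise price on excess demand, cut it on excess supply.
    Whether this converges depends on the curves and the step size."""
    demand = lambda p: 100.0 - 10.0 * p   # made-up linear demand
    supply = lambda p: 10.0 * p           # made-up linear supply
    for _ in range(rounds):
        excess = demand(price) - supply(price)
        price += step * excess
    return price

print(tatonnement(price=8.0, step=0.01))  # ~5.0: converges to the clearing price
print(tatonnement(price=8.0, step=0.15))  # explodes: same rule, bigger step
```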

So the real problem is not that prices are sticky but that trading takes place at disequilibrium prices and there is no mechanism by which to discover what the equilibrium prices are. Modern macroeconomics solves this problem, in its characteristic fashion, by assuming it away, insisting that expectations are “rational.”

Economists have allowed themselves to make this absurd assumption because they are in the habit of thinking that the simple rule of raising price when there is an excess demand and reducing the price when there is an excess supply inevitably causes convergence to equilibrium. This habitual way of thinking has been inculcated in economists by the intense, and largely beneficial, training they have been subjected to in Marshallian partial-equilibrium analysis, which is built on the assumption that every market can be analyzed in isolation from every other market. But that analytic approach can only be justified under a very restrictive set of assumptions. In particular it is assumed that any single market under consideration is small relative to the whole economy, so that its repercussions on other markets can be ignored, and that every other market is in equilibrium, so that there are no changes from other markets that are impinging on the equilibrium in the market under consideration.

Neither of these assumptions is strictly true in theory, so all partial-equilibrium analysis involves a certain amount of hand-waving. Nor, even if we wanted to be careful and precise, could we actually dispense with the hand-waving; the hand-waving is built into the analysis and can’t be avoided. I have often referred to these assumptions required for partial-equilibrium analysis — the bread-and-butter microeconomic analysis of Econ 101 — to be valid as the macroeconomic foundations of microeconomics, by which I mean that the casual assumption that microeconomics somehow has a privileged and secure theoretical position compared to macroeconomics, and that macroeconomic propositions are valid only insofar as they can be reduced to more basic microeconomic principles, is entirely unjustified. That doesn’t mean that we shouldn’t care about reconciling macroeconomics with microeconomics; it just means that the validity of a proposition in macroeconomics is not necessarily contingent on its being derived from microeconomics. Reducing macroeconomics to microeconomics should be an analytical challenge, not a methodological imperative.

So the assumption, derived from Modigliani’s 1944 paper, that “price stickiness” is what prevents an economic system from moving automatically to a new equilibrium after being subjected to some shock or disturbance reflects either a misunderstanding or a semantic confusion. It is not price stickiness that prevents the system from moving toward equilibrium; it is the fact that individuals are engaging in transactions at disequilibrium prices. We simply do not know how to compare different sets of non-equilibrium prices to determine which set will move the economy further from or closer to equilibrium. Our experience and our intuition suggest that in some neighborhood of equilibrium an economy can absorb moderate shocks without going into a cumulative contraction. But all we really know from theory is that trading at any set of non-equilibrium prices can trigger an economic contraction, and once a contraction starts to occur, it may become cumulative.

It is also a mistake to assume that in a world of incomplete markets, the missing markets being those for the future delivery of goods and the future provision of services, any set of price adjustments, however large, could by itself ensure that equilibrium is restored. With an incomplete set of markets, economic agents base their decisions not just on actual prices in existing markets, but also on the prices of future goods and services, which can only be guessed at. And it is only when individual expectations of those future prices are mutually consistent that equilibrium obtains. With inconsistent expectations of future prices, the adjustments in current prices that in some sense equate amounts demanded and supplied in the markets that do exist lead to a (temporary) equilibrium that is not efficient, one that may be associated with high unemployment and unused capacity even though, technically, existing markets are clearing.

So that’s why I regard the term “sticky prices” and other similar terms as very unhelpful and misleading; they are a kind of mental crutch that economists are too ready to rely on as a substitute for thinking about what are the actual causes of economic breakdowns, crises, recessions, and depressions. Most of all, they represent an uncritical transfer of partial-equilibrium microeconomic thinking to a problem that requires a system-wide macroeconomic approach. That approach should not ignore microeconomic reasoning, but it has to transcend both partial-equilibrium supply-demand analysis and the mathematics of intertemporal optimization.

Paul Romer on Modern Macroeconomics, Or, the “All Models Are False” Dodge

Paul Romer has been engaged for some time in a worthy campaign against the travesty of modern macroeconomics. A little over a year ago I commented favorably about Romer’s takedown of Robert Lucas, but I also defended George Stigler against what I thought was an unfair attempt by Romer to identify Stigler as an inspiration and role model for Lucas’s transgressions. Now, just a week ago, a paper based on Romer’s Commons Memorial Lecture to the Omicron Delta Epsilon Society has become just about the hottest item in the econ-blogosphere, even drawing the attention of Daniel Drezner in the Washington Post.

I have already written critically about modern macroeconomics in my five years of blogging, and here are some links to previous posts (link, link, link, link). It’s good to see that Romer is continuing to voice his criticisms, and that they are gaining a lot of attention. But the macroeconomic hierarchy is used to criticism, and has its standard responses ready, which are being dutifully deployed by defenders of the powers that be.

Romer’s most effective rhetorical strategy is to point out that the RBC core of modern DSGE models posits unobservable taste and technology shocks to account for fluctuations in the economic time series, but that these taste and technology shocks are themselves simply inferred from the fluctuations in the time-series data, so that the entire structure of modern macroeconometrics is little more than an elaborate and sophisticated exercise in question-begging.

In this post, I just want to highlight one of the favorite catch-phrases of modern macroeconomics which serves as a kind of default excuse and self-justification for the rampant empirical failures of modern macroeconomics (documented by Lipsey and Carlaw as I showed in this post). When confronted by evidence that the predictions of their models are wrong, the standard and almost comically self-confident response of the modern macroeconomists is: all models are false. By which the modern macroeconomists apparently mean something like: “And if they are all false anyway, you can’t hold us accountable, because any model can be proven wrong. What really matters is that our models, being micro-founded, are not subject to the Lucas Critique, and since all models other than ours are not micro-founded, and are therefore subject to the Lucas Critique, they are simply unworthy of consideration.” This is what I have called methodological arrogance. That response is simply not true, because the Lucas Critique applies even to micro-founded models, those models being strictly valid only in equilibrium settings and being unable to predict the adjustment of economies in the transition between equilibrium states. All models are subject to the Lucas Critique.

Here is Romer’s take:

In response to the observation that the shocks are imaginary, a standard defense invokes Milton Friedman’s (1953) methodological assertion from unnamed authority that “the more significant the theory, the more unrealistic the assumptions (p.14).” More recently, “all models are false” seems to have become the universal hand-wave for dismissing any fact that does not conform to the model that is the current favorite.

Friedman’s methodological assertion would have been correct had Friedman substituted “simple” for “unrealistic.” Sometimes simplifications are unrealistic, but they don’t have to be. A simplification is a generalization of something complicated. By simplifying, we can transform a problem that had been too complex to handle into a problem more easily analyzed. But such simplifications aren’t necessarily unrealistic. To say that all models are false is simply a dodge to avoid having to account for failure. The excuse, of course, is that all those other models are subject to the Lucas Critique, so my model wins. But your model is subject to the Lucas Critique even though you claim it’s not, so even according to the rules you have arbitrarily laid down, you don’t win.

So I was just curious about where the little phrase “all models are false” came from. I was expecting that Karl Popper might have said it, in which case to use the phrase as a defense mechanism against empirical refutation would have been a particularly fraudulent tactic, because it would have been a perversion of Popper’s methodological stance, which was to force our theoretical constructs to face up to, not to insulate them from, empirical testing. But when I googled “all theories are false” what I found was not Popper, but the British statistician G. E. P. Box, who wrote in his paper “Science and Statistics,” based on his R. A. Fisher Memorial Lecture to the American Statistical Association: “All models are wrong.” Here’s the exact quote:

Since all models are wrong the scientist cannot obtain a “correct” one by excessive elaboration. On the contrary following William of Occam he should seek an economical description of natural phenomena. Just as the ability to devise simple but evocative models is the signature of the great scientist so overelaboration and overparameterization is often the mark of mediocrity.

Since all models are wrong the scientist must be alert to what is importantly wrong. It is inappropriate to be concerned about mice when there are tigers abroad. Pure mathematics is concerned with propositions like “given that A is true, does B necessarily follow?” Since the statement is a conditional one, it has nothing whatsoever to do with the truth of A nor of the consequences B in relation to real life. The pure mathematician, acting in that capacity, need not, and perhaps should not, have any contact with practical matters at all.

In applying mathematics to subjects such as physics or statistics we make tentative assumptions about the real world which we know are false but which we believe may be useful nonetheless. The physicist knows that particles have mass and yet certain results, approximating what really happens, may be derived from the assumption that they do not. Equally, the statistician knows, for example, that in nature there never was a normal distribution, there never was a straight line, yet with normal and linear assumptions, known to be false, he can often derive results which match, to a useful approximation, those found in the real world. It follows that, although rigorous derivation of logical consequences is of great importance to statistics, such derivations are necessarily encapsulated in the knowledge that premise, and hence consequence, do not describe natural truth.

It follows that we cannot know that any statistical technique we develop is useful unless we use it. Major advances in science and in the science of statistics in particular, usually occur, therefore, as the result of the theory-practice iteration.

One of the most annoying conceits of modern macroeconomists is the constant self-congratulatory references to themselves as scientists because of their ostentatious use of axiomatic reasoning, formal proofs, and higher mathematical techniques. The tiresome self-congratulation might get toned down ever so slightly if they bothered to read and take to heart Box’s lecture.

Putin v. Obama: It’s the Economy, Stupid

A couple of days ago, Daniel Drezner wrote an op-ed for the Washington Post commenting on a statement made by one of the candidates in the recent televised forum on national security.

Last week at a televised presidential forum on national security, Donald Trump continued his pattern of praising Russian President Vladimir Putin. In particular, Trump said the following:

I mean, the man has very strong control over a country. And that’s a very different system and I don’t happen to like the system. But certainly in that system he’s been a leader, far more than our president has been a leader. We have a divided country.

As my Post colleague David Weigel notes, this is simply Trump’s latest slathering of praise onto the Russian strongman:

Trump goes further than many Republicans. In his telling, Putin — a “strong leader” — epitomizes how any serious president should position his country in the world. Knowingly or not, Trump builds on years of wistful, sometimes ironic praise of Putin as a swaggering, bare-chested autocrat.

After the forum, his running mate, Mike Pence, who used to be more critical of Putin, doubled down on Trump’s claim:

Pence walked that line back a little Sunday, suggesting that he was trying to indict the “weak and feckless leadership” of President Obama — but you get the point.

Well, if we are going to compare the leadership of Putin and Obama, why not compare them by measuring what people really care about? After all, don’t we all know that “it’s the economy, stupid”?

So let’s see how what Putin’s leadership has done for Russia compares with what Obama’s leadership has done for the US. We all know that the last eight years under Obama have not been the greatest, but if it’s Putin that Obama is being compared to, we ought to check how Putin’s “very strong” leadership has worked out for the Russian economy and how Obama’s “weak and feckless leadership” has worked out for the US economy.

Here’s a little graph comparing US and Russian GDP between 2008 and 2015. To make the comparison on a level playing field, I have normalized GDP for both countries at 1.0 in 2008.

putin_v_obama

Looks to me like Obama wins that leadership contest pretty handily. And it’s not getting any better for Putin in 2016: the Russian economy continues to contract, while the US economy continues to expand, albeit slowly.

So chalk one up for the home team.

USA! USA!


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
