Archive for September, 2015

The Neoclassical Synthesis and the Mind-Body Problem

The neoclassical synthesis that emerged in the early postwar period aimed at reconciling the macroeconomic (IS-LM) analysis derived from Keynes via Hicks and others with the neoclassical microeconomic analysis of general equilibrium derived from Walras. The macroeconomic analysis was focused on an equilibrium of income and expenditure flows, while the Walrasian analysis was focused on the equilibrium between supply and demand in individual markets. The two types of analysis seemed to be incommensurate inasmuch as the conditions for equilibrium in the two analyses did not seem to match up with each other. How does an analysis focused on the equality of aggregate flows of income and expenditure get translated into an analysis focused on the equality of supply and demand in individual markets? The two languages seem to be different, so it is not obvious how a statement formulated in one language gets translated into the other. And even if a translation is possible, does the translation hold under all, or only under some, conditions? And if so, what are those conditions?

The original neoclassical synthesis did not aim to provide a definitive answer to those questions, but it was understood to assert that if the equality of income and expenditure was assured at a level consistent with full employment, one could safely assume that market forces would take care of the allocation of resources, so that markets would be cleared and the conditions of microeconomic general equilibrium satisfied, at least as a first approximation. This version of the neoclassical synthesis was obviously an ad hoc and unsatisfactory resolution of the incommensurability of the two levels of analysis. Don Patinkin sought to provide a rigorous reconciliation of the two levels of analysis in his treatise Money, Interest and Prices. But for all its virtues – and they are numerous – Patinkin's treatise failed to bridge the gap between the two levels of analysis.

As I mentioned recently in a post on Romer and Lucas, Kenneth Arrow in a 1967 review of Samuelson’s Collected Works commented disparagingly on the neoclassical synthesis of which Samuelson was a leading proponent. The widely shared dissatisfaction expressed by Arrow motivated much of the work that soon followed on the microfoundations of macroeconomics exemplified in the famous 1970 Phelps volume. But the motivation for the search for microfoundations was then (before the rational expectations revolution) to specify the crucial deviations from the assumptions underlying the standard Walrasian general-equilibrium model that would generate actual or seeming price rigidities, which a straightforward – some might say superficial — understanding of neoclassical microeconomic theory suggested were necessary to explain why, after a macro-disturbance, equilibrium was not rapidly restored by price adjustments. Two sorts of explanations emerged from the early microfoundations literature: a) search and matching theories assuming that workers and employers must expend time and resources to find appropriate matches; b) institutional theories of efficiency wages or implicit contracts that explain why employers and workers prefer layoffs to wage cuts in response to negative demand shocks.

Forty years on, the search and matching theories do not seem capable of accounting for the magnitude of observed fluctuations in employment or the cyclical variation in layoffs, and the institutional theories are still difficult to reconcile with the standard neoclassical assumptions, remaining an ad hoc appendage to New Keynesian models that otherwise adhere to the neoclassical paradigm. Thus, the original neoclassical synthesis, in which the Keynesian income-expenditure model was seen as a pre-condition for the validity of the neoclassical model, was rejected within a decade of Arrow's dismissive comment. Yet, as Tom Sargent has observed in a recent review of Robert Lucas's Collected Papers on Monetary Theory, Lucas has implicitly adopted a new version of the neoclassical synthesis, one dominated by an intertemporal neoclassical general-equilibrium model, but with the proviso that substantial shocks to aggregate demand and the price level are prevented by monetary policy, thereby making the neoclassical model a reasonable approximation to reality.

Ok, so you are probably asking: what does all this have to do with the mind-body problem? A lot, I think, in that both the neoclassical synthesis and the mind-body problem involve a disconnect between two kinds – two levels – of explanation. The neoclassical synthesis asserts some sort of connection – but a problematic one – between the explanatory apparatus – macroeconomics – used to understand the cyclical fluctuations of what we are used to thinking of as the aggregate economy and the explanatory apparatus – microeconomics – used to understand the constituent elements of the aggregate economy – households and firms – and how those elements are related to, and interact with, each other.

The mind-body problem concerns the relationship between the mental – our direct experience of a conscious inner life of thoughts, emotions, memories, decisions, hopes and regrets – and the physical – matter, atoms, neurons. A basic postulate of science is that all phenomena have material causes. So the existence of conscious states that seem to us, by way of our direct experience, to be independent of material causes is also highly problematic. There are a few strategies for handling the problem. One is to assert that the mind truly is independent of the body, which is to say that consciousness is not the result of physical causes. A second is to say that mind is not independent of the body; we just don't understand the nature of the relationship. There are two possible versions of this strategy: a) although the nature of the relationship is unknown to us now, advances in neuroscience could reveal to us the way in which consciousness is caused by the operation of the brain; b) although our minds are somehow related to the operation of our brains, the nature of this relationship is beyond the capacity of our minds or brains to comprehend, owing to considerations analogous to Gödel's incompleteness theorem (a view espoused by the philosopher Colin McGinn among others); in other words, the mind-body problem is inherently beyond human understanding. And the third strategy is to deny the independent existence of consciousness: a conscious state is identical with the physical state of a brain, so that consciousness is just an epiphenomenon of a brain state. We in our naivete may think that our conscious states have a separate existence, but those states are strictly identical with corresponding brain states, so that whatever conscious state we think we are experiencing has been entirely produced by the physical forces that determine the behavior of our brains and the configuration of their physical constituents.

The first, and probably the last, thing that one needs to understand about the third strategy is that, as explained by Colin McGinn (see e.g., here), its validity has not been demonstrated by neuroscience or by any other branch of science; it is, no less than any of the other strategies, strictly a metaphysical position. The mind-body problem is a problem precisely because science has not even come close to demonstrating how mental states are caused by, let alone that they are identical to, brain states, despite some spurious misinterpretations of research that purport to show such an identity.

Analogous to the scientific principle that all phenomena have material or physical causes, there is in economics and social science a principle called methodological individualism, which roughly states that explanations of social outcomes should be derived from theories about the conduct of individuals, not from theories about abstract social entities that exist independently of their constituent elements. The underlying motivation for methodological individualism (as opposed to political individualism, with which it is related but from which it is distinct) was to counter certain ideas popular in the nineteenth and twentieth centuries asserting the existence of metaphysical social entities like "history" that are somehow distinct from, yet impinge upon, individual human beings, and asserting that there are laws of history or social development from which future states of the world can be predicted, as Hegel, Marx and others tried to do. This notion gave rise to two famous books by Popper: The Open Society and Its Enemies and The Poverty of Historicism. Methodological individualism as articulated by Popper was thus primarily an attack on the attribution of special powers to determine the course of future events to abstract metaphysical or mystical entities like history or society that are supposedly things or beings in themselves distinct from the individual human beings of which they are constituted. Methodological individualism does not deny the existence of collective entities like society; it simply denies that such collective entities exist as objective facts that can be observed as such. Our apprehension of these entities must be built up from more basic elements — individuals and their plans, beliefs and expectations — that we can apprehend directly.

However, methodological individualism is not the same as reductionism; methodological individualism teaches us to look for explanations of higher-level phenomena, e.g., a pattern of social relationships like the business cycle, in terms of the basic constituents forming the pattern: households, business firms, banks, central banks and governments. It does not assert identity between the pattern of relationships and the constituent elements; it says that the pattern can be understood in terms of interactions between the elements. Thus, a methodologically individualistic explanation of the business cycle in terms of the interactions between agents – households, businesses, etc. — would be analogous to an explanation of consciousness in terms of the brain if an explanation of consciousness existed. A methodologically individualistic explanation of the business cycle would not be analogous to an assertion that consciousness exists only as an epiphenomenon of brain states. The assertion that consciousness is nothing but the epiphenomenon of a corresponding brain state is reductionist; it asserts an identity between consciousness and brain states without explaining how consciousness is caused by brain states.

In business-cycle theory, the analogue of such a reductionist assertion of identity between higher-level and lower level phenomena is the assertion that the business cycle is not the product of the interaction of individual agents, but is simply the optimal plan of a representative agent. On this account, the business cycle becomes an epiphenomenon; apparent fluctuations being nothing more than the optimal choices of the representative agent. Of course, everyone knows that the representative agent is merely a convenient modeling device in terms of which a business-cycle theorist tries to account for the observed fluctuations. But that is precisely the point. The whole exercise is a sham; the representative agent is an as-if device that does not ground business-cycle fluctuations in the conduct of individual agents and their interactions, but simply asserts an identity between those interactions and the supposed decisions of the fictitious representative agent. The optimality conditions in terms of which the model is solved completely disregard the interactions between individuals that might cause an unintended pattern of relationships between those individuals. The distinctive feature of methodological individualism is precisely the idea that the interactions between individuals can lead to unintended consequences; it is by way of those unintended consequences that a higher-level pattern might emerge from interactions among individuals. And those individual interactions are exactly what is suppressed by representative-agent models.

So the notion that any analysis premised on a representative agent provides microfoundations for macroeconomic theory seems to be a travesty built on a total misunderstanding of the principle of methodological individualism that it purports to affirm.

All New Classical Models Are Subject to the Lucas Critique

Almost 40 years ago, Robert Lucas made a huge, but not quite original, contribution when he provided a very compelling example of how the predictions of the then-standard macroeconometric models used for policy analysis were inherently vulnerable to shifts in the empirically estimated parameters contained in the models, shifts induced by the very policy change under consideration. Insofar as those models could provide reliable forecasts of the future course of the economy, it was because the policy environment under which the parameters of the model had been estimated was not changing during the time period for which the forecasts were made. But any forecast deduced from the model conditioned on a policy change would necessarily be inaccurate, because the policy change itself would cause the agents in the model to alter their expectations, causing the parameters of the model to diverge from their previously estimated values. Lucas concluded that only models based on deep parameters reflecting the underlying tastes, technology, and resource constraints under which agents make decisions could provide a reliable basis for policy analysis.
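To see the mechanism in miniature, here is a purely illustrative simulation (the model, the parameter values, and all the names are hypothetical; this is a sketch of the general point, not Lucas's own example). The coefficient the econometrician estimates is a mixture of a deep parameter and the policy rule, so it shifts as soon as the rule changes.

```python
# Illustrative sketch of the Lucas critique (hypothetical numbers throughout).
#
# Structural relationship: y_t = alpha + gamma * E[x_t | x_{t-1}] + e_t
# Policy rule:             x_t = rho * x_{t-1} + u_t
# Under rational expectations E[x_t | x_{t-1}] = rho * x_{t-1}, so the
# reduced form is y_t = alpha + (gamma * rho) * x_{t-1} + e_t: the slope an
# econometrician estimates is gamma * rho, not the deep parameter gamma.

import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 1.0, 2.0   # "deep" parameters, unchanged across policy regimes

def simulate(rho, n=5000):
    """Simulate the economy under the policy rule x_t = rho * x_{t-1} + u_t."""
    x, y = np.zeros(n), np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.normal(scale=1.0)
        expected_x = rho * x[t - 1]          # agents' rational expectation of x_t
        y[t] = alpha + gamma * expected_x + rng.normal(scale=0.5)
    return x, y

def estimated_slope(x, y):
    """OLS slope from regressing y_t on x_{t-1}: the 'parameter' of the model."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    coeffs, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return coeffs[1]

x_a, y_a = simulate(rho=0.9)   # regime A: persistent policy
x_b, y_b = simulate(rho=0.3)   # regime B: less persistent policy
print("slope under rho=0.9:", round(estimated_slope(x_a, y_a), 2))  # about 1.8
print("slope under rho=0.3:", round(estimated_slope(x_b, y_b), 2))  # about 0.6
# A forecast for regime B built on the regime-A slope is systematically wrong,
# even though the deep parameters alpha and gamma never changed.
```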

The Lucas critique undoubtedly conveyed an important insight about how to use econometric models in analyzing the effects of policy changes, and if it did no more than cause economists to be more cautious in offering policy advice based on their econometric models and policy makers to be more skeptical about the advice they got from economists using such models, the Lucas critique would have performed a very valuable public service. Unfortunately, the lesson that the economics profession learned from the Lucas critique went far beyond that useful warning about the reliability of conditional forecasts potentially sensitive to unstable parameter estimates. In an earlier post, I discussed another way in which the Lucas Critique has been misapplied. (One responsible way to deal with unstable parameter estimates would be to make forecasts showing a range of plausible outcomes depending on how parameter estimates might change as a result of the policy change. Such an approach is inherently messy, and, at least in the short run, would tend to make policy makers less likely to pay attention to the policy advice of economists. But the inherent sensitivity of forecasts to unstable model parameters ought to make one skeptical about the predictions derived from any econometric model.)

Instead, the Lucas critique was used by Lucas and his followers as a tool by which to advance a reductionist agenda of transforming macroeconomics into a narrow slice of microeconomics, the slice being applied general-equilibrium theory in which the models required drastic simplification before they could generate quantitative predictions. The key to deriving quantitative results from these models is to find an optimal intertemporal allocation of resources given the specified tastes, technology and resource constraints, which is typically done by describing the model in terms of an optimizing representative agent with a utility function, a production function, and a resource endowment. A kind of hand-waving is performed via the rational-expectations assumption, thereby allowing the optimal intertemporal allocation of the representative agent to be identified as a composite of the mutually compatible optimal plans of a set of decentralized agents, the hand-waving being motivated by the Arrow-Debreu welfare theorems proving that any Pareto-optimal allocation can be sustained by a corresponding equilibrium price vector. Under rational expectations, agents correctly anticipate future equilibrium prices, so that market-clearing prices in the current period are consistent with full intertemporal equilibrium.
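For concreteness, the optimization at the heart of such models typically looks something like the following generic one-sector planner problem (a textbook sketch offered for illustration, not any particular model of Lucas's): the representative agent chooses paths for consumption and capital to solve

$$\max_{\{c_t,\,k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t u(c_t) \quad \text{subject to} \quad c_t + k_{t+1} = f(k_t, z_t) + (1-\delta) k_t,$$

where $z_t$ is a technology shock, yielding the familiar Euler condition

$$u'(c_t) = \beta\, E_t\big[u'(c_{t+1})\big(f_k(k_{t+1}, z_{t+1}) + 1 - \delta\big)\big].$$

The welfare theorems are then invoked to read this single-agent optimum as a decentralized competitive equilibrium, with the Euler condition implicitly defining the supporting intertemporal prices.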

What is amazing – mind-boggling might be a more apt adjective – is that this modeling strategy is held by Lucas and his followers to be invulnerable to the Lucas critique, being based supposedly on deep parameters reflecting nothing other than tastes, technology and resource endowments. The first point to make – there are many others, but we needn't exhaust the list – is that it is borderline pathological to convert a valid and important warning about how economic models may be subject to misunderstanding or misuse into a weapon with which to demolish any model susceptible of such misunderstanding or misuse, as a prelude to replacing those models with the class of reductionist micromodels that now pass for macroeconomics.

But there is a second point to make, which is that the reductionist models adopted by Lucas and his followers are no less vulnerable to the Lucas critique than the models they replaced. All the New Classical models are explicitly conditioned on the assumption of optimality. It is only by positing an optimal solution for the representative agent that the equilibrium price vector can be inferred. The deep parameters of the model are conditioned on the assumption of optimality and the existence of an equilibrium price vector supporting that equilibrium. If the equilibrium does not obtain – the optimal plans of the individual agents or the fantastical representative agent becoming incapable of execution – empirical estimates of the parameters of the model cannot correspond to the equilibrium values implied by the model itself. Parameter estimates are therefore sensitive to how closely the economic environment in which the parameters were estimated corresponded to conditions of equilibrium. If the conditions under which the parameters were estimated more nearly approximated the conditions of equilibrium than the period in which the model is being used to make conditional forecasts, those forecasts, from the point of view of the underlying equilibrium model, must be inaccurate. The Lucas critique devours its own offspring.

Scott Sumner Defends EMH

Last week I wrote about the sudden increase in stock market volatility as an illustration of why the efficient market hypothesis (EMH) is not entirely accurate. I focused on the empirical argument made by Robert Shiller that the observed volatility of stock prices is greater than the volatility implied by the proposition that stock prices reflect rational expectations of future dividends paid out by the listed corporations. I made two further points about EMH: a) empirical evidence cited in favor of EMH like the absence of simple trading rules that would generate excess profits and the lack of serial correlation in the returns earned by asset managers is also consistent with theories of asset pricing other than EMH such as Keynes’s casino (beauty contest) model, and b) the distinction between fundamentals and expectations that underlies the EMH model is not valid because expectations are themselves fundamental owing to the potential for expectations to be self-fulfilling.
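Shiller's variance-bound argument can be summarized in a single inequality (a standard textbook rendering, not a quotation from Shiller). Let $p_t^*$ denote the ex post "rational" price, i.e., the discounted present value of the dividends actually paid out after date $t$. If the market price is the rational expectation of that value, $p_t = E_t[p_t^*]$, then $p_t^* = p_t + u_t$, where the forecast error $u_t$ is uncorrelated with $p_t$, so that

$$\operatorname{Var}(p_t^*) = \operatorname{Var}(p_t) + \operatorname{Var}(u_t) \;\ge\; \operatorname{Var}(p_t).$$

Shiller's finding was that observed stock prices appear to violate this bound, fluctuating far more than the ex post present value of dividends.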

Scott responded to my criticism by referencing two of his earlier posts — one criticizing the Keynesian beauty contest model, and another criticizing the Keynesian argument that the market can stay irrational longer than any trader seeking to exploit such irrationality can stay solvent – and by writing a new post describing what he called the self-awareness of markets.

Let me begin with Scott’s criticism of the beauty-contest model. I do so by registering my agreement with Scott that the beauty contest model is not a good description of how stocks are typically priced. As I have said, I don’t view EMH as being radically wrong, and in much applied work (including some of my own) it is an extremely useful assumption to make. But EMH describes a kind of equilibrium condition, and not all economic processes can be characterized or approximated by equilibrium conditions.

Perhaps the chief contribution of recent Austrian economics has been to explain how all entrepreneurial activity aims at exploiting latent disequilibrium relationships in the price system. We have no theoretical or empirical basis for assuming that substantial deviations of prices — whether for assets or for services, and whether prices are determined in auction markets or in imperfectly competitive markets — from their equilibrium values are immediately or even quickly eliminated. (Let me note parenthetically that vulgar Austrians who deny that prices voluntarily agreed upon are ever different from equilibrium values thereby undermine the Austrian theory of entrepreneurship based on the equilibrating activity of entrepreneurs, which is the source of the profits they earn. The profits earned are ipso facto evidence of disequilibrium pricing. Austrians can't have it both ways.)

So my disagreement with Scott about the beauty-contest theory of stock prices as an alternative to EMH is relatively small. My main reason for mentioning the beauty-contest theory was not to advocate it but to point out that the sort of empirical evidence that Scott cites in support of EMH is also consistent with the beauty-contest theory. As Scott himself emphasizes, it's not easy to predict whom the judges will choose as the winner of the beauty contest. And Keynes also used a casino metaphor to describe stock pricing in the same chapter (12) of the General Theory in which he developed the beauty-contest analogy. However, there do seem to be times when prices rise or fall for extended periods, and enough people, observing the trend and guessing that it will continue long enough for them to rely on its continuation in making investment decisions, keep the trend going despite underlying forces that eventually cause a price collapse.

Let’s turn to Scott’s post about the ability of the market to stay irrational longer than any individual trader can stay solvent.

The markets can stay irrational for longer than you can stay solvent.

Thus people who felt that tech stocks were overvalued in 1996, or American real estate was overvalued in 2003, and who shorted tech stocks or MBSs, might go bankrupt before their accurate predictions were finally vindicated.

There are lots of problems with this argument. First of all, it’s not clear that stocks were overvalued in 1996, or that real estate was overvalued in 2003. Lots of people who made those claims later claimed that subsequent events had proven them correct, but it’s not obvious why they were justified in making this claim. If you claim X is overvalued at time t, is it vindication if X later rises much higher, and then falls back to the levels of time t?

I agree with Scott that the argument is problematic; it is almost impossible to specify when a suspected bubble is really a bubble. However, I don’t think that Scott fully comes to terms with the argument. The argument doesn’t depend on the time lag between the beginning of the run-up and the peak; it depends on the unwillingness of most speculators to buck a trend when there is no clear terminal point to the run-up. Scott continues:

The first thing to note is that the term ‘bubble’ implies asset mis-pricing that is easily observable. A positive bubble is when asset prices are clearly irrationally high, and a negative bubble is when asset price are clearly irrationally low. If these bubbles existed, then investors could earn excess returns in a highly diversified contra-bubble fund. At any given time there are many assets that pundits think are overpriced, and many others that are seen as underpriced. These asset classes include stocks, bonds, foreign exchange, REITs, commodities, etc. And even within stocks there are many different sectors, biotech might be booming while oil is plunging. And then you have dozens of markets around the world that respond to local factors. So if you think QE has led Japanese equity prices to be overvalued, and tight money has led Swiss stocks to be undervalued, the fund could take appropriate short positions in Japanese stocks and long positions in Swiss stocks.

A highly diversified mutual fund that takes advantage of bubble mis-pricing should clearly outperform other investments, such as index funds. Or at least it should if the EMH is not true. I happen to think the EMH is true, or at least roughly true, and hence I don’t actually expect to see the average contra-bubble fund do well. (Of course individual funds may do better or worse than average.)

I think that Scott is conflating a couple of questions here: a) is EMH a valid theory of asset prices? b) are asset prices frequently characterized by bubble-like behavior? Even if the answer to b) is no, the answer to a) need not be yes. Investors may be able, by identifying mis-priced assets, to earn excess returns even if the mis-pricing doesn't meet a threshold level required for identifying a bubble. But the main point that Scott is making is that if there are a lot of examples of mis-pricing out there, it should be possible for astute investors capable of identifying mis-priced assets to diversify their portfolios sufficiently to avoid the problem of having to stay solvent for longer than the market stays irrational.

That is a very good point, worth taking into account. But it's not dispositive, and it certainly doesn't dispose of the objection that investors are unlikely to try to bet against a bubble, at least not in sufficient numbers to keep it from expanding. The reason is that the absence of proof is not proof of absence. That of course is a legal, not a scientific, principle, but it expresses a valid common-sense notion: you can't make an evidentiary inference that something is not the case simply because you have not found evidence that it is the case. So you can't infer from the non-implementation of the plausible investment strategies listed by Scott that such strategies would not have generated excess returns had they been implemented. We simply don't know whether they would be profitable or not.

In his new post Scott makes the following observation about what I had written in my post on excess volatility.

David Glasner seems to feel that it’s not rational for consumers to change their views on the economy after a stock crash. I will argue the reverse, that rationality requires them to do so. First, here’s David:

This seems an odd interpretation of what I had written because in the passage quoted by Scott I wrote the following:

I may hold a very optimistic view about the state of the economy today. But suppose that I wake up tomorrow and hear that the Shanghai stock market crashes, going down by 30% in one day. Will my expectations be completely independent of my observation of falling asset prices in China? Maybe, but what if I hear that S&P futures are down by 10%? If other people start revising their expectations, will it not become rational for me to change my own expectations at some point? How can it not be rational for me to change my expectations if I see that everyone else is changing theirs?

So, like Scott, I am saying that it is rational for people to revise their expectations based on new information that there has been a stock crash. I guess what Scott meant to say is that my argument, while valid, is not an argument against EMH, because the scenario I am describing is consistent with EMH. But that is not the case. Scott goes on to provide his own example.

All citizens are told there’s a jar with lots of jellybeans locked away in a room. That’s all they know. The average citizen guesstimates there are 453 jellybeans in this mysterious jar. Now 10,000 citizens are allowed in to look at the jar. They each guess the contents, and their average guess is 761 jellybeans. This information is reported to the other citizens. They revise their estimate accordingly.

But there's a difference between my example and Scott's. In my example, the future course of the economy depends on whether people are optimistic or pessimistic. In Scott's example, the number of jellybeans in the jar is what it is regardless of what people expect it to be. The problem with EMH is that it presumes that there is some criterion of efficiency that is independent of expectations, just as in Scott's example there is an objective fact (the number of jellybeans in the jar) that exists independently of anyone's expectations. I claim that there is no criterion of market efficiency that is independent of expectations, even though some expectations may produce better outcomes than those produced by other expectations.

More Economic Prejudice and High-Minded Sloganeering

I wasn’t planning to post today, but I just saw (courtesy of the New York Times) a classic example of the economic prejudice wrapped in high-minded sloganeering that I talked about yesterday. David Rocker, founder and former managing general partner of the hedge fund Rocker Partners, proclaims that he is in favor of a free market.

The worldwide turbulence of recent days is a strong indication that government intervention alone cannot restore the economy and offers a glimpse of the risk of completely depending on it. It is time to give the free market a chance. Since the crash of 2008, governments have tried to stimulate their economies by a variety of means but have relied heavily on manipulating interest rates lower through one form or other of quantitative easing or simply printing money. The immediate rescue of the collapsing economy was necessary at the time, but the manipulation has now gone on for nearly seven years and has produced many unwanted consequences.

In what sense is the market less free than it was before the crash of 2008? It's not as if the Fed before 2008 wasn't doing the sorts of things that are so upsetting to Mr. Rocker now. The Fed was setting an interest rate target for short-term rates and it was conducting open market purchases (printing money) to ensure that its target was achieved. There are to be sure some people, like, say, Ron Paul, who regard such action by the Fed as an intolerable example of government intervention in the market, but it's not something that, as Mr. Rocker suggests, the Fed just started to do after 2008. And at a deeper level, there is a very basic difference between the Fed targeting an interest rate by engaging in open-market operations (repeat open-market operations) and imposing price controls that prevent transactors from engaging in transactions on mutually agreeable terms. Aside from libertarian ideologues, most people are capable of understanding the difference between monetary policy and government interference with the free market.

So what really bothers Mr. Rocker is not the absence of a free market, but that he disagrees with the policy that the Fed is implementing. He has every right to disagree with the policy, but it is misleading to suggest that he is the one defending the free market against the Fed's intervention into an otherwise free market.

When Mr. Rocker tries to explain what's wrong with the Fed's policy, his explanations continue to reflect prejudices expressed in high-minded sloganeering. First he plays the income-inequality card.

The Federal Reserve, waiting for signs of inflation to change its policies, seems to be looking at the wrong data. . . .

Low interest rates have hugely lifted assets largely owned by the very rich, and inflation in these areas is clearly apparent. Stocks have tripled and real estate prices in the major cities where the wealthy live have been soaring, as have the prices of artwork and the conspicuous consumption of luxury goods.

Now it may be true that certain assets like real estate in Manhattan and San Francisco, works of art, and yachts have been rising rapidly in price, but there is no meaningful price index in which these assets account for a large enough share of purchases to generate significant inflation. So this claim by Mr. Rocker is just an empty rhetorical gesture to show how good-hearted he is and how callous and unfeeling Janet Yellen and her ilk are. He goes on.

Cheap financing has led to a boom in speculative activity, and mergers and acquisitions. Most acquisitions are justified by “efficiencies” which is usually a euphemism for layoffs. Valeant Pharmaceuticals International, one of the nation’s most active acquirers, routinely fires “redundant” workers after each acquisition to enhance reported earnings. This elevates its stock, with which it makes the next acquisition. With money cheap, corporate executives have used cash flow to buy back stock, enhancing the value of their options, instead of investing for the future. This pattern, and the fear it engenders, has added to downward pressure on employment and wages.

Actually, according to data reported by the Institute for Mergers, Acquisitions and Alliances displayed in the accompanying chart, the level of mergers and acquisitions since 2008 has been consistently below what it was in the late 1990s, when interest rates were over 5 percent, and in 2007, when interest rates were also above 5 percent.

[Chart: Mergers and acquisitions, 1985-2015]

And if corporate executives are using cash flow to buy back stock to enhance the value of their stock options instead of making profitable investments that would enhance share-holder value, there is a serious problem in how corporate executives are discharging their responsibilities to shareholders. Violations of management responsibility to their shareholders should be disciplined and the legal environment that allows executives to disregard shareholder interests should be reformed. To blame the bad behavior of corporate executives on the Fed is a total distraction.

Having just attributed a supposed boom in speculative activity and mergers and acquisitions to the Fed's low-interest-rate policy, Mr. Rocker, without batting an eye, flatly denies that an increase in interest rates would have any negative effect on investment.

The Fed should raise rates in September. The focus on a quarter-point change in short rates and its precise date of imposition is foolishness. Expected rates of return on new investments are typically well above 10 percent. No sensible businessman would defer a sound investment because short-term rates are slightly higher for a few months. They either have a sound investment or they don’t.

Let me repeat that. "Expected rates of return on new investments are typically well above 10 percent." I wonder what Mr. Rocker thinks the expected rate of return on speculative activity and mergers and acquisitions is.

But, almost despite himself, Mr. Rocker is on to something. Some long-term investment surely is sensitive to the rate of interest, but – and I know that this will come as a rude shock to adherents of Austrian Business Cycle Theory – most investment by business in plant and equipment depends on expected future sales, not the rate of interest. So the way to increase investment is really not by manipulating the rate of interest; the way to increase investment is to increase aggregate demand, and the best way to do that would be to increase inflation and expected inflation (aka nominal GDP and expected nominal GDP).


