What’s Wrong with EMH?

Scott Sumner wrote a post commenting on my previous post about Paul Krugman’s column in the New York Times last Friday. I found the column really interesting for Krugman’s ability to pack so much real economic content into 800 words written to help non-economists understand recent fluctuations in the stock market. Part of what I was doing in my post was to offer my own criticism of the efficient market hypothesis (EMH), of which Krugman is probably not an enthusiastic adherent either. Nevertheless, both Krugman and I recognize that EMH serves as a useful way to discipline how we think about fluctuating stock prices.

Here is a passage of Krugman’s that I commented on:

But why are long-term interest rates so low? As I argued in my last column, the answer is basically weakness in investment spending, despite low short-term interest rates, which suggests that those rates will have to stay low for a long time.

My comment was:

Again, this seems inexactly worded. Weakness in investment spending is a symptom not a cause, so we are back to where we started from. At the margin, there are no attractive investment opportunities.

Scott had this to say about my comment:

David is certainly right that Krugman’s statement is “inexactly worded”, but I’m also a bit confused by his criticism. Certainly “weakness in investment spending” is not a “symptom” of low interest rates, which is how his comment reads in context.  Rather I think David meant that the shift in the investment schedule is a symptom of a low level of AD, which is a very reasonable argument, and one he develops later in the post.  But that’s just a quibble about wording.  More substantively, I’m persuaded by Krugman’s argument that weak investment is about more than just AD; the modern information economy (with, I would add, a slow-growing working age population) just doesn’t generate as much investment spending as before, even at full employment.

Just to be clear, what I was trying to say was that investment spending is determined by “fundamentals,” i.e., expectations about future conditions (including what demand for firms’ output will be, what competing firms are planning to do, what cost conditions will be, and a whole range of other considerations). It is the combination of all those real and psychological factors that determines the projected returns from undertaking an investment, and those expected returns must be compared with the cost of capital to reach a final decision about which projects will be undertaken, thereby giving rise to actual investment spending. So I certainly did not mean to say that weakness in investment spending is a symptom of low interest rates. I meant that it is a symptom of the entire economic environment that, depending on the level of interest rates, makes specific investment projects seem attractive or unattractive. Actually, I don’t think that there is any real disagreement between Scott and me on this particular point; I just mention the point to avoid possible misunderstandings.

But the differences between Scott and me about the EMH seem to be substantive. Scott quotes this passage from my previous post:

The efficient market hypothesis (EMH) is at best misleading in positing that market prices are determined by solid fundamentals. What does it mean for fundamentals to be solid? It means that the fundamentals remain what they are independent of what people think they are. But if fundamentals themselves depend on opinions, the idea that values are determined by fundamentals is a snare and a delusion.

Scott responded as follows:

I don’t think it’s correct to say the EMH is based on “solid fundamentals”.  Rather, AFAIK, the EMH says that asset prices are based on rational expectations of future fundamentals, what David calls “opinions”.  Thus when David tries to replace the EMH view of fundamentals with something more reasonable, he ends up with the actual EMH, as envisioned by people like Eugene Fama.  Or am I missing something?

In fairness, David also rejects rational expectations, so he would not accept even my version of the EMH, but I think he’s too quick to dismiss the EMH as being obviously wrong. Lots of people who are much smarter than me believe in the EMH, and if there was an obvious flaw I think it would have been discovered by now.

I accept Scott’s correction that EMH is based on the rational expectation of future fundamentals, but I don’t think that the distinction is as meaningful as Scott does. The problem is that in a typical rational-expectations model the fundamentals are given and unchanging: they are actually static. The seemingly non-static character of a rational-expectations model is achieved by introducing stochastic parameters with known means and variances, so that the ultimate realizations of the stochastic variables within the model are not known in advance. The rational expectations of all stochastic variables are nevertheless unbiased, and they are, in some sense, the best expectations possible given the underlying stochastic nature of the variables. But given that stochastic structure, current asset prices reflect the actual, and unchanging, fundamentals, the stochastic elements in the model being fully reflected in asset prices today. Prices may change ex post, but, conditional on the realizations of the stochastic variables (whose probability distributions are assumed to have been known in advance), those changes are fully anticipated. Thus, in a rational-expectations equilibrium, causation still runs from fundamentals to expectations.
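To make the static character of such a model concrete, here is a minimal sketch; the distribution, discount factor, and all the numbers are hypothetical. The distribution of the fundamental is assumed to be known in advance, today’s price fully reflects it, and ex post forecast errors are unbiased.

```python
import random

random.seed(0)

# The "fundamental" (next period's dividend) is drawn from a distribution
# that agents are assumed to know in advance: normal with mean 5.0.
mean_dividend, sd = 5.0, 1.0
discount = 0.95

# Today's price fully reflects the known distribution: the discounted mean.
price_today = discount * mean_dividend  # 4.75

# Ex post realizations differ from the mean, but forecast errors are
# unbiased: they average out close to zero over many draws.
errors = [random.gauss(mean_dividend, sd) - mean_dividend for _ in range(100_000)]
avg_error = sum(errors) / len(errors)

print(price_today)
print(abs(avg_error) < 0.02)  # True: no systematic surprise
```

Nothing in the sketch ever changes the distribution itself; that is the sense in which the fundamentals are static and causation runs only from fundamentals to expectations.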

The problem with rational expectations is not a flaw in logic. In fact, the importance of rational expectations is that it is a very important logical test for the coherence of a model. If a model cannot be solved for a rational-expectations equilibrium, it suffers from a basic lack of coherence. Something is basically wrong with a model in which the expectation of the equilibrium values predicted by the model does not lead to their realization. But a logical property of the model is not the same as a positive theory of how expectations are formed and how they evolve. In the real world, knowledge is constantly growing, and new knowledge implies that the fundamentals underlying the economy must be changing as knowledge grows. The future fundamentals that will determine the future prices of a future economy cannot be rationally expected in the present, because we have no way of specifying probability distributions corresponding to dynamic evolving systems.

If future fundamentals are logically unknowable in the present, even in a probabilistic sense (we cannot predict what our future knowledge will be, because if we could, that future knowledge would already be present knowledge), then expectations of the future can’t possibly be rational, because we never have the knowledge that would be necessary to form rational expectations. And so I can’t accept Scott’s assertion that asset prices are based on rational expectations of future fundamentals. It seems to me that the causation goes in the other direction as well: future fundamentals will be based, at least in part, on current expectations.

Stock Prices, the Economy and Self-Fulfilling Prophecies

Paul Krugman has a nice column today warning us that the recent record highs in the stock market indices don’t mean that happy days are here again. While I agree with much of what he says, I don’t agree with all of it, so let me try to sort out what I think is right and what I think may not be right.

Like most economists, I don’t usually have much to say about stocks. Stocks are even more susceptible than other markets to popular delusions and the madness of crowds, and stock prices generally have a lot less to do with the state of the economy or its future prospects than many people believe.

I think that’s generally right. The efficient market hypothesis (EMH) is at best misleading in positing that market prices are determined by solid fundamentals. What does it mean for fundamentals to be solid? It means that the fundamentals remain what they are independent of what people think they are. But if fundamentals themselves depend on opinions, the idea that values are determined by fundamentals is a snare and a delusion. So the fundamental idea on which the EMH is premised, that there are fundamentals, is itself fundamentally wrong. Fundamentals are no more than conjectures and psychologically flimsy perceptions, and individual perceptions are themselves very much influenced by how other people perceive the world and their perceptions. That’s why fads are contagious and bubbles can arise. But because fundamentals are nothing but opinions, expectations can be self-fulfilling, so it is possible for some ex ante bubbles to wind up being justified ex post.

Still, we shouldn’t completely ignore stock prices. The fact that the major averages have lately been hitting new highs — the Dow has risen 177 percent from its low point in March 2009 — is newsworthy and noteworthy. What are those Wall Street indexes telling us?

Stock prices are in fact governed by expectations, but expectations may or may not be rational, where a rational expectation is an expectation that could actually be realized in some possible state of the world.

The answer, I’d suggest, isn’t entirely positive. In fact, in some ways the stock market’s gains reflect economic weaknesses, not strengths. And understanding how that works may help us make sense of the troubling state our economy is in. . . .

The truth . . . is that there are three big points of slippage between stock prices and the success of the economy in general. First, stock prices reflect profits, not overall incomes. Second, they also reflect the availability of other investment opportunities — or the lack thereof. Finally, the relationship between stock prices and real investment that expands the economy’s capacity has gotten very tenuous.

To put this into the slightly different language of basic financial theory, stock prices reflect the expected future cash flows from owning shares of publicly traded corporations, and hence the net value of the tangible and intangible capital assets of those corporations. The public valuations of those assets embody expectations about the future income streams associated with them, but those expected income streams must be discounted so that they can be expressed as a present value. The rate at which future income streams are discounted into the present represents what Krugman calls “the availability of other investment opportunities.” If lots of good investment opportunities are available, future income streams will be discounted at a higher rate than if good opportunities are scarce. In theory, the discount rate would reflect the rate of return on the marginal investment opportunities that are on the verge of being adopted or abandoned, because they just break even. What Krugman means by the tenuous relationship between stock prices and real investment that expands the economy’s capacity will have to be considered below.
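The discounting mechanism can be illustrated with a minimal sketch; the income stream, horizon, and both rates are purely illustrative numbers, not estimates of anything.

```python
def present_value(cash_flows, rate):
    """Discount a sequence of future cash flows back to a present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

# An expected income stream of 100 per year for 20 years.
stream = [100.0] * 20

# Plentiful good investment opportunities imply a high discount rate;
# scarce opportunities imply a low one.
pv_high_rate = present_value(stream, 0.08)  # about 982
pv_low_rate = present_value(stream, 0.02)   # about 1635

print(round(pv_high_rate, 2), round(pv_low_rate, 2))
```

With expectations of the income stream held fixed, the lower discount rate raises the present value by roughly two-thirds, which is the sense in which scarce marginal investment opportunities support higher asset prices.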

Krugman maintains that, over the past two decades, even though the economy as a whole has not done all that well, stock prices have increased a lot, because the share of capital in total GDP has increased at the expense of labor. He also points out that the low — even negative — real interest rates on government bonds are indicative of the poor opportunities now available (at the margin) to investors.

And these days those options [“for converting money today into income tomorrow”] are pretty poor, with interest rates on long-term government bonds not only very low by historical standards but zero or negative once you adjust for inflation. So investors are willing to pay a lot for future income, hence high stock prices for any given level of profits.

Two points should be noted here. First, scare talk about low interest rates causing bubbles because investors search for yield is nonsense. Even in a fundamentalist EMH universe, a deterioration of marginal investment opportunities causing a drop in the real interest rate will, for given expectations of future income streams, imply that the present value of the assets generating those streams would rise. Rising asset prices in such circumstances are totally rational, which is exactly what bubbles are not. Second, the low interest rates on long-term government bonds are not the cause of poor investment opportunities but the result of poor investment opportunities. Krugman certainly understands that, but many of his readers might not.

But why are long-term interest rates so low? As I argued in my last column, the answer is basically weakness in investment spending, despite low short-term interest rates, which suggests that those rates will have to stay low for a long time.

Again, this seems inexactly worded. Weakness in investment spending is a symptom not a cause, so we are back to where we started from. At the margin, there are no attractive investment opportunities. The mystery deepens:

This may seem, however, to present a paradox. If the private sector doesn’t see itself as having a lot of good investment opportunities, how can profits be so high? The answer, I’d suggest, is that these days profits often seem to bear little relationship to investment in new capacity. Instead, profits come from some kind of market power — brand position, the advantages of an established network, or good old-fashioned monopoly. And companies making profits from such power can simultaneously have high stock prices and little reason to spend.

Why do profits bear only a weak relationship to investment in new capacity? Krugman suggests that rising profits are due to the exercise of market power: firms increase profits not by increasing output, but by restricting output to raise prices (not necessarily in absolute terms but relative to costs). This is a kind of microeconomic explanation of a macroeconomic phenomenon, which does not necessarily make it wrong, but it is a somewhat anomalous argument for a Keynesian. Be that as it may, to be credible such an argument must explain how the share of corporate profits in total income has been able to grow steadily for nearly twenty years. What would account for a steady economy-wide increase in the market power of corporations lasting two decades?

Consider the fact that the three most valuable companies in America are Apple, Google and Microsoft. None of the three spends large sums on bricks and mortar. In fact, all three are sitting on huge reserves of cash. When interest rates go down, they don’t have much incentive to spend more on expanding their businesses; they just keep raking in earnings, and the public becomes willing to pay more for a piece of those earnings.

Krugman’s example suggests that the continuing increase in market power, if that is what has been happening, has been structural. By structural I mean that much of the growth in the economy over the past two decades has been in sectors characterized by strong network effects or aggressive enforcement of intellectual property rights. Network effects and strong intellectual property rights tend to create, enhance, and entrench market power, supporting very large gaps between prices and variable costs, which is the standard metric for identifying exercises of market power. The nature of what these companies offer consumers is such that their marginal cost of production is very low, so that reducing price and expanding output would not require a substantial increase in their demand for inputs (at least compared to other industries with higher marginal costs), but would cause a big loss of profit.

But I would suggest looking at the problem from a different perspective, using the distinction between two kinds of capital investment proposed by Ralph Hawtrey. One kind of investment is capital deepening, which involves an increase in the capital intensity of production, the idea being to reduce the cost of production by installing new or better equipment to economize on other inputs (usually labor); the other kind of investment is capital widening, which involves an increase in the scale of output but not in capital intensity, for example building a new plant or expanding an existing one. Capital deepening tends to reduce the demand for labor while capital widening tends to increase it.

More of the investment now being undertaken may be of the capital-deepening sort than has been true historically. Aside from the structural shifts mentioned above, the reduction in capital-widening investment may be the result of declining optimism in businesses’ projections of future demand for their products, making capital-widening investments seem less profitable. Thus, an increasing share of total investment has become capital-deepening and a declining share capital-widening. But for the economy as a whole, pessimism about future demand may turn out to be self-fulfilling: if firms collectively cut back capital-widening investment, total investment, and with it future demand, declines. The question is whether monetary (or fiscal) policy could now do anything to increase expectations of future demand sufficiently to induce a self-fulfilling increase in optimism and in capital-widening investment.


Krugman Goes Easy on King

Mervyn King, former Governor of the Bank of England, and professor of economics at LSE, recently published a book, The End of Alchemy, containing his reflections on the current state of economic theory and policy from the special vantage point of someone who has been a practitioner of both callings at the highest levels. Paul Krugman has a review of King’s book in the current edition of the New York Review of Books. Krugman points out that King’s tenure at the Bank of England coincided with that of Ben Bernanke, also an academic economist of some renown before embarking on a second career as a central banker, and who has also published a book about his experience as a central banker. A quick check of the Wikipedia article about King reveals a fact left unmentioned by Krugman: that while a Kennedy Scholar at MIT in the late 1970s, King actually shared an office with the young Ben Bernanke, who was then working on his Ph.D. at MIT.

In his review, Krugman observes that, unlike Bernanke’s recent memoir, King’s book is less an account of his tenure as a central banker during the 2008 financial crisis and its aftermath than it is a “meditation on monetary theory and the methodology of economics.” Here’s how Krugman describes King’s book.

Now King, like Bernanke, has written a book inspired by his experiences. But it’s not at all the book one might have expected. It’s not a play-by-play of the crisis, or a tell-all, or a personal memoir. In fact, King not-so-subtly mocks the authors of such books, which “share the same invisible subtitle: ‘how I saved the world.’”

King’s book is, instead, devoted to “economic ideas.” It is rich in wide-ranging historical detail, with many stories I didn’t know—the desperate shortage of banknotes at the outbreak of World War I, the remarkable emergence of the “Swiss dinar” (old Iraqi notes printed from Swiss plates) in Kurdistan. But it is mainly an extended meditation on monetary theory and the methodology of economics.

And a fascinating meditation it is. As I’ll explain shortly, King takes sides in a long-running dispute between mainstream economic analysis and a more or less radical fringe that rejects the mainstream’s methods—and comes down on the side of the radical fringe. The policy implications of his methodological radicalism aren’t as clear or, I’d argue, as persuasive as one might like, but he definitely challenges policy as well as research orthodoxy.

You don’t have to agree with everything King says—and I don’t—to be impressed by his willingness to let his freak flag fly. His assertion that we haven’t done nearly enough to head off the next financial crisis will, I think, receive wide assent; I don’t know anyone who thinks, for example, that the US financial reforms enacted in 2010 were sufficient. But his assertion that the whole intellectual frame we’ve been using is more or less irreparably flawed is a brave position that should produce a lot of soul-searching among both economists and policy officials.

I don’t want to discuss Krugman’s review in detail, but I was struck by a passage toward the end of the review in which Krugman takes issue with King’s rather pessimistic assessment of the possibility that central bankers or economic policy makers can do much to improve an economy that is underperforming.

In any case, King’s policy proposals don’t stop with banking reform. He also weighs in on macroeconomic policy, on how to fight the economic weakness that has persisted long after the acute phase of the financial crisis ended. He dismisses talk of demographic and other “headwinds”—such as an aging population—that may be holding the economy back. What has happened, he declares, is a change in the narrative that consumers are telling themselves to a story far more pessimistic about what the future might hold, leading them to spend less year after year. And then a funny thing happens: his radical views on economics lead him to what would ordinarily be considered conservative, even boringly orthodox policy recommendations.

The conventional Keynesian view . . . is that what we need in the face of persistent weakness is policies to boost demand. Keep interest rates low, and maybe raise inflation targets to further encourage people to spend rather than hoard. Have government take advantage of incredibly low interest rates by borrowing and spending on much-needed infrastructure. Offer relief to individuals and nations crippled by debt. And so on.

King is, however, having none of it. Under his leadership, the Bank of England was aggressively engaged in monetary easing by keeping interest rates low – as aggressively as, or even more so than, the Bernanke Fed. Now, however, King seems to condemn his old policies:

Monetary stimulus via low interest rates works largely by giving incentives to bring forward spending from the future to the present. But this is a short-term effect. After a time, tomorrow becomes today. Then we have to repeat the exercise and bring forward spending from the new tomorrow to the new today. As time passes, we will be digging larger and larger holes in future demand. The result is a self-reinforcing path of weak growth in the economy.

Is this argument right, analytically? I’d like to see King lay out a specific model for his claims, because I suspect that this is exactly the kind of situation in which words alone can create an illusion of logical coherence that dissipates when you try to do the math. Also, it’s unclear what this has to do with radical uncertainty. But this is a topic that really should be hashed out in technical working papers.

I must admit to being surprised – and disappointed – that Krugman gave such a mild response to King’s argument, which seems to me obviously problematic rather than, as Krugman implies, plausible but potentially disprovable if subjected to sufficiently rigorous mathematical scrutiny. The problem is not whether you can produce a mathematical model that generates the result King has asserted. It’s really not that hard for a smart theorist to work out a mathematical model that will generate whatever result he or she wants to generate. The problem is whether the result corresponds to any plausible state of the world, so that one could specify the conditions under which the result of the model would be relevant for policy.

But the argument for stimulus to which King is objecting is that the economy is operating at a lower time path of output and employment than the path at which it is capable of operating; actual output over time is less than potential output over time. Thus, if you stimulate the economy and increase output now, the economy will move to a path that is closer to its potential than the current path. King’s argument, at least as reproduced by Krugman, is simply irrelevant to the question whether a stimulus can move an economy from a lower time path of output to a higher time path of output. The trade-off that supposedly exists in King’s argument is not a real trade-off.

So the issue is not the model, but the underlying assumption about what the initial conditions are. Is the economy operating at its potential, or is it operating at less than its potential? If King is right about the initial conditions – if the economy is already operating as well as it could – then he is right that stimulus is futile and that increasing output now may decrease output in the future (presumably by reducing investment that would generate increased future output). But if he is wrong about the initial conditions – if the economy is not operating as well as it could – then the increase in output resulting from a stimulus does not imply any reduction in future output; it merely prevents a loss of current output that would otherwise be – avoidably – wasted. King’s argument is actually not an argument – at least insofar as Krugman has accurately characterized it – it is just question begging.
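The dependence on initial conditions can be put in stylized numbers (entirely hypothetical): if the economy is at potential, stimulus only shifts spending between periods, but if it is below potential, stimulus recovers output that would otherwise be lost.

```python
periods = 5
potential = [100.0] * periods  # potential output per period

# Case 1: economy already at potential. As in King's argument, stimulus
# merely brings spending forward: +5 today is offset by -5 tomorrow.
baseline_at_potential = [100.0] * periods
shifted = [105.0, 95.0, 100.0, 100.0, 100.0]

# Case 2: economy below potential, with an output gap of 10 per period.
# Stimulus closes the gap rather than borrowing from the future.
baseline_below = [90.0] * periods
stimulated = [100.0] * periods

print(sum(shifted) - sum(baseline_at_potential))  # 0.0: no net gain
print(sum(stimulated) - sum(baseline_below))      # 50.0: otherwise-wasted output recovered
```

In the first case King’s intertemporal trade-off is real; in the second there is no trade-off at all, only the avoidance of waste.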

How Martin Feldstein Learned to Stop Worrying and Love Inflation

Martin Feldstein and I go back a ways. Not that I have ever met him, which I haven’t, or that he has ever heard of me, which he probably hasn’t, but I have been following his mostly deplorable commentary on Fed policy since at least 2010 when he published an op-ed piece in the Financial Times, “QE2 is risky and should be limited,” which was sufficiently obtuse to provoke me to write a letter to the editor in response. A year and a half later, after I had started this blog – five years ago to the day on July 5, 2011 – Feldstein wrote an op-ed (“The Federal Reserve’s Policy Dead End”) in the Wall Street Journal, to which he is a regular contributor, in which he offered another misguided critique of quantitative easing, eliciting a blog post from me in response.

Well, now, almost six years after our first encounter, Feldstein has written another op-ed (“Where the Fed Will Be When the Next Downturn Comes“) for the Wall Street Journal which actually shows some glimmers of enlightenment on Feldstein’s part. Always eager to offer encouragement to slow learners, I am glad to be able to report that Feldstein seems to be making some headway in understanding how monetary policy operates. He is still far from having mastered the material, but he does seem to be on the right track. If he keeps progressing, the Wall Street Journal will probably stop publishing his op-eds, which would be powerful evidence that he had progressed in his understanding of the basics of monetary policy.

The Fed’s traditional response to an economic slump is to cut rates sharply in order to stimulate interest-sensitive spending. When the U.S. economy headed into recession at the end of 2007, the Fed cut the short-term federal-funds rate by three percentage points within 12 months. But it can’t do that anytime soon with short rates at less than 1%. And raising the federal-funds rate now to 3% or more would push the economy into recession.

Yet, whether by accident or intent, Fed policy is headed down a path that could eventually solve this problem. The Fed’s plan to continue a very easy monetary policy over the next few years is likely to drive the inflation rate to more than 3%. The Fed could then raise the federal-funds rate rapidly, reaching at least a 3% nominal rate, while still keeping a low or negative real fed-funds rate. This would put the Fed in a position to cut rates sharply when a new downturn occurred.

Bravo, Professor Feldstein! If only he had seen the light back in 2010 when he wrote the following in the Financial Times:

Under the label of QE, the Fed will buy long-term government bonds, perhaps one trillion dollars or more, adding an equal amount of cash to the economy and to banks’ excess reserves. Expectation of this has lowered long-term interest rates, depressed the dollar’s international value, bid up the price of commodities and farm land and raised share prices. . . .

Ahead, when the US economy does begin to grow, the increased cash on banks’ balance sheets will make the Fed’s exit strategy harder. It was previously “cautiously optimistic” it would be able to contain the inflationary pressures that could be unleashed by banks with a trillion dollars of excess reserves. This will be harder if the amount of excess reserves is doubled. This could lead to much higher interest rates to restrain demand or to an unwanted rise in inflation.

But now Feldstein is singing a different tune:

Based on the Fed’s own numbers, the real federal-funds rate will still be negative at the end of 2017. All of this is aimed at driving down the unemployment rate to only 4.6% in 2018, the median of the Federal Open Market Committee’s projections. Since that rate is less than the 4.8% rate Fed policy makers judge to be the long-term sustainable rate, their projections of unemployment imply that inflation will continue to rise beyond the Fed’s stated 2% target.

If the Fed succeeds in achieving this—raising the inflation rate above 3% and then raising the fed-funds rate close to that level without pushing the economy into recession—it will have solved the problem of having a high-enough fed-funds rate to deal with a traditional economic downturn.

Financial markets may of course get nervous if the Fed continues to have a very low interest rate even after it has achieved its dual goals of low unemployment and a 2% inflation rate. But the Fed could then argue that the 2% inflation rate was never intended as a ceiling but as an average rate to be achieved over time. Since annual inflation has been below 2% for more than three years, it would arguably be consistent with the Fed’s goal to have inflation temporarily above 2%.

So Professor Feldstein seems at last to have figured out that whether inflation is bad or “unwanted” depends not just on an arbitrary number, but on the overall economic environment. If real interest rates are very low or negative, as they are now, the optimal inflation target must be higher than when the real interest rate is above 3%.

But old habits are hard to break, and Feldstein is still nervous about 3% inflation, even in an environment like ours in which real interest rates are low and have been falling for some time, and even though measured inflation has turned up ever so slightly since the first quarter of 2016, largely reflecting a minor rebound in oil prices.

Of course, this path of future inflation and interest rates may not be what the Fed has in mind. But it does look consistent with the Fed’s current actions and its projected plans for interest rates over the next two years. It would be a clever policy but it would also be a policy of high-risk fine-tuning.

It’s risky because the financial markets may not be convinced that the Fed will act to reverse an inflation rate that has drifted above 3% and continues to rise. That could cause long-term interest rates to rise sharply, leading to declines in the prices of equities and of commercial real estate. The resulting higher mortgage rates would depress house prices and housing demand. The higher long-term interest rates would also inflict large losses on bondholders who had bought long-term bonds with very low coupons. These financial losses could precipitate an economic downturn. Fed actions to cut its newly increased fed-funds rate might not be enough to reverse that downturn.

If Feldstein is worried that a temporary increase in the rate of inflation might unleash uncontrollable inflationary expectations, he ought to read up on price-level targeting (PLT) or on nominal gross domestic product level targeting (NGDPLT). When the policy target is the path of the price level or of NGDP, inflation expectations are sensitive not to the current rate of inflation but to where the price level is relative to its target path or where NGDP is relative to its target path. So when, under level targeting, inflation speeds up temporarily after having previously undershot its target path, there is no reason for the corrective temporary rise in inflation to cause inflation expectations to explode, as Feldstein fears they would. Feldstein should acquaint himself with level targeting before he writes his next op-ed. Although the Wall Street Journal might not be too happy with it, I am sure that, as a distinguished Harvard Professor, he will have no trouble getting it published somewhere else, maybe even in the Financial Times.
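The arithmetic behind this point can be made concrete with a small sketch. All the numbers below are invented for illustration (the three years of sub-2% inflation are assumed, not actual US data); the point is only that under PLT the required catch-up inflation is pinned down by the accumulated price-level gap, not by the current inflation rate, so a temporary overshoot need not unmoor expectations.

```python
# Hypothetical illustration of price-level targeting (PLT) arithmetic.
# All numbers are invented for exposition, not taken from actual US data.

target_growth = 0.02                      # 2% annual target path for the price level
actual_inflation = [0.012, 0.010, 0.013]  # three assumed years of sub-2% inflation

# Price level relative to a base of 1.0 at the start of the target path
target_level = (1 + target_growth) ** len(actual_inflation)
actual_level = 1.0
for pi in actual_inflation:
    actual_level *= 1 + pi

gap = target_level / actual_level - 1   # shortfall relative to the target path

# Inflation needed over the next year to return to the target path:
# one more year of 2% trend growth plus closing the accumulated gap.
catch_up_inflation = (1 + target_growth) * (target_level / actual_level) - 1

print(f"price-level gap after 3 years: {gap:.2%}")
print(f"one-year catch-up inflation:   {catch_up_inflation:.2%}")
```

The catch-up rate here comes out to roughly 4.5%, above the 3% that worries Feldstein; but because the price level returns to its announced path, expectations of inflation beyond the catch-up year remain anchored at 2%.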

What’s Wrong with the EU? Part 1

Since the Brexit vote last week, I have been trying to sort out my disparate thoughts about the EU. In doing so, I have been thinking about the early history of the EU, its origins in the early 1950s, and the convoluted process by which Britain came to join what was then called the European Common Market. In the process I have also been thinking a lot about the fascinating but disturbing figure of Enoch Powell, who became the foremost opponent of Britain’s entry into the Common Market, and about the reasons for his opposition. What follows is my reconstruction of the process by which the Common Market came into existence, of Britain’s entry into the Common Market in 1973, and of Powell’s role both in Britain’s entry and in the first attempt, only a few years later, to undo that entry. I will try to carry the story a bit further in my next installment and, if possible, draw some of the many threads of the narrative together and offer some judgments about what really is wrong with the EU and maybe even what could be done about it.

The EU, which now includes 28 states, began its first incarnation with the Treaty of Paris in April 1951 as the European Coal and Steel Community (ECSC), comprising six states — France, West Germany, Italy, Belgium, the Netherlands and Luxembourg — in which coal and steel would be traded freely within the community but with a common tariff applied to imports of coal and steel from outside the community. The Treaty also created four supranational bodies to administer the agreement: an executive body (the High Authority), two legislative bodies (the Assembly and the Council), and a Court of Justice. The members of each body were appointed by the governments of the six countries.

The UK, under a Labour government that had recently nationalized the steel and coal industries, was disinclined to join an institution that would constrain its authority to manage the physical and human resources under its control, and it therefore opted not to join the ECSC. The Labour government was voted out of office in October 1951, and Winston Churchill, at the age of 77, became prime minister for the second time. Churchill was pro-European, having often given voice to the ideal of a united Europe, even speaking favorably in general terms about a United States of Europe. However, Churchill, still hoping to preserve what could be salvaged of the remnants of the British Empire and emotionally tied to a special relationship with the United States, was no more eager to take concrete steps toward joining the ECSC than his Labour predecessor.

The 1957 Treaty of Rome expanded the ECSC into a broader European Common Market encompassing all cross-border trade among the six member states, eliminating all internal tariffs on imports and exports within the Common Market and creating a customs union with a uniform tariff on all imports from outside the community. In the mid-1950s, even after Churchill’s retirement, Great Britain, still in the thrall of its dwindling empire, which it quixotically hoped to recreate in the guise of a British Commonwealth whose member states would enjoy preferential access to each other’s markets, could not bring herself to sever the fraying ties to her former empire in order to join the soon-to-be-created European Community. So the British government chose not to participate in the negotiations to draft the Treaty of Rome and did not seek to join the Community once it was created.

However, the rapid growth of the six economies of the Common Market produced a change of British opinion about entry into the Common Market. Among the first English politicians to argue that the UK should abandon its pretensions to being a great power and instead promote economic expansion by joining the Common Market was a brilliant young Conservative MP named Enoch Powell. Fluent in at least a dozen languages, a classical Greek scholar who had once been the youngest classics professor in the British Empire, Powell, an autodidact in economics, had become the most articulate Parliamentary advocate of free-market economic policies, though such views were then regarded as almost embarrassingly out of date even among staunch Conservatives. Powell thought that attempts to maintain a vestigial Empire as a Commonwealth were pure humbug, a costly illusion diverting resources that could be put to much more productive use if left under the control of private enterprise. Powell was not alone in favoring entry into the Common Market, but he was more radical than most in favoring a complete reorientation of British economic and foreign policy toward the Continent.

So in 1962, the Conservative government headed by Harold Macmillan, in which Powell served as minister of health, applied for admission to the Common Market, only to have its application, along with those of Denmark and Ireland, vetoed by Charles de Gaulle, with the concurrence of Chancellor Adenauer of West Germany, because de Gaulle, deeply mistrustful of the British and especially the Americans, feared that Britain would be more loyal to the US than to Europe. In 1967, Britain again applied for admission to the European Community, this time under a Labour government headed by Harold Wilson, but once again the application was vetoed by de Gaulle.

After de Gaulle’s departure in 1969, his successor Georges Pompidou was more amenable to British entry into the European Community, creating an opportunity for a third British attempt to enter the EC; the other five members were also eager for British entry, so that continued French opposition would have placed France and Pompidou in an awkward position. In 1970, the Conservatives, now led by Edward Heath, defeated Labour. Heath had been elected leader of the Conservatives by a vote of Conservative MPs in 1965, the first time a Conservative leader had been chosen by election, the leader having formerly been chosen by an informal process understood only by a few well-placed party leaders. Heath narrowly defeated his main opponent, Reginald Maudling; Enoch Powell came in a distant third with only 15 of the 298 votes cast. Maudling was an economic interventionist and an advocate of what was then called an “incomes policy” to control inflation by using statutory or informal controls over wages and prices to limit the growth of money income. As a minister in Macmillan’s government, Heath had been deeply involved in the negotiations for entry into the EEC, and he was known to be strongly in favor of British entry into the EC.

Heath included Powell in his shadow cabinet, giving him the defense portfolio; Powell remained in that position until April 1968, when he gave a speech calling for a halt to non-white immigration from Commonwealth countries. The speech became known as the “rivers of blood” speech, because Powell quoted a passage from Virgil alluding to a vision of the river Tiber foaming with blood. Powell’s intention in quoting that passage was not clear, but it was widely interpreted as a prediction of a coming race war, and the speech was condemned even by Conservatives as racist, a charge Powell denied. But the rhetoric of the speech, even if it wasn’t motivated by racial animosity or prejudice on Powell’s part, was clearly inflammatory. Heath quickly dismissed Powell from the shadow cabinet, and Powell never again held a leadership position. However, the speech transformed Powell from a relatively obscure, overly intellectual and eccentric figure in the second rank of the British political hierarchy into perhaps the most popular politician in Britain, a sort of folk hero to large segments of the British white working class, who immediately began demonstrating in large numbers in his support.

Despite his expulsion from the front bench of the Conservative Party, and ostracism by many of his colleagues, Powell remained a Conservative MP and stood in the 1970 election, in which the Conservatives, led by Heath, won a surprise victory, a victory credited by some to Powell’s popularity with white working-class voters who would otherwise have voted for Labour. The personal popularity that Powell achieved through his attack on non-white immigration, his references to a breakdown in law and order, and his recital of white grievances against non-whites mirrored a similar campaign being undertaken across the Atlantic by another frustrated candidate for high office, George Wallace, who in 1967 was successfully launching a third-party bid for President under the banner of the American Independent Party. Wallace had run for President in 1964 as a Democrat, gaining shockingly high vote counts in primaries in a number of Northern and Border states like Wisconsin, Michigan, Ohio and Maryland. Certainly Powell could not have been unaware of Wallace’s popularity with white working-class voters, owing to his skillful use of inflammatory and racially charged, if not explicitly racist, law-and-order, anti-elitist rhetoric, and it would be naïve to suppose that political calculation was absent from a mind as powerful and as concentrated as Powell’s when he made his April 1968 speech about non-white immigration and adopted an alarmist rhetorical strategy in composing that speech. Though Powell did indeed choose the path of political incorrectness, he hardly chose the path of political inexpediency. His only miscalculation was in supposing that Heath would lose the next election and that the Conservative Party would then turn to him.

Substantively there was little difference between Heath and Powell in 1970. Heath had largely embraced the free-market position that had once made Powell an outlier even in the Conservative Party, and he had pledged to stop further non-white immigration from the Commonwealth, though he refused to bar immigration by the family members of existing immigrants or to take any steps to repatriate legal non-white immigrants, as Powell advocated. And on the question of entry into the Common Market, Powell had not yet, in 1968, repudiated his earlier stance in favor of entry into the European Community. Heath’s unexpected election therefore largely dashed Powell’s hopes of becoming leader of the Conservative Party.

Although there seemed to be a consensus in the 1970 election in favor of entry into the European Community, that consensus was more apparent than real. In fact, the Labour Party was deeply divided on the question; it had historically opposed entry into the Common Market until the Wilson government, despite being forewarned that a French veto would inevitably follow, applied for entry in 1967. Wilson continued to support entry, but a majority of the party was actually opposed, viewing the Common Market as a basically capitalist institution and an obstacle to implementing their economic program. And although the Conservatives seemed united in supporting entry, few Conservatives supported entry into the EC as unreservedly as Heath; most Conservative supporters were conflicted, viewing entry as a purely pragmatic decision, with advantages only marginally outweighing disadvantages, making a final decision sensitive to the terms on which entry could be secured. Only the Liberals, holding just six seats in the new Parliament, and a small number of Conservatives and Labourites shared Heath’s unqualified enthusiasm for entry.

But with all parties nominally supporting entry in the 1970 election, the question of entry into the European Community was not an issue and was not debated. The official Conservative position was relatively circumspect, favoring negotiations to secure entry but making no pledge to enter, so that Heath could not claim convincingly that his election provided a popular mandate for bringing Britain into the EC on whatever terms he negotiated. But entry into the EC had become the chief policy goal of Heath’s political career. Charles de Gaulle having retired from public life in 1969 and been succeeded by his protege Georges Pompidou, Heath’s task in securing entry into the EC consisted in securing Pompidou’s assent to British entry. Unlike de Gaulle, Pompidou did not regard the idea of Britain being part of the EC as inherently repugnant, but if Britain were to enter it would have to be on French terms, meaning that Britain would have to accept the existing Treaty of Rome without change. The Treaty of Rome had been drafted by the six original members with a view to protecting their large agricultural sectors by keeping out cheap agricultural imports to support high prices for domestic agricultural products. Britain, on the other hand, with a far smaller agricultural sector than those of the six, was an importer of agricultural products from Commonwealth countries, supporting its domestic farmers with cash payments instead. Accepting the common agricultural policy of the EC as it existed would require Britain to shift from low-cost Commonwealth imports to high-cost EC imports of agricultural products, imposing a net transfer from the British economy to the six original members, especially to France, with the largest agricultural sector in the EC. That acceptance was the price for British entry into the Common Market.

Pompidou’s assent in principle to Britain’s application for entry, once Heath accepted the Treaty of Rome and all existing EEC regulations with no substantial changes, meant that Britain’s entry into the Common Market was assured. Less than a year after meeting with Pompidou, Heath signed the Treaty of Accession on January 22, 1972, and a bill was introduced in Parliament assenting to Britain’s entry under the agreed-upon terms and accepting the Treaty of Rome and EEC regulations. With a Conservative Parliamentary majority and the support of the Liberal Party and many Labour MPs, the bill passed easily. But Enoch Powell voted against it, having begun speaking out against entry into the Common Market the previous year. Powell’s argument was that, under appropriate economic policies, Britain could thrive as an independent trading nation and therefore had little to gain, and much to lose, owing to the Common Agricultural Policy, by joining the European Community. Beyond the economic argument against joining the Common Market, Powell had two more fundamental objections: 1) that entry into the EC implied a surrender of British sovereignty, because powers that had historically been exercised by Parliament, such as the power to set taxes and enact legislation, were being transferred to the European Commission, and 2) that so momentous a decision should not be made without giving the British people an opportunity to express their opinions on the matter, and that the Conservative Party, having won a Parliamentary majority promising only to negotiate for entry into the EC, had no mandate to take Britain into the EC without a further expression of support from the voters.

While Powell’s opposition to entry into the European Community can certainly be understood as a natural outgrowth of his nationalistic and Tory conception of England and Britain, the timing and abruptness of Powell’s change of attitude toward the Common Market invite the inference that Powell, in making opposition to entry into the Common Market the central cause of the last phase of his political career, was influenced by a political calculation. After all, when Harold Macmillan first proposed British entry into the Common Market, a decade before Powell declared his opposition to entry, Hugh Gaitskell, leader of the opposition, had made an argument similar to Powell’s: entry into the Common Market would replace Parliament with a supranational body as the supreme sovereign authority of the British nation. Powell had surely understood what Gaitskell was arguing in 1962, but he didn’t abandon his support for entry into the European Community until his hopes of becoming leader of the Conservative Party were dashed by Heath’s election. It is probably more likely that Powell had been insincere in favoring entry before 1971 than that he was insincere in opposing entry afterwards; as long as he hoped to become leader of the Conservative Party, he may well have intended to reverse his public stance on Europe after becoming leader. But it is hard to believe he was not insincere in holding at least one of his two positions on entry into the Common Market.

It is characteristic of Powell that he framed the question of joining Europe as a matter of high principle: preserving British identity as a self-governing, autonomous nation state, whose Parliament was the ultimate lawmaking and governing institution of the realm, not subservient to any external authority. For Powell the supremacy of Parliament was akin to a metaphysical imperative, even a religious duty, and to compromise that imperative would have been a betrayal of all he believed and stood for. Reading such rhetorical absolutism, one comes back again to the question of how Powell could have explained or justified his own earlier support, however lukewarm or insincere, for British entry into the Common Market.

For Powell, unwavering opposition to joining the European Community was the ultimate expression of his Toryism, but considered from another perspective, his absolutist mentality was profoundly unconservative. Elevating the single abstract principle of Parliamentary sovereignty and supremacy above and beyond all other principles and considerations was characteristic of a metaphysical extremism that runs strongly counter to the conservative disposition described by Michael Oakeshott in his essay “On Being Conservative.” Indeed, Oakeshott, perhaps the leading academic British conservative of his generation, was loath to express any opinion about British entry into the Common Market notwithstanding his own reservations about federalism as a mode of government, believing that sovereignty is indivisible, a very British idea that baffles us Americans. Some claim that Oakeshott was actually opposed to entry into the Common Market, but even so, Oakeshott’s reticence in voicing that opinion seems very much at odds with Powell’s absolutist frame of mind wherein opposition to joining the European Community became the overriding object of Powell’s life to which all other considerations were subordinated.

Powell’s antagonism toward Heath was further inflamed when Heath abandoned the anti-inflation policy his government had followed in its first two years, adopting the strategy of monetary and fiscal expansion combined with wage and price controls that, a year earlier, Richard Nixon had implemented to win re-election after his initial strategy of fiscal and monetary restraint produced disappointing results. As had Nixon, Heath initially succeeded in promoting an economic expansion, but the policy soon ran aground, because, with inflationary pressures rekindled by monetary expansion, labor unions, refusing to accept the limits Heath wanted to impose on wage increases, called strikes. The most damaging strike was by the coal miners, which led to a curtailment of electricity production that forced the government to impose a three-day work week to conserve electricity.

Meanwhile, the Labour opposition gradually coalesced around a demand for a referendum on entry into the Common Market proposed by the left-wing Labour MP Tony Benn. While continuing to support entry into the Common Market, Harold Wilson pledged that, if returned to power, a new Labour government would renegotiate the terms of Britain’s entry and put the renegotiated terms to a vote of the British public, a pledge that caused the pro-European deputy leader of the Labour Party, Roy Jenkins, to resign his position and eventually to leave the Labour Party, with a handful of other pro-European Labourites, to found a new Social Democratic Party (which subsequently merged with the Liberal Party).

Locked in a confrontation with the coal miners, who were striking for wage increases above the statutory limits of the Government’s incomes policy, Heath, in the winter of 1974, called a general election to secure a mandate for enforcing the statutory limits on wage increases for the miners, framing the election as a contest between a popularly elected government and a narrow special-interest group over who would govern the nation. Almost alone among Conservatives, Powell had spoken out consistently against Heath’s economic policies and especially against entry into Europe. When the election was called, Powell denounced the move as pretextual and unnecessary, an increase in miners’ wages being justified by the steep increase in energy prices precipitated by OPEC the previous October. More shockingly, Powell declared that he would not stand for re-election to Parliament, being unable to support the policies that the Conservative Party was committed to implement if returned to power. And then, more shockingly still, Powell, just four days before the election, implicitly endorsed Labour as the only party that would renegotiate the terms of Britain’s entry into the European Community and submit the terms of entry to a vote of the British public.

Powell’s last-minute virtual endorsement of Labour may well have been critical to the outcome of the election. Although the Conservatives won the most seats in the new Parliament, they fell short of a majority and could not reach an agreement with the Liberal Party to form a coalition government, allowing Harold Wilson to form a temporary minority government. A new election was called for the fall, and Labour gained a slim majority in Parliament, a second defeat in succession leading to Heath’s ouster from the leadership in February 1975 by Margaret Thatcher, an outcome that was undoubtedly deeply satisfying to Powell, especially as he had played a crucial role in Heath’s undoing. However, if his ultimate aim was to keep Britain out of the Common Market, his satisfaction was short-lived, because when the question of membership in the European Community was finally put to a vote in the June 1975 referendum, 67% of British voters cast their ballots in favor of staying in.

While Thatcher strongly opposed Heath’s economic policies, though as a minister in Heath’s government she had never spoken out against them, she did not criticize Heath’s position on the Common Market. Powell was returned to Parliament as a Unionist member from Northern Ireland in the October 1974 election, but he was no longer a member of the Conservative Party and no longer had any influence in the party. So when Mrs. Thatcher became leader of the Conservative Party, the only politicians who wanted Britain to leave the Common Market were Enoch Powell, now a lone voice representing Unionists in Northern Ireland, and the far left of the Labour Party. The far left would eventually gain control of the Labour Party after Mrs. Thatcher was elected Prime Minister, but a significant Euro-skeptic faction in the Conservative Party would not come into being for another decade.

But the point with which I want to end this post is simply this: although in 1975 a large majority of the British public seemed to have acquiesced in British membership in the European Community, and opposition seemed to be confined to a substantial segment of the left wing of the Labour Party and a single charismatic figure who seemed permanently estranged from the Conservative Party, the consensus favoring membership was largely accidental and was not based on any clear principle of integration into Europe or any clear economic advantage. Edward Heath’s enthusiasm for entry into the European Community was almost as singular as Enoch Powell’s abhorrence of it. Most of the support for British entry into the European Community was purely contingent and opportunistic, a fact that de Gaulle had perceived in the 1960s when he twice vetoed Britain’s entry. Given Britain’s ambiguous relationship to Europe, her ambivalent feelings about belonging to Europe, and the unclear balance of economic advantages and disadvantages, Britain’s entry into the European Community lacked any strong basis either in principle or in economic advantage.

It is a twist of history that Britain wound up in the European Community only because Edward Heath was elected Prime Minister in 1970, and his election in 1970 might not have occurred but for Enoch Powell’s 1968 speech opposing non-white immigration into Britain, which caused a swing to the Conservatives sufficient to give them an unexpected majority and to make Heath Prime Minister, thereby depriving Powell of the leadership of the party, which had been his life’s ambition.

Walter Oi and the Productivity Puzzle

Just over a year ago I wrote a post suggesting that slow productivity growth in the current recovery might have something to do with the changing demographic composition of the US labor force and the significant structural changes in the US economy following the 2008 financial crisis and downturn. Here’s how I put it last year.

I don’t deny that secular stagnation is a reasonable inference to be drawn from the persistently low increases in labor productivity during this recovery, but it does seem to me that a less depressing, though perhaps partial, explanation for low productivity growth may be available. My suggestion is that the 2008-09 downturn was associated with major sectoral shifts that caused an unusually large reallocation of labor from industries like construction and finance to other industries, so that an unusually large number of workers have had to find new jobs doing work different from what they were doing previously. In many recessions, laid-off workers are either re-employed at their old jobs or find new jobs doing basically the same work that they had been doing at their old jobs. When workers transfer from one job to another similar job, there is little reason to expect a decline in their productivity after they are re-employed, but when workers are re-employed doing something very different from what they did before, a significant drop in their productivity in their new jobs is likely, though there may be instances when, as workers gain new skills and experience in their new jobs, their productivity will rise rapidly.

In addition, the number of long-term unemployed (27 weeks or more) since the 2008-09 downturn has been unusually high. Workers who remain unemployed for an extended period of time tend to suffer an erosion of skills, causing their productivity to drop when they are re-employed even if they are able to find a new job in their old occupation. It seems likely that the percentage of long-term unemployed workers that switch occupations is larger than the percentage of short-term unemployed workers that switch occupations, so the unusually high rate of long-term unemployment has probably had a doubly negative effect on labor productivity.

I wrote that post trying to find some reason for optimism in the consistently dismal productivity data that have been reported since a recovery of sorts began in 2009. Unfortunately, the productivity data reported since I wrote that post last year have not improved. Job growth, until last month at any rate, has continued to be strong, while productivity growth has remained anemic. Although it’s disappointing that productivity growth hasn’t picked up in the last year, I haven’t totally given up hope that productivity growth could still revive.

Aside from the demographic and structural changes that I mentioned last year, there is another factor operating that also tends to hold down productivity growth when the growth in employment involves a lot of new entrants into the labor force, a lot of switching between jobs and, even more so, switching between occupations. The basic idea, developed by the great Walter Oi, is that labor is a quasi-fixed factor.

From a firm’s viewpoint labor is surely a quasi-fixed factor. The largest part of total labor cost is the variable-wages bill representing payments for a flow of productive services. In addition the firm ordinarily incurs certain fixed employment costs in hiring a specific stock of workers. These fixed employment costs constitute an investment by the firm in its labor force. As such they introduce an element of capital in the use of labor. Decisions regarding the labor input can no longer be based solely on the current relation between wages and marginal value products but must also take cognizance of the future course of these quantities. The theoretical implications of labor’s fixity will be analyzed before turning to the empirical magnitude of these fixed costs.

For analytic purposes fixed employment costs can be separated into two categories called, for convenience, hiring and training costs. Hiring costs are defined as those costs that have no effect on a worker’s productivity and include outlays for recruiting, for processing payroll records, and for supplements such as unemployment compensation. These costs are closely related to the number of new workers and only indirectly related to the flow of labor’s services. Training expenses, on the other hand, are investments in the human agent, specifically designed to improve a worker’s productivity.

The training activity typically entails direct money outlays as well as numerous implicit costs such as the allocation of old workers to teaching skills and rejection of unqualified workers during the training period.

So if the increase in employment during this recovery has been associated with more job and occupation switching and more new entrants into the labor force than in previous recoveries, then at least part of the deficit in productivity in this recovery relative to earlier recoveries might be accounted for. And if so, we might still expect the rate of productivity growth to start increasing before long, as the on-the-job training that recently hired workers have received enables them to improve their skills in their new jobs and occupations.
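Oi's idea can be sketched numerically. The figures below are hypothetical, chosen only to show how a recovery with an unusually large share of new hires and occupation-switchers can depress measured average productivity even while each individual worker's productivity is rising toward the experienced level.

```python
# Hypothetical sketch of Oi's quasi-fixed-labor point: a recovery with many
# new hires and occupation-switchers drags down measured average productivity.
# All numbers are invented for illustration.

experienced_output = 100.0   # annual output of a fully trained worker (assumed units)
new_hire_output = 70.0       # output of a worker still learning a new occupation
training_cost = 10.0         # annual training outlay per new hire (Oi's fixed cost)

def avg_measured_productivity(share_new_hires: float) -> float:
    """Average output per worker, net of training costs, for a given
    share of new hires in the workforce."""
    gross = (1 - share_new_hires) * experienced_output + share_new_hires * new_hire_output
    return gross - share_new_hires * training_cost

# A 'normal' recovery vs. one with heavy job- and occupation-switching
normal = avg_measured_productivity(0.05)
heavy_switching = avg_measured_productivity(0.25)

print(f"normal recovery:          {normal:.1f}")
print(f"heavy-switching recovery: {heavy_switching:.1f}")
```

On these invented numbers the heavy-switching workforce measures roughly 8% less productive, and the gap shrinks mechanically as the new hires complete their training, which is the source of the cautious optimism expressed above.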

What’s Wrong with Econ 101?

Josh Hendrickson recently responded to criticisms of Econ 101 made by Noah Smith and Mark Thoma. Mark Thoma thinks that Econ 101 has a conservative bias, presumably because Econ 101 teaches students that markets equilibrate supply and demand and allocate resources to their highest-valued use and that sort of thing. If markets are so wonderful, then shouldn’t we keep hands off the market and let things take care of themselves? Noah Smith is especially upset that Econ 101, slighting the ambiguous evidence about whether minimum-wage laws actually increase unemployment, is too focused on theory and pays too little attention to empirical techniques.

I sympathize with Josh’s defense of Econ 101, and I think he makes a good point that there is nothing in Econ 101 that quantifies the effect on unemployment of minimum-wage legislation, so that the disconnect between theory and evidence isn’t as stark as Noah suggests. Josh also emphasizes, properly, that whatever the effect of an increase in the minimum wage implied by economic theory, that implication by itself can’t tell us whether the minimum wage should be raised. An ought statement can’t be derived from an is statement. Philosophers are not as uniformly in agreement about the positive-normative distinction as they used to be, but I am old-fashioned enough to think that it’s still valid. If there is a conservative bias in Econ 101, the problem is not Econ 101; the problem is bad teaching.

Having said all that, however, I don’t think that Josh’s defense addresses the real problems with Econ 101. Noah Smith’s complaints about the implied opposition of Econ 101 to minimum-wage legislation and Mark Thoma’s about the conservative bias of Econ 101 are symptoms of a deeper problem with Econ 101, a problem inherent in the current state of economic theory, and unlikely to go away any time soon.

The deeper problem that I think underlies much of the criticism of Econ 101 is the fragility of its essential propositions. These propositions, what Paul Samuelson misguidedly called “meaningful theorems,” are deducible from the basic postulates of utility maximization and wealth maximization by applying the method of comparative statics. Not only are the propositions based on questionable psychological assumptions; the comparative-statics method also imposes further restrictive assumptions designed to isolate a single purely theoretical relationship. These assumptions aren’t just the kind of simplifications necessary for the theoretical models of any empirical science to be applicable to the real world; they subvert the powerful logic used to derive those implications. It’s not just that the assumptions may not be fully consistent with the conditions actually observed: the meaningful theorems themselves are highly sensitive to the assumptions of the model.

The bread and butter of Econ 101 is the microeconomic theory of market adjustment, in which price and quantity adjust to equilibrate what consumers demand with what suppliers produce. This is the partial-equilibrium analysis derived from Alfred Marshall, and gradually perfected in the 1920s and 1930s, after Marshall’s death, with the development of the theories of the firm and of perfect and imperfect competition. As I have pointed out before in a number of posts (e.g. here and here), just as macroeconomics depends on microfoundations, microeconomics depends on macrofoundations. All partial-equilibrium analysis relies on the (usually implicit) assumption that all markets but the single market under analysis are in equilibrium. Without that assumption, it is logically impossible to derive any of Samuelson’s meaningful theorems, and the logical necessity of microeconomics is severely compromised.

The underlying idea is very simple. Samuelson’s meaningful theorems are meant to isolate the effect of a change in a single parameter on a particular endogenous variable in an economic system. The only way to isolate the effect of the parameter on the variable is to start from an equilibrium state in which the system is, as it were, at rest. A small (aka infinitesimal) change in the parameter induces an adjustment in the equilibrium, and a comparison of the small change in the variable of interest between the new equilibrium and the old equilibrium relative to the parameter change identifies the underlying relationship between the variable and the parameter, all else being held constant. If the analysis did not start from equilibrium, then the effect of the parameter change on the variable could not be isolated, because the variable would be changing for reasons having nothing to do with the parameter change, making it impossible to isolate the pure effect of the parameter change on the variable of interest.
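A toy example of my own may make the exercise concrete (the linear demand and supply curves and all parameter values here are purely illustrative, not drawn from any actual market): starting from equilibrium, perturb a single demand parameter slightly and compare the new equilibrium price with the old one to recover the comparative-statics derivative.

```python
# Comparative statics in a toy linear market (illustrative parameters only).
# Demand: Q = a - b*p; supply: Q = c + d*p.
a, b, c, d = 100.0, 2.0, 10.0, 1.0

def equilibrium_price(a, b, c, d):
    # Market clearing: a - b*p = c + d*p  =>  p* = (a - c) / (b + d)
    return (a - c) / (b + d)

p0 = equilibrium_price(a, b, c, d)        # initial equilibrium price
eps = 1e-6                                 # "infinitesimal" demand shift
p1 = equilibrium_price(a + eps, b, c, d)   # equilibrium after the shift

# The comparative-statics result: the change in the variable of interest
# relative to the parameter change, starting from equilibrium.
numerical_derivative = (p1 - p0) / eps
analytic_derivative = 1.0 / (b + d)        # dp*/da from the closed form

print(p0, numerical_derivative, analytic_derivative)
```

The derivative is well defined only because the starting point is an equilibrium; starting the perturbation from a disequilibrium price would conflate the adjustment toward equilibrium with the effect of the parameter change itself.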

Not only must the exercise start from an equilibrium state, the equilibrium must be at least locally stable, so that the posited small parameter change doesn’t cause the system to gravitate towards another equilibrium (the usual assumption of a unique equilibrium being made to ensure tractability rather than deduced from any plausible premises) or simply veer off on some explosive or indeterminate path.

Even aside from all these restrictive assumptions, the standard partial-equilibrium analysis is restricted to markets that can be assumed to be very small relative to the entire system. For small markets, it is safe to assume that the small changes in the single market under analysis will have sufficiently small effects on all the other markets in the economy that the induced effects on all the other markets from the change in the market of interest have a negligible feedback effect on the market of interest.
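A numerical sketch of my own illustrates why the small-market approximation works (two linear markets with an illustrative cross-price linkage e; nothing here is a calibrated model): as the cross-market linkage shrinks, the error from ignoring the feedback from the other market shrinks much faster than the linkage itself.

```python
# Two linked linear markets (all parameters illustrative).
# Market i clearing: a_i - b_i*p_i + e*p_j = d_i*p_i, i.e. a 2x2 linear system.
def solve_two_markets(a1, a2, b1, b2, d1, d2, e):
    A, B = b1 + d1, b2 + d2
    det = A * B - e * e
    p1 = (a1 * B + e * a2) / det
    p2 = (a2 * A + e * a1) / det
    return p1, p2

a1, a2, b1, b2, d1, d2 = 100.0, 80.0, 2.0, 2.0, 1.0, 2.0
shift = 1.0   # a small demand shift in market 1

errors = []
for e in (0.5, 0.05, 0.005):
    _, p2_old = solve_two_markets(a1, a2, b1, b2, d1, d2, e)
    # Partial-equilibrium shortcut: freeze p2 at its pre-shift value and
    # solve market 1 alone after the demand shift.
    p1_pe = (a1 + shift + e * p2_old) / (b1 + d1)
    # Full solution: let market 2 adjust too, feeding back into market 1.
    p1_full, _ = solve_two_markets(a1 + shift, a2, b1, b2, d1, d2, e)
    errors.append(abs(p1_pe - p1_full))

print(errors)
```

The error from the partial-equilibrium shortcut falls off roughly with the square of the linkage e, which is why ignoring feedback is harmless for a market that is genuinely small relative to the rest of the economy, and why it is not harmless for a market, like the labor market, that is not.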

But the partial-equilibrium method surely breaks down when the market under analysis is large relative to the entire economy, like, shall we say, the market for labor. The feedback effects are simply too strong for the small-market assumptions underlying partial-equilibrium analysis to be satisfied by the labor market. Even aside from the size issue, however, the essence of the partial-equilibrium method is the assumption that all markets other than the market under analysis are in equilibrium, and the very assumption that the labor market is not in equilibrium renders the partial-equilibrium assumption that all other markets are in equilibrium untenable. I would suggest that the proper way to think about what Keynes was trying, not necessarily successfully, to do in the General Theory when discussing nominal wage cuts as a way to reduce unemployment is to view that discussion as a critique of using the partial-equilibrium method to analyze a state of general unemployment, as opposed to a situation in which unemployment is confined to a particular occupation or geographic area.

So the question naturally arises: If the logical basis of Econ 101 is as flimsy as I have been suggesting, should we stop teaching Econ 101? My answer is an emphatic, but qualified, no. Econ 101 is the distillation of almost a century and a half of rigorous thought about how to analyze human behavior. What we have come up with so far is very imperfect, but it is still the most effective tool we have for systematically thinking about human conduct and its consequences, especially its unintended consequences. But we should be more forthright about its limitations and the nature of the assumptions that underlie the analysis. We should also be more aware of the logical gaps between the theory – Samuelson’s meaningful theorems — and the applications of the theory.

In fact, many meaningful theorems are consistently corroborated by statistical tests, presumably because observations by and large occur when the economy operates in the neighborhood of a general equilibrium and feedback effects are small, so that the extraneous forces (other than those derived from theory) impinge on actual observations more or less randomly, and thus don’t significantly distort the predicted relationships. And undoubtedly there are also cases in which the random effects overwhelm the theoretically identified relationships, preventing the relationships from being identified statistically, at least when the number of observations is relatively small, as is usually the case with economic data. But we should also acknowledge that the theoretically predicted relationships may simply not hold in the real world, because the extreme conditions required for the predicted partial-equilibrium relationships to hold – near-equilibrium conditions and the absence of feedback effects – may often not be satisfied.


About Me

David Glasner
Washington, DC

I am an economist at the Federal Trade Commission. Nothing that you read on this blog necessarily reflects the views of the FTC or the individual commissioners. Although I work at the FTC as an antitrust economist, most of my research and writing has been on monetary economics and policy and the history of monetary theory. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
