Archive for the 'Michael Oakeshott' Category

Neo- and Other Liberalisms

Everybody seems to be worked up about “neoliberalism” these days. A review of Quinn Slobodian’s new book on the Austrian (or perhaps the Austro-Hungarian) roots of neoliberalism in the New Republic by Patrick Iber reminded me that the term “neoliberalism,” which, in my own faulty recollection, came into somewhat popular usage only in the early 1980s, had actually been coined in the late 1930s at the now almost legendary Colloque Walter Lippmann and had been used by Hayek in at least one of his political essays in the 1940s. In that usage the point of neoliberalism was to revise and update the classical nineteenth-century liberalism that seemed to have run aground in the Great Depression, when the attempt to resurrect and restore what had been widely – and in my view mistakenly – regarded as an essential pillar of the nineteenth-century liberal order – the international gold standard – collapsed in an epic international catastrophe. The new liberalism was supposed to be a kinder and gentler – less relentlessly laissez-faire – version of the old liberalism, more amenable to interventions to aid the less well-off and to social-insurance programs providing a safety net to cushion individuals against the economic risks of modern capitalism, while preserving the social benefits and efficiencies of a market economy based on private property and voluntary exchange.

Any memory of Hayek’s use of “neo-liberalism” was blotted out by the subsequent use of the term to describe the unorthodox efforts of two ambitious young Democratic politicians, Bill Bradley and Dick Gephardt, to promote tax reform. Bradley, who was then a first-term Senator from New Jersey, having graduated directly from NBA stardom to the US Senate in 1978, and Gephardt, then an obscure young Congressman from Missouri, made a splash in the first term of the Reagan administration by proposing to cut income tax rates well below the rates that Reagan had proposed when running for President in 1980 and had subsequently enacted early in his first term. Bradley and Gephardt proposed cutting the top federal income tax bracket from the new 50% rate to the then almost unfathomable 30%. What made the Bradley-Gephardt proposal liberal was the idea that special-interest tax exemptions would be eliminated, so that the reduced rates would not mean a loss of tax revenue, while making the tax system less intrusive on private decision-making, thereby improving economic efficiency. Despite cutting the top rate, Bradley and Gephardt retained the principle of progressivity by reducing the entire rate structure from top to bottom while eliminating tax deductions and tax shelters.

Here is how David Ignatius described Bradley’s role in achieving the 1986 tax reform in the Washington Post (May 18, 1986):

Bradley’s intellectual breakthrough on tax reform was to combine the traditional liberal approach — closing loopholes that benefit mainly the rich — with the supply-side conservatives’ demand for lower marginal tax rates. The result was Bradley’s 1982 “Fair Tax” plan, which proposed removing many tax preferences and simplifying the tax code with just three rates: 14 percent, 26 percent and 30 percent. Most subsequent reform plans, including the measure that passed the Senate Finance Committee this month, were modelled on Bradley’s.

The Fair Tax was an example of what Democrats have been looking for — mostly without success — for much of the last decade. It synthesized liberal and conservative ideas in a new package that could appeal to middle-class Americans. As Bradley noted in an interview this week, the proposal offered “lower rates for the middle-income people who are the backbone of America, who are paying most of the freight.” And who, it might be added, increasingly have been voting Republican in recent presidential elections.

The Bradley proposal also offered Democrats a way to shed their anti-growth, tax-and-spend image by allowing them, as Bradley says, “to advocate economic growth and fairness simultaneously.” The only problem with the idea was that it challenged the party’s penchant for soak-the-rich rhetoric and interest-group politics.

So the new liberalism of Bradley and Gephardt was an ideological movement in the opposite direction from that of the earlier version of neoliberalism; the point of neoliberalism 1.0 was to moderate classical laissez-faire liberal orthodoxy, while neoliberalism 2.0 aimed to counter the knee-jerk interventionism of New Deal liberalism, which favored highly progressive income taxation to redistribute income from rich to poor and price ceilings and controls to protect the poor from exploitation by ruthless capitalists and greedy landlords and to serve as an anti-inflation policy. The impetus for reassessing mid-twentieth-century American liberalism was the evident failure in the 1970s of wage and price controls, which had been supported with little evidence of embarrassment by most Democratic economists (with the notable exception of James Tobin) when imposed by Nixon in 1971, and the decade-long rotting residue of Nixon’s controls – controls on crude oil and gasoline prices – finally scrapped by Reagan in 1981.

Although neoliberalism 2.0 enjoyed considerable short-term success, eventually providing the template for the 1986 Reagan tax reform and establishing Bradley and Gephardt as major figures in the Democratic Party, it was never embraced by the Democratic grassroots. Gephardt himself abandoned the neoliberal banner in 1988 when he ran for President as a protectionist, pro-labor Democrat, providing the eventual nominee, the mildly neoliberalish Michael Dukakis, with plenty of material with which to portray Gephardt as a flip-flopper. But Dukakis’s own failure in the general election did little to enhance the prospects of neoliberalism as a winning electoral strategy. The Democratic acceptance of low marginal tax rates in exchange for eliminating tax breaks, exemptions and shelters was short-lived, and Bradley himself abandoned the approach in 2000 when he ran for the Democratic Presidential nomination from the left against Al Gore.

So the notion that “neoliberalism” has any definite meaning is as misguided as the notion that “liberalism” has any definite meaning. “Neoliberalism” now serves primarily as a term of abuse for leftists to impugn the motives of their ideological and political opponents, in exactly the same way that right-wingers use “liberal” as a term of abuse (there are, of course, many others) with which to dismiss and denigrate their ideological and political opponents. That archetypical classical liberal Ludwig von Mises was openly contemptuous of the neoliberalism that emerged from the Colloque Walter Lippmann and of its later offspring, Ordoliberalism (frequently described as the Germanic version of neoliberalism), referring to it as “neo-interventionism.” Similarly, modern liberals who view themselves as upholders of New Deal liberalism deploy “neoliberalism” as a useful pejorative epithet with which to cast a rhetorical cloud over those sharing a not so dissimilar political background or outlook but who are more willing than they are to tolerate the outcomes of market forces.

There are many liberalisms and perhaps almost as many neoliberalisms, so it’s pointless and futile to argue about which is the true or legitimate meaning of “liberalism.” However, one can at least say about the two versions of neoliberalism that I’ve mentioned that they were attempts to moderate more extreme versions of liberalism and to move toward the ideological middle of the road: from the extreme laissez-faire of classical liberalism on the right and from the dirigisme of the New Deal on the left toward – pardon the cliché – a third way in the center.

But despite my disclaimer that there is no fixed, essential, meaning of “liberalism,” I want to suggest that it is possible to find some common thread that unites many, if not all, of the disparate strands of liberalism. I think it’s important to do so, because it wasn’t so long ago that even conservatives were able to speak approvingly about the “liberal democratic” international order that was created, largely thanks to American leadership, in the post-World War II era. That time is now unfortunately past, but it’s still worth remembering that it once was possible to agree that “liberal” did correspond to an admirable political ideal.

The deep underlying principle that I think reconciles the different strands of the best versions of liberalism is a version of Kant’s categorical imperative: treat every individual as an end, not a means. Individuals must not be used merely as tools or instruments with which other individuals or groups satisfy their own purposes. If you want someone else to serve you in accomplishing your ends, that other person must provide that assistance to you voluntarily, not because you require him to do so. If you want that assistance, you must secure it not by command but by persuasion. Persuasion can be secured in two ways: either by argument – persuading the other person to share your objective – or, if you can’t, or won’t, persuade the person to share your objective, by offering some form of compensation that induces the person to provide the services you desire.

The principle has an obvious libertarian interpretation: all cooperation is secured through voluntary agreements between autonomous agents. Force and fraud are impermissible. But the Kantian ideal doesn’t necessarily imply a strictly libertarian political system. The choices of autonomous agents can — actually must — be restricted by a set of legal rules governing the conduct of those agents. And the content of those legal rules must be worked out either by legislation or by an evolutionary process of common law adjudication or some combination of the two. The content of those rules needn’t satisfy a libertarian laissez-faire standard. Rather the liberal standard that legal rules must satisfy is that they don’t prescribe or impose ends, goals, or purposes that must be pursued by autonomous agents, but simply govern the means agents can employ in pursuing their objectives.

Legal rules of conduct are like rules of grammar. Just as rules of grammar don’t dictate the ideas or thoughts expressed in speech or writing, only the manner of their expression, rules of conduct don’t specify the objectives that agents seek to achieve, only the acceptable means of accomplishing those objectives. The rules of conduct need not be libertarian; some choices may be ruled out for reasons of ethics or morality or expediency or the common good. What makes the rules liberal is that they apply equally to all citizens, and that they allow agents sufficient space to conduct their own lives according to their own purposes, goals, preferences, and values.

In other words, the rule of law — not the rule of particular groups, classes, occupations — prevails. Agents are subject to an impartial legal standard, not to the will or command of another agent, or of the ruler. And for this to be the case, the ruler himself must be subject to the law. But within this framework of law that imposes no common goals and purposes on agents, a good deal of collective action to provide for common purposes — far beyond the narrow boundaries of laissez-faire doctrine — is possible. Citizens can be taxed to pay for a wide range of public services that the public, through its elected representatives, decides to provide. Those elected representatives can enact legislation that governs the conduct of individuals as long as the legislation does not treat individuals differently based on irrelevant distinctions or based on criteria that disadvantage certain people unfairly.

My view that the rule of law, not laissez-faire, not income redistribution, is the fundamental value and foundation of liberalism is a view that I learned from Hayek, who, in his later life, was as much a legal philosopher as an economist, but it is a view that John Rawls and Ronald Dworkin on the left, and Michael Oakeshott on the right, also shared. Hayek, indeed, went so far as to say that he was fundamentally in accord with Rawls’s magnum opus, A Theory of Justice, which was supposed to have provided a philosophical justification for modern welfare-state liberalism. Liberalism is a big tent, and it can accommodate a wide range of conflicting views on economic and even social policy. What sets liberalism apart is a respect for and commitment to the rule of law and due process, a commitment that ought to take precedence over any specific policy goal or preference.

But here’s the problem. If the ruler can also make or change the laws, the ruler is not really bound by the laws, because the ruler can change the law to permit any action that the ruler wants to take. How, then, is the rule of law consistent with a ruler who is empowered to make the law to which he is supposedly subject? That is the dilemma with which every liberal state must cope. And for Hayek, at least, the issue was especially problematic in connection with taxation.

With the possible exception of inflation, what concerned Hayek most about modern welfare-state policies was the highly progressive income-tax regimes that western countries had adopted in the mid-twentieth century. By almost any reasonable standard, top marginal income-tax rates were way too high in the mid-twentieth century, and the economic case for reducing the top rates was compelling, because reducing them would likely have entailed little, if any, net revenue loss. As a matter of optics, reductions in the top marginal rates had to be coupled with reductions in the lower tax brackets, which did entail revenue losses, but reforming an overly progressive tax system without a substantial revenue loss was not that hard to do.

But Hayek’s argument against highly progressive income tax rates was based more on principle than on expediency. Hayek regarded steeply progressive income tax rates as inherently discriminatory, because they impose a disproportionate burden on a minority of the population – the wealthy. Hayek did not oppose modest progressivity to ease the tax burden on the least well-off, viewing such progressivity as a legitimate concession that a well-off majority could make to a less-well-off minority. But he greatly feared attempts by the majority to shift the burden of taxation onto a well-off minority, viewing that kind of progressivity as a kind of legalized hold-up, whereby the majority uses its control of the legislature to write the rules to its own advantage at the expense of the minority.

While Hayek’s concern that a wealthy minority could be plundered by a greedy majority seems plausible, a concern bolstered by the unreasonably high top marginal rates that were in place when he wrote, he overstated his case in arguing that high marginal rates were, in and of themselves, unequal treatment. Certainly it would be discriminatory if different tax rates applied to people because of their religion or national origin or for other reasons unrelated to income, but even a highly progressive income tax can’t be discriminatory on its face, as Hayek alleged it was, when the progressivity is embedded in a schedule of rates applicable to everyone whose income reaches the specified thresholds.
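To make concrete what a uniform rate schedule looks like, here is a minimal sketch in Python. The rates (14%, 26%, 30%) echo the Bradley “Fair Tax” plan quoted above, but the bracket thresholds are purely hypothetical, chosen only for illustration.

```python
# A minimal sketch of a progressive rate schedule.
# Rates follow the Bradley Fair Tax plan quoted above; the thresholds
# are hypothetical and chosen only for illustration.
BRACKETS = [
    (0, 0.14),        # income up to 30,000 taxed at 14%
    (30_000, 0.26),   # income from 30,000 to 70,000 taxed at 26%
    (70_000, 0.30),   # income above 70,000 taxed at 30%
]

def tax_liability(income: float) -> float:
    """Apply the same schedule to any taxpayer's income."""
    tax = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            # only the slice of income inside this bracket is taxed at its rate
            tax += (min(income, upper) - lower) * rate
        else:
            break
    return tax

# The identical schedule is applied to everyone, whatever their income.
for income in (25_000, 100_000, 1_000_000):
    print(income, round(tax_liability(income)))
```

Because the brackets attach to income levels rather than to persons, any two taxpayers with the same income owe the same tax, which is the sense in which a progressive schedule, however steep, applies equally to everyone whose income reaches a given threshold.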

There are other reasons to think that Hayek went too far in his opposition to progressive tax rates. First, he assumed that earned income accurately measures the value of the earner’s incremental contribution to social output. But Hayek overlooked that much of earned income reflects rents that are not necessary to call forth the effort required to earn that income, in which case increasing the marginal tax rate on such earnings does not diminish effort or output. We also know, as a result of a classic 1971 paper by Jack Hirshleifer, that earned incomes often do not correspond to net social output. For example, incomes earned by stock and commodity traders reflect only in part incremental contributions to social output; they also reflect losses incurred by other traders. So resources devoted to acquiring information with which to make better predictions of future prices add less to output than those resources are worth, implying a net reduction in total output. Insofar as earned incomes reflect not incremental contributions to social output but income transfers from other individuals, raising taxes on those incomes can actually increase aggregate output.
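To put numbers on the Hirshleifer point, here is a stylized sketch; all the figures are hypothetical and are meant only to show how a privately profitable trading strategy can be socially wasteful when most of the gain is a transfer from other traders.

```python
# Hypothetical illustration of the Hirshleifer point: private returns to
# information can exceed its social value, because part of the gain is a
# transfer from other traders rather than new output.
research_cost = 40          # resources spent acquiring the forecast
trading_gain = 100          # trader's gross gain from acting on it first
social_value_of_info = 10   # value of any real resource reallocation it causes

private_return = trading_gain - research_cost                # +60: worth doing privately
transfer_from_others = trading_gain - social_value_of_info   # 90 is pure redistribution
social_return = social_value_of_info - research_cost         # -30: net loss of output

print(private_return, transfer_from_others, social_return)
```

On these made-up numbers the information is well worth acquiring privately but reduces net output, which is why taxing such income more heavily need not diminish, and could even raise, aggregate output.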

So the economic case for reducing marginal tax rates is not necessarily more compelling than the philosophical case, and the economic arguments certainly seem less compelling than they did some three decades ago when Bill Bradley, in his youthful neoliberal enthusiasm, argued eloquently for drastically reducing marginal rates while broadening the tax base. Supporters of reducing marginal tax rates still like to point to the dynamic benefits of increasing incentives to work and invest, but they don’t acknowledge that earned income does not necessarily correspond closely to net contributions to aggregate output.

Drastically reducing the top marginal rate from 70% to 28% within five years greatly increased the incentive to earn high incomes. The taxation of high incomes having been reduced so drastically, the number of people earning very high incomes has grown very rapidly since 1986. Does that increase in the number of people earning very high incomes reflect an improvement in the overall economy, or does it reflect a shift in the occupational choices of talented people? Since the increase in very high incomes has not been associated with an increase in the overall rate of economic growth, it hardly seems obvious that the increase in the number of people earning very high incomes is closely correlated with the overall performance of the economy. I suspect rather that the opportunity to earn and retain very high incomes has attracted many very talented people into occupations, like financial management, venture capital, investment banking, and real-estate brokerage, in which high incomes are being earned, with correspondingly fewer people choosing to enter less lucrative occupations. And if, as I suggested above, these occupations in which high incomes are being earned often contribute less to total output than lower-paying occupations, the increased opportunity to earn high incomes has actually reduced overall economic productivity.

Perhaps the greatest effect of reducing marginal income tax rates has been sociological. I conjecture that, as a consequence of reduced marginal income tax rates, the social status and prestige of people earning high incomes have risen, as has the social acceptability of conspicuous – even brazen – public displays of wealth. The presumption that those who have earned high incomes and amassed great fortunes are morally deserving of those fortunes, and therefore entitled to deference and respect on account of their wealth alone, a presumption that Hayek himself warned against, seems to be much more widely held now than it was forty or fifty years ago. Others may take a different view, but I find this shift towards increased respect and admiration for the wealthy, curiously combined with a supposedly populist political environment, to be decidedly unedifying.


Two Cheers (Well, Maybe Only One and a Half) for Falsificationism

Noah Smith recently wrote a defense (sort of) of falsificationism in response to Sean Carroll’s suggestion that the time has come for scientists to throw falsificationism overboard as a guide for scientific practice. While Noah isn’t ready to throw out falsification as a scientific ideal, he does acknowledge that not everything that scientists do is really falsifiable.

But, as Carroll himself seems to understand in arguing against falsificationism, even though a particular concept or entity may itself be unobservable (and thus unfalsifiable), the larger theory of which it is a part may still have implications that are falsifiable. This is the case in economics. A utility function or a preference ordering is not observable, but by imposing certain conditions on that utility function, one can derive some (weakly) testable implications (see the sketch following the quotation below). This is exactly what Karl Popper, who introduced and popularized the idea of falsificationism, meant when he said that the aim of science is to explain the known by the unknown. To posit an unobservable utility function or an unobservable string is not necessarily to engage in purely metaphysical speculation, but to do exactly what scientists have always done: propose explanations that would somehow account for some problematic phenomenon that they had already observed. The explanations always (or at least frequently) involve positing something unobservable (e.g., gravitation) whose existence can only be indirectly perceived by comparing the implications (predictions) inferred from the existence of the unobservable entity with what we can actually observe. Here’s how Popper once put it:

Science is valued for its liberalizing influence as one of the greatest of the forces that make for human freedom.

According to the view of science which I am trying to defend here, this is due to the fact that scientists have dared (since Thales, Democritus, Plato’s Timaeus, and Aristarchus) to create myths, or conjectures, or theories, which are in striking contrast to the everyday world of common experience, yet able to explain some aspects of this world of common experience. Galileo pays homage to Aristarchus and Copernicus precisely because they dared to go beyond this known world of our senses: “I cannot,” he writes, “express strongly enough my unbounded admiration for the greatness of mind of these men who conceived [the heliocentric system] and held it to be true […], in violent opposition to the evidence of their own senses.” This is Galileo’s testimony to the liberalizing force of science. Such theories would be important even if they were no more than exercises for our imagination. But they are more than this, as can be seen from the fact that we submit them to severe tests by trying to deduce from them some of the regularities of the known world of common experience by trying to explain these regularities. And these attempts to explain the known by the unknown (as I have described them elsewhere) have immeasurably extended the realm of the known. They have added to the facts of our everyday world the invisible air, the antipodes, the circulation of the blood, the worlds of the telescope and the microscope, of electricity, and of tracer atoms showing us in detail the movements of matter within living bodies.  All these things are far from being mere instruments: they are witness to the intellectual conquest of our world by our minds.
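Returning to the economics example mentioned above: the claim that an unobservable utility function nonetheless has (weakly) testable implications can be illustrated with the weak axiom of revealed preference, one standard such implication. The Python sketch below, using two hypothetical price-quantity observations, checks whether the observed choices could have come from any single consistent preference ordering.

```python
# A (weakly) testable implication of unobservable utility maximization:
# the weak axiom of revealed preference (WARP). The data are hypothetical.

def dot(p, x):
    return sum(pi * xi for pi, xi in zip(p, x))

def violates_warp(observations):
    """observations: list of (price_vector, chosen_bundle) pairs.
    Returns True if some pair of choices is inconsistent with any
    single preference ordering."""
    for p_t, x_t in observations:
        for p_s, x_s in observations:
            if x_t == x_s:
                continue
            # x_s was affordable when x_t was chosen, so x_t is revealed
            # preferred to x_s ...
            if dot(p_t, x_s) <= dot(p_t, x_t):
                # ... in which case x_t must not also have been affordable
                # when x_s was chosen.
                if dot(p_s, x_t) <= dot(p_s, x_s):
                    return True
    return False

# At prices (1, 2) the consumer chose bundle (1, 2); at prices (2, 1) the
# consumer chose bundle (2, 1). In each case the other bundle was cheaper,
# so the two choices contradict each other.
data = [((1, 2), (1, 2)), ((2, 1), (2, 1))]
print(violates_warp(data))  # True: no single preference ordering rationalizes these choices
```

The utility function itself is never observed; what observation can refute is the theory built around it (consistent preferences, chosen bundles preferred to affordable alternatives), which is just the point about unobservable entities and their falsifiable implications.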

So I think that Sean Carroll, rather than arguing against falsificationism, is really thinking of falsificationism in the broader terms that Popper himself laid out a long time ago. And I think that Noah’s shrug-ability suggestion is also, with appropriate adjustments for changes in expository style, entirely in the spirit of Popper’s view of falsificationism. But to make that point clear, one needs to understand what motivated Popper to propose falsifiability as a criterion for distinguishing between science and non-science. Popper’s aim was to overturn logical positivism, a philosophical doctrine associated with the group of eminent philosophers who made up what was known as the Vienna Circle in the 1920s and 1930s. Building on the British empiricist tradition in science and philosophy, the logical positivists argued that our knowledge of the external world is based on sensory experience, and that apart from the tautological truths of pure logic (of which mathematics is a part) there is no other knowledge. Furthermore, no meaning could be attached to any statement whose validity could not be checked either by examining its logical validity as an inference from explicit premises or by verifying it through sensory experience. According to this criterion, much of human discourse about ethics, morals, aesthetics, religion and much of philosophy was simply meaningless, aka metaphysics.

Popper, who grew up in Vienna and was on the periphery of the Vienna Circle, rejected the idea that logical tautologies and statements potentially verifiable by observation are the only conveyors of meaning between human beings. Metaphysical statements can be meaningful even if they can’t be confirmed by observation; they are meaningful if they are coherent and not nonsensical. If there is a problem with metaphysical statements, the problem is not necessarily that they have no meaning. In making this argument, Popper suggested an alternative criterion of demarcation to that between meaning and non-meaning: a criterion of demarcation between science and metaphysics. Science is indeed different from metaphysics, but the difference is not that science is meaningful and metaphysics is not. The difference is that scientific statements can be refuted (or falsified) by observations while metaphysical statements cannot. As a matter of logic, the only way to refute a proposition by an observation is for the proposition to assert that the observation was not possible. Unless you can say what observation would refute what you are saying, you are engaging in metaphysical, not scientific, talk. This gave rise to Popper’s then very surprising result. If you positively assert the existence of something – an assertion potentially verifiable by observation, and hence for logical positivists the quintessential scientific statement – you are making a metaphysical, not a scientific, statement. The statement that something (e.g., God, a string, or a utility function) exists cannot be refuted by any observation. However, the unobservable phenomenon may be part of a theory with implications that could be refuted by some observation. But in that case it would be the theory, not the posited object, that was refuted.

In fact, Popper thought that metaphysical statements not only could be meaningful, but could even be extremely useful, coining the term “metaphysical research programs,” because a metaphysical, unfalsifiable idea or theory could be the impetus for further research, possibly becoming scientifically fruitful in the way that evolutionary biology eventually sprang from the possibly unfalsifiable idea of survival of the fittest. That sounds to me pretty much like Noah’s idea of shrug-ability.

Popper was largely successful in overthrowing logical positivism, though whether it was entirely his doing (as he liked to claim) and whether it was fully overthrown are not so clear. One reason to think that it was not all his doing is that there is still a lot of confusion about what the falsification criterion actually means. Reading Noah Smith and Sean Carroll, I almost get the impression that they think the falsification criterion distinguishes not just between science and non-science but between meaning and non-meaning. Otherwise, why would anyone think that there is any problem with introducing an unfalsifiable concept into scientific discussion? When Popper argued that science should aim at proposing and testing falsifiable theories, he meant that one should not design a theory so that it can’t be tested, or adopt stratagems — ad hoc hypotheses — that serve only to account for otherwise falsifying observations. But if someone comes up with a creative new idea, and the idea can’t be tested, at least given the current observational technology, that is not a reason to reject the theory, especially if the new theory accounts for otherwise unexplained observations.

Another manifestation of Popper’s imperfect success in overthrowing logical positivism is that Paul Samuelson, in his classic Foundations of Economic Analysis, chose to call the falsifiable implications of economic theory “meaningful theorems.” By naming those implications “meaningful theorems,” Samuelson clearly was operating under the positivist presumption that only a proposition that could (at least in principle) be falsified by observation was meaningful. However, that formulation reflected an untenable compromise between Popper’s criterion for distinguishing science from metaphysics and the logical positivist criterion for distinguishing meaningful from meaningless statements. Instead of referring to meaningful theorems, Samuelson should have called them, more modestly, testable or scientific theorems.

So, at least as I read Popper, Noah Smith and Sean Carroll are only discovering what Popper already understood a long time ago.

At this point, some readers may be wondering why, having said all that, I seem to have trouble giving falsificationism (and Popper) even two cheers. So I am afraid that I will have to close this post on a somewhat critical note. The problem with Popper is that his rhetoric suggests that scientific methodology is a lot more important than it really is. Apart from some egregious examples like Marxism and Freudianism, which were deliberately formulated to exclude the possibility of refutation, there really aren’t that many theories entertained by scientists that can be ruled out of order on strictly methodological grounds. Popper can occasionally provide some methodological reminders to scientists to avoid relying on ad hoc theorizing — at least when a non-ad-hoc alternative is handy — but beyond that I don’t think methodology counts for very much in the day-to-day work of scientists. Many theories are difficult to falsify, but the difficulty is not necessarily the result of deliberate choices by the theorists; it is the result of the nature of the problem and the nature of the evidence that could potentially refute the theory. The evidence is what it is. It is nice to come up with a theory that predicts a novel fact that can be observed, but nature is not always so accommodating to our theories.

There is a kind of rationalistic (I am using “rationalistic” in the pejorative sense of Michael Oakeshott) faith that following the methodological rules that Popper worked so hard to formulate will guarantee scientific progress. Those rules tend to encourage an unrealistic focus on making theories testable (especially in economics) when by their nature the phenomena are too complex for theories to be formulated in ways that are susceptible to decisive testing. And although Popper recognized that empirical testing of a theory has very limited usefulness unless the theory is being compared to some alternative theory, too often discussions of theory testing are conducted in the context of testing a single theory in isolation. Kuhn and others have pointed out that science is not routinely carried out in the way that Popper suggested it should be. To some extent, Popper acknowledged the truth of that observation, though he liked to cite examples from the history of science to illustrate his thesis, and he argued that he was offering a normative, not a positive, theory of scientific discovery. But why should we assume that Popper had more insight into the process of discovery for particular sciences than the practitioners of those sciences actually doing the research? That is the nub of the criticism of Popper that I take away from Oakeshott’s work. Life and any form of endeavor involve the transmission of ways of doing things, traditions, that cannot be reduced to a set of rules, but require education, training, practice and experience. That’s what Kuhn called normal science. Normal science can go off the tracks too, but it is naïve to think that a list of methodological rules is what will keep science moving constantly in the right direction. Why should Popper’s rules necessarily trump the lessons that practitioners have absorbed from the scientific traditions in which they have been trained? I don’t believe that there is any surefire recipe for scientific progress.

Nevertheless, when I look at the way economics is now being practiced and taught, I can’t help but think that a dose of Popperianism might not be the worst thing that could be administered to modern economics. But that’s a discussion for another day.


About Me

David Glasner
Washington, DC

I am an economist in the Washington DC area. My research and writing have been mostly on monetary economics and policy and the history of economics. In my book Free Banking and Monetary Reform, I argued for a non-Monetarist, non-Keynesian approach to monetary policy, based on a theory of a competitive supply of money. Over the years, I have become increasingly impressed by the similarities between my approach and that of R. G. Hawtrey and hope to bring Hawtrey's unduly neglected contributions to the attention of a wider audience.
